
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

In this work, we illustrate the implementation of physics-informed neural networks (PINNs) for solving forward and inverse problems in structural vibration. Physics-informed deep learning has lately proven to be a powerful tool for the solution and data-driven discovery of physical systems governed by differential equations. In spite of the popularity of PINNs, their application in structural vibrations is limited. This motivates extending PINNs to yet another new domain, one that can leverage the available knowledge in the form of governing physical laws. On investigating the performance of conventional PINNs in vibrations, we find that they mostly suffer from a recently pointed-out scaling or regularization issue, leading to inaccurate predictions. We then demonstrate that a simple strategy of modifying the loss function helps combat this issue and enhances the approximation accuracy significantly without adding any extra computational cost. In addition to these two contributions, the implementation of the conventional and modified PINNs is performed in the MATLAB environment, owing to its recently developed rich deep learning library. Since all developments of PINNs to date have been Python-based, this is expected to diversify the field and reach a wider scientific audience who are more proficient in MATLAB and are interested in exploring the prospects of deep learning in computational science and engineering. As a bonus, complete executable codes of all four representative (forward and inverse) problems in structural vibrations have been provided, along with their line-by-line explanation and well-interpreted results for better understanding.

Deep learning (DL) has recently emerged as an incredibly successful tool for solving ordinary differential equations (ODEs) and partial differential equations (PDEs). One of the major reasons for the popularity of DL as an alternative ODE/PDE solver is its exploitation of the recent developments in automatic differentiation (AD) [

The architecture of PINNs can be customized to comply with any symmetries, invariance, or conservation principles originating from the governing physical laws modelled by time-dependent and nonlinear ODEs and PDEs. This feature makes PINNs an ideal platform for incorporating domain knowledge in the form of soft constraints, so that this prior information acts as a regularization mechanism to effectively explore and exploit the space of feasible solutions. Due to the above features and their generalized framework, PINNs are expected to be as suitable for structural vibration problems as for any other application of computational physics. Therefore, in this paper, we investigate the performance of conventional PINNs for solving forward and inverse problems in structural vibrations. Then, it is shown that with a modification of the loss function, the scaling or regularization issue, an inherent drawback of first-generation PINNs referred to as “gradient pathology” [

One of the major challenges PINNs circumvent is the overdependence of data-centric deep neural networks (DNNs) on training data. This is especially useful because sufficient information in the form of data is often not available for physical systems. The basic concept of PINNs is to train the parameters of the DNN by making use of the governing physics, encoding this prior information within the architecture in the form of the ODE/PDE. This soft constraining ensures the conservation of the physical laws modelled by the governing equation, the initial and boundary conditions, and the available measurements.
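The composite soft-constraint idea can be conveyed with a small, language-agnostic numerical sketch (written here in Python/NumPy purely for illustration and independent of the paper's MATLAB implementation; the test ODE, its collocation grid, and the candidate functions are our own assumptions):

```python
import numpy as np

# Sketch of the PINN composite loss idea for the test ODE
#   u''(t) + omega^2 * u(t) = 0,  u(0) = 1,  u'(0) = 0.
# The loss combines the mean-squared ODE residual at collocation points
# with penalty terms enforcing the initial conditions as soft constraints.
omega = 2.0
t = np.linspace(0.0, 1.0, 101)           # collocation points

def composite_loss(u, du, d2u):
    """u, du, d2u: a candidate solution and its derivatives on the grid t."""
    residual = d2u + omega**2 * u        # ODE residual at collocation points
    loss_pde = np.mean(residual**2)
    loss_ic = (u[0] - 1.0)**2 + du[0]**2  # soft initial-condition penalties
    return loss_pde + loss_ic

# The exact solution cos(omega*t) drives the composite loss to zero, whereas
# a candidate violating the initial conditions incurs a nonzero loss.
exact = composite_loss(np.cos(omega*t), -omega*np.sin(omega*t),
                       -omega**2*np.cos(omega*t))
wrong = composite_loss(np.sin(omega*t), omega*np.cos(omega*t),
                       -omega**2*np.sin(omega*t))
```

In an actual PINN the candidate solution is the network output and its derivatives come from automatic differentiation; here they are supplied in closed form only to isolate the loss construction.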

Considering the PDE for the solution

Note that the derivatives

Usually,

Despite its immense success, the plain vanilla version of PINNs (as discussed above) has often been criticized for not performing well, even for simple problems. This is due to the regularization of the composite loss term as defined in Eq.

Alternatively, we employ a different approach that addresses the scaling issue and at the same time requires no extra computational effort. To avoid multiple terms in the composite loss function, the DNN output

Note that the new loss function only involves the PDE residual of the modified output
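The mechanism of the output modification can be checked numerically with a short sketch (Python/NumPy for illustration only; the particular transform below, u0 + v0·t + t²·N(t), and the stand-in network are our own assumed example for an initial-value problem):

```python
import numpy as np

# For an initial-value problem with u(0) = u0 and u'(0) = v0, the raw
# network output N(t) is wrapped as
#     u_hat(t) = u0 + v0*t + t^2 * N(t),
# so both initial conditions hold exactly for ANY network output N, and
# the training loss can then consist of the ODE residual alone.
u0, v0 = 1.0, -0.5                       # assumed initial conditions

def N(t):                                # stand-in for an arbitrary network
    return np.sin(3.0*t) + t**2

def u_hat(t):
    return u0 + v0*t + t**2 * N(t)

# Verify the ICs numerically: the value at t = 0 and a central-difference
# estimate of the slope at t = 0.
h = 1e-6
slope0 = (u_hat(h) - u_hat(-h)) / (2*h)
```

Because the conditions are built into the output rather than penalized, no balancing weights between loss terms are needed.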

Next, a flow diagram of the PINNs architecture is presented in

A schematic flow diagram of physics informed neural networks (PINNs). In the figure, the abbreviations FC-DNN, PDE, AD, BCs and ICs represent fully connected deep neural network, partial differential equation, automatic differentiation, boundary conditions and initial conditions, respectively. All of the symbols used here to express the mathematical quantities are explained in

One useful feature of PINNs is that the same framework can be employed for solving inverse problems with a slight modification of the loss function. The necessary modification is discussed next. If the parameter

This term

Lastly, the parameters

In this section, the implementation of PINNs in MATLAB is presented, following the theoretical formulation discussed in the previous section. A step-wise explanatory approach has been adopted for the benefit of the readers, and care has been taken to keep the code as general as possible so that others can easily edit only the necessary portions for their own purposes. The complete code has been divided into several sub-parts, each of which is explained in detail separately for the solution of forward and inverse problems.

The first part is the input data generation. For conventional PINNs, points have to be generated 1) in the interior of the domain to satisfy the PDE residual, 2) on the boundary of the domain to satisfy the boundary conditions, and 3) at the initial time to satisfy the initial conditions. However, in the modified approach, since the output is adapted so as to satisfy all of these conditions simultaneously, only the interior points need to be generated. The part of the code generating the interior data points by Latin hypercube sampling is illustrated in the following snippet.
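For readers unfamiliar with the sampling scheme, a minimal self-contained Latin hypercube sampler can be sketched as follows (Python/NumPy for illustration only; the paper's snippet uses MATLAB's built-in facilities, and the point count and dimension here are arbitrary):

```python
import numpy as np

# Minimal Latin hypercube sampler for interior collocation points on the
# unit hypercube: each of the d coordinates is stratified into n equal
# bins, and every bin receives exactly one sample.
def latin_hypercube(n, d, seed=0):
    rng = np.random.default_rng(seed)
    # For each dimension, an independent random permutation of the n bin
    # indices, plus one uniform draw inside each bin, scaled to [0, 1).
    bins = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (bins + rng.random((n, d))) / n

pts = latin_hypercube(64, 2)   # e.g. 64 interior points in an (x, t) square
```

The stratification guarantees better space-filling than plain uniform sampling for the same number of collocation points.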

Next, the fully connected deep neural network architecture is constructed according to the user-defined number of layers “

At this stage, the network is trained with user-specified values of parameters such as the number of epochs, the initial learning rate, and the decay rate, along with several other tuning options. It is worth noting that multiple facilities to allocate hardware resources are available in MATLAB for training the network at an optimal computational cost. These include using the CPU, a GPU, multiple GPUs, parallel (local or remote) computing, and cloud computing. The steps performed during model training within the nested loops of epochs and mini-batch iterations are illustrated in the following snippet. To recall, an epoch is a full pass of the training algorithm over the entire training set, and an iteration is one step of the gradient descent algorithm towards minimizing the loss function using a mini-batch. As can be observed from the snippet, three operations are involved during model training. These are 1) evaluating the model gradients and loss using “
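The generic structure of such an epoch/mini-batch loop can be sketched in a few lines (Python/NumPy purely for illustration; the actual MATLAB loop evaluates gradients via its automatic differentiation machinery, e.g. dlfeval/dlgradient, whereas this toy example fits a single scalar parameter with a hand-coded gradient):

```python
import numpy as np

# Schematic epoch/mini-batch training loop: a scalar weight w is fitted to
# synthetic data y = 3*x by mini-batch gradient descent on a squared error.
rng = np.random.default_rng(1)
x = rng.random(256)
y = 3.0*x                                 # synthetic data, true w = 3
w, lr, batch = 0.0, 0.5, 32

for epoch in range(50):                   # one epoch = full pass over data
    order = rng.permutation(x.size)       # reshuffle the mini-batches
    for i in range(0, x.size, batch):     # one iteration = one mini-batch step
        idx = order[i:i+batch]
        grad = np.mean(2*(w*x[idx] - y[idx])*x[idx])  # d/dw of batch MSE
        w -= lr*grad                      # gradient-descent update
```

In the PINN setting the loss inside the inner loop is the physics residual (plus data terms for inverse problems) rather than a plain data misfit, but the loop skeleton is the same.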

The next snippet presents the function “

As the modified DNN output ensures the satisfaction of ICs and BCs, only the loss term corresponding to PDE residual is necessary. Instead, if conventional PINNs was used, separate loss terms originating from the ICs and BCs would have to be added to the residual loss. Finally, the gradients of the combined loss w.r.t. the network parameters are computed and passed as the function output. These gradients are further used during backpropagation.

Naturally, another loss term is involved while solving an inverse problem, which minimizes the discrepancy between the model prediction and the measured data. The parameter to be identified is updated as an additional learnable parameter of the DNN along with the network weights and biases. This can be easily implemented by adding the following line:
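The essence of treating the unknown physical parameter as one more trainable variable can be shown with a stripped-down sketch (Python/NumPy for illustration only; the synthetic "measurement", the assumed ODE form u'' + p·u = 0, and the learning rate are our own choices, and the network itself is omitted so the parameter update stands alone):

```python
import numpy as np

# Identify the stiffness term p = omega^2 of  u'' + p*u = 0  from a
# measured trajectory u(t) = cos(omega*t) by gradient descent on the
# mean-squared physics residual with respect to p.
omega_true = 2.0
t = np.linspace(0.0, 1.0, 201)
u = np.cos(omega_true*t)                    # "measured" response
d2u = -omega_true**2*np.cos(omega_true*t)   # curvature, known analytically here

p, lr = 0.0, 1.0                            # initial guess and learning rate
for _ in range(200):
    residual = d2u + p*u                    # physics residual at current p
    grad = np.mean(2*residual*u)            # d/dp of the mean-squared residual
    p -= lr*grad                            # same update rule as the weights
```

In the full method the curvature comes from automatic differentiation of the network output rather than from the data directly, but the parameter receives gradient updates in exactly the same way as the weights and biases.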

The “

Once the PINNs model is trained, it can be used to predict on the test dataset. It is worth noting that the deep learning library of MATLAB is rich and consists of a diverse range of built-in functions, providing users adequate choice and modelling freedom. In the next section, the performance of conventional and modified PINNs is assessed on four representative structural vibration problems, involving the solution of ODEs, including multi-DOF systems, and a PDE. In doing so, both forward and inverse problems are addressed. Complete executable MATLAB codes of the PINNs implementation for all the example problems can be found in the

The forced vibration of the spring-mass system can be expressed by

where ω_{n}, f_{n}, and ω denote the natural frequency, the forcing amplitude, and the forcing frequency, respectively.
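As a quick numerical sanity check of the governing equation (Python/NumPy sketch for illustration only; the undamped form u'' + ω_n²·u = f·sin(ω t) and all parameter values below are our assumptions), the classical steady-state solution can be substituted back into the ODE:

```python
import numpy as np

# Verify that the steady-state solution
#   u_p(t) = f*sin(w*t) / (wn^2 - w^2)
# satisfies the undamped forced oscillator  u'' + wn^2 * u = f*sin(w*t).
wn, w, f = 2.0, 0.5, 1.0                   # assumed parameter values
t = np.linspace(0.0, 10.0, 1001)

u = f*np.sin(w*t)/(wn**2 - w**2)
d2u = -w**2*f*np.sin(w*t)/(wn**2 - w**2)   # analytic second derivative
residual = d2u + wn**2*u - f*np.sin(w*t)   # should vanish pointwise
max_res = np.max(np.abs(residual))
```

Such closed-form references are what the PINN approximation is benchmarked against in this example.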

As mentioned previously, in the realm of the PINNs framework, solution space (of the ODE, for this case) can be approximated by DNN such that

Results of the forced spring-mass system

It can be observed from

Therefore, an alternative approach has been employed in this work to address the scaling issue, requiring no additional computational cost compared to conventional PINNs. To avoid multiple terms in the loss function, a simple scheme for modifying the neural network output has been adopted so that the initial and/or boundary conditions are satisfied. To automatically satisfy the initial conditions in the above problem, the output of the neural network

Since the modified neural network output is

Following this approach, significant improvement in approximation of the displacement response has been achieved as shown in

The second example concerns the forced vibration of a damped spring-mass system, which can be expressed by
where ω_{n}, f_{0}, and ω denote the natural frequency, the forcing amplitude, and the forcing frequency, respectively.

As mentioned previously, in the realm of the PINNs framework, solution space (of the ODE, for this case) can be approximated by DNN such that
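Again, the textbook steady-state solution provides a reference against which the network approximation can be judged; the substitution can be verified numerically (Python/NumPy for illustration only; the equation form x'' + 2ζω_n·x' + ω_n²·x = f₀·sin(ω t) and the parameter values are our assumptions):

```python
import numpy as np

# Check that the classical steady-state solution of the damped forced
# oscillator satisfies the ODE  x'' + 2*zeta*wn*x' + wn^2*x = f0*sin(w*t).
wn, zeta, w, f0 = 2.0, 0.1, 1.5, 1.0
X = f0/np.hypot(wn**2 - w**2, 2*zeta*wn*w)     # steady-state amplitude
phi = np.arctan2(2*zeta*wn*w, wn**2 - w**2)    # phase lag

t = np.linspace(0.0, 10.0, 1001)
x = X*np.sin(w*t - phi)
dx = w*X*np.cos(w*t - phi)                     # analytic first derivative
d2x = -w**2*X*np.sin(w*t - phi)                # analytic second derivative
residual = d2x + 2*zeta*wn*dx + wn**2*x - f0*np.sin(w*t)
max_res = np.max(np.abs(residual))
```

The amplitude/phase form makes explicit how damping shifts the response relative to the forcing, which a trained PINN must reproduce.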

Results of the damped forced spring-mass system

It can be observed from

Following this approach, significant improvement in approximation of the displacement response has been achieved as shown in

Next, the implementation of PINNs has been illustrated for an inverse setting. For doing so, the same problem as defined by Eq.

The results have been presented in the form of convergence of the identified parameters (natural frequency and damping ratio) in

Identification results for the damped forced spring-mass system

A 2-DOF lumped mass system as shown in

A schematic representation of the 2-DOF lumped mass system.

As opposed to the previous examples, in general, the response associated with each DOF has to be represented by an output node of a (multi-output) FC-DNN. Since the above example is a 2-DOF system, the responses of the two DOFs are represented by two output nodes of an FC-DNN in the realm of the PINNs architecture such that
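For reference, the natural frequencies that such a free-vibration solution must exhibit follow from a standard eigen-analysis (Python/NumPy sketch for illustration; the mass and stiffness values below are our own example, not the paper's):

```python
import numpy as np

# For M*x'' + K*x = 0, the ansatz x = phi*sin(wn*t) yields the generalized
# eigenproblem K*phi = wn^2 * M*phi; the eigenvalues of M^{-1}K are the
# squared natural frequencies of the 2-DOF lumped-mass chain.
m1, m2 = 1.0, 1.0                      # assumed masses
k1, k2 = 4.0, 4.0                      # assumed spring stiffnesses
M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

lam = np.linalg.eigvals(np.linalg.solve(M, K))   # eigenvalues of M^{-1} K
wn = np.sort(np.sqrt(lam.real))                  # natural frequencies (rad/s)
```

For these values the squared frequencies are 6 ± 2√5, i.e. wn = √5 ∓ 1, and the free response of each DOF is a superposition of the two corresponding modes.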

Results of free vibration of the 2-DOF lumped mass system.

It can be observed from

Next, PINNs has been implemented in an inverse setup for identification of system parameters both for the undamped and damped cases. For doing so, the same problem as defined by Eq.

Identification results for the undamped 2-DOF system

Identification results for the damped 2-DOF system

The converged values of

A rectangular membrane with unit dimensions excited by an initial displacement

Using the PINNs framework, solution of the PDE is approximated by a DNN such that

The displacement
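The separable modes of a fixed-edge membrane provide the analytical benchmark for this example; the following sketch checks one such mode against the wave equation by central finite differences (Python/NumPy for illustration only; the mode numbers, wave speed, and evaluation point are assumed values):

```python
import numpy as np

# The mode  u(x, y, t) = sin(m*pi*x) * sin(n*pi*y) * cos(w*t)  with
# w = c*pi*sqrt(m^2 + n^2) should satisfy the membrane wave equation
#   u_tt = c^2 * (u_xx + u_yy)
# on the unit square with fixed (zero-displacement) edges.
c, m, n = 1.0, 2, 1
w = c*np.pi*np.hypot(m, n)

def u(x, y, t):
    return np.sin(m*np.pi*x)*np.sin(n*np.pi*y)*np.cos(w*t)

x0, y0, t0, h = 0.3, 0.7, 0.25, 1e-4
d2 = lambda g: (g(h) - 2*g(0.0) + g(-h))/h**2     # central 2nd difference
u_tt = d2(lambda s: u(x0, y0, t0 + s))
lap = d2(lambda s: u(x0 + s, y0, t0)) + d2(lambda s: u(x0, y0 + s, t0))
residual = u_tt - c**2*lap                        # should be ~0
```

The edges remain at rest for all time since sin(m·π·x) vanishes at x = 0 and x = 1 (and likewise in y).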

Results of free vibration of the rectangular membrane

It can be observed from

To ensure the satisfaction of residual, initial and boundary conditions and improve upon the approximation accuracy, the neural network output has been modified as,

Since the modified neural network output is
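The boundary-condition part of such an output modification can be checked in isolation (Python/NumPy sketch for illustration; the multiplicative factor x(1−x)·y(1−y) and the stand-in network below are our own assumed example of the general idea):

```python
import numpy as np

# Multiplying the raw network output N by x*(1-x)*y*(1-y) makes the
# fixed-edge condition u = 0 hold exactly on all four edges of the unit
# square, for ANY network output, so no boundary loss term is needed.
def N(x, y, t):                          # stand-in for the raw network output
    return np.cos(x + 2*y) + t*x*y

def u_hat(x, y, t):
    return x*(1 - x)*y*(1 - y)*N(x, y, t)

# Evaluate the modified output along the four edges at some time instant.
edge = np.linspace(0.0, 1.0, 11)
vals = [u_hat(0.0, edge, 0.3), u_hat(1.0, edge, 0.3),
        u_hat(edge, 0.0, 0.3), u_hat(edge, 1.0, 0.3)]
```

Additional terms handling the initial displacement and velocity are composed in the same spirit, leaving only the PDE residual in the loss.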

Following this modified PINNs approach, significant improvement in the spatial distribution of the displacement response has been achieved as shown in

This work presents the MATLAB implementation of PINNs for solving forward and inverse problems in structural vibrations. The contribution of the study lies in the following:

1. It is one of the very few applications of PINNs in structural vibrations to date and thus aims to fill this gap, which also makes the work timely.

2. It demonstrates a critical drawback of the first generation PINNs while solving vibration problems, which leads to inaccurate predictions.

3. It largely addresses the above drawback with the help of a simple modification of the PINNs framework, without adding any extra computational cost. This results in a significant improvement in the approximation accuracy.

4. The implementation of conventional and modified PINNs is performed in MATLAB. As per the authors’ knowledge, this is the first published PINNs code for structural vibrations carried out in MATLAB, which is expected to benefit a wide scientific audience interested in the application of deep learning in computational science and engineering.

5. Complete executable MATLAB codes of all the examples undertaken have been provided along with their line-by-line explanation so that the interested readers can readily implement these codes.

Four representative problems in structural vibrations, involving ODEs and a PDE, have been solved, including multi-DOF systems. Both forward and inverse problems have been addressed for each. The results of the three examples involving single-DOF systems clearly show that conventional PINNs are incapable of approximating the response due to a regularization issue. The modified PINNs approach addresses this issue and captures the solution of the ODE/PDE adequately. For the 2-DOF system, conventional PINNs perform satisfactorily for both the inference and identification formulations. It is recommended to employ

Making the codes public is a humble and timely attempt to expand the scientific contribution of deep learning in MATLAB, owing to its recently developed rich deep learning library. The research model can be similar to that of authors who add their Python codes to public repositories such as GitHub. Since the topic is very active, the repository is expected to be quickly populated with the latest developments and improvements, bringing the best to the research community. The authors envision great prospects for this modest research: a recently developed and widely popular method in a new application field, implemented in new and more user-friendly software.

Our investigation of the proposed PINNs approach on more complex structural dynamic problems, such as beams, plates, and nonlinear oscillators (e.g., cubic stiffness and the Van der Pol oscillator), showed opportunities for improvement. To better capture the forward solution and identify unknown parameters in inverse problems, modifications to the approach proposed in this paper are needed. Based on these observations, the need for further systematic investigation has been identified. This aligns with the recent findings in [

The original contributions presented in the study are included in the article/

TC came up with the idea of the work, carried out the analysis and wrote the manuscript. MF, SA, and HK participated in weekly brainstorming sessions, reviewed the results and manuscript. MF secured funding for the work. All authors contributed to the article and approved the submitted version.

The authors declare that financial support was received for the research, authorship, and/or publication of this article. TC gratefully acknowledges the support of the University of Surrey through the award of a faculty start-up grant. All authors gratefully acknowledge the support of the Engineering and Physical Sciences Research Council through the award of a Programme Grant “Digital Twins for Improved Dynamic Design,” grant number EP/R006768.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The Supplementary Material for this article can be found online at:

Functions passed to ‘dlfeval’ are allowed to contain calls to ‘dlgradient’, which compute gradients by using automatic differentiation.