
ACINN Graduate Seminar - SS 2024

2024-03-06 at 12:00 (on-line & on-site)

Physics and Equality Constrained Artificial Neural Networks for Forward and Inverse Problems

Inanc Senocak

Swanson School of Engineering, University of Pittsburgh, PA, USA

Artificial neural networks (ANNs) can be trained to learn the solutions of partial differential equations (PDEs) by employing the residual forms of the PDEs, their associated boundary conditions, and any pertinent measurement data. The neural network-based solution of a PDE can therefore be viewed as a meshless method, in which the parameters of the network are optimized against the governing equations and their respective boundary conditions. Despite the critical importance of the optimization algorithm used to minimize the objective function, the formulation of the optimization problem itself has received comparatively little attention. Typically, physics-informed neural networks (PINNs) are trained with a composite objective function formed as a weighted sum of the residuals of a governing PDE and its boundary conditions. A notable limitation of this approach is that the boundary conditions do not adequately constrain the solution, compounded by the fact that the weighting factors embedded in the objective function are problem-specific and cannot be determined a priori. To systematically address these inherent limitations, we put forth an equality-constrained optimization formulation that leverages boundary conditions and any high-fidelity data to constrain the PDE loss. We then recast the constrained optimization problem as an unconstrained one by employing an augmented Lagrangian method (ALM) with an adaptive update strategy for the penalty parameters. Moreover, we extend our approach to parallel training by incorporating a generalized Schwarz-type domain decomposition strategy, which is well-suited for both the Helmholtz and Laplace equations. Through various examples of forward and inverse problems in the thermo-fluid sciences, we demonstrate that the proposed methodology markedly reduces the relative error of the learned solutions.
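To make the constrained-training idea concrete, the following is a minimal, illustrative sketch (not the speaker's implementation) of treating boundary conditions as equality constraints via an augmented Lagrangian, applied to the toy problem u''(x) = -π² sin(πx) on (0, 1) with u(0) = u(1) = 0. The tiny network, the simple penalty-doubling rule, and all names such as `augmented_lagrangian` and `sgd_step` are assumptions made for illustration; the adaptive penalty update and the domain-decomposition strategy described in the abstract are not reproduced here.

```python
# Illustrative sketch only: PDE residual as the objective, boundary values as
# equality constraints, minimized with a basic augmented Lagrangian loop.
import jax
import jax.numpy as jnp

def init_params(key, widths=(1, 32, 32, 1)):
    params = []
    for m, n in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def mlp(params, x):
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]                      # scalar network output u(x)

def residual(params, x):
    # PDE residual r(x) = u''(x) + pi^2 sin(pi x); exact solution is u = sin(pi x)
    u_xx = jax.grad(jax.grad(mlp, argnums=1), argnums=1)(params, x)
    return u_xx + jnp.pi ** 2 * jnp.sin(jnp.pi * x)

def augmented_lagrangian(params, lam, mu, x_pde, x_bc):
    # Objective: mean-squared PDE residual; constraints c = u(x_b) - 0 at the boundary
    pde_loss = jnp.mean(jax.vmap(residual, (None, 0))(params, x_pde) ** 2)
    c = jax.vmap(mlp, (None, 0))(params, x_bc)
    return pde_loss + jnp.sum(lam * c) + 0.5 * mu * jnp.sum(c ** 2)

@jax.jit
def sgd_step(params, lam, mu, x_pde, x_bc, lr=1e-3):
    grads = jax.grad(augmented_lagrangian)(params, lam, mu, x_pde, x_bc)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
x_pde = jnp.linspace(0.0, 1.0, 64)             # collocation points
x_bc = jnp.array([0.0, 1.0])                   # boundary points
lam, mu = jnp.zeros(2), 1.0                    # multipliers and penalty parameter

for outer in range(20):                        # dual / penalty updates
    for _ in range(500):                       # inner minimization of the augmented Lagrangian
        params = sgd_step(params, lam, mu, x_pde, x_bc)
    c = jax.vmap(mlp, (None, 0))(params, x_bc)
    lam = lam + mu * c                         # first-order multiplier update
    mu = jnp.minimum(2.0 * mu, 1e3)            # crude penalty growth, capped (not the adaptive rule)

x_test = jnp.linspace(0.0, 1.0, 11)
u_pred = jax.vmap(mlp, (None, 0))(params, x_test)
print(jnp.max(jnp.abs(u_pred - jnp.sin(jnp.pi * x_test))))  # error vs. exact sin(pi x)
```

The outer loop mirrors the structure described in the abstract: the boundary conditions enter only through the constraint terms, the multipliers and penalty parameter are updated between inner minimizations, and no hand-tuned weighting of PDE and boundary losses is required.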


The content of this webpage is the author’s opinion and the author’s intellectual property. ACINN has the author’s consent to make this information available on the webpage for the announcement of the seminar presentation and in the seminar archive.
