2023-07-17, 15:45-16:30 | Lecture Room 11 (A7 2F)
Speaker |
From physics-informed machine learning to physics-informed machine intelligence: Quo vadimus?

We will review physics-informed neural networks (NNs) and summarize available extensions for applications in computational science and engineering. We will also introduce new NNs that learn functionals and nonlinear operators from functions and their corresponding responses, for system identification. The universal approximation theorem for operators suggests the potential of NNs to learn any continuous operator or complex system from scattered data. We first generalize the theorem to deep neural networks, and then apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of an NN that encodes the discrete input function space (branch net) and another NN that encodes the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms, and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales, trained simultaneously on diverse sources of data. Finally, we will present first results on the next generation of these architectures: biologically plausible designs based on spiking neural networks and Hebbian learning that are more efficient and closer to human intelligence.
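To make the branch/trunk decomposition concrete, here is a minimal sketch of a DeepONet-style forward pass in PyTorch. It is an illustrative reconstruction under assumptions, not the speaker's reference implementation: the sensor count m, embedding dimension p, layer widths, and the toy antiderivative task are all hypothetical choices for shape-checking only.

```python
# Minimal DeepONet-style sketch (illustrative assumptions, not reference code).
# Branch net: encodes an input function u sampled at m fixed sensor points.
# Trunk net:  encodes a query location y in the domain of the output function.
# Prediction G(u)(y) = inner product of the two p-dimensional embeddings.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m: int = 100, p: int = 64, width: int = 128):
        super().__init__()
        # Branch net: m sensor values of u -> p coefficients.
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        # Trunk net: query coordinate y (1-D here) -> p basis values.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u_sensors: (batch, m); y: (batch, 1) -> prediction (batch, 1)
        b = self.branch(u_sensors)  # (batch, p)
        t = self.trunk(y)           # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Toy usage, e.g. for the antiderivative operator G(u)(y) = \int_0^y u(s) ds,
# with random stand-in data (hypothetical, for checking tensor shapes only):
if __name__ == "__main__":
    model = DeepONet(m=100, p=64)
    u = torch.randn(8, 100)        # 8 input functions at 100 sensor points
    y = torch.rand(8, 1)           # one query location per function
    print(model(u, y).shape)       # torch.Size([8, 1])
```

The inner-product output head is what lets a single trained network evaluate G(u) at arbitrary query points y, rather than on a fixed output grid.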