Machine Learning: Mathematical Theory and Scientific Applications
Modern machine learning has achieved remarkable success across a wide range of AI applications, and is poised to fundamentally change the way we do physical modeling. I will give an overview of some of the theoretical and practical issues that I consider most important in this exciting area.
The first part of the talk is devoted to integrating machine learning with first-principles-based modeling to build models for a wide range of applications, including molecular modeling, gas dynamics, finance, and linguistics. We will emphasize: (1) building machine learning models that satisfy physical constraints, and (2) using microscopic models to generate optimal data sets.
The second part is devoted to theoretical issues. We take the viewpoint that supervised learning is a problem in approximation theory, which means extending classical approximation theory to high dimensions as well as to the over-parametrized regime. We show how this can be done for the random feature model, the two-layer neural network model, and the deep residual network model.
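To make the random feature model concrete, here is a minimal sketch (not taken from the talk, and using a hypothetical 1-D target function): the inner weights and biases are sampled once and frozen, and only the outer coefficients are fitted, so training reduces to a linear least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target function to approximate on [-1, 1]
def target(x):
    return np.sin(3 * x)

n, d, m = 200, 1, 100            # samples, input dimension, random features
X = rng.uniform(-1, 1, size=(n, d))
y = target(X[:, 0])

# Random features: inner weights W and biases b are sampled once and
# frozen; the scale is chosen so the ReLU kinks fall inside the domain.
W = rng.normal(scale=3.0, size=(d, m))
b = rng.uniform(-3, 3, size=m)
Phi = np.maximum(X @ W + b, 0.0)  # ReLU feature matrix, shape (n, m)

# Only the outer coefficients a are trained -- a linear problem.
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Evaluate on held-out points
X_test = rng.uniform(-1, 1, size=(50, d))
pred = np.maximum(X_test @ W + b, 0.0) @ a
err = np.max(np.abs(pred - target(X_test[:, 0])))
```

Because the feature map is fixed, the model is linear in its trainable parameters; this is what makes the random feature model the natural first case for an approximation-theoretic analysis before moving to fully trained two-layer networks.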