Finite Neuron Method for Partial Differential Equations
In this talk, I will report on recent developments in the design and analysis of neural network (NN) based methods for the numerical solution of partial differential equations (PDEs), namely the finite neuron method (FNM). After first giving an overview of our convergence analysis of the FNM, I will focus on the training algorithms used to solve the optimization problems associated with these methods. I will present a theoretical result that explains both the success and the challenges of NN-based methods trained by gradient-based methods such as SGD and Adam. I will then present a new class of training algorithms that theoretically achieve, and numerically exhibit, the asymptotic convergence rate of the underlying discretization, which gradient-based methods cannot.