

| Private Homepage | https://www.uni-muenster.de/AMM/Jentzen/Mitarbeiter/included.shtml |
| Research Interests | • Mathematics for machine learning • Numerical approximations for high-dimensional partial differential equations • Numerical approximations for stochastic differential equations • Deep learning |
| Selected Publications | • Hairer M, Hutzenthaler M, Jentzen A: Loss of regularity for Kolmogorov equations. Annals of Probability Vol. 43 (2), 2015, pp. 468-527 • Hutzenthaler M, Jentzen A: On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with nonglobally monotone coefficients. Annals of Probability Vol. 48 (1), 2020, pp. 53-93 • Hutzenthaler M, Jentzen A, Kloeden PE: Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences Vol. 467 (2130), 2011, pp. 1563-1576 • Fehrman B, Gess B, Jentzen A: Convergence rates for the stochastic gradient descent method for non-convex objective functions. Journal of Machine Learning Research Vol. 21, 2020, Paper No. 136, 48 pp. • Han J, Jentzen A, E W: Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences of the United States of America Vol. 115 (34), 2018, pp. 8505-8510 • E W, Han J, Jentzen A: Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics Vol. 5 (4), 2017, pp. 349-380 • Hutzenthaler M, Jentzen A, Kloeden PE: Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients. Annals of Applied Probability Vol. 22 (4), 2012, pp. 1611-1641 • Hutzenthaler M, Jentzen A: Numerical approximations of stochastic differential equations with non-globally Lipschitz continuous coefficients. Memoirs of the American Mathematical Society Vol. 236 (1112), 2015, v+99 pp. • Hutzenthaler M, Jentzen A, Kruse T, Nguyen TA, von Wurstemberger P: Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences Vol. 476 (2244), 2020, pp. 630-654 |
| Selected Projects | • Mathematical Theory for Deep Learning: The key goal of this project is to provide a rigorous mathematical analysis for deep learning algorithms and thereby to establish mathematical theorems which explain the success and the limitations of deep learning algorithms. In particular, this project aims (i) to provide a mathematical theory for the high-dimensional approximation capacities of deep neural networks, (ii) to reveal suitable regular sequences of functions which can be approximated by deep neural networks, but not by shallow neural networks, without the curse of dimensionality, and (iii) to establish dimension-independent convergence rates for stochastic gradient descent optimization algorithms when employed to train deep neural networks, with error constants which grow at most polynomially in the dimension. • Overcoming the curse of dimensionality: stochastic algorithms for high-dimensional partial differential equations: Partial differential equations (PDEs) are among the most universal tools used in modeling problems in nature and in man-made complex systems. The PDEs appearing in applications are often high-dimensional. Such PDEs can typically not be solved explicitly, and developing efficient numerical algorithms for high-dimensional PDEs is one of the most challenging tasks in applied mathematics. As is well known, the difficulty lies in the so-called "curse of dimensionality": the computational effort of standard approximation algorithms grows exponentially in the dimension of the considered PDE. The key objective of this research project is to overcome this curse of dimensionality and to construct and analyze new approximation algorithms which solve high-dimensional PDEs with a computational effort that grows at most polynomially in both the dimension of the PDE and the reciprocal of the prescribed approximation precision. • Existence, uniqueness, and regularity properties of solutions of partial differential equations: The goal of this project is to reveal existence, uniqueness, and regularity properties of solutions of partial differential equations (PDEs). In particular, we intend to study existence, uniqueness, and regularity properties of viscosity solutions of degenerate semilinear Kolmogorov PDEs of parabolic type. We plan to investigate such PDEs by means of probabilistic representations of the Feynman-Kac type. We also intend to study the connections of such PDEs to optimal control problems. • Regularity properties and approximations for stochastic ordinary and partial differential equations with non-globally Lipschitz continuous nonlinearities: A number of stochastic ordinary and partial differential equations from the literature (such as, for example, the Heston and the 3/2-model from financial engineering, (overdamped) Langevin-type equations from molecular dynamics, stochastic spatially extended FitzHugh-Nagumo systems from neurobiology, stochastic Navier-Stokes equations, and Cahn-Hilliard-Cook equations) contain non-globally Lipschitz continuous nonlinearities in their drift or diffusion coefficients. A central aim of this project is to investigate regularity properties of such stochastic differential equations with respect to their initial values in a systematic way. A further goal of this project is to analyze the regularity of solutions of the deterministic Kolmogorov partial differential equations associated with such stochastic differential equations. Another aim of this project is to analyze weak and strong convergence and convergence rates of numerical approximations for such stochastic differential equations. |
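The projects above rest on probabilistic representations of the Feynman-Kac type: a linear Kolmogorov PDE can be solved at a single point by averaging over sample paths of an associated stochastic process, at a cost that does not grow exponentially in the dimension. A minimal NumPy sketch for the heat equation u_t = Δu with u(0, ·) = g, whose solution admits the representation u(t, x) = E[g(x + √(2t) Z)] with Z standard normal; the choice d = 100, g(x) = |x|², and the sample count are our own illustrative assumptions, not taken from the projects:

```python
import numpy as np

def heat_mc(g, x, t, num_samples=100_000, rng=None):
    """Monte Carlo Feynman-Kac estimate of u(t, x) for the heat equation
    u_t = Laplacian(u), u(0, .) = g, via u(t, x) = E[g(x + sqrt(2 t) Z)],
    where Z ~ N(0, I_d). Cost is num_samples * d, polynomial in d."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.shape[0]
    z = rng.standard_normal((num_samples, d))     # samples of Z
    return float(g(x + np.sqrt(2.0 * t) * z).mean())

d = 100                                           # dimension of the PDE
x = np.ones(d)
t = 0.5
# For g(x) = |x|^2 the exact solution is u(t, x) = |x|^2 + 2 d t.
approx = heat_mc(lambda y: (y ** 2).sum(axis=-1), x, t)
exact = float((x ** 2).sum() + 2 * d * t)
```

A d = 100 tensor grid would be utterly infeasible here, while the Monte Carlo estimate converges at the dimension-independent rate 1/√(num_samples); the cited projects extend this idea to semilinear PDEs, where plain Monte Carlo no longer applies.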
| Topics in Mathematics Münster | • T7: Field theory and randomness • T9: Multi-scale processes and effective behaviour • T10: Deep learning and surrogate methods |
| Current Publications | • Eberle S, Jentzen A, Riekert A, Weiss GS: Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation. Electronic Research Archive Vol. 31 (5), 2023 • Jentzen A, Riekert A: Strong overall error analysis for the training of artificial neural networks via random initializations. Communications in Mathematics and Statistics, 2023 • Beck C, Hutzenthaler M, Jentzen A, Kuckuck B: An overview on deep learning-based approximation methods for partial differential equations. Discrete and Continuous Dynamical Systems - Series B Vol. 28 (6), 2023 • Jentzen A, Riekert A: Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation. Journal of Mathematical Analysis and Applications Vol. 517 (2), 2023 • Hutzenthaler M, Jentzen A, Kruse T, Nguyen TA: Overcoming the curse of dimensionality in the numerical approximation of backward stochastic differential equations. Journal of Numerical Mathematics Vol. 31 (2), 2023 • Grohs P, Ibragimov S, Jentzen A, Koppensteiner S: Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality. Journal of Complexity, 2023 • Grohs P, Hornung F, Jentzen A, Zimmermann P: Space-time error estimates for deep neural network approximations for differential equations. Advances in Computational Mathematics Vol. 49 (1), 2023 • Becker S, Gess B, Jentzen A, Kloeden PE: Strong convergence rates for explicit space-time discrete numerical approximations of stochastic Allen-Cahn equations. Stochastics and Partial Differential Equations: Analysis and Computations Vol. 11 (1), 2023 • Boussange V, Becker S, Jentzen A, Kuckuck B, Pellissier L: Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions. SN Partial Differential Equations and Applications Vol. 4, 2023 |
| Current Projects | • EXC 2044 - T07: Field theory and randomness: Quantum field theory (QFT) is the fundamental framework to describe matter at its smallest length scales. QFT has motivated groundbreaking developments in different mathematical fields: the theory of operator algebras goes back to the characterisation of observables in quantum mechanics; conformal field theory, based on the idea that physical observables are invariant under conformal transformations of space, has led to breakthrough developments in probability theory and representation theory; string theory aims to combine QFT with general relativity and has led to enormous progress in complex algebraic geometry, among others. • EXC 2044 - T09: Multiscale processes and effective behaviour: Many processes in physics, engineering, and the life sciences involve multiple spatial and temporal scales, where the underlying geometry and dynamics on the smaller scales typically influence the emerging structures on the coarser ones. A unifying theme running through this research topic is to identify the relevant spatial and temporal scales governing the processes under examination. This is achieved, e.g., by establishing sharp scaling laws, by rigorously deriving effective scale-free theories, and by developing novel approximation algorithms which balance various parameters arising in multiscale methods. • EXC 2044 - T10: Deep learning and surrogate methods: In this topic we will advance the fundamental mathematical understanding of artificial neural networks, e.g., through the design and rigorous analysis of stochastic gradient descent methods for their training. Combining data-driven machine learning approaches with model order reduction methods, we will develop fully certified multi-fidelity modelling frameworks for parameterised PDEs, design and study higher-order deep learning-based approximation schemes for parametric SPDEs, and construct cost-optimal multi-fidelity surrogate methods for PDE-constrained optimisation and inverse problems. • GRK 3027: Rigorous Analysis of Complex Random Systems: The Research Training Group is dedicated to educating mathematicians in the field of complex random systems. It provides a strong platform for the development of both industrial and academic careers for its graduate students. The central theme is a mathematically rigorous understanding of how probabilistic systems, modelled on a microscopic level, behave effectively at a macroscopic scale. A quintessential example for this RTG lies in statistical mechanics, where systems comprising an astronomical number of particles, upwards of 10^23, can be accurately described by a handful of observables, including temperature and entropy. Other examples come from stochastic homogenisation in materials science, from the behaviour of training algorithms in machine learning, and from geometric discrete structures built from point processes or random graphs. The challenge of understanding these phenomena with mathematical rigour has been, and continues to be, a source of exciting research in probability theory. Within this RTG we strive for macroscopic representations of such complex random systems. The main research focus of this RTG is to advance tools for both qualitative and quantitative analyses of complex random systems using macroscopic/effective variables and to unveil deeper insights into the nature of these intricate mathematical constructs. We will employ a blend of tools from discrete and continuous probability, including point processes, large deviations, stochastic analysis, and stochastic approximation arguments. Importantly, the techniques that we will use and the underlying mathematical ideas are universal across projects of completely different origin. This facet stands as a cornerstone of the RTG and holds significant importance for the participating students. To enable our students to exploit the synergies between the different projects, they will pass through a structured and rich qualification programme with several specialised courses, regular colloquia and seminars, working groups, and yearly retreats. Moreover, the PhD students will benefit from the lively mathematical community in Münster, with a mentoring programme and several interaction and networking activities with other mathematicians and the local industry. |
| Email | ajentzen@uni-muenster.de |
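Topic T10 above and several of the listed publications concern stochastic gradient descent (SGD) for training ReLU networks. As a hedged illustration of the object under study, a minimal NumPy sketch of minibatch SGD on a one-hidden-layer ReLU network; every concrete choice (target function sin(πx), width 64, learning rate, step count) is our own and not taken from the research described here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data for an illustrative 1-d regression target.
n, width = 256, 64
X = rng.uniform(-1.0, 1.0, (n, 1))
Y = np.sin(np.pi * X)

# One-hidden-layer ReLU network: x -> W2 @ relu(W1 x + b1) + b2.
W1 = rng.standard_normal((1, width))
b1 = rng.uniform(-1.0, 1.0, width)            # spread the ReLU kinks over [-1, 1]
W2 = rng.standard_normal((width, 1)) / np.sqrt(width)
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)          # hidden ReLU activations
    return h, h @ W2 + b2

lr, batch = 0.05, 32
for step in range(5000):
    idx = rng.integers(0, n, batch)           # sample a random minibatch
    x, y = X[idx], Y[idx]
    h, pred = forward(x)
    err = pred - y                            # gradient of (1/2) squared error
    # Backpropagation through both layers.
    gW2 = h.T @ err / batch
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)               # ReLU derivative as a 0/1 mask
    gW1 = x.T @ dh / batch
    gb1 = dh.mean(axis=0)
    # SGD parameter updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[1] - Y) ** 2).mean())
```

The cited convergence results ask, in essence, when and how fast iterations of exactly this kind approach a good network, and how the constants depend on the dimension; the nonsmooth ReLU mask `(h > 0)` is one source of the analytical difficulty.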
| Phone | +49 251 83-33793 |
| Fax | +49 251 83-32729 |
| Room | 120.005 |
| Secretary | Claudia Giesbert, Phone +49 251 83-33792, Fax +49 251 83-32729, Room 120.002 |
| Address | Prof. Dr. Arnulf Jentzen, Angewandte Mathematik Münster: Institut für Analysis und Numerik, Fachbereich Mathematik und Informatik der Universität Münster, Orléans-Ring 10, 48149 Münster |
