Arnulf Jentzen -- CUHK-Shenzhen & University of Münster
[Photo: Arnulf Jentzen giving a talk. © Lynn Quiroz]

Arnulf Jentzen

The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China
University of Münster, Germany

Vacant PhD and postdoctoral positions: Several PhD and postdoctoral positions are currently open in my research group, both at the University of Münster in Germany and at The Chinese University of Hong Kong, Shenzhen in China. I am especially interested in candidates with a strong background in differential geometry, dynamical systems, algebraic geometry, functional analysis, analysis of partial differential equations, or stochastic analysis. Interested candidates can contact me by e-mail at ajentzen (at) uni-muenster.de.

Address at The Chinese University of Hong Kong, Shenzhen:
Prof. Dr. Arnulf Jentzen
School of Data Science & Shenzhen Research Institute of Big Data
The Chinese University of Hong Kong, Shenzhen
Dao Yuan Building
2001 Longxiang Road
Longgang District, Shenzhen
China

Phone (Secretariat): +86 755 23517035
Office hours: by appointment
E-mail: ajentzen (at) cuhk.edu.cn

Address at the University of Münster:
Prof. Dr. Arnulf Jentzen
Institute for Analysis and Numerics
Applied Mathematics Münster
Faculty of Mathematics and Computer Science
University of Münster
Einsteinstraße 62
48149 Münster
Germany

Phone (Secretariat): +49 251 83-33792
Office hours: by appointment
E-mail: ajentzen (at) uni-muenster.de

Links:
[Homepage at CUHK-Shenzhen] [Homepage at the University of Münster] [Personal homepage] [Mathematics Münster: Research Areas] [University of Münster Webmail]

Scientific profiles:
[Profile on Google Scholar] [Profile on ResearchGate] [Profile on MathSciNet] [Profile on Scopus] [ORCID] [ResearcherID]

Last update of this homepage: March 14th, 2024

Research areas

  • Dynamical systems and gradient flows (geometric properties of gradient flows, domains of attraction, blow-up phenomena for gradient flows, critical points, center-stable manifold theorems, Kurdyka–Łojasiewicz functions)
  • Analysis of partial differential equations (well-posedness and regularity analysis for partial differential equations)
  • Stochastic analysis (stochastic calculus, well-posedness and regularity analysis for stochastic ordinary and partial differential equations)
  • Machine learning (mathematics for deep learning, stochastic gradient descent methods, deep neural networks, empirical risk minimization)
  • Numerical analysis (computational stochastics/stochastic numerics, computational finance)

Short Curriculum Vitae

2021–present: Presidential Chair Professor,
School of Data Science, The Chinese University of Hong Kong, Shenzhen
2019–present: Full Professor,
Faculty of Mathematics and Computer Science, University of Münster
2012–2019: Assistant Professor for Applied Mathematics,
Department of Mathematics, ETH Zurich
2011–2012: Research Fellowship (German Research Foundation),
Program in Applied and Computational Mathematics, Princeton University
2009–2010: Assistant Professor (Akademischer Rat a.Z.),
Faculty of Mathematics, Bielefeld University
2007–2009: PhD studies in Mathematics,
Faculty of Computer Science and Mathematics, Goethe University Frankfurt
2004–2007: Diploma studies in Mathematics,
Faculty of Computer Science and Mathematics, Goethe University Frankfurt

Selected awards

  • Joseph F. Traub Prize for Achievement in Information-Based Complexity, 2022
  • ERC Consolidator Grant, 2022
  • Felix Klein Prize, European Mathematical Society (EMS), 2020

Research group

Current members of the research group

  • Davide Gallon (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Robin Graeber (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Sonja Hannibal (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Shokhrukh Ibragimov (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Prof. Dr. Arnulf Jentzen (Head of the research group)
  • Timo Kröger (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Dr. Benno Kuckuck (Postdoc at the Faculty of Mathematics and Computer Science, University of Münster)
  • Adrian Riekert (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
  • Philippe von Wurstemberger (PhD Student at D-MATH, ETH Zurich, joint supervision with Prof. Dr. Patrick Cheridito)

Former members of the research group

  • Dr. Christian Beck (former PhD student, joint supervision with Prof. Dr. Norbert Hungerbühler)
  • Dr. Sebastian Becker (former PhD student, joint supervision with Prof. Dr. Peter E. Kloeden, 2010–2017, now Postdoc at ETH Zurich)
  • Prof. Dr. Sonja Cox (former Postdoc/Fellow, 2012–2014, now Associate Professor at the University of Amsterdam)
  • Dr. Simon Eberle (former Postdoc, 2021–2022, now Postdoc at the Basque Center for Applied Mathematics, Bilbao)
  • Dr. Fabian Hornung (former Postdoc/Fellow, 2018, now at SAP)
  • Prof. Dr. Raphael Kruse (former Postdoc, 2012–2014, now Associate Professor at the Martin Luther University Halle-Wittenberg)
  • Dr. Ryan Kurniawan (former PhD student, 2014–2018, now VP of Quantitative Research at Crédit Agricole CIB)
  • Prof. Dr. Ariel Neufeld (former Postdoc/Fellow, joint mentoring with Prof. Dr. Patrick Cheridito, 2018, now Assistant Professor at NTU Singapore)
  • Dr. Primož Pušnik (former PhD student, 2014–2020, now Quantitative Developer at Vontobel)
  • Dr. Florian Rossmannek (former PhD student, 2019–2023, now Postdoc at ETH Zurich)
  • Dr. Diyora Salimova (former PhD student, 2015–2019, now Junior Professor at the University of Freiburg)
  • Prof. Dr. Michaela Szoelgyenyi (former Postdoc/Fellow, 2017–2018, now Full Professor at the University of Klagenfurt)
  • Dr. Frederic Weber (former Postdoc, 2022, now at Bosch)
  • Dr. Timo Welti (former PhD student, 2015–2020, now Data Analytics Consultant at D ONE Solutions AG)
  • Dr. Larisa Yaroslavtseva (former Postdoc, 2018, now Interim Professor at the University of Ulm)

Current editorial board affiliations

Preprints

  • Jentzen, A. and Riekert, A., Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks. [arXiv] (2024), 36 pp.
  • Jentzen, A., Kuckuck, B., and von Wurstemberger, P., Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory. [arXiv] (2023), 601 pp.
  • Ackermann, J., Jentzen, A., Kruse, T., Kuckuck, B., and Padgett, J. L., Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the $L^p$-sense. [arXiv] (2023), 52 pp.
  • Beck, C., Jentzen, A., Kleinberg, K., and Kruse, T., Nonlinear Monte Carlo methods with polynomial runtime for Bellman equations of discrete time high-dimensional stochastic optimal control problems. [arXiv] (2023), 33 pp.
  • Dereich, S., Jentzen, A., and Kassing, S., On the existence of minimizers in shallow residual ReLU neural network optimization landscapes. [arXiv] (2023), 26 pp. Revision requested by SIAM J. Numer. Anal.
  • Jentzen, A., Riekert, A., and von Wurstemberger, P., Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations. [arXiv] (2023), 22 pp.
  • Gonon, L., Graeber, R., and Jentzen, A., The necessity of depth for artificial neural networks to approximate certain classes of smooth and bounded functions without the curse of dimensionality. [arXiv] (2023), 101 pp.
  • Ibragimov, S., Jentzen, A., and Riekert, A., Convergence to good non-optimal critical points in the training of neural networks: Gradient descent optimization with one random initialization overcomes all bad non-global local minima with high probability. [arXiv] (2022), 98 pp.
  • Gallon, D., Jentzen, A., and Lindner, F., Blow up phenomena for gradient descent optimization methods in the training of artificial neural networks. [arXiv] (2022), 84 pp.
  • Cheridito, P., Jentzen, A., and Rossmannek, F., Gradient descent provably escapes saddle points in the training of shallow ReLU networks. [arXiv] (2022), 16 pp. Revision requested by J. Optim. Theory Appl.
  • Eberle, S., Jentzen, A., Riekert, A., and Weiss, G. S., Normalized gradient flow optimization in the training of ReLU artificial neural networks. [arXiv] (2022), 26 pp. Revision requested by Appl. Math. Comput.
  • Jentzen, A. and Kröger, T., On bounds for norms of reparameterized ReLU artificial neural network parameters: sums of fractional powers of the Lipschitz norm control the network parameter vector. [arXiv] (2022), 39 pp. Revision requested by Math. Methods Appl. Sci.
  • Ibragimov, S., Jentzen, A., Kröger, T., and Riekert, A., On the existence of infinitely many realization functions of non-global local minima in the training of artificial neural networks with ReLU activation. [arXiv] (2022), 49 pp. Revision requested by Discrete Contin. Dyn. Syst. Ser. B.
  • Beneventano, P., Cheridito, P., Graeber, R., Jentzen, A., and Kuckuck, B., Deep neural network approximation theory for high-dimensional functions. [arXiv] (2021), 82 pp.
  • Hutzenthaler, M., Jentzen, A., Kuckuck, B., and Padgett, J. L., Strong $L^p$-error analysis of nonlinear Monte Carlo approximations for high-dimensional semilinear partial differential equations. [arXiv] (2021), 42 pp. Revision requested by Numer. Algorithms.
  • Beck, C., Hutzenthaler, M., Jentzen, A., and Magnani, E., Full history recursive multilevel Picard approximations for ordinary differential equations with expectations. [arXiv] (2021), 24 pp.
  • Jentzen, A. and Kröger, T., Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases. [arXiv] (2021), 38 pp. Revision requested by Stoch. Dyn.
  • Beneventano, P., Cheridito, P., Jentzen, A., and von Wurstemberger, P., High-dimensional approximation spaces of artificial neural networks and applications to partial differential equations. [arXiv] (2020), 32 pp.
  • Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A., Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems. [arXiv] (2020), 58 pp. Revision requested by Comput. Math. Appl.
  • Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A., Multilevel Picard approximations for high-dimensional semilinear second-order PDEs with Lipschitz nonlinearities. [arXiv] (2020), 37 pp. Revision requested by Numer. Methods Partial Differential Equations.
  • Beck, C., Jentzen, A., and Kruse, T., Nonlinear Monte Carlo methods with polynomial runtime for high-dimensional iterated nested expectations. [arXiv] (2020), 47 pp. Revision requested by Infin. Dimens. Anal. Quantum Probab. Relat. Top.
  • Bercher, A., Gonon, L., Jentzen, A., and Salimova, D., Weak error analysis for stochastic gradient descent optimization algorithms. [arXiv] (2020), 123 pp.
  • Giles, M. B., Jentzen, A., and Welti, T., Generalised multilevel Picard approximations. [arXiv] (2019), 61 pp. Revision requested by IMA J. Numer. Anal.
  • Hutzenthaler, M., Jentzen, A., Lindner, F., and Pušnik, P., Strong convergence rates on the whole probability space for space-time discrete numerical approximation schemes for stochastic Burgers equations. [arXiv] (2019), 60 pp.
  • Beccari, M., Hutzenthaler, M., Jentzen, A., Kurniawan, R., Lindner, F., and Salimova, D., Strong and weak divergence of exponential and linear-implicit Euler approximations for stochastic partial differential equations with superlinearly growing nonlinearities. [arXiv] (2019), 65 pp.
  • Hefter, M., Jentzen, A., and Kurniawan, R., Weak convergence rates for numerical approximations of stochastic partial differential equations with nonlinear diffusion coefficients in UMD Banach spaces. [arXiv] (2016), 51 pp.

Publications and accepted research articles

  • Becker, S., Jentzen, A., Müller, M. S., and von Wurstemberger, P., Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing. Math. Finance 34 (2024), 90–150. [arXiv]
  • Boussange, V., Becker, S., Jentzen, A., Kuckuck, B., and Pellissier, L., Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions. Partial Differ. Equ. Appl. 4 (2023), Paper no. 51, 59 pp. [arXiv]
  • Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A., An efficient Monte Carlo scheme for Zakai equations. Commun. Nonlinear Sci. Numer. Simul. 126 (2023), 107438, 37 pp. [arXiv]
  • Cox, S., Jentzen, A., and Lindner, F., Weak convergence rates for temporal numerical approximations of stochastic wave equations with multiplicative noise. [arXiv] (2019), 51 pp. To appear in Numer. Math.
  • Jentzen, A. and Welti, T., Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation. Appl. Math. Comput. 455 (2023), 127907, 34 pp. [arXiv]
  • Grohs, P., Ibragimov, S., Jentzen, A., and Koppensteiner, S., Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality. J. Complexity 77 (2023), 101746, 53 pp. [arXiv]
  • Beck, C., Hutzenthaler, M., Jentzen, A., and Kuckuck, B., An overview on deep learning-based approximation methods for partial differential equations. Discrete Contin. Dyn. Syst. Ser. B 28 (2023), no. 6, 3697–3746. [arXiv]
  • Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A., Overcoming the curse of dimensionality in the numerical approximation of backward stochastic differential equations. J. Numer. Math. 31 (2023), no. 2, 1–28. [arXiv]
  • Grohs, P., Hornung, F., Jentzen, A., and von Wurstemberger, P., A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations. Mem. Amer. Math. Soc. 284 (2023), no. 1410, 106 pp. [arXiv]
  • Eberle, S., Jentzen, A., Riekert, A., and Weiss, G. S., Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation. Electron. Res. Arch. 31 (2023), no. 5, 2519–2554. [arXiv]
  • Jentzen, A. and Riekert, A., Strong overall error analysis for the training of artificial neural networks via random initializations. Commun. Math. Stat. (2023), 50 pp. Early access version available online. [arXiv]
  • Becker, S., Gess, B., Jentzen, A., and Kloeden, P. E., Strong convergence rates for explicit space-time discrete numerical approximations of stochastic Allen–Cahn equations. Stoch. Partial Differ. Equ. Anal. Comput. 11 (2023), no. 1, 211–268. [arXiv]
  • Jentzen, A. and Riekert, A., Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation. J. Math. Anal. Appl. 517 (2023), no. 2, 126601, 43 pp. [arXiv]
  • Grohs, P., Hornung, F., Jentzen, A., and Zimmermann, P., Space-time error estimates for deep neural network approximations for differential equations. Adv. Comput. Math. 49 (2023), no. 1, Paper no. 4, 78 pp. [arXiv]
  • Hornung, F., Jentzen, A., and Salimova, D., Space-time deep neural network approximations for high-dimensional partial differential equations. [arXiv] (2020), 52 pp. To appear in J. Comput. Math.
  • Jentzen, A. and Riekert, A., A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions. Z. Angew. Math. Phys. 73 (2022), no. 5, Paper no. 188, 30 pp. [arXiv]
  • Jentzen, A. and Riekert, A., A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions. J. Mach. Learn. Res. 23 (2022), 260, 50 pp. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and Kruse, T., Overcoming the curse of dimensionality in the numerical approximation of parabolic partial differential equations with gradient-dependent nonlinearities. Found. Comput. Math. 22 (2022), no. 4, 905-966. [arXiv]
  • Cheridito, P., Jentzen, A., and Rossmannek, F., Landscape Analysis for Shallow Neural Networks: Complete Classification of Critical Points for Affine Target Functions. J. Nonlinear Sci. 32 (2022), no. 5, 64, 45 pp. [arXiv]
  • Jentzen, A., Kuckuck, B., Müller-Gronbach, T., and Yaroslavtseva, L., Counterexamples to local Lipschitz and local Hölder continuity with respect to the initial values for additive noise driven SDEs with smooth drift coefficient functions with at most polynomially growing derivatives. Discrete Contin. Dyn. Syst. Ser. B 27 (2022), no. 7, 3707–3724. [arXiv]
  • Gonon, L., Grohs, P., Jentzen, A., Kofler, D., and Šiška, D., Uniform error estimates for artificial neural network approximations for heat equations. IMA J. Numer. Anal. 42 (2022), no. 3, 1991–2054. [arXiv]
  • Cheridito, P., Jentzen, A., and Rossmannek, F., Efficient approximation of high-dimensional functions with neural networks. IEEE Trans. Neural Netw. Learn. Syst. 33 (2022), no. 7, 3079–3093. [arXiv]
  • Jentzen, A. and Riekert, A., On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks. J. Mach. Learn. 1 (2022), no. 2, 141–246. [arXiv]
  • Cheridito, P., Jentzen, A., Riekert, A., and Rossmannek, F., A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions. J. Complexity 72 (2022), Paper no. 101646, 26 pp. [arXiv]
  • Grohs, P., Jentzen, A., and Salimova, D., Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms. Partial Differ. Equ. Appl. 3 (2022), no. 4, 45, 41 pp. [arXiv]
  • Beck, C., Jentzen, A., and Kuckuck, B., Full error analysis for the training of deep neural networks. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 25 (2022), no. 2, Paper no. 2150020, 77 pp. [arXiv]
  • Elbrächter, D., Grohs, P., Jentzen, A., and Schwab, C., DNN Expression Rate Analysis of High-dimensional PDEs: Application to Option Pricing. Constr. Approx. 55 (2022), no. 1, 3–71. [arXiv]
  • E, W., Han, J., and Jentzen, A., Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning. Nonlinearity 35 (2022), no. 1, 278–310. [arXiv]
  • Hutzenthaler, M., Jentzen, A., Pohl, K., Riekert, A., and Scarpa, L., Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions. [arXiv] (2021), 52 pp. To appear in Electron. Res. Arch.
  • Jacobe de Naurois, L., Jentzen, A., and Welti, T., Weak convergence rates for spatial spectral Galerkin approximations of semilinear stochastic wave equations with multiplicative noise. Appl. Math. Optim. 84 (2021), suppl. 2, S1187–S1217. [arXiv]
  • Beck, C., Hutzenthaler, M., and Jentzen, A., On nonlinear Feynman–Kac formulas for viscosity solutions of semilinear parabolic partial differential equations. Stoch. Dyn. 21 (2021), no. 8, Paper no. 2150048, 68 pp. [arXiv]
  • E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T., Multilevel Picard iterations for solving smooth semilinear parabolic heat equations. Partial Differ. Equ. Appl. 2 (2021), no. 6, Paper no. 80, 31 pp. [arXiv]
  • Jentzen, A., Salimova, D., and Welti, T., A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients. Commun. Math. Sci. 19 (2021), no. 5, 1167–1205. [arXiv]
  • Jentzen, A., Kuckuck, B., Müller-Gronbach, T., and Yaroslavtseva, L., On the strong regularity of degenerate additive noise driven stochastic differential equations with respect to their initial values. J. Math. Anal. Appl. 502 (2021), no. 2, 125240, 23 pp. [arXiv]
  • Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A., Deep splitting method for parabolic PDEs. SIAM J. Sci. Comput. 43 (2021), no. 5, A3135–A3154. [arXiv]
  • Beck, C., Gonon, L., Hutzenthaler, M., and Jentzen, A., On existence and uniqueness properties for solutions of stochastic fixed point equations. Discrete Contin. Dyn. Syst. Ser. B 26 (2021), no. 9, 4927–4962. [arXiv]
  • Jentzen, A., Lindner, F., and Pušnik, P., Spatial Sobolev regularity for stochastic Burgers equations with additive trace class noise. Nonlinear Anal. 210 (2021), Paper no. 112310, 29 pp. [arXiv]
  • Beck, C., Becker, S., Grohs, P., Jaafari, N., and Jentzen, A., Solving the Kolmogorov PDE by means of deep learning. J. Sci. Comput. 88 (2021), no. 3, Paper no. 73, 28 pp. [arXiv]
  • Becker, S., Cheridito, P., Jentzen, A., and Welti, T., Solving high-dimensional optimal stopping problems using deep learning. European J. Appl. Math. 32 (2021), no. 3, 470–514. [arXiv]
  • Cheridito, P., Jentzen, A., and Rossmannek, F., Non-convergence of stochastic gradient descent in the training of deep neural networks. J. Complexity 64 (2021), Paper no. 101540, 10 pp. [arXiv]
  • Hudde, A., Hutzenthaler, M., Jentzen, A., and Mazzonetto, S., On the Itô–Alekseev–Gröbner formula for stochastic differential equations. [arXiv] (2018), 29 pp. To appear in Ann. Inst. Henri Poincaré Probab. Stat.
  • Jentzen, A. and Kurniawan, R., Weak convergence rates for Euler-type approximations of semilinear stochastic evolution equations with nonlinear diffusion coefficients. Found. Comput. Math. 21 (2021), no. 2, 445–536. [arXiv]
  • Andersson, A., Jentzen, A., and Kurniawan, R., Existence, uniqueness, and regularity for stochastic evolution equations with irregular initial values. J. Math. Anal. Appl. 495 (2021), no. 1, Paper no. 124558, 33 pp. [arXiv]
  • Cox, S., Hutzenthaler, M., Jentzen, A., van Neerven, J., and Welti, T., Convergence in Hölder norms with applications to Monte Carlo methods in infinite dimensions. IMA J. Numer. Anal. 41 (2021), no. 1, 493–548. [arXiv]
  • Jentzen, A., Kuckuck, B., Neufeld, A., and von Wurstemberger, P., Strong error analysis for stochastic gradient descent optimization algorithms. IMA J. Numer. Anal. 41 (2021), no. 1, 455–492. [arXiv]
  • Hutzenthaler, M., Jentzen, A., Kruse, T., Nguyen, T. A., and von Wurstemberger, P., Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations. Proc. A. 476 (2020), no. 2244, 20190630, 25 pp. [arXiv]
  • Beck, C., Hornung, F., Hutzenthaler, M., Jentzen, A., and Kruse, T., Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations. J. Numer. Math. 28 (2020), no. 4, 197–222. [arXiv]
  • Jentzen, A., Lindner, F., and Pušnik, P., Exponential moment bounds and strong convergence rates for tamed-truncated numerical approximations of stochastic convolutions. Numer. Algorithms 85 (2020), no. 4, 1447–1473. [arXiv]
  • Becker, S., Gess, B., Jentzen, A., and Kloeden, P. E., Lower and upper bounds for strong approximation errors for numerical approximations of stochastic heat equations. BIT 60 (2020), no. 4, 1057–1073. [arXiv]
  • Becker, S., Braunwarth, R., Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P., Numerical simulations for full history recursive multilevel Picard approximations for systems of high-dimensional partial differential equations. Commun. Comput. Phys. 28 (2020), no. 5, 2109–2138. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P., Overcoming the curse of dimensionality in the approximative pricing of financial derivatives with default risks. Electron. J. Probab. 25 (2020), Paper no. 101, 73 pp. [arXiv]
  • Berner, J., Grohs, P., and Jentzen, A., Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations. SIAM J. Math. Data Sci. 2 (2020), no. 3, 631–657. [arXiv]
  • Becker, S., Cheridito, P., and Jentzen, A., Pricing and hedging American-style options with deep learning. J. Risk Financial Manag. 13 (2020), no. 7, Paper no. 158, 12 pp. [arXiv]
  • Fehrman, B., Gess, B., and Jentzen, A., Convergence rates for the stochastic gradient descent method for non-convex objective functions. J. Mach. Learn. Res. 21 (2020), Paper no. 136, 48 pp. [arXiv]
  • Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A., A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations. Partial Differ. Equ. Appl. 1 (2020), no. 2, Paper no. 10, 34 pp. [arXiv]
  • Jentzen, A. and Pušnik, P., Strong convergence rates for an explicit numerical approximation method for stochastic evolution equations with non-globally Lipschitz continuous nonlinearities. IMA J. Numer. Anal. 40 (2020), no. 2, 1005–1050. [arXiv]
  • Jentzen, A. and von Wurstemberger, P., Lower error bounds for the stochastic gradient descent optimization algorithm: sharp convergence rates for slowly and fast decaying learning rates. J. Complexity 57 (2020), 101438, 16 pp. [arXiv]
  • Berner, J., Elbrächter, D., Grohs, P., and Jentzen, A., Towards a regularity theory for ReLU networks – chain rule and global error estimates.
  • Beck, C., Gonon, L., and Jentzen, A., Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations. [arXiv] (2020), 50 pp. To appear in Partial Differ. Equ. Appl.
  • Hutzenthaler, M. and Jentzen, A., On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with nonglobally monotone coefficients. Ann. Probab. 48 (2020), no. 1, 53–93. [arXiv]
  • Beck, C., E, W., and Jentzen, A., Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. J. Nonlinear Sci. 29 (2019), no. 4, 1563–1619. [arXiv]
  • Jentzen, A., Lindner, F., and Pušnik, P., On the Alekseev–Gröbner formula in Banach spaces. Discrete Contin. Dyn. Syst. Ser. B 24 (2019), no. 8, 4475–4511. [arXiv]
  • E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T., On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations. J. Sci. Comput. 79 (2019), no. 3, 1534–1571. [arXiv]
  • Da Prato, G., Jentzen, A., and Röckner, M., A mild Itô formula for SPDEs. Trans. Amer. Math. Soc. 372 (2019), no. 6, 3755–3807. [arXiv]
  • Andersson, A., Hefter, M., Jentzen, A., and Kurniawan, R., Regularity properties for solutions of infinite dimensional Kolmogorov equations in Hilbert spaces. Potential Anal. 50 (2019), no. 3, 347–379. [arXiv]
  • Becker, S., Cheridito, P., and Jentzen, A., Deep optimal stopping. J. Mach. Learn. Res. 20 (2019), Paper no. 74, 25 pp. [arXiv]
  • Conus, D., Jentzen, A., and Kurniawan, R., Weak convergence rates of spectral Galerkin approximations for SPDEs with nonlinear diffusion coefficients. Ann. Appl. Probab. 29 (2019), no. 2, 653–716. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and Salimova, D., Strong convergence of full-discrete nonlinearity-truncated accelerated exponential Euler-type approximations for stochastic Kuramoto–Sivashinsky equations. Commun. Math. Sci. 16 (2018), no. 6, 1489–1529. [arXiv]
  • Hefter, M. and Jentzen, A., On arbitrarily slow convergence rates for strong numerical approximations of Cox–Ingersoll–Ross processes and squared Bessel processes. Finance Stoch. 23 (2019), no. 1, 139–172. [arXiv]
  • Jentzen, A., Salimova, D., and Welti, T., Strong convergence for explicit space-time discrete numerical approximation methods for stochastic Burgers equations. J. Math. Anal. Appl. 469 (2019), no. 2, 661–704. [arXiv]
  • Becker, S. and Jentzen, A., Strong convergence rates for nonlinearity-truncated Euler-type approximations of stochastic Ginzburg–Landau equations. Stochastic Process. Appl. 129 (2019), no. 1, 28–69. [arXiv]
  • Jentzen, A. and Pušnik, P., Exponential moments for numerical approximations of stochastic partial differential equations. Stoch. Partial Differ. Equ. Anal. Comput. 6 (2018), no. 4, 565–617. [arXiv]
  • Han, J., Jentzen, A., and E, W., Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. USA 115 (2018), no. 34, 8505–8510. [arXiv]
  • Cox, S., Jentzen, A., Kurniawan, R., and Pušnik, P., On the mild Itô formula in Banach spaces. Discrete Contin. Dyn. Syst. Ser. B 23 (2018), no. 6, 2217–2243. [arXiv]
  • Jacobe de Naurois, L., Jentzen, A., and Welti, T., Lower bounds for weak approximation errors for spatial spectral Galerkin approximations of stochastic wave equations.
  • Hutzenthaler, M., Jentzen, A., and Wang, X., Exponential integrability properties of numerical approximation processes for nonlinear stochastic differential equations. Math. Comp. 87 (2018), no. 311, 1353–1413. [arXiv]
  • E, W., Han, J., and Jentzen, A., Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun. Math. Stat. 5 (2017), no. 4, 349–380. [arXiv]
  • Gerencsér, M., Jentzen, A., and Salimova, D., On stochastic differential equations with arbitrarily slow convergence rates for strong approximation in two space dimensions. Proc. A. 473 (2017), no. 2207, 20170104, 16 pp. [arXiv]
  • Andersson, A., Jentzen, A., Kurniawan, R., and Welti, T., On the differentiability of solutions of stochastic evolution equations with respect to their initial values. Nonlinear Anal. 162 (2017), 128–161. [arXiv]
  • Hefter, M., Jentzen, A., and Kurniawan, R., Counterexamples to regularities for the derivative processes associated to stochastic evolution equations. [arXiv] (2017), 26 pp. To appear in Stoch. Partial Differ. Equ. Anal. Comput.
  • E, W., Jentzen, A., and Shen, H., Renormalized powers of Ornstein–Uhlenbeck processes and well-posedness of stochastic Ginzburg–Landau equations. Nonlinear Anal. 142 (2016), 152–193. [arXiv]
  • Jentzen, A., Müller-Gronbach, T., and Yaroslavtseva, L., On stochastic differential equations with arbitrary slow convergence rates for strong approximation. Commun. Math. Sci. 14 (2016), no. 6, 1477–1500. [arXiv]
  • Becker, S., Jentzen, A., and Kloeden, P. E., An exponential Wagner–Platen type scheme for SPDEs. SIAM J. Numer. Anal. 54 (2016), no. 4, 2389–2426. [arXiv]
  • Hairer, M., Hutzenthaler, M., and Jentzen, A., Loss of regularity for Kolmogorov equations. Ann. Probab. 43 (2015), no. 2, 468–527. [arXiv]
  • Hutzenthaler, M. and Jentzen, A., Numerical approximations of stochastic differential equations with non-globally Lipschitz continuous coefficients. Mem. Amer. Math. Soc. 236 (2015), no. 1112, v+99 pp. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and Noll, M., Strong convergence rates and temporal regularity for Cox–Ingersoll–Ross processes and Bessel processes with accessible boundaries. [arXiv] (2014), 32 pp. To appear in Numer. Math.
  • Hutzenthaler, M., Jentzen, A., and Kloeden, P. E., Divergence of the multilevel Monte Carlo Euler method for nonlinear stochastic differential equations. Ann. Appl. Probab. 23 (2013), no. 5, 1913–1966. [arXiv]
  • Cox, S., Hutzenthaler, M., and Jentzen, A., Local Lipschitz continuity in the initial value and strong completeness for nonlinear stochastic differential equations. [arXiv] (2014), 90 pp. To appear in Mem. Amer. Math. Soc.
  • Blömker, D. and Jentzen, A., Galerkin approximations for the stochastic Burgers equation. SIAM J. Numer. Anal. 51 (2013), no. 1, 694–715. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and Kloeden, P. E., Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients. Ann. Appl. Probab. 22 (2012), no. 4, 1611–1641. (Awarded a second prize of the 15th Leslie Fox Prize in Numerical Analysis (Manchester, UK, June 2011).) [arXiv]
  • Jentzen, A. and Röckner, M., A Milstein scheme for SPDEs. Found. Comput. Math. 15 (2015), no. 2, 313–362. [arXiv]
  • Jentzen, A. and Röckner, M., Regularity analysis for stochastic partial differential equations with nonlinear multiplicative trace class noise. J. Differential Equations 252 (2012), no. 1, 114–136. [arXiv]
  • Hutzenthaler, M. and Jentzen, A., Convergence of the stochastic Euler scheme for locally Lipschitz coefficients. Found. Comput. Math. 11 (2011), no. 6, 657–706. [arXiv]
  • Hutzenthaler, M., Jentzen, A., and Kloeden, P. E., Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 467 (2011), no. 2130, 1563–1576. [arXiv]
  • Jentzen, A., Kloeden, P. E., and Winkel, G., Efficient simulation of nonlinear parabolic SPDEs with additive noise. Ann. Appl. Probab. 21 (2011), no. 3, 908–950. [arXiv]
  • Jentzen, A., Higher order pathwise numerical approximations of SPDEs with additive noise. SIAM J. Numer. Anal. 49 (2011), no. 2, 642–667.
  • Jentzen, A. and Kloeden, P. E., Taylor approximations for stochastic partial differential equations. CBMS-NSF Regional Conference Series in Applied Mathematics, 83, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011, xiv+211 pp.
  • Jentzen, A., Taylor expansions of solutions of stochastic partial differential equations. Discrete Contin. Dyn. Syst. Ser. B 14 (2010), no. 2, 515–557. [arXiv]
  • Jentzen, A. and Kloeden, P. E., Taylor expansions of solutions of stochastic partial differential equations with additive noise. Ann. Probab. 38 (2010), no. 2, 532–569. [arXiv]
  • Jentzen, A. and Kloeden, P. E., A unified existence and uniqueness theorem for stochastic evolution equations. Bull. Aust. Math. Soc. 81 (2010), no. 1, 33–46.
  • Jentzen, A., Leber, F., Schneisgen, D., Berger, A., and Siegmund, S., An improved maximum allowable transfer interval for $L^p$-stability of networked control systems. IEEE Trans. Automat. Control 55 (2010), no. 1, 179–184.
  • Jentzen, A. and Kloeden, P. E., The numerical approximation of stochastic partial differential equations. Milan J. Math. 77 (2009), 205–244.
  • Jentzen, A., Pathwise numerical approximation of SPDEs with additive noise under non-global Lipschitz coefficients. Potential Anal. 31 (2009), no. 4, 375–404.
  • Jentzen, A. and Kloeden, P. E., Pathwise Taylor schemes for random ordinary differential equations. BIT 49 (2009), no. 1, 113–140.
  • Jentzen, A., Kloeden, P. E., and Neuenkirch, A., Pathwise approximation of stochastic differential equations on domains: higher order convergence rates without global Lipschitz coefficients. Numer. Math. 112 (2009), no. 1, 41–64.
  • Jentzen, A. and Kloeden, P. E., Overcoming the order barrier in the numerical approximation of stochastic partial differential equations with additive space-time noise. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 465 (2009), no. 2102, 649–667.
  • Jentzen, A. and Neuenkirch, A., A random Euler scheme for Carathéodory differential equations. J. Comput. Appl. Math. 224 (2009), no. 1, 346–359.
  • Jentzen, A., Kloeden, P. E., and Neuenkirch, A., Pathwise convergence of numerical schemes for random and stochastic differential equations.
  • Kloeden, P. E. and Jentzen, A., Pathwise convergent higher order numerical schemes for random ordinary differential equations. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 463 (2007), no. 2087, 2929–2944.

Theses

  • Jentzen, A., Taylor Expansions for Stochastic Partial Differential Equations. PhD thesis (2009), Goethe University Frankfurt, Germany.
  • Jentzen, A., Numerische Verfahren hoher Ordnung für zufällige Differentialgleichungen (High-order numerical methods for random differential equations). Diploma thesis (2007), Goethe University Frankfurt, Germany.