# Models and Approximations

### Research Area C

Alsmeyer, Böhm, Dereich, Engwer, Friedrich, Hille, Holzegel (seit 2020), Huesmann, Jentzen, Kabluchko, Lohkamp, Löwe, Mukherjee, Ohlberger, Rave, Schedensack (bis 2019), F. Schindler, Schlichting (seit 2020), Seis, Stevens, Wilking, Wirth, Wulkenhaar, Zeppieri.

Applications from the natural and life sciences set the challenges in this research area. We aim at the development and analysis of fundamental dynamical and geometric modelling and approximation approaches for the description of deterministic and stochastic systems. For example, we study the interplay between macroscopic structures and the underlying microscopic processes, together with their respective topological and geometric properties. A further focus is the investigation, exploitation and optimization of the underlying geometry in mathematical models. We study structural connections between different mathematical concepts, for instance between solution manifolds of partial differential equations and nonlinear interpolation, or between various metric, variational or multiscale notions of convergence for geometries. In particular, we aim at characterizing the distinctive geometric properties of mathematical models and their approximations.

## Further research projects of members of Research Area C

$\bullet$ Christian Engwer: Personalised diagnosis and treatment for refractory focal paediatric and adult epilepsy (2021-2024)

$\bullet$ Benedikt Wirth: CRC 1450 A05 - Targeting immune cell dynamics by longitudinal whole-body imaging and mathematical modelling (2021-2024)
We develop strategies for tracking and quantifying (immune) cell populations or even single cells in long-term (days) whole-body PET studies in mice and humans. This will be achieved through novel acquisition protocols, measured and simulated phantom data, use of prior information from MRI and microscopy, mathematical modelling, and mathematical analysis of image reconstruction with novel regularization paradigms based on optimal transport. Particular applications include imaging and tracking of macrophages and neutrophils following myocardial ischemia-reperfusion or in arthritis and sepsis.

$\bullet$ Benedikt Wirth: CRC 1450 A06 - Improving intravital microscopy of inflammatory cell response by active motion compensation using controlled adaptive optics (2021-2024)
We will advance multiphoton fluorescence microscopy by developing a novel optical module comprising a high-speed deformable mirror that will actively compensate tissue motion during intravital imaging, for instance due to heartbeat (8 Hz), breathing (3 Hz, in the mm range) or peristaltic movement of the gut in mice. To control this module in real time, we will develop mathematical methods that track and predict tissue deformation. This will allow imaging of inflammatory processes at cellular resolution without mechanical tissue fixation.
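
As a toy illustration of the prediction task (not the project's actual tracking or control algorithm), displacement dominated by known physiological frequencies can be forecast a few milliseconds ahead by harmonic least-squares fitting; all signals, frequencies and amplitudes below are synthetic:

```python
import numpy as np

# Illustrative only: fit a harmonic model at assumed dominant frequencies
# (heartbeat ~8 Hz, breathing ~3 Hz) to an observed displacement trace,
# then extrapolate 10 ms into the future.
FREQS = [8.0, 3.0]  # Hz, assumed known

def design_matrix(t, freqs=FREQS):
    """Columns [1, sin(2*pi*f*t), cos(2*pi*f*t), ...] for linear least squares."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    return np.column_stack(cols)

def fit_and_predict(t_obs, x_obs, t_future):
    coef, *_ = np.linalg.lstsq(design_matrix(t_obs), x_obs, rcond=None)
    return design_matrix(t_future) @ coef

# Synthetic displacement trace (mm): breathing + heartbeat + sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)                       # 1 s at ~1 kHz
x = 0.5 * np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.sin(2 * np.pi * 8.0 * t + 0.7)
x += 0.01 * rng.standard_normal(t.size)

t_next = t[-1] + np.arange(1, 11) / 1000.0            # next 10 ms
x_pred = fit_and_predict(t, x, t_next)
x_true = 0.5 * np.sin(2 * np.pi * 3.0 * t_next) + 0.1 * np.sin(2 * np.pi * 8.0 * t_next + 0.7)
print("max prediction error (mm):", np.abs(x_pred - x_true).max())
```

In practice the dominant frequencies drift, so a real-time controller would re-estimate them continuously; this sketch only shows why known quasi-periodicity makes short-horizon prediction tractable.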

$\bullet$ Caterina Zeppieri: SPP 2256: Variational Methods for Predicting Complex Phenomena in Engineering Structures and Materials - Subproject: Variational modelling of fracture in high-contrast microstructured materials: mathematical analysis and computational mechanics (2020-2023)
After the seminal work of Francfort and Marigo, free-discontinuity functionals of Mumford-Shah type have been established as simplified and yet relevant mathematical models to study fracture in brittle materials. For finite-contrast constituents, the homogenisation of brittle energies is by now well understood and provides a rigorous micro-to-macro upscaling for brittle fracture. Only recently, explicit high-contrast brittle microstructures have been provided which show that, already for simple free-discontinuity energies of Mumford-Shah type, the high-contrast nature of the constituents can induce a complex effective behaviour going beyond that of the single constituents. In particular, macroscopic cohesive-zone models and damage models can be obtained by homogenising purely brittle microscopic energies with high-contrast coefficients. In this framework, the simple-to-complex transition originates from a microscopic bulk-surface energy coupling which is possible due to the degeneracy of the functionals. Motivated by the need to understand the mathematical foundations of mechanical material failure and to develop computationally tractable numerical techniques, the main goal of this project is to characterise all possible materials which can be obtained by homogenising simple high-contrast brittle materials. In mathematical terms, this amounts to determining the variational-limit closure of the set of high-contrast free-discontinuity functionals. This problem has a long history in the setting of elasticity, whereas it is far less understood when fracture is allowed. For the variational analysis it will be crucial to determine novel homogenisation formulas which “quantify” the microscopic bulk-surface energy coupling.
Moreover, the effect of high-contrast constituents on macroscopic anisotropy will be investigated by providing explicit microstructures realising limit models with preferred crack directions. The relevant mathematical tools will come from the Calculus of Variations and Geometric Measure Theory. Along the way, new ad hoc extension and approximation results for SBV functions will be established. The latter will be of mathematical interest in their own right, and appear to be widely applicable in the analysis of scale-dependent free-discontinuity problems. The computational mechanics results will build upon the mathematical theory and will complement it with relevant insights where the analysis becomes impracticable. High-performance fast Fourier transform and adaptive tree-based computational methods will be developed to evaluate the novel cell formulas. The identified damage and cohesive-zone models will be transferred to simulations on the component scale. The findings are expected to significantly enhance the understanding of the sources and mechanisms of material failure and to provide computational tools for identifying anisotropic material models useful for estimating the strength of industrial components.

$\bullet$ Christoph Böhm, Burkhard Wilking: CRC 1442: Geometry: Deformation and Rigidity - Geometric evolution equations (2020-2024)
Hamilton’s Ricci flow is a geometric evolution equation on the space of Riemannian metrics of a smooth manifold. In a first subproject we would like to show a differentiable stability result for noncollapsed converging sequences of Riemannian manifolds with nonnegative sectional curvature, generalising Perelman’s topological stability. In a second subproject, next to classifying homogeneous Ricci solitons on non-compact homogeneous spaces, we would like to prove the dynamical Alekseevskii conjecture. Finally, in a third subproject we would like to find new Ricci flow invariant curvature conditions, a starting point for introducing a Ricci flow with surgery in higher dimensions.

$\bullet$ Raimar Wulkenhaar: CRC 1442: Geometry: Deformation and Rigidity - D03: Integrability (2020-2024)
The project investigates a novel integrable system which arises from a quantum field theory on noncommutative geometry. It is characterised by a recursive system of equations with conjecturally rational solutions. The goal is to deduce their generating function and to relate the rational coefficients in the generating function to intersection numbers of tautological characteristic classes on some moduli space.

$\bullet$ Michael Wiemeler, Burkhard Wilking: CRC 1442: Geometry: Deformation and Rigidity - B01: Curvature and Symmetry (2020-2024)
The question of how far geometric properties of a manifold determine its global topology is a classical problem in global differential geometry. In a first subproject we study the topology of positively curved manifolds with torus symmetry. We think that the methods used in this subproject can also be used to attack the Salamon conjecture for positive quaternionic Kähler manifolds. In a third subproject we study fundamental groups of non-negatively curved manifolds. Two other subprojects are concerned with the classification of manifolds all of whose geodesics are closed and the existence of closed geodesics on Riemannian orbifolds.

$\bullet$ Manuel Friedrich: Variational Modeling of Molecular Geometries (2020-2023)
Wider research context: Driven by their fascinating electronic and mechanical properties, research on low-dimensional materials (such as graphene) is growing exponentially. New findings are emerging at an ever-increasing pace, ranging from fundamental concepts to applications. In contrast to the wealth of experimental and numerical evidence currently available, rigorous mathematical results on local and global crystalline geometries are scant, and the study of the emergence of different scales within molecular structures is still in its infancy. Objectives: We focus on the variational modeling of molecular geometries within the frame of Molecular Mechanics: effective configurations are identified as minimizers of classical configurational potentials. The project aims at obtaining new mathematical understanding of molecular geometries and at investigating the emergence of scale effects across scales. Approach: Ranging from the nano- to the macroscale, we address crystallization for molecular compounds, the description of local molecular features including defects and rigidity, the occurrence of global geometric characteristics such as flatness in 3d and stratification, and the passage from discrete to continuum theories. Grounded on variational methods for atomistic models, the methodology will also integrate techniques from discrete mathematics and stochastics. Innovation: The project targets a number of hot research fronts in Materials Science from a rigorous mathematical standpoint. Compared with simulations, the theoretical approach bears the advantage of being system-size independent, a crucial asset for investigating effects across scales. Researchers involved: The new international research team between Münster and Vienna will be coordinated by Manuel Friedrich and Ulisse Stefanelli and will benefit from a network of local and international collaborators, including experimental and computational groups.

$\bullet$ Mario Ohlberger, Felix Schindler: ML-MORE: Machine learning and model order reduction to predict the efficiency of catalytic filters. Subproject 1: Model Order Reduction (2020-2023)
Reactive transport in porous media coupled with catalytic reactions underlies many industrial processes and devices, such as fuel cells, photovoltaic cells, catalytic filters for exhaust gases, etc. Modelling and simulating these processes at the pore scale can help to optimize the design of catalytic components and the operation of the process, but is currently limited by the fact that such simulations produce large amounts of data, are time-consuming, and depend on a large number of parameters. Moreover, the experimental data collected over the years are not reused in this way. Developing solution approaches that predict the chemical conversion rate with modern data-based machine learning (ML) methods is essential to arrive at fast, reliable predictive models. Several classes of methods are required for this. Besides the experimental data, fully resolved pore-scale simulations are needed. These, however, are too expensive to generate an extensive set of training data, so model order reduction (MOR) is crucial for acceleration. Reduced models for the unsteady reactive transport under consideration are developed in order to simulate large amounts of training data. As ML methodology, multi-layer kernel-based learning methods are developed to calibrate the heterogeneous data and to derive nonlinear predictive models for efficiency prediction. Large data (in terms of dimensionality and number of samples) will have to be handled, which will require data compression and parallelization of the training. The main goal of the project is to integrate all of the above developments into a predictive ML tool that supports industry in the development of new catalytic filters and is transferable to many other comparable processes.
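
As a rough sketch of the kernel-based learning ingredient (the project's multi-layer kernel methods are more elaborate), a plain Gaussian kernel ridge regression can act as a surrogate mapping process parameters to a conversion-rate-like quantity; all data, parameters, and the target function below are synthetic:

```python
import numpy as np

# Hedged sketch: Gaussian kernel ridge regression as a stand-in for
# kernel-based surrogate learning. X mimics three process parameters,
# y a "conversion rate"; both are made up for illustration.
def gauss_kernel(A, B, gamma=5.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-6, gamma=5.0):
    K = gauss_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: gauss_kernel(Xq, X, gamma) @ alpha   # predictor closure

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))                              # training inputs
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + X[:, 2] ** 2    # synthetic response
model = fit_krr(X, y)

Xq = rng.uniform(size=(50, 3))                              # unseen inputs
yq = np.sin(2 * np.pi * Xq[:, 0]) * Xq[:, 1] + Xq[:, 2] ** 2
err = np.abs(model(Xq) - yq).max()
print("max test error:", err)
```

In the project's setting the training data would come from reduced-order pore-scale simulations rather than a closed-form function, which is exactly why MOR-accelerated data generation matters.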

$\bullet$ Christian Seis: Transport Equations, mixing and fluid dynamics (2020-2023)
Advection-diffusion equations are of fundamental importance in many areas of science. They describe systems in which a quantity is simultaneously diffused and advected by a velocity field. In many applications these velocity fields are highly irregular. In this project, several quantitative aspects shall be investigated. One is related to mixing in fluids caused by shear flows. The interplay between the transport by the shear flow and the regularizing diffusion leads, after a certain time, to the emergence of a dominant length scale which persists during the subsequent evolution and determines the mixing rates. A rigorous understanding of these phenomena is desired. In addition, stability estimates for advection-diffusion equations will be derived. These shall give deep insight into how solutions depend on coefficients and data. The new results shall subsequently be applied to estimate the error generated by numerical finite-volume schemes approximating the model equations.
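
The model equations in question have the following standard form (notation here is generic: $\theta$ the advected quantity, $u$ the velocity field, $\kappa$ the diffusivity):

```latex
% passive scalar advected by a velocity field u and diffused with diffusivity kappa
\partial_t \theta + u \cdot \nabla \theta = \kappa \, \Delta \theta
% for a shear flow u = (v(y), 0) this reduces to
\partial_t \theta + v(y)\, \partial_x \theta = \kappa \, \Delta \theta
```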

$\bullet$ Christian Engwer: HyperCut – Stabilized DG schemes for hyperbolic conservation laws on cut cell meshes (2020-2022)
The goal of this project is to develop new tools for solving time-dependent, first-order hyperbolic conservation laws, in particular the compressible Euler equations, on complex-shaped domains. In practical applications, mesh generation is a major issue. When dealing with complicated geometries, the construction of corresponding body-fitted meshes is a very involved and time-consuming process. In this proposal, we will consider a different approach: in the last two decades, so-called cut cell methods have gained a lot of interest, as they reduce the burden of the meshing process. The idea is to simply cut the geometry out of a Cartesian background mesh. The resulting cut cells can have various shapes and are not bounded from below in size. Compared to body-fitted meshes, this approach is fully automatic and much cheaper. However, standard explicit schemes are typically not stable when the time step is chosen with respect to the background mesh and does not reflect the size of small cut cells. This is referred to as the small cell problem. In the setting of standard meshes, both Finite Volume (FV) and Discontinuous Galerkin (DG) methods have been used successfully for solving non-linear hyperbolic conservation laws. For FV schemes, there already exist several approaches for extending these methods to cut cell meshes and overcoming the small cell problem while keeping the explicit time stepping.
For DG schemes, this is not the case. The goal of this proposal is to develop stable DG schemes for solving time-dependent hyperbolic conservation laws, in particular the compressible Euler equations, on cut cell meshes using explicit time stepping. We particularly aim at a method that (1) solves the small cell problem and permits explicit time stepping, (2) preserves mass conservation, (3) is high-order along the cut cell boundary, where many important quantities are evaluated, (4) satisfies theoretical properties such as monotonicity and TVDM stability for model problems, (5) works for non-linear hyperbolic conservation laws, in particular the compressible Euler equations, (6) is robust in the presence of shocks or discontinuities, and (7) is sufficiently simple to be implemented in higher dimensions. We base the spatial discretization on a DG approach to enable high accuracy. We plan to develop new stabilization terms to overcome the small cell problem for this setup. The starting point for this proposal is our recent publication on stabilizing a DG discretization for linear advection using piecewise linear polynomials. We will extend these results in different directions, namely to non-linear problems, including the compressible Euler equations, and to higher order, in particular to piecewise quadratic polynomials. We will implement these methods using the software framework DUNE and publish our code as open source.
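
The small cell problem itself is easy to reproduce in one dimension. The sketch below (a first-order upwind finite-volume scheme, not the project's DG method; all parameters are illustrative) advects a bump across a mesh containing one tiny "cut" cell: a time step sized by the background mesh blows up, while one sized by the smallest cell stays stable.

```python
import numpy as np

# Hedged illustration of the small cell problem: explicit upwind finite
# volumes for u_t + u_x = 0 on a periodic 1D mesh with one tiny cell.
def advect(h, dt, steps):
    x = np.cumsum(h) - h / 2                 # cell centres
    u = np.exp(-100 * (x - 0.5) ** 2)        # smooth initial bump
    for _ in range(steps):
        flux_in = np.roll(u, 1)              # periodic upwind neighbour
        u = u - (dt / h) * (u - flux_in)     # update, CFL number dt/h per cell
    return u

h = np.full(100, 0.01)
h[50] = 1e-4                                 # one tiny cut cell
dt_background = 0.5 * 0.01                   # CFL w.r.t. background mesh
dt_cut = 0.5 * h.min()                       # CFL w.r.t. smallest cell

u_unstable = advect(h, dt_background, 50)
u_stable = advect(h, dt_cut, 50)
print("background-dt max |u|:", np.abs(u_unstable).max())  # blows up
print("cut-cell-dt max |u|:  ", np.abs(u_stable).max())    # bounded by 1
```

The stable run pays for its stability with a ~100x smaller time step, which is exactly the cost the stabilization terms mentioned above are designed to avoid.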

$\bullet$ Raimar Wulkenhaar: GRK 2149 - Strong and Weak Interactions - from Hadrons to Dark Matter (2020-2024)
The Research Training Group (Graduiertenkolleg) 2149 "Strong and Weak Interactions - from Hadrons to Dark Matter" funded by the Deutsche Forschungsgemeinschaft focuses on the close collaboration of theoretical and experimental nuclear, particle and astroparticle physicists, further supported by a mathematician and a computer scientist. This explicit cooperation is of essence for the PhD topics of our Research Training Group. Scientifically, this Research Training Group addresses questions at the forefront of our present knowledge of particle physics. In strong interactions we investigate questions of high complexity, such as the parton distributions in nuclear matter, the transition of the hot quark-gluon plasma into hadrons, or features of meson decays and spectroscopy. In weak interactions we pursue questions which are by definition more speculative and which go beyond the Standard Model of particle physics, particularly with regard to the nature of dark matter. We will confront theoretical predictions with direct searches for cold dark matter particles or for heavy neutrinos as well as with new particle searches at the LHC. The pillars of our qualification programme are individual supervision and mentoring by one senior experimentalist and one senior theorist, topical lectures in physics and related fields (e.g. advanced computation), peer-to-peer training through active participation in two research groups, dedicated training in soft skills, and the promotion of research experience in the international community. We envisage early career steps through a transfer of responsibilities and international visibility with stays at external partner institutions. An important goal of this Research Training Group is to train a new generation of scientists who are not only successful specialists in their fields, but who have a broader training both in theoretical and experimental nuclear, particle and astroparticle physics.

$\bullet$ Caterina Zeppieri: Homogenisation and elliptic approximation of random free-discontinuity functionals (2020-2022)
Composite materials possess an incredibly complex microstructure. To reduce this complexity, reasonable idealizations have to be considered in materials modelling. Random composite materials represent a relevant class of such idealizations. Motivated by primary questions arising in the variational theory of fracture, the goal of this project is to study the large-scale behavior of random elastic composites which can undergo fracture. Mathematically, this amounts to developing a qualitative theory of stochastic homogenization for free-discontinuity functionals. This will be done by combining two approaches: a "direct" approach and an "indirect" approximation approach. The direct approach consists in extending the classical theory to the BV-setting. The approximation approach, instead, consists in proposing suitable elliptic phase-field approximations of random free-discontinuity functionals which can provide regular approximations of the homogenized coefficients.
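
For orientation, the best-known elliptic phase-field approximation of this kind is the Ambrosio-Tortorelli construction (shown here in its standard deterministic form; whether the project uses exactly this variant is not stated above):

```latex
% Mumford-Shah-type free-discontinuity energy (S_u = jump set of u)
E(u) = \int_\Omega |\nabla u|^2 \, dx + \beta \, \mathcal{H}^{n-1}(S_u)
% Ambrosio-Tortorelli elliptic approximation, Gamma-converging to E as eps -> 0,
% with a phase field v -> 1 a.e. marking the crack by v ~ 0
AT_\varepsilon(u, v) = \int_\Omega v^2 |\nabla u|^2 \, dx
  + \beta \int_\Omega \Big( \varepsilon |\nabla v|^2 + \frac{(1 - v)^2}{4\varepsilon} \Big) \, dx
```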

$\bullet$ Benedikt Wirth: Mathematical reconstruction and modelling of the CAR T cell distribution in vivo in a tumour model (2019-2023)

$\bullet$ Arnulf Jentzen, Benno Kuckuck: Mathematical Theory for Deep Learning (2019-2024)
It is the key goal of this project to provide a rigorous mathematical analysis for deep learning algorithms and thereby to establish mathematical theorems which explain the success and the limitations of deep learning algorithms. In particular, this project aims (i) to provide a mathematical theory for high-dimensional approximation capacities for deep neural networks, (ii) to reveal suitable regular sequences of functions which can be approximated by deep neural networks but not by shallow neural networks without the curse of dimensionality, and (iii) to establish dimension-independent convergence rates for stochastic gradient descent optimization algorithms when employed to train deep neural networks with error constants which grow at most polynomially in the dimension.
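
A classical toy example of the depth-separation phenomenon behind goal (ii) is Telgarsky's sawtooth construction (a standard textbook example, not a result of this project): composing a two-neuron ReLU "hat" layer $k$ times yields a function with $2^k$ linear pieces, whereas a shallow network with $m$ ReLU units has at most $m+1$ pieces.

```python
import numpy as np

# Each "hat" layer is expressible with two ReLU units:
#   hat(x) = 2*relu(x) - 4*relu(x - 1/2)  (the tent map on [0, 1]).
def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    return 2 * relu(x) - 4 * relu(x - 0.5)

def deep_sawtooth(x, depth):
    for _ in range(depth):      # depth-many composed hat layers
        x = hat(x)
    return x

x = np.linspace(0.0, 1.0, 10001)
y = deep_sawtooth(x, 4)         # 4 layers -> sawtooth with 16 linear pieces

# count linear pieces by counting sign changes of the sampled slope
slopes = np.sign(np.diff(y))
pieces = 1 + np.count_nonzero(slopes[1:] != slopes[:-1])
print("linear pieces with 4 hat layers:", pieces)
```

A shallow net would need on the order of $2^k$ neurons to produce the same oscillation count, which is the kind of quantitative gap a rigorous approximation theory has to capture.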

$\bullet$ Arnulf Jentzen: Existence, uniqueness, and regularity properties of solutions of partial differential equations (2019-2024)
The goal of this project is to reveal existence, uniqueness, and regularity properties of solutions of partial differential equations (PDEs). In particular, we intend to study existence, uniqueness, and regularity properties of viscosity solutions of degenerate semilinear Kolmogorov PDEs of the parabolic type. We plan to investigate such PDEs by means of probabilistic representations of the Feynman-Kac type. We also intend to study the connections of such PDEs to optimal control problems.
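
In the linear special case the probabilistic representation referred to above takes the classical Feynman-Kac form (the semilinear setting studied in the project adds a nonlinearity in $u$ and leads to nonlinear Feynman-Kac formulas; notation here is the standard one):

```latex
% Kolmogorov PDE (linear model case) with initial condition phi
\partial_t u(t,x) = \tfrac{1}{2}\operatorname{Trace}\!\big(\sigma(x)\sigma(x)^{T}(\operatorname{Hess}_x u)(t,x)\big)
  + \langle \mu(x), \nabla_x u(t,x) \rangle, \qquad u(0,\cdot) = \varphi
% Feynman-Kac representation via the associated diffusion X^x
u(t,x) = \mathbb{E}\big[\varphi(X_t^x)\big], \qquad
dX_t^x = \mu(X_t^x)\, dt + \sigma(X_t^x)\, dW_t, \qquad X_0^x = x
```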

$\bullet$ Arnulf Jentzen: Regularity properties and approximations for stochastic ordinary and partial differential equations with non-globally Lipschitz continuous nonlinearities (2019-2024)
A number of stochastic ordinary and partial differential equations from the literature (such as, for example, the Heston and the 3/2-model from financial engineering, (overdamped) Langevin-type equations from molecular dynamics, stochastic spatially extended FitzHugh-Nagumo systems from neurobiology, stochastic Navier-Stokes equations, Cahn-Hilliard-Cook equations) contain non-globally Lipschitz continuous nonlinearities in their drift or diffusion coefficients. A central aim of this project is to investigate regularity properties with respect to the initial values of such stochastic differential equations in a systematic way. A further goal of this project is to analyze the regularity of solutions of the deterministic Kolmogorov partial differential equations associated to such stochastic differential equations. Another aim of this project is to analyze weak and strong convergence and convergence rates of numerical approximations for such stochastic differential equations.
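
Why non-globally Lipschitz coefficients are delicate numerically can be seen in a few lines: for the model SDE $dX = -X^3\,dt + dW$ the explicit Euler-Maruyama scheme can diverge, while the tamed Euler scheme of Hutzenthaler, Jentzen and Kloeden remains stable. Step size and initial value below are deliberately chosen to make the effect visible.

```python
import numpy as np

def euler(x0, h, n, dw):
    """Explicit Euler-Maruyama for dX = -X^3 dt + dW; stop once it blows up."""
    x = x0
    for k in range(n):
        x = x + h * (-x ** 3) + dw[k]
        if abs(x) > 1e10:                    # numerical blow-up detected
            return x
    return x

def tamed_euler(x0, h, n, dw):
    """Tamed Euler: the drift increment is capped at magnitude 1 per step."""
    x = x0
    for k in range(n):
        drift = h * (-x ** 3)
        x = x + drift / (1.0 + abs(drift)) + dw[k]
    return x

rng = np.random.default_rng(42)
h, n = 0.1, 100
dw = np.sqrt(h) * rng.standard_normal(n)     # Brownian increments
x_euler = euler(10.0, h, n, dw)
x_tamed = tamed_euler(10.0, h, n, dw)
print("Euler-Maruyama:", x_euler)            # escapes to huge magnitude
print("tamed Euler:   ", x_tamed)            # stays moderate, near 0
```

The taming factor leaves the scheme consistent (it is negligible where the drift is small) but prevents the overshoot-and-amplify cycle that destroys the explicit scheme.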

$\bullet$ Arnulf Jentzen: Overcoming the curse of dimensionality: stochastic algorithms for high-dimensional partial differential equations (2019-2024)
Partial differential equations (PDEs) are among the most universal tools used in modeling problems in nature and man-made complex systems. The PDEs appearing in applications are often high dimensional. Such PDEs can typically not be solved explicitly, and developing efficient numerical algorithms for high dimensional PDEs is one of the most challenging tasks in applied mathematics. As is well known, the difficulty lies in the so-called "curse of dimensionality" in the sense that the computational effort of standard approximation algorithms grows exponentially in the dimension of the considered PDE. It is the key objective of this research project to overcome this curse of dimensionality and to construct and analyze new approximation algorithms which solve high dimensional PDEs with a computational effort that grows at most polynomially in both the dimension of the PDE and the reciprocal of the prescribed approximation precision.
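
The simplest instance of a dimension-robust stochastic algorithm is plain Monte Carlo for a linear PDE (the project targets far harder nonlinear cases, e.g. via multilevel Picard methods; this sketch only illustrates the cost scaling). For the heat equation $u_t = \Delta u$ with $u(0,x) = |x|^2$ one has $u(t,x) = \mathbb{E}\,|x + \sqrt{2t}\,Z|^2 = |x|^2 + 2td$, so the Monte Carlo answer can be checked exactly:

```python
import numpy as np

# Monte Carlo for u_t = Laplacian(u), u(0,x) = |x|^2, in d = 100 dimensions:
# u(t,x) = E[ |x + sqrt(2t) Z|^2 ], Z ~ N(0, I_d).
# Cost is linear in d; a tensor grid with m points per axis would cost m^d.
d, t = 100, 0.5
x = np.ones(d)
rng = np.random.default_rng(7)
Z = rng.standard_normal((50_000, d))                     # 50k sample paths
u_mc = np.mean(np.sum((x + np.sqrt(2 * t) * Z) ** 2, axis=1))
u_exact = np.sum(x ** 2) + 2 * t * d                     # = 100 + 100 = 200
print("Monte Carlo:", u_mc, " exact:", u_exact)
```

The Monte Carlo error decays like $N^{-1/2}$ independently of $d$; the hard part, addressed by this project, is retaining such dimension-independence for nonlinear PDEs.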

$\bullet$ Benedikt Wirth: SPP 1962: Non-smooth and Complementarity-Based Distributed Parameter Systems: Simulation and Hierarchical Optimization - SP: Non-smooth and non-convex optimal transport problems (2019-2022)
In recent years a strong interest has developed within mathematics in so-called "branched transport" models, which make it possible to describe transportation networks as they occur in road systems, river basins, communication networks, vasculature, and many other natural and artificial contexts. As in classical optimal transport, an amount of material needs to be transported efficiently from a given initial to a final mass distribution. In branched transport, however, the transportation cost is not proportional but subadditive in the transported mass, modelling an increased transport efficiency if mass is transported in bulk. This automatically favours transportation schemes in which the mass flux concentrates on a complicated, ramified network of one-dimensional lines. The branched transport problem is an intricate nonconvex, nonsmooth variational problem on Radon measures (in fact on normal currents) that describe the mass flux. Various different formulations have been developed and analysed (including work by the PIs); however, they all take the viewpoint of geometric measure theory, working with flat chains, probability measures on the space of Lipschitz curves, or the like. What is completely lacking is an optimization and optimal control perspective (even though some ideas of optimization shimmer through in the existing variational arguments, such as regularity analysis via necessary optimality conditions or the concept of calibrations, which is related to dual optimization variables). This situation is also reflected in the fact that the field of numerics for branched transport is rather underdeveloped and consists of ad hoc graph optimization methods for special cases and two-dimensional phase field approximations.
We will reformulate branched transport in the framework of optimization and optimal control for Radon measures, work out this optimization viewpoint in the variational analysis of branched transport networks, and exploit the results in novel numerical approaches. The new perspective will at the same time help variational analysts, advance the understanding of nonsmooth, nonconvex optimization problems on measures, and provide numerical methods to obtain efficient transport networks.
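
The subadditive cost described above is commonly written as the Gilbert-Steiner energy of a mass flux carried with multiplicity $\theta$ on a one-dimensional network $\Sigma$ (a standard formulation; the notation is illustrative, not taken from the project):

```latex
% Gilbert-Steiner branched transport energy, 0 < alpha < 1
E^\alpha(\Sigma, \theta) = \int_\Sigma \theta(x)^\alpha \, d\mathcal{H}^1(x)
% subadditivity rewards bundling mass on shared edges:
(m_1 + m_2)^\alpha < m_1^\alpha + m_2^\alpha \qquad \text{for all } m_1, m_2 > 0
```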

$\bullet$ Mario Ohlberger, Felix Schindler, Tim Keil: Localized Reduced Basis Methods for PDE-constrained Parameter Optimization (2019-2021)
This project is concerned with model reduction for the parameter optimization of nonlinear elliptic partial differential equations (PDEs). The goal is to develop a new paradigm for PDE-constrained optimization based on adaptive online enrichment. The essential idea is to design a localized version of the reduced basis (RB) method, called the Localized Reduced Basis Method (LRBM).
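
The offline compression step common to all RB-type methods can be sketched with a proper orthogonal decomposition via SVD (a minimal global-basis sketch with synthetic snapshots; the project's LRBM uses localized bases and adaptive online enrichment on top of this idea):

```python
import numpy as np

# Hedged sketch: compress parameterized "solution snapshots" by truncated
# SVD (POD) and reuse the basis for a new parameter value. The parametric
# family u_mu(x) = sin(pi * mu * x) is synthetic, standing in for PDE solves.
rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 500)
params = rng.uniform(0.5, 2.0, size=40)
snapshots = np.stack([np.sin(np.pi * mu * grid) for mu in params], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 1 - 1e-10) + 1)   # modes capturing ~all energy
basis = U[:, :r]                                  # reduced basis, r << 500

# cheap reduced approximation for an unseen parameter value
u_new = np.sin(np.pi * 1.234 * grid)
u_rb = basis @ (basis.T @ u_new)                  # orthogonal projection
err = np.linalg.norm(u_new - u_rb)
print("reduced dimension:", r, " projection error:", err)
```

In a true RB method the reduced coefficients would come from solving a small projected system rather than from the full solution, and error estimators would steer when to enrich the basis online.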

$\bullet$ Benedikt Wirth: Nonlocal Methods for Arbitrary Data Sources (2018-2022)
In NoMADS we focus on data processing and analysis techniques which can feature potentially very complex, nonlocal relationships within the data. In this context, methodologies such as spectral clustering, graph partitioning, and convolutional neural networks have gained increasing attention in computer science and engineering in recent years, mainly from a combinatorial point of view. However, the use of nonlocal methods is often still restricted to academic pet projects. There is a large gap between the academic theories for nonlocal methods and their practical application to real-world problems, and the reason these methods work so well in practice is far from fully understood.

Our aim is to bring together a strong international group of researchers from mathematics (applied and computational analysis, statistics, and optimisation), computer vision, biomedical imaging, and remote sensing, to fill the current gaps between theory and applications of nonlocal methods. We will study discrete and continuous limits of nonlocal models by means of mathematical analysis and optimisation techniques, resulting in investigations on scale-independent properties of such methods, such as imposed smoothness of these models and their stability to noisy input data, as well as the development of resolution-independent, efficient and reliable computational techniques which scale well with the size of the input data. As an overarching applied theme we focus in particular on image data arising in biology and medicine, which offers a rich playground for structured data processing and has direct impact on society, as well as discrete point clouds, which represent an ambitious target for unstructured data processing. Our long-term vision is to discover fundamental mathematical principles for the characterisation of nonlocal operators, the development of new robust and efficient algorithms, and the implementation of those in high quality software products for real-world application.
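
A minimal example of the graph-based methods in question is spectral partitioning (a textbook toy with synthetic data, not one of the project's applications): the sign pattern of the graph Laplacian's second eigenvector, the Fiedler vector, recovers two well-separated point clusters.

```python
import numpy as np

# Toy spectral partitioning: two Gaussian point clouds, a Gaussian
# similarity graph, and the Fiedler vector of the unnormalized Laplacian.
rng = np.random.default_rng(5)
a = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(30, 2))
b = rng.normal(loc=[2.0, 0.0], scale=0.2, size=(30, 2))
pts = np.vstack([a, b])

d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
W = np.exp(-d2 / (2 * 0.7 ** 2))                         # similarity weights
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                           # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)      # sign split = two-way partition
print("cluster sizes:", np.bincount(labels))
```

The continuum limits of exactly such graph constructions, as the number of points grows and the similarity scale shrinks, are one of the objects NoMADS studies analytically.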

$\bullet$ Mario Ohlberger, Stephan Rave, Marie Christin Tacke: Model-based estimation of the lifetime of aged Li batteries for second-life application as stationary electricity storage (2018-2021)