| Private Homepage | https://www.uni-muenster.de/PVS/mitarbeiter/gorlatch.html |
| Research Interests | • Algorithm and software development for modern computer systems • Parallel and distributed systems, middleware, grids and clouds • High-performance computing, multi-core and GPU-based systems • Distributed applications: online games, simulations • Performance models and optimization |
| Current Publications | • Lenfers, Johannes; Spehr, Sven; Dieckmann, Justus; Jansen, Johannes; Lücke, Martin Paul; Gorlatch, Sergei: Schedgehammer: Auto-tuning Compiler Optimizations beyond Numerical Parameters. Proceedings of the 35th ACM SIGPLAN International Conference on Compiler Construction, 2026 • Schulze, Richard; Gorlatch, Sergei; Rasch, Ari: pyATF: Constraint-Based Auto-Tuning in Python. 2025 • Schulze, Richard; Gorlatch, Sergei; Rasch, Ari: Reduction-Aware Directive-Based Programming via Multi-Dimensional Homomorphisms. Proceedings of the SC '25 Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2025 • Gorlatch, S.; Garanina, N.; Staroletov, S.: Using the SPIN Model Checker for Auto-Tuning High-Performance Programs. Journal of Mathematical Sciences Vol. - (-), 2025 • Tomak, Juri; Gorlatch, Sergei: A Toolset for Predicting Performance of Legacy Real-Time Software Based on the RAST Approach. ACM Transactions on Modeling and Computer Simulation Vol. 35 (3), 2025 |
| Current Projects | • Structured Code Generation for Deep-Learning Computations via Multi-Dimensional Homomorphisms. Deep Learning (DL) has revolutionized the world. In 2012, the AlexNet deep neural network significantly outperformed existing methods for image classification. A fundamental requirement for the success of DL is the software that implements it: to make DL practicable, it is essential to achieve high performance for DL computations, and the software needs to be (performance) portable across different kinds of hardware architectures, ranging from mobile devices to high-performance computing clusters. Moreover, to make DL software attractive for the common DL domain scientist (who is generally not familiar with hardware details and low-level code optimizations), it must also be productively usable, by offering easy-to-use programming abstractions for implementing DL programs that are agnostic of hardware and optimization details. Furthermore, DL software must be easily adaptable to new demands, given the continuously increasing complexity of DL algorithms and the frequently changing landscape of computer systems. During our predecessor DFG project PPP-DL, we designed and developed the algebraic formalism of Multi-Dimensional Homomorphisms (MDH), a formal representation for DL computations such as matrix multiplication and convolution. We demonstrated that MDH achieves significantly higher performance, portability, and/or productivity for DL computations on GPUs and CPUs compared to major DL approaches, including hand-optimized vendor libraries provided by NVIDIA and Intel, which are viewed as gold standards in the research and application communities. This project aims to design and implement a novel code generation approach for DL that relies on the MDH formalism developed in our predecessor project PPP-DL.
The ultimate goal of our envisaged code generation approach is to simultaneously achieve high performance, portability, and productivity for DL: 1. Performance: by optimizing entire DL programs (which consist of multiple DL computations, such as matrix multiplication and convolution); for this, we aim to use our MDH formalism as the foundation, because it allows expressing DL computations uniformly in the same formalism and consequently reasoning about optimizations across computations. 2. Portability: by elaborating a generalized approach to supporting DL-specific hardware (such as NVIDIA's popular Tensor Cores) in our intended code generation approach; for this, we aim to exploit the algebraic high-level semantic information captured in our MDH representation of DL computations to check whether DL-specific hardware can be legally used. 3. Productivity: by expressing DL computations conveniently in the optimization-agnostic high-level language of MDH; in this project, we plan to significantly widen the expressivity of MDH, e.g., to express entire DL programs instead of only individual DL computations. • Automatic Scalability in Distributed Systems with Real-Time Requirements (Automatische Skalierbarkeit in verteilten Systemen mit Echtzeit-Anforderungen) |
| Email | gorlatch at uni-muenster dot de |
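The MDH idea described above can be illustrated with a toy sketch: a computation is given by a scalar function applied at every point of an n-dimensional index space, together with one combine operator per dimension that folds the partial results along that dimension. This is NOT the project's actual formalism or code generator; the function name `md_hom` and the encoding of combine operators as a Python list are assumptions made purely for this example.

```python
def md_hom(scalar_f, combines, dims):
    """Illustrative multi-dimensional-homomorphism evaluator (toy sketch,
    not the MDH project's implementation): evaluate scalar_f over the
    index space `dims` and fold results dimension by dimension, where
    combines[d] folds the list of partial results of dimension d."""
    def go(prefix, remaining):
        if not remaining:
            return scalar_f(*prefix)          # leaf: one index point
        size, *rest = remaining
        parts = [go(prefix + (i,), rest) for i in range(size)]
        return combines[len(prefix)](parts)   # fold along this dimension
    return go((), list(dims))

# Matrix multiplication expressed in this style: index space (i, j, l);
# the i- and j-dimensions concatenate partial results, the l-dimension
# reduces with "+".
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = md_hom(lambda i, j, l: A[i][l] * B[l][j],
           combines=[list, list, sum],        # concat, concat, "+"
           dims=(2, 2, 2))
print(C)  # [[19, 22], [43, 50]]
```

Changing only the combine operators (e.g. `max` instead of `sum`) yields a different computation over the same index space, which is the kind of uniform reasoning across computations the project description refers to.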
| Phone | +49 251 83-32741 |
| Fax | +49 251 83-32742 |
| Room | 711 |
| Secretary | Julia Kaiser-Mariani, Phone +49 251 83-32740, Fax +49 251 83-32742, Room 704 |
| Address | Prof. Dr. Sergei Gorlatch, Institut für Informatik, Fachbereich Mathematik und Informatik der Universität Münster, Einsteinstrasse 62, 48149 Münster, Germany |