Updates and News

Below you will find recent news from our group.

Gold Medal and Presentations at CGO 2026

At this year’s highly prestigious A-rank IEEE/ACM International Symposium on Code Generation and Optimization (CGO 2026), Richard Schulze, a doctoral researcher at the Institute of Computer Science (Department 10), won the gold medal in the renowned CGO PhD and Student Research Competition. He prevailed in two competitive rounds with his work on reduction optimizations in domain-specific programming interfaces for modern highly parallel computer architectures (supervised by Prof. Sergei Gorlatch and Dr. Ari Rasch). The work was carried out within the framework of the ongoing DFG project MDH-DL.

In addition, at the same prestigious CGO conference, Dr. Ari Rasch presented our latest results on the generation and optimization of high-performance program code for AI and heterogeneous computer architectures in two talks, also conducted within the framework of the DFG project MDH-DL. The talks were part of a high-caliber program featuring contributions from leading international experts in AI compiler research as well as researchers working on current topics in heterogeneous systems.

Photos: Richard Schulze and Dr. Ari Rasch (© PVS)

Summa Cum Laude Dissertation and REACH Thesis Award

The dissertation of Dr. Ari Rasch (supervised by Prof. Sergei Gorlatch) was awarded the distinction summa cum laude in early 2025. His research focuses on the generation and optimization of program code for AI applications. A key objective of this work is to achieve high performance across diverse computer architectures—such as prominent NVIDIA GPUs—while placing minimal demands on the user of the system. The dissertation is distinguished in particular by its strong mathematical foundations.

In addition, Dr. Rasch’s dissertation received the Thesis Award of the REACH EUREGIO Start-up Center in late 2025.

Photos: Dr. Ari Rasch (© PVS)

New DFG Project: “Structured Code Generation for Deep-Learning Computations via Multi-Dimensional Homomorphisms (MDH.DL)”

The German Research Foundation (DFG) has approved further funding for our work on code generation and optimization for deep learning computations. The project will be funded for a total of 36 months with approximately €500,000 (including program allowance).

The new initiative, titled as above, builds upon our previous project, PPP-DL. Its goal is to develop a structured approach to automated code generation for deep learning computations on parallel processor architectures. The foundation for this is the algebraic theory of Multi-Dimensional Homomorphisms (MDH), developed during PPP-DL. A key aim is to implement the theoretical MDH approach in a practical way, particularly through integration with established programming paradigms such as Python. This gives rise to specific research questions regarding the translation of abstract concepts into real-world software contexts.

The project work will be carried out by research associates Ari Rasch and Richard Schulze, under the direction of Prof. Sergei Gorlatch.


Presentations in March at ACM SIGPLAN CC & C4ML & NVIDIA GTC

In March, we presented our current work on the generation and optimization of program code for AI applications at three renowned international events.

At the ACM SIGPLAN International Conference on Compiler Construction (CC), Richard Schulze presented our pyATF framework, which implements the fundamental concepts of our Auto-Tuning Framework (ATF) in the Python programming language for fully automatically optimizing complex parallel implementations. The new pyATF interface not only ensures high user-friendliness, but also allows our ATF concepts to be easily integrated into Python-based AI frameworks such as TensorFlow and PyTorch.
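To give a rough idea of the workflow that pyATF automates, here is a minimal, self-contained Python sketch of the general auto-tuning idea: define tuning parameters, measure a cost function for candidate configurations, and keep the best one. The parameter names and the toy kernel below are purely illustrative assumptions, and the code is not pyATF's actual interface; see the CC paper for the real API.

```python
import itertools
import timeit

import numpy as np

# Hypothetical tuning parameters for a toy blocked-summation kernel.
tuning_parameters = {
    "num_chunks": [1, 2, 4, 8],
    "block_size": [64, 256, 1024, 4096],
}

data = np.random.rand(1_000_000)

def cost(num_chunks: int, block_size: int) -> float:
    """Runtime (in seconds) of the toy kernel for one configuration."""
    def run():
        total = 0.0
        for chunk in np.array_split(data, num_chunks):
            for start in range(0, len(chunk), block_size):
                total += chunk[start:start + block_size].sum()
        return total
    return timeit.timeit(run, number=3)

# Exhaustive search over the (tiny) configuration space; real auto-tuners such as
# ATF/pyATF use far more sophisticated search techniques and constraint handling.
best_config, best_time = None, float("inf")
for config in itertools.product(*tuning_parameters.values()):
    runtime = cost(*config)
    if runtime < best_time:
        best_config, best_time = config, runtime

print(f"best configuration: {dict(zip(tuning_parameters, best_config))} "
      f"({best_time:.4f} s)")
```

In realistic settings the configuration space is far too large for exhaustive enumeration, which is precisely where an auto-tuning framework's search techniques become essential.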

At the Compilers for Machine Learning (C4ML) workshop, where leading approaches to code generation for AI applications are presented, Ari Rasch showcased our current work on code generation for AI based on our approach of Multi-Dimensional Homomorphisms (MDH). Using MDH, highly optimized program code for various AI hardware architectures (e.g., GPUs) can be generated automatically from algebraic abstractions of the AI applications.
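To give a flavour of the algebraic view behind MDH, the following minimal NumPy sketch expresses matrix multiplication in the MDH spirit: a scalar function (multiplication) is applied over the three-dimensional index space (i, j, k), and the results are combined per dimension, by concatenation in i and j and by summation in k. This is an illustration under these assumptions, not output of the MDH code generator.

```python
import numpy as np

def matmul_mdh_style(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Matrix multiplication in the MDH spirit: a scalar function over (i, j, k),
    combined per dimension (concatenation in i and j, summation in k)."""
    I, K = A.shape
    K2, J = B.shape
    assert K == K2, "inner dimensions must match"

    # 1) point-wise scalar computation over the full (i, j, k) index space:
    #    pointwise[i, j, k] = A[i, k] * B[k, j]
    pointwise = A[:, None, :] * B.T[None, :, :]   # shape (I, J, K)

    # 2) dimension-wise combination: keep i and j, reduce (sum) over k
    return pointwise.sum(axis=2)                  # shape (I, J)

# quick sanity check against NumPy's built-in matrix multiplication
A, B = np.random.rand(4, 5), np.random.rand(5, 3)
assert np.allclose(matmul_mdh_style(A, B), A @ B)
```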

NVIDIA GTC (the GPU Technology Conference) is a leading AI conference attended by developers, engineers, researchers, inventors, and IT experts. Our current work on generating and optimizing GPU code for AI applications was presented there by Richard Schulze and Ari Rasch to an international audience of AI experts, and future collaborations were discussed and agreed upon.

Photos: Richard Schulze and Dr. Ari Rasch (© PVS)

Presentations at PLDI'24

We are pleased to announce that we presented two papers this year at the top international ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI, Rank A*).

Bastian Köpcke presented our GPU language Descend, which is inspired by Rust and enables safe programming of GPUs.

Ari Rasch presented a formally based approach to expressing and optimizing data-parallel computations, built on our theory of Multi-Dimensional Homomorphisms. The work is particularly relevant to the field of artificial intelligence and has been published in the top journal ACM Transactions on Programming Languages and Systems (TOPLAS).

Photos: Bastian Köpcke and Ari Rasch (© PVS)

Presentation at the CC Conference

Our latest work on GPU/CPU optimizations with the help of so-called scheduling languages was presented by Ari Rasch and Richard Schulze at the prominent ACM SIGPLAN 2023 International Conference on Compiler Construction in Montreal, Canada (a simplified sketch of the idea appears below the author list):

(De/Re)-Compositions Expressed Systematically via MDH-Based Schedules

Authors:
  • Ari Rasch (University of Münster, Germany)
  • Richard Schulze (University of Münster, Germany)
  • Denys Shabalin (Google Zurich, Switzerland)
  • Anne C. Elster (Norwegian University of Science and Technology (NTNU), Norway)
  • Sergei Gorlatch (University of Münster, Germany)
  • Mary Hall (University of Utah, USA)

The paper was written in collaboration with Google Zurich, the Norwegian University of Science and Technology (NTNU), and the University of Utah, USA.
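As a deliberately simplified illustration of what a schedule expresses in this setting, the sketch below keeps the algorithm (a dot product) fixed while a separate schedule decides how its iteration space is de-composed into tiles and how the partial results are re-composed. The dictionary-style schedule and its entries are hypothetical and invented for illustration; they are not the MDH scheduling language presented in the paper.

```python
import numpy as np

def dot(xs: np.ndarray, ys: np.ndarray) -> float:
    """The algorithm: a plain, un-tuned dot product."""
    return float((xs * ys).sum())

# A hypothetical "schedule": the tile size controls how the iteration space is
# de-composed, and the combine operator controls how partial results are re-composed.
schedule = {"tile_size": 256, "combine": sum}

def dot_scheduled(xs: np.ndarray, ys: np.ndarray, schedule: dict) -> float:
    ts = schedule["tile_size"]
    partial_results = [
        dot(xs[start:start + ts], ys[start:start + ts])   # per-tile computation
        for start in range(0, len(xs), ts)                # de-composition into tiles
    ]
    return schedule["combine"](partial_results)           # re-composition of results

xs, ys = np.random.rand(10_000), np.random.rand(10_000)
assert np.isclose(dot_scheduled(xs, ys, schedule), dot(xs, ys))
```

The point of a scheduling language is that such de/re-composition decisions are expressed separately from the algorithm, so they can be varied per target architecture without rewriting the computation itself.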

Photo: Ari Rasch (© PVS)

Co-Organization of an International Meeting: Lorentz Center Workshop "Generic Autotuning Technology for GPU Applications"

The Lorentz Center is a workshop venue in the Netherlands that hosts scientific meetings for international participants. Unlike typical workshops, events at the Lorentz Center are characterized by an open and interactive atmosphere as well as by high scientific quality.

Our research group is significantly involved in the organization of an upcoming workshop in March 2022. The goal of the workshop is to discuss technologies from the field of automatic program optimization (also known as auto-tuning) with leading international experts and to identify and address open research questions.

Our group will contribute substantially both to the organization and to the discussions and talks of the workshop, building on our work in the research projects Auto-Tuning Framework (ATF) and Elevate. The group will be represented at the meeting by Richard Schulze (participant), Johannes Lenfers (participant), and Ari Rasch (organizer).


DFG project: "Performance, Portability, and Productivity for Deep Learning Applications on Multi- and Many-Core Architectures (PPP-DL)"

We are happy to announce that the German Research Foundation (DFG) has recently approved our application and will fund the research project with the above title for a period of three years, with a budget of approximately €600,000 including overhead.

Deep learning (DL) is currently the most popular machine learning method used for solving a wide variety of real-world problems in both academia and industry. The success of DL applications critically depends on the quality of the software that implements DL algorithms on modern high-performance architectures such as multi-core CPUs and Graphics Processing Units (GPUs).

Our project PPP-DL will develop a novel approach to automatic code generation and optimization for DL applications, based on the theory of Multi-Dimensional Homomorphisms (MDH) which has been actively developed in our research group. Using our MDH approach, we will address three fundamental challenges in code generation and optimization for DL: Performance, Portability, and Productivity (PPP).

The work in this project will be conducted by two full-time research assistants, Ari Rasch and Richard Schulze, supported by a student assistant, under the general coordination of Prof. Sergei Gorlatch.


SIGPLAN Research Highlight 2021

The Special Interest Group on Programming Languages (SIGPLAN) of the Association for Computing Machinery (ACM) organizes leading international conferences exploring programming language concepts and tools, focusing on design, implementation, practice, and theory. In addition, ACM SIGPLAN annually recognizes a few papers of exceptional quality as Research Highlights.

We are pleased to announce that our paper "Achieving high-performance the functional way: a functional pearl on expressing high-performance optimizations as rewrite strategies" (published at ICFP 2020, Rank A) was selected as a SIGPLAN Research Highlight.

Nomination Statement of the ACM:

"High-performance array code, for applications such as machine learning or image processing, needs both good algorithms and highly tuned code. While the algorithms are quite general, the tuning–involving optimisations such as tiling, vectorisation, and loop unrolling–is very platform specific. This paper cleanly separates those concerns, providing domain-specific languages for specifying the algorithm and the optimisations independently, with an optimisation language that supports abstraction and reuse properly for the first time. As a result we can enjoy elegance, and state-of-the-art performance, both at the same time. Sometimes we can have our cake and eat it too."

Authors:
  • Dr. Bastian Hagedorn – former PhD student in the group PVS @Universität Münster, now Senior Deep Learning Compiler Engineer @NVIDIA
  • Johannes Lenfers – PhD student in the group PVS @Universität Münster
  • Thomas Kœhler – PhD student @Univ. of Glasgow
  • Xueying Qin – now PhD student @Univ. of Edinburgh
  • Prof. Sergei Gorlatch – Leader of the group PVS @Universität Münster
  • Dr. Michel Steuwer – Lecturer @Univ. of Edinburgh, former PhD student in the group PVS @Universität Münster

This work is the result of our ongoing cooperation with the universities of Glasgow and Edinburgh (UK), which will continue in the future.

Photo: Ari Rasch (© PVS)