Updates and News

This page lists recent news about our group.


Presentation at the CC Conference

Our latest work on GPU/CPU optimization using so-called scheduling languages was presented by Ari Rasch and Richard Schulze at the prominent ACM SIGPLAN 2023 International Conference on Compiler Construction (CC) in Montreal, Canada:

(De/Re)-Compositions Expressed Systematically via MDH-Based Schedules


  • Ari Rasch (University of Münster, Germany)
  • Richard Schulze (University of Münster, Germany)
  • Denys Shabalin (Google Zurich, Switzerland)
  • Anne C. Elster (Norwegian University of Science and Technology (NTNU), Norway)
  • Sergei Gorlatch (University of Münster, Germany)
  • Mary Hall (University of Utah, USA)

This work was carried out in collaboration with Google Zurich, the Norwegian University of Science and Technology (NTNU), and the University of Utah, USA.


Co-Organization of an International Meeting: Lorentz Center Workshop "Generic Autotuning Technology for GPU Applications"

The Lorentz Center is a workshop center in the Netherlands that hosts scientific meetings for international participants. Unlike typical workshops, the Lorentz Center's events are characterized by an open, interactive atmosphere and high scientific quality.

Our research group is closely involved in organizing an upcoming workshop in March 2022. The workshop aims to discuss technologies for automatic program optimization (also known as auto-tuning) with leading international experts in the field, and to identify and address open research questions.

Our group will contribute substantially to the organization as well as to the workshop's discussions and talks, building on our work on the research projects Auto-Tuning Framework (ATF) and Elevate. The group will be represented at the meeting by Richard Schulze (participant), Johannes Lenfers (participant), and Ari Rasch (organizer).


DFG project: "Performance, Portability, and Productivity for Deep Learning Applications on Multi- and Many-Core Architectures (PPP-DL)"

We are happy to announce that the German Research Foundation (DFG) has recently approved our application and will fund the research project with the above title for a period of three years, with a budget of approximately €600,000 including overhead.

Deep learning (DL) is currently the most popular machine learning method used for solving a wide variety of real-world problems in both academia and industry. The success of DL applications critically depends on the quality of the software that implements DL algorithms on modern high-performance architectures such as multi-core CPUs and Graphics Processing Units (GPUs).

Our project PPP-DL will develop a novel approach to automatic code generation and optimization for DL applications, based on the theory of Multi-Dimensional Homomorphisms (MDH), which has been actively developed in our research group. Using our MDH approach, we will address three fundamental challenges in code generation and optimization for DL: Performance, Portability, and Productivity (PPP).
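To give a rough flavor of the homomorphism idea underlying MDH, here is a toy Python sketch with invented names (not the project's actual formalism or API): a computation whose result over concatenated inputs equals the combination of the results over the parts can be freely de-composed for parallel execution and re-composed afterwards.

```python
from functools import reduce

def dot(xs, ys):
    """Dot product of two equal-length lists (the computation to optimize)."""
    return sum(x * y for x, y in zip(xs, ys))

def dot_decomposed(xs, ys, num_parts):
    """De-compose the iteration dimension into chunks, compute each chunk
    independently (e.g., on different cores), and re-combine with +.
    Because dot is a homomorphism w.r.t. concatenation, the result is
    the same as computing dot on the whole input at once."""
    n = len(xs)
    step = (n + num_parts - 1) // num_parts  # chunk size, rounded up
    partials = [dot(xs[i:i + step], ys[i:i + step]) for i in range(0, n, step)]
    return reduce(lambda a, b: a + b, partials, 0)

# The de/re-composed version agrees with the direct computation:
xs, ys = list(range(8)), list(range(8))
assert dot_decomposed(xs, ys, 3) == dot(xs, ys)
```

The actual MDH theory generalizes this to multiple dimensions with one combine operator per dimension, which is what allows de/re-compositions to be chosen systematically per target architecture.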

The work in this project will be conducted by two full-time research assistants, Ari Rasch and Richard Schulze, supported by a student assistant, under the general coordination of Prof. Sergei Gorlatch.


SIGPLAN Research Highlight 2021

The Special Interest Group on Programming Languages (SIGPLAN) of the Association for Computing Machinery (ACM) organizes leading international conferences exploring programming language concepts and tools, focusing on design, implementation, practice, and theory. In addition, ACM SIGPLAN annually recognizes a few papers of exceptional quality as Research Highlights.

We are pleased to announce that our paper "Achieving high-performance the functional way: a functional pearl on expressing high-performance optimizations as rewrite strategies" (published at ICFP 2020, Rank A) was selected as a SIGPLAN Research Highlight.

The ACM's nomination statement:

"High-performance array code, for applications such as machine learning or image processing, needs both good algorithms and highly tuned code. While the algorithms are quite general, the tuning–involving optimisations such as tiling, vectorisation, and loop unrolling–is very platform specific. This paper cleanly separates those concerns, providing domain-specific languages for specifying the algorithm and the optimisations independently, with an optimisation language that supports abstraction and reuse properly for the first time. As a result we can enjoy elegance, and state-of-the-art performance, both at the same time. Sometimes we can have our cake and eat it too."

  • Dr. Bastian Hagedorn – former PhD student in the PVS group @Universität Münster, now Senior Deep Learning Compiler Engineer @NVIDIA
  • Johannes Lenfers – PhD student in the PVS group @Universität Münster
  • Thomas Kœhler – PhD student @Univ. of Glasgow
  • Xueying Qin – now PhD student @Univ. of Edinburgh
  • Prof. Sergei Gorlatch – leader of the PVS group @Universität Münster
  • Dr. Michel Steuwer – Lecturer @Univ. of Edinburgh, former PhD student in the PVS group @Universität Münster

This work is the result of our ongoing cooperation with the universities of Glasgow and Edinburgh (UK), which will continue in the future.
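The separation the nomination statement describes, with algorithms on one side and optimizations expressed as composable rewrite strategies on the other, can be sketched in a few lines. The following is a toy Python model with invented names, not the actual Elevate API: a strategy takes an expression and returns a rewritten expression, or None on failure, and combinators compose small rewrites into larger optimizations.

```python
def try_(s):
    """Apply strategy s; if it fails, keep the input unchanged."""
    def run(e):
        r = s(e)
        return e if r is None else r
    return run

def seq(s1, s2):
    """Apply s1, then s2 on its result; fail if either fails."""
    def run(e):
        r = s1(e)
        return None if r is None else s2(r)
    return run

def repeat(s):
    """Apply s until it no longer applies (never fails)."""
    def run(e):
        while True:
            r = s(e)
            if r is None:
                return e
            e = r
    return run

# Expressions are tuples: (operator, left, right). Two tiny rewrite rules:
def add_zero(e):
    """x + 0  ->  x (an algebraic simplification)."""
    if isinstance(e, tuple) and e[0] == '+' and e[2] == 0:
        return e[1]
    return None

def mul_by_two_to_shift(e):
    """x * 2  ->  x << 1 (a platform-oriented peephole rewrite)."""
    if isinstance(e, tuple) and e[0] == '*' and e[2] == 2:
        return ('<<', e[1], 1)
    return None

# An "optimization" is just a composition of small, reusable rewrites:
optimise = seq(repeat(add_zero), try_(mul_by_two_to_shift))

assert optimise(('+', ('*', 'x', 2), 0)) == ('<<', 'x', 1)
```

The real system adds traversal combinators (applying rules at arbitrary subterms) and provably semantics-preserving rules, but the core design choice is the same: optimizations become first-class, composable programs rather than compiler-internal heuristics.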