AI@WWU — A Practical Introduction to AI Theory and Techniques for Interdisciplinary Research


  • The course is already fully booked. However, please register on the waiting list so that we can estimate the demand and consider possible alternatives in case of high interest.
  • The start of this course has been postponed in accordance with the university's guidelines to slow the spread of COVID-19 (see official information here...).
  • The workshop will start on 29.06. at 4 pm via Zoom. Log-in details will be sent to the registered users directly.


Short Summary:

AI and in particular machine learning (ML) tools are becoming more and more accessible thanks to easy-to-use programming environments (esp. Python) and libraries (esp. TensorFlow and PyTorch). Applying these powerful tools to a variety of research projects requires some basic understanding of data preparation, visualisation and the successful use of ML algorithms. In this course we will (1) teach AI and machine learning basics (70% of the course) and (2) apply these techniques to custom problems and custom data provided by the participants (30% of the course). In particular, we will introduce several state-of-the-art deep learning algorithms such as CNNs, LSTMs and autoencoders. The entire course is interactive: the participants will implement and use all presented techniques in pre-configured test environments.

  • Goal

    •  Teach AI and machine learning basics (70%)
    •  Work on own data and project (30%)

  • Targeted audience:

    University staff from all disciplines and career levels (doctoral students, PostDocs, group leaders, ...)

    • Participants should be interested in learning the basic theory and practical usage of AI (in particular deep learning) algorithms
    • All disciplines are welcome (i.e. sciences and humanities with an interest in quantitative methods)
    • Own datasets / problem sets are welcome; whether these can be used will be evaluated at the
      beginning of the course
  • Requirements

    •  Programming skills are not required; some basic experience is, however, desirable (preferably in Python; an introduction to the programming language, including the basics, will be given in the course)
    • Basic knowledge in statistics and linear algebra

  • Syllabus

    1. Python basics
      • Programming language and environment (Jupyter Notebooks)
      • Importing, exporting and visualising data
      • Fundamental libraries (NumPy, scikit-learn, Matplotlib)
    2. Machine Learning basics
      • Brief conceptual overview of different techniques (supervised, unsupervised etc.)
      • Introduction to classical machine learning techniques (SVM, Decision Trees, Clustering, …)
      • Perceptron and neural networks (loss, activation function, etc.)
      • Brief intro to Gradient Descent (Backpropagation)
      • Practical Example 1: Neural networks for regression (1D and 2D functions, under- and overfitting, etc.)
      • Practical Example 2: Neural networks for character recognition (MNIST)
    3. State-of-the-art (deep) Machine Learning concepts
      • Convolutional neural networks (CNNs)
        • Practical Example 3: CNNs for character recognition (MNIST)
        • Practical Example 4: CNNs for object classification (CIFAR-10)
      • Recurrent Neural Networks (especially LSTMs)
        • Practical Example 5: LSTMs for sequence learning
      • Autoencoders
        • Practical Example 6: Autoencoders for dimensionality reduction
    4. Advanced training concepts
      • Regularisation techniques (dropout, batch normalisation, data augmentation)
      • Transfer learning
    5. AI @ own dataset
      • Final Project: ML on own dataset
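
    The gradient descent idea from the Machine Learning basics block (loss, forward pass, parameter update) can be sketched in plain NumPy. This is a minimal illustration, not actual course material; the toy target function y = 2x + 1, the noise level and all hyperparameters are made up for this example:

    ```python
    import numpy as np

    # Toy 1D regression: fit y = 2x + 1 with a single linear neuron,
    # trained by batch gradient descent on the mean squared error (MSE) loss.
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 50)
    y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(x.size)  # noisy targets

    w, b = 0.0, 0.0   # trainable parameters (weight and bias)
    lr = 0.1          # learning rate
    for _ in range(500):                         # gradient descent steps
        y_hat = w * x + b                        # forward pass
        grad_w = 2.0 * np.mean((y_hat - y) * x)  # dL/dw of the MSE loss
        grad_b = 2.0 * np.mean(y_hat - y)        # dL/db of the MSE loss
        w -= lr * grad_w                         # update against the gradient
        b -= lr * grad_b

    print(w, b)  # should end up close to the true values 2 and 1
    ```

    The same loop generalises to multi-layer networks, where the gradients are computed via backpropagation rather than by hand; the course uses TensorFlow 2 to automate exactly this step.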

  • Organisation

    • 10 sessions à 3 hours (in English)
    • Organisational changes: Presentations will be given via Zoom and the programming exercises will be made available online via JupyterHub. In addition, virtual conference rooms will be used to enable discussions and to provide assistance.
    • Preliminary timetable (each session starts at 4pm)
      1. 29.06.: Introduction and Python basics
      2. 06.07.: Machine Learning Theory on Neural Networks, Gradient Descent, ...
      3. 13.07.: Other Machine Learning Algorithms (SVM, Decision Trees, Random Forests, ...)
      4. 20.07.: Recap and an End-to-End-Project
      5. 27.07.: CNNs, Dropout and Data-Augmentation
      6. 03.08.: RNNs
      7. 10.08.: Autoencoders
      8. 17.08.: Advanced Concepts (Adversarial Attacks, Transfer Learning, ...)
      9. 24.08.: Final Remarks and own data
      10. 31.08.: Own data
    • Each session consists of 40-60 minutes of presentation and ~2 hours of practical coding
    • Programming will be done in Python and TensorFlow 2
    • The programming environment will be provided via pre-installed Jupyter Notebooks
      • Jupyter Notebooks allow interactive worksheets (only some code passages need to be filled in, the result is then plotted interactively)
    • Limited to a maximum of 50 participants
    • To register follow this link (until March 31)...
    • Where: Aula am Steinhaus, Schlossplatz 34, 48143 Münster (due to COVID-19 guidelines, this course will be held online via Zoom instead)
    • Contact: Benjamin Risse, Lars Haalck
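
    A fill-in worksheet cell as described above might look like the following. This is a hypothetical example, not taken from the actual course material; the exercise text and file name are invented:

    ```python
    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend so the cell also runs as a script
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 100)

    # --- fill in: compute a sine wave from x ---
    y = np.sin(x)  # (sample solution)
    # -------------------------------------------

    # The notebook plots the participant's result immediately.
    plt.plot(x, y)
    plt.title("Exercise: plotting a sine wave")
    plt.savefig("exercise_sine.png")  # in the notebook, the plot appears inline
    ```

    In the actual notebooks, only the marked passage is left blank; all surrounding setup and plotting code is pre-configured.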

  • Literature

    • Practical Literature: “Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow” (2nd edition) by Aurélien Géron (primary resource for this course!)
    • Theoretical Literature (not required for this course):
      • “Deep Learning” (1st edition) by Ian Goodfellow, Yoshua Bengio and Aaron Courville
      • “Pattern Recognition and Machine Learning” (1st edition (corrected)) by Christopher Bishop
      • “Artificial Intelligence: A Modern Approach” (3rd edition) by Stuart Russell and Peter Norvig