!!! Attention !!! A new wiki with information about PALMA II and HPC in general can be found in the WWU Confluence!


Palma II is the HPC system of the Zentrum für Informationsverarbeitung. To be able to log in, you have to register for the group u0clstr in MeinZIV. You can reach the current login node via ssh (from Windows, for example, with PuTTY).
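
For example (a sketch only: both values below are placeholders and have to be replaced with your WWU username and the hostname of the login node):

ssh <wwu-username>@<login-node>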


When you log in to the cluster for the first time, a directory in /home is created for you. Please use it only to store your programs; do not store your numerical results there. Home storage is limited to 400 GB per user. Create a directory in /scratch/tmp to store the data you generate on the compute nodes. To enforce this, /home will be mounted read-only on the compute nodes in the future. Since /scratch is not intended as an archive, please remove your data there as soon as you no longer need it.
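
For example, a personal directory on the scratch file system could be created like this (using your username as the directory name is only a suggestion, not a requirement):

mkdir /scratch/tmp/$USER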

Software/The module concept

The software on palma-ng can be accessed via modules. These are small scripts that set environment variables (like PATH and LD_LIBRARY_PATH) pointing to the locations where the software is installed (mostly on network drives, so that the software is available on every node of the cluster). The module system used here is LMOD (1). In contrast to the older environment modules used on PALMA I and NWZPHI, there is the new command "module spider". Please find more information on this below.

The most important difference between PALMA I and PALMA II is the hierarchical module naming scheme (2).



Command (short and long form) | Meaning
module av[ailable] | Lists all currently available modules
module spider | Lists all available modules with their descriptions
module spider modulename | Shows the description of a module and which modules have to be loaded to make it available
module li[st] | Lists all modules in the current environment
module show modulename | Lists all changes caused by a module
module add module1 module2 ... | Adds modules to the current environment
module rm module1 module2 ... | Removes modules from the current environment
module purge | Removes all modules from the current environment
Hierarchical module naming scheme means that you do not see all modules at the same time. You first have to load a toolchain or compiler to see the software that has been compiled with it. At the moment there are the following toolchains:

  • foss/2018a GCC with OpenMPI
  • intel/2018a Intel Compiler with Intel MPI

If you want to use the Intel compiler, you can type for example the following:

module add intel/2018a
module av

and you will see the software that has been compiled with this toolchain. Alternatively, you can use the "module spider" command.
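
For example, to find out how a particular package can be loaded (the package name GROMACS is only used for illustration and may not be installed on PALMA II):

module spider GROMACS

This lists the available versions and tells you which toolchain modules have to be loaded first.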


Monitoring

  • Ganglia
  • If you have X forwarding enabled, you can use sview (just type "sview" on the command line).
  • pestat (a command line tool for monitoring the batch system)

The batch system

The batch system on PALMA II is SLURM. If you are used to PBS/Maui and want to switch to SLURM, this document might help you:
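
As a rough orientation, the most common command equivalents are (general PBS/SLURM knowledge, not specific to PALMA II):

qsub script.cmd      ->  sbatch script.cmd
qstat -u <username>  ->  squeue -u <username>
qdel <jobid>         ->  scancel <jobid>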

The partitions

  • normal: 434 nodes with 72 CPU threads each and either 92 or 192 GB RAM. The maximal run time is 7 days. To be able to use the himem nodes (with 192 GB), set the #SBATCH --mem parameter to a value higher than 92 GB (see the example after this list).
  • express: 5 nodes with 72 threads and 92 GB RAM (one of them with 192 GB). A partition for short-running (test) jobs with a maximal walltime of 2 hours.
  • bigsmp: 3 nodes with 144 threads and 1.5 TB RAM
  • largesmp: 2 nodes with 144 threads and 3 TB RAM
  • requeue: Jobs in this partition run on the exclusive nodes listed below. If jobs are submitted to one of those exclusive partitions while your job is running on such a node, your job will be terminated and requeued, so use with care. The maximal walltime is 24 hours. There are also two 1.5 TB machines available in the requeue partition.
  • gpuk20: 4 nodes with 3 NVIDIA K20 GPUs each
  • gpuv100: 1 node with 4 NVIDIA V100 GPUs
  • gputitanxp: 1 node with 8 NVIDIA TitanXP GPUs
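
To land on one of the 192 GB nodes in the normal partition, request more memory than the lowmem nodes provide, for example (the value 120G is only an example; any value above 92G has the same effect):

#SBATCH --partition normal
#SBATCH --mem=120G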

There are some special partitions that can only be used by certain groups (these are also Skylake nodes like those in the normal partition):

  • p0fuchs: 9 lowmem (96 GB) nodes
  • p0kulesz: 6 lowmem and 3 himem (192 GB) nodes
  • p0klasen: 1 lowmem and 1 himem node
  • p0kapp: 1 lowmem node
  • hims: 25 lowmem and 38 himem nodes
  • d0ow: 1 lowmem node
  • q0heuer: 15 lowmem nodes
  • e0mi: 2 himem nodes
  • p0rohlfi: 7 lowmem and 8 himem nodes

If you are used to PBS scripts from the old PALMA, note the following differences:

  • The first line of the submit script has to be #!/bin/bash
  • A queue is called a partition in SLURM terminology. Both terms are used synonymously here.
  • The variable $PBS_O_WORKDIR will not be set. Instead, your job starts in the directory from which it was submitted (see the example below).
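
If an old script still relies on $PBS_O_WORKDIR, the SLURM counterpart is $SLURM_SUBMIT_DIR (the directory from which sbatch was called):

# instead of: cd $PBS_O_WORKDIR
cd $SLURM_SUBMIT_DIR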

Submit a job

Create a file, for example called submit.cmd, with the following content:


#!/bin/bash

# set the number of nodes
#SBATCH --nodes=1

# set the number of CPU cores per node
#SBATCH --ntasks-per-node 72

# How much memory is needed (per node). Possible units: K, G, M, T
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=24:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address

# run the application

You can send your submission to the batch system with the command "sbatch submit.cmd"

It is recommended to reserve complete nodes if you can use all 72 threads.

A detailed description can be found here:

Starting jobs with MPI-parallel codes

mpirun will get all necessary information from SLURM if the job is submitted appropriately. If, for example, you want to start 144 MPI ranks distributed across two nodes, you could do it the following way:


#!/bin/bash

# set the number of nodes
#SBATCH --nodes=2

# reserve the nodes exclusively (all CPU cores)
#SBATCH --exclusive

# How much memory is needed (per node). Possible units: K, G, M, T.
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=2-00:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address

# run the application
mpirun program

Some codes do not profit from hyperthreading, so it is better to start only 36 processes per node:


#!/bin/bash

# set the number of nodes
#SBATCH --nodes=2

# reserve the nodes exclusively
#SBATCH --exclusive

# set the number of MPI processes per node
#SBATCH --ntasks-per-node=36

# How much memory is needed (per node). Possible units: K, G, M, T.
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=2-00:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address

# run the application
mpirun program

For starting hybrid jobs (using MPI and OpenMP parallelization at the same time), you can use the --cpus-per-task switch, for example in an interactive session:

srun -p normal --nodes=2 --ntasks=72 --ntasks-per-node=36 --cpus-per-task=2 --pty bash
OMP_NUM_THREADS=2 mpirun ./program
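
The same hybrid setup as a batch script could look like this (a sketch: 36 MPI ranks per node with 2 OpenMP threads each use all 72 hardware threads of a node; program name and walltime are placeholders):

#!/bin/bash
#SBATCH --partition normal
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=36
#SBATCH --cpus-per-task=2
#SBATCH --time=24:00:00

# one OpenMP thread per CPU reserved for each MPI rank
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

mpirun ./program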

Using the GPU nodes

If you want to use a GPU for your computations:

  • Use one of the gpu... partitions (see above)
  • Start your jobs with #SBATCH --export=none, because the GPU nodes provide a different set of modules than the other nodes.
  • You can use the batch system to reserve only some of the GPUs. Use SLURM's generic resources for this: write, for example, #SBATCH --gres=gpu:1 to get only one GPU. Reserve CPUs accordingly (see the sketch after this list).
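
A sketch of a GPU job script along these lines (partition, CPU count, walltime and program name are placeholders and have to be adapted):

#!/bin/bash
#SBATCH --partition gpuv100
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --gres=gpu:1
#SBATCH --time=12:00:00
#SBATCH --export=none

# the GPU nodes provide their own module tree, e.g. the fosscuda toolchain
ml fosscuda/2018b

./my_gpu_program
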
Using Caffe

Caffe 1.0 is available for Python3 on the GPU partitions in the fosscuda/2018b toolchain. To use it, you have to load fosscuda/2018b and Caffe (ml fosscuda/2018b Caffe) and export the Caffe PYTHONPATH.

On Skylake nodes (gputitanxp and gpuv100 partitions)


On Broadwell nodes (gpuk20 partition)


Show information about the partitions

scontrol show partition

Show information about the nodes

scontrol show node

Running interactive jobs with SLURM

Use for example the following command:

srun --partition express --nodes 1 --ntasks-per-node=8 --pty bash

This starts a job in the express partition on one node with eight cores.

Information on jobs

List all current jobs for a user:

squeue -u <username>

List all running jobs for a user:

squeue -u <username> -t RUNNING

List all pending jobs for a user:

squeue -u <username> -t PENDING

List all current jobs in the normal partition for a user:

squeue -u <username> -p normal

List detailed information for a job (useful for troubleshooting):

scontrol show job -dd <jobid>

Once your job has completed, you can get additional information that was not available during the run. This includes run time, memory used, etc.

To get statistics on completed jobs by jobID:

sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed

To view the same information for all jobs of a user:

sacct -u <username> --format=JobID,JobName,MaxRSS,Elapsed

Show priorities for waiting jobs:

sprio -l

Controlling jobs

To cancel one job:

scancel <jobid>

To cancel all the jobs for a user:

scancel -u <username>

To cancel all the pending jobs for a user:

scancel -t PENDING -u <username>

To cancel one or more jobs by name:

scancel --name myJobName

To pause a particular job:

scontrol hold <jobid>

To release a held job:

scontrol release <jobid>

To requeue (cancel and rerun) a particular job:

scontrol requeue <jobid>


Visualization

For the visualization of bigger data sets, it is impractical to copy them to your local machine. We therefore offer a solution for doing the postprocessing on Palma II. Since the CPUs are quite fast, the rendering is done in software.

  • Prerequisites: You need a local installation of TurboVNC
  • Log in to palma and call
    ml vis/vnc
  • Wait until the session has started and follow the instructions of the script (ssh to the compute node and start your local TurboVNC)
  • Open a terminal in the VNC window and enter "module add intel Mesa" or "module add foss Mesa"
  • Start an application with GUI

-- Holger Angenent - 2018-07-11
