NWZPHI, the cluster of the IVV 4
NWZPHI is a cluster equipped with 98 Xeon Phi cards. These are PCIe-based accelerators similar to GPUs, but they can be programmed with regular programming languages.
Update: New CentOS 7 Installation
Hardware and software overview
- 2 development and debugging servers (24 CPU cores at 2.4 GHz, 64 GB RAM, 1 Xeon Phi 5110P)
- 12 accelerator nodes (24 CPU cores at 2.4 GHz, 128 GB RAM, 8 Xeon Phi 5110P)
- 1 SMP node (32 CPU cores, 1.5 TB RAM)
- 88 TB storage (with FhGFS) for home and scratch
- FDR InfiniBand as interconnect
- The operating system is Red Hat Enterprise Linux 6
NWZPHI for the impatient reader
The name of the login server is NWZPHI. Access is granted to all users who are members of the group u0clstr and of at least one group starting with p0, q0 or r0. In addition, every user with access to PALMA may use NWZPHI. You can register yourself for u0clstr at MeinZIV (go to “Username (account) and group memberships” / „Nutzerkennung und Gruppenmitgliedschaften“).
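Once the group membership is active, you can log in via SSH, for example as follows (the full hostname is an assumption; use the address given by the ZIV if it differs, and replace username by your own account):
ssh username@nwzphi.uni-muenster.de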
The batch and module systems work very similarly to those on PALMA.
Differences to PALMA
If you are familiar with PALMA, starting jobs on NWZPHI is quite easy. Some differences are mentioned here:
- In the submit file, you do not need the switch "-A"
- One node has 24 CPU cores
- The node names and properties are different
- The operating system version is different, so you have to recompile your code
- To use the Xeon Phi accelerators, more work is necessary (see below)
Starting jobs on NWZPHI
- Choose your software environment and (optionally) compile your code
- Submit your job via the batch system
Environment Modules
Environment variables (like PATH, LD_LIBRARY_PATH) for compilers and libraries can be set by modules:
| Command (short and long form) | Meaning |
| module av[ailable] | Lists all available modules |
| module li[st] | Lists all modules in the current environment |
| module show modulename | Lists all changes caused by a module |
| module add module1 module2 ... | Adds modules to the current environment |
| module rm module1 module2 ... | Removes modules from the current environment |
| module purge | Removes all modules from the current environment |
To use the same modules at every login, put the commands in your $HOME/.bashrc. The recommended default module is
module add intel/2016a
This is a toolchain module that also loads other modules such as Intel MPI and the MKL.
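A typical session could then look like this (a minimal sketch; mpiicc is the C compiler wrapper shipped with Intel MPI, and hello.c stands for your own source file):
module add intel/2016a
module li
mpiicc -O2 -o hello hello.c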
Batch system
The batch system Torque and the scheduler Moab are used to submit jobs. It is not allowed to start jobs manually. Batch jobs should only be submitted from the server mn02.
Creating submit-files
Example of a submit file for an MPI job:
#PBS -o output.dat
#PBS -l walltime=01:00:00,nodes=2:ppn=24
#PBS -M username@uni-muenster.de
#PBS -m ae
#PBS -q default
#PBS -N job_name
#PBS -j oe
cd $PBS_O_WORKDIR
mpdboot -n 2 -f $PBS_NODEFILE -v
mpirun -machinefile $PBS_NODEFILE -np 48 ./executable
An MPI job with 48 processes (2 nodes with 24 cores each) is started.
Further information:
- username: Replace by your own username
- job directory: The job starts in the directory from which it was submitted ($PBS_O_WORKDIR); the executable should be located there
- executable: Enter the name of the executable
- walltime: The time needed for a whole run; at the moment, a maximum of 48 hours is possible
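Assuming the submit file above is saved as submit.cmd (the file name is arbitrary), it is handed to the batch system with the usual Torque commands:
qsub submit.cmd
qstat -u username
qdel <jobid>
qsub returns a job ID, qstat shows the state of your own jobs, and qdel cancels a job that is no longer needed.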
If no MPI is needed, the submit file can be simpler.
Example for a job using OpenMP:
#PBS -o output.dat
#PBS -l walltime=01:00:00,nodes=1:ppn=24
#PBS -M username@uni-muenster.de
#PBS -m ae
#PBS -q default
#PBS -N job_name
#PBS -j oe
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=24
./executable
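Instead of hard-coding the number of threads, it can also be derived from the number of requested cores; $PBS_NODEFILE contains one line per reserved core, so the relevant part of the script could read (a sketch under that assumption):
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=$(wc -l < $PBS_NODEFILE)
./executable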
Choosing the nodes
The cluster consists of the following nodes:
| Name | Hardware | Queue | Annotations | Max walltime |
| sl250-01, sl250-02 | 24 cores, 64 GB RAM, 1 Xeon Phi accelerator | debug | Debugging nodes with a short maximum walltime, so waiting times are shorter | 4 hours |
| sl270-01 to sl270-12 | 24 cores, 128 GB RAM, 8 Xeon Phi accelerators | default | Production nodes | 48 hours |
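For short test runs, the debug queue from the table above can be used; only the queue name and the (shorter) walltime in the submit file have to be adapted, for example:
#PBS -q debug
#PBS -l walltime=04:00:00,nodes=1:ppn=24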