NWZPHI with CentOS 7

We are reinstalling the complete cluster with a new operating system (CentOS 7) and new drivers. The Xeon Phi accelerators are now able to communicate with those of other hosts. At the moment, the nodes sl270-01 to sl270-06 and sl270-12 have been installed with the new software.

A recompilation of your software is recommended. Please use sl250-01 for this purpose for the time being (until further notice). Just log in to the node via ssh and check with "module av" whether your desired software is available. If you need software that has not been installed yet, please contact us at hpc@uni-muenster.de as usual.
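
A typical session could look like this (replace <username> with your own account):

ssh <username>@sl250-01
module av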

As a starting point, please use the module "intel/2016a". This is a toolchain module that loads other Intel modules; have a look with "module li" afterwards. To make this setup permanent, add the line

module add intel/2016a

to your .bashrc and comment out every other module command.
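
The relevant part of your .bashrc could then look like this (a minimal sketch; the commented-out module name is just a placeholder for whatever you loaded before):

# module add intel/15.0.3   # old module commands, now disabled
module add intel/2016a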

To use the nodes sl270-01, sl270-02 and sl270-03, you currently have to use the queue "test". If you want to use the accelerators, please reserve three times as many CPU cores as accelerators (there are 24 CPU cores and 8 accelerators in each node). An example could look like this:

qsub -I -q test -l nodes=1:ppn=9:mics=3
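
Following the same 3:1 rule, a whole node could be reserved like this (24 cores = 3 x 8 accelerators):

qsub -I -q test -l nodes=1:ppn=24:mics=8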

To use the accelerators, you have to recompile your code (with the Intel compiler) using the "-mmic" flag. This means you need a separate binary for the host and for the accelerator.

mpiicpc code.c -mmic -o program.mic
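
A complete build that produces both binaries could look like this (the file and program names are just examples):

mpiicpc code.c -o program            # host version, runs on the Xeon CPUs
mpiicpc code.c -mmic -o program.mic  # accelerator version, runs on the Phi cards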

To use the accelerators that have been reserved for you, you can use the script "allocated-mics.pl", which is in your PATH. The host names of the cards are no longer mic0, mic1, ..., but contain the name of their host, e.g. sl270-01-mic0, sl270-01-mic1, and so on. This naming is necessary to set up the communication between the accelerators.
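
For example, after reserving three accelerators on sl270-01 as above, the output of allocated-mics.pl might look like this (one host name per line; the exact names depend on your reservation):

sl270-01-mic0
sl270-01-mic1
sl270-01-mic2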

An example of how to use the accelerators with MPI could look like this:

allocated-mics.pl > ${HOME}/mics.list
mpirun -n 120 -hostfile ${HOME}/mics.list ./program.mic
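
As a minimal test program, a sketch like the following (saved as code.c and compiled as shown above) lets each rank report the card it runs on:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* each rank prints its host name, e.g. sl270-01-mic0 */
    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}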

Each card has 60 cores and supports up to four threads per core, so a total of 240 threads per card can be created. From my experience, however, it can be better to use only 120 threads per card.
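
If your program uses OpenMP within each MPI rank, the thread count can be set through the environment. A sketch with one rank per card and 120 threads each, assuming the reservation of three accelerators from above (-genv is Intel MPI's option for exporting a variable to all ranks):

mpirun -n 3 -hostfile ${HOME}/mics.list -genv OMP_NUM_THREADS 120 ./program.mic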

-- Holger Angenent - 2016-06-07
