---+ !PALMA II

*!!! Attention !!!* A new wiki with information about PALMA II and HPC in general can be found at the <a href="https://zivconfluence.uni-muenster.de/display/HPC" target="_self" title="HPC">WWU Confluence</a>!

%TOC{title="Content"}%

---++ Overview

PALMA II is the HPC system of the Zentrum für Informationsverarbeitung. To be able to log in, you have to register for the group u0clstr in [[https://mein-ziv.uni-muenster.de/][MeinZIV]]. The login node is currently palma2c.uni-muenster.de. You can reach it via ssh (from Windows, for example with [[https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html][PuTTY]]).

---++ Filesystems

When you log in to the cluster for the first time, a directory in /home is created for you. Please use it only to store your programs; do not store your numerical results there. Storage in home is limited to 400 GB. Create a directory in /scratch/tmp and store the data you produce on the compute nodes there. To enforce this, home will be mounted read-only on the compute nodes in the future. Since /scratch is not intended as an archive, please remove your data there as soon as you no longer need it.

---++ Software/The module concept

The software on PALMA II can be accessed via modules. These are small scripts that set environment variables (like PATH and LD_LIBRARY_PATH) pointing to the locations where the software is installed (mostly on network drives, so that the software is available on every node of the cluster). The module system used here is [[https://www.tacc.utexas.edu/research-development/tacc-projects/lmod][Lmod]]. In contrast to the older environment modules used on PALMA I and NWZPHI, there is the new command "module spider". Please find more information on this below.
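The "module spider" command just mentioned can, for example, be used like this; a sketch using the Caffe module and the fosscuda/2018b toolchain described further down this page (the exact output depends on the installed software):

<verbatim>
# show the description of a package and which modules
# have to be loaded to make it available
module spider Caffe

# load the toolchain first (hierarchical scheme), then the package
module add fosscuda/2018b
module add Caffe
</verbatim>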
The most important difference between PALMA I and !PALMA II is the [[https://hpcugent.github.io/easybuild/files/hust14_paper.pdf][hierarchical module naming scheme]].

| *Command (short and long form)* | *Meaning* |
| module av[ailable] | List all currently available modules |
| module spider | List all available modules with their descriptions |
| module spider _modulename_ | Show the description of a module and a hint which modules have to be loaded to make it available |
| module li[st] | List all modules in the current environment |
| module show _modulename_ | List all changes caused by a module |
| module add _module1_ _module2_ ... | Add modules to the current environment |
| module rm _module1_ _module2_ ... | Remove modules from the current environment |
| module purge | Remove all modules from the current environment |

Hierarchical module naming scheme means that you do not see all modules at the same time. You have to load a toolchain or compiler first to see the software that has been compiled with it. At the moment there are the following toolchains:

   * foss/2018a: GCC with !OpenMPI
   * intel/2018a: Intel compiler with Intel MPI

If you want to use the Intel compiler, you can for example type:
<pre>
module add intel/2018a
module av
</pre>
and you will see the software that has been compiled with this version. Alternatively you can use the "module spider" command.

---++ Monitoring

   * [[https://palma2c.uni-muenster.de/ganglia/?c=PALMA%20II&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2][Ganglia]]
   * If you have X forwarding enabled, you can use sview (just type "sview" at the command line).
   * pestat (a command-line tool for monitoring the batch system)

---++ The batch system

The batch system on PALMA II is SLURM.
If you are used to PBS/Maui and want to switch to SLURM, this document might help you: https://slurm.schedmd.com/rosetta.pdf

---+++ The partitions

   * normal: 434 nodes with 72 CPU threads and 92 or 192 GB RAM. The maximal run time is 7 days. To be able to use the himem nodes (with 192 GB), you have to set the #SBATCH --mem parameter to a value higher than 92 GB.
   * express: 5 nodes with 72 threads and 92 GB RAM (one of them with 192 GB). A partition for short-running (test) jobs with a maximal walltime of 2 hours.
   * bigsmp: 3 nodes with 144 threads and 1.5 TB RAM
   * largesmp: 2 nodes with 144 threads and 3 TB RAM
   * requeue: Jobs in this partition run on the exclusive nodes listed below. If your job is running on one of the exclusive nodes while jobs are submitted there, your job will be terminated and requeued, so use with care. The maximal walltime is 24 hours. There are also two 1.5 TB machines available in the requeue partition.
   * gpuk20: 4 nodes with 3 NVIDIA K20 GPUs each
   * gpuv100: 1 node with 4 NVIDIA V100 GPUs
   * gputitanxp: 1 node with 8 NVIDIA !TitanXP GPUs

There are some special partitions which are only allowed for certain groups (these are also Skylake nodes like in the normal partition):

   * p0fuchs: 9 lowmem (96 GB) nodes
   * p0kulesz: 6 lowmem and 3 himem (192 GB) nodes
   * p0klasen: 1 lowmem and 1 himem node
   * p0kapp: 1 lowmem node
   * hims: 25 lowmem and 38 himem nodes
   * d0ow: 1 lowmem node
   * q0heuer: 15 lowmem nodes
   * e0mi: 2 himem nodes
   * p0rohlfi: 7 lowmem and 8 himem nodes

When porting a PBS script, there are some differences to the old !PALMA:

   * The first line of the submit script has to be #!/bin/bash
   * A queue is called a partition in SLURM terms. These terms are used synonymously here.
   * The variable $PBS_O_WORKDIR will not be set. Instead, the job starts in the directory in which the script resides.
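As noted in the partition list above, the himem nodes of the normal partition are selected purely via the memory request; a minimal sketch (the memory value and program name are placeholders):

<verbatim>
#!/bin/bash
#SBATCH --partition normal
#SBATCH --nodes=1
# requesting more than 92G makes the job eligible
# only for the 192 GB himem nodes
#SBATCH --mem=150G
#SBATCH --time=1-00:00:00

./program
</verbatim>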
---+++ Submit a job

Create a file, for example called submit.cmd:
<verbatim>
#!/bin/bash

# set the number of nodes
#SBATCH --nodes=1

# set the number of CPU cores per node
#SBATCH --ntasks-per-node=72

# How much memory is needed (per node). Possible units: K, G, M, T
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=24:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address
#SBATCH --mail-user=your_account@uni-muenster.de

# run the application
./program
</verbatim>

You can send your job to the batch system with the command "sbatch submit.cmd". It is recommended to reserve complete nodes if you can use 72 threads. A detailed description can be found here: http://slurm.schedmd.com/sbatch.html

---+++ Starting jobs with MPI-parallel codes

mpirun will get all necessary information from SLURM if the job is submitted appropriately. If you want to start, for example, 144 MPI ranks distributed over two nodes, you could do it the following way:
<verbatim>
#!/bin/bash

# set the number of nodes
#SBATCH --nodes=2

# use the nodes exclusively
#SBATCH --exclusive

# How much memory is needed (per node). Possible units: K, G, M, T
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=2-00:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address
#SBATCH --mail-user=your_account@uni-muenster.de

# run the application
mpirun program
</verbatim>

Some codes do not profit from hyperthreading, so it can be better to start only 36 processes per node:
<verbatim>
#!/bin/bash

# set the number of nodes
#SBATCH --nodes=2

# use the nodes exclusively, starting 36 tasks per node
#SBATCH --exclusive
#SBATCH --ntasks-per-node=36

# How much memory is needed (per node). Possible units: K, G, M, T
#SBATCH --mem=64G

# set a partition
#SBATCH --partition normal

# set max wallclock time
#SBATCH --time=2-00:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end and abortion of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH --output output.dat

# send mail to this address
#SBATCH --mail-user=your_account@uni-muenster.de

# run the application
mpirun program
</verbatim>

For starting hybrid jobs (using MPI and !OpenMP parallelization at the same time), you can use the --cpus-per-task switch:
<verbatim>
srun -p normal --nodes=2 --ntasks=72 --ntasks-per-node=36 --cpus-per-task=2 --pty bash
OMP_NUM_THREADS=2 mpirun ./program
</verbatim>

---+++ Using the GPU nodes

If you want to use a GPU for your computations:

   * Use one of the gpu... partitions (see above).
   * Start your jobs with #SBATCH --export=none, because there are other modules on the GPU nodes.
   * You can use the batch system to reserve only some of the GPUs. Use Slurm's [[https://slurm.schedmd.com/gres.html][generic resources]] for this. For example, write #SBATCH --gres=gpu:1 to get only one GPU. Reserve CPUs accordingly.
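Putting the points above together, a minimal sketch of a GPU job script; the partition and GPU count are taken from this page, while the CPU request and program name are placeholders to be adapted to your job:

<verbatim>
#!/bin/bash
#SBATCH --partition gpuv100
# do not export the login-node environment;
# the GPU nodes provide other modules
#SBATCH --export=none
# reserve one of the four V100 GPUs of this node
#SBATCH --gres=gpu:1
# reserve CPUs accordingly (placeholder values)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=24:00:00

./gpu_program
</verbatim>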
---++++ Using Caffe

Caffe 1.0 is available for Python 3 on the GPU partitions in the fosscuda/2018b toolchain. To use it, load =fosscuda/2018b= and =Caffe= (=ml fosscuda/2018b Caffe=) and export the Caffe =PYTHONPATH=.

On Skylake nodes (=gputitanxp= and =gpuv100= partitions):
<verbatim>
export PYTHONPATH=/Applic.HPC/skylakegpu/software/MPI/GCC-CUDA/7.3.0-2.30-9.2.88/OpenMPI/3.1.1/Caffe/1.0-Python-3.6.6/python:$PYTHONPATH
</verbatim>

On Broadwell nodes (=gpuk20= partition):
<verbatim>
export PYTHONPATH=/Applic.HPC/k20gpu/software/MPI/GCC-CUDA/7.3.0-2.30-9.2.88/OpenMPI/3.1.1/Caffe/1.0-Python-3.6.6/python:$PYTHONPATH
</verbatim>

---+++ Show information about the partitions

<pre>scontrol show partition</pre>

---+++ Show information about the nodes

<pre>sinfo</pre>

---+++ Running interactive jobs with SLURM

Use, for example, the following command:
<pre>srun --partition express --nodes 1 --ntasks-per-node=8 --pty bash</pre>
This starts a job in the express partition on one node with eight cores.

---+++ Information on jobs

List all current jobs for a user:
<pre>squeue -u <username></pre>

List all running jobs for a user:
<pre>squeue -u <username> -t RUNNING</pre>

List all pending jobs for a user:
<pre>squeue -u <username> -t PENDING</pre>

List all current jobs in the normal partition for a user:
<pre>squeue -u <username> -p normal</pre>

List detailed information for a job (useful for troubleshooting):
<pre>scontrol show job -dd <jobid></pre>

Once your job has completed, you can get additional information that was not available during the run.
This includes run time, memory used, etc.

To get statistics on completed jobs by job ID:
<pre>sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed</pre>

To view the same information for all jobs of a user:
<pre>sacct -u <username> --format=JobID,JobName,MaxRSS,Elapsed</pre>

Show priorities for waiting jobs:
<pre>sprio -l</pre>

---+++ Controlling jobs

To cancel one job:
<pre>scancel <jobid></pre>

To cancel all jobs of a user:
<pre>scancel -u <username></pre>

To cancel all pending jobs of a user:
<pre>scancel -t PENDING -u <username></pre>

To cancel one or more jobs by name:
<pre>scancel --name myJobName</pre>

To pause a particular job:
<pre>scontrol hold <jobid></pre>

To resume a particular job:
<pre>scontrol resume <jobid></pre>

To requeue (cancel and rerun) a particular job:
<pre>scontrol requeue <jobid></pre>

---++ [[PALMAIIVisialization][Visualization]]

For the visualization of bigger data sets it is impractical to copy them to your local machine. We therefore offer a solution to do the postprocessing on PALMA II. Since the CPUs are quite fast, the rendering is done in software.

   * Prerequisites: you need a local installation of [[https://sourceforge.net/projects/turbovnc/][TurboVNC]].
   * Log in to PALMA and call
<verbatim>
ml vis/vnc
vnc.sh
</verbatim>
   * Wait until the session has started and follow the instructions of the script (ssh to the compute node and start your local !TurboVNC).
   * Open a terminal in the !VNC window and enter "module add intel Mesa" or "module add foss Mesa".
   * Start an application with a !GUI.

-- %USERSIG{HolgerAngenent - 2018-07-11}%
Topic revision: r17 - 2019-09-09 - SebastianPotthoff