---+ PALMA-NG

%TOC{title="Content"}%

---++ Overview

palma3 is the login node to a newer part of the !PALMA system. It provides several queues/partitions for different purposes:

   * u0dawin: a general-purpose queue. It can be used by everyone, even without being a member of a group that has submitted a proposal for !PALMA. It replaces the old !ZIVHPC cluster.
   * k20gpu: four nodes, each equipped with three NVIDIA Tesla K20 accelerators.
   * normal: 44 nodes with 32 Broadwell CPU cores each (not fully installed yet).
   * zivsmp: an SMP machine with 512 GB RAM, the old login node of !ZIVHPC (not available yet).
   * phi: two nodes with four Intel Xeon Phi accelerators each (not available yet).

---++ The batch system

The batch system on palma3 is SLURM, but a PBS wrapper is installed, so most of your scripts should still work. If you want to switch to native SLURM commands, this document may help you: [[https://slurm.schedmd.com/rosetta.pdf]]

When using PBS scripts, note the following differences to PALMA:

   * The first line of the submit script has to be #!/bin/bash
   * What PBS calls a queue is called a partition in SLURM. The terms are used synonymously here.
   * The variable $PBS_O_WORKDIR will not be set. Instead, you will start in the directory in which the script resides.
   * To use the "module add" command, you first have to source two scripts: "source /etc/profile.d/modules.sh; source /etc/profile.d/modules_local.sh"

---+++ Submit a job

Create a file, for example called submit.cmd:

<verbatim>
#!/bin/bash

# set the number of nodes
#SBATCH --nodes=1

# set the number of tasks (CPU cores)
#SBATCH -n 8

# set a partition
#SBATCH -p u0dawin

# set max wallclock time
#SBATCH --time=24:00:00

# set name of job
#SBATCH --job-name=test123

# mail alert at start, end, and abort of execution
#SBATCH --mail-type=ALL

# set an output file
#SBATCH -o output.dat

# send mail to this address
#SBATCH --mail-user=your_account@uni-muenster.de

# run the application
./program
</verbatim>

You can send your submission to the batch system with the command "sbatch submit.cmd". A detailed description can be found here: [[http://slurm.schedmd.com/sbatch.html]]

---+++ Show running jobs

   * squeue
   * qstat
   * showq

---+++ Show information about the queues

<verbatim>scontrol show partition</verbatim>

---+++ Running interactive jobs with SLURM

Use, for example, the following command:

<verbatim>srun -p u0dawin -N 1 --ntasks-per-node=8 --pty bash</verbatim>

This starts an interactive job in the u0dawin queue/partition on one node with eight cores.

-- %USERSIG{HolgerAngenent - 2016-08-22}%
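Beyond the commands listed above, a few native SLURM commands are useful for monitoring and controlling your own jobs. This is a sketch only; the job ID 12345 is a placeholder for a real job ID from your squeue output, and all commands assume you are logged in on palma3:

<verbatim>
# show only your own jobs (one line per job, with state and partition)
squeue -u $USER

# show detailed information about a single job
scontrol show job 12345

# cancel a job
scancel 12345

# show accounting information for a job after it has finished
sacct -j 12345
</verbatim>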
Topic revision: r6 - 2016-09-06 - HolgerAngenent