---+ NWZPHI with !CentOS 7

We are reinstalling the complete cluster with a new operating system (!CentOS 7) and new drivers. The Xeon Phi accelerators are now able to communicate with those of other hosts. At the moment, the nodes sl270-01 - sl270-06 and sl270-12 are installed with the new software.

A recompilation of your software is recommended. Please use sl250-01 for this purpose for now (until further notice). Log in to the node via ssh and check with "module av" whether your desired software is available. If you need software that has not been installed yet, please contact us at hpc@uni-muenster.de as usual.

As a starting point, please use the module "intel/2016a". This is a toolchain that loads the other Intel modules; have a look with "module li" afterwards. To make this permanent, add the line

<blockquote> module add intel/2016a </blockquote>

to your .bashrc and comment out every other module command.

To use the nodes sl270-01, sl270-02 and sl270-03, you currently have to use the queue "test". If you want to use the accelerators, please reserve three times as many CPU cores as accelerators (each node has 24 CPU cores and 8 accelerators). An example could look like this:

<blockquote> qsub -I -q test -l nodes=1:ppn=9:mics=3 </blockquote>

To use the accelerators, you have to recompile your code with the Intel compiler and the "-mmic" flag, so you need a separate binary for the host and for the accelerator:

<blockquote> mpiicpc code.c -mmic -o program.mic </blockquote>

To address the accelerators that have been reserved for you, use the script "allocated-mics.pl", which is in your PATH. The host names of the cards are no longer mic0, mic1, ...; they now contain the name of their host, e.g. sl270-01-mic0, sl270-01-mic1, and so on. This is necessary to set up the communication between the accelerators.
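The interactive `qsub` call above can also be written as a batch script. The following is only a sketch under the same resource rule (three CPU cores per accelerator, one node, three cards); the walltime value and script layout are assumptions, not site policy:

```shell
#!/bin/bash
#PBS -q test                     # the queue required for sl270-01 .. sl270-03
#PBS -l nodes=1:ppn=9:mics=3     # 3 cards -> 3 x 3 = 9 CPU cores
#PBS -l walltime=01:00:00        # assumed walltime, adjust as needed

cd "$PBS_O_WORKDIR"
module add intel/2016a

# Write the reserved card host names (e.g. sl270-01-mic0) to a hostfile
allocated-mics.pl > mics.list
mpirun -n 120 -hostfile mics.list ./program.mic
```

Submit it with `qsub script.sh`; the `#PBS` directives replace the command-line options of the interactive example.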
An example of how to use the accelerators with MPI could look like this:

<blockquote> allocated-mics.pl > ${HOME}/mics.list<br />mpirun -n 120 -hostfile ${HOME}/mics.list ./program.mic </blockquote>

Each card has 60 cores with up to four threads per core, so up to 240 threads can be created per card. In our experience, however, it is often better to use only 120 threads per card.

-- %USERSIG{HolgerAngenent - 2016-06-07}%

---++ Comments

%COMMENT%
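The rank count passed to `mpirun -n` follows from the card count and the threads-per-card advice above. A minimal sketch of that arithmetic (the variable names are illustrative, not part of the site's tooling):

```shell
# 60 cores per card, 2 threads per core recommended (half of the
# 4 hardware threads), per the advice above.
CARDS=3                # number of accelerators reserved via qsub (mics=3)
THREADS_PER_CARD=120   # 60 cores x 2 threads
RANKS=$((CARDS * THREADS_PER_CARD))
echo "$RANKS"          # total MPI ranks for mpirun -n
```

With three cards this yields 360 ranks; the `-n 120` in the example above corresponds to a single card.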
Topic revision: r4 - 2016-07-18 - HolgerAngenent