Commit 5dcd0a4f authored by Lukáš Krupčík

add

parent 9a0dfe83
4 merge requests: !368 Update prace.md to document the change from qprace to qprod as the default..., !367 Update prace.md to document the change from qprace to qprod as the default..., !366 Update prace.md to document the change from qprace to qprod as the default..., !323 extended-acls-storage-section
Showing with 169 additions and 185 deletions
@@ -6,10 +6,10 @@ Storage
 There are two main shared file systems on Anselm cluster, the
-[HOME](#home) and [SCRATCH](#scratch). All
-login and compute nodes may access same data on shared filesystems.
-Compute nodes are also equipped with local (non-shared) scratch, ramdisk
-and tmp filesystems.
+[HOME](../storage.html#home) and
+[SCRATCH](../storage.html#scratch). All login and compute
+nodes may access same data on shared filesystems. Compute nodes are also
+equipped with local (non-shared) scratch, ramdisk and tmp filesystems.

 Archiving
 ---------
@@ -17,19 +17,19 @@ Archiving
 Please don't use shared filesystems as a backup for large amount of data
 or long-term archiving mean. The academic staff and students of research
 institutions in the Czech Republic can use [CESNET storage
-service](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage),
-which is available via SSHFS.
+service](../storage-1/cesnet-data-storage.html), which
+is available via SSHFS.

 Shared Filesystems
 ------------------

 Anselm computer provides two main shared filesystems, the [HOME
-filesystem](#home) and the [SCRATCH
-filesystem](#scratch). Both HOME and SCRATCH filesystems
-are realized as a parallel Lustre filesystem. Both shared file systems
-are accessible via the Infiniband network. Extended ACLs are provided on
-both Lustre filesystems for the purpose of sharing data with other users
-using fine-grained control.
+filesystem](../storage.html#home) and the [SCRATCH
+filesystem](../storage.html#scratch). Both HOME and
+SCRATCH filesystems are realized as a parallel Lustre filesystem. Both
+shared file systems are accessible via the Infiniband network. Extended
+ACLs are provided on both Lustre filesystems for the purpose of sharing
+data with other users using fine-grained control.

 ### Understanding the Lustre Filesystems
@@ -206,7 +206,7 @@ The HOME filesystem should not be used to archive data of past Projects
 or other unrelated data.

 The files on HOME filesystem will not be deleted until end of the [users
-lifecycle](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials).
+lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html).

 The filesystem is backed up, such that it can be restored in case of
 catasthropic failure resulting in significant data loss. This backup
...
@@ -11,7 +11,7 @@ Accessing IT4Innovations internal resources via VPN
 **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS
 patch**

 Workaround can be found at
-<https://docs.it4i.cz/vpn-connection-fail-in-win-8.1>
+[https://docs.it4i.cz/vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html)
@@ -35,7 +35,7 @@ It is impossible to connect to VPN from other operating systems.
 You can install VPN client from web interface after successful login
 with LDAP credentials on address <https://vpn1.it4i.cz/anselm>

-![](https://docs.it4i.cz/anselm-cluster-documentation/login.jpg/@@images/30271119-b392-4db9-a212-309fb41925d6.jpeg)
+![](../login.jpg/@@images/30271119-b392-4db9-a212-309fb41925d6.jpeg)

 According to the Java settings after login, the client either
 automatically installs, or downloads installation file for your
@@ -43,29 +43,29 @@ operating system. It is necessary to allow start of installation tool
 for automatic installation.

-![Java
-detection](https://docs.it4i.cz/anselm-cluster-documentation/java_detection.jpg/@@images/5498e1ba-2242-4b9c-a799-0377a73f779e.jpeg "Java detection")
+![Java
+detection](../java_detection.jpg/@@images/5498e1ba-2242-4b9c-a799-0377a73f779e.jpeg "Java detection")

-![Execution
-access](https://docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpg/@@images/4d6e7cb7-9aa7-419c-9583-6dfd92b2c015.jpeg "Execution access")![Execution
-access
-2](https://docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpg/@@images/bed3998c-4b82-4b40-83bd-c3528dde2425.jpeg "Execution access 2")
+![Execution
+access](../executionaccess.jpg/@@images/4d6e7cb7-9aa7-419c-9583-6dfd92b2c015.jpeg "Execution access")![Execution
+access
+2](../executionaccess2.jpg/@@images/bed3998c-4b82-4b40-83bd-c3528dde2425.jpeg "Execution access 2")

 After successful installation, VPN connection will be established and
 you can use available resources from IT4I network.

-![Successfull
-instalation](https://docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpg/@@images/c6d69ffe-da75-4cb6-972a-0cf4c686b6e1.jpeg "Successfull instalation")
+![Successfull
+instalation](../successfullinstalation.jpg/@@images/c6d69ffe-da75-4cb6-972a-0cf4c686b6e1.jpeg "Successfull instalation")

 If your Java setting doesn't allow automatic installation, you can
 download installation file and install VPN client manually.

-![Installation
-file](https://docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpg/@@images/202d14e9-e2e1-450b-a584-e78c018d6b6a.jpeg "Installation file")
+![Installation
+file](../instalationfile.jpg/@@images/202d14e9-e2e1-450b-a584-e78c018d6b6a.jpeg "Installation file")

 After you click on the link, download of installation file will start.

-![Download file
-successfull](https://docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpg/@@images/69842481-634a-484e-90cd-d65e0ddca1e8.jpeg "Download file successfull")
+![Download file
+successfull](../downloadfilesuccessfull.jpg/@@images/69842481-634a-484e-90cd-d65e0ddca1e8.jpeg "Download file successfull")

 After successful download of installation file, you have to execute this
 tool with administrator's rights and install VPN client manually.
@@ -76,50 +76,47 @@ Working with VPN client
 You can use graphical user interface or command line interface to run
 VPN client on all supported operating systems. We suggest using GUI.

-![Icon](https://docs.it4i.cz/anselm-cluster-documentation/icon.jpg "Icon")
+![Icon](../icon.jpg "Icon")

 Before the first login to VPN, you have to fill
 URL **https://vpn1.it4i.cz/anselm** into the text field.

-![First
-run](https://docs.it4i.cz/anselm-cluster-documentation/firstrun.jpg "First run")
+![First run](../firstrun.jpg "First run")

 After you click on the Connect button, you must fill your login
 credentials.

-![Login -
-GUI](https://docs.it4i.cz/anselm-cluster-documentation/logingui.jpg "Login - GUI")
+![Login - GUI](../logingui.jpg "Login - GUI")

 After a successful login, the client will minimize to the system tray.
 If everything works, you can see a lock in the Cisco tray icon.

-![Successfull
-connection](https://docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg "Successfull connection")
+![Successfull
+connection](../anyconnecticon.jpg "Successfull connection")

 If you right-click on this icon, you will see a context menu in which
 you can control the VPN connection.

-![Context
-menu](https://docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg "Context menu")
+![Context
+menu](../anyconnectcontextmenu.jpg "Context menu")

 When you connect to the VPN for the first time, the client downloads the
 profile and creates a new item "ANSELM" in the connection list. For
 subsequent connections, it is not necessary to re-enter the URL address,
 but just select the corresponding item.

-![Anselm
-profile](https://docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg "Anselm profile")
+![Anselm profile](../Anselmprofile.jpg "Anselm profile")

 Then AnyConnect automatically proceeds like in the case of first logon.

-![Login with
-profile](https://docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpg/@@images/a6fd5f3f-bce4-45c9-85e1-8d93c6395eee.jpeg "Login with profile")
+![Login with
+profile](../loginwithprofile.jpg/@@images/a6fd5f3f-bce4-45c9-85e1-8d93c6395eee.jpeg "Login with profile")

 After a successful logon, you can see a green circle with a tick mark on
 the lock icon.

-![successful
-login](https://docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg "successful login")
+![successful
+login](../successfullconnection.jpg "successful login")

 For disconnecting, right-click on the AnyConnect client icon in the
 system tray and select **VPN Disconnect**.
...
@@ -12,7 +12,7 @@ The X Window system is a principal way to get GUI access to the
 clusters.

 Read more about configuring [**X Window
-System**](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc).
+System**](../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html).

 VNC
 ---

@@ -27,5 +27,5 @@ to remotely control another <span
 class="link-external">[computer](http://en.wikipedia.org/wiki/Computer "Computer")</span>.

 Read more about configuring
-**[VNC](https://docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc)**.
+**[VNC](../../salomon/accessing-the-cluster/graphical-user-interface/vnc.html)**.
@@ -192,14 +192,14 @@ nodes.****
-**![](https://docs.it4i.cz/anselm-cluster-documentation/bullxB510.png)**
+**![](bullxB510.png)**

 ****Figure Anselm bullx B510 servers****

 ### Compute Nodes Summary********

-Node type                    Count   Range           Memory   Cores         [Access](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy)
----------------------------- ------- --------------- -------- ------------- -----------------------------------------------------------------------------------------------------------------------------------------------
+Node type                    Count   Range           Memory   Cores         [Access](resource-allocation-and-job-execution/resources-allocation-policy.html)
+---------------------------- ------- --------------- -------- ------------- --------------------------------------------------------------------------------------------------
 Nodes without accelerator    180     cn[1-180]       64GB     16 @ 2.4Ghz   qexp, qprod, qlong, qfree
 Nodes with GPU accelerator   23      cn[181-203]     96GB     16 @ 2.3Ghz   qgpu, qprod
 Nodes with MIC accelerator   4       cn[204-207]     96GB     16 @ 2.3GHz   qmic, qprod
...
@@ -49,7 +49,8 @@ We have also second modules repository. This modules repository is
 created using tool called EasyBuild. On Salomon cluster, all modules
 will be build by this tool. If you want to use software from this
 modules repository, please follow instructions in section [Application
-Modules Path Expansion](#EasyBuild).
+Modules
+Path Expansion](environment-and-modules.html#EasyBuild).

 The modules may be loaded, unloaded and switched, according to momentary
 needs.
@@ -112,3 +113,4 @@ This command expands your searched paths to modules. You can also add
 this command to the .bashrc file to expand paths permanently. After this
 command, you can use same commands to list/add/remove modules as is
 described above.
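The permanent path expansion described in the hunk above can be sketched as follows; the module tree location is a placeholder, not the actual Salomon path:

```shell
# Make the expanded module search path permanent by appending it to
# ~/.bashrc so every new shell picks it up. The path below is a
# hypothetical EasyBuild module tree -- substitute your site's path.
EASYBUILD_MODULES="/apps/easybuild/modules/all"
if ! grep -qs "module use $EASYBUILD_MODULES" ~/.bashrc; then
    echo "module use $EASYBUILD_MODULES" >> ~/.bashrc
fi
```

After re-sourcing `.bashrc`, the usual `module avail`, `module load`, and `module rm` commands then also see modules from the second repository.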
@@ -327,17 +327,16 @@ There are four types of compute nodes:
   5110P
 - 2 fat nodes - equipped with 512GB RAM and two 100GB SSD drives

-[More about Compute
-nodes](https://docs.it4i.cz/anselm-cluster-documentation/compute-nodes).
+[More about Compute nodes](compute-nodes.html).

 GPU and accelerated nodes are available upon request, see the [Resources
 Allocation
-Policy](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy).
+Policy](resource-allocation-and-job-execution/resources-allocation-policy.html).

 All these nodes are interconnected by fast <span
 class="WYSIWYG_LINK">InfiniBand <span class="WYSIWYG_LINK">QDR</span>
 network</span> and Ethernet network. [More about the <span
-class="WYSIWYG_LINK">Network</span>](https://docs.it4i.cz/anselm-cluster-documentation/network).
+class="WYSIWYG_LINK">Network</span>](network.html).

 Every chassis provides Infiniband switch, marked **isw**, connecting all
 nodes in the chassis, as well as connecting the chassis to the upper
 level switches.
@@ -347,11 +346,11 @@ shared /scratch storage is available for the scratch data. These file
 systems are provided by Lustre parallel file system. There is also local
 disk storage available on all compute nodes /lscratch. [More about
 <span
-class="WYSIWYG_LINK">Storage</span>](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage).
+class="WYSIWYG_LINK">Storage</span>](storage.html).

 The user access to the Anselm cluster is provided by two login nodes
 login1, login2, and data mover node dm1. [More about accessing
-cluster.](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster)
+cluster.](accessing-the-cluster.html)

 The parameters are summarized in the following tables:
@@ -362,8 +361,7 @@ Architecture of compute nodes
 x86-64
 Operating system
 Linux
-[**Compute
-nodes**](https://docs.it4i.cz/anselm-cluster-documentation/compute-nodes)
+[**Compute nodes**](compute-nodes.html)
 Totally
 209
 Processor cores
@@ -397,8 +395,7 @@ Total amount of RAM
 Fat compute node   2x Intel Sandy Bridge E5-2665, 2.4GHz   512GB   -

 For more details please refer to the [Compute
-nodes](https://docs.it4i.cz/anselm-cluster-documentation/compute-nodes),
-[Storage](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage),
-and
-[Network](https://docs.it4i.cz/anselm-cluster-documentation/network).
+nodes](compute-nodes.html),
+[Storage](storage.html), and
+[Network](network.html).
@@ -13,30 +13,29 @@ cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected
 by fully non-blocking fat-tree Infiniband network and equipped with
 Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA
 Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware
-Overview](https://docs.it4i.cz/anselm-cluster-documentation/hardware-overview).
+Overview](hardware-overview.html).

 The cluster runs bullx Linux [<span
 class="WYSIWYG_LINK"></span>](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)[operating
-system](https://docs.it4i.cz/anselm-cluster-documentation/software/operating-system),
-which is compatible with the <span class="WYSIWYG_LINK">RedHat</span>
-[<span class="WYSIWYG_LINK">Linux
-family.</span>](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
+system](software/operating-system.html), which is
+compatible with the <span class="WYSIWYG_LINK">RedHat</span> [<span
+class="WYSIWYG_LINK">Linux
+family.</span>](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)

 We have installed a wide range of
-[software](https://docs.it4i.cz/anselm-cluster-documentation/software)
-packages targeted at different scientific domains. These packages are
-accessible via the [modules
-environment](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules).
+[software](software.1.html) packages targeted at
+different scientific domains. These packages are accessible via the
+[modules environment](environment-and-modules.html).

 User data shared file-system (HOME, 320TB) and job data shared
 file-system (SCRATCH, 146TB) are available to users.

 The PBS Professional workload manager provides [computing resources
 allocations and job
-execution](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution).
+execution](resource-allocation-and-job-execution.html).

 Read more on how to [apply for
-resources](https://docs.it4i.cz/get-started-with-it4innovations/applying-for-resources),
-[obtain login
-credentials,](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials)
-and [access the
-cluster](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster).
+resources](../get-started-with-it4innovations/applying-for-resources.html),
+[obtain login
+credentials,](../get-started-with-it4innovations/obtaining-login-credentials.html)
+and [access the cluster](accessing-the-cluster.html).
@@ -18,7 +18,7 @@ not have a password and thus access to some services intended for
 regular users. This can lower their comfort, but otherwise they should
 be able to use the TIER-1 system as intended. Please see the [Obtaining
 Login Credentials
-section](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials),
+section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html),
 if the same level of access is required.

 All general [PRACE User
@@ -33,8 +33,7 @@ install additional software, please use [PRACE
 Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).

 Information about the local services are provided in the [introduction
-of general user
-documentation](https://docs.it4i.cz/anselm-cluster-documentation/introduction).
+of general user documentation](introduction.html).

 Please keep in mind, that standard PRACE accounts don't have a password
 to access the web interface of the local (IT4Innovations) request
 tracker and thus a new ticket should be created by sending an e-mail to
@@ -53,7 +52,7 @@ account at IT4Innovations. To get an account on the Anselm cluster, the
 user needs to obtain the login credentials. The procedure is the same as
 for general users of the cluster, so please see the corresponding
 [section of the general documentation
-here](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials).
+here](../get-started-with-it4innovations/obtaining-login-credentials.html).

 Accessing the cluster
 ---------------------
@@ -147,9 +146,9 @@ class="monospace">prace_service</span> script can be used:
 Although the preferred and recommended file transfer mechanism is [using
-GridFTP](#file-transfers), the GSI SSH implementation on
-Anselm supports also SCP, so for small files transfer gsiscp can be
-used:
+GridFTP](prace.html#file-transfers), the GSI SSH
+implementation on Anselm supports also SCP, so for small files transfer
+gsiscp can be used:

     $ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
@@ -165,11 +164,11 @@ If the user needs to run X11 based graphical application and does not
 have a X11 server, the applications can be run using VNC service. If the
 user is using regular SSH based access, please see the [section in
 general
-documentation](resolveuid/11e53ad0d2fd4c5187537f4baeedff33).
+documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/11e53ad0d2fd4c5187537f4baeedff33).

 If the user uses GSI SSH based access, then the procedure is similar to
 the SSH based access ([look
-here](resolveuid/11e53ad0d2fd4c5187537f4baeedff33)),
+here](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)),
 only the port forwarding must be done using GSI SSH:

     $ gsissh -p 2222 anselm.it4i.cz -L 5961:localhost:5961
@@ -180,7 +179,7 @@ After successful obtainment of login credentials for the local
 IT4Innovations account, the PRACE users can access the cluster as
 regular users using SSH. For more information please see the [section in
 general
-documentation](resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).
+documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).

 []()File transfers
 ------------------
@@ -188,7 +187,7 @@ documentation](resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).
 PRACE users can use the same transfer mechanisms as regular users (if
 they've undergone the full registration procedure). For information
 about this, please see [the section in the general
-documentation](resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).
+documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).

 Apart from the standard mechanisms, for PRACE users to transfer data
 to/from Anselm cluster, a GridFTP server running Globus Toolkit GridFTP
@@ -263,7 +262,7 @@ Generally both shared file systems are available through GridFTP:
 /scratch   Lustre   Shared SCRATCH mounted on the whole cluster

 More information about the shared file systems is available
-[here](https://docs.it4i.cz/anselm-cluster-documentation/storage).
+[here](storage.html).

 Usage of the cluster
 --------------------
@@ -278,14 +277,14 @@ because of insufficient amount of licenses.
 For production runs always use scratch file systems, either the global
 shared or the local ones. The available file systems are described
-[here](https://docs.it4i.cz/anselm-cluster-documentation/hardware-overview).
+[here](hardware-overview.html).

 ### Software, Modules and PRACE Common Production Environment

 All system wide installed software on the cluster is made available to
 the users via the modules. The information about the environment and
 modules usage is in this [section of general
-documentation](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules).
+documentation](environment-and-modules.html).

 PRACE users can use the "prace" module to use the [PRACE Common
 Production
@@ -299,7 +298,7 @@ Environment](http://www.prace-ri.eu/PRACE-common-production).
 General information about the resource allocation, job queuing and job
 execution is in this [section of general
-documentation](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction).
+documentation](resource-allocation-and-job-execution/introduction.html).

 For PRACE users, the default production run queue is "qprace". PRACE
 users can also use two other queues "qexp" and "qfree".
@@ -334,7 +333,7 @@ accounting runs whenever the computational cores are allocated or
 blocked via the PBS Pro workload manager (the qsub command), regardless
 of whether the cores are actually used for any calculation. See [example
 in the general
-documentation](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy).
+documentation](resource-allocation-and-job-execution/resources-allocation-policy.html).

 PRACE users should check their project accounting using the [PRACE
 Accounting Tool
...@@ -368,7 +367,8 @@ the quota use ...@@ -368,7 +367,8 @@ the quota use
$ lfs quota -u USER_LOGIN /scratch $ lfs quota -u USER_LOGIN /scratch
If the quota is insufficient, please contact the If the quota is insufficient, please contact the
[support](#help-and-support) and request an increase. [support](prace.html#help-and-support) and request an
increase.
Currently two compute nodes are dedicated for this service, with the
following configuration for each node:

[**Visualization node
configuration**](compute-nodes.html)

CPU
2x Intel Sandy Bridge E5-2670, 2.6GHz
Processor cores

InfiniBand QDR

Schematic overview
------------------

![rem_vis_scheme](scheme.png "rem_vis_scheme")

![rem_vis_legend](legend.png "rem_vis_legend")
How to use the service
----------------------

The procedure is:

#### 1. Connect to a login node. {#1-connect-to-a-login-node}

Please [follow the
documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).
#### 2. Run your own instance of TurboVNC server. {#2-run-your-own-instance-of-turbovnc-server}

    $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901

*If you use Windows and Putty, please refer to port forwarding setup
in the documentation:*
[https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc#section-12](accessing-the-cluster/x-window-and-vnc.html#section-12)
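The tunnel-and-connect sequence can be sketched as follows. The module
name, geometry, and display number are assumptions, not taken from this
page — a TurboVNC server picks the first free display (e.g. `:1`, which
listens on TCP port 5900+1=5901):

```
# On the login node: start a TurboVNC server (flags are illustrative)
$ module load turbovnc           # hypothetical module name; check `module avail`
$ vncserver -geometry 1600x1000 -depth 24
# note the display it reports, e.g. ":1" -> port 5901

# On your workstation: forward the port, then point the viewer at it
$ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
$ vncviewer localhost:5901       # in a second terminal
```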
#### 7. If you don't have Turbo VNC installed on your workstation. {#7-if-you-don-t-have-turbo-vnc-installed-on-your-workstation}
Tips and Tricks
---------------

If you want to increase the responsiveness of the visualization, please
adjust your TurboVNC client settings in this way:

![rem_vis_settings](turbovncclientsetting.png "rem_vis_settings")

To get an idea of how the settings affect the resulting picture
quality, three levels of "JPEG image quality" are demonstrated:

1. JPEG image quality = 30

![rem_vis_q3](quality3.png "rem_vis_q3")

2. JPEG image quality = 15

![rem_vis_q2](quality2.png "rem_vis_q2")

3. JPEG image quality = 10

![rem_vis_q1](quality1.png "rem_vis_q1")
Resource Allocation and Job Execution
=====================================

To run a [job](introduction.html), [computational
resources](introduction.html) for this particular job
must be allocated. This is done via the PBS Pro job workload manager
software, which efficiently distributes workloads across the
supercomputer. Extensive information about PBS Pro can be found in the
[official documentation
here](../pbspro-documentation.html), especially in the
[PBS Pro User's
Guide](../pbspro-documentation/pbspro-users-guide.1).
Resources Allocation Policy
---------------------------

The resources are allocated to the job in a fairshare fashion, subject
to constraints set by the queue and resources available to the Project.
[The
Fairshare](resource-allocation-and-job-execution/job-priority.html)
at Anselm ensures that individual users may consume an approximately
equal amount of resources per week. The resources are accessible via
several queues for queueing the jobs. The queues provide prioritized and
exclusive access to the computational resources. The following queues
are available to Anselm users:

Check the queue status at <https://extranet.it4i.cz/anselm/>

Read more on the [Resource Allocation
Policy](resource-allocation-and-job-execution/resources-allocation-policy.html)
page.
Job submission and execution
----------------------------

resources are allocated, the jobscript or interactive shell is executed
on the first of the allocated nodes.

Read more on the [Job submission and
execution](resource-allocation-and-job-execution/job-submission-and-execution.html)
page.
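As a sketch of what a submission looks like (queue names are those
listed on this page; the project ID, node count, and walltime below are
placeholders, and the 16-core node size is taken from the accounting
section of these docs):

```
# Allocate 2 full nodes (16 cores each) in the production queue for 4 hours
$ qsub -A PROJ-00-0 -q qprod -l select=2:ncpus=16,walltime=04:00:00 ./myjob

# Or start an interactive shell on one node in the express queue
$ qsub -I -q qexp -l select=1:ncpus=16
```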
Capacity computing
------------------

huge number of jobs, including **ways to run a huge number of single
core jobs**.

Read more on the [Capacity
computing](resource-allocation-and-job-execution/capacity-computing.html)
page.
per user, 1000 per job array**

Please follow one of the procedures below in case you wish to schedule
more than 100 jobs at a time.

- Use [Job arrays](capacity-computing.html#job-arrays)
  when running a huge number of
  [multithread](capacity-computing.html#shared-jobscript-on-one-node)
  (bound to one node only) or multinode (multithread across
  several nodes) jobs
- Use [GNU
  parallel](capacity-computing.html#gnu-parallel) when
  running single core jobs
- Combine [GNU parallel with Job
  arrays](capacity-computing.html#combining-job-arrays-and-gnu-parallel)
  when running a huge number of single core jobs

Policy
------

1. A user is allowed to submit at most 100 jobs. Each job may be [a job
   array](capacity-computing.html#job-arrays).
2. The array size is at most 1000 subjobs.
[]()Job arrays
--------------

run has to be used properly.

### Submit the job array

To submit the job array, use the qsub -J command. The 900 jobs of the
[example above](capacity-computing.html#array_example) may
be submitted like this:

```
$ qsub -N JOBNAME -J 1-900 jobscript

$ qstat -u $USER -tJ
```

Read more on job arrays in the [PBSPro Users
guide](../../pbspro-documentation.html).
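A minimal jobscript for such an array might look like the following
sketch. The `tasklist` file, program name, and resource line are
assumptions; PBS Pro exposes the subjob's index as `$PBS_ARRAY_INDEX`:

```
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00

# each subjob picks the line of tasklist matching its array index
cd "$PBS_O_WORKDIR"
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)

# run the (hypothetical) program on that task's input
./myprog "$TASK" > "$TASK.out"
```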
[]()GNU parallel
----------------

$TASK.out name.

### Submit the job

To submit the job, use the qsub command. The 101 tasks' job of the
[example above](capacity-computing.html#gp_example) may be
submitted like this:

```
$ qsub -N JOBNAME jobscript
```
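A single-node jobscript driving GNU parallel could be sketched like
this (the module name, task list, and program are assumptions — the
idea is one single-core task per core, 16 at a time):

```
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00

module load parallel     # hypothetical module name; check `module avail`
cd "$PBS_O_WORKDIR"

# run one single-core task per line of tasklist, 16 concurrently;
# {} is the task, {#} is GNU parallel's sequence number
parallel -j 16 './myprog {} > {#}.out' :::: tasklist
```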
Select subjob walltime and the number of tasks per subjob carefully
### Submit the job array

To submit the job array, use the qsub -J command. The 992 tasks' job of
the [example
above](capacity-computing.html#combined_example) may be
submitted like this:
```
$ qsub -N JOBNAME -J 1-992:32 jobscript
```

Examples
--------
Download the examples in
[capacity.zip](capacity-computing-examples),
illustrating the above listed ways to run a huge number of jobs. We
recommend trying out the examples before using this for running
production jobs.
Resource Allocation and Job Execution
=====================================

To run a [job](../introduction.html), [computational
resources](../introduction.html) for this particular job
must be allocated. This is done via the PBS Pro job workload manager
software, which efficiently distributes workloads across the
supercomputer. Extensive information about PBS Pro can be found in the
[official documentation
here](../../pbspro-documentation.html), especially in
the [PBS Pro User's
Guide](../../pbspro-documentation/pbspro-users-guide.1).
Resources Allocation Policy
---------------------------

The resources are allocated to the job in a fairshare fashion, subject
to constraints set by the queue and resources available to the Project.
[The Fairshare](job-priority.html) at Anselm ensures
that individual users may consume an approximately equal amount of
resources per week. The resources are accessible via several queues for
queueing the jobs. The queues provide prioritized and exclusive access
to the computational resources. The following queues are available to
Anselm users:

- **qexp**, the Express queue
- **qprod**, the Production queue

Check the queue status at <https://extranet.it4i.cz/anselm/>

Read more on the [Resource Allocation
Policy](resources-allocation-policy.html) page.
Job submission and execution
----------------------------

resources are allocated, the jobscript or interactive shell is executed
on the first of the allocated nodes.

Read more on the [Job submission and
execution](job-submission-and-execution.html) page.

Capacity computing
------------------

huge number of jobs, including **ways to run a huge number of single
core jobs**.

Read more on the [Capacity
computing](capacity-computing.html) page.
Fairshare priority is used for ranking jobs with equal queue priority.

Fairshare priority is calculated as

![](fairshare_formula.png)

where MAX_FAIRSHARE has value 1E6,
usage~Project~ is the cumulated usage by all members of the selected
project,

Job execution priority (job sort formula) is calculated as:

![](job_sort_formula.png)
### Job backfilling
the first node in the allocation.

All qsub options may be [saved directly into the
jobscript](job-submission-and-execution.html#PBSsaved). In
such a case, no options to qsub are needed.

```
$ qsub ./myjob
```

with Intel Xeon E5-2665 CPU.
Groups of computational nodes are connected to chassis integrated
Infiniband switches. These switches form the leaf switch layer of the
[Infiniband network](../network.html) fat tree topology. Nodes sharing
the leaf switch can communicate most efficiently. Sharing the same
switch prevents hops in the network and provides for unbiased, most
efficient network communication.
Nodes sharing the same switch may be selected via the PBS resource
attribute ibswitch. Values of this attribute are iswXX, where XX is the
switch number. The node-switch mapping can be seen in the [Hardware
Overview](../hardware-overview.html) section.
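For illustration, selecting nodes on one particular switch might look
like this (the switch name `isw10` and node count are hypothetical —
consult the node-switch mapping for real values):

```
# request 4 full nodes that all hang off InfiniBand switch isw10
$ qsub -q qprod -l select=4:ncpus=16:ibswitch=isw10 ./myjob
```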
We recommend allocating compute nodes of a single switch when the best
possible computational network performance is required to run the job.

directory. The mympiprog.x is executed as one process per node, on all
allocated nodes.

Consider preloading inputs and executables onto the [shared
scratch](../storage.html) before the calculation starts.

In some cases, it may be impractical to copy the inputs to scratch and
outputs to home. This is especially true when very large input and

allocated nodes. If mympiprog.x implements OpenMP threads, it will run
16 threads per node.
More information is found in the [Running
OpenMPI](../software/mpi-1/Running_OpenMPI.html) and
[Running MPICH2](../software/mpi-1/running-mpich2.html)
sections.
### Example Jobscript for Single Node Calculation[]()

Local scratch directory is often useful for single node jobs. Local
scratch will be deleted immediately after the job ends.

Example jobscript for single node calculation, using [local
scratch](../storage.html) on the node:

```
#!/bin/bash

# the body below completes the truncated example; the local scratch
# path /lscratch/$PBS_JOBID and the file names are assumptions

# change to the local scratch directory
cd /lscratch/$PBS_JOBID || exit

# copy input file and executable to scratch
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/myprog.x .

# execute the calculation
./myprog.x

# copy output file to home
cp output $PBS_O_WORKDIR/.

exit
```
### Other Jobscript Examples

Further jobscript examples may be found in the
[Software](../software.1.html) section and the [Capacity
computing](capacity-computing.html) section.
The resources are allocated to the job in a fairshare fashion, subject
to constraints set by the queue and resources available to the Project.
The Fairshare at Anselm ensures that individual users may consume an
approximately equal amount of resources per week. Detailed information
is in the [Job scheduling](job-priority.html) section. The
resources are accessible via several queues for queueing the jobs. The
queues provide prioritized and exclusive access to the computational
resources. The following table provides the queue partitioning overview:
*(Queue partitioning table truncated here; qfree is the free resource
queue. Check the current queue configuration at
<https://extranet.it4i.cz/anselm/>.)*
**The qfree queue is not free of charge**. [Normal
accounting](resources-allocation-policy.html#resources-accounting-policy)
applies. However, it allows for utilization of free resources, once a
Project has exhausted all its allocated computational resources. This
does not apply to Directors Discretion projects (DD projects) by
default. Usage of qfree after exhaustion of a DD project's computational
resources is allowed upon request for this queue.
**The qexp queue is equipped with nodes that do not all have the very
same CPU clock speed.** Should you need the very same CPU speed, you
have to select the proper nodes during the PBS job submission.

The job wall clock time defaults to **half the maximum time**, see the
table above. Longer wall time limits can be [set manually, see
examples](job-submission-and-execution.html).

Jobs that exceed the reserved wall clock time (Req'd Time) get killed
automatically. Wall clock time limit can be changed for queuing jobs
Anselm users may check the current queue configuration at

Check the status of jobs, queues and compute nodes at
<https://extranet.it4i.cz/anselm/>

![rspbs web interface](rsweb.png)

Display the queue status on Anselm:
of whether the cores are actually used for any calculation. 1 core-hour
is defined as 1 processor core allocated for 1 hour of wall clock time.
Allocating a full node (16 cores) for 1 hour amounts to 16 core-hours.
See the example in the [Job submission and
execution](job-submission-and-execution.html) section.
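For instance, following that definition, a multi-node allocation works
out as:

```latex
2\ \text{nodes} \times 16\ \text{cores/node} \times 12\ \text{hours}
  = 384\ \text{core-hours}
```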
### Check consumed resources
Intel MPI on Xeon Phi
---------------------

The [MPI section of the Intel Xeon Phi
chapter](../../../intel-xeon-phi.html) provides details
on how to run Intel MPI code on the Xeon Phi architecture.
of license is done on the command line or directly in the user's
PBS file (see the individual products). [More about
licensing here](ansys/licensing.html)

To load the latest version of any ANSYS product (Mechanical, Fluent,
CFX, MAPDL,...) load the module:
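The module command elided above would look something like this (the
exact module name is an assumption — verify with `module avail`):

```
$ module avail ansys     # list the available ANSYS modules
$ module load ansys      # hypothetical name of the latest-version module
```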
solution to Anselm directly from the client's Workbench project
(see ANSYS RSM service).