Commit 231dd723 authored by Lukáš Krupčík

fix links

parent 2620cbbe
5 merge requests: !368, !367, !366 (Update prace.md to document the change from qprace to qprod as the default...), !323 (extended-acls-storage-section), !219 (Virtual environment, upgrade MKdocs, upgrade Material design)
Showing 66 additions and 66 deletions

@@ -9,13 +9,13 @@ However, executing a huge number of jobs via the PBS queue may strain the system

!!! note
    Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.

* Use [Job arrays](anselm/capacity-computing/#job-arrays) when running a huge number of [multithread](anselm/capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
* Use [GNU parallel](anselm/capacity-computing/#gnu-parallel) when running single core jobs
* Combine [GNU parallel with Job arrays](anselm/capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs

## Policy

1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](anselm/capacity-computing/#job-arrays).
1. The array size is at most 1000 subjobs.

## Job Arrays

@@ -76,7 +76,7 @@ If running a huge number of parallel multicore (in means of multinode multithrea

### Submit the Job Array

To submit the job array, use the qsub -J command. The 900 jobs of the [example above](anselm/capacity-computing/#array_example) may be submitted like this:

```console
$ qsub -N JOBNAME -J 1-900 jobscript
```
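
For context, the jobscript referenced above typically selects its own piece of work via the PBS_ARRAY_INDEX variable that PBS Pro sets in each subjob. The following is only an illustrative sketch; the tasklist file, program name and queue are placeholders, not taken from this commit:

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16

# Each subjob gets its index (1-900 in the example above) in PBS_ARRAY_INDEX.
IDX=$PBS_ARRAY_INDEX

# Hypothetical tasklist: one input name per line; pick the line matching this index.
TASK=$(sed -n "${IDX}p" "$PBS_O_WORKDIR/tasklist")

# Run the placeholder program on the selected input.
cd "$PBS_O_WORKDIR"
./myprog.x "$TASK" > "output.$IDX"
```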

@@ -207,7 +207,7 @@ In this example, tasks from the tasklist are executed via the GNU parallel. The

### Submit the Job

To submit the job, use the qsub command. The 101 task job of the [example above](anselm/capacity-computing/#gp_example) may be submitted as follows:

```console
$ qsub -N JOBNAME jobscript
```
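
As an illustration of what such a jobscript might contain, here is a minimal sketch that feeds a tasklist to GNU parallel on one 16-core node; the module, file and program names are placeholders:

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16

# Load GNU parallel (module name is indicative only).
ml parallel

cd "$PBS_O_WORKDIR"

# Run one single core task per line of the tasklist, 16 tasks at a time.
cat tasklist | parallel -j 16 './myprog.x {} > output.{#}'
```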

@@ -292,7 +292,7 @@ When deciding this values, keep in mind the following guiding rules:

### Submit the Job Array (-J)

To submit the job array, use the qsub -J command. The 992 task job of the [example above](anselm/capacity-computing/#combined_example) may be submitted like this:

```console
$ qsub -N JOBNAME -J 1-992:32 jobscript
```
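
The `-J 1-992:32` form starts one subjob per block of 32 tasks, so PBS_ARRAY_INDEX takes the values 1, 33, 65, and so on up to 961. Inside the jobscript the index is then typically used as an offset into the tasklist, along the lines of this illustrative sketch (file and program names are placeholders):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16

cd "$PBS_O_WORKDIR"

# Each subjob processes its block of 32 consecutive tasks from the tasklist,
# running 16 of them at a time via GNU parallel.
sed -n "${PBS_ARRAY_INDEX},$((PBS_ARRAY_INDEX + 31))p" tasklist | \
    parallel -j 16 './myprog.x {} > output.${PBS_ARRAY_INDEX}.{#}'
```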

@@ -52,7 +52,7 @@ Anselm is cluster of x86-64 Intel based nodes built with Bull Extreme Computing

### Compute Node Summary

| Node type | Count | Range | Memory | Cores | [Access](general/resources-allocation-policy/) |
| ---------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
| Nodes without an accelerator | 180 | cn[1-180] | 64 GB | 16 @ 2.4 GHz | qexp, qprod, qlong, qfree, qprace, qatlas |
| Nodes with a GPU accelerator | 23 | cn[181-203] | 96 GB | 16 @ 2.3 GHz | qnvidia, qexp |

@@ -17,16 +17,16 @@ There are four types of compute nodes:

* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives

[More about Compute nodes](anselm/compute-nodes/).

GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](anselm/resources-allocation-policy/).

All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network](anselm/network/).

Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.

All of the nodes share a 360 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](anselm/storage/).

User access to the Anselm cluster is provided by two login nodes, login1 and login2, and a data mover node, dm1. [More about accessing the cluster.](anselm/shell-and-data-access/)

The parameters are summarized in the following tables:

@@ -35,7 +35,7 @@ The parameters are summarized in the following tables:

| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | Linux (CentOS) |
| [**Compute nodes**](anselm/compute-nodes/) | |
| Total | 209 |
| Processor cores | 16 (2 x 8 cores) |
| RAM | min. 64 GB, min. 4 GB per core |

@@ -57,4 +57,4 @@ The parameters are summarized in the following tables:

| MIC accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | - |

For more details refer to [Compute nodes](anselm/compute-nodes/), [Storage](anselm/storage/), and [Network](anselm/network/).

# Introduction

Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totalling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](anselm/hardware-overview/).

The cluster runs an [operating system](software/operating-system/) which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).

The user data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.

The PBS Professional workload manager provides [computing resources allocations and job execution](anselm/resources-allocation-policy/).

Read more on how to [apply for resources](general/applying-for-resources/), [obtain login credentials](general/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](anselm/shell-and-data-access/).

@@ -92,9 +92,9 @@ In this example, we allocate 4 nodes, 16 cores per node, selecting only the node

### Placement by IB Switch

Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [InfiniBand network](anselm/network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and facilitates unbiased, highly efficient network communication.

Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](anselm/hardware-overview/) section.

We recommend allocating compute nodes to a single switch when the best possible computational network performance is required to run the job efficiently:
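
As an illustration only (the switch name, project ID and script name below are placeholders), such a request might look like:

```console
$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16:ibswitch=isw20 ./myjob
```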

@@ -373,7 +373,7 @@ exit

In this example, input and executable files are assumed to be preloaded manually in the /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options controlling the behavior of the MPI execution. mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.

More information can be found in the [Running OpenMPI](software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/) sections.
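
For reference, the qsub options mentioned above are typically combined as in this hedged sketch (the project ID and script name are placeholders; the node and thread counts mirror the example in the text):

```console
$ qsub -A PROJECT_ID -q qprod -l select=100:ncpus=16:mpiprocs=1:ompthreads=16 ./myjob
```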

### Example Jobscript for Single Node Calculation

@@ -381,7 +381,7 @@ sections.

!!! note
    The local scratch directory is often useful for single node jobs. The local scratch directory will be deleted immediately after the job ends.

Example jobscript for a single node calculation, using [local scratch](anselm/storage/) storage on the node:

```bash
#!/bin/bash
```
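
The diff shows only the opening line of that jobscript. Purely for illustration, a minimal single node jobscript of this kind might look like the sketch below; the /lscratch/$PBS_JOBID path convention and the input/program names are assumptions, not taken from this commit:

```bash
#!/bin/bash

# Illustrative only: compute in the node-local scratch and copy results back.
SCRDIR=/lscratch/$PBS_JOBID
mkdir -p "$SCRDIR"
cd "$SCRDIR" || exit

# Placeholder names for the input file and the program.
cp "$PBS_O_WORKDIR"/input .
cp "$PBS_O_WORKDIR"/myprog.x .

./myprog.x < input > output

# Copy the result back before the local scratch is cleaned up at job end.
cp output "$PBS_O_WORKDIR"/.
exit
```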

@@ -407,4 +407,4 @@ In this example, a directory in /home holds the input file input and executable

### Other Jobscript Examples

Further jobscript examples may be found in the software section and the [Capacity computing](anselm/capacity-computing/) section.

@@ -2,7 +2,7 @@

## Job Queue Policies

The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](anselm/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:

!!! note
    Check the queue status at <https://extranet.it4i.cz/anselm/>

@@ -29,7 +29,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const

## Queue Notes

The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](anselm/job-submission-and-execution/).

Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queued jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
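
For example, a queued job's limit could be raised like this (the job ID and the new limit are placeholders):

```console
$ qalter -l walltime=12:00:00 JOBID
```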

@@ -204,9 +204,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f

## Graphical User Interface

* The [X Window system](general/accessing-the-clusters/graphical-user-interface/x-window-system/) is the principal way to get GUI access to the clusters.
* [Virtual Network Computing](general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).

## VPN Access

* Access IT4Innovations internal resources via [VPN](general/accessing-the-clusters/vpn-access/).

@@ -105,7 +105,7 @@ The HOME filesystem is mounted in directory /home. Users home directories /home/

The HOME filesystem should not be used to archive data of past Projects or other unrelated data.

The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](general/obtaining-login-credentials/obtaining-login-credentials/).

The filesystem is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.

@@ -30,7 +30,7 @@ fi

In order to configure your shell for running a particular application on the clusters, we use the Module package interface.

Application modules on clusters are built using [EasyBuild](software/tools/easybuild/). The modules are divided into the following structure:

```
 base: Default module class
```

@@ -61,4 +61,4 @@ Application modules on clusters are built using [EasyBuild](/software/tools/easy

!!! note
    The modules set up the application paths, library paths and environment variables for running a particular application.

The modules may be loaded, unloaded, and switched according to momentary needs. For details see [here](software/modules/lmod/).
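
For orientation, module handling with Lmod typically looks like the following sketch; the module name is only an example, not taken from this commit:

```console
$ ml av      # list available modules
$ ml GCC     # load a module (example name)
$ ml         # show currently loaded modules
$ ml -GCC    # unload the module again
```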

@@ -2,7 +2,7 @@

The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network").

VNC-based connections are usually faster (require less network bandwidth) than [X11](general/accessing-the-clusters/graphical-user-interface/x-window-system) applications forwarded directly through ssh.

The recommended clients are [TightVNC](http://www.tightvnc.com) or [TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform).

@@ -230,7 +230,7 @@ Allow incoming X11 graphics from the compute nodes at the login node:

```console
$ xhost +
```

Get an interactive session on a compute node (for more detailed info [look here](anselm/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY on the compute node. In this example, we want a complete node (16 cores) from the production queue:

```console
$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A PROJECT_ID -q qprod -l select=1:ncpus=16
```

@@ -23,7 +23,7 @@ We recommned you to download "**A Windows installer for everything except PuTTYt

* Category - Connection - SSH - Auth:
  Select Attempt authentication using Pageant.
  Select Allow agent forwarding.
  Browse and select your [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file.

![](../../../img/PuTTY_keyV.png)

@@ -36,7 +36,7 @@ We recommned you to download "**A Windows installer for everything except PuTTYt

![](../../../img/PuTTY_open_Salomon.png)

* Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
* Enter the passphrase for the selected [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file if the Pageant **SSH authentication agent is not used.**

## Another PuTTY Settings

@@ -63,7 +63,7 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and

You can change the password of your SSH key with "PuTTY Key Generator". Make sure to back up the key.

* Load your [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file with the _Load_ button.
* Enter your current passphrase.
* Change key passphrase.
* Confirm key passphrase.

@@ -104,4 +104,4 @@ You can generate an additional public/private key pair and insert public key int

![](../../../img/PuttyKeygenerator_006V.png)

* Now you can insert the additional public key into the authorized_keys file for authentication with your own private key.

You must log in using the ssh key received after registration. Then proceed to [How to add your own key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/).

@@ -15,7 +15,7 @@ It is impossible to connect to VPN from other operating systems.

## VPN Client Installation

You can install the VPN client from the web interface after a successful login with your [IT4I credentials](general/obtaining-login-credentials/obtaining-login-credentials/#login-credentials) at [https://vpn.it4i.cz/user](https://vpn.it4i.cz/user).

![](../../img/vpn_web_login.png)

@@ -8,4 +8,4 @@ Anyone is welcomed to apply via the [Directors Discretion.](http://www.it4i.cz/o

Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).

In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by the scientific excellence of the proposal, its computational maturity and expected impacts. Proposals undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information can be found on the [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](general/obtaining-login-credentials/obtaining-login-credentials/) pages.

@@ -7,7 +7,7 @@ The computational resources of IT4I are allocated by the Allocation Committee to

![](../../img/Authorization_chain.png)

!!! note
    You need to either [become the PI](general/applying-for-resources) or [be named as a collaborator](#authorization-by-web) by a PI in order to access and use the clusters.

The Head of Supercomputing Services acts as the PI of project DD-13-5. By joining this project, you may **access and explore the clusters**, use the software, development environment and computers via the qexp and qfree queues. You may use these resources for your own education/research; no paperwork is required. All IT4I employees may contact the Head of Supercomputing Services in order to obtain **free access to the clusters**.

@@ -141,7 +141,7 @@ You will receive your personal login credentials by protected e-mail. The login

1. ssh private key and private key passphrase
1. system password

The clusters are accessed by the [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. The username and password are used for login to the [information systems](http://support.it4i.cz/).

## Authorization by Web

@@ -192,7 +192,7 @@ On Linux, use

```console
local $ ssh-keygen -f id_rsa -p
```

On Windows, use [PuTTY Key Generator](general/accessing-the-clusters/shell-access-and-data-transfer/putty/#putty-key-generator).

## Certificates for Digital Signatures

@@ -207,7 +207,7 @@ Certificate generation process for academic purposes, utilizing the CESNET certi

If you are not able to obtain a certificate from any of the respected certification authorities, follow the Alternative Way below.

A FAQ about certificates can be found here: [Certificates FAQ](general/obtaining-login-credentials/certificates-faq/).

## Alternative Way to Personal Certificate

# Resource Allocation and Job Execution

To run a [job](/#terminology-frequently-used-on-these-pages), [computational resources](salomon/resources-allocation-policy#resource-accounting-policy) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [PBS Pro User's Guide](/pbspro).

## Resources Allocation Policy

The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](salomon/job-priority#fair-share-priority) ensures that individual users may consume approximately equal amounts of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are the most important:

* **qexp**, the Express queue
* **qprod**, the Production queue

@@ -16,7 +16,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const

!!! note
    Check the queue status at [https://extranet.it4i.cz/](https://extranet.it4i.cz/)

Read more on the [Resource Allocation Policy](salomon/resources-allocation-policy) page.

## Job Submission and Execution

@@ -25,7 +25,7 @@ Read more on the [Resource AllocationPolicy](/salomon/resources-allocation-polic

The qsub command submits the job into the queue. It creates a request to the PBS Job manager for the allocation of the specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**

Read more on the [Job submission and execution](salomon/job-submission-and-execution) page.
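
By way of illustration (the project ID and script name are placeholders), a whole-node production job might be submitted like this:

```console
$ qsub -A PROJECT_ID -q qprod -l select=1:ncpus=16 ./myjob
```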

## Capacity Computing

@@ -36,4 +36,4 @@ Use GNU Parallel and/or Job arrays when running (many) single core jobs.

In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single core jobs**.

Read more on the [Capacity computing](salomon/capacity-computing) page.

@@ -44,7 +44,7 @@ Configure network for virtualization, create interconnect for fast communication

```console
$ qsub ... -l virt_network=true
```

[See Tap Interconnect](software/tools/virtualization/#tap-interconnect)

## x86 Adapt Support

@@ -2,7 +2,7 @@

## Introduction

PRACE users coming to the TIER-1 systems offered through the DECI calls are in general treated as standard users, so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](general/obtaining-login-credentials/obtaining-login-credentials/) if the same level of access is required.

All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing with the local documentation here.

@@ -10,13 +10,13 @@ All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation

If you have any trouble, need information, require support or want to install additional software, use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).

Information about the local services is provided in the [introduction of general user documentation Salomon](salomon/introduction/) and the [introduction of general user documentation Anselm](anselm/introduction/). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.

## Obtaining Login Credentials

In general, PRACE users already have a PRACE account set up through their HOMESITE (institution from their country) as a result of a rewarded PRACE project proposal. This includes a signed PRACE AuP, generated and registered certificates, etc.

If there's a special need, a PRACE user can get a standard (local) account at IT4Innovations. To get an account on a cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so see the corresponding [section of the general documentation here](general/obtaining-login-credentials/obtaining-login-credentials/).

## Accessing the Cluster

@@ -147,9 +147,9 @@ $ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_

### Access to X11 Applications (VNC)

If the user needs to run an X11-based graphical application and does not have an X11 server, the applications can be run using the VNC service. If the user is using regular SSH-based access, see the [section in general documentation](general/accessing-the-clusters/graphical-user-interface/x-window-system/).

If the user uses GSI SSH-based access, the procedure is similar to the SSH-based access ([look here](general/accessing-the-clusters/graphical-user-interface/x-window-system/)), only the port forwarding must be done using GSI SSH:

```console
$ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
```
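
Once the tunnel above is established, a VNC client on the local machine would then be pointed at the forwarded port, for example as below (the port 5961 simply mirrors the command above, and the client invocation is only indicative):

```console
$ vncviewer localhost:5961
```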

@@ -157,11 +157,11 @@ $ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961

### Access With SSH

After obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, see [the section in general documentation for Salomon](salomon/shell-and-data-access/) and [the section in general documentation for Anselm](anselm/shell-and-data-access/).

## File Transfers

PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, see [the section in the general documentation for Salomon](salomon/shell-and-data-access/) and [the section in general documentation for Anselm](anselm/shell-and-data-access/).

Apart from the standard mechanisms, a GridFTP server running the Globus Toolkit GridFTP service is available for PRACE users to transfer data to/from the Salomon cluster. The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).

@@ -302,7 +302,7 @@ Generally both shared file systems are available through GridFTP:

| /home | Lustre | Default HOME directories of users in format /home/prace/login/ |
| /scratch | Lustre | Shared SCRATCH mounted on the whole cluster |

More information about the shared file systems is available [for Salomon here](salomon/storage/) and [for Anselm here](anselm/storage).

!!! hint
    The `prace` directory is used for PRACE users on the SCRATCH file system.

@@ -318,7 +318,7 @@ Only Salomon cluster /scratch:

There are some limitations for PRACE users when using the cluster. By default, PRACE users aren't allowed to access special queues in PBS Pro that give high priority or exclusive access to some special equipment such as accelerated nodes and high memory (fat) nodes. There may also be restrictions on obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of an insufficient number of licenses.

For production runs, always use the scratch file systems. The available file systems are described [for Salomon here](salomon/storage/) and [for Anselm here](anselm/storage).

### Software, Modules and PRACE Common Production Environment

@@ -332,7 +332,7 @@ $ ml prace

### Resource Allocation and Job Execution

General information about resource allocation, job queuing and job execution can be found in the [section of general documentation for Salomon](salomon/resources-allocation-policy/) and the [section of general documentation for Anselm](anselm/resources-allocation-policy/).

For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues, "qexp" and "qfree".

@@ -356,7 +356,7 @@ For Anselm:

### Accounting & Quota

The resources that are currently subject to accounting are the core hours. The core hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See the [example in the general documentation for Salomon](salomon/resources-allocation-policy/) and the [example in the general documentation for Anselm](anselm/resources-allocation-policy/).

PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).

@@ -147,7 +147,7 @@ Display status information for all user's subjobs.

```console
$ qstat -u $USER -tJ
```

Read more on job arrays in the [PBS Pro User's Guide](software/pbspro/).

## GNU Parallel

@@ -5,7 +5,7 @@

Salomon is a cluster of x86-64 Intel-based nodes. The cluster contains two types of compute nodes of the same processor type and memory size.

Compute nodes with a MIC accelerator **contain two Intel Xeon Phi 7120P accelerators.**

[More about schematic representation of the Salomon cluster compute nodes IB topology](salomon/ib-single-plane-topology/).

### Compute Nodes Without Accelerator

@@ -4,7 +4,7 @@

The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with a capacity of 1.69 PB, which is available for the scratch project data. User access to the Salomon cluster is provided by four login nodes.

[More about schematic representation of the Salomon cluster compute nodes IB topology](salomon/ib-single-plane-topology/).

![Salomon](../img/salomon-2.jpg)

@@ -17,7 +17,7 @@ The parameters are summarized in the following tables:

| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | CentOS 6.x Linux |
| [**Compute nodes**](salomon/compute-nodes/) | |
| Total | 1008 |
| Processor | 2 x Intel Xeon E5-2680v3, 2.5 GHz, 12 cores |
| RAM | 128 GB, 5.3 GB per core, DDR4@2133 MHz |

@@ -36,7 +36,7 @@ The parameters are summarized in the following tables:

| w/o accelerator | 576 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | - |
| MIC accelerated | 432 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | 2 x Intel Xeon Phi 7120P, 61 cores, 16 GB RAM |

For more details refer to the [Compute nodes](salomon/compute-nodes/).

## Remote Visualization Nodes