Commit 231dd723 authored by Lukáš Krupčík

fix links

parent 2620cbbe
Pipeline #5156 passed with stages in 1 minute and 1 second
@@ -9,13 +9,13 @@ However, executing a huge number of jobs via the PBS queue may strain the system
!!! note
Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
* Use [Job arrays](/anselm/capacity-computing/#job-arrays) when running a huge number of [multithread](anselm/capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
* Use [GNU parallel](/anselm/capacity-computing/#gnu-parallel) when running single core jobs
* Combine [GNU parallel with Job arrays](/anselm/capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
* Use [Job arrays](anselm/capacity-computing/#job-arrays) when running a huge number of [multithread](anselm/capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
* Use [GNU parallel](anselm/capacity-computing/#gnu-parallel) when running single core jobs
* Combine [GNU parallel with Job arrays](anselm/capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
## Policy
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](/anselm/capacity-computing/#job-arrays).
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](anselm/capacity-computing/#job-arrays).
1. The array size is at most 1000 subjobs.
## Job Arrays
@@ -76,7 +76,7 @@ If running a huge number of parallel multicore (in means of multinode multithrea
### Submit the Job Array
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](/anselm/capacity-computing/#array_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](anselm/capacity-computing/#array_example) may be submitted like this:
```console
$ qsub -N JOBNAME -J 1-900 jobscript
@@ -207,7 +207,7 @@ In this example, tasks from the tasklist are executed via the GNU parallel. The
### Submit the Job
To submit the job, use the qsub command. The 101 task job of the [example above](/anselm/capacity-computing/#gp_example) may be submitted as follows:
To submit the job, use the qsub command. The 101 task job of the [example above](anselm/capacity-computing/#gp_example) may be submitted as follows:
```console
$ qsub -N JOBNAME jobscript
@@ -292,7 +292,7 @@ When deciding these values, keep in mind the following guiding rules:
### Submit the Job Array (-J)
To submit the job array, use the qsub -J command. The 992 task job of the [example above](/anselm/capacity-computing/#combined_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 992 task job of the [example above](anselm/capacity-computing/#combined_example) may be submitted like this:
```console
$ qsub -N JOBNAME -J 1-992:32 jobscript
@@ -52,7 +52,7 @@ Anselm is a cluster of x86-64 Intel based nodes built with Bull Extreme Computing
### Compute Node Summary
| Node type | Count | Range | Memory | Cores | [Access](/general/resources-allocation-policy/) |
| Node type | Count | Range | Memory | Cores | [Access](general/resources-allocation-policy/) |
| ---------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
| Nodes without an accelerator | 180 | cn[1-180] | 64GB | 16 @ 2.4GHz | qexp, qprod, qlong, qfree, qprace, qatlas |
| Nodes with a GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qnvidia, qexp |
@@ -17,16 +17,16 @@ There are four types of compute nodes:
* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
[More about Compute nodes](/anselm/compute-nodes/).
[More about Compute nodes](anselm/compute-nodes/).
GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](/anselm/resources-allocation-policy/).
GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](anselm/resources-allocation-policy/).
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network](/anselm/network/).
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network](anselm/network/).
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
All of the nodes share a 360 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](/anselm/storage/).
All of the nodes share a 360 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](anselm/storage/).
User access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing the cluster.](/anselm/shell-and-data-access/)
User access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing the cluster.](anselm/shell-and-data-access/)
The parameters are summarized in the following tables:
@@ -35,7 +35,7 @@ The parameters are summarized in the following tables:
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | Linux (CentOS) |
| [**Compute nodes**](/anselm/compute-nodes/) | |
| [**Compute nodes**](anselm/compute-nodes/) | |
| Total | 209 |
| Processor cores | 16 (2 x 8 cores) |
| RAM | min. 64 GB, min. 4 GB per core |
@@ -57,4 +57,4 @@ The parameters are summarized in the following tables:
| MIC accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | - |
For more details refer to [Compute nodes](/anselm/compute-nodes/), [Storage](anselm/storage/), and [Network](anselm/network/).
For more details refer to [Compute nodes](anselm/compute-nodes/), [Storage](anselm/storage/), and [Network](anselm/network/).
# Introduction
Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totalling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](/anselm/hardware-overview/).
Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totalling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](anselm/hardware-overview/).
The cluster runs with an [operating system](/software/operating-system/) which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
The cluster runs with an [operating system](software/operating-system/) which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
The user data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
The PBS Professional workload manager provides [computing resources allocations and job execution](/anselm/resources-allocation-policy/).
The PBS Professional workload manager provides [computing resources allocations and job execution](anselm/resources-allocation-policy/).
Read more on how to [apply for resources](/general/applying-for-resources/), [obtain login credentials](general/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](/anselm/shell-and-data-access/).
Read more on how to [apply for resources](general/applying-for-resources/), [obtain login credentials](general/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](anselm/shell-and-data-access/).
@@ -92,9 +92,9 @@ In this example, we allocate 4 nodes, 16 cores per node, selecting only the node
### Placement by IB Switch
Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [InfiniBand network](/anselm/network/) fat-tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and facilitates unbiased, highly efficient network communication.
Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [InfiniBand network](anselm/network/) fat-tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and facilitates unbiased, highly efficient network communication.
Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](/anselm/hardware-overview/) section.
Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](anselm/hardware-overview/) section.
We recommend allocating compute nodes to a single switch when the best possible computational network performance is required to run the job efficiently:
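For illustration, a single-switch allocation might be requested as follows. This is only a sketch: PROJECT_ID is a placeholder and isw11 stands for whichever switch number you pick from the node-switch mapping.
```console
$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16:ibswitch=isw11 ./jobscript
```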
@@ -373,7 +373,7 @@ exit
In this example, input and executable files are assumed to be preloaded manually in the /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options controlling the behavior of the MPI execution. mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.
More information can be found in the [Running OpenMPI](/software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/)
More information can be found in the [Running OpenMPI](software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/)
sections.
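For illustration, the allocation described in the example above (100 nodes, one MPI process per node, 16 OpenMP threads per process) might be requested as follows; this is a sketch only, with PROJECT_ID and the jobscript name as placeholders.
```console
$ qsub -A PROJECT_ID -q qprod -l select=100:ncpus=16:mpiprocs=1:ompthreads=16 ./myjobscript
```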
### Example Jobscript for Single Node Calculation
@@ -381,7 +381,7 @@ sections.
!!! note
The local scratch directory is often useful for single node jobs. Local scratch memory will be deleted immediately after the job ends.
Example jobscript for single node calculation, using [local scratch](/anselm/storage/) memory on the node:
Example jobscript for single node calculation, using [local scratch](anselm/storage/) memory on the node:
```bash
#!/bin/bash
@@ -407,4 +407,4 @@ In this example, a directory in /home holds the input file input and executable
### Other Jobscript Examples
Further jobscript examples may be found in the software section and the [Capacity computing](/anselm/capacity-computing/) section.
Further jobscript examples may be found in the software section and the [Capacity computing](anselm/capacity-computing/) section.
@@ -2,7 +2,7 @@
## Job Queue Policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](/anselm/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](anselm/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
!!! note
Check the queue status at <https://extranet.it4i.cz/anselm/>
@@ -29,7 +29,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
## Queue Notes
The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](/anselm/job-submission-and-execution/).
The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](anselm/job-submission-and-execution/).
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queuing jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
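For example, extending the wall clock time limit of a queued job with qalter might look like this (a sketch; JOBID and the new limit are placeholders):
```console
$ qalter -l walltime=12:00:00 JOBID
```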
@@ -204,9 +204,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
## Graphical User Interface
* The [X Window system](/general/accessing-the-clusters/graphical-user-interface/x-window-system/) is the principal way to get GUI access to the clusters.
* [Virtual Network Computing](/general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
* The [X Window system](general/accessing-the-clusters/graphical-user-interface/x-window-system/) is the principal way to get GUI access to the clusters.
* [Virtual Network Computing](general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
## VPN Access
* Access IT4Innovations internal resources via [VPN](/general/accessing-the-clusters/vpn-access/).
* Access IT4Innovations internal resources via [VPN](general/accessing-the-clusters/vpn-access/).
@@ -105,7 +105,7 @@ The HOME filesystem is mounted in directory /home. Users home directories /home/
The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](/general/obtaining-login-credentials/obtaining-login-credentials/).
The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](general/obtaining-login-credentials/obtaining-login-credentials/).
The filesystem is backed up, such that it can be restored in case of catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
@@ -30,7 +30,7 @@ fi
In order to configure your shell for running a particular application on the clusters, we use the Module package interface.
Application modules on clusters are built using [EasyBuild](/software/tools/easybuild/). The modules are divided into the following structure:
Application modules on clusters are built using [EasyBuild](software/tools/easybuild/). The modules are divided into the following structure:
```
base: Default module class
@@ -61,4 +61,4 @@ Application modules on clusters are built using [EasyBuild](/software/tools/easy
!!! note
The modules set up the application paths, library paths and environment variables for running particular application.
The modules may be loaded, unloaded and switched, according to momentary needs. For details see [here](/software/modules/lmod/).
The modules may be loaded, unloaded and switched, according to momentary needs. For details see [here](software/modules/lmod/).
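A few typical module operations are sketched below; the module name GCC is only an example, use `ml av` to see what is actually installed.
```console
$ ml av       # list available modules
$ ml GCC      # load a module
$ ml          # list currently loaded modules
$ ml -GCC     # unload a module
$ ml purge    # unload all modules
```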
@@ -2,7 +2,7 @@
The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network").
VNC-based connections are usually faster (require less network bandwidth) than [X11](/general/accessing-the-clusters/graphical-user-interface/x-window-system) applications forwarded directly through ssh.
VNC-based connections are usually faster (require less network bandwidth) than [X11](general/accessing-the-clusters/graphical-user-interface/x-window-system) applications forwarded directly through ssh.
The recommended clients are [TightVNC](http://www.tightvnc.com) or [TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform).
@@ -230,7 +230,7 @@ Allow incoming X11 graphics from the compute nodes at the login node:
$ xhost +
```
Get an interactive session on a compute node (for more detailed info [look here](/anselm/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY on the compute node. In this example, we want a complete node (16 cores in this example) from the production queue:
Get an interactive session on a compute node (for more detailed info [look here](anselm/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY on the compute node. In this example, we want a complete node (16 cores in this example) from the production queue:
```console
$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A PROJECT_ID -q qprod -l select=1:ncpus=16
@@ -23,7 +23,7 @@ We recommend you to download "**A Windows installer for everything except PuTTYt
* Category - Connection - SSH - Auth:
Select Attempt authentication using Pageant.
Select Allow agent forwarding.
Browse and select your [private key](ssh-keys/) file.
Browse and select your [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file.
![](../../../img/PuTTY_keyV.png)
@@ -36,7 +36,7 @@ We recommend you to download "**A Windows installer for everything except PuTTYt
![](../../../img/PuTTY_open_Salomon.png)
* Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
* Enter passphrase for selected [private key](/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file if Pageant **SSH authentication agent is not used.**
* Enter passphrase for selected [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file if Pageant **SSH authentication agent is not used.**
## Another PuTTY Settings
@@ -63,7 +63,7 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and
You can change the password of your SSH key with "PuTTY Key Generator". Make sure to back up the key.
* Load your [private key](/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file with the _Load_ button.
* Load your [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) file with the _Load_ button.
* Enter your current passphrase.
* Change key passphrase.
* Confirm key passphrase.
@@ -104,4 +104,4 @@ You can generate an additional public/private key pair and insert public key int
![](../../../img/PuttyKeygenerator_006V.png)
* Now you can insert an additional public key into the authorized_keys file for authentication with your own private key.
You must log in using the ssh key received after registration. Then proceed to [How to add your own key](/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/).
You must log in using the ssh key received after registration. Then proceed to [How to add your own key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/).
@@ -15,7 +15,7 @@ It is impossible to connect to VPN from other operating systems.
## VPN Client Installation
You can install the VPN client from the web interface after a successful login with [IT4I credentials](/general/obtaining-login-credentials/obtaining-login-credentials/#login-credentials) at [https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)
You can install the VPN client from the web interface after a successful login with [IT4I credentials](general/obtaining-login-credentials/obtaining-login-credentials/#login-credentials) at [https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)
![](../../img/vpn_web_login.png)
@@ -8,4 +8,4 @@ Anyone is welcome to apply via the [Directors Discretion.](http://www.it4i.cz/o
Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).
In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by scientific excellence of the proposal, its computational maturity and expected impacts. Proposals do undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information at [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](/general/obtaining-login-credentials/obtaining-login-credentials/) page.
In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by scientific excellence of the proposal, its computational maturity and expected impacts. Proposals do undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information at [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](general/obtaining-login-credentials/obtaining-login-credentials/) page.
@@ -7,7 +7,7 @@ The computational resources of IT4I are allocated by the Allocation Committee to
![](../../img/Authorization_chain.png)
!!! note
You need to either [become the PI](/general/applying-for-resources) or [be named as a collaborator](#authorization-by-web) by a PI in order to access and use the clusters.
You need to either [become the PI](general/applying-for-resources) or [be named as a collaborator](#authorization-by-web) by a PI in order to access and use the clusters.
The Head of Supercomputing Services acts as the PI of project DD-13-5. By joining this project, you may **access and explore the clusters**, use software, the development environment, and computers via the qexp and qfree queues. You may use these resources for your own education/research; no paperwork is required. All IT4I employees may contact the Head of Supercomputing Services in order to obtain **free access to the clusters**.
@@ -141,7 +141,7 @@ You will receive your personal login credentials by protected e-mail. The login
1. ssh private key and private key passphrase
1. system password
The clusters are accessed by the [private key](/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password are used for login to the [information systems](http://support.it4i.cz/).
The clusters are accessed by the [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password are used for login to the [information systems](http://support.it4i.cz/).
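A minimal login sketch using the private key; the key path and username are placeholders, and the hostname should be replaced by the cluster you were granted access to.
```console
local $ ssh -i /path/to/id_rsa username@salomon.it4i.cz
```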
## Authorization by Web
@@ -192,7 +192,7 @@ On Linux, use
local $ ssh-keygen -f id_rsa -p
```
On Windows, use [PuTTY Key Generator](/general/accessing-the-clusters/shell-access-and-data-transfer/putty/#putty-key-generator).
On Windows, use [PuTTY Key Generator](general/accessing-the-clusters/shell-access-and-data-transfer/putty/#putty-key-generator).
## Certificates for Digital Signatures
@@ -207,7 +207,7 @@ Certificate generation process for academic purposes, utilizing the CESNET certi
If you are not able to obtain a certificate from any of the respected certification authorities, follow the Alternative Way below.
A FAQ about certificates can be found here: [Certificates FAQ](/general/obtaining-login-credentials/certificates-faq/).
A FAQ about certificates can be found here: [Certificates FAQ](general/obtaining-login-credentials/certificates-faq/).
## Alternative Way to Personal Certificate
# Resource Allocation and Job Execution
To run a [job](/#terminology-frequently-used-on-these-pages), [computational resources](/salomon/resources-allocation-policy#resource-accounting-policy) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [PBS Pro User's Guide](/pbspro).
To run a [job](/#terminology-frequently-used-on-these-pages), [computational resources](salomon/resources-allocation-policy#resource-accounting-policy) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [PBS Pro User's Guide](/pbspro).
## Resources Allocation Policy
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](/salomon/job-priority#fair-share-priority) ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are the most important:
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](salomon/job-priority#fair-share-priority) ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are the most important:
* **qexp**, the Express queue
* **qprod**, the Production queue
@@ -16,7 +16,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
!!! note
Check the queue status at [https://extranet.it4i.cz/](https://extranet.it4i.cz/)
Read more on the [Resource Allocation Policy](/salomon/resources-allocation-policy) page.
Read more on the [Resource Allocation Policy](salomon/resources-allocation-policy) page.
## Job Submission and Execution
@@ -25,7 +25,7 @@ Read more on the [Resource AllocationPolicy](/salomon/resources-allocation-polic
The qsub command submits the job into the queue. It creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 24 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
Read more on the [Job submission and execution](/salomon/job-submission-and-execution) page.
Read more on the [Job submission and execution](salomon/job-submission-and-execution) page.
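As a sketch, a whole-node allocation might be requested as follows; PROJECT_ID, the walltime, and the jobscript name are placeholders.
```console
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=24 -l walltime=02:00:00 ./myjobscript
```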
## Capacity Computing
@@ -36,4 +36,4 @@ Use GNU Parallel and/or Job arrays when running (many) single core jobs.
In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single core jobs**.
Read more on [Capacity computing](/salomon/capacity-computing) page.
Read more on [Capacity computing](salomon/capacity-computing) page.
@@ -44,7 +44,7 @@ Configure network for virtualization, create interconnect for fast communication
$ qsub ... -l virt_network=true
```
[See Tap Interconnect](/software/tools/virtualization/#tap-interconnect)
[See Tap Interconnect](software/tools/virtualization/#tap-interconnect)
## x86 Adapt Support
@@ -147,7 +147,7 @@ Display status information for all user's subjobs.
$ qstat -u $USER -tJ
```
Read more on job arrays in the [PBSPro Users guide](/software/pbspro/).
Read more on job arrays in the [PBSPro Users guide](software/pbspro/).
## GNU Parallel
@@ -5,7 +5,7 @@
Salomon is a cluster of x86-64 Intel based nodes. The cluster contains two types of compute nodes of the same processor type and memory size.
Compute nodes with a MIC accelerator **contain two Intel Xeon Phi 7120P accelerators.**
[More about schematic representation of the Salomon cluster compute nodes IB topology](/salomon/ib-single-plane-topology/).
[More about schematic representation of the Salomon cluster compute nodes IB topology](salomon/ib-single-plane-topology/).
### Compute Nodes Without Accelerator
@@ -4,7 +4,7 @@
The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
[More about schematic representation of the Salomon cluster compute nodes IB topology](/salomon/ib-single-plane-topology/).
[More about schematic representation of the Salomon cluster compute nodes IB topology](salomon/ib-single-plane-topology/).
![Salomon](../img/salomon-2.jpg)
@@ -17,7 +17,7 @@ The parameters are summarized in the following tables:
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | CentOS 6.x Linux |
| [**Compute nodes**](/salomon/compute-nodes/) | |
| [**Compute nodes**](salomon/compute-nodes/) | |
| Total | 1008 |
| Processor | 2 x Intel Xeon E5-2680v3, 2.5 GHz, 12 cores |
| RAM | 128GB, 5.3 GB per core, DDR4@2133 MHz |
@@ -36,7 +36,7 @@ The parameters are summarized in the following tables:
| w/o accelerator | 576 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | - |
| MIC accelerated | 432 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | 2 x Intel Xeon Phi 7120P, 61 cores, 16 GB RAM |
For more details refer to the [Compute nodes](/salomon/compute-nodes/).
For more details refer to the [Compute nodes](salomon/compute-nodes/).
## Remote Visualization Nodes
@@ -18,9 +18,9 @@ Each color in each physical IRU represents one dual-switch ASIC switch.
## IB Single-Plane Topology - Accelerated Nodes
Each of the 3 inter-connected D racks is equivalent to one half of an M-Cell rack. The 18 D racks with MIC accelerated nodes [r21-r38] are equivalent to 3 M-Cell racks, as shown in the diagram [7D Enhanced Hypercube](/salomon/7d-enhanced-hypercube/).
Each of the 3 inter-connected D racks is equivalent to one half of an M-Cell rack. The 18 D racks with MIC accelerated nodes [r21-r38] are equivalent to 3 M-Cell racks, as shown in the diagram [7D Enhanced Hypercube](salomon/7d-enhanced-hypercube/).
As shown in the diagram [IB Topology](/salomon/7d-enhanced-hypercube/#ib-topology):
As shown in the diagram [IB Topology](salomon/7d-enhanced-hypercube/#ib-topology):
* Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
* Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
# Introduction
Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totalling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, and at least 128 GB RAM. Nodes are interconnected through a 7D Enhanced hypercube InfiniBand network and are equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators, and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](/salomon/hardware-overview/).
Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totalling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, and at least 128 GB RAM. Nodes are interconnected through a 7D Enhanced hypercube InfiniBand network and are equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators, and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](salomon/hardware-overview/).
The cluster runs with a [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
@@ -72,6 +72,6 @@ Specifying more accurate walltime enables better scheduling, better execution ti
### Job Placement
Job [placement can be controlled by flags during submission](/salomon/job-submission-and-execution/#job_placement).
Job [placement can be controlled by flags during submission](salomon/job-submission-and-execution/#job_placement).
---8<--- "mathjax.md"
@@ -102,7 +102,7 @@ exec_vnode = (r21u05n581-mic0:naccelerators=1:ncpus=0)
Per NUMA node allocation.
Jobs are isolated by cpusets.
The UV2000 (node uv1) offers 3 TB of RAM and 104 cores, distributed in 13 NUMA nodes. A NUMA node packs 8 cores and approx. 247 GB RAM (with one exception: node 11 has only 123 GB RAM). In the PBS, the UV2000 provides 13 chunks, a chunk per NUMA node (see [Resource allocation policy](/salomon/resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to itself.
The UV2000 (node uv1) offers 3 TB of RAM and 104 cores, distributed in 13 NUMA nodes. A NUMA node packs 8 cores and approx. 247 GB RAM (with one exception: node 11 has only 123 GB RAM). In the PBS, the UV2000 provides 13 chunks, a chunk per NUMA node (see [Resource allocation policy](salomon/resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to itself.
```console
$ qsub -A OPEN-0-0 -q qfat -l select=13 ./myjob
@@ -165,7 +165,7 @@ In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per n
### Placement by Network Location
Network location of allocated nodes in the [InfiniBand network](/salomon/network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster with lower latency than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
Network location of allocated nodes in the [InfiniBand network](salomon/network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster with lower latency than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
For communication intensive jobs it is possible to set a stricter requirement - to require nodes directly connected to the same InfiniBand switch or to require nodes located in the same dimension group of the InfiniBand network.
@@ -238,7 +238,7 @@ Nodes located in the same dimension group may be allocated using node grouping o
| 6D | ehc_6d | 432,576 |
| 7D | ehc_7d | all |
In this example, we allocate 16 nodes in the same [hypercube dimension](/salomon/7d-enhanced-hypercube/) 1 group.
In this example, we allocate 16 nodes in the same [hypercube dimension](salomon/7d-enhanced-hypercube/) 1 group.
```console
$ qsub -A OPEN-0-0 -q qprod -l select=16:ncpus=24 -l place=group=ehc_1d -I
@@ -516,7 +516,7 @@ HTML commented section #2 (examples need to be reworked)
!!! note
Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
Example jobscript for single node calculation, using [local scratch](/salomon/storage/) on the node:
Example jobscript for single node calculation, using [local scratch](salomon/storage/) on the node:
```bash
#!/bin/bash
@@ -5,10 +5,10 @@ network. Only [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network may
## InfiniBand Network
All compute and login nodes of Salomon are interconnected by a 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](/salomon/7d-enhanced-hypercube/).
All compute and login nodes of Salomon are interconnected by a 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](salomon/7d-enhanced-hypercube/).
Read more about the schematic representation of the Salomon cluster [IB single-plane topology](/salomon/ib-single-plane-topology/)
([hypercube dimension](/salomon/7d-enhanced-hypercube/)).
Read more about the schematic representation of the Salomon cluster [IB single-plane topology](salomon/ib-single-plane-topology/)
([hypercube dimension](salomon/7d-enhanced-hypercube/)).
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.17.0.0 (mask 255.255.224.0). MPI may be used to establish native InfiniBand connections among the nodes.
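From an allocated compute node, the ib0 interface and its 10.17.x.x address can be checked, for example, like this:
```console
$ ip addr show ib0
```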
@@ -2,7 +2,7 @@
## Job Queue Policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share system at Salomon ensures that individual users may consume approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](/salomon/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share system at Salomon ensures that individual users may consume approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](salomon/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
!!! note
Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
@@ -35,7 +35,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
## Queue Notes
The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](/salomon/job-submission-and-execution/).
The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](salomon/job-submission-and-execution/).
Jobs that exceed the reserved wall-clock time (Req'd Time) get killed automatically. The wall-clock time limit can be changed for queuing jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
@@ -65,7 +65,7 @@ Last login: Tue Jul 9 15:57:38 2018 from your-host.example.com
```
!!! note
The environment is **not** shared between login nodes, except for [shared filesystems](/salomon/storage/).
The environment is **not** shared between login nodes, except for [shared filesystems](salomon/storage/).
## Data Transfer
@@ -79,7 +79,7 @@ Data in and out of the system may be transferred by the [scp](http://en.wikipedi
| login3.salomon.it4i.cz | 22 | scp, sftp |
| login4.salomon.it4i.cz | 22 | scp, sftp |
The authentication is by the [private key](/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
The authentication is by the [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
On Linux or Mac, use an scp or sftp client to transfer the data to Salomon:
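A minimal transfer sketch (the key path, username, login node, and target path are placeholders):
```console
local $ scp -i /path/to/id_rsa my-local-file username@login1.salomon.it4i.cz:directory/
```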
@@ -115,7 +115,7 @@ $ man sshfs
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc.
More information about the shared file systems is available [here](/salomon/storage/).
More information about the shared file systems is available [here](salomon/storage/).
## Connection Restrictions
@@ -199,9 +199,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
## Graphical User Interface
* The [X Window system](/general/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
* The [X Window system](general/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
* The [Virtual Network Computing](../general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
## VPN Access
* Access to IT4Innovations internal resources via [VPN](/general/accessing-the-clusters/vpn-access/).
* Access to IT4Innovations internal resources via [VPN](general/accessing-the-clusters/vpn-access/).
@@ -235,7 +235,7 @@ Users home directories /home/username reside on HOME file system. Accessible cap
The HOME should not be used to archive data of past Projects or other unrelated data.
The files on HOME will not be deleted until the end of the [user's lifecycle](/general/obtaining-login-credentials/obtaining-login-credentials/).
The files on HOME will not be deleted until the end of the [user's lifecycle](general/obtaining-login-credentials/obtaining-login-credentials/).
The workspace is backed up, such that it can be restored in case of catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
@@ -14,8 +14,8 @@ Remote visualization with NICE DCV software is available on two nodes.
## References
* [Graphical User Interface](/salomon/shell-and-data-access/#graphical-user-interface)
* [VPN Access](/salomon/shell-and-data-access/#vpn-access)
* [Graphical User Interface](salomon/shell-and-data-access/#graphical-user-interface)
* [VPN Access](salomon/shell-and-data-access/#vpn-access)
## Install and Run
@@ -25,7 +25,7 @@ Remote visualization with NICE DCV software is available on two nodes.
* [Linux download](http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/linux/nice-dcv-endstation-2016.0-17066.run)
* [Windows download](http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/win/nice-dcv-endstation-2016.0-17066-Release.msi)
**Install VPN client** [VPN Access](/general/accessing-the-clusters/vpn-access/) (user-computer)
**Install VPN client** [VPN Access](general/accessing-the-clusters/vpn-access/) (user-computer)
!!! note
Visualisation server is a compute node. You are not able to SSH with your private key. There are two solutions available to solve login issue.
@@ -165,9 +165,9 @@ Systems biology
We also import systems biology information like interactome information from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. The BioPAX data exchange format enables the integration of diverse pathway
resources. We successfully solved the problem of storing data released in BioPAX format into a SQL relational schema, which allowed us to import Reactome into CellBase.
### [Diagnostic Component (TEAM)](/software/bio/omics-master/diagnostic-component-team/)
### [Diagnostic Component (TEAM)](software/bio/omics-master/diagnostic-component-team/)
### [Priorization Component (BiERApp)](/software/bio/omics-master/priorization-component-bierapp/)
### [Priorization Component (BiERApp)](software/bio/omics-master/priorization-component-bierapp/)
## Usage
@@ -262,7 +262,7 @@ The ped file (file.ped) contains the following info:
FAM sample_B 0 0 2 2
```
Now, let's load the NGSPipeline module and copy the sample data to a [scratch directory](/salomon/storage/):
Now, let's load the NGSPipeline module and copy the sample data to a [scratch directory](salomon/storage/):
```console
$ ml ngsPipeline
@@ -276,7 +276,7 @@ Now, we can launch the pipeline (replace OPEN-0-0 with your Project ID):
$ ngsPipeline -i /scratch/$USER/omics/sample_data/data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/data/file.ped --project OPEN-0-0 --queue qprod
```
This command submits the processing [jobs to the queue](/salomon/job-submission-and-execution/).
This command submits the processing [jobs to the queue](salomon/job-submission-and-execution/).
If we want to re-launch the pipeline from stage 4 until stage 20, we should use the following command:
@@ -18,7 +18,7 @@ On the clusters COMSOL is available in the latest stable version. There are two
* **Non-commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing [here](/software/cae/comsol/licensing-and-available-versions/).
* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing [here](software/cae/comsol/licensing-and-available-versions/).
To load COMSOL, load the module
@@ -32,7 +32,7 @@ By default the **EDU variant** will be loaded. If user needs other version or va
$ ml av COMSOL
```
If the user needs to prepare COMSOL jobs in the interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. In order to run the COMSOL Desktop GUI on Windows, it is recommended to use the [Virtual Network Computing (VNC)](/general/accessing-the-clusters/graphical-user-interface/x-window-system/).
If the user needs to prepare COMSOL jobs in the interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. In order to run the COMSOL Desktop GUI on Windows, it is recommended to use the [Virtual Network Computing (VNC)](general/accessing-the-clusters/graphical-user-interface/x-window-system/).
Example for Salomon:
@@ -76,7 +76,7 @@ Working directory has to be created before sending the (comsol.pbs) job script i
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL API (Application Programming Interface) with the benefits of the MATLAB programming language and computing environment.
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (see the [ISV Licenses](/software/isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in the interactive mode (on Anselm, use 16 threads).
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (see the [ISV Licenses](software/isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in the interactive mode (on Anselm, use 16 threads).
```console
$ xhost +
@@ -35,7 +35,7 @@ Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molp
!!! note
The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in [SCRATCH file system - Salomon](/salomon/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
You are advised to use the -d option to point to a directory in [SCRATCH file system - Salomon](salomon/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
### Example jobscript
@@ -33,4 +33,4 @@ mpirun nwchem h2o.nw
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
* MEMORY: controls the amount of memory NWChem will use
* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem - Salomon](/salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem - Salomon](salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
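A fragment of an input file illustrating these directives might look as follows; this is a sketch only, the memory size and scratch path are placeholders and the path must exist on the scratch filesystem.
```
memory total 2000 mb
scratch_dir /scratch/work/user/myusername/nwchem-tmp
scf
  direct
end
```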
@@ -24,7 +24,7 @@ Commercial licenses:
## Intel Compilers
For information about the usage of Intel Compilers and other Intel products, read the [Intel Parallel studio](/software/intel-suite/intel-compilers/) page.
For information about the usage of Intel Compilers and other Intel products, read the [Intel Parallel studio](software/intel-suite/intel-compilers/) page.
## PGI Compilers (Only on Salomon)
@@ -187,8 +187,8 @@ For more information see the man pages.
## Java
For information on how to use Java (runtime and/or compiler), read the [Java page](/software/java/).
For information on how to use Java (runtime and/or compiler), read the [Java page](software/java/).
## NVIDIA CUDA
For information on how to work with NVIDIA CUDA, read the [NVIDIA CUDA page](/anselm/software/nvidia-cuda/).
For information on how to work with NVIDIA CUDA, read the [NVIDIA CUDA page](anselm/software/nvidia-cuda/).
@@ -15,7 +15,7 @@ $ ml intel
$ idb
```
Read more at the [Intel Debugger](/software/intel/intel-suite/intel-debugger/) page.
Read more at the [Intel Debugger](software/intel/intel-suite/intel-debugger/) page.
## Allinea Forge (DDT/MAP)
@@ -26,7 +26,7 @@ $ ml Forge
$ forge
```
Read more at the [Allinea DDT](/software/debuggers/allinea-ddt/) page.
Read more at the [Allinea DDT](software/debuggers/allinea-ddt/) page.
## Allinea Performance Reports
@@ -37,7 +37,7 @@ $ ml PerformanceReports/6.0
$ perf-report mpirun -n 64 ./my_application argument01 argument02
```
Read more at the [Allinea Performance Reports](/software/debuggers/allinea-performance-reports/) page.
Read more at the [Allinea Performance Reports](software/debuggers/allinea-performance-reports/) page.
## RogueWave Totalview
@@ -48,7 +48,7 @@ $ ml TotalView/8.15.4-6-linux-x86-64
$ totalview
```
Read more at the [Totalview](/software/debuggers/total-view/) page.
Read more at the [Totalview](software/debuggers/total-view/) page.
## Vampir Trace Analyzer
@@ -59,4 +59,4 @@ Vampir is a GUI trace analyzer for traces in OTF format.
$ vampir