Commit 93c314e6 authored by Pavel Jirásek

Merge branch 'content_revision' into 'master'

Links change

See merge request !21
parents 75ec251b 6f62fbcf
@@ -54,7 +54,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 ### Compute Nodes Summary
-|Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy/)|
+|Node type|Count|Range|Memory|Cores|[Access](resources-allocation-policy/)|
 |---|---|---|---|---|---|
 |Nodes without accelerator|180|cn[1-180]|64GB|16 @ 2.4Ghz|qexp, qprod, qlong, qfree|
 |Nodes with GPU accelerator|23|cn[181-203]|96GB|16 @ 2.3Ghz|qgpu, qprod|
@@ -20,14 +20,14 @@ There are four types of compute nodes:
 [More about Compute nodes](compute-nodes/).
-GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy/).
+GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resources-allocation-policy/).
 All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network/).
 Every chassis provides Infiniband switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
-All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/storage/).
+All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/).
-The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](accessing-the-cluster/shell-and-data-access/)
+The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](shell-and-data-access/)
 The parameters are summarized in the following tables:
@@ -58,4 +58,4 @@ The parameters are summarized in the following tables:
 |MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110|
 |Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
-For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
+For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
@@ -7,7 +7,7 @@ The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme
 User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users.
-The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution/introduction/).
+The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution/).
-Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](accessing-the-cluster/shell-and-data-access/).
+Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](/shell-and-data-access/).
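
The PBS Professional workflow referenced in the hunk above amounts to handing a job script to `qsub` and letting the scheduler place it on a queue. A minimal Python sketch, assuming `qsub` is on the PATH and that the `qprod` queue from the compute-node table is accessible to the account; the resource request, walltime, and `./my_program` are illustrative placeholders rather than Anselm's documented defaults:

```python
# Hedged sketch: submit a PBS Pro batch job from Python via qsub.
# Assumptions: qsub is on PATH; "qprod" (from the queue table above) accepts
# the job; the select/walltime values and ./my_program are placeholders.
import subprocess

job_script = """#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16
#PBS -l walltime=01:00:00
cd "$PBS_O_WORKDIR"
./my_program
"""

# qsub reads the script from stdin and prints the new job ID on stdout.
result = subprocess.run(["qsub"], input=job_script, text=True,
                        capture_output=True, check=True)
print("Submitted job:", result.stdout.strip())
```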
@@ -217,7 +217,7 @@ PRACE users can use the "prace" module to use the [PRACE Common Production Envir
 ### Resource Allocation and Job Execution
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/introduction/).
+General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/).
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
@@ -215,7 +215,7 @@ Usage of the cluster
 --------------------
 There are some limitations for PRACE user when using the cluster. By default PRACE users aren't allowed to access special queues in the PBS Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of insufficient amount of licenses.
-For production runs always use scratch file systems. The available file systems are described [here](storage/storage/).
+For production runs always use scratch file systems. The available file systems are described [here](storage/).
 ### Software, Modules and PRACE Common Production Environment
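
The "always use scratch file systems for production runs" rule in the hunk above is typically followed by staging inputs out of HOME before submitting. A hedged Python sketch; the `/scratch/<username>` layout and the file names are assumptions for illustration only, so check the Storage page linked in the hunk for the real paths and quotas:

```python
# Hedged sketch: stage input data from HOME to the shared scratch file system
# before a production run. The /scratch/<username> layout and the file names
# are assumed for illustration; see the Storage documentation for real paths.
import getpass
import shutil
from pathlib import Path

scratch_dir = Path("/scratch") / getpass.getuser() / "my_run"   # assumed layout
scratch_dir.mkdir(parents=True, exist_ok=True)

# Keep long-term data in HOME; point the job at the fast scratch copy.
shutil.copy(Path.home() / "inputs" / "config.dat", scratch_dir)
print("Run the production job against:", scratch_dir)
```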
@@ -229,7 +229,7 @@ PRACE users can use the "prace" module to use the [PRACE Common Production Envir
 ### Resource Allocation and Job Execution
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/introduction/).
+General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/).
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
@@ -244,7 +244,7 @@ For PRACE users, the default production run queue is "qprace". PRACE users can a
 ### Accounting & Quota
-The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/).
+The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resources-allocation-policy/).
 PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
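
The accounting rule in the hunk above (wall-clock charging of every allocated core, used or not) is easiest to see with numbers. A small worked example, with the 16 cores per node taken from the node tables earlier in this merge request and the node count and walltime chosen purely for illustration:

```python
# Worked example of wall-clock core-hour accounting as described above:
# every allocated core is charged for the full wall-clock time of the job,
# whether or not it does useful work.
def core_hours(nodes: int, cores_per_node: int, wall_clock_hours: float) -> float:
    return nodes * cores_per_node * wall_clock_hours

# 4 nodes x 16 cores/node (per the node tables) held for 10 hours
# are accounted as 640 core hours, even if the job idles.
print(core_hours(nodes=4, cores_per_node=16, wall_clock_hours=10.0))  # 640.0
```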