diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index 85cf3c05bae514cc1a06b79127196affde8f1575..d6ddb121afa3a0a1e938fec3c9b6434ca7004c28 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -54,7 +54,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 
 ### Compute Nodes Summary
 
- |Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy/)|
+ |Node type|Count|Range|Memory|Cores|[Access](resources-allocation-policy/)|
 |---|---|---|---|---|---|
 |Nodes without accelerator|180|cn[1-180]|64GB|16 @ 2.4Ghz|qexp, qprod, qlong, qfree|
 |Nodes with GPU accelerator|23|cn[181-203]|96GB|16 @ 2.3Ghz|qgpu, qprod|
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index f435c50bb71f5cc968ddaf3f0326e43bac700231..5aac8f98ee189a6754ec7cf0da6497b61b60dc0c 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -20,14 +20,14 @@ There are four types of compute nodes:
 
 [More about Compute nodes](compute-nodes/).
 
-GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy/).
+GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resources-allocation-policy/).
 
 All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network/).
 
 Every chassis provides Infiniband switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
 
-All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/storage/).
+All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/).
 
-The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](accessing-the-cluster/shell-and-data-access/)
+The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](shell-and-data-access/)
 
 The parameters are summarized in the following tables:
@@ -58,4 +58,4 @@ The parameters are summarized in the following tables:
 |MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110|
 |Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
 
-For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
+For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index 6db81fd060f288f14e5f238fa19bd493dcb290f9..fb46ed5451253fb547770bb1a847b5c57f636c23 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -7,7 +7,7 @@ The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme
 
 User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users.
 
-The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution/introduction/).
+The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution/).
 
-Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](accessing-the-cluster/shell-and-data-access/).
+Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](/shell-and-data-access/).
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md
index 8bf2f5c696785727b4fb0e6ba79971d3b2ebbdad..0609a6d97a68ea316798e3a22a25903be98d13fc 100644
--- a/docs.it4i/anselm-cluster-documentation/prace.md
+++ b/docs.it4i/anselm-cluster-documentation/prace.md
@@ -217,7 +217,7 @@ PRACE users can use the "prace" module to use the [PRACE Common Production Envir
 
 ### Resource Allocation and Job Execution
 
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/introduction/).
+General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/).
 
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md
index f8162731d527bae379e5bcd9bf050d098651ff60..30e3b779929f6fbcca0b8c0bf16f8de1e9eb29df 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/salomon/prace.md
@@ -215,7 +215,7 @@ Usage of the cluster
 --------------------
 There are some limitations for PRACE user when using the cluster. By default PRACE users aren't allowed to access special queues in the PBS Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of insufficient amount of licenses.
 
-For production runs always use scratch file systems. The available file systems are described [here](storage/storage/).
+For production runs always use scratch file systems. The available file systems are described [here](storage/).
 
 ### Software, Modules and PRACE Common Production Environment
 
@@ -229,7 +229,7 @@ PRACE users can use the "prace" module to use the [PRACE Common Production Envir
 
 ### Resource Allocation and Job Execution
 
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/introduction/).
+General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resource-allocation-and-job-execution/).
 
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
 
@@ -244,7 +244,7 @@ For PRACE users, the default production run queue is "qprace". PRACE users can a
 
 ### Accounting & Quota
 
-The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/).
+The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resources-allocation-policy/).
 
 PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).