Commit 8afbe213 authored by David Hrbáč

Merge branch 'capitalize_test' into 'master'

Removed apostrophes

See merge request !169
parents 2664fd5b 5a1f9ec1
Showing 35 additions and 28 deletions
@@ -22,10 +22,9 @@ two spaces:
capitalize:
stage: test
image: davidhrbac/docker-mkdocscheck:latest
-allow_failure: true
+# allow_failure: true
script:
-- scripts/titlemd_test.py mkdocs.yml
-- find docs.it4i/ -name '*.md' -print0 | xargs -0 -n1 scripts/titlemd_test.py
+- find mkdocs.yml docs.it4i/ \( -name '*.md' -o -name '*.yml' \) -print0 | xargs -0 -n1 scripts/titlemd_test.py
spell check:
stage: test
......
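The CI change above feeds `mkdocs.yml` and every `*.md`/`*.yml` file under `docs.it4i/` through `scripts/titlemd_test.py` via a single `find ... -print0 | xargs -0 -n1` pipeline, which is the check the heading edits further down are meant to satisfy. As a rough illustration of what such a title-case check involves (a sketch only, with a simplified word rule; this is not the actual `titlemd_test.py`):

```python
# Simplified sketch of a Title Case check for Markdown headings.
# NOT the real scripts/titlemd_test.py; the SMALL_WORDS rule is an assumption.
import re
import sys

SMALL_WORDS = {"a", "an", "and", "at", "for", "in", "of", "on", "or", "the", "to", "with"}

def is_title_case(text: str) -> bool:
    """Assumed rule: every word except short connecting words starts uppercase."""
    words = re.findall(r"[A-Za-z][\w'-]*", text)
    return all(w[0].isupper() or w.lower() in SMALL_WORDS for w in words)

def check_file(path: str) -> int:
    errors = 0
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if line.startswith("#"):                      # naive heading match
                heading = line.lstrip("#").strip()
                if heading and not is_title_case(heading):
                    print(f"{path}:{lineno}: not Title Case: {heading}")
                    errors += 1
    return errors

if __name__ == "__main__":
    sys.exit(1 if sum(check_file(p) for p in sys.argv[1:]) else 0)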
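```

Because `xargs -0 -n1` invokes the script once per file, a non-zero exit from any file makes the pipeline command fail; with `allow_failure: true` now commented out, that failure fails the CI pipeline instead of being tolerated.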
@@ -3,6 +3,14 @@
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
+CAE
+CUBE
+GPU
+GSL
+LMGC90
+LS-DYNA
+MAPDL
+GPI-2
COM
.ssh
Anselm
......
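The comment block at the top of this dictionary explains its layout: the global word list comes first, one word per line, and a line of the form ` - filename` (relative to the configuration file) opens a per-file override section. The eight new terms (CAE through GPI-2) land in the global section. A minimal sketch of how a file in this layout could be parsed, assuming that reading of the override syntax (illustrative only, not the repository's actual spell-check tooling):

```python
# Illustrative parser for the wordlist layout described in the header
# comments: global words first, then " - filename" lines that open
# per-file override sections. Not the actual spell-check tool.
from collections import defaultdict

def load_wordlist(path: str):
    global_words = set()
    per_file = defaultdict(set)     # filename -> words accepted only there
    current = None                  # None while still in the global section
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if not line.strip() or line.lstrip().startswith("#"):
                continue            # skip blank lines and comments
            if line.startswith(" - "):
                current = line[3:].strip()       # switch to a file override
            elif current is None:
                global_words.add(line.strip())   # e.g. CAE, GPU, LMGC90, ...
            else:
                per_file[current].add(line.strip())
    return global_words, per_file

def is_known(word, filename, global_words, per_file):
    # A word passes for a file if it is global or listed under that file.
    return word in global_words or word in per_file.get(filename, set())
```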
-# Capacity computing
+# Capacity Computing
## Introduction
......
-# Job scheduling
+# Job Scheduling
## Job Execution Priority
@@ -54,7 +54,7 @@ Job execution priority (job sort formula) is calculated as:
---8<--- "job_sort_formula.md"
-### Job backfilling
+### Job Backfilling
The Anselm cluster uses job backfilling.
......
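Backfilling means the scheduler may start lower-priority jobs on currently idle nodes as long as this does not delay the expected start of the highest-priority waiting job. The snippet below is a generic illustration of that rule under a deliberately simplified model (one node count per job, a known start time for the top job); it is not PBS Pro's actual algorithm or its configuration on Anselm.

```python
# Generic backfilling illustration (not the PBS Pro implementation used on
# the clusters). A lower-priority job may start now only if it fits on the
# idle nodes and finishes before the top-priority job could start anyway.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int       # nodes requested
    walltime: float  # requested wall time in hours

def backfill(queue, free_nodes, top_job_start_in):
    """queue is ordered highest priority first; queue[0] keeps its reservation.

    free_nodes       -- nodes idle right now
    top_job_start_in -- hours until enough nodes free up for queue[0]
    """
    started = []
    for job in queue[1:]:
        fits_now = job.nodes <= free_nodes
        finishes_in_time = job.walltime <= top_job_start_in
        if fits_now and finishes_in_time:
            started.append(job.name)
            free_nodes -= job.nodes
    return started

# Example: a short 2-node job slips into a 3-hour gap ahead of a large job.
queue = [Job("large", nodes=64, walltime=24.0), Job("short", nodes=2, walltime=1.0)]
print(backfill(queue, free_nodes=4, top_job_start_in=3.0))  # ['short']
```

In practice this is why short jobs with modest node counts often start quickly even while large high-priority jobs are still waiting.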
-# Job submission and execution
+# Job Submission and Execution
## Job Submission
......
-# Remote visualization service
+# Remote Visualization Service
## Introduction
......
# Resources Allocation Policy
-## Job queue policies
+## Job Queue Policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information is available in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides an overview of the queue partitioning:
@@ -27,7 +27,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
* **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, qmic to the MIC nodes, and qfat to the Fat nodes. An active project with nonzero remaining resources is required to enter these queues. 23 nvidia, 4 mic, and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhausting their computational resources). An active project must be specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerators may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
-## Queue notes
+## Queue Notes
The job wall clock time defaults to **half the maximum time**; see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
@@ -35,7 +35,7 @@ Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatica
Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
-## Queue status
+## Queue Status
!!! tip
Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>
......
@@ -98,7 +98,7 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
* 2 groups of 5 disks in RAID5
* 2 hot-spare disks
-### HOME
+### HOME File System
The HOME filesystem is mounted in the directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
@@ -127,7 +127,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
| Default stripe count | 1 |
| Number of OSTs | 22 |
-### SCRATCH
+### SCRATCH File System
The SCRATCH filesystem is mounted in the directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
......
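The stripe parameters quoted above (default stripe size 1 MB, stripe count 1, 22 OSTs on SCRATCH) control how a file's data is laid out across the object storage targets. As a generic illustration of Lustre-style striping, not of any IT4I-specific tooling, the stripe that holds a given byte offset follows from the stripe size and stripe count:

```python
# Generic illustration of Lustre-style striping: consecutive stripe_size
# chunks of a file are placed on successive stripes in round-robin order.
def stripe_index(offset_bytes: int, stripe_size: int, stripe_count: int) -> int:
    """Which stripe (0 .. stripe_count-1) holds the byte at this offset."""
    return (offset_bytes // stripe_size) % stripe_count

MB = 1024 * 1024

# Default layout quoted above: 1 MB stripes, stripe count 1 -- every offset
# maps to stripe 0, i.e. a single OST holds the whole file.
print(stripe_index(5 * MB, stripe_size=1 * MB, stripe_count=1))  # 0

# A hypothetical wide-striped file over 4 OSTs: 1 MB chunks alternate
# across stripes 0, 1, 2, 3, 0, 1, ...
for offset in range(0, 6 * MB, MB):
    print(offset // MB, "MB ->", stripe_index(offset, 1 * MB, 4))
```

With the default stripe count of 1, a whole file lives on a single OST; raising the stripe count spreads large files over several OSTs.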
-# VPN - Connection fail in Win 8.1
+# VPN - Connection Fail in Win 8.1
## Failed to Initialize Connection Subsystem Win 8.1 - 02-10-15 MS Patch
......
-# Capacity computing
+# Capacity Computing
## Introduction
......
-# IB single-plane topology
+# IB Single-Plane Topology
A complete M-Cell assembly consists of four compute racks. Each rack contains 4 physical IRUs (Independent Rack Units). Using one dual-socket node per blade slot leads to 8 logical IRUs. Each rack contains 4 x 2 SGI ICE X IB Premium Blades.
......
-# Job scheduling
+# Job Scheduling
## Job Execution Priority
......
-# Job submission and execution
+# Job Submission and Execution
## Job Submission
......
# Resources Allocation Policy
-## Job queue policies
+## Job Queue Policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information is available in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides an overview of the queue partitioning:
@@ -31,7 +31,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
!!! note
To access a node with a Xeon Phi co-processor, the user needs to specify that in the [job submission select statement](job-submission-and-execution/).
-## Queue notes
+## Queue Notes
The job wall-clock time defaults to **half the maximum time**; see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
......
@@ -17,7 +17,7 @@ $ ml Clp
The module sets up environment variables required for linking and running applications using Clp. This particular command loads the default module Clp/1.16.10-intel-2017a, Intel module intel/2017a and other related modules.
-## Compiling and linking
+## Compiling and Linking
!!! note
Link with -lClp
@@ -51,7 +51,7 @@ int main (int argc, const char *argv[])
}
```
-### Load modules and compile:
+### Load Modules and Compile:
```console
ml Clp
......
@@ -28,7 +28,7 @@ SQLite/3.8.8.1
Python/2.7.9
```
-## Running generic example
+## Running Generic Example
The main API of the LMGC90 software is a Python module. It comes with a pre-processor written in Python. There are several examples that you can copy from the `examples` directory in the `/apps/all/LMGC90/2017.rc1-GCC-6.3.0-2.27` folder. Follow the next steps to run one of them.
......
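Since the LMGC90 API is a Python module, running one of the bundled examples essentially means copying its directory somewhere writable and executing the driver script it ships with, using the Python provided by the loaded modules. The sketch below only illustrates that flow; `some_example` and `command.py` are placeholders rather than guaranteed contents of the `examples` directory, and it assumes the LMGC90 module (which brings in Python) is already loaded.

```python
# Hedged sketch: copy one of the bundled LMGC90 examples to a writable
# location and run its Python driver. The example name "some_example" and
# the driver name "command.py" are placeholders, not guaranteed contents
# of the examples directory; check the copied example for the real script.
import shutil
import subprocess
from pathlib import Path

LMGC90_ROOT = Path("/apps/all/LMGC90/2017.rc1-GCC-6.3.0-2.27")
example = LMGC90_ROOT / "examples" / "some_example"   # placeholder
workdir = Path.home() / "lmgc90_run"

# Copy the chosen example out of the (read-only) application tree.
shutil.copytree(example, workdir)

# Run the example's driver with the Python provided by the loaded modules
# (e.g. after "ml LMGC90", which also loads the matching Python module).
subprocess.run(["python", "command.py"], cwd=workdir, check=True)
```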
# Resource Accounting Policy
-## Wall-clock Core-Hours WCH
+## Wall-Clock Core-Hours WCH
The wall-clock core-hours (WCH) are the basic metric of computer utilization time.
1 wall-clock core-hour is defined as 1 processor core allocated for 1 hour of wall-clock time. Allocating a full node (16 cores Anselm, 24 cores Salomon)
......
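Since one wall-clock core-hour is one core allocated for one hour, a job's WCH consumption is simply the number of allocated cores multiplied by the elapsed wall-clock hours; for full-node allocations that means 16 cores per Anselm node or 24 per Salomon node. A small worked example:

```python
# Wall-clock core-hours (WCH): allocated cores x elapsed wall-clock hours.
def wall_clock_core_hours(cores_allocated: int, wall_hours: float) -> float:
    return cores_allocated * wall_hours

ANSELM_CORES_PER_NODE = 16
SALOMON_CORES_PER_NODE = 24

# One full Anselm node for 5 hours of wall-clock time:
print(wall_clock_core_hours(1 * ANSELM_CORES_PER_NODE, 5))     # 80
# Four full Salomon nodes for 2.5 hours of wall-clock time:
print(wall_clock_core_hours(4 * SALOMON_CORES_PER_NODE, 2.5))  # 240.0
```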
-# Diagnostic component (TEAM)
+# Diagnostic Component (TEAM)
## Access
......
-# Prioritization component (BiERapp)
+# Prioritization Component (BiERapp)
## Access
......
# Compilers
-## Available compilers, including GNU, INTEL and UPC compilers
+## Available Compilers, Including GNU, INTEL and UPC Compilers
There are several compilers for different programming languages available on the cluster:
@@ -26,7 +26,7 @@ Commercial licenses:
For information about the usage of Intel Compilers and other Intel products, please read the [Intel Parallel studio](intel-suite/intel-compilers/) page.
-## PGI Compilers (only on Salomon)
+## PGI Compilers (Only on Salomon)
The Portland Group Cluster Development Kit (PGI CDK) is available on Salomon.
......