Commit 6040d60a authored by David Hrbáč's avatar David Hrbáč
Capitalize visual fix

parent d7099739
5 merge requests:
- !368 Update prace.md to document the change from qprace to qprod as the default...
- !367 Update prace.md to document the change from qprace to qprod as the default...
- !366 Update prace.md to document the change from qprace to qprod as the default...
- !323 extended-acls-storage-section
- !68 Auto capitalize
Showing 42 additions and 37 deletions
......@@ -3,17 +3,25 @@
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
COM
.ssh
Anselm
IT4I
IT4Innovations
PBS
Salomon
TurboVNC
VNC
DDR3
DIMM
InfiniBand
CUDA
ORCA
COMSOL
API
GNU
CUDA
NVIDIA
LiveLink
MATLAB
Allinea
......@@ -63,6 +71,7 @@ MPICH
MVAPICH2
OpenBLAS
ScaLAPACK
PAPI
SGI
UV2000
400GB
......@@ -251,7 +260,3 @@ r37u31n1008
qsub
it4ifree
it4i.portal.clients
API
GNU
CUDA
NVIDIA
......@@ -31,7 +31,7 @@ A job array is a compact representation of many jobs, called subjobs. The subjob
All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls, and qsig commands as a single job.
### Shared jobscript
### Shared Jobscript
All subjobs in a job array use the same single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work, controlled by the $PBS_ARRAY_INDEX variable.
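As a sketch of this pattern (the task list and file names are illustrative, not from the docs), a shared jobscript can use $PBS_ARRAY_INDEX to select its own work item:

```bash
#!/bin/bash
# Illustrative shared jobscript for a PBS job array (names are assumptions).
# Each subjob receives a distinct $PBS_ARRAY_INDEX and uses it to pick its
# own input line from a common task list; default to 1 for local testing.
: "${PBS_ARRAY_INDEX:=1}"
TASKLIST=tasklist.txt
# Create a demo task list if none exists (a real jobscript would not do this).
[ -f "$TASKLIST" ] || printf 'input01.dat\ninput02.dat\ninput03.dat\n' > "$TASKLIST"
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" "$TASKLIST")
echo "subjob ${PBS_ARRAY_INDEX}: processing ${TASK}"
```

Submitted as an array (e.g. `qsub -J 1-3 jobscript`), each subjob would process a different line of tasklist.txt.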
......@@ -161,7 +161,7 @@ $ module add parallel
$ man parallel
```
### GNU Parallel jobscript
### GNU Parallel Jobscript
The GNU parallel shell executes multiple instances of the jobscript using all cores on the node. The instances execute different work, controlled by the $PARALLEL_SEQ variable.
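A minimal sketch of this pattern (worker and log names are made up): here plain background subshells stand in for the instances GNU parallel would start, with PARALLEL_SEQ mimicking the per-instance variable that parallel exports:

```bash
# Simulate 4 jobscript instances; GNU parallel would set PARALLEL_SEQ itself.
rm -f parallel_demo.log
for PARALLEL_SEQ in 1 2 3 4; do
  (
    # Each instance does different work, selected by its sequence number.
    echo "instance ${PARALLEL_SEQ} of the jobscript" >> parallel_demo.log
  ) &
done
wait
sort parallel_demo.log
```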
......
......@@ -107,7 +107,7 @@ Options:
## Resources Accounting Policy
### the Core-Hour
### Core-Hours
The resources that are currently subject to accounting are the core-hours. Core-hours are accounted on a wall-clock basis: accounting runs whenever computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall-clock time. Allocating a full node (16 cores) for 1 hour thus amounts to 16 core-hours. See an example in the [Job submission and execution](job-submission-and-execution/) section.
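The arithmetic is simply cores multiplied by wall-clock hours; a tiny sketch using the full-node numbers above:

```bash
# Core-hour accounting: allocated cores multiplied by wall-clock hours.
cores=16          # full Anselm node
hours=1           # wall-clock time the allocation is held
echo "core-hours charged: $((cores * hours))"   # prints 16
```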
......
# Compilers
## Available Compilers, Including GNU, INTEL and UPC Compilers
## Available Compilers, Including GNU, INTEL, and UPC Compilers
Several compilers for different programming languages are currently available on the Anselm cluster:
......
......@@ -72,7 +72,7 @@ Measures the cost (in cycles) of basic PAPI operations.
Prints information about the memory architecture of the current CPU.
## Papi API
## PAPI API
PAPI provides two kinds of events:
......
# Intel Xeon Phi
## a Guide to Intel Xeon Phi Usage
## Guide to Intel Xeon Phi Usage
Intel Xeon Phi can be programmed in several modes. The default mode on Anselm is offload mode, but all modes described in this document are supported.
......
# ParaView
## an Open-Source, Multi-Platform Data Analysis and Visualization Application
## Open-Source, Multi-Platform Data Analysis and Visualization Application
## Introduction
......
......@@ -6,7 +6,7 @@ The X Window system is a principal way to get GUI access to the clusters.
Read more about configuring [**X Window System**](x-window-system/).
## Vnc
## VNC
**Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer").
......
......@@ -19,7 +19,7 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
!!! Hint
    Private keys in the .ssh directory are without a passphrase and allow you to connect within the cluster.
## Access Privileges on .Ssh Folder
## Access Privileges on .ssh Folder
- .ssh directory: 700 (drwx------)
- authorized_keys, known_hosts, and the public key (.pub file): 644 (-rw-r--r--)
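The permissions above can be applied with chmod; this sketch uses a scratch demo_home directory (an assumption, so it is safe to run anywhere) rather than your real home:

```bash
# Recreate the recommended .ssh permission layout in a scratch directory.
mkdir -p demo_home/.ssh
touch demo_home/.ssh/authorized_keys demo_home/.ssh/known_hosts demo_home/.ssh/id_rsa.pub
chmod 700 demo_home/.ssh                      # drwx------
chmod 644 demo_home/.ssh/authorized_keys \
          demo_home/.ssh/known_hosts \
          demo_home/.ssh/id_rsa.pub           # -rw-r--r--
ls -ld demo_home/.ssh
```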
......
......@@ -127,18 +127,18 @@ In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per n
### Placement by Network Location
Network location of allocated nodes in the [Infiniband network](network/) influences efficiency of network communication between nodes of job. Nodes on the same Infiniband switch communicate faster with lower latency than distant nodes. To improve communication efficiency of jobs, PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
Network location of allocated nodes in the [InfiniBand network](network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster, with lower latency, than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
For communication intensive jobs it is possible to set stricter requirement - to require nodes directly connected to the same Infiniband switch or to require nodes located in the same dimension group of the Infiniband network.
For communication-intensive jobs it is possible to set a stricter requirement - to require nodes directly connected to the same InfiniBand switch, or nodes located in the same dimension group of the InfiniBand network.
### Placement by Infiniband Switch
### Placement by InfiniBand Switch
Nodes directly connected to the same Infiniband switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every Infiniband switch.
Nodes directly connected to the same InfiniBand switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every InfiniBand switch.
!!! Note "Note"
    We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run a job efficiently.
Nodes directly connected to the one Infiniband switch can be allocated using node grouping on PBS resource attribute switch.
Nodes directly connected to one InfiniBand switch can be allocated using node grouping on the PBS resource attribute switch.
In this example, we request all 9 nodes directly connected to the same switch using node grouping placement.
......@@ -146,12 +146,12 @@ In this example, we request all 9 nodes directly connected to the same switch us
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
```
### Placement by Specific Infiniband Switch
### Placement by Specific InfiniBand Switch
!!! Note "Note"
    Not useful for ordinary computing; suitable for testing and management tasks.
Nodes directly connected to the specific Infiniband switch can be selected using the PBS resource attribute _switch_.
Nodes directly connected to a specific InfiniBand switch can be selected using the PBS resource attribute _switch_.
In this example, we request all 9 nodes directly connected to the r4i1s0sw1 switch.
......@@ -159,7 +159,7 @@ In this example, we request all 9 nodes directly connected to r4i1s0sw1 switch.
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24:switch=r4i1s0sw1 ./myjob
```
List of all Infiniband switches:
List of all InfiniBand switches:
```bash
$ qmgr -c 'print node @a' | grep switch | awk '{print $6}' | sort -u
......@@ -172,7 +172,7 @@ r1i2s0sw0
...
```
List of all all nodes directly connected to the specific Infiniband switch:
List of all nodes directly connected to a specific InfiniBand switch:
```bash
$ qmgr -c 'p n @d' | grep 'switch = r36sw3' | awk '{print $3}' | sort
......
......@@ -113,7 +113,7 @@ Options:
## Resources Accounting Policy
### the Core-Hour
### Core-Hours
The resources that are currently subject to accounting are the core-hours. Core-hours are accounted on a wall-clock basis: accounting runs whenever computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall-clock time. Allocating a full node (24 cores) for 1 hour thus amounts to 24 core-hours. See an example in the [Job submission and execution](job-submission-and-execution/) section.
......
......@@ -10,7 +10,7 @@
The licence is intended to be used for science and research, publications, and students' projects (academic licence).
## ANSYS Com
## ANSYS COM
The licence is intended to be used for science and research, publications, students' projects, and commercial research with no commercial-use restrictions.
......
......@@ -37,7 +37,7 @@ Direct
0.6250000000000000 0.6250000000000000 0.1250000000000000
```
### Generating Displacement Using 2 X 2 X 2 Supercell for Both Second and Third Order Force Constants
### Generating Displacement Using 2 by 2 by 2 Supercell for Both Second and Third Order Force Constants
```bash
$ phono3py -d --dim="2 2 2" -c POSCAR
......
# COMSOL Multiphysics®
# COMSOL Multiphysics
## Introduction
......@@ -70,9 +70,9 @@ comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tm
The working directory has to be created before submitting the (comsol.pbs) job script to the queue. The input file (name_input_f.mph) has to be in the working directory, or the full path to the input file has to be specified. The appropriate path to the job's temp directory has to be set by the command option (-tmpdir).
## LiveLink™\* \*For MATLAB®
## LiveLink for MATLAB
COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the COMSOL®API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB.
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL API (Application Programming Interface) with the benefits of the programming language and computing environment of MATLAB.
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (see [ISV Licenses](../isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode.
......
......@@ -26,7 +26,7 @@ Below you will find more details and examples.
| ml save mycollection | stores the currently loaded modules to a collection |
| ml restore mycollection | restores a previously stored collection of modules |
## Listing Loaded Modules: Ml (Module Load)
## Listing Loaded Modules
To get an overview of the currently loaded modules, use module list or ml (without specifying extra arguments).
......@@ -41,7 +41,7 @@ Currently Loaded Modules:
!!! tip
    For more details on sticky modules, see the section on [ml purge](#resetting-by-unloading-all-modules).
## Searching for Available Modules: Ml Av (Module Avail) and Ml Spider
## Searching for Available Modules
To get an overview of all available modules, you can use module avail or simply ml av:
......@@ -65,7 +65,7 @@ In the current module naming scheme, each module name consists of two parts:
!!! tip
    The (D) indicates that this particular version of the module is the default, but we strongly recommend not relying on this, as the default can change at any point. Usually, the default will point to the latest version available.
## Searching for Modules: Ml Spider
## Searching for Modules
If you just provide a software name, for example gcc, it prints an overview of all available modules for GCC.
......@@ -129,7 +129,7 @@ $ module spider GCC/6.2.0-2.27
This tells you what the module contains and a URL to the homepage of the software.
## Available Modules for a Particular Software Package: Ml Av <name>
## Available Modules for a Particular Software Package
To check which modules are available for a particular software package, you can provide the software name to ml av.
For example, to check which versions of git are available:
......@@ -165,7 +165,7 @@ Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```
## Inspecting a Module Using Ml Show
## Inspecting a Module
To see how a module would change the environment, use module show or ml show:
......@@ -200,7 +200,7 @@ setenv("EBEXTSLISTPYTHON","setuptools-20.1.1,pip-8.0.2,nose-1.3.7")
If you're not sure what all of this means: don't worry, you don't have to know; just try loading the module and using the software.
## Loading Modules: Ml &Lt;modname(s)> (Module Load &Lt;modname(s)>)
## Loading Modules
To effectively apply the changes to the environment that are specified by a module, use module load or ml and specify the name of the module.
For example, to set up your environment to use intel:
......@@ -260,7 +260,7 @@ $ which gcc
/usr/bin/gcc
```
## Resetting by Unloading All Modules: Ml Purge (Module Purge)
## Resetting by Unloading All Modules
To reset your environment back to a clean state, you can use module purge or ml purge:
......@@ -282,7 +282,7 @@ No modules loaded
As such, you should not (re)load the cluster module anymore after running ml purge. See also here.
## Module Collections: Ml Save, Ml Restore
## Module Collections
If you have a set of modules that you need to load often, you can save these in a collection (only works with Lmod).
......
......@@ -2,7 +2,7 @@
ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
## Making Orca Available
## Making ORCA Available
The following module command makes the latest version of ORCA available to your session:
......