diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
index 916f14a9db8139cd82c9f9051c95551d0698768d..1dafc5879add66fbb2fa949c0fde6ba67058fdb8 100644
--- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
@@ -25,7 +25,7 @@ fi
 ```
 !!! Note "Note"
-    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
+    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider utilization of SSH session interactivity for such commands as stated in the previous example.
 ### Application Modules
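Purely as an illustrative sketch (not part of the patch above), the interactivity guard the note refers to might look like this in .bashrc; `module list` stands in for any output-producing command:

```
# run output-producing commands only in interactive shells,
# so that non-interactive SSH sessions (scp, PBS) keep working
if [ -n "$PS1" ]; then
    module list
fi
```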
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index dd0d1f7bfc061a49bf016aaa2b3c29a8f298fc8b..42b3cba4c5e775f92444e22e9bd12d6856cb93ce 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -25,7 +25,7 @@ GPU and accelerated nodes are available upon request, see the [Resources Allocat
 All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network/). Every chassis provides InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
-All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/).
+All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](storage/).
 The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](shell-and-data-access/)
diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md
index 0a3b738f3daa71ee8f846ecce6ad0bbe2707dbf1..24fe0881bab6b1629a3096493d409f5c5b0702b9 100644
--- a/docs.it4i/anselm-cluster-documentation/network.md
+++ b/docs.it4i/anselm-cluster-documentation/network.md
@@ -5,12 +5,12 @@ All compute and login nodes of Anselm are interconnected by [InfiniBand](http://
 InfiniBand Network
 ------------------
-All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
+All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
-The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes.
+The compute nodes may be accessed via the InfiniBand network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native InfiniBand connection among the nodes.
 !!! Note "Note"
-    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native Infiniband protocol.
+    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
 The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
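As an illustrative aside (not part of the patch above), the ib0 interface and address range mentioned in this hunk can be checked from an allocated compute node with standard Linux tools; the target address is only an example within the stated range:

```
# show the IPoIB address of this node (expected within 10.2.1.1-209)
ip addr show ib0

# reach another compute node over the InfiniBand network
ping -c 3 10.2.1.10
```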
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index e0e3f5be09419906d865676c170ca17ea5790f13..a64b56cfc4eca936cfb65aa0abc1ca04b2ac6f0d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -50,7 +50,7 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
 ```
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+Header of the PBS file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allow to directly address the number of nodes (computers) and cores (ppn) which will be utilized in the job. The rest of the code also assumes such a structure of allocated resources.
 Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ANSYS solver via parameter -i
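The nodes and ppn keywords recommended in the hunk above appear in the resource request of the PBS header. A minimal, hedged sketch follows (job name and walltime are illustrative, and the exact resource syntax depends on the site's PBS configuration):

```
#!/bin/bash
# illustrative header: 2 nodes (computers), 16 cores (ppn) on each
#PBS -N ansys-test
#PBS -l nodes=2:ppn=16
#PBS -l walltime=01:00:00
```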
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
index d0fedd929c1f7d30fb4fc19a21187821c169ae82..92a9cb156c786daed2cffd9c08e6182a4ba9df18 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
@@ -3,7 +3,7 @@ Overview of ANSYS Products
 **[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
-Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's PBS file (see individual products). [ More about licensing here](ansys/licensing/)
+Anselm provides commercial as well as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of the license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on the command line or directly in the user's PBS file (see individual products). [ More about licensing here](ansys/licensing/)
 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index 305389fd5f43336cb8ff1e0bef420c88649b660f..3c87064f9a5246317233f180ba243e88bc37e0ef 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -152,4 +152,4 @@ For information how to use Java (runtime and/or compiler), please read the [Java
 NVIDIA CUDA
 -----------
-For information how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](nvidia-cuda/).
\ No newline at end of file
+For information on how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](nvidia-cuda/).
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index 8c3528b4f2782432e1a973bc68a377b80b911912..1d87e0ebf27b664d3a292b69324738329cfcb917 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -118,4 +118,4 @@ cd /apps/engineering/comsol/comsol43b/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
 ```
-This example shows how to run LiveLink for MATLAB with following configuration: 3 nodes and 16 cores per node. Working directory has to be created before submitting (comsol_matlab.pbs) job script into the queue. Input file (test_job.m) has to be in working directory or full path to input file has to be specified. The MATLAB command option (-r ”mphstart”) created a connection with a COMSOL server using the default port number.
+This example shows how to run LiveLink for MATLAB with the following configuration: 3 nodes and 16 cores per node. Working directory has to be created before submitting the (comsol_matlab.pbs) job script into the queue. Input file (test_job.m) has to be in the working directory or the full path to the input file has to be specified. The MATLAB command option (-r ”mphstart”) creates a connection with a COMSOL server using the default port number.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 677a12150df5f71ebd962eb7f6c4b72a8d98dae1..1aa9bc9e97e32ff3ddc6a5846330a6c34ab43d86 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -32,7 +32,7 @@ and launch the GUI :
 The GUI will open in new window. Click on "*New Project...*" to create a new project.
 After clicking *OK*, a new window with project properties will appear.  At "*Application:*", select the bath to your binary you want to profile (the binary should be compiled with -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created.
-To run a new analysis, click "*New analysis...*". You will see a list of possible analysis. Some of them will not be possible on the current CPU (e.g.. Intel Atom analysis is not possible on Sandy Bridge CPU), the GUI will show an error box if you select the wrong analysis. For example, select "*Advanced Hotspots*". Clicking on *Start *will start profiling of the application.
+To run a new analysis, click "*New analysis...*". You will see a list of possible analyses. Some of them will not be possible on the current CPU (e.g. Intel Atom analysis is not possible on Sandy Bridge CPU), the GUI will show an error box if you select the wrong analysis. For example, select "*Advanced Hotspots*". Clicking on *Start* will start profiling of the application.
 Remote Analysis
 ---------------
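Alongside the GUI workflow in the hunk above, and purely as a hedged illustration (not part of the patch; the result directory name is an assumption), the same Advanced Hotspots collection can typically be driven from the command line with the amplxe-cl driver that ships with VTune Amplifier:

```
# collect Advanced Hotspots for ./my_app (compiled with -g); results go to ./vtune_results
amplxe-cl -collect advanced-hotspots -result-dir vtune_results -- ./my_app
```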
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 0eb26db0984080cc5957551cf3a15c36f7825601..f77fbe5cfa6161395fd9b45cf33cbc0dcb21961e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -99,7 +99,7 @@ Octave is linked with parallel Intel MKL, so it best suited for batch processing
 variable.
 !!! Note "Note"
-    Calculations that do not employ parallelism (either by using parallel MKL e.g.. via matrix operations, fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU.
+    Calculations that do not employ parallelism (either by using parallel MKL e.g. via matrix operations, fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU.
 To use Octave on a node with Xeon Phi:
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
index 935fc4f73495a5dcabb4362e7b6bd8775778aaee..e4ca472867371b26240e5d5ade9f0c9f441d265e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
@@ -12,7 +12,7 @@ Trilinos is a collection of software packages for the numerical solution of larg
 Current Trilinos installation on ANSELM contains (among others) the following main packages
 - **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
-- **Tpetra** - next-generation linear algebra package. Supports 64 bit indexing and arbitrary data type using C++ templates.
+- **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
 - **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
 - **Amesos** - interface to direct sparse solvers.
 - **Anasazi** - framework for large-scale eigenvalue algorithms.
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index ce6acc114ddf7a017b759578395305fea19d4bda..545bc2d4a9fb0004c52519e24c8414c6e559c6a1 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -354,7 +354,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
     SSHFS: The storage will be mounted like a local hard drive
-The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard disk drive. Files can be than copied in and out in a usual fashion.
+The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
 First, create the mount point
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index d3813d410761b1286caec2f90213a18950586167..ae1e38e57ff11cfde272496656a59a9dd32b6ff2 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -3,7 +3,7 @@ Resources Allocation Policy
 Resources Allocation Policy
 ---------------------------
-The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The Fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
+The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
 !!! Note "Note"
     Check the queue status at https://extranet.it4i.cz/rsweb/salomon/
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index e57e8fc3137c06ae32a27dac4fdee19ed2f9ea6a..8c4058cb7ab52d81d97c0b31cc13a32d28a2337e 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -56,7 +56,7 @@ If multiple clients try to read and write the same part of a file at the same ti
 There is default stripe configuration for Salomon Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
-1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of kB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre filesystems
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre filesystems
 2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
 3. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
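The stripe parameters listed above are set and inspected with the lfs utility; a hedged sketch follows (the directory name and the chosen values are illustrative only, not part of the patch):

```
# inspect the current striping of a directory on a Lustre filesystem
lfs getstripe mydir

# stripe new files in that directory across 4 OSTs with a 1 MB stripe size
lfs setstripe -c 4 -S 1m mydir
```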
@@ -356,7 +356,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
     SSHFS: The storage will be mounted like a local hard drive
-The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard disk drive. Files can be than copied in and out in a usual fashion.
+The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
 First, create the mount point
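Purely as a hedged illustration of the mount described in the hunk above (the username, remote host, and file names are placeholders, not the actual CESNET endpoints):

```
# create a local mount point and mount the remote CESNET storage over SSHFS
mkdir -p ~/cesnet
sshfs username@storage.example.cz: ~/cesnet

# files can then be copied in and out as with a local disk; unmount when done
cp input.dat ~/cesnet/
fusermount -u ~/cesnet
```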