From 071a73d42211227fc14f529f0e72330e748f6448 Mon Sep 17 00:00:00 2001
From: Lubomir Prda <lubomir.prda@vsb.cz>
Date: Mon, 23 Jan 2017 10:02:51 +0100
Subject: [PATCH] Various spell corrections

---
 .../anselm-cluster-documentation/environment-and-modules.md | 2 +-
 docs.it4i/anselm-cluster-documentation/hardware-overview.md | 2 +-
 docs.it4i/anselm-cluster-documentation/network.md           | 6 +++---
 .../software/ansys/ansys-mechanical-apdl.md                 | 2 +-
 .../anselm-cluster-documentation/software/ansys/ansys.md    | 2 +-
 .../anselm-cluster-documentation/software/compilers.md      | 2 +-
 .../software/comsol-multiphysics.md                         | 2 +-
 .../software/debuggers/intel-vtune-amplifier.md             | 2 +-
 .../software/numerical-languages/octave.md                  | 2 +-
 .../software/numerical-libraries/trilinos.md                | 2 +-
 docs.it4i/anselm-cluster-documentation/storage.md           | 2 +-
 docs.it4i/salomon/resources-allocation-policy.md            | 2 +-
 docs.it4i/salomon/storage.md                                | 4 ++--
 13 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
index 916f14a9d..1dafc5879 100644
--- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
@@ -25,7 +25,7 @@ fi
 ```

 !!! Note "Note"
-    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
+    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider SSH session interactivity for such commands as shown in the previous example.

 ### Application Modules

diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index dd0d1f7bf..42b3cba4c 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -25,7 +25,7 @@ GPU and accelerated nodes are available upon request, see the [Resources Allocat
 All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network/). Every chassis provides InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.

-All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/).
+All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](storage/).

 The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1.
 [More about accessing cluster.](shell-and-data-access/)
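The environment-and-modules hunk above refers to an interactivity guard (note the `fi` in the hunk header). A minimal sketch of such a guard follows; the `$PS1` test and the commands inside are illustrative, not taken from the patched file:

```bash
# Minimal sketch of an interactivity guard for ~/.bashrc: commands that
# write to standard output run only in interactive sessions, so scp and
# PBS keep working. The $PS1 test and the commands are illustrative.
if [ -n "$PS1" ]; then
    echo "Loaded modules:"
    module list
fi
```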
diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md
index 0a3b738f3..24fe0881b 100644
--- a/docs.it4i/anselm-cluster-documentation/network.md
+++ b/docs.it4i/anselm-cluster-documentation/network.md
@@ -5,12 +5,12 @@ All compute and login nodes of Anselm are interconnected by [InfiniBand](http://
 InfiniBand Network
 ------------------
-All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
+All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.

-The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes.
+The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish a native InfiniBand connection among the nodes.

 !!! Note "Note"
-    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native Infiniband protocol.
+    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via the native InfiniBand protocol.

 The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.

diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index e0e3f5be0..a64b56cfc 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -50,7 +50,7 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
 ```

-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends requesting resources by the keywords nodes and ppn. These keywords directly address the number of nodes (computers) and cores (ppn) to be utilized in the job; the rest of the code also assumes such a structure of allocated resources.

 Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified.
 Input file has to be defined by common APDL file which is attached to the ANSYS solver via parameter -i
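The nodes/ppn remark in the ansys-mechanical-apdl.md hunk above refers to the common PBS header. Such a header might look like this (a sketch only; the job name, queue name, sizes, and working directory are illustrative assumptions, not taken from the patch):

```bash
#!/bin/bash
# Sketch of a common PBS header; all values below are illustrative.
#PBS -N ANSYS-MAPDL-test      # job name (illustrative)
#PBS -l nodes=2:ppn=16        # 2 nodes (computers), 16 cores (ppn) each
#PBS -q qprod                 # assumed queue name, check the site docs

# The working directory must exist before the job is submitted.
WORK_DIR="/scratch/$USER/ansys_run"   # illustrative path
cd "$WORK_DIR" || exit 1
```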
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
index d0fedd929..92a9cb156 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
@@ -3,7 +3,7 @@ Overview of ANSYS Products
 **[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)

-Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's PBS file (see individual products). [ More about licensing here](ansys/licensing/)
+Anselm provides commercial as well as academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa_**" in the license feature name. The license is changed on the command line or directly in the user's PBS file (see individual products). [More about licensing here](ansys/licensing/)

 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:

diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index 305389fd5..3c87064f9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -152,4 +152,4 @@ For information how to use Java (runtime and/or compiler), please read the [Java
 NVIDIA CUDA
 -----------
-For information how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](nvidia-cuda/).
\ No newline at end of file
+For information on how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](nvidia-cuda/).

diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index 8c3528b4f..1d87e0ebf 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -118,4 +118,4 @@ cd /apps/engineering/comsol/comsol43b/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
 ```

-This example shows how to run LiveLink for MATLAB with following configuration: 3 nodes and 16 cores per node. Working directory has to be created before submitting (comsol_matlab.pbs) job script into the queue. Input file (test_job.m) has to be in working directory or full path to input file has to be specified. The MATLAB command option (-r ”mphstart”) created a connection with a COMSOL server using the default port number.
+This example shows how to run LiveLink for MATLAB with the following configuration: 3 nodes and 16 cores per node. The working directory has to be created before submitting the job script (comsol_matlab.pbs) into the queue. The input file (test_job.m) has to be in the working directory, or a full path to the input file has to be specified. The MATLAB command option (-r "mphstart") creates a connection with a COMSOL server using the default port number.
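The corrected LiveLink paragraph above describes the preparation steps in prose; as a shell session they might look like this (a sketch; the directory layout and the qsub resource request are illustrative assumptions, not taken from the patch):

```bash
# Sketch of preparing and submitting the LiveLink job described above.
# Paths and resource requests are illustrative assumptions.
mkdir -p /scratch/$USER/comsol_run          # working directory must exist first
cp test_job.m /scratch/$USER/comsol_run/    # input file in the working directory
qsub -l nodes=3:ppn=16 comsol_matlab.pbs    # 3 nodes, 16 cores per node
```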
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 677a12150..1aa9bc9e9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -32,7 +32,7 @@ and launch the GUI :
 The GUI will open in new window. Click on "*New Project...*" to create a new project. After clicking *OK*, a new window with project properties will appear.  At "*Application:*", select the bath to your binary you want to profile (the binary should be compiled with -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created.

-To run a new analysis, click "*New analysis...*". You will see a list of possible analysis. Some of them will not be possible on the current CPU (e.g.. Intel Atom analysis is not possible on Sandy Bridge CPU), the GUI will show an error box if you select the wrong analysis. For example, select "*Advanced Hotspots*". Clicking on *Start *will start profiling of the application.
+To run a new analysis, click "*New analysis...*". You will see a list of possible analyses. Some of them will not be possible on the current CPU (e.g. Intel Atom analysis is not possible on a Sandy Bridge CPU); the GUI will show an error box if you select the wrong analysis. For example, select "*Advanced Hotspots*". Clicking on *Start* will start profiling of the application.

 Remote Analysis
 ---------------

diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 0eb26db09..f77fbe5cf 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -99,7 +99,7 @@ Octave is linked with parallel Intel MKL, so it best suited for batch processing
 variable.

 !!! Note "Note"
-    Calculations that do not employ parallelism (either by using parallel MKL e.g.. via matrix operations, fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU.
+    Calculations that do not employ parallelism (either by using parallel MKL e.g. via matrix operations, the fork() function, the [parallel package](http://octave.sourceforge.net/parallel/) or another mechanism) will actually run slower than on the host CPU.

 To use Octave on a node with Xeon Phi:
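The Octave note above warns that calculations without parallelism run slowly; a batch run that leans on parallel MKL for matrix operations might look like this (a sketch; the module name, thread count, and script name are illustrative assumptions):

```bash
# Sketch of a batch Octave run relying on parallel MKL for matrix
# operations; module name, thread count, and script are illustrative.
module load octave                 # assumed module name
export OMP_NUM_THREADS=16          # let MKL use all cores of the node
octave -q ./matrix_benchmark.m     # hypothetical compute-heavy script
```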
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
index 935fc4f73..e4ca47286 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
@@ -12,7 +12,7 @@ Trilinos is a collection of software packages for the numerical solution of larg
 Current Trilinos installation on ANSELM contains (among others) the following main packages

 - **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
-- **Tpetra** - next-generation linear algebra package. Supports 64 bit indexing and arbitrary data type using C++ templates.
+- **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data types using C++ templates.
 - **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
 - **Amesos** - interface to direct sparse solvers.
 - **Anasazi** - framework for large-scale eigenvalue algorithms.

diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index ce6acc114..545bc2d4a 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -354,7 +354,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
     SSHFS: The storage will be mounted like a local hard drive

-The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard disk drive. Files can be than copied in and out in a usual fashion.
+SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.

 First, create the mount point

diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index d3813d410..ae1e38e57 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -3,7 +3,7 @@ Resources Allocation Policy
 Resources Allocation Policy
 ---------------------------
-The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The Fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
+The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information is available in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:

 !!! Note "Note"
     Check the queue status at https://extranet.it4i.cz/rsweb/salomon/
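The SSHFS access to CESNET Storage corrected in the storage.md hunk above can be sketched as a short session (the hostname, remote path, and mount point are illustrative assumptions, not taken from the patch):

```bash
# Sketch of mounting CESNET storage over SSHFS; hostname, remote path,
# and mount point are illustrative assumptions.
mkdir -p ~/cesnet                            # create the mount point
sshfs user@ssh.du1.cesnet.cz: ~/cesnet       # mount like a local drive
cp results.tar.gz ~/cesnet/                  # copy files in the usual fashion
fusermount -u ~/cesnet                       # unmount when finished
```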
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index e57e8fc31..8c4058cb7 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -56,7 +56,7 @@ If multiple clients try to read and write the same part of a file at the same ti
 There is default stripe configuration for Salomon Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:

-1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of kB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre filesystems
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1 MB for all Salomon Lustre filesystems
 2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
 3. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.

@@ -356,7 +356,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
     SSHFS: The storage will be mounted like a local hard drive

-The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard disk drive. Files can be than copied in and out in a usual fashion.
+SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.

 First, create the mount point
--
GitLab
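The stripe parameters corrected in the salomon/storage.md hunk map onto the lfs utility roughly as follows (a sketch; the target directory is illustrative, and newer Lustre clients spell the size flag -S rather than the older -s):

```bash
# Sketch of applying the stripe settings described above; the target
# directory is illustrative. Newer Lustre clients use -S instead of -s.
lfs setstripe -s 1m -c 4 /scratch/work/user/$USER/striped_dir  # 1 MB stripes across 4 OSTs
lfs getstripe /scratch/work/user/$USER/striped_dir             # verify the settings
```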