From 2048576f53502ba8d03aad0881d9d0ff2302cf2f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Krup=C4=8D=C3=ADk?= <lukas.krupcik@vsb.cz>
Date: Fri, 27 Jan 2017 12:13:54 +0100
Subject: [PATCH] revision

---
 .../software/chemistry/nwchem.md       |  2 --
 .../software/debuggers/allinea-ddt.md  |  4 ++--
 .../software/debuggers/total-view.md   |  6 +++---
 .../software/intel-xeon-phi.md         | 14 +++++++-------
 .../software/isv_licenses.md           |  2 +-
 .../software/nvidia-cuda.md            |  4 ++--
 6 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index b05ff3eb3..9f09fe794 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -1,7 +1,5 @@
 # NWChem
 
-**High-Performance Computational Chemistry**
-
 ## Introduction
 
 NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index 27863605d..07d41915d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -48,8 +48,8 @@ $ mpif90 -g -O0 -o test_debug test.f
 Before debugging, you need to compile your code with theses flags:
 
 !!! note
-    - **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    - **O0** : Suppress all optimizations.
+    \* **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    \* **O0** : Suppress all optimizations.
 
 ## Starting a Job With DDT
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index ca526f248..b4f710675 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -1,6 +1,6 @@
 # Total View
 
-\##TotalView is a GUI-based source code multi-process, multi-thread debugger.
+TotalView is a GUI-based source code multi-process, multi-thread debugger.
 
 ## License and Limitations for Anselm Users
 
@@ -58,8 +58,8 @@ Compile the code:
 Before debugging, you need to compile your code with theses flags:
 
 !!! note
-    - **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    - **-O0** : Suppress all optimizations.
+    \* **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    \* **-O0** : Suppress all optimizations.
 
 ## Starting a Job With TotalView
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
index 453c57304..9d6700e30 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
@@ -632,7 +632,7 @@ The output should be similar to:
 
 There are two ways how to execute an MPI code on a single coprocessor: 1.) lunch the program using "**mpirun**" from the coprocessor; or 2.) lunch the task using "**mpiexec.hydra**" from a host.
 
-**Execution on coprocessor**
+#### Execution on coprocessor
 
 Similarly to execution of OpenMP programs in native mode, since the environmental module are not supported on MIC, user has to setup paths to Intel MPI libraries and binaries manually. One time setup can be done by creating a "**.profile**" file in user's home directory. This file sets up the environment on the MIC automatically once user access to the accelerator through the SSH.
 
@@ -651,8 +651,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
 ```
 
 !!! note
-    - this file sets up both environmental variable for both MPI and OpenMP libraries.
-    - this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    \* this file sets up both environmental variable for both MPI and OpenMP libraries.
+    \* this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
 
 To access a MIC accelerator located on a node that user is currently connected to, use:
 
@@ -681,7 +681,7 @@ The output should be similar to:
 Hello world from process 0 of 4 on host cn207-mic0
 ```
 
-**Execution on host**
+#### Execution on host
 
 If the MPI program is launched from host instead of the coprocessor, the environmental variables are not set using the ".profile" file. Therefore user has to specify library paths from the command line when calling "mpiexec".
 
@@ -704,8 +704,8 @@ or using mpirun
 ```
 
 !!! note
-    - the full path to the binary has to specified (here: `>~/mpi-test-mic`)
-    - the `LD_LIBRARY_PATH` has to match with Intel MPI module used to compile the MPI code
+    \* the full path to the binary has to specified (here: `>~/mpi-test-mic`)
+    \* the `LD_LIBRARY_PATH` has to match with Intel MPI module used to compile the MPI code
 
 The output should be again similar to:
 
@@ -726,7 +726,7 @@ A simple test to see if the file is present is to execute:
 /bin/pmi_proxy
 ```
 
-**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
+#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes
 
 To get access to multiple nodes with MIC accelerator, user has to use PBS to allocate the resources. To start interactive session, that allocates 2 compute nodes = 2 MIC accelerators run qsub command with following parameters:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index 3e9dca22e..9868056e5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -69,7 +69,7 @@ Names of applications (APP):
 
 To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
 
-**Application and List of provided features**
+### Application and List of provided features
 
 * **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
 * **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
index 375d3732c..6291a4f29 100644
--- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -1,6 +1,6 @@
 # NVIDIA CUDA
 
-## Guide to NVIDIA CUDA Programming and GPU Usage
+Guide to NVIDIA CUDA Programming and GPU Usage
 
 ## CUDA Programming on Anselm
 
@@ -198,7 +198,7 @@ To run the code use interactive PBS session to get access to one of the GPU acce
 The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS").
 
-**cuBLAS example: SAXPY**
+#### cuBLAS example: SAXPY
 
 SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification.
-- 
GitLab
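
For reference, the cuBLAS SAXPY example that the nvidia-cuda.md hunk above refers to follows the usual cuBLAS v2 host-pointer pattern. The sketch below only illustrates that pattern; it is not the file shipped in the repository, and the vector length, data values and file name are assumptions.

```
/* Minimal cuBLAS SAXPY sketch (illustrative only): computes y = alpha * x + y.
   N, the data values and the file name are assumptions, not taken from the repository. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N 5

int main(void)
{
    float x[N], y[N];
    const float alpha = 2.0f;
    for (int i = 0; i < N; i++) {
        x[i] = (float)i; /* x = 0, 1, 2, 3, 4 */
        y[i] = 1.0f;     /* y = 1, 1, 1, 1, 1 */
    }

    /* allocate device buffers for x and y */
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, N * sizeof(float));
    cudaMalloc((void **)&d_y, N * sizeof(float));

    /* create a cuBLAS context and copy the host vectors to the device */
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSetVector(N, sizeof(float), x, 1, d_x, 1);
    cublasSetVector(N, sizeof(float), y, 1, d_y, 1);

    /* SAXPY overwrites d_y with alpha * d_x + d_y */
    cublasSaxpy(handle, N, &alpha, d_x, 1, d_y, 1);

    /* copy the result back and print it; expected output: 1, 3, 5, 7, 9 */
    cublasGetVector(N, sizeof(float), d_y, 1, y, 1);
    for (int i = 0; i < N; i++)
        printf("y[%d] = %f\n", i, y[i]);

    /* release resources */
    cublasDestroy(handle);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

Assuming the CUDA environment module is loaded, a file like this would typically be built with something like `nvcc cublas_saxpy.c -lcublas -o cublas_saxpy` and run inside an interactive PBS job on a GPU-accelerated node, as the surrounding documentation describes.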