Commit 266704e5 authored by Filip Staněk

Typos and correction of LibSCI naming.

parent 06299d7b
1 merge request: !480 Typos and correction of LibSci naming.
Pipeline #40674 failed
@@ -24,7 +24,7 @@ see [Lorenz Compiler performance benchmark][a].
## 2. Use BLAS Library
It is important to use the BLAS library that performs well on AMD processors.
-To combine the optimizations for the general CPU code and have the most efficient BLAS routines we recommend the combination of lastest Intel Compiler suite, with Cray's Scientific Library bundle (LIBSCI). When using the Intel Compiler suite includes also support for efficient MPI implementation utilizing Intel MPI library over the Infiniband interconnect.
+To combine the optimizations for general CPU code with the most efficient BLAS routines, we recommend the combination of the latest Intel Compiler suite with Cray's Scientific Library bundle (LibSci). The Intel Compiler suite also includes support for an efficient MPI implementation utilizing the Intel MPI library over the InfiniBand interconnect.
For both compilation and runtime of the compiled code, use:
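The exact module commands sit outside this diff hunk; as an illustration only, a typical setup might look like the following (the module names are assumptions, not taken from this page; check `ml av` on the cluster for the real ones):

```code
# Illustrative only: the module names below are assumptions.
# The LibSci module is expected to set CRAY_LIBSCI_PREFIX_DIR, which the
# compile commands further down rely on; `ml av` lists what is installed.
ml intel          # Intel compiler suite incl. Intel MPI (assumed name)
ml cray-libsci    # Cray Scientific Library / LibSci (assumed name)
```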
@@ -39,10 +39,10 @@ There are usually two standard situations for how to compile and run the code
### OpenMP Without MPI
-To compile the code against the LIBSCI, without MPI, but still enabling OpenMP run over multiple cores use:
+To compile the code against LibSci without MPI, but still enabling an OpenMP run over multiple cores, use:
```code
-icx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SURCE_CODE.c -lsci_intel_mp
+icx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c -lsci_intel_mp
```
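For reference, the SOURCE_CODE.c referred to in the compile commands can be any C program that calls a BLAS routine. A minimal sketch is shown below; the matrix size and the use of the Fortran `dgemm_` symbol are illustrative assumptions, not content from this page:

```code
/* Minimal sketch of SOURCE_CODE.c: one dense matrix-matrix multiply.
 * Linking with -lsci_intel_mp gives the threaded LibSci, so this single
 * dgemm_ call spreads over the cores allowed by OMP_NUM_THREADS. */
#include <stdio.h>
#include <stdlib.h>

/* Fortran BLAS interface provided by the linked BLAS library. */
void dgemm_(const char *transa, const char *transb,
            const int *m, const int *n, const int *k,
            const double *alpha, const double *a, const int *lda,
            const double *b, const int *ldb,
            const double *beta, double *c, const int *ldc);

int main(void)
{
    const int n = 4096;                       /* illustrative matrix size */
    const double alpha = 1.0, beta = 0.0;
    double *a = malloc((size_t)n * n * sizeof(double));
    double *b = malloc((size_t)n * n * sizeof(double));
    double *c = malloc((size_t)n * n * sizeof(double));
    for (long i = 0; i < (long)n * n; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* C = alpha * A * B + beta * C */
    dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);

    printf("c[0] = %f (expected %f)\n", c[0], 2.0 * n);
    free(a); free(b); free(c);
    return 0;
}
```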
To run the resulting binary, use:
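The exact command is not shown in this hunk; a plausible single-node invocation, mirroring the MPI examples later on the page (an assumption, not the page's own command), would be:

```code
# Assumed invocation: let the threaded LibSci use all 128 cores of one node.
OMP_NUM_THREADS=128 OMP_PROC_BIND=true ${HOME}/BINARY.x
```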
@@ -55,10 +55,10 @@ This enables an effective run over all 128 cores available on a single Karolina comp
### OpenMP With MPI
-To compile the code against the LIBSCI, with MPI, use:
+To compile the code against LibSci with MPI, use:
```code
-mpiicx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SURCE_CODE.c -lsci_intel_mp -lsci_intel_mpi_mp
+mpiicx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c -lsci_intel_mp -lsci_intel_mpi_mp
```
To run the resulting binary, use:
@@ -69,7 +69,7 @@ OMP_NUM_THREADS=64 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
This example runs BINARY.x, placed in ${HOME}, as 2 MPI processes, each using 64 cores of a single socket of a single node.
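To verify that the two ranks really end up on separate sockets with 64 threads each, the standard OpenMP and Intel MPI diagnostics can be switched on (OMP_DISPLAY_AFFINITY and I_MPI_DEBUG are runtime settings of those libraries, not something this page prescribes):

```code
# Print thread affinity (OpenMP 5.0) and MPI rank pinning (Intel MPI) to
# confirm the one-rank-per-socket, 64-threads-per-rank layout described above.
OMP_NUM_THREADS=64 OMP_PROC_BIND=true OMP_DISPLAY_AFFINITY=true \
I_MPI_DEBUG=4 mpirun -n 2 ${HOME}/BINARY.x
```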
-Another example would be to run a job on 2 full nodes, utilizing 128 cores on each (so 256 cores in total) and letting the LIBSCI efficiently placing the BLAS routines across the allocated CPU sockets:
+Another example would be to run a job on 2 full nodes, utilizing 128 cores on each (256 cores in total) and letting LibSci efficiently place the BLAS routines across the allocated CPU sockets; a batch-script sketch for such an allocation follows the command below:
```code
OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
```
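A 2-node allocation like the one assumed above has to come from the batch system. The following is only a sketch, assuming a Slurm-managed cluster; the partition and account names are placeholders, and the module names are the same assumptions as earlier:

```code
#!/bin/bash
# Sketch only: Slurm is assumed; PARTITION and PROJECT_ID are placeholders.
#SBATCH --nodes=2                 # 2 full nodes
#SBATCH --ntasks-per-node=1       # one MPI rank per node
#SBATCH --cpus-per-task=128       # all 128 cores per rank for OpenMP/LibSci
#SBATCH --partition=PARTITION
#SBATCH --account=PROJECT_ID

ml intel cray-libsci              # assumed module names, see the note above

OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
```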