
Adjusting the optimal compilation of codes on Karolina with respect to the availability of a functional...

Merged Filip Staněk requested to merge sta03-master-patch-75356 into master
1 file: +43, −15
## 2. Use BLAS Library

It is important to use a BLAS library that performs well on AMD processors.
To combine the optimizations for general CPU code with the most efficient BLAS routines, we recommend the latest Intel Compiler suite together with Cray's Scientific Library bundle (LIBSCI). The Intel Compiler suite also includes support for an efficient MPI implementation, utilizing the Intel MPI library over the InfiniBand interconnect.

For compilation, as well as for the runtime of the compiled code, use:

```code
ml PrgEnv-intel
ml cray-pmi/6.1.14
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CRAY_LD_LIBRARY_PATH:$CRAY_LIBSCI_PREFIX_DIR/lib:/opt/cray/pals/1.3.2/lib
```
 
There are two standard situations for compiling and running the code.

### OpenMP without MPI

To compile the code against LIBSCI without MPI, but still enabling OpenMP to run over multiple cores, use:
```code
icx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c -lsci_intel_mp
```
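The guide does not prescribe the contents of SOURCE_CODE.c; as a minimal sketch, a program calling a threaded DGEMM such as the one below would exercise the LIBSCI BLAS (the `dgemm_` Fortran-style prototype is an assumption that holds for LIBSCI and most BLAS builds):

```code
#include <stdio.h>
#include <stdlib.h>

/* Fortran-style BLAS symbol; the trailing underscore is an assumption
   that holds for LIBSCI and most BLAS builds. */
void dgemm_(const char *transa, const char *transb, const int *m, const int *n,
            const int *k, const double *alpha, const double *a, const int *lda,
            const double *b, const int *ldb, const double *beta, double *c,
            const int *ldc);

int main(void) {
    const int n = 2000;                      /* matrix dimension */
    const double alpha = 1.0, beta = 0.0;
    double *a = malloc(sizeof(double) * n * n);
    double *b = malloc(sizeof(double) * n * n);
    double *c = malloc(sizeof(double) * n * n);
    for (int i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* C = alpha*A*B + beta*C; LIBSCI distributes this over the OpenMP threads. */
    dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);

    printf("c[0] = %f\n", c[0]);             /* expected: 2*n = 4000.0 */
    free(a); free(b); free(c);
    return 0;
}
```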
To run the resulting binary, use:
```code
OMP_NUM_THREADS=128 OMP_PROC_BIND=true ./BINARY.x
```

This enables an effective run over all 128 cores available on a single Karolina compute node.
 
### OpenMP with MPI
 
To compile the code against LIBSCI with MPI, use:
```code
mpiicx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c -lsci_intel_mp -lsci_intel_mpi_mp
```
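As a sketch only, an MPI-parallel variant of the same DGEMM test might look as follows; requesting `MPI_THREAD_FUNNELED` is an assumption that suffices here because only the BLAS library spawns threads:

```code
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Fortran-style BLAS symbol, as in the previous sketch. */
void dgemm_(const char *transa, const char *transb, const int *m, const int *n,
            const int *k, const double *alpha, const double *a, const int *lda,
            const double *b, const int *ldb, const double *beta, double *c,
            const int *ldc);

int main(int argc, char **argv) {
    int provided, rank;
    /* Threaded MPI is needed because each rank runs a multi-threaded DGEMM. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 2000;
    const double alpha = 1.0, beta = 0.0;
    double *a = malloc(sizeof(double) * n * n);
    double *b = malloc(sizeof(double) * n * n);
    double *c = malloc(sizeof(double) * n * n);
    for (int i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* Each rank runs its own threaded DGEMM on the cores it is bound to. */
    dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);
    printf("rank %d: c[0] = %f\n", rank, c[0]);

    free(a); free(b); free(c);
    MPI_Finalize();
    return 0;
}
```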
To run the resulting binary, use:
```code
OMP_NUM_THREADS=64 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
```

This example runs BINARY.x, placed in ${HOME}, as 2 MPI processes, each using 64 cores of a single socket of a single node.
 
 
Another example would be to run a job on 2 full nodes, utilizing 128 cores on each (256 cores in total) and letting LIBSCI efficiently place the BLAS routines across the allocated CPU sockets:
 
```code
OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
```
 
This assumes you have allocated 2 full nodes on Karolina using SLURM directives, e.g. in a submission script:
 
```code
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 128
```
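Putting the pieces together, a complete submission script might look like the following sketch; the job name, walltime, account, and partition values are illustrative placeholders, not prescribed values:

```code
#!/bin/bash
#SBATCH --job-name blas-dgemm        # placeholder
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 128
#SBATCH --time 01:00:00              # placeholder walltime
#SBATCH --account PROJECT-ID         # placeholder: your project ID
#SBATCH --partition qcpu             # assumption: standard CPU partition

ml PrgEnv-intel
ml cray-pmi/6.1.14
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CRAY_LD_LIBRARY_PATH:$CRAY_LIBSCI_PREFIX_DIR/lib:/opt/cray/pals/1.3.2/lib

OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x
```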
!!! note
    **Don't forget** to ensure before the run that you have the correct modules loaded and the LD_LIBRARY_PATH environment variable set as shown above (e.g. as part of your SLURM submission script).

!!! note
    Most MPI libraries do the binding automatically. The binding of MPI ranks can be inspected for any MPI by running `$ mpirun -n num_of_ranks numactl --show`. However, if the ranks spawn threads, binding of these threads should be done via the environment variables described above.

The choice of BLAS library and its performance may be verified with our benchmark; see [Lorenz BLAS performance benchmark](https://code.it4i.cz/jansik/lorenz/-/blob/main/README.md).