Commit 220960c4 authored by Lukáš Krupčík

up

parent ddeb4d8a
@@ -3,6 +3,7 @@
The Salomon cluster provides the following elements of the Intel Parallel Studio XE
Intel Parallel Studio XE
* Intel Compilers
* Intel Debugger
* Intel MKL Library
......
@@ -631,7 +631,7 @@ The output should be similar to:
There are two ways to execute an MPI code on a single coprocessor: 1.) launch the program using "**mpirun**" from the coprocessor; or 2.) launch the task using "**mpiexec.hydra**" from a host.
-**Execution on coprocessor**
+#### Execution on coprocessor
Similarly to the execution of OpenMP programs in native mode, since environment modules are not supported on the MIC, the user has to set up paths to the Intel MPI libraries and binaries manually. A one-time setup can be done by creating a "**.profile**" file in the user's home directory. This file sets up the environment on the MIC automatically once the user accesses the accelerator through SSH.
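The contents of the "**.profile**" file are collapsed in this diff view. A minimal sketch of what such a file typically contains follows; the installation paths under `/apps/intel` are illustrative assumptions and have to match the Intel compiler and Intel MPI versions loaded on the host:

```bash
# ~/.profile on the MIC card -- illustrative paths, adjust to the versions
# of the Intel compiler and Intel MPI actually loaded on the host
export PATH=/usr/bin:/usr/sbin:/bin:/sbin

# Intel OpenMP runtime built for the MIC architecture
export LD_LIBRARY_PATH=/apps/intel/composer_xe/compiler/lib/mic:$LD_LIBRARY_PATH

# Intel MPI libraries and launcher binaries built for the MIC
export LD_LIBRARY_PATH=/apps/intel/impi/mic/lib:$LD_LIBRARY_PATH
export PATH=/apps/intel/impi/mic/bin:$PATH
```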
@@ -650,8 +650,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
```
!!! note
-    - this file sets up both environmental variable for both MPI and OpenMP libraries.
-    - this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    \* this file sets up both environmental variable for both MPI and OpenMP libraries.
+    \* this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
To access a MIC accelerator located on a node that the user is currently connected to, use:
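The command itself is collapsed in this view; on systems of this kind the first coprocessor of a node is typically reachable by a short hostname. A hedged sketch (the hostname `mic0` is an assumption; `~/mpi-test-mic` is the test binary referenced below):

```bash
$ ssh mic0                      # log in to the first coprocessor of the current node
$ mpirun -np 4 ~/mpi-test-mic   # launch 4 MPI processes directly on the card
```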
@@ -680,7 +680,7 @@ The output should be similar to:
Hello world from process 0 of 4 on host cn207-mic0
```
-**Execution on host**
+#### Execution on host
If the MPI program is launched from the host instead of the coprocessor, the environmental variables are not set using the ".profile" file. Therefore the user has to specify the library paths from the command line when calling "mpiexec".
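The exact invocation is collapsed in this view. A hedged sketch of launching the coprocessor binary from the host while passing the library path explicitly (the Intel MPI installation path is an illustrative assumption):

```bash
# pass the MIC-side Intel MPI library path to the remote processes explicitly
$ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/mic/lib \
    -host mic0 -n 4 ~/mpi-test-mic
```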
@@ -703,8 +703,8 @@ or using mpirun
```
!!! note
-    - the full path to the binary has to specified (here: "**>~/mpi-test-mic**")
-    - the LD_LIBRARY_PATH has to match with Intel MPI module used to compile the MPI code
+    \* the full path to the binary has to specified (here: "**>~/mpi-test-mic**")
+    \* the LD_LIBRARY_PATH has to match with Intel MPI module used to compile the MPI code
The output should be again similar to:
@@ -725,7 +725,7 @@ A simple test to see if the file is present is to execute:
/bin/pmi_proxy
```
-**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
+#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes
To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes = 2 MIC accelerators, run the qsub command with the following parameters:
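The qsub command itself is collapsed in this view. A hedged example of such an interactive allocation follows; the queue name, the select statement, and the project ID are illustrative and site-specific:

```bash
# request 2 nodes with MIC accelerators for an interactive session (illustrative resource string)
$ qsub -I -q qprod -l select=2:ncpus=24:accelerator=True -A PROJECT-ID
```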
@@ -885,7 +885,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
!!! note
    At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.
-**Using the PBS automatically generated node-files**
+#### Using the PBS automatically generated node-files
PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
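As a hedged illustration (the node-file path below is a placeholder), such a PBS-generated node-file can be passed directly to the Intel MPI launcher instead of a hand-written host list:

```bash
# use a PBS-generated node-file instead of listing hosts manually
$ mpirun -machinefile <node-file> -n 4 ~/mpi-test-mic
```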
......
# Java
-**Java on the cluster**
Java is available on the cluster. Activate Java by loading the Java module:
```bash
......
@@ -14,7 +14,7 @@ Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals
## Modules
-**The R version 3.1.1 is available on the cluster, along with GUI interface Rstudio**
+The R version 3.1.1 is available on the cluster, along with GUI interface Rstudio
| Application | Version | module |
| ----------- | ----------------- | ------------------- |
@@ -347,7 +347,7 @@ mpi.apply Rmpi example:
mpi.quit()
```
-The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply(), ** function call. The package parallel [example](r/#package-parallel)[above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
+The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()**, function call. The package parallel [example](r/#package-parallel) [above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
Execute the example as:
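The command is collapsed in this view; a typical launch of an Rmpi script under Intel MPI looks roughly like the following (the script name is hypothetical):

```bash
# run the Rmpi mpi.apply example under the MPI launcher; pi_mpi_apply.R is a placeholder name
$ mpirun R --slave --no-save --no-restore -f pi_mpi_apply.R
```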
......