Commit ca8e9c89 authored by David Hrbáč

Auto capitalize

parent 5c9d7149
@@ -632,7 +632,7 @@ The output should be similar to:

There are two ways to execute an MPI code on a single coprocessor: 1.) launch the program using "**mpirun**" from the
coprocessor; or 2.) launch the task using "**mpiexec.hydra**" from a host.

-#### Execution on coprocessor
+#### Execution on Coprocessor

Similarly to the execution of OpenMP programs in native mode, since environment modules are not supported on the MIC, the user has to set up the paths to the Intel MPI libraries and binaries manually. A one-time setup can be done by creating a "**.profile**" file in the user's home directory. This file sets up the environment on the MIC automatically once the user accesses the accelerator through SSH.
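For illustration, a minimal sketch of such a "**.profile**"; the Intel MPI installation path under /apps is an assumed example and has to match the version actually installed on the cluster:

```
# ~/.profile on the MIC card -- assumed example paths to the MIC build of
# Intel MPI; adjust to the version actually installed under /apps
export PATH=/apps/intel/impi/4.1.1.036/mic/bin/:$PATH
export LD_LIBRARY_PATH=/apps/intel/impi/4.1.1.036/mic/lib/:$LD_LIBRARY_PATH
```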
@@ -681,7 +681,7 @@ The output should be similar to:

Hello world from process 0 of 4 on host cn207-mic0
```

-#### Execution on host
+#### Execution on Host

If the MPI program is launched from the host instead of the coprocessor, the environment variables are not set using the ".profile" file. Therefore, the user has to specify the library paths from the command line when calling "mpiexec".
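A hedged sketch of such a call; the library path, the mic0 hostname, and the binary name are assumed examples only:

```
$ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/ \
    -host mic0 -n 4 ~/mpi-test-mic
```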
@@ -726,7 +726,7 @@ A simple test to see if the file is present is to execute:

/bin/pmi_proxy
```

-#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
+#### Execution on Host - MPI Processes Distributed Over Multiple Accelerators on Multiple Nodes**

To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes (= 2 MIC accelerators), run the qsub command with the following parameters:
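As an illustrative sketch only; the queue name, resource specification, and project ID below are assumptions and must be adapted per the cluster documentation:

```
$ qsub -I -q qprod -l select=2:ncpus=24:accelerator=True -A PROJECT_ID
```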
@@ -886,7 +886,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two

!!! note
    At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.

-### Using the PBS automatically generated node-files
+### Using the PBS Automatically Generated Node-Files

PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
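Such node-files can be passed to mpirun as machine files; a sketch using the standard $PBS_NODEFILE (the binary name is an assumed example):

```
$ cat $PBS_NODEFILE                               # list the hosts allocated by PBS
$ mpirun -machinefile $PBS_NODEFILE -n 8 ~/mpi-test
```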
......
@@ -69,7 +69,7 @@ Names of applications (APP):

To get the FEATUREs of a license, take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:

-### Application and List of provided features
+### Application and List of Provided Features

* **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
* **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
......
@@ -279,7 +279,7 @@ Optimized network setup with sharing and port forwarding

### Advanced Networking

-#### Internet access
+#### Internet Access

Sometimes your virtual machine needs access to the internet (to install software, updates, software activation, etc.). We suggest a solution using Virtual Distributed Ethernet (VDE) enabled QEMU with SLIRP running on the login node, tunneled to the compute node. Be aware that this setup has very low performance, the worst of all the solutions described.
@@ -321,7 +321,7 @@ Optimized setup

$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev vde,id=net0,sock=/tmp/sw0
```
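The login-node side of this setup might be started as sketched below, assuming the vde2 tools are available; the socket path has to match the one passed to QEMU:

```
$ vde_switch -sock /tmp/sw0 -daemon          # virtual switch providing /tmp/sw0
$ slirpvde --sock /tmp/sw0 --dhcp --daemon   # user-mode NAT (SLIRP) attached to the switch
```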
-#### TAP interconnect
+#### TAP Interconnect

Both the user and vde network back-ends have low performance. For a fast interconnect (10 Gbit/s and more) between the compute node (host) and the virtual machine (guest), we suggest using the Linux kernel TAP device.
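A minimal sketch of the host-side TAP setup, assuming privileged access on the compute node; the device name, address range, and guest-side options are illustrative only:

```
$ ip tuntap add dev tap0 mode tap user $USER     # create the TAP device for the current user
$ ip addr add 192.168.1.1/24 dev tap0            # example host-side address
$ ip link set dev tap0 up
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net1 \
    -netdev tap,id=net1,ifname=tap0,script=no,downscript=no
```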
......
@@ -4,7 +4,7 @@ Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 c

The cluster runs the [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)

-## Water-cooled Compute Nodes With MIC Accelerator
+## Water-Cooled Compute Nodes With MIC Accelerator

![](../img/salomon)
......
@@ -48,7 +48,7 @@ To check whether your proxy certificate is still valid (by default it's valid 12

To access the Salomon cluster, two login nodes running the GSI SSH service are available. The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).

-#### Access from PRACE network:
+#### Access From PRACE Network:

It is recommended to use the single DNS name salomon-prace.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
@@ -70,7 +70,7 @@ When logging from other PRACE system, the prace_service script can be used:

$ gsissh `prace_service -i -s salomon`
```
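A direct login to one of the login nodes might look as follows; the port number is an assumption and should be checked against the cluster documentation:

```
$ gsissh -p 2222 salomon-prace.it4i.cz
```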
-#### Access from public Internet:
+#### Access From Public Internet:

It is recommended to use the single DNS name salomon.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
@@ -127,7 +127,7 @@ Apart from the standard mechanisms, for PRACE users to transfer data to/from Sal

There is one control server and three backend servers for striping and/or backup in case one of them fails.

-### Access from PRACE network
+### Access From PRACE Network

| Login address | Port | Node role |
| ----------------------------- | ---- | --------------------------- |
@@ -160,7 +160,7 @@ Or by using prace_service script:

$ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```

-### Access from public Internet
+### Access From Public Internet

| Login address | Port | Node role |
| ----------------------- | ---- | --------------------------- |
......
@@ -631,7 +631,7 @@ The output should be similar to:

There are two ways to execute an MPI code on a single coprocessor: 1.) launch the program using "**mpirun**" from the
coprocessor; or 2.) launch the task using "**mpiexec.hydra**" from a host.

-#### Execution on coprocessor
+#### Execution on Coprocessor

Similarly to the execution of OpenMP programs in native mode, since environment modules are not supported on the MIC, the user has to set up the paths to the Intel MPI libraries and binaries manually. A one-time setup can be done by creating a "**.profile**" file in the user's home directory. This file sets up the environment on the MIC automatically once the user accesses the accelerator through SSH.
@@ -680,7 +680,7 @@ The output should be similar to:

Hello world from process 0 of 4 on host cn207-mic0
```

-#### Execution on host
+#### Execution on Host

If the MPI program is launched from the host instead of the coprocessor, the environment variables are not set using the ".profile" file. Therefore, the user has to specify the library paths from the command line when calling "mpiexec".
@@ -725,7 +725,7 @@ A simple test to see if the file is present is to execute:

/bin/pmi_proxy
```

-#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes
+#### Execution on Host - MPI Processes Distributed Over Multiple Accelerators on Multiple Nodes

To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes (= 2 MIC accelerators), run the qsub command with the following parameters:
@@ -885,7 +885,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two

!!! note
    At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.

-#### Using the PBS automatically generated node-files
+#### Using the PBS Automatically Generated Node-Files

PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
......