diff --git a/docs.it4i/software/intel/intel-xeon-phi-salomon.md b/docs.it4i/software/intel/intel-xeon-phi-salomon.md
index 3660a0ddb0d51dcfdd20968661d1ac53fecef48a..16e2490d3ea9656bcd57f06a13782931e6649a20 100644
--- a/docs.it4i/software/intel/intel-xeon-phi-salomon.md
+++ b/docs.it4i/software/intel/intel-xeon-phi-salomon.md
@@ -718,8 +718,8 @@ Hello world from process 0 of 4 on host r38u31n1000
 
 ### Coprocessor-Only Model
 
-There are two ways how to execute an MPI code on a single coprocessor: 1) launch the program using "**mpirun**" from the
-coprocessor; or 2) launch the task using "**mpiexec.hydra**" from a host.
+There are two ways how to execute an MPI code on a single coprocessor: 1) launch the program using `mpirun` from the
+coprocessor; or 2) launch the task using `mpiexec.hydra` from a host.
 
 #### Execution on Coprocessor
 
@@ -815,7 +815,7 @@ Hello world from process 0 of 4 on host r38u31n1000-mic0
 ```
 
 !!! hint
-    **"mpiexec.hydra"** requires a file on the MIC filesystem. If the file is missing, contact the system administrators.
+    `mpiexec.hydra` requires a file on the MIC filesystem. If the file is missing, contact the system administrators.
 
 A simple test to see if the file is present is to execute:
 
@@ -854,7 +854,7 @@ This output means that the PBS allocated nodes cn204 and cn205, which means that
 - to connect to the accelerator on the first node from the first node: `$ ssh r25u25n710-mic0` or `$ ssh mic0`
 - to connect to the accelerator on the second node from the first node: `$ ssh r25u25n711-mic0`
 
-At this point, we expect that the correct modules are loaded and the binary is compiled. For parallel execution, the mpiexec.hydra is used. Again the first step is to tell mpiexec that the MPI can be executed on MIC accelerators by setting up the environmental variable `I_MPI_MIC`; do not forget to have correct FABRIC and PROVIDER defined.
+At this point, we expect that the correct modules are loaded and the binary is compiled. For parallel execution, `mpiexec.hydra` is used. Again the first step is to tell mpiexec that the MPI can be executed on MIC accelerators by setting up the environmental variable `I_MPI_MIC`; do not forget to have correct FABRIC and PROVIDER defined.
 
 ```console
 $ export I_MPI_MIC=1
@@ -870,7 +870,7 @@ $ mpirun -genv LD_LIBRARY_PATH $MIC_LD_LIBRARY_PATH \
 : -host r25u26n711-mic0 -n 6 ~/mpi-test-mic
 ```
 
-or using mpirun:
+or using `mpirun`:
 
 ```console
 $ mpirun -genv LD_LIBRARY_PATH \
@@ -906,7 +906,7 @@ $ mpirun -genv LD_LIBRARY_PATH $MIC_LD_LIBRARY_PATH \
 
 In a symmetric mode, MPI programs are executed on both the host computer(s) and the MIC accelerator(s). Since MIC has a different architecture and requires different binary file produced by the Intel compiler, two different files have to be compiled before the MPI program is executed.
 
-In the previous section, we have compiled two binary files, one for hosts "**mpi-test**" and one for MIC accelerators "**mpi-test-mic**". These two binaries can be executed at once using mpiexec.hydra:
+In the previous section, we have compiled two binary files, one for hosts "**mpi-test**" and one for MIC accelerators "**mpi-test-mic**". These two binaries can be executed at once using `mpiexec.hydra`:
 
 ```console
 $ mpirun \
@@ -944,7 +944,7 @@ In addition, if a naming convention is set in a way that the name of the binary
 $ export I_MPI_MIC_POSTFIX=-mic
 ```
 
-To run the MPI code using mpirun and the machine file "hosts_file_mix", use:
+To run the MPI code using `mpirun` and the machine file "hosts_file_mix", use:
 
 ```console
 $ mpirun \