Commit 11de3560 authored by Lukáš Krupčík

bash is for bash scripts

parent 1d50a3eb
Pipeline #2148 passed with stages in 59 seconds
@@ -4,7 +4,9 @@
After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file:
```console
$ cat ./.bashrc
# ./.bashrc
# Source global definitions
@@ -53,7 +55,7 @@ loading the octave module will set up paths and environment variables of your ac
To check loaded modules, use:
```console
$ module list # or: ml
```
@@ -68,7 +68,7 @@ This syntax will start the ANSYS FLUENT job under PBS Professional using the qsu
The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
```console
input="example_small.flin"
case="Small-1.65m.cas"
fluent_args="3d -pmyrinet"
@@ -86,9 +86,8 @@ $ ./cblas_dgemmx.x data/cblas_dgemmx.d
In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with the icc -mkl option. Using the -mkl option is equivalent to:
```console
$ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
```
In this example, we compile and link the cblas_dgemm example, using the LP64 interface to threaded MKL and the Intel OMP threads implementation.
@@ -560,14 +560,14 @@ $ module load intel
To compile an MPI code for the host, use:
```console
$ mpiicc -xhost -o mpi-test mpi-test.c
```
To compile the same code for the Intel Xeon Phi architecture, use:
```console
$ mpiicc -mmic -o mpi-test-mic mpi-test.c
```
An example of a basic MPI "hello world" program in C, which can be executed on both the host and the Xeon Phi (it can be copied and pasted directly into a .c file):
@@ -612,13 +612,13 @@ Intel MPI for the Xeon Phi coprocessors offers different MPI programming models:
In this case all environment variables are set by modules, so to execute the compiled MPI program on a single node, use:
```console
$ mpirun -np 4 ./mpi-test
```
The output should be similar to:
```console
Hello world from process 1 of 4 on host cn207
Hello world from process 3 of 4 on host cn207
Hello world from process 2 of 4 on host cn207
@@ -634,8 +634,8 @@ coprocessor; or 2.) launch the task using "**mpiexec.hydra**" from a host.
Similarly to the execution of OpenMP programs in native mode, since environment modules are not supported on the MIC, the user has to set up paths to the Intel MPI libraries and binaries manually. A one-time setup can be done by creating a "**.profile**" file in the user's home directory. This file sets up the environment on the MIC automatically whenever the user accesses the accelerator through SSH.
```console
$ vim ~/.profile
PS1='[\u@\h \W]\$ '
export PATH=/usr/bin:/usr/sbin:/bin:/sbin
@@ -654,25 +654,25 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
To access a MIC accelerator located on a node that the user is currently connected to, use:
```console
$ ssh mic0
```
or, in case you need to specify a MIC accelerator on a particular node, use:
```console
$ ssh cn207-mic0
```
To run the MPI code in parallel on multiple cores of the accelerator, use:
```console
$ mpirun -np 4 ./mpi-test-mic
```
The output should be similar to:
```console
Hello world from process 1 of 4 on host cn207-mic0
Hello world from process 2 of 4 on host cn207-mic0
Hello world from process 3 of 4 on host cn207-mic0
@@ -685,20 +685,20 @@ If the MPI program is launched from host instead of the coprocessor, the environ
The first step is to tell mpiexec that the MPI should be executed on a local accelerator by setting up the environment variable "I_MPI_MIC":
```console
$ export I_MPI_MIC=1
```
Now the MPI program can be executed as:
```console
$ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/ -host mic0 -n 4 ~/mpi-test-mic
```
or using mpirun
```console
$ mpirun -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/ -host mic0 -n 4 ~/mpi-test-mic
```
!!! note
@@ -707,7 +707,7 @@ or using mpirun
The output should be again similar to:
```console
Hello world from process 1 of 4 on host cn207-mic0
Hello world from process 2 of 4 on host cn207-mic0
Hello world from process 3 of 4 on host cn207-mic0
@@ -719,8 +719,8 @@ The output should be again similar to:
A simple test to see if the file is present is to execute:
```console
$ ssh mic0 ls /bin/pmi_proxy
/bin/pmi_proxy
```
@@ -728,21 +728,20 @@ A simple test to see if the file is present is to execute:
To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes (2 MIC accelerators), run the qsub command with the following parameters:
```console
$ qsub -I -q qmic -A NONE-0-0 -l select=2:ncpus=16
$ ml intel/13.5.192 impi/4.1.1.036
```
This command immediately connects the user through ssh to one of the nodes. To see the other nodes that have been allocated, use:
```console
$ cat $PBS_NODEFILE
```
For example:
```console
cn204.bullx
cn205.bullx
```
@@ -757,14 +756,14 @@ This output means that the PBS allocated nodes cn204 and cn205, which means that
At this point we expect that the correct modules are loaded and the binary is compiled. For parallel execution, mpiexec.hydra is used. Again, the first step is to tell mpiexec that the MPI can be executed on MIC accelerators by setting up the environment variable "I_MPI_MIC":
```console
$ export I_MPI_MIC=1
```
To launch the MPI program, use:
```console
$ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
-genv I_MPI_FABRICS_LIST tcp
-genv I_MPI_FABRICS shm:tcp
-genv I_MPI_TCP_NETMASK=10.1.0.0/16
@@ -774,8 +773,8 @@ The launch the MPI program use:
or using mpirun:
```console
$ mpirun -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
-genv I_MPI_FABRICS_LIST tcp
-genv I_MPI_FABRICS shm:tcp
-genv I_MPI_TCP_NETMASK=10.1.0.0/16
@@ -785,7 +784,7 @@ or using mpirun:
In this case four MPI processes are executed on accelerator cn204-mic0 and six processes are executed on accelerator cn205-mic0. The sample output (sorted after execution) is:
```console
Hello world from process 0 of 10 on host cn204-mic0
Hello world from process 1 of 10 on host cn204-mic0
Hello world from process 2 of 10 on host cn204-mic0
@@ -800,8 +799,8 @@ In this case four MPI processes are executed on accelerator cn204-mic and six pr
In the same way, the MPI program can be executed on multiple hosts:
```console
$ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
-genv I_MPI_FABRICS_LIST tcp
-genv I_MPI_FABRICS shm:tcp
-genv I_MPI_TCP_NETMASK=10.1.0.0/16
@@ -816,8 +815,8 @@ architecture and requires different binary file produced by the Intel compiler t
In the previous section we have compiled two binary files, one for hosts "**mpi-test**" and one for MIC accelerators "**mpi-test-mic**". These two binaries can be executed at once using mpiexec.hydra:
```console
$ mpiexec.hydra
-genv I_MPI_FABRICS_LIST tcp
-genv I_MPI_FABRICS shm:tcp
-genv I_MPI_TCP_NETMASK=10.1.0.0/16
@@ -830,7 +829,7 @@ In this example the first two parameters (line 2 and 3) sets up required environ
The output of the program is:
```console
Hello world from process 0 of 4 on host cn205
Hello world from process 1 of 4 on host cn205
Hello world from process 2 of 4 on host cn205-mic0
@@ -841,8 +840,8 @@ The execution procedure can be simplified by using the mpirun command with the m
An example of a machine file that uses 2 hosts (**cn205** and **cn206**) and 2 accelerators (**cn205-mic0** and **cn206-mic0**) to run 2 MPI processes on each of them:
```console
$ cat hosts_file_mix
cn205:2
cn205-mic0:2
cn206:2
@@ -851,14 +850,14 @@ An example of a machine file that uses 2 hosts (**cn205** and **cn206**) and 2
In addition, if a naming convention is set such that the binary for the host is named **"bin_name"** and the binary for the accelerator **"bin_name-mic"**, then by setting the environment variable **I_MPI_MIC_POSTFIX** to **"-mic"** the user does not have to specify the names of both binaries. In this case mpirun needs just the name of the host binary file (i.e. "mpi-test") and uses the suffix to derive the name of the binary for the accelerator (i.e. "mpi-test-mic").
```console
$ export I_MPI_MIC_POSTFIX=-mic
```
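As a quick illustration, the suffix convention amounts to plain string concatenation; the following shell sketch (using the hypothetical binary names from the text) shows how the accelerator binary name is derived:

```shell
# Sketch: derive the accelerator binary name from the host binary name
# using the I_MPI_MIC_POSTFIX suffix (plain string concatenation).
export I_MPI_MIC_POSTFIX=-mic
host_bin=mpi-test                          # name of the host binary
mic_bin="${host_bin}${I_MPI_MIC_POSTFIX}"  # name mpirun derives for the MIC
echo "$mic_bin"                            # mpi-test-mic
```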
To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
```console
$ mpirun
-genv I_MPI_FABRICS shm:tcp
-genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
-genv I_MPI_FABRICS_LIST tcp
@@ -870,7 +869,7 @@ To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
A possible output of the MPI "hello-world" example executed on two hosts and two accelerators is:
```console
Hello world from process 0 of 8 on host cn204
Hello world from process 1 of 8 on host cn204
Hello world from process 2 of 8 on host cn204-mic0
@@ -149,7 +149,7 @@ The last step is to start matlabpool with "cluster" object and correct number of
The complete example showing how to use Distributed Computing Toolbox in local mode is shown here.
```console
cluster = parcluster('local');
cluster
@@ -182,7 +182,7 @@ This mode uses PBS scheduler to launch the parallel pool. It uses the SalomonPBS
This is an example of m-script using PBS mode:
```console
cluster = parcluster('SalomonPBSPro');
set(cluster, 'SubmitArguments', '-A OPEN-0-0');
set(cluster, 'ResourceTemplate', '-q qprod -l select=10:ncpus=16');
@@ -223,7 +223,7 @@ For this method, you need to use SalomonDirect profile, import it using [the sam
This is an example of m-script using direct mode:
```console
parallel.importProfile('/apps/all/MATLAB/2015a-EDU/SalomonDirect.settings')
cluster = parcluster('SalomonDirect');
set(cluster, 'NumWorkers', 48);
@@ -52,9 +52,7 @@ For the performance reasons Matlab should use system MPI. On Anselm the supporte
```console
$ vim ~/matlab/mpiLibConf.m
```
```bash
function [lib, extras] = mpiLibConf
%MATLAB MPI Library overloading for Infiniband Networks
@@ -135,7 +133,7 @@ $ qsub ./jobscript
The last part of the configuration is done directly in the user's Matlab script before the Distributed Computing Toolbox is started.
```console
sched = findResource('scheduler', 'type', 'mpiexec');
set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun');
set(sched, 'EnvironmentSetMethod', 'setenv');
@@ -148,7 +146,7 @@ This script creates scheduler object "sched" of type "mpiexec" that starts worke
The last step is to start matlabpool with the "sched" object and the correct number of workers. In this case, qsub asked for a total of 32 cores; therefore, the number of workers is also set to 32.
```console
matlabpool(sched,32);
@@ -160,7 +158,7 @@ matlabpool close
The complete example showing how to use the Distributed Computing Toolbox is shown here.
```console
sched = findResource('scheduler', 'type', 'mpiexec');
set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun')
set(sched, 'EnvironmentSetMethod', 'setenv')
@@ -104,7 +104,7 @@ The forking is the most simple to use. Forking family of functions provide paral
Forking example:
```r
library(parallel)
#integrand function
@@ -168,7 +168,7 @@ Static Rmpi programs are executed via mpiexec, as any other MPI programs. Number
Static Rmpi example:
```r
library(Rmpi)
#integrand function
@@ -226,7 +226,7 @@ Dynamic Rmpi programs are executed by calling the R directly. openmpi module mus
Dynamic Rmpi example:
```r
#integrand function
f <- function(i,h) {
x <- h*(i-0.5)
@@ -303,7 +303,7 @@ Execution is identical to other dynamic Rmpi programs.
mpi.apply Rmpi example:
```r
#integrand function
f <- function(i,h) {
x <- h*(i-0.5)
@@ -192,7 +192,7 @@ Job script links application data (win), input data (data) and run script (run.b
Example run script (run.bat) for Windows virtual machine:
```console
z:
cd winappl
call application.bat z:data z:output
@@ -348,7 +348,9 @@ You can also provide your SMB services (on ports 3139, 3445) to obtain high perf
Example smb.conf (not optimized)
```console
$ cat smb.conf
[global]
socket address=192.168.1.1
smb ports = 3445 3139
@@ -31,14 +31,14 @@ There is default stripe configuration for Anselm Lustre filesystems. However, us
Use the lfs getstripe command to get the stripe parameters and the lfs setstripe command to set the stripe parameters for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
```console
$ lfs getstripe dir|filename
$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
```
Example:
```console
$ lfs getstripe /scratch/username/
/scratch/username/
stripe_count: 1 stripe_size: 1048576 stripe_offset: -1
@@ -53,7 +53,7 @@ In this example, we view current stripe setting of the /scratch/username/ direct
Use lfs check osts to see the number and status of active OSTs for each filesystem on Anselm. Learn more by reading the man page:
```console
$ lfs check osts
$ man lfs
```
@@ -98,7 +98,7 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
* 2 groups of 5 disks in RAID5
* 2 hot-spare disks
### HOME
The HOME filesystem is mounted in directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
@@ -127,14 +127,14 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
| Default stripe count | 1 |
| Number of OSTs | 22 |
### SCRATCH
The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
!!! note
The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
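As a hedged illustration of the 90-day rule, a find invocation of the following shape can preview which files have not been accessed recently (the path and the use of atime are assumptions made for illustration; the actual cleanup is performed by the system):

```shell
# Sketch: list files not accessed for more than 90 days under a directory.
# SCRATCH_DIR is a stand-in; on Anselm the directory would be /scratch/$USER.
scratch_dir="${SCRATCH_DIR:-/scratch/$USER}"
# -atime +90 matches files whose last access is more than 90 days ago
find "$scratch_dir" -type f -atime +90 2>/dev/null
```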
@@ -157,13 +157,13 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
User quotas on the file systems can be checked and reviewed using the following command:
```console
$ lfs quota dir
```
Example for Lustre HOME directory:
```console
$ lfs quota /home
Disk quotas for user user001 (uid 1234):
Filesystem kbytes quota limit grace files quota limit grace
@@ -177,7 +177,7 @@ In this example, we view current quota size limit of 250GB and 300MB currently u
Example for Lustre SCRATCH directory:
```console
$ lfs quota /scratch
Disk quotas for user user001 (uid 1234):
Filesystem kbytes quota limit grace files quota limit grace
@@ -191,13 +191,13 @@ In this example, we view current quota size limit of 100TB and 8KB currently use
To have a better understanding of where the space is exactly used, you can use the following command:
```console
$ du -hs dir
```
Example for your HOME directory:
```console
$ cd /home
$ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
258M cuda-samples
@@ -211,11 +211,11 @@ This will list all directories which are having MegaBytes or GigaBytes of consum
To have a better understanding of the previous commands, you can read the man pages:
```console
$ man lfs
```
```console
$ man du
```
@@ -225,7 +225,7 @@ Extended ACLs provide another security mechanism beside the standard POSIX ACLs
ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
```console
[vop999@login1.anselm ~]$ umask 027
[vop999@login1.anselm ~]$ mkdir test
[vop999@login1.anselm ~]$ ls -ld test
@@ -353,40 +353,40 @@ The SSHFS provides a very convenient way to access the CESNET Storage. The stora
First, create the mount point:
```console
$ mkdir cesnet
```
Mount the storage. Note that you can choose among ssh.du1.cesnet.cz (Plzen), ssh.du2.cesnet.cz (Jihlava) and ssh.du3.cesnet.cz (Brno). Mount tier1_home **(only 5120M!)**:
```console
$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
```
For easy future access from Anselm, install your public key:
```console
$ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
```
Mount tier1_cache_tape for the Storage VO:
```console
$ sshfs username@ssh.du1.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```
View the archive, copy the files and directories in and out:
```console
$ ls cesnet/
$ cp -a mydir cesnet/.
$ cp cesnet/myfile .
```
Once done, please remember to unmount the storage:
```console
$ fusermount -u cesnet
```
### Rsync Access
@@ -402,16 +402,16 @@ Rsync finds files that need to be transferred using a "quick check" algorithm (b
Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
```
Transfer rates of about 28 MB/s can be expected.
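To put that figure in perspective, a back-of-the-envelope calculation (the 100 GB file size is an arbitrary example, not from the text) gives the expected duration of a transfer:

```shell
# Back-of-the-envelope: time to transfer 100 GB at ~28 MB/s (integer math).
size_mb=102400                    # 100 GB expressed in MB
rate_mb_s=28                      # observed transfer rate
seconds=$((size_mb / rate_mb_s))  # 3657 seconds
echo "~$((seconds / 60)) minutes" # roughly an hour
```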
@@ -103,6 +103,7 @@ To check your certificate (e.g., DN, validity, issuer, public key algorithm, etc
```console
openssl x509 -in usercert.pem -text -noout
```
If openssl is not pre-installed, you can [download it](https://www.openssl.org/source/). On Mac OS X computers, openssl is already pre-installed and can be used immediately.
## Q: How Do I Create and Then Manage a Keystore?
@@ -134,8 +134,8 @@ Follow these steps **only** if you can not obtain your certificate in a standard
* Go to [COMODO Application for Secure Email Certificate](https://secure.comodo.com/products/frontpage?area=SecureEmailCertificate).
* Fill in the form, accept the Subscriber Agreement and submit it by the _Next_ button.
* Type in the e-mail address, which you intend to use for communication with us.
* Don't forget your chosen _Revocation password_.
* You will receive an e-mail with a link to collect your certificate. Be sure to open the link in the same browser in which you submitted the application.
* Your browser should notify you, that the certificate has been correctly installed in it. Now you will need to save it as a file.
* In Firefox navigate to _Options > Advanced > Certificates > View Certificates_.