Commit 9bbfd123 authored by Lukáš Krupčík

salomon done

parent 5b89a84d
Part of 5 merge requests: !368, !367, !366 Update prace.md to document the change from qprace to qprod as the default..., !323 extended-acls-storage-section, !117 Resolve "Console examples should not be bash"
Showing 214 additions and 214 deletions
......@@ -41,7 +41,7 @@ Assume we have 900 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
......@@ -78,7 +78,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME -J 1-900 jobscript
506493[].isrv5
```
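
For illustration, a jobscript for such an array might pick its input file from the tasklist by the subjob's array index along these lines (the program name and resource values are placeholders, not the exact script from this page):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=24,walltime=02:00:00

# each subjob picks the tasklist line matching its array index
cd "$PBS_O_WORKDIR"
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)

# myprog is a placeholder for the actual application
./myprog < "$TASK" > "${TASK}.out"
```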
......@@ -87,7 +87,7 @@ In this example, we submit a job array of 900 subjobs. Each subjob will run on f
Sometimes, for testing purposes, you may need to submit a one-element array. This is not allowed by PBS Pro, but there is a workaround:
```bash
```console
$ qsub -N JOBNAME -J 9-10:2 jobscript
```
......@@ -97,7 +97,7 @@ This will only choose the lower index (9 in this example) for submitting/running
Check the status of the job array with the qstat command.
```bash
```console
$ qstat -a 506493[].isrv5
isrv5:
......@@ -111,7 +111,7 @@ The status B means that some subjobs are already running.
Check the status of the first 100 subjobs with the qstat command.
```bash
```console
$ qstat -a 12345[1-100].isrv5
isrv5:
......@@ -129,7 +129,7 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
Delete the entire job array. Running subjobs will be killed; queued subjobs will be deleted.
```bash
```console
$ qdel 12345[].isrv5
```
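
A single subjob can usually be deleted by giving its index, for example:

```console
$ qdel 12345[23].isrv5
```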
......@@ -137,13 +137,13 @@ Deleting large job arrays may take a while.
Display status information for all user's jobs, job arrays, and subjobs.
```bash
```console
$ qstat -u $USER -t
```
Display status information for all user's subjobs.
```bash
```console
$ qstat -u $USER -tJ
```
......@@ -158,7 +158,7 @@ GNU parallel is a shell tool for executing jobs in parallel using one or more co
For more information and examples see the parallel man page:
```bash
```console
$ module add parallel
$ man parallel
```
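
As a quick illustration of the syntax (not taken from this page), GNU parallel runs the given command once per argument, e.g.:

```console
$ parallel echo "processing {}" ::: file001 file002 file003
```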
......@@ -173,7 +173,7 @@ Assume we have 101 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file, listing all tasks - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
......@@ -211,7 +211,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The jobs
To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing/#gp_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME jobscript
12345.dm2
```
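
For illustration, the core of such a jobscript might run the tasks from the tasklist with GNU parallel along these lines (the program name and resource values are placeholders):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=24,walltime=02:00:00

cd "$PBS_O_WORKDIR"
module add parallel

# run the tasks from the tasklist, 24 at a time (one per core);
# myprog is a placeholder for the actual application
parallel -j 24 "./myprog {} > {}.out" :::: tasklist
```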
......@@ -241,13 +241,13 @@ Assume we have 992 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file, listing all tasks - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
Next, we create a file controlling how many tasks will be executed in one subjob:
```bash
```console
$ seq 32 > numtasks
```
......@@ -296,7 +296,7 @@ When deciding this values, think about following guiding rules :
To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing/#combined_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME -J 1-992:32 jobscript
12345[].dm2
```
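
For illustration, each subjob of such a combined job might take its block of 32 tasklist lines and process them with GNU parallel roughly as follows (the program name and resource values are placeholders):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=24,walltime=02:00:00

cd "$PBS_O_WORKDIR"
module add parallel

# this subjob handles the 32 consecutive tasklist lines starting at its array index
FIRST=$PBS_ARRAY_INDEX
LAST=$((PBS_ARRAY_INDEX + 31))
sed -n "${FIRST},${LAST}p" tasklist | parallel -j 24 "./myprog {} > {}.out"
```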
......@@ -312,7 +312,7 @@ Download the examples in [capacity.zip](capacity.zip), illustrating the above li
Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
```bash
```console
$ unzip capacity.zip
$ cd capacity
$ cat README
......
......@@ -4,7 +4,7 @@
After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
```bash
```console
# ./bashrc
# Source global definitions
......@@ -32,7 +32,7 @@ In order to configure your shell for running particular application on Salomon w
Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
```bash
```console
base: Default module class
bio: Bioinformatics, biology and biomedical
cae: Computer Aided Engineering (incl. CFD)
......@@ -63,33 +63,33 @@ The modules may be loaded, unloaded and switched, according to momentary needs.
To check available modules use
```bash
$ module avail
```console
$ module avail **or** ml av
```
To load a module, for example the Open MPI module, use
```bash
$ module load OpenMPI
```console
$ module load OpenMPI **or** ml OpenMPI
```
Loading the Open MPI module will set up the paths and environment variables of your active shell such that you are ready to run the Open MPI software.
To check loaded modules use
```bash
$ module list
```console
$ module list **or** ml
```
To unload a module, for example the Open MPI module, use
```bash
$ module unload OpenMPI
```console
$ module unload OpenMPI **or** ml -OpenMPI
```
Learn more about modules by reading the module man page
```bash
```console
$ man module
```
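
Assuming the Lmod module system (which the ml shorthand above suggests), modules hidden in the hierarchy can also be searched with spider, for example:

```console
$ ml spider OpenMPI
```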
......
......@@ -16,7 +16,7 @@ When allocating computational resources for the job, please specify
Submit the job using the qsub command:
```bash
```console
$ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
```
......@@ -27,25 +27,25 @@ The qsub submits the job into the queue, in another words the qsub command creat
### Job Submission Examples
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=24,walltime=03:00:00 ./myjob
```
In this example, we allocate 64 nodes, 24 cores per node, for 3 hours. We allocate these resources via the qprod queue; consumed resources will be accounted to the project identified by Project ID OPEN-0-0. The jobscript myjob will be executed on the first node in the allocation.
```bash
```console
$ qsub -q qexp -l select=4:ncpus=24 -I
```
In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively
```bash
```console
$ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob
```
In this example, we allocate 10 nodes, 24 cores per node, for 72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation.
```bash
```console
$ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob
```
......@@ -57,13 +57,13 @@ To allocate a node with Xeon Phi co-processor, user needs to specify that in sel
The absence of a specialized queue for accessing the nodes with the cards means that the Phi cards can be utilized in any queue, including qexp for testing/experiments, qlong for longer jobs, qfree after the project resources have been spent, etc. The Phi cards are thus also available to PRACE users. There is no need to ask for permission to utilize the Phi cards in project proposals.
```bash
```console
$ qsub -A OPEN-0-0 -I -q qprod -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120 ./myjob
```
In this example, we allocate 1 node with 24 cores and 2 Xeon Phi 7120p cards, running the batch job ./myjob. The default time for qprod is used, i.e. 24 hours.
```bash
```console
$ qsub -A OPEN-0-0 -I -q qlong -l select=4:ncpus=24:accelerator=True:naccelerators=2 -l walltime=56:00:00 -I
```
......@@ -78,13 +78,13 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use the resources of the NUMA nodes allocated to it.
```bash
```console
$ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
```
In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node for 72 hours. Jobscript myjob will be executed on the node uv1.
```bash
```console
$ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
```
......@@ -94,13 +94,13 @@ In this example, we allocate 2000GB of memory on the UV2000 for 72 hours. By req
All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such a case, no options to qsub are needed.
```bash
```console
$ qsub ./myjob
```
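
For illustration, the header of such a jobscript might carry the options as #PBS directives (the values mirror the qprod example above; myprogram is a placeholder):

```bash
#!/bin/bash
#PBS -A OPEN-0-0
#PBS -q qprod
#PBS -l select=64:ncpus=24,walltime=03:00:00

# the directives above replace the corresponding qsub command-line options
cd "$PBS_O_WORKDIR"
# myprogram is a placeholder for the actual application
./myprogram
```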
By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done like this:
```bash
```console
$ qsub -m n
```
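
Conversely, mail notifications at job begin, end, and abort can typically be requested with the -m and -M options, for example (the address is a placeholder):

```console
$ qsub -m bea -M your.email@example.com ./myjob
```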
......@@ -113,13 +113,13 @@ $ qsub -m n
Specific nodes may be selected using PBS resource attribute host (for hostnames):
```bash
```console
qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=r24u35n680+1:ncpus=24:host=r24u36n681 -I
```
Specific nodes may be selected using PBS resource attribute cname (for short names in cns[0-1]+ format):
```bash
```console
qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=cns680+1:ncpus=24:host=cns681 -I
```
......@@ -142,7 +142,7 @@ Nodes directly connected to the one InifiBand switch can be allocated using node
In this example, we request all 9 nodes directly connected to the same switch using node grouping placement.
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
```
......@@ -155,13 +155,13 @@ Nodes directly connected to the specific InifiBand switch can be selected using
In this example, we request all 9 nodes directly connected to r4i1s0sw1 switch.
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24:switch=r4i1s0sw1 ./myjob
```
List of all InfiniBand switches:
```bash
```console
$ qmgr -c 'print node @a' | grep switch | awk '{print $6}' | sort -u
r1i0s0sw0
r1i0s0sw1
......@@ -169,12 +169,11 @@ r1i1s0sw0
r1i1s0sw1
r1i2s0sw0
...
...
```
List of all nodes directly connected to a specific InfiniBand switch:
```bash
```console
$ qmgr -c 'p n @d' | grep 'switch = r36sw3' | awk '{print $3}' | sort
r36u31n964
r36u32n965
......@@ -203,7 +202,7 @@ Nodes located in the same dimension group may be allocated using node grouping o
In this example, we allocate 16 nodes in the same [hypercube dimension](7d-enhanced-hypercube/) 1 group.
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=16:ncpus=24 -l place=group=ehc_1d -I
```
......@@ -211,7 +210,7 @@ For better understanding:
List of all groups in dimension 1:
```bash
```console
$ qmgr -c 'p n @d' | grep ehc_1d | awk '{print $6}' | sort |uniq -c
18 r1i0
18 r1i1
......@@ -222,7 +221,7 @@ $ qmgr -c 'p n @d' | grep ehc_1d | awk '{print $6}' | sort |uniq -c
List of all nodes in a specific dimension 1 group:
```bash
```console
$ qmgr -c 'p n @d' | grep 'ehc_1d = r1i0' | awk '{print $3}' | sort
r1i0n0
r1i0n1
......@@ -236,7 +235,7 @@ r1i0n11
!!! note
Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
```bash
```console
$ qstat -a
$ qstat -a -u username
$ qstat -an -u username
......@@ -245,7 +244,7 @@ $ qstat -f 12345.isrv5
Example:
```bash
```console
$ qstat -a
srv11:
......@@ -261,7 +260,7 @@ In this example user1 and user2 are running jobs named job1, job2 and job3x. The
Check the status of your jobs using the check-pbs-jobs command. It checks for the presence of the user's PBS job processes on the execution hosts, displays load and processes, displays the job's standard and error output, and can continuously display (tail -f) the job's standard or error output.
```bash
```console
$ check-pbs-jobs --check-all
$ check-pbs-jobs --print-load --print-processes
$ check-pbs-jobs --print-job-out --print-job-err
......@@ -271,7 +270,7 @@ $ check-pbs-jobs --jobid JOBID --tailf-job-out
Examples:
```bash
```console
$ check-pbs-jobs --check-all
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Check session id: OK
......@@ -282,7 +281,7 @@ r3i6n3: No process
In this example we see that job 35141.dm2 currently runs no process on allocated node r3i6n2, which may indicate an execution error.
```bash
```console
$ check-pbs-jobs --print-load --print-processes
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Print load
......@@ -298,7 +297,7 @@ r3i6n2: 99.7 run-task
In this example we see that job 35141.dm2 currently runs process run-task on node r3i6n2, using one thread only, while node r3i6n3 is empty, which may indicate an execution error.
```bash
```console
$ check-pbs-jobs --jobid 35141.dm2 --print-job-out
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Print job standard output:
......@@ -317,19 +316,19 @@ In this example, we see actual output (some iteration loops) of the job 35141.dm
You may release your allocation at any time, using the qdel command
```bash
```console
$ qdel 12345.isrv5
```
You may kill a running job by force, using the qsig command
```bash
```console
$ qsig -s 9 12345.isrv5
```
Learn more by reading the PBS man page
```bash
```console
$ man pbs_professional
```
......@@ -345,7 +344,7 @@ The Jobscript is a user made script, controlling sequence of commands for execut
!!! note
The jobscript or interactive shell is executed on first of the allocated nodes.
```bash
```console
$ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
$ qstat -n -u username
......@@ -362,7 +361,7 @@ In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were a
!!! note
The jobscript or interactive shell is by default executed in home directory
```bash
```console
$ qsub -q qexp -l select=4:ncpus=24 -I
qsub: waiting for job 15210.isrv5 to start
qsub: job 15210.isrv5 ready
......@@ -380,7 +379,7 @@ The allocated nodes are accessible via ssh from login nodes. The nodes may acces
Calculations on the allocated nodes may be executed remotely via MPI, ssh, pdsh, or clush. You may find out which nodes belong to the allocation by reading the $PBS_NODEFILE file
```bash
```console
qsub -q qexp -l select=2:ncpus=24 -I
qsub: waiting for job 15210.isrv5 to start
qsub: job 15210.isrv5 ready
......
......@@ -16,7 +16,7 @@ The network provides **2170MB/s** transfer rates via the TCP connection (single
## Example
```bash
```console
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
Req'd Req'd Elap
......@@ -28,14 +28,14 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
In this example, we access the node r4i1n0 over the InfiniBand network via the ib0 interface.
```bash
```console
$ ssh 10.17.35.19
```
In this example, we get information about the InfiniBand network.
```bash
```console
$ ifconfig
....
inet addr:10.17.35.19....
......
......@@ -36,14 +36,14 @@ Most of the information needed by PRACE users accessing the Salomon TIER-1 syste
Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
```bash
$ grid-proxy-init
```console
$ grid-proxy-init
```
To check whether your proxy certificate is still valid (by default it is valid for 12 hours), use:
```bash
$ grid-proxy-info
```console
$ grid-proxy-info
```
To access Salomon cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
......@@ -60,14 +60,14 @@ It is recommended to use the single DNS name salomon-prace.it4i.cz which is dist
| login3-prace.salomon.it4i.cz | 2222 | gsissh | login3 |
| login4-prace.salomon.it4i.cz | 2222 | gsissh | login4 |
```bash
$ gsissh -p 2222 salomon-prace.it4i.cz
```console
$ gsissh -p 2222 salomon-prace.it4i.cz
```
When logging in from another PRACE system, the prace_service script can be used:
```bash
$ gsissh `prace_service -i -s salomon`
```console
$ gsissh `prace_service -i -s salomon`
```
#### Access From Public Internet:
......@@ -82,27 +82,24 @@ It is recommended to use the single DNS name salomon.it4i.cz which is distribute
| login3-prace.salomon.it4i.cz | 2222 | gsissh | login3 |
| login4-prace.salomon.it4i.cz | 2222 | gsissh | login4 |
```bash
$ gsissh -p 2222 salomon.it4i.cz
```console
$ gsissh -p 2222 salomon.it4i.cz
```
When logging in from another PRACE system, the prace_service script can be used:
```bash
$ gsissh `prace_service -e -s salomon`
```console
$ gsissh `prace_service -e -s salomon`
```
Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH implementation on Salomon also supports SCP, so for transferring small files gsiscp can be used:
```bash
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 salomon.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon-prace.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 salomon-prace.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
```console
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 salomon.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon-prace.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 salomon-prace.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
```
### Access to X11 Applications (VNC)
......@@ -111,8 +108,8 @@ If the user needs to run X11 based graphical application and does not have a X11
If the user uses GSI SSH based access, then the procedure is similar to the SSH based access ([look here](../general/accessing-the-clusters/graphical-user-interface/x-window-system/)), only the port forwarding must be done using GSI SSH:
```bash
$ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
```console
$ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
```
### Access With SSH
......@@ -138,26 +135,26 @@ There's one control server and three backend servers for striping and/or backup
Copy files **to** Salomon by running the following commands on your local machine:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```
Copy files **from** Salomon:
```bash
$ globus-url-copy gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
### Access From Public Internet
......@@ -171,26 +168,26 @@ Or by using prace_service script:
Copy files **to** Salomon by running the following commands on your local machine:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
```
Copy files **from** Salomon:
```bash
$ globus-url-copy gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Generally both shared file systems are available through GridFTP:
......@@ -222,8 +219,8 @@ All system wide installed software on the cluster is made available to the users
PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/prace-common-production-environment/).
```bash
$ module load prace
```console
$ module load prace
```
### Resource Allocation and Job Execution
......@@ -251,8 +248,8 @@ Users who have undergone the full local registration procedure (including signin
!!! note
The **it4ifree** command is a part of it4i.portal.clients package, [located here](https://pypi.python.org/pypi/it4i.portal.clients).
```bash
$ it4ifree
```console
$ it4ifree
Password:
PID Total Used ...by me Free
-------- ------- ------ -------- -------
......@@ -262,9 +259,9 @@ Users who have undergone the full local registration procedure (including signin
By default, a file system quota is applied. To check the current status of the quota (separately for HOME and SCRATCH), use
```bash
$ quota
$ lfs quota -u USER_LOGIN /scratch
```console
$ quota
$ lfs quota -u USER_LOGIN /scratch
```
If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase.
......@@ -46,13 +46,13 @@ Salomon users may check current queue configuration at <https://extranet.it4i.cz
Display the queue status on Salomon:
```bash
```console
$ qstat -q
```
The PBS allocation overview may also be obtained using the rspbs command.
```bash
```console
$ rspbs
Usage: rspbs [options]
......@@ -122,7 +122,7 @@ The resources that are currently subject to accounting are the core-hours. The c
Users may check at any time how many core-hours they and their projects have consumed. The command is available on the clusters' login nodes.
```bash
```console
$ it4ifree
Password:
PID Total Used ...by me Free
......
......@@ -26,13 +26,13 @@ Private key authentication:
On **Linux** or **Mac**, use
```bash
```console
local $ ssh -i /path/to/id_rsa username@salomon.it4i.cz
```
If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to set lower permissions on the private key file.
```bash
```console
local $ chmod 600 /path/to/id_rsa
```
......@@ -40,7 +40,7 @@ On **Windows**, use [PuTTY ssh client](../general/accessing-the-clusters/shell-a
After logging in, you will see the command prompt:
```bash
```console
_____ _
/ ____| | |
| (___ __ _| | ___ _ __ ___ ___ _ __
......@@ -75,23 +75,23 @@ The authentication is by the [private key](../general/accessing-the-clusters/she
On Linux or Mac, use the scp or sftp client to transfer the data to Salomon:
```bash
```console
local $ scp -i /path/to/id_rsa my-local-file username@salomon.it4i.cz:directory/file
```
```bash
```console
local $ scp -i /path/to/id_rsa -r my-local-dir username@salomon.it4i.cz:directory
```
or
```bash
```console
local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
```
A very convenient way to transfer files in and out of the Salomon computer is via the FUSE filesystem [sshfs](http://linux.die.net/man/1/sshfs)
```bash
```console
local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
```
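
When the mount is no longer needed, it can usually be released with the standard FUSE utility:

```console
local $ fusermount -u mountpoint
```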
......@@ -136,7 +136,7 @@ It works by tunneling the connection from Salomon back to users workstation and
Pick some unused port on a Salomon login node (for example 6000) and establish the port forwarding:
```bash
```console
local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz
```
......@@ -146,7 +146,7 @@ Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration
Port forwarding may be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com
```bash
```console
$ ssh -L 6000:localhost:1234 remote.host.com
```
......@@ -160,7 +160,7 @@ First, establish the remote port forwarding form the login node, as [described a
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell
```bash
```console
$ ssh -TN -f -L 6000:localhost:6000 login1
```
......@@ -175,7 +175,7 @@ Port forwarding is static, each single port is mapped to a particular port on re
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides the functionality. To establish a SOCKS proxy server listening on port 1080, run:
```bash
```console
local $ ssh -D 1080 localhost
```
......@@ -183,7 +183,7 @@ On Windows, install and run the free, open source [Sock Puppet](http://sockspupp
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes).
```bash
```console
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
......
......@@ -44,7 +44,7 @@ Working directory has to be created before sending pbs job into the queue. Input
A journal file with the definition of the input geometry and boundary conditions and the defined solution process has, for example, the following structure:
```bash
```console
/file/read-case aircraft_2m.cas.gz
/solve/init
init
......@@ -58,7 +58,7 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d).
1. Fast way to run Fluent from command line
```bash
```console
fluent solver_version [FLUENT_options] -i journal_file -pbs
```
......@@ -145,7 +145,7 @@ It runs the jobs out of the directory from which they are submitted (PBS_O_WORKD
Fluent can be run in parallel only under the Academic Research license. To do so, the ANSYS Academic Research license must be placed before the ANSYS CFD license in the user preferences. To make this change, the anslic_admin utility should be run
```bash
```console
/ansys_inc/shared_les/licensing/lic_admin/anslic_admin
```
......
......@@ -6,8 +6,8 @@ Anselm provides as commercial as academic variants. Academic variants are distin
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
```bash
$ module load ansys
```console
$ ml ansys
```
ANSYS supports an interactive mode, but because it is assumed that extremely demanding tasks are being solved, interactive use is not recommended.
......
......@@ -18,6 +18,7 @@ The licence intended to be used for science and research, publications, students
* 16.1
* 17.0
* 18.0
## License Preferences
......
......@@ -6,8 +6,8 @@ Thus you need to configure preferred license order with ANSLIC_ADMIN. Please fol
Launch the ANSLIC_ADMIN utility in a graphical environment:
```bash
$ANSYSLIC_DIR/lic_admin/anslic_admin
```console
$ANSYSLIC_DIR/lic_admin/anslic_admin
```
The ANSLIC_ADMIN utility will be run.
......
......@@ -8,7 +8,7 @@ It is possible to run Workbench scripts in batch mode. You need to configure sol
Enable the Distribute Solution checkbox and enter the number of cores (e.g. 48 to run on two Salomon nodes). If you want the job to run on more than 1 node, you must also provide a so-called MPI appfile. In the Additional Command Line Arguments input field, enter:
```bash
```console
-mpifile /path/to/my/job/mpifile.txt
```
......
......@@ -15,8 +15,8 @@ The following versions are currently installed:
For a current list of installed versions, execute:
```bash
module avail NWChem
```console
$ ml av NWChem
```
We recommend using version 6.5. Version 6.3 fails on Salomon nodes with accelerators, because it attempts to communicate over the scif0 interface. In 6.5 this is avoided by setting ARMCI_OPENIB_DEVICE=mlx4_0; this setting is included in the module.
......
......@@ -4,11 +4,14 @@
This GPL software calculates phonon-phonon interactions via the third-order force constants. It allows obtaining the lattice thermal conductivity, phonon lifetime/linewidth, imaginary part of the self-energy at the lowest order, joint density of states (JDOS), and weighted JDOS. For details see Phys. Rev. B 91, 094306 (2015) and <http://atztogo.github.io/phono3py/index.html>.
!!! note
Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
Available modules
```console
$ ml av phono3py
```
```bash
$ module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
$ ml phono3py
```
## Example of Calculating Thermal Conductivity of Si Using VASP Code.
......@@ -17,7 +20,7 @@ $ module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
One needs to calculate the second-order and third-order force constants for the diamond structure of silicon stored in [POSCAR](poscar-si) (the same form as in VASP), using single-displacement calculations within a supercell.
```bash
```console
$ cat POSCAR
Si
1.0
......@@ -39,14 +42,14 @@ Direct
### Generating Displacement Using 2 by 2 by 2 Supercell for Both Second and Third Order Force Constants
```bash
```console
$ phono3py -d --dim="2 2 2" -c POSCAR
```
111 displacements are created and stored in disp_fc3.yaml, and the structure input files with these displacements are POSCAR-00XXX, where XXX = 001 to 111.
```bash
```console
disp_fc3.yaml POSCAR-00008 POSCAR-00017 POSCAR-00026 POSCAR-00035 POSCAR-00044 POSCAR-00053 POSCAR-00062 POSCAR-00071 POSCAR-00080 POSCAR-00089 POSCAR-00098 POSCAR-00107
POSCAR POSCAR-00009 POSCAR-00018 POSCAR-00027 POSCAR-00036 POSCAR-00045 POSCAR-00054 POSCAR-00063 POSCAR-00072 POSCAR-00081 POSCAR-00090 POSCAR-00099 POSCAR-00108
POSCAR-00001 POSCAR-00010 POSCAR-00019 POSCAR-00028 POSCAR-00037 POSCAR-00046 POSCAR-00055 POSCAR-00064 POSCAR-00073 POSCAR-00082 POSCAR-00091 POSCAR-00100 POSCAR-00109
......@@ -60,7 +63,7 @@ POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052
For each displacement the forces need to be calculated, i.e. in the form of the VASP output file (vasprun.xml). For a single VASP calculation one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), [INCAR](INCAR) in the case directory (where you have the POSCARs), and those 111 displacement calculations can be generated by the [prepare.sh](prepare.sh) script. Then each of the 111 single calculations ([run.sh](run.sh)) is submitted by [submit.sh](submit.sh).
```bash
```console
$./prepare.sh
$ls
disp-00001 disp-00009 disp-00017 disp-00025 disp-00033 disp-00041 disp-00049 disp-00057 disp-00065 disp-00073 disp-00081 disp-00089 disp-00097 disp-00105 INCAR
......@@ -75,7 +78,7 @@ disp-00008 disp-00016 disp-00024 disp-00032 disp-00040 disp-00048 disp-00056 dis
Tailor your run.sh script to fit your project and other needs, and submit all 111 calculations using the submit.sh script
```bash
```console
$ ./submit.sh
```
......@@ -83,13 +86,13 @@ $ ./submit.sh
Once all jobs are finished and vasprun.xml is created in each disp-XXXXX directory, the collection is done by
```bash
```console
$ phono3py --cf3 disp-{00001..00111}/vasprun.xml
```
and `disp_fc2.yaml`, `FORCES_FC2`, `FORCES_FC3`, and `disp_fc3.yaml` should appear and can be put into the HDF format by
```bash
```console
$ phono3py --dim="2 2 2" -c POSCAR
```
......@@ -99,13 +102,13 @@ resulting in `fc2.hdf5` and `fc3.hdf5`
The phonon lifetime calculation takes some time; however, it is independent for each grid point, so it can be split:
```bash
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --sigma 0.1 --wgp
```
### Inspecting ir_grid_points.yaml
```bash
```console
$ grep grid_point ir_grid_points.yaml
num_reduced_ir_grid_points: 35
ir_grid_points: # [address, weight]
......@@ -148,18 +151,18 @@ ir_grid_points: # [address, weight]
one finds which grid points need to be calculated, for instance using the following
```bash
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2
```
one calculates grid points 0, 1, 2. To automate this, one can use, for instance, scripts that submit 5 points in series; see [gofree-cond1.sh](gofree-cond1.sh)
```bash
```console
$ qsub gofree-cond1.sh
```
Finally, the thermal conductivity result is produced by grouping the single per-grid-point conductivity calculations using
```bash
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --br --read_gamma
```
......@@ -22,19 +22,19 @@ On the clusters COMSOL is available in the latest stable version. There are two
To load COMSOL, load the module
```bash
$ module load COMSOL/51-EDU
```console
$ ml COMSOL/51-EDU
```
By default the **EDU variant** will be loaded. If a user needs another version or variant, load that particular version. To obtain the list of available versions, use
```bash
$ module avail COMSOL
```console
$ ml av COMSOL
```
If a user needs to prepare COMSOL jobs in interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. In order to run the COMSOL Desktop GUI on Windows, it is recommended to use [Virtual Network Computing (VNC)](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
```bash
```console
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ppn=24
$ module load COMSOL
......@@ -76,7 +76,7 @@ COMSOL is the software package for the numerical solution of the partial differe
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../../../anselm/software/isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode.
```bash
```console
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ppn=24
$ module load MATLAB
......
......@@ -10,9 +10,9 @@ Intel debugger is no longer available since Parallel Studio version 2015
The Intel debugger version 13.0 is available via the intel module. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment.
```bash
$ module load intel
$ idb
```console
$ ml intel
$ idb
```
Read more at the [Intel Debugger](../intel-suite/intel-debugger/) page.
......@@ -21,9 +21,9 @@ Read more at the [Intel Debugger](../intel-suite/intel-debugger/) page.
Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads, etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
```bash
$ module load Forge
$ forge
```console
$ ml Forge
$ forge
```
Read more at the [Allinea DDT](allinea-ddt/) page.
......@@ -32,9 +32,9 @@ Read more at the [Allinea DDT](allinea-ddt/) page.
Allinea Performance Reports characterize the performance of HPC application runs. After executing your application through the tool, a synthetic HTML report is generated automatically, containing information about several metrics along with clear behavior statements and hints to help you improve the efficiency of your runs. Our license is limited to 64 MPI processes.
```bash
$ module load PerformanceReports/6.0
$ perf-report mpirun -n 64 ./my_application argument01 argument02
```console
$ module load PerformanceReports/6.0
$ perf-report mpirun -n 64 ./my_application argument01 argument02
```
Read more at the [Allinea Performance Reports](allinea-performance-reports/) page.
......@@ -43,9 +43,9 @@ Read more at the [Allinea Performance Reports](allinea-performance-reports/) pag
TotalView is a source- and machine-level debugger for multi-process, multi-threaded programs. Its wide range of tools provides ways to analyze, organize, and test programs, making it easy to isolate and identify problems in individual threads and processes in programs of great complexity.
```bash
$ module load TotalView/8.15.4-6-linux-x86-64
$ totalview
```console
$ ml TotalView/8.15.4-6-linux-x86-64
$ totalview
```
Read more at the [Totalview](total-view/) page.
......@@ -54,7 +54,7 @@ Read more at the [Totalview](total-view/) page.
Vampir is a GUI trace analyzer for traces in OTF format.
```bash
```console
$ module load Vampir/8.5.0
$ vampir
```
......
......@@ -49,13 +49,13 @@ The program does the following: process 0 receives two messages from anyone and
To verify this program with Aislinn, we first load Aislinn itself:
```bash
$ module load aislinn
```console
$ ml aislinn
```
Now we compile the program with the Aislinn implementation of MPI. There is `mpicc` for C programs and `mpicxx` for C++ programs. Only the MPI parts of the verified application have to be recompiled; non-MPI parts may remain untouched. Let us assume that our program is in `test.cpp`.
```bash
```console
$ mpicxx -g test.cpp -o test
```
......@@ -63,7 +63,7 @@ The `-g` flag is not necessary, but it puts more debugging information into the
Now we run Aislinn itself. The argument `-p 3` specifies that we want to verify our program for the case of three MPI processes
```bash
```console
$ aislinn -p 3 ./test
==AN== INFO: Aislinn v0.3.0
==AN== INFO: Found error 'Invalid write'
......@@ -73,8 +73,8 @@ $ aislinn -p 3 ./test
Aislinn found an error and produced an HTML report. To view it, we can use any browser, e.g.:
```bash
$ firefox report.html
```console
$ firefox report.html
```
At the beginning of the report there are some basic summaries of the verification. In the second part (depicted in the following picture), the error is described.
......
......@@ -24,22 +24,21 @@ In case of debugging on accelerators:
Load all necessary modules to compile the code. For example:
```bash
$ module load intel
$ module load impi ... or ... module load openmpi/X.X.X-icc
```console
$ ml intel
$ ml impi **or** ml OpenMPI/X.X.X-icc
```
Load the Allinea DDT module:
```bash
$ module load Forge
```console
$ module load Forge
```
Compile the code:
```bash
```console
$ mpicc -g -O0 -o test_debug test.c
$ mpif90 -g -O0 -o test_debug test.f
```
......@@ -56,22 +55,22 @@ Before debugging, you need to compile your code with theses flags:
Be sure to log in with X window forwarding enabled. This could mean using the -X option in ssh:
```bash
$ ssh -X username@anselm.it4i.cz
```console
$ ssh -X username@anselm.it4i.cz
```
Another option is to access the login node using VNC. Please see the detailed information on how to [use the graphical user interface on Anselm](/general/accessing-the-clusters/graphical-user-interface/x-window-system/).
From the login node, an interactive session **with X window forwarding** (-X option) can be started by the following command:
```bash
$ qsub -I -X -A NONE-0-0 -q qexp -lselect=1:ncpus=16:mpiprocs=16,walltime=01:00:00
```console
$ qsub -I -X -A NONE-0-0 -q qexp -lselect=1:ncpus=16:mpiprocs=16,walltime=01:00:00
```
Then launch the debugger with the ddt command followed by the name of the executable to debug:
```bash
$ ddt test_debug
```console
$ ddt test_debug
```
The submission window that appears has a prefilled path to the executable to debug. You can select the number of MPI processes and/or OpenMP threads on which to run and press Run. Command line arguments to the program can be entered in the "Arguments" box.
......@@ -80,16 +79,16 @@ A submission window that appears have a prefilled path to the executable to debu
To start the debugging directly without the submission window, the user can specify the debugging and execution parameters from the command line. For example, the number of MPI processes is set by the "-np 4" option. Skipping the dialog is done by the "-start" option. To see the list of "ddt" command line parameters, run "ddt --help".
```bash
ddt -start -np 4 ./hello_debug_impi
```console
ddt -start -np 4 ./hello_debug_impi
```
## Documentation
Users can find the original User Guide after loading the DDT module:
```bash
$DDTPATH/doc/userguide.pdf
```console
$DDTPATH/doc/userguide.pdf
```
[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)
......@@ -12,8 +12,8 @@ Our license is limited to 64 MPI processes.
Allinea Performance Reports version 6.0 is available
```bash
$ module load PerformanceReports/6.0
```console
$ module load PerformanceReports/6.0
```
The module sets up the environment variables required for using Allinea Performance Reports.
......@@ -24,8 +24,8 @@ Use the the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi/mpi/), use the perf-report wrapper:
```bash
$ perf-report mpirun ./mympiprog.x
```console
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. The perf-report wrapper creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../job-submission-and-execution/).
......@@ -36,23 +36,24 @@ In this example, we will be profiling the mympiprog.x MPI program, using Allinea
First, we allocate some nodes via the express queue:
```bash
$ qsub -q qexp -l select=2:ppn=24:mpiprocs=24:ompthreads=1 -I
```console
$ qsub -q qexp -l select=2:ppn=24:mpiprocs=24:ompthreads=1 -I
qsub: waiting for job 262197.dm2 to start
qsub: job 262197.dm2 ready
```
Then we load the modules and run the program the usual way:
```bash
$ module load intel impi PerfReports/6.0
$ mpirun ./mympiprog.x
```console
$ ml intel
$ ml PerfReports/6.0
$ mpirun ./mympiprog.x
```
Now let's profile the code:
```bash
$ perf-report mpirun ./mympiprog.x
```console
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p\*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bounded.
......@@ -15,14 +15,14 @@ Intel *®* VTune™ Amplifier, part of Intel Parallel studio, is a GUI profiling
To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on the login nodes, thus direct profiling on login nodes is not possible. By default, the kernel modules are not loaded on compute nodes either. In order to have the modules loaded, you need to specify the vtune=version PBS resource at job submission. The version is the same as for the environment module. For example, to use VTune/2016_update1:
```bash
$ qsub -q qexp -A OPEN-0-0 -I -l select=1,vtune=2016_update1
```console
$ qsub -q qexp -A OPEN-0-0 -I -l select=1,vtune=2016_update1
```
After that, you can verify that the modules sep\*, pax, and vtsspp are present in the kernel:
```bash
$ lsmod | grep -e sep -e pax -e vtsspp
```console
$ lsmod | grep -e sep -e pax -e vtsspp
vtsspp 362000 0
sep3_15 546657 0
pax 4312 0
......@@ -30,14 +30,14 @@ After that, you can verify the modules sep\*, pax and vtsspp are present in the
To launch the GUI, first load the module:
```bash
$ module add VTune/2016_update1
```console
$ module add VTune/2016_update1
```
and launch the GUI:
```bash
$ amplxe-gui
```console
$ amplxe-gui
```
The GUI will open in a new window. Click on "New Project..." to create a new project. After clicking OK, a new window with project properties will appear. At "Application:", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options such as command line arguments can be selected. At "Managed code profiling mode:" select "Native" (unless you want to profile managed mode .NET/Mono applications). After clicking OK, your project is created.
......@@ -50,8 +50,8 @@ VTune Amplifier also allows a form of remote analysis. In this mode, data for an
The command line will look like this:
```bash
/apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -app-working-dir /home/sta545/tmp -- /home/sta545/tmp/sgemm
```console
/apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -app-working-dir /home/sta545/tmp -- /home/sta545/tmp/sgemm
```
Copy the line to the clipboard; you can then paste it into your jobscript or on the command line. After the collection is run, open the GUI once again, click the menu button in the upper right corner, and select "Open > Result...". The GUI will load the results from the run.
......@@ -75,14 +75,14 @@ You may also use remote analysis to collect data from the MIC and then analyze i
Native launch:
```bash
$ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-native:0 -collect advanced-hotspots -- /home/sta545/tmp/vect-add-mic
```console
$ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-native:0 -collect advanced-hotspots -- /home/sta545/tmp/vect-add-mic
```
Host launch:
```bash
$ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-host-launch:0 -collect advanced-hotspots -- /home/sta545/tmp/sgemm
```console
$ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-host-launch:0 -collect advanced-hotspots -- /home/sta545/tmp/sgemm
```
You can obtain this command line by pressing the "Command line..." button on the Analysis Type screen.
......