Commit b4474c1c authored by Lukáš Krupčík

remove tab

parent 35f4dca9
Showing 37 additions and 37 deletions
......@@ -7,7 +7,7 @@ In many cases, it is useful to submit huge (>100+) number of computational jobs
However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**.
!!! Note
-    Please follow one of the procedures below if you wish to schedule more than 100 jobs at a time.
+    Please follow one of the procedures below if you wish to schedule more than 100 jobs at a time.
- Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithreaded across several nodes) jobs; see the sketch after this list
- Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
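As a sketch of the first option, a job array might be submitted like this (queue, range, and jobscript name are illustrative):

```bash
# Submit 900 subjobs as one job array; PBS expands the range 1-900
$ qsub -N JOBNAME -J 1-900 jobscript
```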
......@@ -21,7 +21,7 @@ However, executing huge number of jobs via the PBS queue may strain the system.
## Job Arrays
!!! Note
-    A huge number of jobs may be easily submitted and managed as a job array.
+    A huge number of jobs may be easily submitted and managed as a job array.
A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
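A minimal subjob script might look as follows; each subjob selects its own work item via the PBS_ARRAY_INDEX variable (the tasklist file and myprog binary are illustrative):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00

# Pick the line of the task list that corresponds to this subjob
cd $PBS_O_WORKDIR
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)
./myprog $TASK
```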
......@@ -150,7 +150,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
## GNU Parallel
!!! Note
-    Use GNU parallel to run many single core tasks on one node.
+    Use GNU parallel to run many single core tasks on one node.
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm.
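As a sketch (module name, task list, and program are illustrative), one node's 16 cores could be kept busy with:

```bash
$ module add parallel
# Run one single-core task per line of tasklist, 16 tasks at a time
$ parallel -j 16 ./myprog {} :::: tasklist
```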
......
......@@ -24,14 +24,14 @@ fi
```
!!! Note
-    Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider utilizing SSH session interactivity for such commands, as shown in the previous example.
+    Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider utilizing SSH session interactivity for such commands, as shown in the previous example.
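A minimal guard in ~/.bashrc, in the spirit of the example above, might test for an interactive shell before producing any output:

```bash
# Only print output when the shell is interactive; non-interactive
# sessions (scp, PBS) stay silent
if [ -n "$PS1" ]; then
    echo "Welcome on Anselm"
    module list
fi
```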
### Application Modules
In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
!!! Note
-    The modules set up the application paths, library paths and environment variables for running a particular application.
+    The modules set up the application paths, library paths and environment variables for running a particular application.
We also have a second module repository, created using a tool called EasyBuild. On the Salomon cluster, all modules will be built by this tool. If you want to use software from this module repository, please follow the instructions in section [Application Modules Path Expansion](environment-and-modules/#EasyBuild).
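Typical module usage might look like this (the intel module is an illustrative choice):

```bash
$ module avail           # list modules available on the cluster
$ module load intel      # set up paths and variables for the Intel suite
$ module list            # show currently loaded modules
```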
......
......@@ -36,7 +36,7 @@ Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut
Jobs queued in the qexp queue are not counted toward the project's usage.
!!! Note
-    Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
+    Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
Calculated fair-share priority can also be seen as the Resource_List.fairshare attribute of a job.
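Assuming a standard PBS qstat and an illustrative job ID, the attribute can be inspected like this:

```bash
# Show the fair-share priority recorded for a queued job
$ qstat -f 123456 | grep fairshare
```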
......@@ -65,6 +65,6 @@ The scheduler makes a list of jobs to run in order of execution priority. Schedu
This means that jobs with lower execution priority can be run before jobs with higher execution priority.
!!! Note
-    It is **very beneficial to specify the walltime** when submitting jobs.
+    It is **very beneficial to specify the walltime** when submitting jobs.
Specifying a more accurate walltime enables better scheduling, better execution times, and better resource usage. Jobs with a suitable (small) walltime may be backfilled and thus overtake jobs with higher priority.
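For instance (project ID, node count, and walltime are illustrative):

```bash
# An accurate, small walltime increases the chance of being backfilled
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=16 -l walltime=01:30:00 ./jobscript
```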
......@@ -9,7 +9,7 @@ All compute and login nodes of Anselm are interconnected by a high-bandwidth, lo
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish native InfiniBand connections among the nodes.
!!! Note
-    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
+    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
......
......@@ -13,14 +13,14 @@ The resources are allocated to the job in a fair-share fashion, subject to const
- **qfree**, the Free resource utilization queue
!!! Note
-    Check the queue status at <https://extranet.it4i.cz/anselm/>
+    Check the queue status at <https://extranet.it4i.cz/anselm/>
Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
## Job Submission and Execution
!!! Note
-    Use the **qsub** command to submit your jobs.
+    Use the **qsub** command to submit your jobs.
qsub submits the job into the queue, creating a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
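A couple of illustrative invocations (the project ID and jobscript name are placeholders):

```bash
# Batch job: four full nodes for two hours
$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16 -l walltime=02:00:00 ./jobscript

# Interactive shell in the express queue
$ qsub -I -q qexp -l select=1:ncpus=16 -l walltime=01:00:00
```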
......@@ -29,7 +29,7 @@ Read more on the [Job submission and execution](job-submission-and-execution/) p
## Capacity Computing
!!! Note
-    Use Job arrays when running a huge number of jobs.
+    Use Job arrays when running a huge number of jobs.
Use GNU Parallel and/or Job arrays when running (many) single core jobs.
......
......@@ -33,7 +33,7 @@ Compilation parameters are default:
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
!!! Note
-    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
+    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that they are placed in the fast scratch file system.
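As a sketch (input file name and scratch path are illustrative):

```bash
# Direct Molpro's temporary data to the fast scratch file system
$ molpro -d /scratch/$USER/$PBS_JOBID input.com
```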
......
......@@ -24,13 +24,13 @@ On the Anselm cluster COMSOL is available in the latest stable version. There ar
To load COMSOL, load the module
```bash
-$ module load comsol
+$ module load comsol
```
By default, the **EDU variant** will be loaded. If you need another version or variant, load the particular version. To obtain the list of available versions, use
```bash
-$ module avail comsol
+$ module avail comsol
```
If you need to prepare COMSOL jobs in interactive mode, we recommend using COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use Virtual Network Computing (VNC).
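An interactive COMSOL session might then be started along these lines (queue and walltime are illustrative; a VNC or X-forwarded session is assumed for the GUI):

```bash
$ qsub -I -q qprod -l select=1:ncpus=16 -l walltime=04:00:00
$ module load comsol
$ comsol &
```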
......
......@@ -21,7 +21,7 @@ The module sets up environment variables, required for using the Allinea Perform
## Usage
!!! Note
-    Use the perf-report wrapper on your (MPI) program.
+    Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi/), use the perf-report wrapper:
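For instance (program name and rank count are illustrative):

```bash
# perf-report wraps the usual MPI launch and produces a performance report
$ perf-report mpirun -n 16 ./mympiprog.x
```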
......
......@@ -28,7 +28,7 @@ Currently, there are two versions of CUBE 4.2.3 available as [modules](../../env
CUBE is a graphical application. Refer to Graphical User Interface documentation for a list of methods to launch graphical applications on Anselm.
!!! Note
-    Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
+    Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
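For instance (the experiment directory name is illustrative):

```bash
# Post-process the measurement first, then browse it in the CUBE GUI
$ scalasca -examine scorep_myprog_16_sum
```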
......
......@@ -193,7 +193,7 @@ Can be used as a sensor for ksysguard GUI, which is currently not installed on A
In a similar fashion to PAPI, PCM provides a C++ API to access the performance counters from within your application. Refer to the [Doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API.
!!! Note
-    Due to security limitations, using the PCM API to monitor your applications is currently not possible on Anselm (the application must be run as the root user).
+    Due to security limitations, using the PCM API to monitor your applications is currently not possible on Anselm (the application must be run as the root user).
Sample program using the API:
......
......@@ -27,7 +27,7 @@ and launch the GUI :
```
!!! Note
-    To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
+    To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
The GUI will open in a new window. Click "_New Project..._" to create a new project. After clicking _OK_, a new window with project properties will appear. At "_Application:_", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options, such as command line arguments, can be selected. At "_Managed code profiling mode:_" select "_Native_" (unless you want to profile managed-mode .NET/Mono applications). After clicking _OK_, your project is created.
......@@ -48,7 +48,7 @@ Copy the line to clipboard and then you can paste it in your jobscript or in com
## Xeon Phi
!!! Note
-    This section is outdated. It will be updated with new information soon.
+    This section is outdated. It will be updated with new information soon.
It is possible to analyze both native and offload Xeon Phi applications. For offload mode, just specify the path to the binary. For native mode, you need to specify in project properties:
......@@ -59,7 +59,7 @@ Application parameters: mic0 source ~/.profile && /path/to/your/bin
Note that we include source ~/.profile in the command to set up environment paths [as described here](../intel-xeon-phi/).
!!! Note
-    If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case, please contact our support to reboot the MIC card.
+    If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case, please contact our support to reboot the MIC card.
You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later:
......
......@@ -191,7 +191,7 @@ Now the compiler won't remove the multiplication loop. (However it is still not
### Intel Xeon Phi
!!! Note
-    PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon; for example, the floating point operations counter is missing.
+    PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon; for example, the floating point operations counter is missing.
To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load a module with the "-mic" suffix, for example "papi/5.3.2-mic":
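That is, under the module naming above:

```bash
$ module load papi/5.3.2-mic
```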
......
......@@ -43,7 +43,7 @@ Some notable Scalasca options are:
- **-e <directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with the prefix scorep\_, followed by the name of the executable and launch configuration.**
!!! Note
-    Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
+    Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
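A sketch of saving the measurement to scratch, using the -e option listed above (path and program name are illustrative):

```bash
# -e redirects the (potentially huge) experiment data to scratch
$ scalasca -analyze -e /scratch/$USER/scorep_myrun mpirun -n 16 ./mympiprog.x
```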
### Analysis of Reports
......
......@@ -121,7 +121,7 @@ The source code of this function can be also found in
```
!!! Note
-    You can also add only the following line to your ~/.tvdrc file instead of the entire function:
+    You can also add only the following line to your ~/.tvdrc file instead of the entire function:
**source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
You need to do this step only once.
......
......@@ -5,7 +5,7 @@
Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX vector instructions, is available via the module ipp. IPP is a very rich library of highly optimized algorithmic building blocks for media and data applications. This includes signal, image and frame processing algorithms, such as FFT, FIR, Convolution, Optical Flow, Hough transform, Sum, MinMax, as well as cryptographic functions, linear algebra functions and many more.
!!! Note
-    Check out IPP before implementing your own math functions for data processing; they are likely already there.
+    Check out IPP before implementing your own math functions for data processing; they are likely already there.
```bash
$ module load ipp
......
......@@ -24,7 +24,7 @@ Intel MKL version 13.5.192 is available on Anselm
The module sets up the environment variables required for linking and running MKL-enabled applications. The most important variables are $MKLROOT, $MKL_INC_DIR, $MKL_LIB_DIR and $MKL_EXAMPLES.
!!! Note
-    The MKL library may be linked using any compiler. With the Intel compiler, use the -mkl option to link the default threaded MKL.
+    The MKL library may be linked using any compiler. With the Intel compiler, use the -mkl option to link the default threaded MKL.
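For instance (the source file name is illustrative):

```bash
# Link the default threaded MKL with the Intel compiler
$ icc -mkl myprog.c -o myprog
```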
### Interfaces
......@@ -48,7 +48,7 @@ You will need the mkl module loaded to run the mkl enabled executable. This may
### Threading
!!! Note
-    An advantage of using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
+    An advantage of using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
For this to work, the application must link the threaded MKL library (the default). The number and behaviour of MKL threads may be controlled via OpenMP environment variables such as OMP_NUM_THREADS and KMP_AFFINITY. MKL_NUM_THREADS takes precedence over OMP_NUM_THREADS.
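A sketch of controlling the threading (values are illustrative):

```bash
# Sixteen MKL/OpenMP threads, pinned to cores
$ export OMP_NUM_THREADS=16
$ export KMP_AFFINITY=granularity=fine,compact
$ ./myprog
```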
......
......@@ -14,7 +14,7 @@ Intel TBB version 4.1 is available on Anselm
The module sets up the environment variables required for linking and running TBB-enabled applications.
!!! Note
-    Link the TBB library using -ltbb.
+    Link the TBB library using -ltbb.
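For instance (the source file name is illustrative):

```bash
$ icpc -ltbb mytbbprog.cpp -o mytbbprog
```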
## Examples
......
......@@ -233,7 +233,7 @@ During the compilation Intel compiler shows which loops have been vectorized in
Some interesting compiler flags useful not only for code debugging are:
!!! Note
-    Debugging
+    Debugging
openmp_report[0|1|2] - controls the OpenMP parallelizer diagnostic level
vec-report[0|1|2] - controls the compiler-based vectorization diagnostic level
......@@ -421,7 +421,7 @@ If the code is parallelized using OpenMP a set of additional libraries is requir
For your information, the list of libraries and their locations required for the execution of an OpenMP parallel code on Intel Xeon Phi is:
!!! Note
-    /apps/intel/composer_xe_2013.5.192/compiler/lib/mic
+    /apps/intel/composer_xe_2013.5.192/compiler/lib/mic
- libiomp5.so
- libimf.so
......@@ -502,7 +502,7 @@ After executing the complied binary file, following output should be displayed.
```
!!! Note
-    More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
+    More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
The second example, which can be found in the "/apps/intel/opencl-examples" directory, is General Matrix Multiply. You can follow the same procedure to download the example to your directory and compile it.
......@@ -604,7 +604,7 @@ An example of basic MPI version of "hello-world" example in C language, that can
Intel MPI for the Xeon Phi coprocessors offers different MPI programming models:
!!! Note
-    **Host-only model** - all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
+    **Host-only model** - all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
**Coprocessor-only model** - all MPI ranks reside only on the coprocessors.
......@@ -873,7 +873,7 @@ To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
A possible output of the MPI "hello-world" example executed on two hosts and two accelerators is:
```bash
-Hello world from process 0 of 8 on host cn204
+Hello world from process 0 of 8 on host cn204
Hello world from process 1 of 8 on host cn204
Hello world from process 2 of 8 on host cn204-mic0
Hello world from process 3 of 8 on host cn204-mic0
......@@ -891,7 +891,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
!!! Note
-    **Host only node-file:**
+    **Host only node-file:**
- /lscratch/${PBS_JOBID}/nodefile-cn

**MIC only node-file:**

- /lscratch/${PBS_JOBID}/nodefile-mic

**Host and MIC node-file:**
......
......@@ -11,7 +11,7 @@ If an ISV application was purchased for educational (research) purposes and also
## Overview of the Licenses Usage
!!! Note
-    The overview is generated every minute and is accessible from web or command line interface.
+    The overview is generated every minute and is accessible from web or command line interface.
### Web Interface
......
......@@ -27,7 +27,7 @@ Virtualization has also some drawbacks, it is not so easy to setup efficient sol
The solution described in the chapter [HOWTO](virtualization/#howto) is suitable for single-node tasks; it does not introduce virtual machine clustering.
!!! Note
-    Please consider virtualization as a last resort solution for your needs.
+    Please consider virtualization as a last resort solution for your needs.
!!! Warning
Please consult the use of virtualization with IT4Innovations support.
......@@ -39,7 +39,7 @@ For running Windows application (when source code and Linux native application a
IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of the complex conditions of licensing software in virtual environments.
!!! Note
-    Users are responsible for licensing the OS (e.g. MS Windows) and all software running in their virtual machines.
+    Users are responsible for licensing the OS (e.g. MS Windows) and all software running in their virtual machines.
## Howto
......@@ -249,7 +249,7 @@ Run virtual machine using optimized devices, user network back-end with sharing
Thanks to port forwarding, you can access the virtual machine via SSH (Linux) or RDP (Windows) by connecting to the IP address of the compute node (port 2222 for SSH). You must use the VPN network.
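For instance (node address and user name are illustrative; an active VPN connection is assumed):

```bash
# SSH to the VM through the port forwarded on the compute node
$ ssh -p 2222 user@10.2.1.10
```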
!!! Note
-    Keep in mind that if you use virtio devices, you must have virtio drivers installed in your virtual machine.
+    Keep in mind that if you use virtio devices, you must have virtio drivers installed in your virtual machine.
### Networking and Data Sharing
......