Commit 45410dfa authored by David Hrbáč

Lowercase

parent b4474c1c
Showing 53 additions and 53 deletions
......@@ -6,7 +6,7 @@ In many cases, it is useful to submit huge (>100+) number of computational jobs
However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**.
!!! Note
!!! note
Please follow one of the procedures below if you wish to schedule more than 100 jobs at a time.
- Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
......@@ -20,7 +20,7 @@ However, executing huge number of jobs via the PBS queue may strain the system.
## Job Arrays
!!! Note
!!! note
A huge number of jobs may be easily submitted and managed as a job array.
A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
......@@ -149,7 +149,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
## GNU Parallel
!!! Note
!!! note
Use GNU parallel to run many single core tasks on one node.
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single-core jobs via the queue system on Anselm.
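As a minimal sketch of the idea (assuming a hypothetical `tasklist` file with one input file name per line):

```bash
# run myprog.x once per line of tasklist, keeping 16 tasks running at a time
$ parallel -j 16 ./myprog.x {} :::: tasklist
```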
......@@ -216,17 +216,17 @@ $ qsub -N JOBNAME jobscript
In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
!!! Hint
!!! hint
Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
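For illustration, the top of such a jobscript might carry directives along these lines (the project ID, queue and resource request are placeholders, not values from this documentation):

```bash
#!/bin/bash
#PBS -A PROJECT_ID          # your valid project ID
#PBS -q qprod               # desired queue
#PBS -l select=1:ncpus=16,walltime=02:00:00
```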
## Job Arrays and GNU Parallel
!!! Note
!!! note
Combine job arrays and GNU parallel for the best throughput of single-core jobs
While job arrays are able to utilize all available computational nodes, GNU parallel can be used to efficiently run multiple single-core jobs on a single node. The two approaches may be combined to utilize all available (current and future) resources to execute single-core jobs.
!!! Note
!!! note
Every subjob in an array runs GNU parallel to utilize all cores on the node
### GNU Parallel, Shared jobscript
......@@ -281,7 +281,7 @@ cp output $PBS_O_WORKDIR/$TASK.out
In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
!!! Note
!!! note
Select subjob walltime and number of tasks per subjob carefully
When deciding these values, think about the following guiding rules:
......@@ -301,7 +301,7 @@ $ qsub -N JOBNAME -J 1-992:32 jobscript
In this example, we submit a job array of 31 subjobs. Note the -J 1-992:**32**; this must be the same as the number written to the numtasks file. Each subjob will run on a full node and process 16 input files in parallel, 32 in total per subjob. Every subjob is assumed to complete in less than 2 hours.
!!! Hint
!!! hint
Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
## Examples
......
......@@ -23,14 +23,14 @@ then
fi
```
!!! Note
!!! note
Do not run commands that write to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider testing for SSH session interactivity before running such commands, as shown in the previous example.
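A common form of such a guard, as a sketch (the test for interactivity may differ from the exact example used above):

```bash
# in ~/.bashrc: produce output only in interactive sessions
if [ -n "$PS1" ]; then
    echo "Welcome on Anselm"
    module list
fi
```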
### Application Modules
In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
!!! Note
!!! note
The modules set up the application paths, library paths and environment variables for running a particular application.
We also have a second modules repository, created using a tool called EasyBuild. On the Salomon cluster, all modules will be built by this tool. If you want to use software from this repository, please follow the instructions in the section [Application Modules Path Expansion](environment-and-modules/#EasyBuild).
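Typical module usage, regardless of which repository a module comes from, looks like this (the module name is illustrative):

```bash
$ module avail              # list modules available on the cluster
$ module load intel         # set up paths and variables for the Intel suite
$ module list               # show currently loaded modules
$ module unload intel       # remove the module from the environment
```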
......
......@@ -35,7 +35,7 @@ usage<sub>Total</sub> is total usage by all users, by all projects.
Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut in half periodically, at an interval of 168 hours (one week).
Jobs queued in the qexp queue are not counted toward the project's usage.
!!! Note
!!! note
Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
Calculated fair-share priority can also be seen as the Resource_List.fairshare attribute of a job.
......@@ -64,7 +64,7 @@ The scheduler makes a list of jobs to run in order of execution priority. Schedu
This means that jobs with lower execution priority can run before jobs with higher execution priority.
!!! Note
!!! note
It is **very beneficial to specify the walltime** when submitting jobs.
Specifying a more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with a suitable (small) walltime may be backfilled and overtake job(s) with higher priority.
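For example, a submission with an explicit, realistic walltime might look like this (project ID and resource selection are placeholders):

```bash
# request 2 nodes for at most 3 hours; a tight walltime makes backfilling more likely
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=16,walltime=03:00:00 ./jobscript
```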
......@@ -11,7 +11,7 @@ When allocating computational resources for the job, please specify
5. Project ID
6. Jobscript or interactive switch
!!! Note
!!! note
Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
Submit the job using the qsub command:
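A sketch of a batch submission and of an interactive one (project ID, queue and node counts are placeholders):

```bash
# batch job: the jobscript is executed on the first allocated node
$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16 ./jobscript

# interactive job: the -I switch opens a shell on the first allocated node
$ qsub -A PROJECT_ID -q qexp -l select=1:ncpus=16 -I
```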
......@@ -132,7 +132,7 @@ Although this example is somewhat artificial, it demonstrates the flexibility of
## Job Management
!!! Note
!!! note
Check the status of your jobs using the **qstat** and **check-pbs-jobs** commands
```bash
......@@ -213,7 +213,7 @@ Run loop 3
In this example, we see actual output (some iteration loops) of the job 35141.dm2
!!! Note
!!! note
Manage your queued or running jobs using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands
You may release your allocation at any time, using the qdel command
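For example (the job ID is illustrative):

```bash
$ qhold 12345.dm2      # hold a queued job
$ qrls  12345.dm2      # release the hold
$ qdel  12345.dm2      # delete a queued or running job
```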
......@@ -238,12 +238,12 @@ $ man pbs_professional
### Jobscript
!!! Note
!!! note
Prepare the jobscript to run batch jobs in the PBS queue system
The jobscript is a user-made script controlling the sequence of commands for executing the calculation. It is often written in bash; other scripting languages may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and executed by the PBS Professional workload manager.
!!! Note
!!! note
The jobscript or interactive shell is executed on the first of the allocated nodes.
```bash
......@@ -273,7 +273,7 @@ $ pwd
In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
!!! Note
!!! note
All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user.
The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
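A sketch from within the interactive session (the node hostname is illustrative):

```bash
$ cat $PBS_NODEFILE     # list all nodes allocated to the job
$ ssh cn17              # log in to another allocated node
```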
......@@ -305,7 +305,7 @@ In this example, the hostname program is executed via pdsh from the interactive
### Example Jobscript for MPI Calculation
!!! Note
!!! note
Production jobs must use the /scratch directory for I/O
The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy the outputs to the home directory.
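A condensed sketch of that pattern (the scratch path, file names and executable are placeholders):

```bash
#!/bin/bash
SCRDIR=/scratch/$USER/myjob             # per-job scratch directory (illustrative)
mkdir -p $SCRDIR && cd $SCRDIR || exit

cp $PBS_O_WORKDIR/input .               # stage inputs from the submit directory
cp $PBS_O_WORKDIR/mympiprog.x .

mpiexec ./mympiprog.x                   # run the calculation

cp output $PBS_O_WORKDIR/.              # copy results back to /home
exit
```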
......@@ -337,12 +337,12 @@ exit
In this example, a directory on /home holds the input file input and the executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
!!! Note
!!! note
Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on shared /scratch before job submission and to retrieve the outputs manually after all calculations are finished.
!!! Note
!!! note
Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
Example jobscript for an MPI job with preloaded inputs and executables; the options for qsub are stored within the script:
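The relevant directives might look like this (node count and values are illustrative):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -N MYJOB
#PBS -l select=100:ncpus=16:mpiprocs=1:ompthreads=16
#PBS -A PROJECT_ID
```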
......@@ -375,7 +375,7 @@ sections.
### Example Jobscript for Single Node Calculation
!!! Note
!!! note
The local scratch directory is often useful for single-node jobs. Local scratch will be deleted immediately after the job ends.
Example jobscript for a single-node calculation, using [local scratch](storage/) on the node:
......
......@@ -8,7 +8,7 @@ All compute and login nodes of Anselm are interconnected by a high-bandwidth, lo
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish native InfiniBand connections among the nodes.
!!! Note
!!! note
The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
......
......@@ -235,10 +235,10 @@ PRACE users should check their project accounting using the [PRACE Accounting To
Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time how many core-hours have been consumed by themselves and their projects, using the command "it4ifree".
!!! Note
!!! note
You need to know your user password to use the command. Displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
!!! Hint
!!! hint
The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
```bash
......
......@@ -41,7 +41,7 @@ Please [follow the documentation](shell-and-data-access/).
To use OpenGL acceleration, **24 bit color depth must be used**. Otherwise, only the geometry (desktop size) definition is needed.
!!! Hint
!!! hint
On the first VNC server run, you need to define a password.
This example defines a desktop with dimensions of 1200x700 pixels and 24 bit color depth.
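Such an invocation might look like this (the display number is illustrative):

```bash
$ vncserver :61 -geometry 1200x700 -depth 24
```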
......@@ -138,7 +138,7 @@ qviz**. The queue has following properties:
Currently, when accessing the node, each user gets 4 CPU cores allocated, thus approximately 16 GB of RAM and 1/4 of the GPU capacity.
!!! Note
!!! note
If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, the whole RAM and the whole GPU are exclusive to that user. This is currently also the maximum allowed allocation per user. One hour of work is allocated by default; the user may ask for 2 hours maximum.
To access the visualization node, follow these steps:
......
......@@ -12,14 +12,14 @@ The resources are allocated to the job in a fair-share fashion, subject to const
- **qnvidia**, **qmic**, **qfat**, the Dedicated queues
- **qfree**, the Free resource utilization queue
!!! Note
!!! note
Check the queue status at <https://extranet.it4i.cz/anselm/>
Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
## Job Submission and Execution
!!! Note
!!! note
Use the **qsub** command to submit your jobs.
The qsub command submits the job into the queue and creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
......@@ -28,7 +28,7 @@ Read more on the [Job submission and execution](job-submission-and-execution/) p
## Capacity Computing
!!! Note
!!! note
Use Job arrays when running a huge number of jobs.
Use GNU Parallel and/or Job arrays when running (many) single-core jobs.
......
......@@ -4,7 +4,7 @@
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
!!! Note
!!! note
Check the queue status at <https://extranet.it4i.cz/anselm/>
| queue | active project | project resources | nodes | min ncpus | priority | authorization | walltime |
......@@ -15,7 +15,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
| qnvidia, qmic, qfat | yes | 0 | 23 total qnvidia, 4 total qmic, 2 total qfat | 16 | 200 | yes | 24/48 h |
| qfree | yes | none required | 178 w/o accelerator | 16 | -1024 | no | 12 h |
!!! Note
!!! note
**The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources once a Project has exhausted all of its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
**The qexp queue is equipped with nodes that do not all have the same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during PBS job submission.
......@@ -113,7 +113,7 @@ The resources that are currently subject to accounting are the core-hours. The c
### Check Consumed Resources
!!! Note
!!! note
The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
Users may check at any time how many core-hours have been consumed by themselves and their projects. The command is available on the clusters' login nodes.
......
......@@ -53,7 +53,7 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
Example of a cluster login:
!!! Note
!!! note
The environment is **not** shared between login nodes, except for [shared filesystems](storage/#shared-filesystems).
## Data Transfer
......@@ -69,14 +69,14 @@ Data in and out of the system may be transferred by the [scp](http://en.wikipedi
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
!!! Note
!!! note
Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
1TB may be transferred in 1:50h.
To achieve 160MB/s transfer rates, the end user must be connected by a 10G line all the way to IT4Innovations and use a computer with a fast processor for the transfer. Using a Gigabit Ethernet connection, up to 110MB/s may be expected. A fast cipher (aes128-ctr) should be used.
!!! Note
!!! note
If you experience degraded data transfer performance, consult your local network provider.
On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
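A sketch of such a transfer (the hostname, key path and file name are illustrative, not taken from this page):

```bash
# use the fast aes128-ctr cipher and key-based authentication
$ scp -c aes128-ctr -i ~/.ssh/id_rsa mydata.tar username@anselm.it4i.cz:~/
```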
......@@ -126,7 +126,7 @@ Outgoing connections, from Anselm Cluster login nodes to the outside world, are
| 443 | https |
| 9418 | git |
!!! Note
!!! note
Please use **ssh port forwarding** and proxy servers to connect from Anselm to all other remote ports.
Outgoing connections from Anselm Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
......@@ -135,7 +135,7 @@ Outgoing connections, from Anselm Cluster compute nodes are restricted to the in
### Port Forwarding From Login Nodes
!!! Note
!!! note
Port forwarding allows an application running on Anselm to connect to an arbitrary remote host and port.
It works by tunneling the connection from Anselm back to the user's workstation and forwarding from the workstation to the remote host.
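One way to achieve this is remote (reverse) forwarding started from the workstation; a sketch, with illustrative hostnames and ports:

```bash
# make port 6000 on the Anselm login node tunnel back through this workstation
# and on to remote.host.com:1234
$ ssh -R 6000:remote.host.com:1234 username@anselm.it4i.cz
```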
......@@ -177,7 +177,7 @@ In this example, we assume that port forwarding from login1:6000 to remote.host.
Port forwarding is static; each single port is mapped to a particular port on the remote host. Connecting to another remote host requires a new forward.
!!! Note
!!! note
Applications with built-in proxy support have unlimited access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides the functionality. To establish a SOCKS proxy server listening on port 1080, run:
......
......@@ -32,7 +32,7 @@ Compilation parameters are default:
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
!!! Note
!!! note
The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these files are placed in the fast scratch file system.
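A sketch combining both recommendations (project ID, paths and input file are placeholders; check the Molpro manual for the exact option syntax):

```bash
# pure MPI: 16 processes per node, 1 OpenMP thread each
$ qsub -A PROJECT_ID -q qprod -l select=1:ncpus=16:mpiprocs=16:ompthreads=1 ./molpro_job.sh

# inside the jobscript: keep Molpro temporary files on the fast scratch file system
molpro -d /scratch/$USER/$PBS_JOBID my_input.com
```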
......
......@@ -104,7 +104,7 @@ As default UPC network the "smp" is used. This is very quick and easy way for te
For production runs, it is recommended to use the native InfiniBand implementation of the UPC network, "ibv". For testing/debugging on multiple nodes, the "mpi" UPC network is recommended.
!!! Warning
!!! warning
Selection of the network is done at compile time and not at runtime (as one might expect)!
Example UPC code:
......
......@@ -47,7 +47,7 @@ $ mpif90 -g -O0 -o test_debug test.f
Before debugging, you need to compile your code with these flags:
!!! Note
!!! note
- **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
- **-O0** : Suppress all optimizations.
......
......@@ -20,7 +20,7 @@ The module sets up environment variables, required for using the Allinea Perform
## Usage
!!! Note
!!! note
Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi/), use the perf-report wrapper:
......
......@@ -27,7 +27,7 @@ Currently, there are two versions of CUBE 4.2.3 available as [modules](../../env
CUBE is a graphical application. Refer to Graphical User Interface documentation for a list of methods to launch graphical applications on Anselm.
!!! Note
!!! note
Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
......
......@@ -192,7 +192,7 @@ Can be used as a sensor for ksysguard GUI, which is currently not installed on A
In a similar fashion to PAPI, PCM provides a C++ API to access the performance counters from within your application. Refer to the [Doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API.
!!! Note
!!! note
Due to security limitations, using the PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as the root user.)
Sample program using the API:
......
......@@ -26,7 +26,7 @@ and launch the GUI :
$ amplxe-gui
```
!!! Note
!!! note
To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
The GUI will open in a new window. Click on "_New Project..._" to create a new project. After clicking _OK_, a new window with project properties will appear. At "_Application:_", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options such as command line arguments can be selected. At "_Managed code profiling mode:_" select "_Native_" (unless you want to profile managed mode .NET/Mono applications). After clicking _OK_, your project is created.
......@@ -47,7 +47,7 @@ Copy the line to clipboard and then you can paste it in your jobscript or in com
## Xeon Phi
!!! Note
!!! note
This section is outdated. It will be updated with new information soon.
It is possible to analyze both native and offload Xeon Phi applications. For offload mode, just specify the path to the binary. For native mode, you need to specify in project properties:
......@@ -58,7 +58,7 @@ Application parameters: mic0 source ~/.profile && /path/to/your/bin
Note that we include source ~/.profile in the command to set up environment paths [as described here](../intel-xeon-phi/).
!!! Note
!!! note
If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later :
......
......@@ -190,7 +190,7 @@ Now the compiler won't remove the multiplication loop. (However it is still not
### Intel Xeon Phi
!!! Note
!!! note
PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon; for example, the floating point operations counter is missing.
To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load the module with the "-mic" suffix, for example "papi/5.3.2-mic":
......
......@@ -42,7 +42,7 @@ Some notable Scalasca options are:
- **-t Enable trace data collection. By default, only summary data are collected.**
- **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by name of the executable and launch configuration.**
!!! Note
!!! note
Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
### Analysis of Reports
......
......@@ -57,7 +57,7 @@ Compile the code:
Before debugging, you need to compile your code with these flags:
!!! Note
!!! note
- **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
- **-O0** : Suppress all optimizations.
......@@ -91,7 +91,7 @@ To debug a serial code use:
To debug a parallel code compiled with **OpenMPI** you need to set up your TotalView environment:
!!! Hint
!!! hint
To be able to run the parallel debugging procedure from the command line, without stopping the debugger in the mpiexec source code, you have to add the following function to your `~/.tvdrc` file:
```bash
......@@ -120,7 +120,7 @@ The source code of this function can be also found in
/apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl
```
!!! Note
!!! note
You can also add only the following line to your ~/.tvdrc file instead of the entire function:
**source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
......