Shell access and data transfer
==============================
Interactive Login
-----------------
The Anselm cluster is accessed by the SSH protocol via login nodes login1 and login2 at the address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
|Login address|Port|Protocol|Login node|
|---|---|---|---|
|anselm.it4i.cz|22|ssh|round-robin DNS record for login1 and login2|
|login1.anselm.it4i.cz|22|ssh|login1|
|login2.anselm.it4i.cz|22|ssh|login2|
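For example, to reach a specific login node directly (a minimal sketch, mirroring the private key example below):
```bash
local $ ssh -i /path/to/id_rsa username@login1.anselm.it4i.cz
```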
Authentication is by the [private key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html).
>Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
Private key authentication:
On **Linux** or **Mac**, use
```bash
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
```
If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to restrict the permissions of the private key file:
```bash
local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:
```bash
_
/\ | |
/ \ _ __ ___ ___| |_ __ ___
/ /\ \ | '_ \/ __|/ _ \ | '_ ` _ \
/ ____ \| | | \__ \ __/ | | | | | |
/_/ \_\_| |_|___/\___|_|_| |_| |_|
http://www.it4i.cz/?lang=en
Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.anselm ~]$
```
>The environment is **not** shared between login nodes, except for [shared filesystems](../storage-1.html#section-1).
Data Transfer
-------------
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
|Address|Port|Protocol|
|---|---|---|
|anselm.it4i.cz|22|scp, sftp|
|login1.anselm.it4i.cz|22|scp, sftp|
|login2.anselm.it4i.cz|22|scp, sftp|
|dm1.anselm.it4i.cz|22|scp, sftp|
Authentication is by the [private key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html).
>Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
1TB may be transferred in 1:50h.
To achieve 160MB/s transfer rates, the end user must be connected by a 10G line all the way to IT4Innovations and use a computer with a fast processor for the transfer. Using a Gigabit Ethernet connection, up to 110MB/s may be expected. A fast cipher (aes128-ctr) should be used.
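For example, the cipher may be requested explicitly on the scp command line (a minimal sketch based on the transfer examples below):
```bash
local $ scp -c aes128-ctr -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```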
>If you experience degraded data transfer performance, consult your local network provider.
On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
```bash
local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```
```bash
local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
```
or
```bash
local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
```
A very convenient way to transfer files in and out of Anselm is via the FUSE filesystem [sshfs](http://linux.die.net/man/1/sshfs):
```bash
local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
```
Using sshfs, the user's Anselm home directory will be mounted on the local computer, just like an external disk.
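To detach the mounted directory when you are done (a minimal sketch using the standard FUSE unmount tool on Linux):
```bash
local $ fusermount -u mountpoint
```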
Learn more about ssh, scp and sshfs by reading the manpages:
```bash
$ man ssh
$ man scp
$ man sshfs
```
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc.
More information about the shared file systems is available [here](../../storage.html).
```bash
echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). SVS FEM recommends utilizing resources via the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common CFX def file, which is attached to the CFX solver via the parameter -def.
The **license** should be selected by the parameter -P (capital **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics) - **Commercial**.
[More about licensing here](licensing.md)
```bash
NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
/ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common Fluent journal file, which is attached to the Fluent solver via the parameter -i fluent.jou.
input is the name of the input file.
case is the name of the .cas file that the input file will utilize.
fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband, vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
outfile is the name of the file to which the standard output will be sent.
```bash
echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common LS-DYNA .**k** file, which is attached to the ANSYS solver via the parameter i=.
```bash
echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common APDL file, which is attached to the ANSYS solver via the parameter -i.
The **license** should be selected by the parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics) - **Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN).
[More about licensing here](licensing.md)
Molpro is compiled for parallel execution using MPI and OpenMP.
>The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS, as sketched below.
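For illustration, a PBS select statement for a pure MPI run might look like this (a minimal sketch; the project ID, walltime and jobscript name are hypothetical):
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:mpiprocs=16:ompthreads=1 -l walltime=04:00:00 ./molpro_job.sh
```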
You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage.md). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
### Example jobscript
```bash
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
NWChem
======
**High-Performance Computational Chemistry**
Introduction
-------------------------
Options
-------
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
- MEMORY: controls the amount of memory NWChem will use
- SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem](../../storage.md#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"; see the sketch below.
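A minimal sketch of writing these directives into an input file from a jobscript (the directive syntax is assumed from the NWChem manual; the file name is hypothetical):
```bash
# write the top of a hypothetical NWChem input file with the directives above
cat > input.nw <<EOF
memory 2000 mb
scratch_dir /scratch/$USER/$PBS_JOBID
scf
  direct
end
EOF
```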
Job scheduling
==============
Job execution priority
----------------------
The scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
Job execution priority is determined by these job properties (in order of importance):
1. queue priority
2. fairshare priority
### Queue priority
Queue priority is the priority of the queue in which the job is queued before execution.
Queue priority has the biggest impact on job execution priority. The execution priority of jobs in higher priority queues is always greater than the execution priority of jobs in lower priority queues. Other job properties used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority.
Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues>
### Fairshare priority
Fairshare priority is a priority calculated from recent usage of resources. It is calculated per project; all members of a project share the same fairshare priority. Projects with higher recent usage have lower fairshare priority than projects with lower or no recent usage.
Fairshare priority is used for ranking jobs with equal queue priority.
Fairshare priority is calculated as
![](../../anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png)
where MAX_FAIRSHARE has the value 1E6, usage~Project~ is the cumulated usage by all members of the selected project, and usage~Total~ is the total usage by all users, over all projects.
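For reference, the formula in the image above can be read as follows (a rendering that assumes the standard fairshare definition; the image remains authoritative):
$$\mathrm{fairshare\_priority} = \mathrm{MAX\_FAIRSHARE} \times \left(1 - \frac{\mathrm{usage}_{Project}}{\mathrm{usage}_{Total}}\right)$$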
Usage counts allocated corehours (ncpus*walltime). Usage is decayed, or cut in half, periodically at an interval of 168 hours (one week). Jobs queued in the qexp queue are not counted into the project's usage.
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
The calculated fairshare priority can also be seen as the Resource_List.fairshare attribute of a job.
### Eligible time
Eligible time is the amount (in seconds) of eligible time a job accrued while waiting to run. Jobs with higher eligible time gain higher priority.
Eligible time has the least impact on execution priority. It is used for sorting jobs with equal queue priority and fairshare priority. It is very, very difficult for eligible time to compete with fairshare priority.
Eligible time can be seen as the eligible_time attribute of a job.
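Both priority-related attributes can be inspected in the full job status output (a minimal sketch with a hypothetical job ID):
```bash
$ qstat -f 123456 | grep -E 'Resource_List.fairshare|eligible_time'
```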
### Formula
Job execution priority (job sort formula) is calculated as:
### Job backfilling
The scheduler uses job backfilling.
Backfilling means fitting smaller jobs around the higher-priority jobs that the scheduler is going to run next, in such a way that the higher-priority jobs are not delayed. Backfilling allows us to keep resources from becoming idle when the top job (job with the highest execution priority) cannot run.
The scheduler makes a list of jobs to run in order of execution priority. It then looks for smaller jobs that can fit into the usage gaps around the highest-priority jobs in the list: it scans the prioritized list of jobs and chooses the highest-priority smaller jobs that fit. Filler jobs are run only if they will not delay the start time of top jobs.
This means that jobs with lower execution priority can be run before jobs with higher execution priority.
>It is **very beneficial to specify the walltime** when submitting jobs.
Specifying a more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with a suitable (small) walltime may be backfilled - and overtake job(s) with higher priority.
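For example, a job submitted with an accurate, short walltime is a good backfilling candidate (a minimal sketch; the project ID and jobscript are hypothetical):
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24 -l walltime=02:00:00 ./myjob
```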
### Job placement
Job [placement can be controlled by flags during submission](job-submission-and-execution.html#job_placement).
Resources Allocation Policy
===========================
Resources Allocation Policy
---------------------------
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and the resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority.md) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
|queue |active project |project resources |nodes|min ncpus*|priority|authorization|walltime |
|---|---|---|---|---|---|---|---|
|**qexp** Express queue |no |none required |32 nodes, max 8 per user |24 |150 |no |1 / 1h |
|**qprod** Production queue |yes |&gt; 0 |1006 nodes, max 86 per job |24 |0 |no |24 / 48h |
|**qlong** Long queue |yes |&gt; 0 |256 nodes, max 40 per job, only non-accelerated nodes allowed |24 |0 |no |72 / 144h |
|**qmpp** Massive parallel queue |yes |&gt; 0 |1006 nodes |24 |0 |yes |2 / 4h |
|**qfat** UV2000 queue |yes |&gt; 0 |1 (uv1) |8 |0 |yes |24 / 48h |
|**qfree** Free resource queue |yes |none required |752 nodes, max 86 per job |24 |-1024 |no |12 / 12h |
|**qviz** Visualization queue |yes |none required |2 (with NVIDIA Quadro K5000) |4 |150 |no |1 / 2h |
>**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy.html#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
- **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (without accelerators); a maximum of 8 nodes is available via the qexp for a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
- **qprod**, the Production queue: This queue is intended for normal production runs. An active project with nonzero remaining resources is required to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
- **qlong**, the Long queue: This queue is intended for long production runs. An active project with nonzero remaining resources is required to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 x 48 h).
- **qmpp**, the Massively parallel queue: This queue is intended for massively parallel runs. An active project with nonzero remaining resources is required to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qmpp is 4 hours. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the queue for all users associated with her/his Project.
- **qfat**, the UV2000 queue: This queue is dedicated to accessing the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the queue for all users associated with her/his Project.
- **qfree**, the Free resource queue: The qfree queue is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of their computational resources). An active project must be specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerators may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
- **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently, when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (the default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each), up to one whole node per user, so that all 28 cores, 512 GB RAM and the whole GPU are exclusive. This is currently also the maximum allowed allocation per user. One hour of work is allocated by default; the user may ask for 2 hours maximum.
>To access a node with a Xeon Phi co-processor, the user needs to specify that in the [job submission select statement](job-submission-and-execution.md).
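For illustration, a minimal sketch of selecting a queue at submission time (the project ID, jobscript and accelerator resource name are assumptions; see the job submission section for the authoritative select syntax):
```bash
# production queue, 4 full nodes
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24 ./myjob
# node with Xeon Phi co-processors (accelerator resource name assumed)
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:accelerator=True ./myjob
```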
### Notes
The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution.md).
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queuing jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
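For example, the walltime of a queued job may be changed like this (a minimal sketch with a hypothetical job ID):
```bash
$ qalter -l walltime=48:00:00 123456
```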
Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>.
### Queue status
>Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
![RSWEB Salomon](rswebsalomon.png "RSWEB Salomon")
Display the queue status on Salomon:
```bash
$ qstat -q
```
The PBS allocation overview may be obtained also using the rspbs command.
```bash
$ rspbs
Usage: rspbs [options]
Options:
                        Only for given node state (affects only --get-node*
                        --get-qlist-* --get-ibswitch-* actions)
  --incl-finished       Include finished jobs
```
Resources Accounting Policy
-------------------------------
### The Core-Hour
The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (24 cores) for 1 hour accounts for 24 core-hours. See the example in the [Job submission and execution](job-submission-and-execution.md) section.
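A quick worked example of the accounting arithmetic (hypothetical job size):
```bash
# 4 full nodes x 24 cores x 48 hours of wall clock time
$ echo $(( 4 * 24 * 48 ))
4608
```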
### Check consumed resources
The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
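On the clusters the command is already installed; on your own machine it may be obtained from the listed PyPI package (a minimal sketch assuming a standard pip setup):
```bash
$ pip install it4i.portal.clients
```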
Users may check at any time how many core-hours they and their projects have consumed. The command is available on the clusters' login nodes.
```bash
$ it4ifree
Password:
     PID   Total Used  ...by me Free
   -------- ------- ------ -------- -------
   OPEN-0-0 1500000 400644   225265 1099356
   DD-13-1    10000   2606     2606    7394
```
ANSYS CFX
=========
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod