Commit afd8f956 authored by Lukáš Krupčík

repair external and internal links

parent 7dd1632e
......@@ -71,6 +71,6 @@ Load modules and compile:
$ mpicc testfftw3mpi.c -o testfftw3mpi.x -Wl,-rpath=$LIBRARY_PATH -lfftw3_mpi
```
Run the example as [Intel MPI program](../mpi-1/running-mpich2.html).
Run the example as [Intel MPI program](../mpi/running-mpich2/).
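A minimal interactive run sketch follows; the queue, core counts and module names are assumptions for illustration only, so refer to the linked Intel MPI page for the authoritative procedure.

```bash
# Hypothetical sketch: allocate two Anselm nodes interactively and run the binary.
# Project ID, queue, core counts and module names are assumptions -- adjust to your setup.
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=16:mpiprocs=16 -I
$ module load impi fftw3-mpi
$ mpirun ./testfftw3mpi.x
```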
Read more on FFTW usage on the [FFTW website.](http://www.fftw.org/fftw3_doc/)
\ No newline at end of file
Read more on FFTW usage on the [FFTW website](http://www.fftw.org/fftw3_doc/)![external](../../../img/external.png).
\ No newline at end of file
......@@ -3,7 +3,7 @@ HDF5
Hierarchical Data Format library. Serial and MPI parallel version.
[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.
[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/)![external](../../../img/external.png) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.
Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
......@@ -23,7 +23,7 @@ Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, comp
The module sets up environment variables required for linking and running HDF5-enabled applications. Make sure that the choice of HDF5 module is consistent with your choice of MPI library. Mixing MPI of different implementations may have unpredictable results.
>Be aware, that GCC version of **HDF5 1.8.11** has serious performance issues, since it's compiled with -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more informations, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>
>Be aware that the GCC version of **HDF5 1.8.11** has serious performance issues, since it is compiled with the -O0 optimization flag. This version is provided only for testing of code compiled by GCC and IS NOT recommended for production computations. For more information, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>![external](../../../img/external.png)
GCC versions of **HDF5 1.8.13** are not affected by the bug; they are compiled with -O3 optimizations and are recommended for production computations.
Example
......@@ -84,6 +84,6 @@ Load modules and compile:
$ mpicc hdf5test.c -o hdf5test.x -Wl,-rpath=$LIBRARY_PATH $HDF5_INC $HDF5_SHLIB
```
Run the example as [Intel MPI program](../anselm-cluster-documentation/software/mpi-1/running-mpich2.html).
Run the example as [Intel MPI program](../anselm-cluster-documentation/software/mpi/running-mpich2/).
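As an illustration only (the module names and launch line are assumptions; the linked Intel MPI page is authoritative), a parallel run might look like this:

```bash
# Hypothetical sketch: run the compiled test on 32 MPI ranks inside a PBS allocation.
# Module names are assumptions -- load the same HDF5/MPI modules used for compilation.
$ module load intel impi hdf5-parallel
$ mpirun -n 32 ./hdf5test.x
```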
For further informations, please see the website: <http://www.hdfgroup.org/HDF5/>
\ No newline at end of file
For further information, please see the website: <http://www.hdfgroup.org/HDF5/>![external](../../../img/external.png)
\ No newline at end of file
......@@ -11,7 +11,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
$ module load mkl
```
Read more at the [Intel MKL](../intel-suite/intel-mkl.html) page.
Read more at the [Intel MKL](../intel-suite/intel-mkl/) page.
Intel Integrated Performance Primitives
---------------------------------------
......@@ -21,7 +21,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai
$ module load ipp
```
Read more at the [Intel IPP](../intel-suite/intel-integrated-performance-primitives.html) page.
Read more at the [Intel IPP](../intel-suite/intel-integrated-performance-primitives/) page.
Intel Threading Building Blocks
-------------------------------
......@@ -31,4 +31,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable
$ module load tbb
```
Read more at the [Intel TBB](../intel-suite/intel-tbb.html) page.
\ No newline at end of file
Read more at the [Intel TBB](../intel-suite/intel-tbb/) page.
\ No newline at end of file
......@@ -69,8 +69,8 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
**export OMP_NUM_THREADS=16**
See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/).
See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/)![external](../../../img/external.png).
References
----------
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. al, [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)
\ No newline at end of file
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et al., [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)![external](../../../img/external.png)
\ No newline at end of file
......@@ -9,13 +9,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
Resources
---------
- [project webpage](http://www.mcs.anl.gov/petsc/)
- [documentation](http://www.mcs.anl.gov/petsc/documentation/)
- [PETSc Users
Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
- [index of all manual
pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
- [project webpage](http://www.mcs.anl.gov/petsc/)![external](../../../img/external.png)
- [documentation](http://www.mcs.anl.gov/petsc/documentation/)![external](../../../img/external.png)
- [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)![external](../../../img/external.png)
- [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)![external](../../../img/external.png)
- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY)![external](../../../img/external.png), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I)![external](../../../img/external.png), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw)![external](../../../img/external.png), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY)![external](../../../img/external.png), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)![external](../../../img/external.png)
Modules
-------
......@@ -27,13 +25,13 @@ You can start using PETSc on Anselm by loading the PETSc module. Module names ob
module load petsc/3.4.4-icc-impi-mkl-opt
```
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html).
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html)![external](../../../img/external.png).
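For instance, assuming the module naming pattern above, loading the threaded debug build could look like the following sketch (the exact module name is an assumption):

```bash
# Hypothetical module name following the pattern petsc/3.4.4-icc-impi-mkl-<variant>
$ module load petsc/3.4.4-icc-impi-mkl-threads-dbg
```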
External libraries
------------------
PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently satisfied with Intel MPI and Intel MKL in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below a list of libraries currently included in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html)![external](../../../img/external.png), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below for a list of libraries currently included in Anselm `petsc` modules.
All these libraries can also be used on their own, without PETSc. Their static or shared program libraries are available in
`$PETSC_DIR/$PETSC_ARCH/lib` and header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to a specific PETSc instance based on the petsc module loaded.
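As a rough sketch of using these paths directly (the source file name and the set of -l flags are assumptions and depend on the library you actually link):

```bash
# Hypothetical compile/link line against the PETSc-provided library tree.
# myapp.c and the chosen library are placeholders.
$ module load petsc/3.4.4-icc-impi-mkl-opt
$ mpicc myapp.c -o myapp.x \
    -I$PETSC_DIR/$PETSC_ARCH/include \
    -L$PETSC_DIR/$PETSC_ARCH/lib -Wl,-rpath=$PETSC_DIR/$PETSC_ARCH/lib -lpetsc
```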
......@@ -41,24 +39,24 @@ All these libraries can be used also alone, without PETSc. Their static or share
### Libraries linked to PETSc on Anselm (as of 11 April 2015)
- dense linear algebra
- [Elemental](http://libelemental.org/)
- [Elemental](http://libelemental.org/)![external](../../../img/external.png)
- sparse linear system solvers
- [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
- [MUMPS](http://mumps.enseeiht.fr/)
- [PaStiX](http://pastix.gforge.inria.fr/)
- [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
- [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
- [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
- [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)![external](../../../img/external.png)
- [MUMPS](http://mumps.enseeiht.fr/)![external](../../../img/external.png)
- [PaStiX](http://pastix.gforge.inria.fr/)![external](../../../img/external.png)
- [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)![external](../../../img/external.png)
- [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)![external](../../../img/external.png)
- [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)![external](../../../img/external.png)
- input/output
- [ExodusII](http://sourceforge.net/projects/exodusii/)
- [HDF5](http://www.hdfgroup.org/HDF5/)
- [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
- [ExodusII](http://sourceforge.net/projects/exodusii/)![external](../../../img/external.png)
- [HDF5](http://www.hdfgroup.org/HDF5/)![external](../../../img/external.png)
- [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)![external](../../../img/external.png)
- partitioning
- [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
- [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
- [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
- [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
- [Chaco](http://www.cs.sandia.gov/CRF/chac.html)![external](../../../img/external.png)
- [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)![external](../../../img/external.png)
- [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)![external](../../../img/external.png)
- [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)![external](../../../img/external.png)
- preconditioners & multigrid
- [Hypre](http://acts.nersc.gov/hypre/)
- [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
- [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
\ No newline at end of file
- [Hypre](http://acts.nersc.gov/hypre/)![external](../../../img/external.png)
- [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)![external](../../../img/external.png)
- [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)![external](../../../img/external.png)
\ No newline at end of file
......@@ -19,7 +19,7 @@ Current Trilinos installation on ANSELM contains (among others) the following ma
- **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
- **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov.](http://trilinos.sandia.gov)
For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov](http://trilinos.sandia.gov)![external](../../../img/external.png).
### Installed version
......@@ -33,7 +33,7 @@ First, load the appropriate module:
$ module load trilinos
```
For the compilation of CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt>
For the compilation of a CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt>![external](../../../img/external.png)
For compiling using simple makefiles, Trilinos provides the Makefile.export system, which allows users to include important Trilinos variables directly into their makefiles. This can be done simply by inserting the following line into the makefile:
......@@ -47,4 +47,4 @@ or
include Makefile.export.<package>
```
if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>.
\ No newline at end of file
if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>![external](../../../img/external.png).
\ No newline at end of file
......@@ -3,9 +3,9 @@ Diagnostic component (TEAM)
### Access
TEAM is available at the following address: <http://omics.it4i.cz/team/>
TEAM is available at the following address: <http://omics.it4i.cz/team/>![external](../../../img/external.png)
>The address is accessible only via [VPN. ](../../accessing-the-cluster/vpn-access.html)
>The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
### Diagnostic component (TEAM)
......
......@@ -5,7 +5,7 @@ Priorization component (BiERApp)
BiERApp is available at the following address: <http://omics.it4i.cz/bierapp/>
>The address is accessible onlyvia [VPN. ](../../accessing-the-cluster/vpn-access.html)
>The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
### BiERApp
......
Operating System
===============
## The operating system deployed on ANSELM
The operating system on Anselm is Linux - bullx Linux Server release 6.3.
The operating system on Anselm is Linux - **bullx Linux Server release 6.X**
bullx Linux is based on Red Hat Enterprise Linux. bullx Linux is a Linux distribution provided by Bull and dedicated to HPC applications.
......@@ -15,7 +15,7 @@ The Salomon cluster is accessed by SSH protocol via login nodes login1, login2,
|login1.salomon.it4i.cz|22|ssh|login1|
|login2.salomon.it4i.cz|22|ssh|login2|
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
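A typical login command might look like the following sketch (the username and key path are placeholders):

```bash
# Hypothetical example: log in to Salomon with your private key.
local $ ssh -i /path/to/id_rsa username@salomon.it4i.cz
```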
>Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
......@@ -35,7 +35,7 @@ If you see warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to
local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty/).
After logging in, you will see the command prompt:
......@@ -55,11 +55,11 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.salomon ~]$
```
>The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage.html).
>The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage/).
Data Transfer
-------------
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols.
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy)![external](../../img/external.png) and sftp protocols.
In case large volumes of data are transferred, use dedicated data mover nodes cedge[1-3].salomon.it4i.cz for increased performance.
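For illustration, a single file could be copied to the cluster like this (the file name and target directory are placeholders):

```bash
# Hypothetical example: upload a file over scp using your private key.
local $ scp -i /path/to/id_rsa myfile username@salomon.it4i.cz:directory/
```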
......@@ -73,7 +73,7 @@ HTML commented section #1 (removed cedge servers from the table)
|login3.salomon.it4i.cz|22|scp, sftp|
|login4.salomon.it4i.cz|22|scp, sftp|
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
HTML commented section #2 (ssh transfer performance data need to be verified)
......@@ -93,7 +93,7 @@ or
local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
```
Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs)
A very convenient way to transfer files in and out of the Salomon computer is via the FUSE filesystem [sshfs](http://linux.die.net/man/1/sshfs)![external](../../img/external.png).
```bash
local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
......@@ -109,6 +109,6 @@ $ man scp
$ man sshfs
```
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc.
On Windows, use [WinSCP client](http://winscp.net/eng/download.php)![external](../../img/external.png) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/)![external](../../img/external.png) provides a way to mount the Salomon filesystems directly as an external disc.
More information about the shared file systems is available [here](storage/storage.html).
\ No newline at end of file
More information about the shared file systems is available [here](storage/storage/).
\ No newline at end of file
......@@ -47,7 +47,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Salomon Cluster.
First, establish the remote port forwarding form the login node, as [described above](outgoing-connections.html#port-forwarding-from-login-nodes).
First, establish the remote port forwarding from the login node, as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
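The exact command is not shown in this hunk; a hypothetical sketch, assuming port 6000 from the note above and login1 as the target login node, might be:

```bash
# Hypothetical sketch only -- forward port 6000 of the compute node to port 6000 of login1.
$ ssh -TN -f -R 6000:localhost:6000 login1
```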
......@@ -69,12 +69,12 @@ To establish local proxy server on your workstation, install and run SOCKS proxy
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/)![external](../../img/external.png) server.
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections.html#port-forwarding-from-login-nodes).
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
```bash
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding  to access the [proxy server from compute nodes](outgoing-connections.html#port-forwarding-from-compute-nodes) as well .
\ No newline at end of file
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well.
\ No newline at end of file
......@@ -18,7 +18,7 @@ It is impossible to connect to VPN from other operating systems.
VPN client installation
------------------------------------
You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn.it4i.cz/user>
You can install the VPN client from the web interface after a successful login with LDAP credentials at <https://vpn.it4i.cz/user>![external](../../img/external.png).
![](vpn_web_login.png)
......@@ -47,7 +47,7 @@ Working with VPN client
You can use the graphical user interface or the command line interface to run the VPN client on all supported operating systems. We suggest using the GUI.
Before the first login to VPN, you have to fill URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** into the text field.
Before the first login to the VPN, you have to fill in the URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)![external](../../img/external.png)** in the text field.
![](vpn_contacting_https_cluster.png)
......
......@@ -30,7 +30,7 @@ fi
In order to configure your shell for running a particular application on Salomon, we use the Module package interface.
Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild")![external](../img/external.png). The modules are divided into the following structure:
```bash
base: Default module class
......@@ -120,5 +120,4 @@ On Salomon, we have currently following toolchains installed:
|gompi|GCC, OpenMPI|
|goolf|BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK|
|iompi|OpenMPI, icc, ifort|
|iccifort|icc, ifort|
|iccifort|icc, ifort|
\ No newline at end of file
......@@ -5,7 +5,7 @@ Introduction
------------
The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
[More about schematic representation of the Salomon cluster compute nodes IB topology](../network/ib-single-plane-topology.md).
[More about schematic representation of the Salomon cluster compute nodes IB topology](../network/ib-single-plane-topology/).
![Salomon](salomon-2)
......@@ -19,7 +19,7 @@ General information
|Primary purpose|High Performance Computing|
|Architecture of compute nodes|x86-64|
|Operating system|CentOS 6.7 Linux|
|[**Compute nodes**](../compute-nodes.md)||
|[**Compute nodes**](../compute-nodes/)||
|Totally|1008|
|Processor|2x Intel Xeon E5-2680v3, 2.5GHz, 12cores|
|RAM|128GB, 5.3GB per core, DDR4@2133 MHz|
......@@ -39,7 +39,7 @@ Compute nodes
|w/o accelerator|576|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|-|
|MIC accelerated|432|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|2x Intel Xeon Phi 7120P, 61cores, 16GB RAM|
For more details please refer to the [Compute nodes](../compute-nodes.md).
For more details please refer to the [Compute nodes](../compute-nodes/).
Remote visualization nodes
--------------------------
......
Introduction
============
Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128GB RAM. Nodes are interconnected by 7D Enhanced hypercube Infiniband network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview-1/hardware-overview.html).
Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128GB RAM. Nodes are interconnected by 7D Enhanced hypercube Infiniband network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
The cluster runs the [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)![external](../img/external.png) operating system, which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)![external](../img/external.png).
**Water-cooled Compute Nodes With MIC Accelerator**
......
7D Enhanced Hypercube
=====================
[More about Job submission - Placement by IB switch / Hypercube dimension.](../resource-allocation-and-job-execution/job-submission-and-execution.md)
[More about Job submission - Placement by IB switch / Hypercube dimension.](../resource-allocation-and-job-execution/job-submission-and-execution/)
Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
......@@ -15,7 +15,7 @@ Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
|6D|ehc_6d|
|7D|ehc_7d|
[Schematic representation of the Salomon cluster IB single-plain topology represents hypercube dimension 0](ib-single-plane-topology.md).
[Schematic representation of the Salomon cluster IB single-plane topology represents hypercube dimension 0](ib-single-plane-topology/).
### 7D Enhanced Hypercube {#d-enhanced-hypercube}
......
......@@ -17,7 +17,7 @@ Each colour in each physical IRU represents one dual-switch ASIC switch.
### IB single-plane topology - Accelerated nodes
Each of the 3 inter-connected D racks are equivalent to one half of Mcell rack. 18x D rack with MIC accelerated nodes [r21-r38] are equivalent to 3 Mcell racks as shown in a diagram [7D Enhanced Hypercube](7d-enhanced-hypercube.md).
Each of the 3 inter-connected D racks is equivalent to one half of an Mcell rack. The 18 D racks with MIC accelerated nodes [r21-r38] are equivalent to 3 Mcell racks, as shown in the [7D Enhanced Hypercube](7d-enhanced-hypercube/) diagram.
As shown in a diagram ![IB Topology](Salomon_IB_topology.png):
......
Network
=======
All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)
network. Only [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network may be used to transfer user data.
All compute and login nodes of Salomon are interconnected by a 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network and by a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)![external](../../img/external.png) network. Only the [Infiniband](http://en.wikipedia.org/wiki/InfiniBand)![external](../../img/external.png) network may be used to transfer user data.
Infiniband Network
------------------
All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube.md).
All compute and login nodes of Salomon are interconnected by a 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand)![external](../../img/external.png) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/).
Read more about schematic representation of the Salomon cluster [IB single-plain topology](ib-single-plane-topology.md)
([hypercube dimension](7d-enhanced-hypercube.md) 0).
Read more about the schematic representation of the Salomon cluster [IB single-plane topology](ib-single-plane-topology/) ([hypercube dimension](7d-enhanced-hypercube/) 0).
The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.17.0.0 (mask 255.255.224.0). The MPI may be used to establish native Infiniband connection among the nodes.
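To check the ib0 interface and its assigned address on an allocated node, one possible sketch is:

```bash
# Show the InfiniBand IPoIB interface and its 10.17.x.x address on a compute node.
$ ip addr show ib0
```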
......
......@@ -9,13 +9,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
>Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
- Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing.md#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
- Use [GNU parallel](capacity-computing.md#gnu-parallel) when running single core jobs
- Combine[GNU parallel with Job arrays](capacity-computing.md#combining-job-arrays-and-gnu-parallel) when running huge number of single core jobs
- Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
- Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
- Combine [GNU parallel with Job arrays](capacity-computing/#combining-job-arrays-and-gnu-parallel) when running a huge number of single core jobs
Policy
------
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing.md#job-arrays).
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
Job arrays
......@@ -75,7 +75,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
### Submit the job array
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing.html#array_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this:
```bash
$ qsub -N JOBNAME -J 1-900 jobscript
......@@ -146,7 +146,7 @@ Display status information for all user's subjobs.
$ qstat -u $USER -tJ
```
Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation.html).
Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/).
GNU parallel
----------------
......@@ -207,7 +207,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The job
### Submit the job
To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing.html#gp_example) may be submitted like this:
To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing/#gp_example) may be submitted like this:
```bash
$ qsub -N JOBNAME jobscript
......@@ -288,7 +288,7 @@ When deciding this values, think about following guiding rules :
### Submit the job array
To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing.html#combined_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing/#combined_example) may be submitted like this:
```bash
$ qsub -N JOBNAME -J 1-992:32 jobscript
......@@ -301,7 +301,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Examples
--------
Download the examples in [capacity.zip](capacity-computing-example), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for running production jobs.
Unzip the archive in an empty directory on Anselm and follow the instructions in the README file.
......
Resource Allocation and Job Execution
=====================================
To run a [job](job-submission-and-execution.html), [computational resources](resources-allocation-policy.html) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which efficiently distributes workloads across the supercomputer. Extensive informations about PBS Pro can be found in the [official documentation here](../../pbspro-documentation.html), especially in the [PBS Pro User's Guide](https://docs.it4i.cz/pbspro-documentation/pbspro-users-guide).
To run a [job](job-submission-and-execution/), [computational resources](resources-allocation-policy/) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which efficiently distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [official documentation here](../../pbspro-documentation/), especially in the [PBS Pro User's Guide](../../pbspro-documentation/).
Resources Allocation Policy
---------------------------
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. [The Fairshare](job-priority.html) at Salomon ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following queues are available to Anselm users:
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. [The Fairshare](job-priority/) at Salomon ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Salomon users:
- **qexp**, the Express queue
- **qprod**, the Production queue
......@@ -16,7 +16,7 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
>Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
Read more on the [Resource Allocation Policy](resources-allocation-policy.html) page.
Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
Job submission and execution
----------------------------
......@@ -24,4 +24,4 @@ Job submission and execution
The qsub command submits the job into the queue. It creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 24 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
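A hypothetical submission requesting two full nodes might look like this (the project ID, walltime and jobscript name are placeholders):

```bash
# Sketch: request 2 full nodes (2 x 24 cores) in the production queue for 4 hours.
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=24 -l walltime=04:00:00 ./myjob
```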
Read more on the [Job submission and execution](job-submission-and-execution.html) page.
\ No newline at end of file
Read more on the [Job submission and execution](job-submission-and-execution/) page.
\ No newline at end of file
......@@ -17,7 +17,7 @@ Queue priority is priority of queue where job is queued before execution.
Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority.
Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues>
Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues>![external](../../img/external.png)
### Fairshare priority
......@@ -33,7 +33,7 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb
Usage counts allocated core-hours (ncpus*walltime). Usage is decayed, or cut in half periodically, at an interval of 168 hours (one week). Jobs queued in the qexp queue are not counted into the project's usage.
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>![external](../../img/external.png).
The calculated fairshare priority can also be seen as the Resource_List.fairshare attribute of a job.
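For example, one possible way to inspect it for a queued or running job (the job ID is a placeholder):

```bash
# Sketch: print the fairshare attribute reported by PBS for a given job.
$ qstat -f 12345 | grep fairshare
```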
......@@ -67,4 +67,4 @@ Specifying more accurate walltime enables better schedulling, better execution t
### Job placement
Job [placement can be controlled by flags during submission](job-submission-and-execution.html#job_placement).
\ No newline at end of file
Job [placement can be controlled by flags during submission](job-submission-and-execution/#job_placement).
\ No newline at end of file
......@@ -74,7 +74,7 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
Per NUMA node allocation.
Jobs are isolated by cpusets.
The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS  the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy.html)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of  the NUMA nodes allocated to itself.
The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In PBS, the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on the UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to itself.
```bash
 $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
......@@ -90,7 +90,7 @@ In this example, we allocate 2000GB of memory on the UV2000 for 72 hours. By req
### Useful tricks
All qsub options may be [saved directly into the jobscript](job-submission-and-execution.html#PBSsaved). In such a case, no options to qsub are needed.
All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed.
```bash
$ qsub ./myjob
......@@ -139,11 +139,11 @@ Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24 -l place=group=ehc_1d -I
```
In this example, we allocate 4 nodes, 24 cores, selecting only the nodes with [hypercube dimension](../network-1/7d-enhanced-hypercube.html) 1.
In this example, we allocate 4 nodes, 24 cores, selecting only the nodes with [hypercube dimension](../network/7d-enhanced-hypercube/) 1.
### Placement by IB switch
Groups of computational nodes are connected to chassis integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband  network](../network-1.html) . Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
Groups of computational nodes are connected to chassis-integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network/). Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
There are at most 9 nodes sharing the same Infiniband switch.
......@@ -391,7 +391,7 @@ exit
In this example, some directory on /home holds the input file input and the executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where the qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
>Consider preloading inputs and executables onto [shared scratch](../storage.html) before the calculation starts.
>Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on shared /scratch before the job submission and retrieve the outputs manually after all calculations are finished.
......@@ -428,7 +428,7 @@ HTML commented section #2 (examples need to be reworked)
>The local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful: use of the RAM disk filesystem is at the expense of operational memory.
Example jobscript for single node calculation, using [local scratch](../storage.html) on the node:
Example jobscript for single node calculation, using [local scratch](../storage/storage/) on the node:
```bash
#!/bin/bash
......
......@@ -3,7 +3,7 @@ Resources Allocation Policy
Resources Allocation Policy
---------------------------
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority.md) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. The Fairshare at Salomon ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
|queue |active project |project resources |nodes|min ncpus*|priority|authorization|walltime |
|---|---|---|---|---|---|---|---|
......@@ -15,7 +15,7 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
|**qfree** Free resource queue|yes |none required |752 nodes, max 86 per job |24 |-1024 |no |12 / 12h |
|**qviz** Visualization queue |yes |none required |2 (with NVIDIA Quadro K5000) |4 |150 |no |1 / 2h |
>**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy.html#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue.
>**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed after a request for this queue.
- **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
- **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
......@@ -25,19 +25,19 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
- **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
- **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
>To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution.md).
>To access a node with a Xeon Phi co-processor, the user needs to specify that in the [job submission select statement](job-submission-and-execution/).
### Notes
The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution.md).
The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution/).
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however it cannot be changed for a running job (state R).
Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>.
Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>![external](../../img/external.png).
### Queue status
>Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
>Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)![external](../../img/external.png)
![RSWEB Salomon](rswebsalomon.png "RSWEB Salomon")
......@@ -111,7 +111,7 @@ Resources Accounting Policy
### The Core-Hour
The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (24 cores) for 1 hour accounts to 24 core-hours. See example in the [Job submission and execution](job-submission-and-execution.md) section.
The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (24 cores) for 1 hour accounts for 24 core-hours. See the example in the [Job submission and execution](job-submission-and-execution/) section.
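For example, a job that blocks 4 full nodes (4 x 24 = 96 cores) for 8 hours of wall clock time is accounted 96 x 8 = 768 core-hours, regardless of how intensively the cores are actually used.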
### Check consumed resources
......
ANSYS CFX
=========
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)![external](../../../img/external.png)
software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
......@@ -49,9 +49,9 @@ echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
The header of the PBS file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends requesting resources via the keywords nodes and ppn. These keywords allow addressing directly the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before sending the PBS job into the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common CFX .def file, which is passed to the CFX solver via the -def parameter.
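A possible preparation sketch (all paths and the jobscript name are placeholders):

```bash
# Hypothetical sketch: prepare a working directory, stage the CFX definition file and submit.
$ mkdir -p /scratch/$USER/cfx_case && cd /scratch/$USER/cfx_case
$ cp /path/to/input.def .
$ qsub cfx.pbs
```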
The **license** should be selected by the -P parameter (capital **P**). The licensed products are the following: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics, **Commercial**).
[More about licensing here](licensing.md)
\ No newline at end of file
[More about licensing here](licensing/)
\ No newline at end of file