Commit 4e5344d9 authored by David Hrbáč's avatar David Hrbáč
Merge remote-tracking branch 'origin/master' into hot_fix

parents 93526b9f ed9dc9cc
@@ -55,9 +55,9 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
| Node type | Count | Range | Memory | Cores | [Access](resources-allocation-policy/) |
| -------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
| Nodes without accelerator | 180 | cn[1-180] | 64GB | 16 @ 2.4GHz | qexp, qprod, qlong, qfree |
-| Nodes with GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qgpu, qprod |
-| Nodes with MIC accelerator | 4 | cn[204-207] | 96GB | 16 @ 2.3GHz | qmic, qprod |
-| Fat compute nodes | 2 | cn[208-209] | 512GB | 16 @ 2.4GHz | qfat, qprod |
+| Nodes with GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qgpu, qexp |
+| Nodes with MIC accelerator | 4 | cn[204-207] | 96GB | 16 @ 2.3GHz | qmic, qexp |
+| Fat compute nodes | 2 | cn[208-209] | 512GB | 16 @ 2.4GHz | qfat, qexp |
## Processor Architecture
......
@@ -12,7 +12,8 @@ The resources are allocated to the job in a fair-share fashion, subject to const
| qexp | no | none required | 2 reserved, 31 total including MIC, GPU and FAT nodes | 1 | 150 | no | 1 h |
| qprod | yes | 0 | 178 nodes w/o accelerator | 16 | 0 | no | 24/48 h |
| qlong | yes | 0 | 60 nodes w/o accelerator | 16 | 0 | no | 72/144 h |
-| qnvidia, qmic, qfat | yes | 0 | 23 total qnvidia, 4 total qmic, 2 total qfat | 16 | 200 | yes | 24/48 h |
+| qnvidia, qmic | yes | 0 | 23 nvidia nodes, 4 mic nodes | 16 | 200 | yes | 24/48 h |
+| qfat | yes | 0 | 2 fat nodes | 16 | 200 | yes | 24/144 h |
| qfree | yes | none required | 178 w/o accelerator | 16 | -1024 | no | 12 h |
!!! note
......
@@ -156,39 +156,35 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
### Disk Usage and Quota Commands
-User quotas on the file systems can be checked and reviewed using following command:
+Disk usage and user quotas can be checked and reviewed using the following command:
```console
-$ lfs quota dir
+$ it4i-disk-usage
```
-Example for Lustre HOME directory:
-```console
-$ lfs quota /home
-Disk quotas for user user001 (uid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/home 300096 0 250000000 - 2102 0 500000 -
-Disk quotas for group user001 (gid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/home 300096 0 0 - 2102 0 0 -
-```
-In this example, we view current quota size limit of 250GB and 300MB currently used by user001.
-Example for Lustre SCRATCH directory:
+Example:
```console
-$ lfs quota /scratch
-Disk quotas for user user001 (uid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/scratch 8 0 100000000000 - 3 0 0 -
-Disk quotas for group user001 (gid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/scratch 8 0 0 - 3 0 0 -
+$ it4i-disk-usage -h
+# Using human-readable format
+# Using power of 1024 for space
+# Using power of 1000 for entries
+Filesystem: /home
+Space used: 112G
+Space limit: 238G
+Entries: 15k
+Entries limit: 500k
+Filesystem: /scratch
+Space used: 0
+Space limit: 93T
+Entries: 0
+Entries limit: 0
```
-In this example, we view current quota size limit of 100TB and 8KB currently used by user001.
+In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that you are allowed to create.
To better understand where exactly the space is used, you can use the following command to find out.
......
# OpenCoarrays
## Introduction
Coarray Fortran (CAF) is an extension of the Fortran language offering a simple interface for parallel processing and memory sharing.
The advantage is that only small changes are required to convert existing Fortran code to support robust and potentially efficient parallelism.
A CAF program is interpreted as if it were replicated a number of times and all copies were executed asynchronously.
The number of copies is decided at execution time. Each copy (called an *image*) has its own private variables.
The variable syntax of the Fortran language is extended with indexes in square brackets (called a *co-dimension*), which represent a reference to data distributed across images.
By default, CAF uses the Message Passing Interface (MPI) for lower-level communication, so there are some similarities with MPI.
Read more at <http://www.opencoarrays.org/>
## Coarray Basics
### Indexing of coarray images
Indexing of individual images can be shown on a simple *Hello World* program:
```fortran
program hello_world
   implicit none
   print *, 'Hello world from image ', this_image(), 'of', num_images()
end program hello_world
```
* num_images() - returns the total number of images
* this_image() - returns the index of this image, numbered from 1 to num_images()
### Co-dimension variables declaration
Coarray variables can be declared with the **codimension[*]** attribute or by adding a trailing index **[*]** after the variable name.
Notice that the **\*** character always has to be inside the square brackets.
```fortran
integer, codimension[*] :: scalar
integer :: scalar[*]
real, dimension(64), codimension[*] :: vector
real :: vector(64)[*]
```
### Images synchronization
Because each image runs on its own, image synchronization is needed to ensure that all altered data has been distributed to all images.
Synchronization can be done across all images or only between selected images. Be aware that selective synchronization can lead to problems such as race conditions or deadlocks.
Example program:
```fortran
program synchronization_test
   implicit none
   integer :: i          ! Local variable
   integer :: numbers[*] ! Scalar coarray

   ! Generate a random number on image 1
   if (this_image() == 1) then
      numbers = floor(rand(1) * 1000)
      ! Distribute information to other images
      do i = 2, num_images()
         numbers[i] = numbers
      end do
   end if

   sync all ! Barrier to synchronize all images

   print *, 'The random number is', numbers
end program synchronization_test
```
* sync all - synchronize all images with each other
* sync images(*) - synchronize this image with all other images
* sync images(*index*) - synchronize this image with the image *index*
!!! note
    **numbers** is the local variable, while **numbers[*index*]** accesses the variable on the image *index*.
    **numbers[this_image()]** is the same as **numbers**.
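The local/coindexed distinction can be illustrated with a short sketch (a hypothetical program, not part of the original examples), where image 1 reads the coarray copy owned by the last image:

```fortran
program coindex_demo
   implicit none
   integer :: counter[*]   ! Scalar coarray: one private copy per image

   counter = this_image()  ! Each image writes only its own copy
   sync all                ! Barrier: make sure every copy has been written

   if (this_image() == 1) then
      ! Image 1 reads the copy owned by the last image via coindexing
      print *, 'Last image holds', counter[num_images()]
   end if
end program coindex_demo
```

Without the *sync all* barrier, image 1 could read the remote copy before the owning image has written it.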
## Compile and run
Currently, version 1.8.10 compiled with the OpenMPI 1.10.7 library is installed on the cluster. The OpenCoarrays module can be loaded as follows:
```console
$ ml OpenCoarrays/1.8.10-GCC-6.3.0-2.27
```
### Compile CAF program
The preferred method for compiling a CAF program is by invoking the *caf* compiler wrapper.
The above-mentioned *Hello World* program can be compiled as follows:
```console
$ caf hello_world.f90 -o hello_world.x
```
!!! warning
    The input file extension has to be **.f90** or **.F90** for the source to be interpreted as *Fortran 90*.
    If the input file extension is **.f** or **.F**, the source code will be interpreted as *Fortran 77*.
Another method for compiling is by invoking the *mpif90* compiler wrapper directly:
```console
$ mpif90 hello_world.f90 -o hello_world.x -fcoarray=lib -lcaf_mpi
```
### Run CAF program
A CAF program can be run by invoking the *cafrun* wrapper or directly by *mpiexec*:
```console
$ cafrun -np 4 ./hello_world.x
Hello world from image 1 of 4
Hello world from image 2 of 4
Hello world from image 3 of 4
Hello world from image 4 of 4
$ mpiexec -np 4 ./synchronization_test.x
The random number is 242
The random number is 242
The random number is 242
The random number is 242
```
where **-np 4** is the number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
For more information about running CAF programs, see [Running OpenMPI](../mpi/Running_OpenMPI.md).
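On PBS-based clusters such as this one, production CAF runs are normally submitted through the scheduler. The following job-script sketch is an illustration only: the project ID, queue name, node counts, and resource-selection syntax are assumptions based on the queue tables above, not taken from this page.

```bash
#!/bin/bash
# Hypothetical PBS job script for a CAF run; adjust to your project and cluster.
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=2:ncpus=16:mpiprocs=16

# Load the OpenCoarrays module (version as listed above)
ml OpenCoarrays/1.8.10-GCC-6.3.0-2.27

cd $PBS_O_WORKDIR

# Run 32 images across 2 nodes (16 per node)
cafrun -np 32 ./hello_world.x
```

The number of images passed to **-np** should match the total number of MPI processes requested from the scheduler.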
@@ -9,9 +9,9 @@ All login and compute nodes may access same data on shared file systems. Compute
## Policy (In a Nutshell)
!!! note
\* Use [HOME](#home) for your most valuable data and programs.
\* Use [WORK](#work) for your large project files.
\* Use [TEMP](#temp) for large scratch data.
!!! warning
Do not use for [archiving](#archiving)!
@@ -110,41 +110,52 @@ Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStriping
## Disk Usage and Quota Commands
-User quotas on the Lustre file systems (SCRATCH) can be checked and reviewed using following command:
+Disk usage and user quotas can be checked and reviewed using the following command:
```console
-$ lfs quota dir
+$ it4i-disk-usage
```
-Example for Lustre SCRATCH directory:
-```console
-$ lfs quota /scratch
-Disk quotas for user user001 (uid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/scratch 8 0 100000000000 * 3 0 0 -
-Disk quotas for group user001 (gid 1234):
-Filesystem kbytes quota limit grace files quota limit grace
-/scratch 8 0 0 * 3 0 0 -
-```
-In this example, we view current quota size limit of 100TB and 8KB currently used by user001.
-HOME directory is mounted via NFS, so a different command must be used to obtain quota information:
+Example:
```console
-$ quota
+$ it4i-disk-usage -h
+# Using human-readable format
+# Using power of 1024 for space
+# Using power of 1000 for entries
+Filesystem: /home
+Space used: 110G
+Space limit: 238G
+Entries: 40k
+Entries limit: 500k
+# based on filesystem quota
+Filesystem: /scratch
+Space used: 377G
+Space limit: 93T
+Entries: 14k
+Entries limit: 0
+# based on Lustre quota
+Filesystem: /scratch
+Space used: 377G
+Entries: 14k
+# based on Robinhood
+Filesystem: /scratch/work
+Space used: 377G
+Entries: 14k
+# based on Robinhood
+Filesystem: /scratch/temp
+Space used: 12K
+Entries: 6
+# based on Robinhood
```
-Example output:
-```console
-$ quota
-Disk quotas for user vop999 (uid 1025):
-Filesystem blocks quota limit grace files quota limit grace
-home-nfs-ib.salomon.it4i.cz:/home
-28 0 250000000 10 0 500000
-```
+In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that you are allowed to create.
To better understand where exactly the space is used, you can use the following command to find out.
......
@@ -117,7 +117,8 @@ pages:
- Introduction: salomon/software/numerical-languages/introduction.md
- Matlab: salomon/software/numerical-languages/matlab.md
- Octave: salomon/software/numerical-languages/octave.md
    - R: salomon/software/numerical-languages/r.md
+    - OpenCoarrays: salomon/software/numerical-languages/opencoarrays.md
- Operating System: salomon/software/operating-system.md
- ParaView: salomon/software/paraview.md
- Anselm Software:
......