Commit 52cff47e authored by Lukáš Krupčík

modified: anselm/network.mdx

	modified:   anselm/storage.mdx
	modified:   barbora/storage.mdx
	modified:   cloud/it4i-cloud.mdx
	modified:   cs/guides/amd.mdx
	modified:   cs/guides/grace.mdx
	modified:   cs/guides/hm_management.mdx
	modified:   cs/guides/horizon.mdx
	modified:   cs/guides/power10.mdx
	modified:   cs/guides/xilinx.mdx
	modified:   cs/job-scheduling.mdx
	modified:   dgx2/accessing.mdx
	modified:   dgx2/job_execution.mdx
	modified:   dice.mdx
	modified:   einfracz-migration.mdx
	modified:   environment-and-modules.mdx
	modified:   general/access/einfracz-account.mdx
	modified:   general/access/project-access.mdx
	modified:   general/accessing-the-clusters/graphical-user-interface/ood.mdx
	modified:   general/accessing-the-clusters/graphical-user-interface/vnc.mdx
	modified:   general/accessing-the-clusters/graphical-user-interface/x-window-system.mdx
	modified:   general/accessing-the-clusters/graphical-user-interface/xorg.mdx
	modified:   general/accessing-the-clusters/shell-access-and-data-transfer/putty.mdx
	modified:   general/accessing-the-clusters/shell-access-and-data-transfer/ssh-key-management.mdx
	modified:   general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.mdx
	modified:   general/accessing-the-clusters/vpn-access.mdx
	modified:   general/applying-for-resources.mdx
	modified:   general/barbora-partitions.mdx
	modified:   general/capacity-computing.mdx
	modified:   general/hyperqueue.mdx
	modified:   general/job-arrays.mdx
	modified:   general/job-priority.mdx
	modified:   general/job-submission-and-execution.mdx
	modified:   general/karolina-mpi.mdx
	modified:   general/karolina-partitions.mdx
	modified:   general/karolina-slurm.mdx
	modified:   general/obtaining-login-credentials/obtaining-login-credentials.mdx
	modified:   general/pbs-job-submission-and-execution.mdx
	modified:   general/resource-accounting.mdx
	modified:   general/resource_allocation_and_job_execution.mdx
	modified:   general/resources-allocation-policy.mdx
	modified:   general/shell-and-data-access.mdx
	modified:   general/slurm-job-submission-and-execution.mdx
	modified:   general/tools/cicd.mdx
	modified:   general/tools/portal-clients.mdx
	modified:   index.mdx
	modified:   job-features.mdx
	modified:   karolina/introduction.mdx
	modified:   karolina/storage.mdx
	modified:   lumi/openfoam.mdx
	modified:   prace.mdx
	modified:   salomon/software/numerical-libraries/Clp.mdx
	modified:   salomon/storage.mdx
	modified:   software/bio/omics-master/diagnostic-component-team.mdx
	modified:   software/bio/omics-master/priorization-component-bierapp.mdx
	modified:   software/cae/comsol/comsol-multiphysics.mdx
	modified:   software/chemistry/gaussian.mdx
	modified:   software/chemistry/molpro.mdx
	modified:   software/chemistry/orca.mdx
	modified:   software/chemistry/vasp.mdx
	modified:   software/data-science/dask.mdx
	modified:   software/debuggers/allinea-ddt.mdx
	modified:   software/debuggers/cube.mdx
	modified:   software/debuggers/intel-vtune-profiler.mdx
	modified:   software/debuggers/scalasca.mdx
	modified:   software/debuggers/total-view.mdx
	modified:   software/intel/intel-suite/intel-compilers.mdx
	modified:   software/isv_licenses.mdx
	modified:   software/karolina-compilation.mdx
	modified:   software/lang/conda.mdx
	modified:   software/lang/csc.mdx
	modified:   software/lang/python.mdx
	modified:   software/machine-learning/netket.mdx
	modified:   software/machine-learning/tensorflow.mdx
	modified:   software/modules/lmod.mdx
	modified:   software/mpi/mpi.mdx
	modified:   software/numerical-languages/matlab.mdx
	modified:   software/numerical-languages/opencoarrays.mdx
	modified:   software/numerical-languages/r.mdx
	modified:   software/numerical-libraries/intel-numerical-libraries.mdx
	modified:   software/nvidia-cuda.mdx
	modified:   software/sdk/openacc-mpi.mdx
	modified:   software/tools/apptainer.mdx
	modified:   software/tools/easybuild-images.mdx
	modified:   software/tools/easybuild.mdx
	modified:   software/tools/singularity.mdx
	modified:   software/tools/spack.mdx
	modified:   software/tools/virtualization.mdx
	modified:   software/viz/NICEDCVsoftware.mdx
	modified:   software/viz/gpi2.mdx
	modified:   software/viz/openfoam.mdx
	modified:   software/viz/paraview.mdx
	modified:   software/viz/qtiplot.mdx
	modified:   software/viz/vesta.mdx
	modified:   software/viz/vgl.mdx
	modified:   storage/cesnet-s3.mdx
	modified:   storage/cesnet-storage.mdx
	modified:   storage/project-storage.mdx
parent 84ac4013
Pipeline #42270 failed
Showing changed files with 117 additions and 59 deletions
@@ -9,8 +9,9 @@ All of the compute and login nodes of Anselm are interconnected through a high-b
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native InfiniBand connection among the nodes.
-!!! note
+<Callout>
The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via the native InfiniBand protocol.
+</Callout>
The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
...
@@ -25,8 +25,9 @@ There is a default stripe configuration for Anselm Lustre filesystems. However,
1. stripe_count the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
1. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
-!!! note
+<Callout>
Setting stripe size and stripe count correctly may significantly affect the I/O performance.
+</Callout>
Use the lfs getstripe to get the stripe parameters. Use the lfs setstripe command to set the stripe parameters for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
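For a quick illustration of these two commands, a session along the following lines checks and then adjusts the striping of a directory; the path, stripe size, and stripe count are example values only:

```console
$ lfs getstripe /scratch/work/user/username/mydir             # show the current stripe settings
$ lfs setstripe -S 4M -c 8 /scratch/work/user/username/mydir  # e.g. 4 MB stripes spread over 8 OSTs
$ lfs getstripe /scratch/work/user/username/mydir             # verify the new settings
```

New files created inside the directory inherit these settings.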
@@ -59,15 +60,17 @@ $ man lfs
### Hints on Lustre Stripping
-!!! note
+<Callout>
Increase the stripe_count for parallel I/O to the same file.
+</Callout>
When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
-!!! note
+<Callout>
Using a large stripe size can improve performance when accessing very large files.
+</Callout>
Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
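To make the two rules above concrete: for a roughly 16 GB shared file written by 64 MPI processes, a stripe count of 16 satisfies both the gigabyte rule of thumb and the integral-factor advice (the file name is only an example):

```console
$ lfs setstripe -c 16 /scratch/work/user/username/shared_output.h5   # create the file striped over 16 OSTs
```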
@@ -101,8 +104,9 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
-!!! note
+<Callout>
The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+</Callout>
The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
@@ -113,8 +117,9 @@ The filesystem is backed up, such that it can be restored in case of catastrophi
The HOME filesystem is realized as Lustre parallel filesystem and is available on all login and computational nodes.
Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for the HOME filesystem.
-!!! note
+<Callout>
Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
| HOME filesystem | |
| -------------------- | ------ |
@@ -131,18 +136,21 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
-!!! note
+<Callout>
The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
+</Callout>
-!!! warning
+<Callout type=warn>
Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
The SCRATCH filesystem is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 10 OSTs dedicated for the SCRATCH filesystem.
-!!! note
+<Callout>
Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
| SCRATCH filesystem | |
| -------------------- | -------- |
@@ -256,8 +264,9 @@ Default ACL mechanism can be used to replace setuid/setgid permissions on direct
### Local Scratch
-!!! note
+<Callout>
Every computational node is equipped with 330GB local scratch disk.
+</Callout>
Use local scratch in case you need to access large amount of small files during your calculation.
@@ -265,8 +274,9 @@ The local scratch disk is mounted as /lscratch and is accessible to user at /lsc
The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
-!!! note
+<Callout>
The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+</Callout>
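A minimal jobscript sketch following this advice stages data to the local scratch, computes there, and copies the results back before the directory disappears; the input, output, and solver names are placeholders:

```bash
#!/bin/bash
SCRDIR=/lscratch/$PBS_JOBID                        # node-local scratch created for this job
cp $PBS_O_WORKDIR/input.dat $SCRDIR/               # stage input onto the fast local disk
cd $SCRDIR
$PBS_O_WORKDIR/my_solver input.dat > output.log    # run the I/O intensive calculation locally
cp output.log $PBS_O_WORKDIR/                      # save results; /lscratch/$PBS_JOBID is deleted when the job ends
```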
| local SCRATCH filesystem | |
| ------------------------ | -------------------- |
@@ -280,15 +290,17 @@ The local scratch filesystem is intended for temporary scratch data generated du
Every computational node is equipped with filesystem realized in memory, so called RAM disk.
-!!! note
+<Callout>
Use RAM disk in case you need a fast access to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
+</Callout>
The local RAM disk is mounted as /ramdisk and is accessible to user at /ramdisk/$PBS_JOBID directory.
The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory. It is not recommended to allocate large amount of memory and use large amount of data in RAM disk filesystem at the same time.
-!!! note
+<Callout>
The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+</Callout>
| RAM disk | |
| ----------- | -------------------------------------------------------------------------------------------------------- |
@@ -316,8 +328,9 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
Do not use shared filesystems at IT4Innovations as a backup for large amount of data or long-term archiving purposes.
-!!! note
+<Callout>
The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service][f].
+</Callout>
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
@@ -333,15 +346,17 @@ The procedure to obtain the CESNET access is quick and trouble-free.
### Understanding CESNET Storage
-!!! note
+<Callout>
It is very important to understand the CESNET storage before uploading data. [Read][i] first.
+</Callout>
Once registered for CESNET Storage, you may [access the storage][j] in number of ways. We recommend the SSHFS and RSYNC methods.
### SSHFS Access
-!!! note
+<Callout>
SSHFS: The storage will be mounted like a local hard drive
+</Callout>
The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
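A typical session might look like the following; HOSTNAME and the remote path are placeholders, not the actual CESNET endpoints:

```console
$ mkdir cesnet                                   # local mount point
$ sshfs username@HOSTNAME:/remote/path cesnet    # mount the remote storage
$ cp mydata.tar cesnet/                          # use it like a local directory
$ fusermount -u cesnet                           # unmount when finished
```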
@@ -385,8 +400,9 @@ $ fusermount -u cesnet
### RSYNC Access
-!!! info
+<Callout>
RSYNC provides delta transfer for best performance and can resume interrupted transfers.
+</Callout>
RSYNC is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. RSYNC is widely used for backups and mirroring and as an improved copy command for everyday use.
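For instance, a restartable transfer of a results directory to the remote storage could be sketched as follows (source and destination paths are placeholders):

```console
$ rsync -av --partial --progress ./results/ username@HOSTNAME:backups/results/
```

Re-running the same command after an interruption sends only the missing or changed pieces.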
...
@@ -33,8 +33,9 @@ There is default stripe configuration for Barbora Lustre filesystems. However, u
1. `stripe_count` the number of OSTs to stripe across; default is 1 for Barbora Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
1. `stripe_offset` the index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
-!!! note
+<Callout>
Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
Use the `lfs getstripe` command for getting the stripe parameters. Use `lfs setstripe` for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
@@ -62,15 +63,17 @@ $ man lfs
### Hints on Lustre Stripping
-!!! note
+<Callout>
Increase the `stripe_count` for parallel I/O to the same file.
+</Callout>
When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the `stripe_count` is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the `ls -l` command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
-!!! note
+<Callout>
Using a large stripe size can improve performance when accessing very large files
+</Callout>
Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
@@ -99,8 +102,9 @@ The architecture of Lustre on Barbora is composed of two metadata servers (MDS)
The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 28TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25GB per user. Should 25GB prove insufficient, contact [support][d], the quota may be lifted upon request.
-!!! note
+<Callout>
The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+</Callout>
The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
@@ -123,18 +127,21 @@ The SCRATCH is realized as Lustre parallel file system and is available from all
The SCRATCH filesystem is mounted in the `/scratch/project/PROJECT_ID` directory created automatically with the `PROJECT_ID` project. Accessible capacity is 310TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 10TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. Should 10TB prove insufficient, contact [support][d], the quota may be lifted upon request.
-!!! note
+<Callout>
The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high-performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
+</Callout>
-!!! warning
+<Callout type=warn>
Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
The SCRATCH filesystem is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 5 OSTs dedicated for the SCRATCH filesystem.
-!!! note
+<Callout>
Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
| SCRATCH filesystem | |
| -------------------- | --------- |
...
@@ -5,10 +5,11 @@ IT4I cloud consists of 14 nodes from the [Karolina][a] supercomputer.
The cloud site is built on top of OpenStack,
which is a free open standard cloud computing platform.
-!!! Note
+<Callout>
The guide describes steps for personal projects.<br>
Some steps may differ for large projects.<br>
For large project, apply for resources to the [Allocation Committee][11].
+</Callout>
## Access
...
@@ -26,9 +26,10 @@ salloc -N 1 -c 32 -A PROJECT-ID -p p03-amd --gres=gpu:2 --time=08:00:00
salloc -N 1 -c 16 -A PROJECT-ID -p p03-amd --gres=gpu:1 --time=08:00:00
```
-!!! Note
+<Callout>
p03-amd01 server has hyperthreading **enabled** therefore htop shows 128 cores.<br>
p03-amd02 server has hyperthreading **disabled** therefore htop shows 64 cores.
+</Callout>
## Using AMD MI100 GPUs
...
@@ -23,8 +23,9 @@ The platform offers three toolchains:
- [NVHPC](https://developer.nvidia.com/hpc-sdk) (as a module `ml NVHPC`)
- [Clang for NVIDIA Grace](https://developer.nvidia.com/grace/clang) (installed in `/opt/nvidia/clang`)
-!!! note
+<Callout>
The NVHPC toolchain showed strong results with minimal amount of tuning necessary in our initial evaluation.
+</Callout>
### GCC Toolchain
@@ -59,8 +60,9 @@ for(int i = 0; i < 1000000; ++i) {
}
```
-!!! note
+<Callout>
Our basic experiments show that fixed width vectorization (NEON) tends to perform better in the case of short (register-length) loops than SVE. In cases (like above), where specified `vectorize_width` is larger than availiable vector unit width, Clang will emit multiple NEON instructions (eg. 4 instructions will be emitted to process 8 64-bit operations in 128-bit units of Grace).
+</Callout>
### NVHPC Toolchain
@@ -70,8 +72,9 @@ The NVHPC toolchain handled aforementioned case without any additional tuning. S
The basic libraries (BLAS and LAPACK) are included in NVHPC toolchain and can be used simply as `-lblas` and `-llapack` for BLAS and LAPACK respectively (`lp64` and `ilp64` versions are also included).
-!!! note
+<Callout>
The Grace platform doesn't include CUDA-capable GPU, therefore `nvcc` will fail with an error. This means that `nvc`, `nvc++` and `nvfortran` should be used instead.
+</Callout>
### NVIDIA Performance Libraries
@@ -91,8 +94,9 @@ This package should be compatible with all availiable toolchains and includes CM
We recommend to use the multi-threaded BLAS library from the NVPL package.
-!!! note
+<Callout>
It is important to pin the processes using **OMP_PROC_BIND=spread**
+</Callout>
Example:
@@ -274,8 +278,9 @@ nvfortran -O3 -march=native -fast -lblas main.f90 -o main.x
OMP_NUM_THREADS=144 OMP_PROC_BIND=spread ./main
```
-!!! note
+<Callout>
It may be advantageous to use NVPL libraries instead NVHPC ones. For example DGEMM BLAS 3 routine from NVPL is almost 30% faster than NVHPC one.
+</Callout>
### Using Clang (For Grace) Toolchain
@@ -286,8 +291,9 @@ ml NVHPC
/opt/nvidia/clang/17.23.11/bin/clang++ -O3 -march=native -ffast-math -I$NVHPC/Linux_aarch64/$EBVERSIONNVHPC/compilers/include/lp64 -lnvpl_blas_lp64_gomp main.cpp -o main
```
-!!! note
+<Callout>
NVHPC module is used just for the `cblas.h` include in this case. This can be avoided by changing the code to use `nvpl_blas.h` instead.
+</Callout>
## Additional Resources
...
@@ -69,8 +69,9 @@ memkind_free(NULL, pData); // "kind" parameter is deduced from the address
Similarly other memory types can be chosen.
-!!! note
+<Callout>
The allocation will return `NULL` pointer when memory of specified kind is not available.
+</Callout>
## High Bandwidth Memory (HBM)
...
@@ -10,8 +10,9 @@ including features such as session management, user authentication, and virtual
## How to Access VMware Horizon
-!!! important
+<Callout type=warn>
Access to VMware Horizon requires IT4I VPN.
+</Callout>
1. Contact [IT4I support][a] with a request for an access and VM allocation.
1. [Download][1] and install the VMware Horizon Client for Windows.
...
@@ -77,8 +77,9 @@ xlf -lopenblas hello.f90 -o hello
to build the application as usual.
-!!! note
+<Callout>
Combination of `xlf` and `openblas` seems to cause severe performance degradation. Therefore `ESSL` library should be preferred (see below).
+</Callout>
### Using ESSL Library
...
@@ -297,8 +297,9 @@ $ XCL_EMULATION_MODE=sw_emu <application>
### Hardware Synthesis Mode
-!!! note
+<Callout>
The HLS of these simple applications **can take up to 2 hours** to finish.
+</Callout>
To allow the application to utilize real hardware we have to synthetize FPGA design for the accelerator. This can be done by repeating same steps used to build kernels in emulation mode, but with `IT4I_BUILD_MODE` set to `hw` like so:
...
@@ -60,8 +60,9 @@ Run interactive job, with X11 forwarding
$ salloc -A PROJECT-ID -p p01-arm --x11
```
-!!! warning
+<Callout type=warn>
Do not use `srun` for initiating interactive jobs, subsequent `srun`, `mpirun` invocations would block forever.
+</Callout>
## Running Batch Jobs
@@ -377,8 +378,9 @@ $ scontrol -d show node p02-intel02 | grep ActiveFeatures
Slurm supports the ability to define and schedule arbitrary resources - Generic RESources (GRES) in Slurm's terminology. We use GRES for scheduling/allocating GPUs and FPGAs.
-!!! warning
+<Callout type=warn>
Use only allocated GPUs and FPGAs. Resource separation is not enforced. If you use non-allocated resources, you can observe strange behavior and get into troubles.
+</Callout>
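An allocation that requests a GPU through GRES, in the same spirit as the interactive examples above, might look like this (project, partition, and GPU count are placeholders):

```console
$ salloc -A PROJECT-ID -p p03-amd --gres=gpu:1 --time=04:00:00
```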
### Node Resources
...
@@ -3,8 +3,9 @@ title: "Accessing the DGX-2"
---
## Before You Access
-!!! warning
+<Callout type= warn>
GPUs are single-user devices. GPU memory is not purged between job runs and it can be read (but not written) by any user. Consider the confidentiality of your running jobs.
+</Callout>
## How to Access
@@ -24,8 +25,9 @@ The HOME filesystem is realized as an NFS filesystem. This is a shared home from
The SCRATCH is realized on an NVME storage. The SCRATCH filesystem is mounted in the `/scratch` directory.
Accessible capacity is 22TB, shared among all users.
-!!! warning
+<Callout type=warn>
Files on the SCRATCH filesystem that are not accessed for more than 60 days will be automatically deleted.
+</Callout>
### PROJECT
...
@@ -81,8 +81,9 @@ Wed Jun 16 07:46:32 2021
kru0052@cn202:~$ exit
```
-!!! tip
+<Callout>
Submit the interactive job using the `salloc` command.
+</Callout>
## Job Execution
...
@@ -148,8 +148,9 @@ Use the command `iput` for upload, `iget` for download, or `ihelp` for help.
## Access to iRODS Collection From Other Resource
-!!! note
+<Callout>
This guide assumes you are uploading your data from your local PC/VM.
+</Callout>
Use the password from [AAI][f].
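With the client environment configured, the commands mentioned above follow the usual upload/download pattern (file names are examples only):

```console
$ iput local_data.tar      # upload a file into the iRODS collection
$ iget local_data.tar      # download it back
$ ihelp                    # list the available iCommands
```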
@@ -173,8 +174,9 @@ For access, set PAM passwords at [AAI][f].
### Fuse
-!!!note "Linux client only"
+<Callout>
This is a Linux client only, basic knowledge of the command line is necessary.
+</Callout>
Fuse allows you to work with your iRODS collection like an ordinary directory.
@@ -227,8 +229,9 @@ To stop/unmount your collection, use:
### iCommands
-!!!note "Linux client only"
+<Callout>
This is a Linux client only, basic knowledge of the command line is necessary.
+</Callout>
We recommend Centos7, Ubuntu 20 is optional.
...
@@ -30,8 +30,9 @@ After the migration, you must use your **e-INFRA CZ credentials** to access all
Successfully migrated accounts tied to e-INFRA CZ can be self-managed at [e-INFRA CZ User profile][4].
-!!! tip "Recommendation"
+<Callout>
We recommend [verifying your SSH keys][6] for cluster access.
+</Callout>
## Troubleshooting
...
@@ -13,8 +13,9 @@ Note that bash is the only supported shell.
| Barbora | yes | yes | yes | yes | no |
| DGX-2 | yes | no | no | no | no |
-!!! info
+<Callout>
Bash is the default shell. Should you need a different shell, contact [support\[at\]it4i.cz][3].
+</Callout>
## Environment Customization
@@ -39,8 +40,9 @@ then
fi
```
-!!! note
+<Callout>
Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks the fundamental functionality (SCP) of your account. Take care for SSH session interactivity for such commands as stated in the previous example.
+</Callout>
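One common way to keep such commands out of non-interactive sessions is to guard them with an interactivity test in `~/.bashrc`, for example (a sketch only; the echoed message and loaded module are placeholders):

```bash
# ~/.bashrc – run the noisy parts only when the shell is interactive,
# so scp/sftp and other non-interactive SSH sessions keep working
if [ -n "$PS1" ]; then
    echo "Logged in on $(hostname)"
    ml Python        # example module load
fi
```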
### Application Modules
@@ -74,8 +76,9 @@ Application modules on clusters are built using [EasyBuild][1]. The modules are
python: python packages
```
-!!! note
+<Callout>
The modules set up the application paths, library paths and environment variables for running a particular application.
+</Callout>
The modules may be loaded, unloaded, and switched according to momentary needs. For details, see [lmod][2].
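In day-to-day use that typically amounts to a few short commands, for example (the module name is illustrative):

```console
$ ml av Python       # list available Python modules
$ ml Python          # load the default version
$ ml                 # show currently loaded modules
$ ml -Python         # unload it again
```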
...
@@ -5,8 +5,9 @@ title: "e-INFRA CZ Account"
which provides capacities and resources for the transmission, storage and processing of scientific and research data.
IT4Innovations has become a member of e-INFRA CZ on January 2022.
-!!! important
+<Callout type=warn>
Only persons affiliated with an academic institution from the Czech Republic ([eduID.cz][6]) are eligible for an e-INFRA CZ account.
+</Callout>
## Request e-INFRA CZ Account
...
---
title: "Get Project Membership"
---
-!!! note
+<Callout>
You need to be named as a collaborator by a Primary Investigator (PI) in order to access and use the clusters.
+</Callout>
## Authorization by Web
...
@@ -18,8 +18,9 @@ and launch interactive apps on login nodes.
## OOD Apps on IT4I Clusters
-!!! note
+<Callout>
Barbora OOD offers Mate and XFCE Desktops on login node only. Other applications listed below are exclusive to Karolina OOD.
+</Callout>
* Desktops
* Karolina Login Mate
...
@@ -9,8 +9,9 @@ The recommended clients are [TightVNC][b] or [TigerVNC][c] (free, open source, a
## Create VNC Server Password
-!!! note
+<Callout>
VNC server password should be set before the first login. Use a strong password.
+</Callout>
```console
$ vncpasswd
@@ -20,8 +21,9 @@ Verify:
## Start VNC Server
-!!! note
+<Callout>
To access VNC, a remote VNC Server must be started first and a tunnel using SSH port forwarding must be established.
+</Callout>
[See below][2] the details on SSH tunnels.
@@ -40,8 +42,9 @@ Generally, you can choose display number freely, *except these occupied numbers*
Also remember that display number should be lower than or equal to 99.
Based on this requirement, we have chosen the display number 61, as seen in the examples below.
-!!! note
+<Callout>
Your situation may be different so the choice of your number may differ, as well. **Choose and use your own display number accordingly!**
+</Callout>
Start your remote VNC server on the chosen display number (61):
@@ -74,13 +77,15 @@ username :61
username :102
```
-!!! note
+<Callout>
The VNC server runs on port 59xx, where xx is the display number. To get your port number, simply add 5900 + display number, in our example 5900 + 61 = 5961. Another example for display number 102 is calculation of TCP port 5900 + 102 = 6002, but note that TCP ports above 6000 are often used by X11. **Calculate your own port number and use it instead of 5961 from examples below**.
+</Callout>
To access the remote VNC server you have to create a tunnel between the login node using TCP port 5961 and your local machine using a free TCP port (for simplicity the very same) in next step. See examples for [Linux/Mac OS][2] and [Windows][3].
-!!! note
+<Callout>
The tunnel must point to the same login node where you launched the VNC server, e.g. login2. If you use just cluster-name.it4i.cz, the tunnel might point to a different node due to DNS round robin.
+</Callout>
## Linux/Mac OS Example of Creating a Tunnel
@@ -121,8 +126,9 @@ You have to close the SSH tunnel which is still running in the background after
kill 2022
```
-!!! note
+<Callout>
You can watch the instruction video on how to make a VNC connection between a local Ubuntu desktop and the IT4I cluster [here][e].
+</Callout>
## Windows Example of Creating a Tunnel
@@ -215,8 +221,9 @@ or:
$ pkill vnc
```
-!!! note
+<Callout>
Also, do not forget to terminate the SSH tunnel, if it was used. For details, see the end of [this section][2].
+</Callout>
## GUI Applications on Compute Nodes Over VNC
...