Commit 16e6df7b authored by Pavel Jirásek

Merge branch 'content_revision' into 'master'

Content revision

See merge request !22
parents 00808274 acb2d8da
......@@ -342,7 +342,7 @@ exit
In this example, a directory on /home holds the input file input and the executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
!!! Note "Note"
Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files onto the shared /scratch before job submission and to retrieve the outputs manually after all calculations are finished.
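As a hedged illustration of this manual workflow (the /scratch/$USER/myjob path and the file names are placeholders, not paths prescribed by the documentation), preloading and retrieval from a login node might look like:

```bash
# Sketch only: preload inputs onto the shared /scratch before job submission.
# /scratch/$USER/myjob and the file names are placeholders.
$ mkdir -p /scratch/$USER/myjob
$ cp input mympiprog.x /scratch/$USER/myjob/

# ... after all calculations have finished, retrieve the outputs manually:
$ cp /scratch/$USER/myjob/output ~/
```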
......@@ -382,7 +382,7 @@ sections.
!!! Note "Note"
The local scratch directory is often useful for single-node jobs. Local scratch will be deleted immediately after the job ends.
Example jobscript for single node calculation, using [local scratch](../storage/storage/) on the node:
Example jobscript for single node calculation, using [local scratch](storage/) on the node:
```bash
#!/bin/bash
......
......@@ -197,7 +197,7 @@ Generally both shared file systems are available through GridFTP:
|/home|Lustre|Default HOME directories of users in format /home/prace/login/|
|/scratch|Lustre|Shared SCRATCH mounted on the whole cluster|
More information about the shared file systems is available [here](storage/storage/).
More information about the shared file systems is available [here](storage/).
Usage of the cluster
--------------------
......
......@@ -55,7 +55,7 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
Example of logging in to the cluster:
!!! Note "Note"
The environment is **not** shared between login nodes, except for [shared filesystems](../anselm-cluster-documentation/storage/storage/#shared-filesystems).
The environment is **not** shared between login nodes, except for [shared filesystems](storage/#shared-filesystems).
Data Transfer
-------------
......@@ -114,7 +114,7 @@ $ man sshfs
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disk.
More information about the shared file systems is available [here](../anselm-cluster-documentation/storage/storage/).
More information about the shared file systems is available [here](storage/).
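For the Linux-side SSHFS mount referenced above (`man sshfs`), a minimal sketch might look as follows; the local mount point ~/anselm is arbitrary and only illustrative:

```bash
# Sketch: mount the Anselm home directory on a local Linux workstation via SSHFS.
# The mount point ~/anselm is arbitrary; unmount later with: fusermount -u ~/anselm
local $ mkdir -p ~/anselm
local $ sshfs username@anselm.it4i.cz: ~/anselm
```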
Connection restrictions
......@@ -165,7 +165,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Anselm Cluster.
First, establish the remote port forwarding from the login node, as [described above](../anselm-cluster-documentation/shell-and-data-access/#port-forwarding-from-login-nodes).
First, establish the remote port forwarding from the login node, as [described above](#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
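A hedged sketch of such a line (mirroring the command shown later in the Salomon outgoing-connections section), forwarding port 6000 on the compute node to port 6000 on the login node login1:

```bash
# Forward port 6000 on the compute node to port 6000 on login node login1.
# Assumes the remote forwarding from the login node has already been established.
$ ssh -TN -f -L 6000:localhost:6000 login1
```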
......@@ -190,10 +190,10 @@ local $ ssh -D 1080 localhost
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](../anselm-cluster-documentation/shell-and-data-access/#port-forwarding-from-login-nodes).
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes).
```bash
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well.
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
Storage
=======
There are two main shared file systems on the Anselm cluster, the [HOME](../storage/#home) and [SCRATCH](../storage/#scratch). All login and compute nodes may access the same data on the shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
There are two main shared file systems on the Anselm cluster, the [HOME](#home) and [SCRATCH](#scratch). All login and compute nodes may access the same data on the shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
Archiving
---------
Please don't use the shared filesystems as a backup for large amounts of data or as a long-term archiving solution. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](../storage/#cesnet-data-storage), which is available via SSHFS.
Please don't use the shared filesystems as a backup for large amounts of data or as a long-term archiving solution. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.
Shared Filesystems
------------------
The Anselm computer provides two main shared filesystems, the [HOME filesystem](../storage.html#home) and the [SCRATCH filesystem](../storage/#scratch). Both the HOME and SCRATCH filesystems are realized as parallel Lustre filesystems. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users with fine-grained control.
The Anselm computer provides two main shared filesystems, the [HOME filesystem](#home) and the [SCRATCH filesystem](#scratch). Both the HOME and SCRATCH filesystems are realized as parallel Lustre filesystems. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users with fine-grained control.
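As a hedged illustration of the fine-grained sharing mentioned above (the username johnsm and the /scratch path are placeholders, not accounts or paths from this documentation), extended ACLs can be manipulated with the standard setfacl/getfacl tools:

```bash
# Sketch: grant user "johnsm" (placeholder) read/execute access to a /scratch directory.
$ setfacl -m user:johnsm:rx /scratch/$USER/myjob      # add an ACL entry
$ setfacl -d -m user:johnsm:rx /scratch/$USER/myjob   # default ACL for newly created files
$ getfacl /scratch/$USER/myjob                        # verify the resulting ACLs
```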
### Understanding the Lustre Filesystems
......
......@@ -5,7 +5,7 @@ Introduction
------------
The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer equipped with 24 cores (two twelve-core Intel Xeon processors) and 128GB RAM. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 0.5PB of /home NFS disk storage for user files. Users may use a DDN Lustre shared storage with a capacity of 1.69 PB, which is available for scratch project data. User access to the Salomon cluster is provided by four login nodes.
[More about schematic representation of the Salomon cluster compute nodes IB topology](../network/ib-single-plane-topology/).
[More about schematic representation of the Salomon cluster compute nodes IB topology](ib-single-plane-topology/).
![Salomon](../img/salomon-2)
......@@ -19,7 +19,7 @@ General information
|Primary purpose|High Performance Computing|
|Architecture of compute nodes|x86-64|
|Operating system|CentOS 6.7 Linux|
|[**Compute nodes**](../compute-nodes/)||
|[**Compute nodes**](compute-nodes/)||
|Totally|1008|
|Processor|2x Intel Xeon E5-2680v3, 2.5GHz, 12 cores|
|RAM|128GB, 5.3GB per core, DDR4@2133 MHz|
......@@ -39,7 +39,7 @@ Compute nodes
|w/o accelerator|576|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|-|
|MIC accelerated|432|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|2x Intel Xeon Phi 7120P, 61 cores, 16GB RAM|
For more details please refer to the [Compute nodes](../compute-nodes/).
For more details please refer to the [Compute nodes](compute-nodes/).
Remote visualization nodes
--------------------------
......
......@@ -402,7 +402,7 @@ exit
In this example, a directory on /home holds the input file input and the executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
!!! Note "Note"
Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files onto the shared /scratch before job submission and to retrieve the outputs manually after all calculations are finished.
......@@ -441,7 +441,7 @@ HTML commented section #2 (examples need to be reworked)
!!! Note "Note"
The local scratch directory is often useful for single-node jobs. Local scratch will be deleted immediately after the job ends. Be very careful: use of the RAM disk filesystem comes at the expense of operational memory.
Example jobscript for single node calculation, using [local scratch](../storage/storage/) on the node:
Example jobscript for single node calculation, using [local scratch](storage/) on the node:
```bash
#!/bin/bash
......
Outgoing connections
====================
Connection restrictions
-----------------------
Outgoing connections from Salomon Cluster login nodes to the outside world are restricted to the following ports:
|Port|Protocol|
|---|---|
|22|ssh|
|80|http|
|443|https|
|9418|git|
!!! Note "Note"
Please use **ssh port forwarding** and proxy servers to connect from Salomon to all other remote ports.
Outgoing connections from Salomon Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are blocked.
Port forwarding
---------------
### Port forwarding from login nodes
!!! Note "Note"
Port forwarding allows an application running on Salomon to connect to an arbitrary remote host and port.
It works by tunneling the connection from Salomon back to the user's workstation and forwarding it from the workstation to the remote host.
Pick some unused port on a Salomon login node (for example 6000) and establish the port forwarding:
```bash
local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz
```
In this example, we establish port forwarding between port 6000 on Salomon and port 1234 on remote.host.com. By accessing localhost:6000 on Salomon, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Salomon configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click the Remote radio button. Insert 6000 into the Source port textbox and remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
Port forwarding may be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com:
```bash
$ ssh -L 6000:localhost:1234 remote.host.com
```
Note: Port number 6000 is chosen as an example only. Pick any free port.
### Port forwarding from compute nodes
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Salomon Cluster.
First, establish the remote port forwarding from the login node, as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
```bash
$ ssh -TN -f -L 6000:localhost:6000 login1
```
In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see the response of remote.host.com:1234.
### Using proxy servers
Port forwarding is static; each single port is mapped to a particular port on a remote host. A connection to another remote host requires a new forward.
!!! Note "Note"
Applications with built-in proxy support get unrestricted access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides the functionality. To establish a SOCKS proxy server listening on port 1080, run:
```bash
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
```bash
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well.
......@@ -119,11 +119,11 @@ If the user uses GSI SSH based access, then the procedure is similar to the SSH
### Access with SSH
After successfully obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see the [section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access/).
After successfully obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see the [section in the general documentation](shell-and-data-access/).
File transfers
------------------
PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access/).
PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation](shell-and-data-access/).
Apart from the standard mechanisms, PRACE users may transfer data to/from the Salomon cluster via a GridFTP server running the Globus Toolkit GridFTP service. The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
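A hedged sketch of such a transfer with globus-url-copy follows; the GridFTP endpoint hostname and the remote path are placeholders (consult the PRACE section for the actual endpoint), and a valid grid proxy is assumed:

```bash
# Sketch: copy a local file to the cluster via GridFTP.
# gridftp.example.prace.eu and the remote path are placeholders, not real endpoints.
$ grid-proxy-init                                    # obtain a proxy certificate
$ globus-url-copy file://$HOME/input.dat \
    gsiftp://gridftp.example.prace.eu/scratch/prace/login/input.dat
```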
......
......@@ -42,7 +42,7 @@ On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/access
After logging in, you will see the command prompt:
!!! Note "Note"
The environment is **not** shared between login nodes, except for [shared filesystems](../salomon/storage/storage/).
The environment is **not** shared between login nodes, except for [shared filesystems](storage/).
Data Transfer
-------------
......@@ -92,7 +92,7 @@ $ man sshfs
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disk.
More information about the shared file systems is available [here](../salomon/storage/storage/).
More information about the shared file systems is available [here](storage/).
Connection restrictions
-----------------------
......@@ -142,7 +142,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Salomon Cluster.
First, establish the remote port forwarding from the login node, as [described above](../salomon/shell-and-data-access/#port-forwarding-from-login-nodes).
First, establish the remote port forwarding from the login node, as [described above](#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
......@@ -167,10 +167,10 @@ local $ ssh -D 1080 localhost
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](../salomon/shell-and-data-access/#port-forwarding-from-login-nodes).
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes).
```bash
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](../salomon/shell-and-data-access/#port-forwarding-from-compute-nodes) as well.
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
......@@ -3,7 +3,7 @@ Storage
Introduction
------------
There are two main shared file systems on the Salomon cluster, the [HOME](../salomon/storage/#home) and [SCRATCH](../salomon/storage/#shared-filesystems).
There are two main shared file systems on the Salomon cluster, the [HOME](#home) and [SCRATCH](#shared-filesystems).
All login and compute nodes may access the same data on the shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
......@@ -11,26 +11,26 @@ Policy (in a nutshell)
----------------------
!!! Note "Note"
Use [HOME](#home) for your most valuable data and programs.
Use [WORK](storage/#work) for your large project files.
Use [TEMP](storage/#temp) for large scratch data.
Use [WORK](#work) for your large project files.
Use [TEMP](#temp) for large scratch data.
Do not use for [archiving](storage/#archiving)!
Do not use for [archiving](#archiving)!
Archiving
-------------
Please don't use the shared filesystems as a backup for large amounts of data or as a long-term archiving solution. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](../salomon/storage/#cesnet-data-storage), which is available via SSHFS.
Please don't use the shared filesystems as a backup for large amounts of data or as a long-term archiving solution. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.
Shared Filesystems
----------------------
The Salomon computer provides two main shared filesystems, the [HOME filesystem](storage/#home-filesystem) and the [SCRATCH filesystem](storage/#scratch-filesystem). The SCRATCH filesystem is partitioned into [WORK and TEMP workspaces](storage/#shared-workspaces). The HOME filesystem is realized as tiered NFS disk storage. The SCRATCH filesystem is realized as a parallel Lustre filesystem. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both the HOME and SCRATCH filesystems for the purpose of sharing data with other users with fine-grained control.
The Salomon computer provides two main shared filesystems, the [HOME filesystem](#home-filesystem) and the [SCRATCH filesystem](#scratch-filesystem). The SCRATCH filesystem is partitioned into [WORK and TEMP workspaces](#shared-workspaces). The HOME filesystem is realized as tiered NFS disk storage. The SCRATCH filesystem is realized as a parallel Lustre filesystem. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both the HOME and SCRATCH filesystems for the purpose of sharing data with other users with fine-grained control.
### HOME filesystem
The HOME filesystem is realized as a tiered filesystem exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The filesystem is available on all login and computational nodes. The HOME filesystem hosts the [HOME workspace](storage/#home).
The HOME filesystem is realized as a tiered filesystem exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The filesystem is available on all login and computational nodes. The HOME filesystem hosts the [HOME workspace](#home).
### SCRATCH filesystem
The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH filesystem hosts the [WORK and TEMP workspaces](storage/#shared-workspaces).
The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH filesystem hosts the [WORK and TEMP workspaces](#shared-workspaces).
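Because the SCRATCH capacity is distributed over the OSSes, file striping can be inspected and tuned with the standard Lustre lfs tool. A hedged sketch follows; the directory path and the stripe count are illustrative placeholders, not site recommendations:

```bash
# Sketch: inspect and adjust Lustre striping for a directory on SCRATCH.
# The path and the stripe count of 4 are illustrative placeholders.
$ lfs getstripe /scratch/temp/$USER/mydata            # show the current striping
$ lfs setstripe -c 4 /scratch/temp/$USER/mydata       # stripe new files across 4 OSTs
```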
Configuration of the SCRATCH Lustre storage
......
......@@ -18,8 +18,6 @@ pages:
- Pageant SSH agent: get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md
- PuTTY key generator: get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
- PuTTY: get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
- GUI Access:
- Introduction: get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
- X Window System: get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
- VNC: get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
- Cygwin and x11 Forwarding: get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
......