Commit 3d23514a authored by Lukáš Krupčík: add .md files (parent a621a1ab)
Introduction
============
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB hard drive. Nodes are interconnected by a fully non-blocking fat-tree InfiniBand network and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in the [Hardware Overview](https://docs.it4i.cz/anselm-cluster-documentation/hardware-overview).
The cluster runs [bullx Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) as its [operating system](https://docs.it4i.cz/anselm-cluster-documentation/software/operating-system), which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg).
We have installed a wide range of [software](https://docs.it4i.cz/anselm-cluster-documentation/software) packages targeted at different scientific domains. These packages are accessible via the [modules environment](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules).
A shared file system for user data (HOME, 320 TB) and a shared file system for job data (SCRATCH, 146 TB) are available to users.
The PBS Professional workload manager provides [computing resources allocation and job execution](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution).
Read more on how to [apply for resources](https://docs.it4i.cz/get-started-with-it4innovations/applying-for-resources), [obtain login credentials](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials), and [access the cluster](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster).
Shell access and data transfer
==============================
Interactive Login
-----------------
The Anselm cluster is accessed via the SSH protocol through login nodes login1 and login2 at the address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
| Login address         | Port | Protocol | Login node                                   |
|-----------------------|------|----------|----------------------------------------------|
| anselm.it4i.cz        | 22   | ssh      | round-robin DNS record for login1 and login2 |
| login1.anselm.it4i.cz | 22   | ssh      | login1                                       |
| login2.anselm.it4i.cz | 22   | ssh      | login2                                       |
The authentication is by the [private key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys).
Please verify the SSH fingerprints during the first logon. They are identical on all login nodes:

```
29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
```
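The fingerprints above are in MD5 format. You can retrieve the server's fingerprints yourself before the first login; this is a sketch that assumes an OpenSSH client recent enough (6.8 or later) to support `-E md5`, and a bash shell for the process substitution:

```
local $ ssh-keygen -E md5 -lf <(ssh-keyscan anselm.it4i.cz 2>/dev/null)
```

Compare the printed values against the list above before accepting the host key.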
Private key authentication:
On **Linux** or **Mac**, use
```
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
```
If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to restrict the permissions on the private key file:
```
local $ chmod 600 /path/to/id_rsa
```
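On Linux or Mac, you may also add a host entry to your OpenSSH client configuration so that the key and username are picked up automatically. A minimal sketch, assuming the standard `~/.ssh/config` location; the alias `anselm` and the paths are placeholders to adapt:

```
# ~/.ssh/config -- "anselm" is an arbitrary alias
Host anselm
    HostName anselm.it4i.cz
    User username
    IdentityFile /path/to/id_rsa
```

With this entry in place, `ssh anselm` is equivalent to the full command above.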
On **Windows**, use [PuTTY ssh
client](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty).
After logging in, you will see the command prompt:
```
(ASCII-art "Anselm" banner)

                        http://www.it4i.cz/?lang=en

Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.anselm ~]$
```
The environment is **not** shared between login nodes, except for
[shared
filesystems](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1#section-1).
Data Transfer
-------------
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance. (Not available yet.)
| Address                                | Port | Protocol  |
|----------------------------------------|------|-----------|
| anselm.it4i.cz                         | 22   | scp, sftp |
| login1.anselm.it4i.cz                  | 22   | scp, sftp |
| login2.anselm.it4i.cz                  | 22   | scp, sftp |
| dm1.anselm.it4i.cz (not available yet) | 22   | scp, sftp |
The authentication is by the [private key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys).
Data transfer rates of up to **160 MB/s** can be achieved with scp or sftp; 1 TB may be transferred in about 1:50 h.
To achieve the 160 MB/s transfer rate, the end user must be connected by a 10 Gbit/s line all the way to IT4Innovations and use a computer with a fast processor for the transfer. Over a Gigabit Ethernet connection, up to 110 MB/s may be expected. A fast cipher (aes128-ctr) should be used.
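The quoted time follows directly from the rate. A quick sanity check of the arithmetic, using decimal units (1 TB = 1,000,000 MB):

```
# 1 TB at 160 MB/s, in seconds and then minutes
echo $(( 1000000 / 160 ))        # 6250 seconds
echo $(( 1000000 / 160 / 60 ))   # 104 minutes
```

That is roughly 1 hour 45 minutes of raw transfer, consistent with the quoted 1:50 h once protocol overhead is included.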
If you experience degraded data transfer performance, consult your local
network provider.
On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
```
local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```
```
local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
```
or
```
local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
```
A very convenient way to transfer files in and out of Anselm is via the FUSE filesystem [sshfs](http://linux.die.net/man/1/sshfs):
```
local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
```
Using sshfs, the user's Anselm home directory will be mounted on the local computer, just like an external disk.
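To unmount the directory when finished, use the standard FUSE userland tool (assuming it is installed on your workstation):

```
local $ fusermount -u mountpoint
```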
Learn more about ssh, scp, and sshfs by reading the man pages:
```
$ man ssh
$ man scp
$ man sshfs
```
On Windows, use the [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disk.
More information about the shared file systems is available [here](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage).
Outgoing connections
====================
Connection restrictions
-----------------------
Outgoing connections from Anselm Cluster login nodes to the outside world are restricted to the following ports:
| Port | Protocol |
|------|----------|
| 22   | ssh      |
| 80   | http     |
| 443  | https    |
| 9418 | git      |
Please use **ssh port forwarding** and proxy servers to connect from Anselm to all other remote ports.
Outgoing connections from Anselm Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
Port forwarding
---------------
### Port forwarding from login nodes
Port forwarding allows an application running on Anselm to connect to an arbitrary remote host and port.
It works by tunneling the connection from Anselm back to the user's workstation and forwarding from the workstation to the remote host.
Pick an unused port on an Anselm login node (for example 6000) and establish the port forwarding:
```
local $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
```
In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on remote.host.com. By accessing localhost:6000 on Anselm, an application will see the response of remote.host.com:1234. The traffic runs via the user's local workstation.
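Once the forward is up, it can be checked from an Anselm login node. This is a hypothetical check that assumes remote.host.com:1234 speaks HTTP; for other protocols, substitute the appropriate client:

```
$ curl http://localhost:6000/
```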
Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Anselm configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Select the Remote radio button. Insert 6000 into the Source port textbox and remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
Port forwarding may also be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com:
```
$ ssh -L 6000:localhost:1234 remote.host.com
```
Note: Port number 6000 is chosen as an example only. Pick any free port.
### Port forwarding from compute nodes
Remote port forwarding from compute nodes allows applications running on
the compute nodes to access hosts outside Anselm Cluster.
First, establish the remote port forwarding from the login node, as [described above](#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
```
$ ssh -TN -f -L 6000:localhost:6000 login1
```
In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see the response of remote.host.com:1234.
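The two steps above can be combined in a jobscript. A minimal sketch; `myapplication` and its option are hypothetical placeholders for your own program:

```
#!/bin/bash
# forward compute-node port 6000 to login1:6000, where a forward
# to remote.host.com:1234 was established beforehand
ssh -TN -f -L 6000:localhost:6000 login1
# the application now reaches remote.host.com:1234 via localhost:6000
./myapplication --server localhost:6000
```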
### Using proxy servers
Port forwarding is static: each single port is mapped to a particular port on the remote host, and a connection to another remote host requires a new forward.
Applications with built-in proxy support gain unrestricted access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:
```
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open-source [Sock Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server on port 1080, exactly as [described above](#port-forwarding-from-login-nodes):
```
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
Now configure your application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
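Applications on Anselm can then be pointed at the forwarded SOCKS endpoint. For example, curl supports SOCKS proxies directly; the target URL here is only an illustration:

```
$ curl --socks5 localhost:6000 http://example.com/
```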