All IT4Innovations clusters are accessed via the SSH protocol through login nodes loginX at the address **cluster-name.it4i.cz**. A login node may be addressed specifically by prepending its name to the address.
!!! note
    The alias **cluster-name.it4i.cz** is currently not available through VPN connection. Use **loginX.cluster-name.it4i.cz** when connected to VPN.
### Anselm Cluster
| Login address | Port | Protocol | Login node |
| --------------------- | ---- | -------- | --------------------------------------|
| anselm.it4i.cz | 22 | ssh | round-robin DNS record for login[1-2] |
| login1.anselm.it4i.cz | 22 | ssh | login1 |
| login2.anselm.it4i.cz | 22 | ssh | login2 |
### Barbora Cluster
| Login address | Port | Protocol | Login node |
| ------------------------- | ---- | -------- | ------------------------------------- |
| barbora.it4i.cz | 22 | ssh | round-robin DNS record for login[1-2] |
| login1.barbora.it4i.cz | 22 | ssh | login1 |
| login2.barbora.it4i.cz | 22 | ssh | login2 |
### Salomon Cluster
| Login address | Port | Protocol | Login node |
| ---------------------- | ---- | -------- | ------------------------------------- |
| salomon.it4i.cz | 22 | ssh | round-robin DNS record for login[1-4] |
| login1.salomon.it4i.cz | 22 | ssh | login1 |
| login2.salomon.it4i.cz | 22 | ssh | login2 |
| login3.salomon.it4i.cz | 22 | ssh | login3 |
| login4.salomon.it4i.cz | 22 | ssh | login4 |
## Authentication
!!! note
    Verify SSH fingerprints during the first logon. They are identical on all login nodes:

    md5:
    29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
    d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)

    sha256:
    LX2034TYy6Lf0Q7Zf3zOIZuFlG09DaSGROGBz6LBUy4 (DSA)
    +DcED3GDoA9piuyvQOho+ltNvwB9SJSYXbB639hbejY (RSA)
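To log in, authenticate with your private SSH key; a minimal example, assuming the key is stored at /path/to/id_rsa:

```
$ ssh -i /path/to/id_rsa username@cluster-name.it4i.cz
```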
If you see a warning message **UNPROTECTED PRIVATE KEY FILE!**, use this command to set lower permissions to the private key file:
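```
$ chmod 600 /path/to/id_rsa
```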
After logging in, you will see the command prompt with the IT4Innovations ASCII-art welcome banner.
The environment is **not** shared between login nodes, except for [shared filesystems][3].
## Data Transfer

Data in and out of the system may be transferred by the [scp][a] and sftp protocols.
### Anselm Cluster
| Address                | Port | Protocol |
| ---------------------- | ---- | -------- |
| anselm.it4i.cz         | 22   | scp      |
| login1.anselm.it4i.cz  | 22   | scp      |
| login2.anselm.it4i.cz  | 22   | scp      |
### Barbora Cluster
| Address | Port | Protocol |
| ------------------------- | ---- | ------- |
| barbora.it4i.cz | 22 | scp |
| login1.barbora.it4i.cz | 22 | scp |
| login2.barbora.it4i.cz | 22 | scp |
### Salomon Cluster

| Address                | Port | Protocol  |
| ---------------------- | ---- | --------- |
| salomon.it4i.cz | 22 | scp, sftp |
| login1.salomon.it4i.cz | 22 | scp, sftp |
| login2.salomon.it4i.cz | 22 | scp, sftp |
| login3.salomon.it4i.cz | 22 | scp, sftp |
| login4.salomon.it4i.cz | 22 | scp, sftp |
If you experience degraded data transfer performance, consult your local network provider.
On Linux or Mac, use an scp or sftp client to transfer data to the cluster:
```
$ scp -i /path/to/id_rsa my-local-file username@cluster-name.it4i.cz:directory/file
$ scp -i /path/to/id_rsa -r my-local-dir username@cluster-name.it4i.cz:directory
$ sftp -o IdentityFile=/path/to/id_rsa username@cluster-name.it4i.cz
```
A very convenient way to transfer files in and out of the cluster is via the FUSE filesystem [sshfs][b].

```
$ sshfs -o IdentityFile=/path/to/id_rsa username@cluster-name.it4i.cz:. mountpoint
```

Using sshfs, the user's cluster home directory will be mounted on the local computer, just like an external disk.
Learn more about ssh, scp, and sshfs by reading the manpages:
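```
$ man ssh
$ man scp
$ man sshfs
```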
On Windows, use the [WinSCP client][c] to transfer the data. The [win-sshfs client][d] provides a way to mount the cluster filesystems directly as an external disk.
More information about the shared file systems is available [here][4].
## Connection Restrictions

Outgoing connections from the cluster login nodes to the outside world are restricted to the following ports:

| Port | Protocol |
| ---- | -------- |
| 22   | ssh      |
| 80   | http     |
| 443  | https    |
| 9418 | git      |
Use **ssh port forwarding** and proxy servers to connect from cluster to all other remote ports.
Outgoing connections from cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
## Port Forwarding

### Port Forwarding From Login Nodes

Port forwarding allows an application running on the cluster to connect to arbitrary remote hosts and ports.
It works by tunneling the connection from the cluster back to the user's workstation and forwarding from the workstation to the remote host.
Pick some unused port on the cluster login node (for example 6000) and establish the port forwarding:
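A minimal sketch, run from your local workstation (remote.host.com:1234 is a placeholder for the remote host and port you want to reach):

```
$ ssh -R 6000:remote.host.com:1234 cluster-name.it4i.cz
```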
In this example, we establish port forwarding between port 6000 on cluster and port 1234 on the remote.host.com. By accessing localhost:6000 on cluster, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your cluster configuration first. Then go to *Connection->SSH->Tunnels* to set up the port forwarding. Click the Remote radio button. Insert 6000 into the Source port textbox. Insert remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
Port forwarding may be established directly to the remote host. However, this requires that the user has SSH access to remote.host.com.

```
$ ssh -L 6000:localhost:1234 remote.host.com
```
!!! note
    Port number 6000 is chosen as an example only. Pick any free port.
### Port Forwarding From Compute Nodes

Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside the cluster.
First, establish the remote port forwarding from the login node, as [described above][5].
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
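A minimal sketch, assuming the forwarding was set up on login1 with port 6000 (both placeholders from this section):

```
$ ssh -TN -f -L 6000:localhost:6000 login1
```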
In this example, we assume that port forwarding from `login1:6000` to `remote.host.com:1234` has been established beforehand. By accessing `localhost:6000`, an application running on a compute node will see the response of `remote.host.com:1234`.
### Port Forwarding and SOCKS Proxy

Port forwarding is static: each port is mapped to a particular port on a remote host. Connecting to another remote host requires a new forward.
Applications with built-in proxy support enjoy unrestricted access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:
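```
$ ssh -D 1080 localhost
```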
On Windows, install and run the free, open source [Sock Puppet][e] server.
Once the proxy server is running, establish ssh port forwarding from cluster to the proxy server, port 1080, exactly as [described above][5]:
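Reusing the placeholder port 6000 from above:

```
$ ssh -R 6000:localhost:1080 cluster-name.it4i.cz
```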
Now, configure your application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes][5] as well.
* The [X Window system][6] is the principal way to get GUI access to the clusters.
* [Virtual Network Computing][7] is a graphical [desktop sharing][f] system that uses the [Remote Frame Buffer protocol][g] to remotely control another [computer][h].
* Access IT4Innovations internal resources via [VPN][8].
[1]: ../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
[2]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
[3]: ../anselm/storage.md#shared-filesystems
[4]: ../anselm/storage.md
[5]: #port-forwarding-from-login-nodes
[6]: ../general/accessing-the-clusters/graphical-user-interface/x-window-system.md
[7]: ../general/accessing-the-clusters/graphical-user-interface/vnc.md
[8]: ../general/accessing-the-clusters/vpn-access.md
[a]: http://en.wikipedia.org/wiki/Secure_copy
[b]: http://linux.die.net/man/1/sshfs
[c]: http://winscp.net/eng/download.php
[d]: http://code.google.com/p/win-sshfs/
[e]: http://sockspuppet.com/
[f]: http://en.wikipedia.org/wiki/Desktop_sharing
[g]: http://en.wikipedia.org/wiki/RFB_protocol
[h]: http://en.wikipedia.org/wiki/Computer