# Accessing the Clusters

## Shell Access

All IT4Innovations clusters are accessed via the SSH protocol through login nodes loginX at the address **cluster-name.it4i.cz**. A specific login node may be addressed by prepending its name to the address.

!!! note
    The alias **cluster-name.it4i.cz** is currently not available through a VPN connection. Use **loginX.cluster-name.it4i.cz** when connected to the VPN.

### Anselm Cluster

| Login address          | Port | Protocol | Login node                            |
| ---------------------- | ---- | -------- | ------------------------------------- |
| anselm.it4i.cz         | 22   | ssh      | round-robin DNS record for login[1-2] |
| login1.anselm.it4i.cz  | 22   | ssh      | login1                                |
| login2.anselm.it4i.cz  | 22   | ssh      | login2                                |

### Barbora Cluster

| Login address          | Port | Protocol | Login node                            |
| ---------------------- | ---- | -------- | ------------------------------------- |
| barbora.it4i.cz        | 22   | ssh      | round-robin DNS record for login[1-2] |
| login1.barbora.it4i.cz | 22   | ssh      | login1                                |
| login2.barbora.it4i.cz | 22   | ssh      | login2                                |

### Salomon Cluster

| Login address          | Port | Protocol | Login node                            |
| ---------------------- | ---- | -------- | ------------------------------------- |
| salomon.it4i.cz        | 22   | ssh      | round-robin DNS record for login[1-4] |
| login1.salomon.it4i.cz | 22   | ssh      | login1                                |
| login2.salomon.it4i.cz | 22   | ssh      | login2                                |
| login3.salomon.it4i.cz | 22   | ssh      | login3                                |
| login4.salomon.it4i.cz | 22   | ssh      | login4                                |

## Authentication

Authentication is by [private key][1] only. Verify the SSH fingerprints during the first logon:

Anselm:

```console
md5:
29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
1a:19:75:31:ab:53:45:53:ce:35:82:13:29:e4:0d:d5 (ECDSA)

sha256:
LX2034TYy6Lf0Q7Zf3zOIZuFlG09DaSGROGBz6LBUy4 (DSA)
+DcED3GDoA9piuyvQOho+ltNvwB9SJSYXbB639hbejY (RSA)
2Keuu9gzrcs1K8pu7ljm2wDdUXU6f+QGGSs8pyrMM3M (ECDSA)
```

Salomon:

```console
md5:
f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)
66:32:0a:ef:50:01:77:a7:52:3f:d9:f8:23:7c:2c:3a (ECDSA)

sha256:
epkqEU2eFzXnMeMMkpX02CykyWjGyLwFj528Vumpzn4 (DSA)
WNIrR7oeQDYpBYy4N2d5A6cJ2p0837S7gzzTpaDBZrc (RSA)
cYO4UdtUBYlS46GEFUB75BkgxkI6YFQvjVuFxOlRG3g (ECDSA)
```

!!! note
    SSH fingerprints are identical on all login nodes.

Private key authentication:

On **Linux** or **Mac**, use:

```console
$ ssh -i /path/to/id_rsa username@cluster-name.it4i.cz
```

If you see the warning message **UNPROTECTED PRIVATE KEY FILE!**, use this command to set stricter permissions on the private key file:

```console
$ chmod 600 /path/to/id_rsa
```

On **Windows**, use the [PuTTY ssh client][2].

After logging in, you will see the command prompt:

```console
 ___   _____   _  _     ___                                           _     _
|_ _| |_   _| | || |   |_ _|  _ __    _ __     ___   __   __   __ _  | |_  (_)   ___    _ __    ___
 | |    | |   | || |_   | |  | '_ \  | '_ \   / _ \  \ \ / /  / _` | | __| | |  / _ \  | '_ \  / __|
 | |    | |   |__   _|  | |  | | | | | | | | | (_) |  \ V /  | (_| | | |_  | | | (_) | | | | | \__ \
|___|   |_|      |_|   |___| |_| |_| |_| |_|  \___/    \_/    \__,_|  \__| |_|  \___/  |_| |_| |___/

                        http://www.it4i.cz/?lang=en

Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.cluster-name ~]$
```

!!! note
    The environment is **not** shared between login nodes, except for [shared filesystems][3].
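For frequent access, the connection options shown above may be stored in your SSH client configuration, so that a short alias replaces the full command. A minimal sketch, reusing the illustrative key path from above; the `it4i` host alias is an arbitrary, hypothetical name:

```console
$ cat >> ~/.ssh/config <<'EOF'
Host it4i
    HostName cluster-name.it4i.cz
    User username
    IdentityFile /path/to/id_rsa
EOF
$ ssh it4i
```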
## Data Transfer

Data in and out of the system may be transferred by the [scp][a] and sftp protocols.

### Anselm Cluster

| Address                | Port | Protocol |
| ---------------------- | ---- | -------- |
| anselm.it4i.cz         | 22   | scp      |
| login1.anselm.it4i.cz  | 22   | scp      |
| login2.anselm.it4i.cz  | 22   | scp      |

### Barbora Cluster

| Address                | Port | Protocol |
| ---------------------- | ---- | -------- |
| barbora.it4i.cz        | 22   | scp      |
| login1.barbora.it4i.cz | 22   | scp      |
| login2.barbora.it4i.cz | 22   | scp      |

### Salomon Cluster

| Address                | Port | Protocol  |
| ---------------------- | ---- | --------- |
| salomon.it4i.cz        | 22   | scp, sftp |
| login1.salomon.it4i.cz | 22   | scp, sftp |
| login2.salomon.it4i.cz | 22   | scp, sftp |
| login3.salomon.it4i.cz | 22   | scp, sftp |
| login4.salomon.it4i.cz | 22   | scp, sftp |

Authentication is by [private key][1] only.

!!! note
    If you experience degraded data transfer performance, consult your local network provider.

On **Linux** or **Mac**, use an scp or sftp client to transfer data to the cluster:

```console
$ scp -i /path/to/id_rsa my-local-file username@cluster-name.it4i.cz:directory/file
```

```console
$ scp -i /path/to/id_rsa -r my-local-dir username@cluster-name.it4i.cz:directory
```

or

```console
$ sftp -o IdentityFile=/path/to/id_rsa username@cluster-name.it4i.cz
```

A very convenient way to transfer files in and out of the cluster is via the FUSE filesystem [sshfs][b]:

```console
$ sshfs -o IdentityFile=/path/to/id_rsa username@cluster-name.it4i.cz:. mountpoint
```

Using sshfs, the user's cluster home directory will be mounted on the local computer, just like an external disk.

Learn more about ssh, scp, and sshfs by reading the manpages:

```console
$ man ssh
$ man scp
$ man sshfs
```

On **Windows**, use the [WinSCP client][c] to transfer data. The [win-sshfs client][d] provides a way to mount the cluster filesystems directly, as an external disk.

More information about the shared file systems is available [here][4].

## Connection Restrictions

Outgoing connections from the cluster login nodes to the outside world are restricted to the following ports:

| Port | Protocol |
| ---- | -------- |
| 22   | ssh      |
| 80   | http     |
| 443  | https    |
| 9418 | git      |

!!! note
    Use **ssh port forwarding** and proxy servers to connect from the cluster to all other remote ports.

Outgoing connections from cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.

## Port Forwarding

### Port Forwarding From Login Nodes

!!! note
    Port forwarding allows an application running on the cluster to connect to arbitrary remote hosts and ports. It works by tunneling the connection from the cluster back to the user's workstation and forwarding from the workstation to the remote host.

Pick some unused port on the cluster login node (for example 6000) and establish the port forwarding:

```console
$ ssh -R 6000:remote.host.com:1234 cluster-name.it4i.cz
```

In this example, we establish port forwarding between port 6000 on the cluster and port 1234 on remote.host.com. By accessing localhost:6000 on the cluster, an application will see the response of remote.host.com:1234. The traffic runs via the user's local workstation.

Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your cluster configuration first. Then go to *Connection->SSH->Tunnels* to set up the port forwarding. Click the Remote radio button, insert 6000 into the Source port textbox and remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
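Whichever client establishes the tunnel, it can be checked from the cluster side. A quick sketch, assuming the forwarding above is active and a web service actually answers on remote.host.com:1234 (both hostname and service are illustrative):

```console
$ ss -tln | grep 6000            # the forwarded port is listening on the login node
$ curl http://localhost:6000/    # the reply comes from remote.host.com:1234
```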
Port forwarding may also be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com:

```console
$ ssh -L 6000:localhost:1234 remote.host.com
```

!!! note
    Port number 6000 is chosen as an example only. Pick any free port.

### Port Forwarding From Compute Nodes

Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside the cluster.

First, establish the remote port forwarding from the login node, as [described above][5].

Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:

```console
$ ssh -TN -f -L 6000:localhost:6000 login1
```

In this example, we assume that port forwarding from `login1:6000` to `remote.host.com:1234` has been established beforehand. By accessing `localhost:6000`, an application running on a compute node will see the response of `remote.host.com:1234`.

### Using Proxy Servers

Port forwarding is static: each single port is mapped to a particular port on a remote host. Connecting to another remote host requires a new forward.

!!! note
    Applications with built-in proxy support gain unrestricted access to remote hosts via a single proxy server.

To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the ssh client in combination with the local sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:

```console
$ ssh -D 1080 localhost
```

On Windows, install and run the free, open-source [Sock Puppet][e] server.

Once the proxy server is running, establish ssh port forwarding from the cluster to the proxy server on port 1080, exactly as [described above][5]:

```console
$ ssh -R 6000:localhost:1080 cluster-name.it4i.cz
```

Now, configure your application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes][5] as well.

## Graphical User Interface

* The [X Window system][6] is the principal way to get GUI access to the clusters.
* [Virtual Network Computing][7] is a graphical [desktop sharing][f] system that uses the [Remote Frame Buffer protocol][g] to remotely control another [computer][h].

## VPN Access

* Access IT4Innovations internal resources via [VPN][8].

[1]: ../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
[2]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
[3]: ../anselm/storage.md#shared-filesystems
[4]: ../anselm/storage.md
[5]: #port-forwarding-from-login-nodes
[6]: ../general/accessing-the-clusters/graphical-user-interface/x-window-system.md
[7]: ../general/accessing-the-clusters/graphical-user-interface/vnc.md
[8]: ../general/accessing-the-clusters/vpn-access.md

[a]: http://en.wikipedia.org/wiki/Secure_copy
[b]: http://linux.die.net/man/1/sshfs
[c]: http://winscp.net/eng/download.php
[d]: http://code.google.com/p/win-sshfs/
[e]: http://sockspuppet.com/
[f]: http://en.wikipedia.org/wiki/Desktop_sharing
[g]: http://en.wikipedia.org/wiki/RFB_protocol
[h]: http://en.wikipedia.org/wiki/Computer
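As an end-to-end check of the setup from the Using Proxy Servers section above, an application with SOCKS support can be pointed at the forwarded port. A minimal sketch, assuming the `ssh -R 6000:localhost:1080` forward is in place and curl is available on the login node; example.com stands in for any destination reachable through your workstation:

```console
$ curl --socks5 localhost:6000 http://example.com/
```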