Commit 1e7e0f66 authored by Lukáš Krupčík

version 0.36

parent 8c7a3a77
Showing 79 additions and 79 deletions
......@@ -44,7 +44,7 @@ local $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
```
In this example, we establish port forwarding between port 6000 on
Anselm and port 1234 on remote.host.com. By accessing
localhost:6000 on Anselm, an application will see the response of
remote.host.com:1234. The traffic will run via the user's local workstation.
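A quick way to check the tunnel from the Anselm side might be (a sketch; it assumes the netcat utility is available and that remote.host.com:1234 accepts TCP connections):
```
$ nc -vz localhost 6000    # on Anselm: succeeds only if the tunnel reaches remote.host.com:1234
```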
......@@ -55,7 +55,7 @@ Remote radio button. Insert 6000 to Source port textbox. Insert
remote.host.com:1234. Click the Add button, then Open.
Port forwarding may be established directly to the remote host. However,
this requires that the user has SSH access to remote.host.com.
```
$ ssh -L 6000:localhost:1234 remote.host.com
......@@ -66,7 +66,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
### Port forwarding from compute nodes
Remote port forwarding from compute nodes allows applications running on
the compute nodes to access hosts outside the Anselm Cluster.
First, establish the remote port forwarding from the login node, as
[described
......@@ -80,7 +80,7 @@ $ ssh  -TN -f -L 6000:localhost:6000 login1
```
In this example, we assume that port forwarding from login1:6000 to
remote.host.com:1234 has been established beforehand. By accessing
localhost:6000, an application running on a compute node will see
the response of remote.host.com:1234.
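An application on the compute node could then reach the remote service through the chained tunnels, for example (a sketch, assuming an HTTP service listens on remote.host.com:1234 and that curl is available on the node):
```
$ curl http://localhost:6000/    # on the compute node: travels via login1 and your workstation to remote.host.com:1234
```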
......@@ -90,7 +90,7 @@ Port forwarding is static, each single port is mapped to a particular
port on the remote host. A connection to another remote host requires a new
forward.
Applications with built-in proxy support experience unlimited access to
remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run
......@@ -114,6 +114,6 @@ local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**.
Use port forwarding to access the [proxy server from compute
nodes](outgoing-connections.html#port-forwarding-from-compute-nodes)
as well.
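For command-line tools, the proxy can often be passed explicitly instead, for example (a sketch, assuming the local proxy is a SOCKS5 server and that curl is available):
```
$ curl --socks5-hostname localhost:6000 http://remote.host.com/    # route the request through the forwarded proxy
```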
Shell access and data transfer
==============================
......@@ -8,7 +8,7 @@ Shell access and data transfer
Interactive Login
-----------------
The Anselm cluster is accessed by SSH protocol via login nodes login1
and login2 at address anselm.it4i.cz. The login nodes may be addressed
specifically, by prepending the login node name to the address.
......@@ -19,7 +19,7 @@ login1.anselm.it4i.cz 22 ssh login1
login2.anselm.it4i.cz 22 ssh login2
The authentication is by the [private
key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Please verify SSH fingerprints during the first logon. They are
identical on all login nodes:
......@@ -44,7 +44,7 @@ local $ chmod 600 /path/to/id_rsa
```
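With the key permissions set, the login itself might look like the following (a sketch; `username` is a placeholder):
```
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
```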
On **Windows**, use [PuTTY ssh
client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:
......@@ -79,10 +79,10 @@ Address Port
anselm.it4i.cz 22 scp, sftp
login1.anselm.it4i.cz 22 scp, sftp
login2.anselm.it4i.cz 22 scp, sftp
class="discreet">dm1.anselm.it4i.cz class="discreet">22 <span class="discreet">scp, sftp</span>
class="discreet">dm1.anselm.it4i.cz class="discreet">22 class="discreet">scp, sftp</span>
The authentication is by the [private
key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
1TB may be transferred in 1:50h.
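A typical upload with scp could look like this (a sketch; the file name, remote directory, and `username` are placeholders):
```
local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:some-remote-directory/
```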
......
......@@ -35,37 +35,37 @@ It is impossible to connect to VPN from other operating systems.
You can install the VPN client from the web interface after a successful login
with LDAP credentials at <https://vpn1.it4i.cz/anselm>.
![](login.jpeg)
Depending on your Java settings, after login the client either
installs automatically or downloads an installation file for your
operating system. For automatic installation, it is necessary to allow the
installation tool to start.
![](java_detection.jpeg)
![Execution access](../executionaccess.jpg/@@images/4d6e7cb7-9aa7-419c-9583-6dfd92b2c015.jpeg "Execution access")
![](execution2.jpeg)
After successful installation, the VPN connection will be established and
you can use the available resources of the IT4I network.
![](successfullinstalation.jpeg)
If your Java settings do not allow automatic installation, you can
download the installation file and install the VPN client manually.
![](instalationfile.jpeg)
After you click the link, the download of the installation file will start.
![](downloadfilesuccessfull.jpeg)
After successful download of the installation file, you have to execute it
with administrator's rights and install the VPN client manually.
......@@ -76,22 +76,22 @@ Working with VPN client
You can use the graphical user interface or the command line interface to run the
VPN client on all supported operating systems. We suggest using the GUI.
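For the command line interface, a connection attempt might look like this (a sketch; the binary path and exact syntax depend on the AnyConnect version and platform):
```
$ /opt/cisco/anyconnect/bin/vpn connect vpn1.it4i.cz    # Linux example; enter your LDAP credentials when prompted
```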
![Icon](../icon.jpg "Icon")
Before the first login to the VPN, you have to fill in the
URL **https://vpn1.it4i.cz/anselm** in the text field.
![](firstrun.jpg)
After you click the Connect button, you must enter your login
credentials.
![](logingui.jpg)
After a successful login, the client will minimize to the system tray.
If everything works, you can see a lock in the Cisco tray icon.
![Successful connection](../anyconnecticon.jpg "Successful connection")
If you right-click on this icon, you will see a context menu in which
......@@ -105,18 +105,18 @@ profile and creates a new item "ANSELM" in the connection list. For
subsequent connections, it is not necessary to re-enter the URL address,
but just select the corresponding item.
![](Anselmprofile.jpg)
Then AnyConnect automatically proceeds as in the case of the first logon.
![](loginwithprofile.jpeg)
After a successful logon, you can see a green circle with a tick mark on
the lock icon.
![](successfullconnection.jpg)
To disconnect, right-click the AnyConnect client icon in the
system tray and select **VPN Disconnect**.
......@@ -15,38 +15,38 @@ nodes.****
### Compute Nodes Without Accelerator
- 180 nodes
- 2880 cores in total
- two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
- 64 GB of physical memory per node
- one 500GB SATA 2,5” 7,2 krpm HDD per node
- bullx B510 blade servers
- cn[1-180]
......@@ -54,44 +54,44 @@ nodes.****
### Compute Nodes With GPU Accelerator
- 23 nodes
- 368 cores in total
- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
- 96 GB of physical memory per node
- one 500GB SATA 2,5” 7,2 krpm HDD per node
- GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
- bullx B515 blade servers
- cn[181-203]
......@@ -99,44 +99,44 @@ nodes.****
### Compute Nodes With MIC Accelerator
- 4 nodes
- 64 cores in total
- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
- 96 GB of physical memory per node
- one 500GB SATA 2,5” 7,2 krpm HDD per node
- MIC accelerator 1x Intel Phi 5110P per node
- bullx B515 blade servers
- cn[204-207]
......@@ -144,44 +144,44 @@ nodes.****
### Fat Compute Nodes
- 2 nodes
- 32 cores in total
- 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
- 512 GB of physical memory per node
- two 300GB SAS 3,5” 15krpm HDD (RAID1) per node
- two 100GB SLC SSD per node
- bullx R423-E3 servers
- cn[208-209]
......@@ -220,10 +220,10 @@ with accelerator). Processors support Advanced Vector Extensions (AVX)
### Intel Sandy Bridge E5-2665 Processor
- eight-core
- speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 19.2 Gflop/s per core (see the note after this list)
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
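The per-core peak figures follow directly from the clock speed (a rough sanity check, assuming 8 double-precision floating-point operations per cycle with AVX on Sandy Bridge): 2.4 GHz × 8 flop/cycle = 19.2 Gflop/s per core, and likewise 2.3 GHz × 8 flop/cycle = 18.4 Gflop/s for the E5-2470 below.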
......@@ -235,10 +235,10 @@ with accelerator). Processors support Advanced Vector Extensions (AVX)
### Intel Sandy Bridge E5-2470 Processor
- eight-core
- speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 18.4 Gflop/s per core
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
......@@ -277,7 +277,7 @@ Memory Architecture
### Compute Node Without Accelerator
- 2 sockets
- Memory Controllers are integrated into processors.
  - 8 DDR3 DIMMS per node
  - 4 DDR3 DIMMS per CPU
......@@ -291,7 +291,7 @@ Memory Architecture
### Compute Node With GPU or MIC Accelerator
- 2 sockets
- Memory Controllers are integrated into processors.
  - 6 DDR3 DIMMS per node
  - 3 DDR3 DIMMS per CPU
......@@ -305,7 +305,7 @@ Memory Architecture
### Fat Compute Node
- 2 sockets
- Memory Controllers are integrated into processors.
  - 16 DDR3 DIMMS per node
  - 8 DDR3 DIMMS per CPU
......
......@@ -35,7 +35,7 @@ etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental
functionality (scp, PBS) of your account! Take care of SSH session
interactivity for such commands as stated in the previous example.
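One common pattern is to guard interactive-only commands so that non-interactive sessions (scp, PBS) are not affected (a minimal sketch, not from the original docs; the guarded command is just an example):
```
# in ~/.bashrc: run chatty, interactive-only commands solely in interactive shells,
# so that scp and PBS sessions get a clean, silent environment
if [ -n "$PS1" ]; then
    module list
fi
```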
### Application Modules
......
......@@ -10,7 +10,7 @@ of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated
nodes, 4 MIC Xeon Phi 5110 accelerated nodes and 2 fat nodes. Each node
is a powerful x86-64 computer,
equipped with 16 cores (two eight-core Intel Sandy Bridge processors),
at least 64GB RAM, and a local hard drive. The user access to the Anselm
cluster is provided by two login nodes login[1,2]. The nodes are
interlinked by high speed InfiniBand and Ethernet networks. All nodes
share 320TB /home disk storage to store the user files. The 146TB shared
......@@ -19,7 +19,7 @@ share 320TB /home disk storage to store the user files. The 146TB shared
The Fat nodes are equipped with a large amount (512GB) of memory.
Virtualization infrastructure provides resources to run long-term
servers and services in virtual mode. Fat nodes and virtual servers may
access 45 TB of dedicated block storage. Accelerated nodes, fat nodes,
and virtualization infrastructure are available [upon
request](https://support.it4i.cz/rt) made by a PI.
......@@ -348,9 +348,9 @@ disk storage available on all compute nodes /lscratch.  [More about
class="WYSIWYG_LINK">Storage](storage.html).
The user access to the Anselm cluster is provided by two login nodes
login1, login2, and data mover node dm1. [More about accessing
cluster.](accessing-the-cluster.html)
The user to the Anselm cluster is provided by two login nodes
login1, login2, and data mover node dm1. [More about ing
cluster.](ing-the-cluster.html)
The parameters are summarized in the following tables:
......
......@@ -23,7 +23,7 @@ class="WYSIWYG_LINK">Linux
family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
We have installed a wide range of
[software](software.1.html) packages targeted at
different scientific domains. These packages are accessible via the
[modules environment](environment-and-modules.html).
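Discovering and loading software typically looks like this (a sketch; the module name is illustrative):
```
$ module avail          # list the software modules installed on the cluster
$ module load intel     # load a module (here an Intel compiler suite) into the current session
```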
User data shared file-system (HOME, 320TB) and job data shared
......@@ -37,4 +37,4 @@ Read more on how to [apply for
resources](../get-started-with-it4innovations/applying-for-resources.html),
[obtain login
credentials](../get-started-with-it4innovations/obtaining-login-credentials.html),
and [access the cluster](accessing-the-cluster.html).
......@@ -20,7 +20,7 @@ high-bandwidth, low-latency
QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully
non-blocking fat-tree.
The compute nodes may be accessed via the InfiniBand network using the ib0
network interface, in the address range 10.2.1.1-209. MPI may be used to
establish a native InfiniBand connection among the nodes.
......@@ -34,7 +34,7 @@ other nodes concurrently.
Ethernet Network
----------------
The compute nodes may be accessed via the regular Gigabit Ethernet
network interface eth0, in address range 10.1.1.1-209, or by using
aliases cn1-cn209.
The network provides **114MB/s** transfer rates via the TCP connection.
......@@ -55,5 +55,5 @@ $ ssh 10.2.1.110
$ ssh 10.1.1.108
```
In this example, we access the node cn110 by the InfiniBand network via the
ib0 interface, then from cn110 to cn108 by the Ethernet network.
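On OpenSSH clients that support the -J (ProxyJump) option, the two hops can also be combined into a single command (a sketch under that assumption, run from the login node):
```
$ ssh -J 10.2.1.110 10.1.1.108    # jump through cn110 (ib0) straight to cn108 (eth0)
```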