```yaml
docs:
  stage: test
  image: davidhrbac/docker-mdcheck:latest
  allow_failure: true
  script:
    - mdl -r ~MD013 *.md

shellcheck:
  stage: test
  image: davidhrbac/docker-shellcheck:latest
  script:
    - which shellcheck || apt-get update && apt-get install -y shellcheck
    - shellcheck *.sh
```
Conversion of HTML documentation to md format (version 0.4) - BETA
===========================================================
The outcome of this project should be a script that downloads the existing documentation at docs.it4i.cz and converts it to the md format.
### Basic requirements
- download of the entire docs.it4i.cz directory structure
- conversion of HTML files to md files
- preservation of the same formatting as on the web
### Procedure
For your work, clone the Git repository into your working directory.
>**Problems**
> * images do not link correctly to their location in the directories (partially resolved)
> * internal and external links
> * table formatting
> * text formatting in some places
> * conversion time ~24 min
>**Changes in the new version**
> * fixed links to images and improved filtering
> * overview of the number of files and the conversion status
> * optimization, integration of the test functions into the main branch of the program
### Script functions
Downloading the files:
```bash
html_md.sh -w
```
Converting HTML to md:
```bash
html_md.sh -c
```
Testing:
```bash
html_md.sh -t
```
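The option dispatch of such a script can be sketched with bash getopts. This is a hypothetical illustration of the -w/-c/-t interface described above, not the actual html_md.sh implementation:

```bash
#!/bin/bash
# Hypothetical sketch of the html_md.sh option dispatch (-w/-c/-t)
dispatch() {
  local action=none opt OPTIND=1
  while getopts "wct" opt "$@"; do
    case $opt in
      w) action=download ;;  # download the docs.it4i.cz tree
      c) action=convert ;;   # convert HTML files to md
      t) action=test ;;      # run the conversion tests
    esac
  done
  echo "$action"
}
dispatch -c   # prints: convert
```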
Outgoing connections
====================
Connection restrictions
-----------------------
Outgoing connections, from Anselm Cluster login nodes to the outside
world, are restricted to the following ports:

| Port | Protocol |
| ---- | -------- |
| 22   | ssh      |
| 80   | http     |
| 443  | https    |
| 9418 | git      |

Please use **ssh port forwarding** and proxy servers to connect from
Anselm to all other remote ports.
Outgoing connections from Anselm Cluster compute nodes are restricted
to the internal network. Direct connections from compute nodes to the
outside world are cut.
Port forwarding
---------------
### Port forwarding from login nodes
Port forwarding allows an application running on Anselm to connect to
an arbitrary remote host and port.
It works by tunneling the connection from Anselm back to the user's
workstation and forwarding from the workstation to the remote host.
Pick some unused port on Anselm login node (for example 6000) and
establish the port forwarding:
```
local $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
```
In this example, we establish port forwarding between port 6000 on
Anselm and port 1234 on remote.host.com. By accessing
localhost:6000 on Anselm, an application will see the response of
remote.host.com:1234. The traffic will run via the user's local workstation.
Port forwarding may be done **using PuTTY** as well. On the PuTTY
Configuration screen, load your Anselm configuration first. Then go to
Connection->SSH->Tunnels to set up the port forwarding. Click the
Remote radio button. Insert 6000 into the Source port textbox and
remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
Port forwarding may also be established directly to the remote host. However,
this requires that the user has ssh access to remote.host.com:
```
$ ssh -L 6000:localhost:1234 remote.host.com
```
Note: Port number 6000 is chosen as an example only. Pick any free port.
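Before establishing the forward, you can check that your chosen port is not already taken. A small bash-only sketch using the /dev/tcp pseudo-device (port 6000 is an example, as above):

```bash
#!/bin/bash
# Probe localhost:6000; the /dev/tcp redirection only succeeds
# when something is already listening on the port
port=6000
if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
  status=busy   # something is listening, pick another port
else
  status=free   # nothing is listening, the port can be used
fi
echo "port $port is $status"
```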
### Port forwarding from compute nodes
Remote port forwarding from compute nodes allows applications running on
the compute nodes to access hosts outside Anselm Cluster.
First, establish the remote port forwarding from the login node, as
[described
above](outgoing-connections.html#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node.
Insert the following line into your jobscript or interactive shell:
```
$ ssh -TN -f -L 6000:localhost:6000 login1
```
In this example, we assume that port forwarding from login1:6000 to
remote.host.com:1234 has been established beforehand. By accessing
localhost:6000, an application running on a compute node will see the
response of remote.host.com:1234.
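Putting the two steps together, a jobscript can bring the tunnel up before the application starts. A minimal sketch, assuming the login-node forward from login1:6000 to remote.host.com:1234 is already running; `my_application` and its `--server` flag are hypothetical placeholders:

```
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=16

# hop from the compute node to login1, where the remote forward terminates
ssh -TN -f -L 6000:localhost:6000 login1

# the application now reaches remote.host.com:1234 through localhost:6000
./my_application --server localhost:6000
```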
### Using proxy servers
Port forwarding is static, each single port is mapped to a particular
port on remote host. Connection to other remote host, requires new
forward.
Applications with inbuilt proxy support, experience unlimited access to
remote hosts, via single proxy server.
To establish local proxy server on your workstation, install and run
SOCKS proxy server software. On Linux, sshd demon provides the
functionality. To establish SOCKS proxy server listening on port 1080
run:
```
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open source [Sock
Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from
Anselm to the proxy server, port 1080, exactly as [described
above](outgoing-connections.html#port-forwarding-from-login-nodes).
```
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**.
Use port forwarding to access the [proxy server from compute
nodes](outgoing-connections.html#port-forwarding-from-compute-nodes)
as well.
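With the forward in place, applications are pointed at the local end of the tunnel. For example, curl supports SOCKS directly, and many tools honor the ALL_PROXY convention (localhost:6000 matches the forward above; example.com is just an illustrative target):

```
local $ curl --socks5-hostname localhost:6000 http://example.com/

local $ export ALL_PROXY=socks5h://localhost:6000
```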
Shell access and data transfer
==============================
Interactive Login
-----------------
The Anselm cluster is accessed via the SSH protocol on login nodes login1
and login2 at the address anselm.it4i.cz. The login nodes may be addressed
specifically, by prepending the login node name to the address.

| Login address         | Port | Protocol | Login node                                   |
| --------------------- | ---- | -------- | -------------------------------------------- |
| anselm.it4i.cz        | 22   | ssh      | round-robin DNS record for login1 and login2 |
| login1.anselm.it4i.cz | 22   | ssh      | login1                                       |
| login2.anselm.it4i.cz | 22   | ssh      | login2                                       |
The authentication is by the [private
key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html).
Please verify the SSH fingerprints during the first logon. They are
identical on all login nodes:
29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
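One way to verify the fingerprints before the first logon is to scan the host keys locally. A sketch assuming an OpenSSH client new enough to support `-E md5` (6.8+), since the fingerprints above are in MD5 format:

```
local $ ssh-keyscan anselm.it4i.cz | ssh-keygen -lf - -E md5
```

Compare the printed fingerprints with the DSA and RSA values listed above.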
Private key authentication:
On **Linux** or **Mac**, use
```
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
```
If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this
command to set stricter permissions on the private key file.
```
local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh
client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:

```
        [ASCII-art "Anselm" banner]

                  http://www.it4i.cz/?lang=en

Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.anselm ~]$
```
The environment is **not** shared between login nodes, except for
[shared filesystems](../storage-1.html#section-1).
Data Transfer
-------------
Data in and out of the system may be transferred by the
[scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp
protocols. In case large volumes of data are transferred, use the
dedicated data mover node dm1.anselm.it4i.cz for increased performance
(not available yet).

| Address                                | Port | Protocol  |
| -------------------------------------- | ---- | --------- |
| anselm.it4i.cz                         | 22   | scp, sftp |
| login1.anselm.it4i.cz                  | 22   | scp, sftp |
| login2.anselm.it4i.cz                  | 22   | scp, sftp |
| dm1.anselm.it4i.cz (not available yet) | 22   | scp, sftp |
The authentication is by the [private
key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html).
Data transfer rates up to **160MB/s** can be achieved with scp or sftp;
1TB may be transferred in about 1 hour 50 minutes.
To achieve 160MB/s transfer rates, the end user must be connected by a 10G
line all the way to IT4Innovations and use a computer with a fast processor
for the transfer. Using a Gigabit Ethernet connection, up to 110MB/s may
be expected. A fast cipher (aes128-ctr) should be used.
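scp and sftp accept an explicit cipher selection via the -c option; for example, to request aes128-ctr for a transfer:

```
local $ scp -c aes128-ctr -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```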
If you experience degraded data transfer performance, consult your local
network provider.
On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
```
local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```
```
local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
```
or
```
local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
```
A very convenient way to transfer files in and out of Anselm
is via the FUSE filesystem
[sshfs](http://linux.die.net/man/1/sshfs)
```
local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
```
Using sshfs, the user's Anselm home directory will be mounted on your
local computer, just like an external disk.
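To detach the mounted directory when you are finished (same mountpoint as in the example above):

```
local $ fusermount -u mountpoint
```

On Mac, use `umount mountpoint` instead.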
Learn more about ssh, scp and sshfs by reading the manpages:
```
$ man ssh
$ man scp
$ man sshfs
```
On Windows, use the [WinSCP
client](http://winscp.net/eng/download.php) to transfer
the data. The [win-sshfs
client](http://code.google.com/p/win-sshfs/) provides a
way to mount the Anselm filesystems directly as an external disk.
More information about the shared file systems is available
[here](../../storage.html).
VPN Access
==========
Accessing IT4Innovations internal resources via VPN
---------------------------------------------------
**Failed to initialize connection subsystem (Win 8.1, 02-10-15 MS patch)**
A workaround can be found at
[https://docs.it4i.cz/vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html)
To use resources and licenses located on the IT4Innovations
local network, it is necessary to connect to this network via VPN.
We use the Cisco AnyConnect Secure Mobility Client, which is supported on
the following operating systems:
- Windows XP
- Windows Vista
- Windows 7
- Windows 8
- Linux
- MacOS
It is not possible to connect to the VPN from other operating systems.
VPN client installation
-----------------------
You can install the VPN client from the web interface after a successful login
with LDAP credentials at <https://vpn1.it4i.cz/anselm>
![](login.jpeg)
Depending on your Java settings, after login the client either
installs automatically or downloads an installation file for your
operating system. For automatic installation, it is necessary to allow
the installation tool to start.
![](java_detection.jpeg)
![Execution access](executionaccess.jpeg)
![](executionaccess2.jpeg)
After successful installation, VPN connection will be established and
you can use available resources from IT4I network.
![](successfullinstalation.jpeg)
If your Java settings do not allow automatic installation, you can
download the installation file and install the VPN client manually.
![](instalationfile.jpeg)
After you click the link, the download of the installation file will start.
![](downloadfilesuccessfull.jpeg)
After the installation file downloads successfully, you have to execute it
with administrator's rights and install the VPN client manually.
Working with VPN client
-----------------------
You can use the graphical user interface or the command line interface to run
the VPN client on all supported operating systems. We suggest using the GUI.
Before the first login to the VPN, you have to fill in the
URL **https://vpn1.it4i.cz/anselm** in the text field.
![](firstrun.jpg)
After you click the Connect button, you must enter your login
credentials.
![](logingui.jpg)
After a successful login, the client will minimize to the system tray.
If everything works, you can see a lock in the Cisco tray icon.
![](anyconnecticon.jpg)
If you right-click on this icon, you will see a context menu in which
you can control the VPN connection.
![](anyconnectcontextmenu.jpg)
When you connect to the VPN for the first time, the client downloads the
profile and creates a new item "ANSELM" in the connection list. For
subsequent connections, it is not necessary to re-enter the URL address,
but just select the corresponding item.
![](Anselmprofile.jpg)
Then AnyConnect automatically proceeds as it does during the first logon.
![](loginwithprofile.jpg)
After a successful logon, you can see a green circle with a tick mark on
the lock icon.
![](successfullconnection.jpg)
For disconnecting, right-click on the AnyConnect client icon in the
system tray and select **VPN Disconnect**.
Compute Nodes
=============
Nodes Configuration
-------------------
Anselm is a cluster of x86-64 Intel-based nodes built on the Bull Extreme
Computing bullx technology. The cluster contains four types of compute
nodes.
### Compute Nodes Without Accelerator
- 180 nodes
- 2880 cores in total
- two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
- 64 GB of physical memory per node
- one 500GB SATA 2.5" 7.2krpm HDD per node
- bullx B510 blade servers
- cn[1-180]
### Compute Nodes With GPU Accelerator
- 23 nodes
- 368 cores in total
- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
- 96 GB of physical memory per node
- one 500GB SATA 2.5" 7.2krpm HDD per node
- GPU accelerator: 1x NVIDIA Tesla Kepler K20 per node
- bullx B515 blade servers
- cn[181-203]
### Compute Nodes With MIC Accelerator
- 4 nodes
- 64 cores in total
- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
- 96 GB of physical memory per node
- one 500GB SATA 2.5" 7.2krpm HDD per node
- MIC accelerator: 1x Intel Phi 5110P per node
- bullx B515 blade servers
- cn[204-207]
### Fat Compute Nodes
- 2 nodes
- 32 cores in total
- two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
- 512 GB of physical memory per node
- two 300GB SAS 3.5" 15krpm HDDs (RAID1) per node
- two 100GB SLC SSDs per node
- bullx R423-E3 servers
- cn[208-209]
![](bullxB510.png)
**Figure: Anselm bullx B510 servers**
### Compute Nodes Summary
| Node type                  | Count | Range       | Memory | Cores       | [Access](resource-allocation-and-job-execution/resources-allocation-policy.html) |
| -------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------------------------------------------------- |
| Nodes without accelerator  | 180   | cn[1-180]   | 64GB   | 16 @ 2.4GHz | qexp, qprod, qlong, qfree                                                         |
| Nodes with GPU accelerator | 23    | cn[181-203] | 96GB   | 16 @ 2.3GHz | qgpu, qprod                                                                       |
| Nodes with MIC accelerator | 4     | cn[204-207] | 96GB   | 16 @ 2.3GHz | qmic, qprod                                                                       |
| Fat compute nodes          | 2     | cn[208-209] | 512GB  | 16 @ 2.4GHz | qfat, qprod                                                                       |
Processor Architecture
----------------------
Anselm is equipped with Intel Sandy Bridge processors: Intel Xeon E5-2665
(nodes without accelerator and fat nodes) and Intel Xeon E5-2470 (nodes
with accelerator). The processors support the Advanced Vector Extensions
(AVX) 256-bit instruction set.
### Intel Sandy Bridge E5-2665 Processor
- eight-core
- speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 19.2 Gflop/s per core
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
- memory bandwidth at the level of the processor: 51.2 GB/s
### Intel Sandy Bridge E5-2470 Processor
- eight-core
- speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 18.4 Gflop/s per core
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
- memory bandwidth at the level of the processor: 38.4 GB/s
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource
attribute cpu_freq = 24; nodes equipped with the Intel Xeon E5-2470 CPU
have the PBS resource attribute cpu_freq = 23.
```
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
In this example, we allocate 4 nodes, 16 cores at 2.4GHz per node.
Intel Turbo Boost Technology is used by default; you can disable it for
all nodes of a job by using the resource attribute cpu_turbo_boost:
```
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
Memory Architecture
-------------------
### Compute Node Without Accelerator
- 2 sockets
- Memory Controllers are integrated into processors.
- 8 DDR3 DIMMS per node
- 4 DDR3 DIMMS per CPU
- 1 DDR3 DIMMS per channel
- Data rate support: up to 1600MT/s
- Populated memory: 8x 8GB DDR3 DIMM 1600Mhz
### Compute Node With GPU or MIC Accelerator
- 2 sockets
- Memory Controllers are integrated into processors.
- 6 DDR3 DIMMS per node
- 3 DDR3 DIMMS per CPU
- 1 DDR3 DIMMS per channel
- Data rate support: up to 1600MT/s
- Populated memory: 6x 16GB DDR3 DIMM 1600Mhz
### Fat Compute Node
- 2 sockets
- Memory Controllers are integrated into processors.
- 16 DDR3 DIMMS per node
- 8 DDR3 DIMMS per CPU
- 2 DDR3 DIMMS per channel
- Data rate support: up to 1600MT/s
- Populated memory: 16x 32GB DDR3 DIMM 1600Mhz
Environment and Modules
=======================
### Environment Customization
After logging in, you may want to configure the environment. Write your
preferred path definitions, aliases, functions and module loads in the
.bashrc file
```
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
alias qs='qstat -a'
module load PrgEnv-gnu
# Display informations to standard output - only in interactive ssh session
if [ -n "$SSH_TTY" ]
then
module list # Display loaded modules
fi
```
Do not run commands that write to standard output (echo, module list,
etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental
functionality (scp, PBS) of your account! Guard such commands with a test
for SSH session interactivity, as stated in the previous example.
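An equivalent, shell-portable guard tests the shell's own option flags instead of $SSH_TTY: the special parameter $- contains the letter i only in interactive shells, so scp and PBS sessions stay silent. A minimal sketch:

```bash
# $- lists the shell's option flags; "i" is present only interactively
case "$-" in
  *i*) interactive=yes ;;  # safe to print here (module list, echo, ...)
  *)   interactive=no  ;;  # non-interactive session: keep quiet
esac
echo "interactive=$interactive"
```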
### Application Modules
In order to configure your shell for running a particular application on
Anselm, we use the Module package interface.
The modules set up the application paths, library paths and environment
variables needed to run a particular application.
We also have a second module repository, created using a tool called
EasyBuild. On the Salomon cluster, all modules will be built by this
tool. If you want to use software from this repository, please follow
the instructions in the section [Application Modules
Path Expansion](environment-and-modules.html#EasyBuild).
The modules may be loaded, unloaded and switched according to momentary
needs.
To check available modules use
```
$ module avail
```
To load a module, for example the octave module use
```
$ module load octave
```
Loading the octave module will set up the paths and environment variables
of your active shell so that you are ready to run the Octave software.
To check loaded modules use
```
$ module list
```
To unload a module, for example the octave module use
```
$ module unload octave
```
Learn more on modules by reading the module man page
```
$ man module
```
The following modules set up the development environment:
- PrgEnv-gnu sets up the GNU development environment in conjunction with
the bullx MPI library
- PrgEnv-intel sets up the Intel development environment in conjunction
with the Intel MPI library
### Application Modules Path Expansion
All application modules on the Salomon cluster (and later clusters) will
be built using a tool called
[EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild").
If you want to use applications that have already been built by
EasyBuild, you have to modify your MODULEPATH environment
variable.
```
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
```
This command expands the paths searched for modules. You can also add
this command to your .bashrc file to expand the paths permanently. After
this command, you can use the same commands to list/add/remove modules as
described above.
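If you add the export to .bashrc, an idempotent variant prevents the same directory from being appended every time the file is sourced. A sketch using the EasyBuild path given above:

```bash
# Append the EasyBuild repository to MODULEPATH only if it is missing,
# so repeated sourcing of .bashrc does not accumulate duplicates
eb=/apps/easybuild/modules/all/
case ":${MODULEPATH}:" in
  *":${eb}:"*) ;;  # already present, nothing to do
  *) export MODULEPATH="${MODULEPATH:+$MODULEPATH:}$eb" ;;
esac
```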