Commit 5dcd0a4f authored by Lukáš Krupčík

add

parent 9a0dfe83
4 related merge requests:
- !368 Update prace.md to document the change from qprace to qprod as the default...
- !367 Update prace.md to document the change from qprace to qprod as the default...
- !366 Update prace.md to document the change from qprace to qprod as the default...
- !323 extended-acls-storage-section
Showing 25 additions and 1081 deletions
@@ -34,4 +34,3 @@ Deleting html/md files
html_md.sh -d -html
html_md.sh -d -md
```
docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/animated-overlay.gif (1.7 KiB)
@@ -13,30 +13,31 @@ cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected
by fully non-blocking fat-tree Infiniband network and equipped with
Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA
Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware
-Overview](https://docs.it4i.cz/anselm-cluster-documentation/hardware-overview).
+Overview](anselm-cluster-documentation/hardware-overview.html).
The cluster runs bullx Linux [<span
class="WYSIWYG_LINK"></span>](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)[operating
-system](https://docs.it4i.cz/anselm-cluster-documentation/software/operating-system),
+system](anselm-cluster-documentation/software/operating-system.html),
which is compatible with the <span class="WYSIWYG_LINK">RedHat</span>
[<span class="WYSIWYG_LINK">Linux
family.</span>](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
We have installed a wide range of
-[software](https://docs.it4i.cz/anselm-cluster-documentation/software)
+[software](anselm-cluster-documentation/software.1.html)
packages targeted at different scientific domains. These packages are
accessible via the [modules
-environment](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules).
+environment](anselm-cluster-documentation/environment-and-modules.html).
User data shared file-system (HOME, 320TB) and job data shared
file-system (SCRATCH, 146TB) are available to users.
The PBS Professional workload manager provides [computing resources
allocations and job
-execution](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution).
+execution](anselm-cluster-documentation/resource-allocation-and-job-execution.html).
Read more on how to [apply for
-resources](https://docs.it4i.cz/get-started-with-it4innovations/applying-for-resources),
+resources](get-started-with-it4innovations/applying-for-resources.html),
[obtain login
-credentials,](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials)
+credentials,](get-started-with-it4innovations/obtaining-login-credentials.html)
and [access the
-cluster](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster).
+cluster](anselm-cluster-documentation/accessing-the-cluster.html).
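For context on the modules environment and PBS allocation mentioned in this hunk, a minimal sketch of the typical workflow (the module name, project ID, queue, and node size are illustrative placeholders, not taken from this commit):

```
# Illustrative only: the names below are placeholders.
$ module avail                      # list installed software packages
$ module load intel                 # load one package into the environment
$ qsub -A PROJECT-ID -q qprod -l select=1:ncpus=16 ./myjob.sh   # request one node via PBS
```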
@@ -19,7 +19,7 @@ specifically, by prepending the login node name to the address.
login2.anselm.it4i.cz 22 ssh login2
The authentication is by the [private
-key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys)
+key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Please verify SSH fingerprints during the first logon. They are
identical on all login nodes:<span class="monospace">
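As an aside to the private-key link changed in this hunk, a hedged example of the key-based login it refers to (the key path and username are placeholders; the login node names follow the documentation above):

```
# Illustrative only: replace the key path and username with your own.
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
local $ ssh -i /path/to/id_rsa username@login2.anselm.it4i.cz   # or target a specific login node
```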
@@ -44,7 +44,7 @@ local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh
-client](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty).
+client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:
@@ -64,7 +64,7 @@ After logging in, you will see the command prompt:
The environment is **not** shared between login nodes, except for
[shared
-filesystems](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1#section-1).
+filesystems](accessing-the-cluster/storage-1.html#section-1).
Data Transfer
-------------
@@ -83,7 +83,7 @@ dm1.anselm.it4i.cz for increased performance.</span>
<span class="discreet">dm1.anselm.it4i.cz</span> <span class="discreet">22</span> <span class="discreet">scp, sftp</span>
The authentication is by the [private
-key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys)
+key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
1TB may be transferred in 1:50h.
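To illustrate the scp/sftp transfer described in this hunk, a hedged sketch (the file name and remote directory are placeholders):

```
# Illustrative only: file name and remote directory are placeholders.
local $ scp -i /path/to/id_rsa mydata.tar.gz username@dm1.anselm.it4i.cz:~/
local $ sftp -i /path/to/id_rsa username@dm1.anselm.it4i.cz
```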
@@ -138,4 +138,5 @@ client](http://code.google.com/p/win-sshfs/) provides a
way to mount the Anselm filesystems directly as an external disc.
More information about the shared file systems is available
-[here](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage).
+[here](storage.html).
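On Linux or macOS, an analogous mount can be done with the sshfs client rather than the win-sshfs tool linked above; this is a hedged sketch with placeholder paths, assuming sshfs (FUSE) is installed locally:

```
# Illustrative only: mount point and remote path are placeholders.
local $ mkdir -p ~/anselm-home
local $ sshfs username@anselm.it4i.cz:/home/username ~/anselm-home
local $ fusermount -u ~/anselm-home    # unmount when finished
```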
@@ -69,7 +69,8 @@ Remote port forwarding from compute nodes allows applications running on
the compute nodes to access hosts outside Anselm Cluster.
First, establish the remote port forwarding from the login node, as
-[described above](#port-forwarding-from-login-nodes).
+[described
+above](outgoing-connections.html#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node.
Insert following line into your jobscript or interactive shell
@@ -106,7 +107,7 @@ Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from
Anselm to the proxy server, port 1080, exactly as [described
-above](#port-forwarding-from-login-nodes).
+above](outgoing-connections.html#port-forwarding-from-login-nodes).
```
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
@@ -114,5 +115,6 @@ local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
Now, configure the applications proxy settings to **localhost:6000**.
Use port forwarding to access the [proxy server from compute
-nodes](#port-forwarding-from-compute-nodes) as well .
+nodes](outgoing-connections.html#port-forwarding-from-compute-nodes)
+as well .
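Putting the proxy steps above together, a hedged sketch of the whole chain; the compute-node command is an illustrative assumption, not quoted from the documentation:

```
# 1) Local workstation: run a SOCKS proxy on port 1080
#    (here via ssh's built-in dynamic forwarding, as one possible option).
local $ ssh -D 1080 localhost

# 2) Local workstation: forward Anselm's port 6000 back to the local proxy.
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz

# 3) Compute node (illustrative): forward its port 6000 to the login node,
#    then point applications at the SOCKS proxy on localhost:6000.
$ ssh -TN -f -L 6000:localhost:6000 login1
```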
@@ -19,7 +19,7 @@ specifically, by prepending the login node name to the address.
login2.anselm.it4i.cz 22 ssh login2
The authentication is by the [private
-key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys)
+key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Please verify SSH fingerprints during the first logon. They are
identical on all login nodes:<span class="monospace">
@@ -44,7 +44,7 @@ local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh
-client](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty).
+client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:
@@ -63,8 +63,7 @@ After logging in, you will see the command prompt:
[username@login2.anselm ~]$
The environment is **not** shared between login nodes, except for
-[shared
-filesystems](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1#section-1).
+[shared filesystems](../storage-1.html#section-1).
Data Transfer
-------------
@@ -83,7 +82,7 @@ dm1.anselm.it4i.cz for increased performance.</span>
<span class="discreet">dm1.anselm.it4i.cz</span> <span class="discreet">22</span> <span class="discreet">scp, sftp</span>
The authentication is by the [private
-key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys)
+key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
1TB may be transferred in 1:50h.
@@ -138,5 +137,5 @@ client](http://code.google.com/p/win-sshfs/) provides a
way to mount the Anselm filesystems directly as an external disc.
More information about the shared file systems is available
-[here](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage).
+[here](../../storage.html).