From e0198c19a3130578bd25e57bc95cea6561543a52 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Krup=C4=8D=C3=ADk?= <lukas.krupcik@vsb.cz>
Date: Tue, 30 Aug 2016 08:03:45 +0200
Subject: [PATCH] oprava internich linku test

---
 docs.it4i/anselm-cluster-documentation/introduction.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index eb0862b0c..82c244bbe 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -1,13 +1,13 @@
 Introduction
 ============
 
-Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected by fully non-blocking fat-tree Infiniband network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview.html).
+Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected by fully non-blocking fat-tree Infiniband network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
-The cluster runs bullx Linux [](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)[operating system](software/operating-system.html), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of [software](software.1.html) packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules.html).
+The cluster runs bullx Linux [](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)[operating system](operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
 
 User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users.
 
-The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution.html).
+The PBS Professional workload manager provides [computing resources allocations and job execution](resource-allocation-and-job-execution/).
 
-Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources.html), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials.html) and [access the cluster](accessing-the-cluster.html).
+Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials//obtaining-login-credentials/) and [access the cluster](accessing-the-cluster/shell-and-data-access/).
 
-- 
GitLab