From 5bb0eb468ee7c52d463c407daddbb6cd590bfa15 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Krup=C4=8D=C3=ADk?= <lukas.krupcik@vsb.cz>
Date: Fri, 26 Oct 2018 13:16:36 +0200
Subject: [PATCH] fix

---
 docs.it4i/anselm/shell-and-data-access.md     |  4 ++--
 .../shell-access-and-data-transfer/putty.md   |  2 +-
 docs.it4i/salomon/capacity-computing.md       | 16 ++++++++--------
 docs.it4i/salomon/ib-single-plane-topology.md |  4 ++--
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs.it4i/anselm/shell-and-data-access.md b/docs.it4i/anselm/shell-and-data-access.md
index 738c022dd..aa8a03872 100644
--- a/docs.it4i/anselm/shell-and-data-access.md
+++ b/docs.it4i/anselm/shell-and-data-access.md
@@ -10,7 +10,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2
 | login1.anselm.it4i.cz | 22 | ssh | login1 |
 | login2.anselm.it4i.cz | 22 | ssh | login2 |
 
-Authentication is by [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
+Authentication is by [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
 
 !!! note
     Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
@@ -39,7 +39,7 @@ If you see a warning message "UNPROTECTED PRIVATE KEY FILE!", use this command t
 $ chmod 600 /path/to/id_rsa
 ```
 
-On **Windows**, use [PuTTY ssh client](general/accessing-the-clusters/shell-access-and-data-transfer/putty.md).
+On **Windows**, use [PuTTY ssh client](../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md).
 
 After logging in, you will see the command prompt:
 
diff --git a/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index b3c27ffc1..c66a64842 100644
--- a/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -16,7 +16,7 @@ We recommned you to download "**A Windows installer for everything except PuTTYt
 ## PuTTY - How to Connect to the IT4Innovations Cluster
 
 * Run PuTTY
-* Enter Host name and Save session fields with [Login address](salomon/shell-and-data-access.md) and browse Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+* Enter Host name and Save session fields with [Login address](shell-and-data-access.md) and browse Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 
 
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index e2121b61d..f7be485d1 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -9,13 +9,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-* Use [Job arrays](salomon/capacity-computing.md#job-arrays) when running huge number of [multithread](salomon/capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-* Use [GNU parallel](salomon/capacity-computing/#gnu-parallel) when running single core jobs
-* Combine [GNU parallel with Job arrays](salomon/capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](#job-arrays) when running huge number of [multithread](#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [GNU parallel](#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 ## Policy
 
-1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](salomon/capacity-computing/#job-arrays).
+1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](#job-arrays).
 1. The array size is at most 1500 subjobs.
 
 ## Job Arrays
@@ -76,7 +76,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
 ### Submit the Job Array
 
-To submit the job array, use the qsub -J command. The 900 jobs of the [example above](salomon/capacity-computing/#array_example) may be submitted like this:
+To submit the job array, use the qsub -J command. The 900 jobs of the [example above](#array_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME -J 1-900 jobscript
 ```
@@ -209,7 +209,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The jobs
 ### Submit the Job
 
-To submit the job, use the qsub command. The 101 tasks' job of the [example above](salomon/capacity-computing/#gp_example) may be submitted like this:
+To submit the job, use the qsub command. The 101 tasks' job of the [example above](#gp_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME jobscript
 ```
@@ -294,7 +294,7 @@ When deciding this values, think about following guiding rules :
 ### Submit the Job Array (-J)
 
-To submit the job array, use the qsub -J command. The 960 tasks' job of the [example above](salomon/capacity-computing/#combined_example) may be submitted like this:
+To submit the job array, use the qsub -J command. The 960 tasks' job of the [example above](#combined_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME -J 1-960:48 jobscript
 ```
@@ -308,7 +308,7 @@ In this example, we submit a job array of 20 subjobs. Note the -J 1-960:48, thi
 
 ## Examples
 
-Download the examples in [capacity.zip](salomon/capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on the cluster and follow the instructions in the README file
 
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 5da7cee75..ab945605f 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -12,7 +12,7 @@ The SGI ICE X IB Premium Blade provides the first level of interconnection via d
 
 Each color in each physical IRU represents one dual-switch ASIC switch.
 
-[IB single-plane topology - ICEX Mcell.pdf](src/IB single-plane topology - ICEX Mcell.pdf)
+[IB single-plane topology - ICEX Mcell.pdf](../src/IB single-plane topology - ICEX Mcell.pdf)
 
 
 
@@ -26,6 +26,6 @@ As shown in a diagram [IB Topology](salomon/7d-enhanced-hypercube/#ib-topology)
 * Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
 * Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
 
-[IB single-plane topology - Accelerated nodes.pdf](src/IB single-plane topology - Accelerated nodes.pdf)
+[IB single-plane topology - Accelerated nodes.pdf](../src/IB single-plane topology - Accelerated nodes.pdf)
 
 
-- 
GitLab
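
A quick way to validate the link fixes above is to run the patch through git's own checks before merging. A minimal sketch, assuming the mail is saved as `0001-fix.patch` (the name `git format-patch` would derive from the subject line; the file name itself is an assumption, not part of the patch):

```console
$ git apply --stat 0001-fix.patch    # print the diffstat only; the working tree is not modified
$ git apply --check 0001-fix.patch   # dry run: report whether the patch applies cleanly, change nothing
$ git am 0001-fix.patch              # apply and commit, keeping the author and date from the mail headers
```

`git am` is the natural choice here because the document is a full mailbox-format patch, so the original authorship and commit message are preserved; `git apply` alone would only modify the working tree.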