diff --git a/docs.it4i/anselm/shell-and-data-access.md b/docs.it4i/anselm/shell-and-data-access.md
index 738c022dd92ba06dd167fcffe527a08549191eaf..aa8a038727cdf82e59dfddd8fa2d17bbd37a4c9a 100644
--- a/docs.it4i/anselm/shell-and-data-access.md
+++ b/docs.it4i/anselm/shell-and-data-access.md
@@ -10,7 +10,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2
 | login1.anselm.it4i.cz | 22 | ssh | login1 |
 | login2.anselm.it4i.cz | 22 | ssh | login2 |
 
-Authentication is by [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
+Authentication is by [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
 
 !!! note
     Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
@@ -39,7 +39,7 @@ If you see a warning message "UNPROTECTED PRIVATE KEY FILE!", use this command t
 $ chmod 600 /path/to/id_rsa
 ```
 
-On **Windows**, use [PuTTY ssh client](general/accessing-the-clusters/shell-access-and-data-transfer/putty.md).
+On **Windows**, use [PuTTY ssh client](../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md).
 
 After logging in, you will see the command prompt:
 
diff --git a/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index b3c27ffc14b5487d26a19a0243b8e863f239f0c7..c66a6484228ae91246600adeefb162cd7e7e30c1 100644
--- a/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -16,7 +16,7 @@ We recommned you to download "**A Windows installer for everything except PuTTYt
 ## PuTTY - How to Connect to the IT4Innovations Cluster
 
 * Run PuTTY
-* Enter Host name and Save session fields with [Login address](salomon/shell-and-data-access.md) and browse Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+* Enter Host name and Save session fields with [Login address](shell-and-data-access.md) and browse Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 
 
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index e2121b61d51b8143221dbfca041f67cd71c44519..f7be485d1731627aebf9a050a636e68c618cb79c 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -9,13 +9,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-* Use [Job arrays](salomon/capacity-computing.md#job-arrays) when running huge number of [multithread](salomon/capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-* Use [GNU parallel](salomon/capacity-computing/#gnu-parallel) when running single core jobs
-* Combine [GNU parallel with Job arrays](salomon/capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](#job-arrays) when running huge number of [multithread](#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [GNU parallel](#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 ## Policy
 
-1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](salomon/capacity-computing/#job-arrays).
+1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](#job-arrays).
 1. The array size is at most 1500 subjobs.
 
 ## Job Arrays
@@ -76,7 +76,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
 
 ### Submit the Job Array
 
-To submit the job array, use the qsub -J command. The 900 jobs of the [example above](salomon/capacity-computing/#array_example) may be submitted like this:
+To submit the job array, use the qsub -J command. The 900 jobs of the [example above](#array_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME -J 1-900 jobscript
@@ -209,7 +209,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The jobs
 
 ### Submit the Job
 
-To submit the job, use the qsub command. The 101 tasks' job of the [example above](salomon/capacity-computing/#gp_example) may be submitted like this:
+To submit the job, use the qsub command. The 101 tasks' job of the [example above](#gp_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME jobscript
@@ -294,7 +294,7 @@ When deciding this values, think about following guiding rules :
 
 ### Submit the Job Array (-J)
 
-To submit the job array, use the qsub -J command. The 960 tasks' job of the [example above](salomon/capacity-computing/#combined_example) may be submitted like this:
+To submit the job array, use the qsub -J command. The 960 tasks' job of the [example above](#combined_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME -J 1-960:48 jobscript
@@ -308,7 +308,7 @@ In this example, we submit a job array of 20 subjobs. Note the -J 1-960:48, thi
 
 ## Examples
 
-Download the examples in [capacity.zip](salomon/capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on the cluster and follow the instructions in the README file
 
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 5da7cee75859316d6fcd29247beb1a1d29810974..ab945605ff0deda9eadc72d15a758dc036e7cdc0 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -12,7 +12,7 @@ The SGI ICE X IB Premium Blade provides the first level of interconnection via d
 
 Each color in each physical IRU represents one dual-switch ASIC switch.
 
-[IB single-plane topology - ICEX Mcell.pdf](src/IB single-plane topology - ICEX Mcell.pdf)
+[IB single-plane topology - ICEX Mcell.pdf](../src/IB single-plane topology - ICEX Mcell.pdf)
 
 
 
@@ -26,6 +26,6 @@ As shown in a diagram [IB Topology](salomon/7d-enhanced-hypercube/#ib-topology)
 * Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
 * Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
 
-[IB single-plane topology - Accelerated nodes.pdf](src/IB single-plane topology - Accelerated nodes.pdf)
+[IB single-plane topology - Accelerated nodes.pdf](../src/IB single-plane topology - Accelerated nodes.pdf)
 
 
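Follow-up check (not part of the patch): a minimal sketch of how these relative-link fixes could be verified, assuming the docs.it4i tree is built with MkDocs; the grep pattern below is only an illustration and may need tuning for other link styles.

```console
$ # list any Markdown links that still hard-code a top-level directory instead of a relative path
$ grep -RInE '\]\((salomon|anselm|general)/' docs.it4i/
$ # rebuild the site; --strict turns build warnings, such as links to missing pages, into errors
$ mkdocs build --strict
```

The bare `#job-arrays`-style anchors introduced above are the slugs MkDocs derives from the section headings (e.g. `## Job Arrays`), so they stay valid as long as the headings keep their wording.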