diff --git a/docs.it4i/anselm/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm/software/ansys/ansys-ls-dyna.md
index 18a0193bcbe0b49e2a6c30f5106fbef0c658e069..af46af93a30600c440e4e52cb5fdbd1edb677660 100644
--- a/docs.it4i/anselm/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/anselm/software/ansys/ansys-ls-dyna.md
@@ -50,6 +50,6 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
 ```
 
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
 
 Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ANSYS solver via parameter i=
diff --git a/docs.it4i/anselm/software/ansys/ls-dyna.md b/docs.it4i/anselm/software/ansys/ls-dyna.md
index dd5682a25ba7846e49f179f3bf72666316f97aaf..063bcf245e7b74781c953eebb309adfad5c0e48d 100644
--- a/docs.it4i/anselm/software/ansys/ls-dyna.md
+++ b/docs.it4i/anselm/software/ansys/ls-dyna.md
@@ -30,6 +30,6 @@ module load lsdyna
 /apps/engineering/lsdyna/lsdyna700s i=input.k
 ```
 
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
 
 Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA **.k** file which is attached to the LS-DYNA solver via parameter i=
diff --git a/docs.it4i/anselm/software/debuggers/allinea-performance-reports.md b/docs.it4i/anselm/software/debuggers/allinea-performance-reports.md
index ad8d74d773f621ffad8ce0af1ca3bb5000e7ece3..a05f8ad4ef9f50e570ee88174826c9a956b0d91e 100644
--- a/docs.it4i/anselm/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/anselm/software/debuggers/allinea-performance-reports.md
@@ -29,7 +29,7 @@ Instead of [running your MPI program the usual way](../mpi/), use the the perf r
 $ perf-report mpirun ./mympiprog.x
 ```
 
-The mpi program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../job-submission-and-execution/).
 
 ## Example
 
diff --git a/docs.it4i/anselm/software/kvirtualization.md b/docs.it4i/anselm/software/kvirtualization.md
index b838944b0b6c1914a55a900ef306a61e2561f340..668a12ee79aa35bc7bf384f91fbf9fd50344fc8e 100644
--- a/docs.it4i/anselm/software/kvirtualization.md
+++ b/docs.it4i/anselm/software/kvirtualization.md
@@ -12,7 +12,7 @@ There are situations when Anselm's environment is not suitable for user needs.
 * Application requires privileged access to operating system
 * ... and combinations of above cases
 
-We offer solution for these cases - **virtualization**. Anselm's environment gives the possibility to run virtual machines on compute nodes. Users can create their own images of operating system with specific software stack and run instances of these images as virtual machines on compute nodes. Run of virtual machines is provided by standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/).
+We offer a solution for these cases: **virtualization**. Anselm's environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Running virtual machines is handled by the standard [Resource Allocation and Job Execution](../job-submission-and-execution/) mechanism.
 
 Solution is based on QEMU-KVM software stack and provides hardware-assisted x86 virtualization.
 
@@ -202,7 +202,7 @@ Run script runs application from shared job directory (mapped as drive z:), proc
 
 ### Run Jobs
 
-Run jobs as usual, see [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/). Use only full node allocation for virtualization jobs.
+Run jobs as usual; see [Resource Allocation and Job Execution](../job-submission-and-execution/). Use only full-node allocations for virtualization jobs.
 
 ### Running Virtual Machines
 
diff --git a/docs.it4i/anselm/software/numerical-languages/octave.md b/docs.it4i/anselm/software/numerical-languages/octave.md
index 7624f95e5ee9092be169638cafe611a9d0c91508..19142eb0f6b9150df56c553ba395d385c4b92a47 100644
--- a/docs.it4i/anselm/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm/software/numerical-languages/octave.md
@@ -48,7 +48,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm/resource-allocation-and-job-execution).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
 
 The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c code. This is very useful for running native c subroutines in octave environment.
 
diff --git a/docs.it4i/anselm/software/numerical-languages/r.md b/docs.it4i/anselm/software/numerical-languages/r.md
index f62cad83d6f5e29a8310cef81d10eef8df6fcb60..d70ea9026f50ed82ff789a232a21de97b7b472cb 100644
--- a/docs.it4i/anselm/software/numerical-languages/r.md
+++ b/docs.it4i/anselm/software/numerical-languages/r.md
@@ -66,7 +66,7 @@ Example jobscript:
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
 
 ## Parallel R
 
@@ -396,4 +396,4 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
 exit
 ```
 
-For more information about jobscript and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
+For more information about jobscripts and MPI execution, refer to the [Job submission](../../job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
diff --git a/docs.it4i/anselm/software/omics-master/overview.md b/docs.it4i/anselm/software/omics-master/overview.md
index 7c3876e23f028126551d12c1ae7eb48e33b36e80..072994e9e862069afd8df7e42f2a661c1078a50b 100644
--- a/docs.it4i/anselm/software/omics-master/overview.md
+++ b/docs.it4i/anselm/software/omics-master/overview.md
@@ -231,7 +231,7 @@ second one.
 
 --project>. Project ID of your supercomputer allocation.
 
- --queue. [Queue](../../resource-allocation-and-job-execution/introduction.html) to run the jobs in.
+ --queue. [Queue](../../introduction.html) to run the jobs in.
 ```
 
 Input, output and ped arguments are mandatory. If the output folder does not exist, the pipeline will create it.
@@ -278,7 +278,7 @@ Now, we can launch the pipeline (replace OPEN-0-0 with your Project ID):
 $ ngsPipeline -i /scratch/$USER/omics/sample_data/data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/data/file.ped --project OPEN-0-0 --queue qprod
 ```
 
-This command submits the processing [jobs to the queue](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
+This command submits the processing [jobs to the queue](../../job-submission-and-execution.html).
 
 If we want to re-launch the pipeline from stage 4 until stage 20 we should use the next command:
 
diff --git a/docs.it4i/anselm/software/openfoam.md b/docs.it4i/anselm/software/openfoam.md
index d7394a6e9282c42fb2ab1dc07ce3395f475dd645..a2c98e3f2d84e11b0e73b3b6c7d9c083422101bb 100644
--- a/docs.it4i/anselm/software/openfoam.md
+++ b/docs.it4i/anselm/software/openfoam.md
@@ -45,7 +45,7 @@ In /opt/modules/modulefiles/engineering you can see installed engineering softwa
 lsdyna/7.x.x openfoam/2.2.1-gcc481-openmpi1.6.5-SP
 ```
 
-For information how to use modules please [look here](../environment-and-modules/ "Environment and Modules ").
+For information on how to use modules, please [look here](../environment-and-modules/).
 
 ## Getting Started
 
@@ -112,7 +112,7 @@ Job submission
 $ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
 ```
 
-For information about job submission please [look here](../resource-allocation-and-job-execution/job-submission-and-execution/ "Job submission").
+For information about job submission, please [look here](../job-submission-and-execution/).
 
 ## Running Applications in Parallel
 
diff --git a/docs.it4i/anselm/software/paraview.md b/docs.it4i/anselm/software/paraview.md
index 8d7f0552fef1a2a6d10e0f2471a153a3b9632875..7007369800f88b5c672640ee8c32952ca73d4df7 100644
--- a/docs.it4i/anselm/software/paraview.md
+++ b/docs.it4i/anselm/software/paraview.md
@@ -26,7 +26,7 @@ To launch the server, you must first allocate compute nodes, for example
 $ qsub -I -q qprod -A OPEN-0-0 -l select=2
 ```
 
-to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../resource-allocation-and-job-execution/introduction/) for details.
+to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../job-submission-and-execution/) for details.
 
 After the interactive session is opened, load the ParaView module :
 
diff --git a/docs.it4i/general/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/general/accessing-the-clusters/graphical-user-interface/vnc.md
index e516ab59cad1774c000f5c894975745fa7bb8a87..f064b2e6a89dc4b2c8290a0b552eac82ca973941 100644
--- a/docs.it4i/general/accessing-the-clusters/graphical-user-interface/vnc.md
+++ b/docs.it4i/general/accessing-the-clusters/graphical-user-interface/vnc.md
@@ -198,7 +198,7 @@ Allow incoming X11 graphics from the compute nodes at the login node:
 $ xhost +
 ```
 
-Get an interactive session on a compute node (for more detailed info [look here](../../../anselm/resource-allocation-and-job-execution/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY on the compute node. In this example, we want a complete node (24 cores in this example) from the production queue:
+Get an interactive session on a compute node (for more detailed info, [look here](../../../anselm/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY to the compute node. In this example, we request a complete node (24 cores) from the production queue:
 
 ```bash
 $ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A PROJECT_ID -q qprod -l select=1:ncpus=24
diff --git a/docs.it4i/salomon/software/ansys/ansys-cfx.md b/docs.it4i/salomon/software/ansys/ansys-cfx.md
index 0eb52d3e6f29fcf4e8ab2e37c27a7166faea2247..21ce8f93b16958a184d15af5235830e9d39406b9 100644
--- a/docs.it4i/salomon/software/ansys/ansys-cfx.md
+++ b/docs.it4i/salomon/software/ansys/ansys-cfx.md
@@ -47,7 +47,7 @@ echo Machines: $hl
 /ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../job-submission-and-execution/). SVS FEM recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
 
 Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified.
 >Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def
diff --git a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
index 5d49022a7dd18dc85af28e501cfa08b31be28272..8646c26665ea9f10d6d70405e961f1e2efe7fbb9 100644
--- a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
@@ -50,6 +50,6 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
 
 Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ansys solver via parameter i=
diff --git a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
index f79935222903cd0f98e90e9f1e923ae55c910eb0..3d0826e994bb6434b9cd0cd100249393191c03d3 100644
--- a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
@@ -28,7 +28,7 @@ Instead of [running your MPI program the usual way](../mpi/mpi/), use the the pe
 $ perf-report mpirun ./mympiprog.x
 ```
 
-The mpi program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../job-submission-and-execution/).
 
 ## Example
 
diff --git a/docs.it4i/salomon/software/numerical-languages/octave.md b/docs.it4i/salomon/software/numerical-languages/octave.md
index 1f96d0849dd4c0ffc1928bcb2406607600f06fd9..6461bc4cc003b806d0f75320d58d5c9009ab5b8b 100644
--- a/docs.it4i/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i/salomon/software/numerical-languages/octave.md
@@ -45,7 +45,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../).
 
 The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c code. This is very useful for running native c subroutines in octave environment.
 
diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index e6f9a69b4d27fd0b4b844703759b7dc15d109d8c..6a01926e1b69bdd97d695d19b7a056419408acde 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -66,7 +66,7 @@ Example jobscript:
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
 
 ## Parallel R
 
@@ -392,7 +392,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
 exit
 ```
 
-For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
+For more information about jobscripts and MPI execution, refer to the [Job submission](../../job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
 
 ## Xeon Phi Offload
 
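The pages touched above all point readers to the same PBS jobscript pattern: a `#PBS` header requesting nodes and cores, a pre-created scratch working directory, a module load, and the solver invocation. As a purely illustrative sketch of that pattern (not part of the patch itself), a minimal jobscript is shown below; the job name, project ID, queue, resource request, module, scratch path, and input file are placeholders drawn from the examples in the diff, and the exact resource syntax for each cluster should be taken from the linked job-submission-and-execution pages.

```bash
#!/bin/bash
# Minimal PBS jobscript sketch; all values are placeholders.
#PBS -N LSDYNA-example
#PBS -A OPEN-0-0                 # project ID (placeholder)
#PBS -q qprod                    # production queue (placeholder)
#PBS -l nodes=1:ppn=16           # nodes/ppn keywords described in the docs above

# The working directory must exist before the job is submitted.
cd /scratch/$USER/myjob || exit 1

# Load the application module and run the solver on the input file
# kept in the working directory (taken from the LS-DYNA example above).
module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```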