Commit cc35d49d authored by Pavel Jirásek

Fix resource-allocation-and-job-execution

parent 7195ef92
Showing changed files with 20 additions and 20 deletions
...@@ -50,6 +50,6 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the ANSYS solver via the i= parameter.
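For illustration, a minimal PBS header using the nodes/ppn keywords described above might look as follows; the project ID, queue, and walltime are placeholders, not values from the original script:

```bash
#!/bin/bash
# placeholder values - adjust project, queue and walltime to your allocation
#PBS -A OPEN-0-0
#PBS -q qprod
#PBS -l nodes=2:ppn=16
#PBS -l walltime=04:00:00
```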
...@@ -30,6 +30,6 @@ module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
The header of the PBS file (above) is common; its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the i= parameter.
...@@ -29,7 +29,7 @@ Instead of [running your MPI program the usual way](../mpi/), use the perf r
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../job-submission-and-execution/).
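As a hedged sketch, the same wrapper can be used inside a jobscript submitted to the queue; the module names and resource values below are assumptions, not taken from this documentation:

```bash
#!/bin/bash
#PBS -A OPEN-0-0
#PBS -q qprod
#PBS -l select=2:ncpus=16
# module names are assumptions - check `module avail` for the exact ones
module load PerformanceReports
module load openmpi
cd $PBS_O_WORKDIR
# wraps the usual MPI launch; the *.txt and *.html reports appear in the working directory
perf-report mpirun ./mympiprog.x
```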
## Example
...
...@@ -12,7 +12,7 @@ There are situations when Anselm's environment is not suitable for user needs.
* Application requires privileged access to operating system
* ... and combinations of above cases
We offer a solution for these cases - **virtualization**. Anselm's environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Virtual machines are run through the standard mechanism of [Resource Allocation and Job Execution](../job-submission-and-execution/).
The solution is based on the QEMU-KVM software stack and provides hardware-assisted x86 virtualization.
...@@ -202,7 +202,7 @@ Run script runs application from shared job directory (mapped as drive z:), proc
### Run Jobs
Run jobs as usual; see [Resource Allocation and Job Execution](../job-submission-and-execution/). Use only full-node allocations for virtualization jobs.
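For example, a full-node allocation on Anselm (16 cores per node) could be requested as follows; the project ID, walltime, and jobscript name are placeholders:

```bash
# request one complete node for a virtualization job (placeholders: project, walltime, script)
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16 -l walltime=04:00:00 ./run_virt_job.sh
```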
### Running Virtual Machines
...
...@@ -48,7 +48,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
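A hedged example of such a submission; the project ID, walltime, and jobscript name are placeholders:

```bash
# submit the Octave jobscript to PBS; results end up in output.out as described above
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16 -l walltime=01:00:00 ./jobscript
```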
The Octave C compiler mkoctfile calls GNU gcc 4.8.1 to compile native C code. This is very useful for running native C subroutines in the Octave environment.
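As a minimal sketch, assuming a hypothetical C++ source file myfunc.cc implementing an Octave DLD function:

```bash
# compile the native code into a loadable .oct file and call it from Octave
$ mkoctfile myfunc.cc
$ octave -q --eval 'disp(myfunc(1:10))'
```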
...
...@@ -66,7 +66,7 @@ Example jobscript:
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
## Parallel R
...@@ -396,4 +396,4 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
exit
```
For more information about the jobscript and MPI execution, refer to the [Job submission](../../job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
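As a rough sketch of the MPI launch line only (assuming Open MPI's mpiexec and an Rmpi-based script named rscript.R; see the linked sections for the full jobscript):

```bash
# start one R process per allocated core; each process initializes Rmpi
$ mpiexec R --slave --no-save --no-restore -f rscript.R
```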
...@@ -231,7 +231,7 @@ second one.
--project>. Project ID of your supercomputer allocation.
--queue. [Queue](../../introduction.html) to run the jobs in.
```
The input, output, and ped arguments are mandatory. If the output folder does not exist, the pipeline will create it.
...@@ -278,7 +278,7 @@ Now, we can launch the pipeline (replace OPEN-0-0 with your Project ID):
$ ngsPipeline -i /scratch/$USER/omics/sample_data/data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/data/file.ped --project OPEN-0-0 --queue qprod
```
This command submits the processing [jobs to the queue](../../job-submission-and-execution.html).
If we want to re-launch the pipeline from stage 4 to stage 20, we use the following command:
...
...@@ -45,7 +45,7 @@ In /opt/modules/modulefiles/engineering you can see installed engineering softwa
lsdyna/7.x.x openfoam/2.2.1-gcc481-openmpi1.6.5-SP
```
For information on how to use modules, please [look here](../environment-and-modules/).
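Typical module commands look like this; the lsdyna module is taken from the listing above, the rest is standard Environment Modules usage:

```bash
$ module avail          # list all available modules
$ module load lsdyna    # load the LS-DYNA module shown above
$ module list           # show the currently loaded modules
```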
## Getting Started
...@@ -112,7 +112,7 @@ Job submission
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
```
For information about job submission, please [look here](../job-submission-and-execution/).
## Running Applications in Parallel
...
...@@ -26,7 +26,7 @@ To launch the server, you must first allocate compute nodes, for example
$ qsub -I -q qprod -A OPEN-0-0 -l select=2
```
to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../job-submission-and-execution/) for details.
After the interactive session is opened, load the ParaView module:
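A hedged sketch of the following steps; the exact module name and server options may differ on the cluster, so verify with module avail:

```bash
$ module load ParaView          # module name is an assumption - verify with module avail
$ mpirun -np 32 pvserver        # start the parallel ParaView server on the allocated nodes
```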
...
...@@ -198,7 +198,7 @@ Allow incoming X11 graphics from the compute nodes at the login node:
$ xhost +
```
Get an interactive session on a compute node (for more detailed info, [look here](../../../anselm/job-submission-and-execution/)). Use the **-v DISPLAY** option to propagate the DISPLAY to the compute node. In this example, we request a complete node (24 cores) from the production queue:
```bash
$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A PROJECT_ID -q qprod -l select=1:ncpus=24
...
...@@ -47,7 +47,7 @@ echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
The header of the PBS file (above) is common; its description can be found on [this site](../../job-submission-and-execution/). SVS FEM recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file has to be a standard CFX .def file, which is passed to the CFX solver via the -def parameter.
...
...@@ -50,6 +50,6 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found on [this site](../../job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the ANSYS solver via the i= parameter.
...@@ -28,7 +28,7 @@ Instead of [running your MPI program the usual way](../mpi/mpi/), use the pe
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../job-submission-and-execution/).
## Example
...
...@@ -45,7 +45,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../).
The Octave C compiler mkoctfile calls GNU gcc 4.8.1 to compile native C code. This is very useful for running native C subroutines in the Octave environment.
...
...@@ -66,7 +66,7 @@ Example jobscript:
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
## Parallel R
...@@ -392,7 +392,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
exit
```
For more information about jobscripts and MPI execution, refer to the [Job submission](../../job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
## Xeon Phi Offload
...