* use project `PROJECT-ID` for job access and accounting
* use partition/queue `qcpu`
* use `4` nodes
* use `128` tasks per node (this value is used by MPI)
* set job time limit to `12` hours
* load the appropriate module
* run the command; `srun` serves as Slurm's native way of executing MPI-enabled applications, and `hostname` is used in the example just for the sake of simplicity
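Taken together, the directives above can be sketched as a job script. This is a minimal sketch: `PROJECT-ID` is a placeholder for a real project, and the `ml OpenMPI` line is an assumption — load whatever module your application actually needs.

```shell
#!/usr/bin/env bash
#SBATCH --account=PROJECT-ID       # project for job access and accounting
#SBATCH --partition=qcpu           # partition/queue
#SBATCH --nodes=4                  # number of nodes
#SBATCH --ntasks-per-node=128      # tasks per node, used by MPI
#SBATCH --time=12:00:00            # job time limit, 12 hours

# load the appropriate module (OpenMPI is an assumed example)
ml OpenMPI

# srun is Slurm's native launcher for MPI-enabled applications;
# hostname stands in for a real MPI program for simplicity
srun hostname
```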
The submit directory will be used as the working directory for the submitted job,
so there is no need to change directories in the job script.
Alternatively, you can specify the job working directory using the sbatch `--chdir` (or short `-D`) option.
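For example (the directory path here is hypothetical):

```shell
$ sbatch --chdir=/scratch/project/PROJECT-ID/jobdir script.sh
```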
### Job Submit
```
$ sbatch script.sh
```
A path to `script.sh` (relative or absolute) should be given
if the job script is in a different location than the job working directory.
By default, job output is stored in a file called `slurm-JOBID.out` and contains both job standard output and error output.
This can be changed using the sbatch options `--output` (or `-o`) and `--error` (or `-e`).
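For example, to write standard output and error output to separate files named after the job (in Slurm filename patterns, `%j` expands to the job ID; the `myjob` prefix is just an illustration):

```shell
$ sbatch --output=myjob.%j.out --error=myjob.%j.err script.sh
```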