From 8780c8f253e0b5777157e5a7ea2ba874f4c748b5 Mon Sep 17 00:00:00 2001
From: Jan Siwiec <jan.siwiec@vsb.cz>
Date: Wed, 16 Apr 2025 07:30:51 +0200
Subject: [PATCH] added slurm job arrays

---
 docs.it4i/general/job-arrays.md | 2 +-
 mkdocs.yml                      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs.it4i/general/job-arrays.md b/docs.it4i/general/job-arrays.md
index ca224d19..c6f85f9c 100644
--- a/docs.it4i/general/job-arrays.md
+++ b/docs.it4i/general/job-arrays.md
@@ -58,7 +58,7 @@ tasklist file. We copy the input file to a scratch directory `/scratch/project/
 execute the myprog.x and copy the output file back to the submit directory, under the `$TASK.out` name.
 The myprog.x executable runs on one node only and must use threads to run in parallel.
 Be aware, that if the myprog.x **is not multithreaded or multi-process (MPI)**, then all the **jobs are run as single-thread programs, wasting node resources**.
-## Submiting Job Array
+## Submitting Job Array
 
 To submit the job array, use the `sbatch --array` command.
 The 900 jobs of the [example above][2] may be submitted like this:
diff --git a/mkdocs.yml b/mkdocs.yml
index 584a6a56..21dee1ea 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -109,7 +109,7 @@ nav:
 #  - Slurm Job Submission and Execution: general/slurm-job-submission-and-execution.md
   - Capacity Computing:
     - Introduction: general/capacity-computing.md
-#    - Job Arrays: general/job-arrays.md
+    - Job Arrays: general/job-arrays.md
     - HyperQueue: general/hyperqueue.md
 #  - Parallel Computing and MPI: general/karolina-mpi.md
   - Tools:
--
GitLab
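
For reviewers reading this patch without the surrounding page: the first hunk ends just before the submission example in `docs.it4i/general/job-arrays.md`. A minimal sketch of such a submission, assuming the jobscript from the example is saved as `jobscript` and the 900 tasks are numbered 1-900 (the file name and numbering are assumptions, not taken from this diff):

```console
$ sbatch --array=1-900 jobscript
```

Inside the jobscript, each array task would typically select its own input line via the `SLURM_ARRAY_TASK_ID` environment variable that Slurm sets for array tasks, for example `TASK=$(sed -n "${SLURM_ARRAY_TASK_ID}p" tasklist)`; the exact lookup used by the documented example is not visible in this hunk.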