diff --git a/docs.it4i/general/job-arrays.md b/docs.it4i/general/job-arrays.md
index ca224d19c76d38d745c30fff0c7cbcf3a10245b4..c6f85f9cb3570ff6b285de26b4e029b27807c326 100644
--- a/docs.it4i/general/job-arrays.md
+++ b/docs.it4i/general/job-arrays.md
@@ -58,7 +58,7 @@ tasklist file. We copy the input file to a scratch directory `/scratch/project/
 execute the myprog.x and copy the output file back to the submit directory, under the `$TASK.out` name.
 The myprog.x executable runs on one node only and must use threads to run in parallel.
 Be aware, that if the myprog.x **is not multithreaded or multi-process (MPI)**, then all the **jobs are run as single-thread programs, wasting node resources**.
 
-## Submiting Job Array
+## Submitting Job Array
 
 To submit the job array, use the `sbatch --array` command. The 900 jobs of the [example above][2] may be submitted like this:
diff --git a/mkdocs.yml b/mkdocs.yml
index 584a6a561d0e80cb80eec7aa3cd5d49751e11021..21dee1ea2b8fb6d008b64c2b77190f73d5bd8165 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -109,7 +109,7 @@ nav:
 #    - Slurm Job Submission and Execution: general/slurm-job-submission-and-execution.md
     - Capacity Computing:
       - Introduction: general/capacity-computing.md
-#      - Job Arrays: general/job-arrays.md
+      - Job Arrays: general/job-arrays.md
       - HyperQueue: general/hyperqueue.md
 #    - Parallel Computing and MPI: general/karolina-mpi.md
     - Tools: