# ORCA
## Introduction
To see the currently available ORCA modules, use:

```console
$ ml av orca
```
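A particular version is then loaded before running ORCA; for example, the version used throughout this page:

```console
$ ml ORCA/5.0.1-OpenMPI-4.1.1
```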
## Serial Computation With ORCA
You can test a serial computation with this simple input file.
Create a file called `orca_serial.inp` and paste into it the following ORCA commands:
```bash
! HF SVP
...
*
```
Next, create a Slurm submission file for the Karolina cluster (an interactive job can be used too; see the sketch below the script):
```bash
#!/bin/bash
#SBATCH --job-name=ORCA_SERIAL
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --partition=qexp
#SBATCH --account=OPEN-0-0

ml ORCA/5.0.1-OpenMPI-4.1.1

# ORCA manages its own processes; launch the binary directly
orca orca_serial.inp
```
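As mentioned above, the same computation can also be run from an interactive job. A minimal sketch, assuming `salloc` opens a shell with access to the allocated node (the account and partition are the same placeholders as in the script above):

```console
$ salloc --account=OPEN-0-0 --partition=qexp --nodes=1 --ntasks-per-node=128
$ ml ORCA/5.0.1-OpenMPI-4.1.1
$ orca orca_serial.inp
```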
Submit the job to the queue.
After the job ends, you can find an output log in your working directory:
```console
$ sbatch submit_serial.slurm
Submitted batch job 1417552
$ ll ORCA_SERIAL.*
-rw------- 1 user user 0 Aug 21 12:24 ORCA_SERIAL.e1417552
...
TOTAL RUN TIME: 0 days 0 hours 0 minutes 1 seconds 47 msec
```
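Whether the job has already finished can be checked with Slurm's accounting tools, for example (the job ID is the one reported by `sbatch`):

```console
$ sacct -j 1417552 --format=JobID,JobName,State,Elapsed
```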
## Running ORCA in Parallel
Your serial computation can be easily converted to parallel.
Simply specify the number of parallel processes with the `%pal` directive.
In this example, 4 nodes with 128 cores each are used (512 processes in total).
!!! warning
    Do not use the `! PAL` directive, as only PAL2 to PAL8 are recognized.

```bash
! HF SVP
%pal
   nprocs 512
end
...
*
```
You also need to edit the previously used Slurm submission file.
You have to specify the number of nodes, cores, and MPI processes to run:
```bash
#!/bin/bash
#SBATCH --job-name=ORCA_PARALLEL
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH --partition=qexp
#SBATCH --account=OPEN-0-0

ml ORCA/5.0.1-OpenMPI-4.1.1

# call ORCA via its full path; it spawns the MPI processes itself
$(which orca) orca_parallel.inp > output.out
```
!!! note
    When running ORCA in parallel, ORCA should **NOT** be started with `mpirun` (e.g. `mpirun -np 4 orca`, etc.)
    as is common for MPI programs, and **has to be called with a full pathname**.
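
The `nprocs` value in the input must correspond to the total number of tasks allocated by Slurm (4 nodes × 128 tasks per node = 512 here). A minimal sanity check that could be added to the submission script before the ORCA call is shown below; the check itself is only a suggestion, not part of the original example:

```bash
# abort early if the Slurm allocation does not match the nprocs requested in the ORCA input
if [ "$SLURM_NTASKS" -ne 512 ]; then
    echo "Allocated $SLURM_NTASKS tasks, but the input requests 512 parallel processes." >&2
    exit 1
fi
```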
Submit this job to the queue and see the output file.
```console
$ sbatch submit_parallel.slurm
Submitted batch job 1417598
$ ll ORCA_PARALLEL.*
-rw------- 1 user user 0 Aug 21 13:12 ORCA_PARALLEL.e1417598
...
$ cat ORCA_PARALLEL.o1417598
...
TOTAL RUN TIME: 0 days 0 hours 0 minutes 11 seconds 859 msec
```
You can see that the program was running with 512 parallel MPI processes.
In version 5.0.1, only the following modules are parallelized:
* ANOINT
* CASSCF / NEVPT2
* ...
## Example Submission Script
The following script contains all of the necessary instructions to run an ORCA job,
including copying the files to and from `/scratch` to utilize the InfiniBand network:
```bash
#!/bin/bash
#SBATCH --account=OPEN-00-00
#SBATCH --job-name=example-CO
#SBATCH --partition=qexp
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00

ml purge
ml ORCA/5.0.1-OpenMPI-4.1.1

echo $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR

# create /scratch dir
b=$(basename $SLURM_SUBMIT_DIR)
SCRDIR=/scratch/project/OPEN-00-00/$USER/${b}_${SLURM_JOBID}/
echo $SCRDIR
mkdir -p $SCRDIR
cd $SCRDIR || exit

# get number of cores allocated to our job
ncpus=$(sacct -j $SLURM_JOBID --format=AllocCPUS --noheader | head -1)

### create ORCA input file
cat > ${SLURM_JOB_NAME}.inp <<EOF
! HF def2-TZVP
%pal
   nprocs $ncpus
end
...
EOF
###

# copy input files to /scratch
cp -r $SLURM_SUBMIT_DIR/* .

# run calculations
$(which orca) ${SLURM_JOB_NAME}.inp > $SLURM_SUBMIT_DIR/${SLURM_JOB_NAME}.out

# copy output files to home, delete the rest
cp * $SLURM_SUBMIT_DIR/ && cd $SLURM_SUBMIT_DIR
rm -rf $SCRDIR
exit
```
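Assuming the script above is saved as, for example, `submit_example.slurm` (the file name is arbitrary), it is submitted in the usual way and the job can be monitored with `squeue`:

```console
$ sbatch submit_example.slurm
$ squeue -u $USER
```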
## Register as a User
You are encouraged to register as a user of ORCA [here][a]
in order to take advantage of updates, announcements, and the users forum.
## Documentation