Commit 1eec6847 authored by Jan Siwiec

Merge branch 'hol0598-master-patch-83551' into 'master'

Update orca.md

See merge request !482
Parents: 1d1a597d, 8b11a657
````diff
@@ -31,12 +31,12 @@ Next, create a Slurm submission file for Karolina cluster (interactive job can b
 #!/bin/bash
 #SBATCH --job-name=ORCA_SERIAL
 #SBATCH --nodes=1
-#SBATCH --ntasks-per-node=128
-#SBATCH --partition=qexp
+#SBATCH --partition=qcpu_exp
+#SBATCH --time=1:00:00
 #SBATCH --account=OPEN-0-0
 
-ml ORCA/5.0.1-OpenMPI-4.1.1
+ml ORCA/6.0.0-gompi-2023a-avx2
 
-srun orca orca_serial.inp
+orca orca_serial.inp
 ```
 
 Submit the job to the queue.
````
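For completeness, submitting the serial job would look roughly like this; the script name `submit_serial.slurm` is an assumption (it is not named in the hunks shown here), and `squeue --me` requires a reasonably recent Slurm:

```console
$ sbatch submit_serial.slurm
Submitted batch job <job_id>
$ squeue --me --name=ORCA_SERIAL
```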
````diff
@@ -56,34 +56,39 @@ $ cat ORCA_SERIAL.o1417552
 *****************
 * O R C A *
 *****************
 
---- An Ab Initio, DFT and Semiempirical electronic structure package ---
-
-#######################################################
-# -***- #
-# Department of molecular theory and spectroscopy #
-# Directorship: Frank Neese #
-# Max Planck Institute for Chemical Energy Conversion #
-# D-45470 Muelheim/Ruhr #
-# Germany #
-# #
-# All rights reserved #
-# -***- #
-#######################################################
-
-Program Version 5.0.1 - RELEASE -
+#########################################################
+# -***- #
+# Department of theory and spectroscopy #
+# #
+# Frank Neese #
+# #
+# Directorship, Architecture, Infrastructure #
+# SHARK, DRIVERS #
+# Core code/Algorithms in most modules #
+# #
+# Max Planck Institute fuer Kohlenforschung #
+# Kaiser Wilhelm Platz 1 #
+# D-45470 Muelheim/Ruhr #
+# Germany #
+# #
+# All rights reserved #
+# -***- #
+#########################################################
+
+Program Version 6.0.0 - RELEASE -
 ...
 ****ORCA TERMINATED NORMALLY****
-TOTAL RUN TIME: 0 days 0 hours 0 minutes 1 seconds 47 msec
+TOTAL RUN TIME: 0 days 0 hours 0 minutes 0 seconds 980 msec
 ```
 
 ## Running ORCA in Parallel
 
 Your serial computation can be easily converted to parallel.
 Simply specify the number of parallel processes by the `%pal` directive.
-In this example, 4 nodes, 128 cores each are used.
+In this example, 1 node, 16 cores are used.
 
 !!! warning
     Do not use the `! PAL` directive as only PAL2 to PAL8 is recognized.
````
````diff
@@ -91,7 +96,7 @@ In this example, 4 nodes, 128 cores each are used.
 ```bash
 ! HF SVP
 %pal
-   nprocs 512 # 4 nodes, 128 cores each
+   nprocs 16
 end
 * xyz 0 1
 C 0 0 0
````
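The hunk above hard-codes `nprocs 16`, which has to agree with the task count requested from Slurm. As a sketch only (not part of this documentation), the value can be patched from the Slurm environment inside the batch script so the two cannot drift apart:

```bash
# Illustrative: set nprocs in the ORCA input to the number of tasks Slurm
# actually allocated; falls back to 16 when run outside of a job.
sed -i "s/nprocs .*/nprocs ${SLURM_NTASKS:-16}/" orca_parallel.inp
```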
````diff
@@ -106,13 +111,14 @@ You have to specify number of nodes, cores, and MPI-processes to run:
 #!/bin/bash
 #SBATCH --job-name=ORCA_PARALLEL
-#SBATCH --nodes=4
-#SBATCH --ntasks-per-node=128
-#SBATCH --partition=qexp
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=16
+#SBATCH --partition=qcpu_exp
 #SBATCH --account=OPEN-0-0
+#SBATCH --time=1:00:00
 
-ml ORCA/5.0.1-OpenMPI-4.1.1
+ml ORCA/6.0.0-gompi-2023a-avx2
 
-srun orca orca_parallel.inp > output.out
+$(which orca) orca_parallel.inp > output.out
 ```
 
 !!! note
````
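Applied, the parallel submission script from this hunk reads as follows (the blank-line placement is assumed). The binary is now invoked through `$(which orca)`, i.e. by its full path and without `srun`, which matches ORCA's requirement that parallel runs call the executable by its absolute path and let ORCA start the MPI processes itself:

```bash
#!/bin/bash
#SBATCH --job-name=ORCA_PARALLEL
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=qcpu_exp
#SBATCH --account=OPEN-0-0
#SBATCH --time=1:00:00

ml ORCA/6.0.0-gompi-2023a-avx2

$(which orca) orca_parallel.inp > output.out
```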
````diff
@@ -122,67 +128,79 @@ srun orca orca_parallel.inp > output.out
 Submit this job to the queue and see the output file.
 
 ```console
-$ srun submit_parallel.slurm
-1417598
-
-$ ll ORCA_PARALLEL.*
--rw------- 1 user user 0 Aug 21 13:12 ORCA_PARALLEL.e1417598
--rw------- 1 user user 23561 Aug 21 13:13 ORCA_PARALLEL.o1417598
-
-$ cat ORCA_PARALLEL.o1417598
+$ sbatch submit_parallel.slurm
+Submitted batch job 2127305
+
+$ cat output.out
 
 *****************
 * O R C A *
 *****************
 
---- An Ab Initio, DFT and Semiempirical electronic structure package ---
-
-#######################################################
-# -***- #
-# Department of molecular theory and spectroscopy #
-# Directorship: Frank Neese #
-# Max Planck Institute for Chemical Energy Conversion #
-# D-45470 Muelheim/Ruhr #
-# Germany #
-# #
-# All rights reserved #
-# -***- #
-#######################################################
-
-Program Version 5.0.1 - RELEASE -
+#########################################################
+# -***- #
+# Department of theory and spectroscopy #
+# #
+# Frank Neese #
+# #
+# Directorship, Architecture, Infrastructure #
+# SHARK, DRIVERS #
+# Core code/Algorithms in most modules #
+# #
+# Max Planck Institute fuer Kohlenforschung #
+# Kaiser Wilhelm Platz 1 #
+# D-45470 Muelheim/Ruhr #
+# Germany #
+# #
+# All rights reserved #
+# -***- #
+#########################################################
+
+Program Version 6.0.0 - RELEASE -
 ...
 ************************************************************
-* Program running with 64 parallel MPI-processes *
+* Program running with 16 parallel MPI-processes *
 * working on a common directory *
 ************************************************************
 ...
 ****ORCA TERMINATED NORMALLY****
-TOTAL RUN TIME: 0 days 0 hours 0 minutes 11 seconds 859 msec
+TOTAL RUN TIME: 0 days 0 hours 0 minutes 17 seconds 62 msec
 ```
````
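Once the job has finished, the converged energy can be pulled out of the output file; the grep pattern assumes a standard single-point run, and the printed value naturally depends on the input:

```console
$ grep "FINAL SINGLE POINT ENERGY" output.out
```

The remainder of the hunk, covering the surrounding prose and the list of parallelized modules, continues below.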
````diff
-You can see, that the program was running with 512 parallel MPI-processes.
+You can see that the program was running with 16 parallel MPI-processes.
 
-In version 5.0.1, only the following modules are parallelized:
+In version 6.0.0, only the following modules are parallelized:
 
-* ANOINT
-* CASSCF / NEVPT2
+* AUTOCI
+* CASSCF / NEVPT2 / CASSCFRESP
 * CIPSI
 * CIS/TDDFT
-* CPSCF
-* EPRNMR
-* GTOINT
-* MDCI (Canonical-, PNO-, DLPNO-Methods)
-* MP2 and RI-MP2 (including Gradient and Hessian)
+* GRAD (general Gradient program)
+* GUESS
+* LEANSCF (memory conserving SCF solver)
+* MCRPA
+* MDCI (Canonical- and DLPNO-Methods)
+* MM
+* MP2 and RI-MP2 (including Gradients)
 * MRCI
 * PC
+* PLOT
+* PNMR
+* POP
+* PROP
+* PROPINT
+* REL
 * ROCIS
+* SCF
 * SCFGRAD
-* SCFHESS
-* SOC
-* Numerical Gradients and Frequencies
+* SCFRESP (with SCFHessian)
+* STARTUP
+* VPOT
+* Numerical Gradients, Frequencies, Overtones-and-Combination-Bands
+* VPT2
+* NEB (Nudged Elastic Band)
 
 ## Example Submission Script
````
````diff
@@ -253,4 +271,5 @@ in order to take advantage of updates, announcements, and the users forum.
 A comprehensive [manual][b] is available online for registered users.
 
 [a]: https://orcaforum.kofo.mpg.de/app.php/portal
-[b]: https://orcaforum.kofo.mpg.de/app.php/dlext/?cat=1
+[b]: https://www.faccts.de/docs
````