Commit 1eec6847, authored 2 months ago by Jan Siwiec

Merge branch 'hol0598-master-patch-83551' into 'master'

Update orca.md

See merge request !482
Parents: 1d1a597d, 8b11a657
1 merge request: !482 "Update orca.md"
Pipeline #41294 failed 2 months ago (stages: test, build, deploy, after_test)
Showing 1 changed file: docs.it4i/software/chemistry/orca.md (+85 additions, −66 deletions)
...
...
@@ -31,12 +31,12 @@ Next, create a Slurm submission file for Karolina cluster (interactive job can b
 #!/bin/bash
 #SBATCH --job-name=ORCA_SERIAL
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=128
-#SBATCH --partition=qexp
+#SBATCH --partition=qcpu_exp
 #SBATCH --time=1:00:00
 #SBATCH --account=OPEN-0-0

-ml ORCA/5.0.1-OpenMPI-4.1.1
-srun orca orca_serial.inp
+ml ORCA/6.0.0-gompi-2023a-avx2
+orca orca_serial.inp
 ```

 Submit the job to the queue.
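As a side note to this hunk: submitting the updated serial script could look roughly as follows. This is only a sketch; the file name `submit_serial.slurm` is an assumption for illustration and is not part of the diff.

```bash
# Submit the serial job script (assumed to be saved as submit_serial.slurm).
sbatch submit_serial.slurm

# Watch the job while it is pending or running.
squeue -u $USER

# After completion, the ORCA output ends up in the Slurm output file,
# by default slurm-<jobid>.out in the submission directory.
cat slurm-*.out
```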
...
...
@@ -56,34 +56,39 @@ $ cat ORCA_SERIAL.o1417552
                           *****************
                           * O R C A *
                           *****************

   --- An Ab Initio, DFT and Semiempirical electronic structure package ---

-                  #######################################################
-                  #                        -***-                        #
-                  #   Department of molecular theory and spectroscopy   #
-                  #              Directorship: Frank Neese              #
-                  # Max Planck Institute for Chemical Energy Conversion #
-                  #                D-45470 Muelheim/Ruhr                #
-                  #                       Germany                       #
-                  #                                                     #
-                  #                 All rights reserved                 #
-                  #                        -***-                        #
-                  #######################################################
-
-                     Program Version 5.0.1 -  RELEASE  -
+                  #########################################################
+                  #                         -***-                         #
+                  #          Department of theory and spectroscopy        #
+                  #                                                       #
+                  #                      Frank Neese                      #
+                  #                                                       #
+                  #       Directorship, Architecture, Infrastructure      #
+                  #                    SHARK, DRIVERS                     #
+                  #          Core code/Algorithms in most modules         #
+                  #                                                       #
+                  #       Max Planck Institute fuer Kohlenforschung       #
+                  #                Kaiser Wilhelm Platz 1                 #
+                  #                 D-45470 Muelheim/Ruhr                 #
+                  #                         Germany                       #
+                  #                                                       #
+                  #                   All rights reserved                 #
+                  #                         -***-                         #
+                  #########################################################
+
+                     Program Version 6.0.0 -  RELEASE  -
 ...
              ****ORCA TERMINATED NORMALLY****
-TOTAL RUN TIME: 0 days 0 hours 0 minutes 1 seconds 47 msec
+TOTAL RUN TIME: 0 days 0 hours 0 minutes 0 seconds 980 msec
 ```
 ## Running ORCA in Parallel

 Your serial computation can be easily converted to parallel.
 Simply specify the number of parallel processes by the `%pal` directive.
-In this example, 4 nodes, 128 cores each are used.
+In this example, 1 node, 16 cores are used.

 !!! warning
     Do not use the `! PAL` directive as only PAL2 to PAL8 is recognized.
...
...
@@ -91,7 +96,7 @@ In this example, 4 nodes, 128 cores each are used.
 ```bash
 ! HF SVP
 %pal
-  nprocs 512 # 4 nodes, 128 cores each
+  nprocs 16
 end
 * xyz 0 1
     C   0   0   0
...
@@ -106,13 +111,14 @@ You have to specify number of nodes, cores, and MPI-processes to run:
 #!/bin/bash
 #SBATCH --job-name=ORCA_PARALLEL
-#SBATCH --nodes=4
-#SBATCH --ntasks-per-node=128
-#SBATCH --partition=qexp
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=16
+#SBATCH --partition=qcpu_exp
 #SBATCH --account=OPEN-0-0
 #SBATCH --time=1:00:00

-ml ORCA/5.0.1-OpenMPI-4.1.1
-srun orca orca_parallel.inp > output.out
+ml ORCA/6.0.0-gompi-2023a-avx2
+$(which orca) orca_parallel.inp > output.out
 ```

 !!! note
...
...
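The truncated `!!! note` above presumably ties the `%pal nprocs` value to the number of MPI tasks requested from Slurm. Purely as an illustrative sketch (the helper file name `pal_header.inp` and this approach are not from the diff), the task count could be derived from the Slurm environment inside the job script so the ORCA input always matches the allocation:

```bash
# Illustration only: compute the total MPI task count from the allocation
# (SLURM_NTASKS_PER_NODE is set when --ntasks-per-node is used) and emit
# a matching %pal block that can be merged into the ORCA input.
NPROCS=$(( SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE ))

cat > pal_header.inp <<EOF
%pal
  nprocs ${NPROCS}
end
EOF
```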
@@ -122,67 +128,79 @@ srun orca orca_parallel.inp > output.out
 Submit this job to the queue and see the output file.

 ```console
-$ srun submit_parallel.slurm
-1417598
-$ ll ORCA_PARALLEL.*
--rw------- 1 user user     0 Aug 21 13:12 ORCA_PARALLEL.e1417598
--rw------- 1 user user 23561 Aug 21 13:13 ORCA_PARALLEL.o1417598
-$ cat ORCA_PARALLEL.o1417598
+$ sbatch submit_parallel.slurm
+Submitted batch job 2127305
+$ cat output.out

                           *****************
                           * O R C A *
                           *****************

   --- An Ab Initio, DFT and Semiempirical electronic structure package ---

-                  #######################################################
-                  #                        -***-                        #
-                  #   Department of molecular theory and spectroscopy   #
-                  #              Directorship: Frank Neese              #
-                  # Max Planck Institute for Chemical Energy Conversion #
-                  #                D-45470 Muelheim/Ruhr                #
-                  #                       Germany                       #
-                  #                                                     #
-                  #                 All rights reserved                 #
-                  #                        -***-                        #
-                  #######################################################
-
-                     Program Version 5.0.1 -  RELEASE  -
+                  #########################################################
+                  #                         -***-                         #
+                  #          Department of theory and spectroscopy        #
+                  #                                                       #
+                  #                      Frank Neese                      #
+                  #                                                       #
+                  #       Directorship, Architecture, Infrastructure      #
+                  #                    SHARK, DRIVERS                     #
+                  #          Core code/Algorithms in most modules         #
+                  #                                                       #
+                  #       Max Planck Institute fuer Kohlenforschung       #
+                  #                Kaiser Wilhelm Platz 1                 #
+                  #                 D-45470 Muelheim/Ruhr                 #
+                  #                         Germany                       #
+                  #                                                       #
+                  #                   All rights reserved                 #
+                  #                         -***-                         #
+                  #########################################################
+
+                     Program Version 6.0.0 -  RELEASE  -
 ...
             ************************************************************
-            *      Program running with 64 parallel MPI-processes     *
+            *      Program running with 16 parallel MPI-processes     *
             *             working on a common directory               *
             ************************************************************
 ...
              ****ORCA TERMINATED NORMALLY****
-TOTAL RUN TIME: 0 days 0 hours 0 minutes 11 seconds 859 msec
+TOTAL RUN TIME: 0 days 0 hours 0 minutes 17 seconds 62 msec
 ```

-You can see, that the program was running with 512 parallel MPI-processes.
-In version 5.0.1, only the following modules are parallelized:
+You can see, that the program was running with 16 parallel MPI-processes.
+In version 6.0.0, only the following modules are parallelized:

 * ANOINT
-* CASSCF / NEVPT2
+* AUTOCI
+* CASSCF / NEVPT2 / CASSCFRESP
+* CIPSI
 * CIS/TDDFT
 * CPSCF
 * EPRNMR
-* GTOINT
-* MDCI (Canonical-, PNO-, DLPNO-Methods)
-* MP2 and RI-MP2 (including Gradient and Hessian)
+* GRAD (general Gradient program)
+* GUESS
+* LEANSCF (memory conserving SCF solver)
+* MCRPA
+* MDCI (Canonical- and DLPNO-Methods)
+* MM
+* MP2 and RI-MP2 (including Gradients)
 * MRCI
 * PC
+* PLOT
+* PNMR
+* POP
+* PROP
+* PROPINT
+* REL
 * ROCIS
 * SCF
 * SCFGRAD
-* SCFHESS
-* SOC
-* Numerical Gradients and Frequencies
+* SCFRESP (with SCFHessian)
+* STARTUP
+* VPOT
+* Numerical Gradients, Frequencies, Overtones-and-Combination-Bands
+* VPT2
+* NEB (Nudged Elastic Band)

 ## Example Submission Script
...
...
@@ -253,4 +271,5 @@ in order to take advantage of updates, announcements, and the users forum.
 A comprehensive [manual][b] is available online for registered users.

 [a]: https://orcaforum.kofo.mpg.de/app.php/portal
-[b]: https://orcaforum.kofo.mpg.de/app.php/dlext/?cat=1
+[b]: https://www.faccts.de/docs
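Since the commit bumps the documented module from `ORCA/5.0.1-OpenMPI-4.1.1` to `ORCA/6.0.0-gompi-2023a-avx2`, a quick sanity check of what is actually installed on the cluster might look like the following sketch, assuming the Lmod `ml` shorthand used elsewhere in the IT4I docs:

```bash
# List ORCA builds visible to the module system.
ml av ORCA

# Load the build referenced by the updated documentation and confirm
# which binary a submission script would pick up.
ml ORCA/6.0.0-gompi-2023a-avx2
which orca
```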