@@ -10,7 +10,7 @@ MOLPRO is a software package used for accurate ab-initio quantum chemistry calcu
MOLPRO software package is available only to users that have a valid license. Contact support to enable access to MOLPRO if you have a valid license appropriate for running on our cluster (e.g. academic research group licence, parallel execution).
To run MOLPRO, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [MOLPRO website][b].
To run MOLPRO, you need to have a valid license token present in `$HOME/.molpro/token`. You can download the token from the [MOLPRO website][b].
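As a minimal sketch (assuming the downloaded token sits in your home directory as a file named `token`), placing it might look like:

```console
$ mkdir -p $HOME/.molpro
$ mv $HOME/token $HOME/.molpro/token
$ chmod 600 $HOME/.molpro/token   # keep the license token readable only by you
```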
## Installed Version
...
...
@@ -30,12 +30,12 @@ Compilation parameters are default:
## Running
MOLPRO is compiled for parallel execution using MPI and OpenMP. By default, MOLPRO reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using the -n, -t, and helper-server options. For more details, see the [MOLPRO documentation][c].
MOLPRO is compiled for parallel execution using MPI and OpenMP. By default, MOLPRO reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using the `-n`, `-t`, and `helper-server` options. For more details, see the [MOLPRO documentation][c].
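For illustration only (the process and thread counts are placeholder values, and `input.inp` is a hypothetical input file), overriding the defaults might look like:

```console
$ molpro -n 4 -t 1 input.inp   # 4 MPI processes, 1 OpenMP thread each
```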
!!! note
The OpenMP parallelization in MOLPRO is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing the mpiprocs=16:ompthreads=1 option to PBS.
The OpenMP parallelization in MOLPRO is limited and has been observed to scale poorly. We therefore recommend using MPI parallelization only. This can be achieved by passing the `mpiprocs=16:ompthreads=1` option to PBS, as in the sketch below.
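For example, a PBS resource request for MPI-only execution might look like this (the node count, walltime, and script name are placeholder values):

```console
$ qsub -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 -l walltime=01:00:00 ./job.sh
```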
You are advised to use the -d option to point to a directory in [SCRATCH file system - Salomon][1]. MOLPRO can produce a large amount of temporary data during its run, so it is important that these are placed in the fast scratch file system.
You are advised to use the `-d` option to point to a directory on the [SCRATCH file system - Salomon][1]. MOLPRO can produce a large amount of temporary data during its run, so it is important that this data is placed on the fast scratch file system.
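Putting the pieces together, a minimal job script sketch might look as follows (the scratch path layout, module name, and input file are assumptions to adapt to your environment):

```bash
#!/bin/bash
#PBS -l select=2:ncpus=16:mpiprocs=16:ompthreads=1
#PBS -l walltime=01:00:00

# load the MOLPRO module here; the exact module name depends on the installed version
# module load Molpro

# per-job working directory on the fast SCRATCH file system (path layout is an assumption)
SCRDIR=/scratch/work/user/$USER/$PBS_JOBID
mkdir -p $SCRDIR

cd $PBS_O_WORKDIR

# point MOLPRO's temporary files to scratch via -d
molpro -d $SCRDIR input.inp

# clean up scratch after the run
rm -rf $SCRDIR
```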