Commit 75378e2f authored by David Hrbáč

Links OK

parent 59f056e1
@@ -4,13 +4,13 @@ Molpro is a complete system of ab initio programs for molecular electronic struc
## About Molpro
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage][a].
## License
The Molpro software package is available only to users who have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group license, parallel execution).
To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website][b].
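A minimal sketch of installing the token, assuming it has been downloaded to a file named token in your home directory (the download location is only an example):

```console
$ mkdir -p $HOME/.molpro
$ mv ~/token $HOME/.molpro/token    # downloaded file location is an example
$ chmod 600 $HOME/.molpro/token     # keep the license token readable only by you
```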
## Installed Version
@@ -30,12 +30,12 @@ Compilation parameters are default:
## Running
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior with the -n, -t, and helper-server options. Please refer to the [Molpro documentation][c] for more details.
!!! note
    The OpenMP parallelization in Molpro is limited and has been observed to scale poorly. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS, as illustrated below.
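For illustration, a hedged example of requesting such an MPI-only layout at submission time; the project ID, queue, node count, and jobscript name are placeholders:

```console
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 ./jobscript.sh
```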
You are advised to use the -d option to point to a directory in [SCRATCH file system - Salomon](salomon/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
You are advised to use the -d option to point to a directory in the [SCRATCH file system - Salomon][1]. Molpro can produce a large amount of temporary data during its run, and it is important that these data are placed in the fast scratch file system.
### Example jobscript
@@ -61,3 +61,8 @@ molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
[1]: ../../salomon/storage.md
[a]: http://www.molpro.net/
[b]: https://www.molpro.net/licensee/?portal=licensee
[c]: http://www.molpro.net/info/2010.1/doc/manual/node9.html
@@ -2,9 +2,7 @@
## Introduction
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
[NWChem][a] aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
## Installed Versions
@@ -30,7 +28,12 @@ mpirun nwchem h2o.nw
## Options
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
Please refer to [the documentation][b] and set the following directives in the input file:
* MEMORY: controls the amount of memory NWChem will use
* SCRATCH_DIR : set this to a directory in [SCRATCH filesystem - Salomon](salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem - Salomon][1] (or run the calculation entirely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"; a sketch illustrating these directives follows below.
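A hypothetical input fragment illustrating these directives; the memory value, scratch path, molecule, and basis set are placeholders rather than recommended settings:

```console
$ cat h2o.nw
start h2o
# limit the total memory NWChem will use (value is only an example)
memory total 2000 mb
# keep run-time files on the fast scratch filesystem (example path)
scratch_dir /scratch/user/nwchem_scratch
geometry units angstrom
  O  0.000  0.000  0.000
  H  0.757  0.586  0.000
  H -0.757  0.586  0.000
end
basis
  * library 6-31g
end
# force direct SCF to reduce I/O
scf
  direct
end
task scf
```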
[1]: ../../salomon/storage.md
[a]: http://www.nwchem-sw.org/index.php/Main_Page
[b]: http://www.nwchem-sw.org/index.php/Release62:Top-level
@@ -183,8 +183,11 @@ You can see that the program was running with 64 parallel MPI-processes. In ver
## Register as a User
You are encouraged to register as a user of ORCA at [Here](https://orcaforum.cec.mpg.de/) in order to take advantage of updates, announcements and also of the users forum.
You are encouraged to register as a user of ORCA [here][a] in order to take advantage of updates, announcements, and the users forum.
## Documentation
A comprehensive [PDF](https://orcaforum.cec.mpg.de/OrcaManual.pdf) manual is available online.
A comprehensive [manual][b] is available online.
[a]: https://orcaforum.cec.mpg.de/
[b]: https://orcaforum.cec.mpg.de/OrcaManual.pdf
@@ -2,7 +2,7 @@
## Introduction
This GPL software calculates phonon-phonon interactions via the third order force constants. It allows to obtain lattice thermal conductivity, phonon lifetime/linewidth, imaginary part of self energy at the lowest order, joint density of states (JDOS) and weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and [http://atztogo.github.io/phono3py/index.html](http://atztogo.github.io/phono3py/index.html)
This GPL software calculates phonon-phonon interactions via the third-order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetimes/linewidths, the imaginary part of the self-energy at the lowest order, the joint density of states (JDOS), and weighted JDOS. For details, see Phys. Rev. B 91, 094306 (2015) and the [website][a].
Available modules
@@ -18,7 +18,7 @@ $ ml phono3py
### Calculating Force Constants
One needs to calculate second order and third order force constants using the diamond structure of silicon stored in [POSCAR](POSCAR) (the same form as in VASP) using single displacement calculations within supercell.
One needs to calculate the second-order and third-order force constants using the diamond structure of silicon stored in [POSCAR][1] (the same format as in VASP) via single-displacement calculations within a supercell.
```console
$ cat POSCAR
@@ -61,7 +61,7 @@ POSCAR-00006 POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051
POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106
```
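The displaced supercells listed above are typically generated by phono3py's displacement step; a sketch of such a call, reusing the --dim setting from this example (an assumption, not necessarily the exact command used here):

```console
$ phono3py -d --dim="2 2 2" -c POSCAR
```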
For each displacement the forces needs to be calculated, i.e. in form of the output file of VASP (vasprun.xml). For a single VASP calculations one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), [INCAR](INCAR) in your case directory (where you have POSCARS) and those 111 displacements calculations can be generated by [prepare.sh](prepare.sh) script. Then each of the single 111 calculations is submitted [run.sh](run.sh) by [submit.sh](submit.sh).
For each displacement the forces need to be calculated, i.e. in the form of the VASP output file (vasprun.xml). For a single VASP calculation one needs [KPOINTS][2], [POTCAR][3], and [INCAR][4] in your case directory (where you have the POSCARs); those 111 displacement calculations can be generated by the [prepare.sh][5] script. Then each of the 111 individual calculations ([run.sh][6]) is submitted by [submit.sh][7].
```console
$ ./prepare.sh
@@ -149,13 +149,13 @@ ir_grid_points: # [address, weight]
* grid_point: 364
```
one finds which grid points needed to be calculated, for instance using following
One finds which grid points need to be calculated, for instance using the following:
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2"
```
one calculates grid points 0, 1, 2. To automize one can use for instance scripts to submit 5 points in series, see [gofree-cond1.sh](gofree-cond1.sh)
One calculates grid points 0, 1, and 2. To automate this, one can for instance use scripts that submit 5 points in series, see [gofree-cond1.sh][8]; a sketch of such batching is shown below.
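As a purely hypothetical illustration of such batching (gofree-cond1.sh itself is not reproduced here), a loop over groups of five grid points might look like this:

```console
$ for gp in "0 1 2 3 4" "5 6 7 8 9"; do
>   phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="$gp"
> done
```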
```console
$ qsub gofree-cond1.sh
@@ -166,3 +166,14 @@ Finally the thermal conductivity result is produced by grouping single conductiv
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --br --read_gamma
```
[1]: POSCAR
[2]: KPOINTS
[3]: POTCAR
[4]: INCAR
[5]: prepare.sh
[6]: run.sh
[7]: submit.sh
[8]: gofree-cond1.sh
[a]: http://atztogo.github.io/phono3py/index.html