Commit 18cefc67 authored by Pavel Gajdušek's avatar Pavel Gajdušek
structure, script to check paths

parent 4b8a2426
# Molpro
Molpro is a complete system of ab initio programs for molecular electronic structure calculations.
## About Molpro
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
## License
The Molpro software package is available only to users who have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g., an academic research group license with parallel execution).
To run Molpro, you need a valid license token present in `$HOME/.molpro/token`. You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
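A minimal sketch of installing a downloaded token file (the local filename `token` below is only an illustration):

```console
$ mkdir -p $HOME/.molpro
$ cp token $HOME/.molpro/token
$ chmod 600 $HOME/.molpro/token
```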
## Installed Version
Currently, version 2010.1, patch level 45, is installed on Anselm; it is the parallel version compiled with Intel compilers and Intel MPI.
The default compilation parameters are:
| Parameter | Value |
| ---------------------------------- | ------------ |
| max number of atoms | 200 |
| max number of valence orbitals | 300 |
| max number of basis functions | 4095 |
| max number of states per symmetry  | 20           |
| max number of state symmetries | 16 |
| max number of records | 200 |
| max number of primitives | maxbfn x [2] |
## Running
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior with the -n, -t, and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
!!! note
    The OpenMP parallelization in Molpro is limited and has been observed to scale poorly. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
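As a sketch, a pure-MPI allocation of two nodes could be requested as follows (the project ID is a placeholder):

```console
$ qsub -A PROJECT_ID -q qprod -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 ./jobscript
```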
You are advised to use the -d option to point to a directory in the [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
### Example jobscript
```bash
#!/bin/bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
cd $PBS_O_WORKDIR
# load Molpro module
module add molpro
# create a directory in the SCRATCH filesystem
mkdir -p /scratch/$USER/$PBS_JOBID
# copy an example input
cp /apps/chem/molpro/2010.1/molprop_2010_1_Linux_x86_64_i8/examples/caffeine_opt_diis.com .
# run Molpro with default options
molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
# NWChem
## Introduction
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
## Installed Versions
The following versions are currently installed:
* NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
* NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
For a current list of installed versions, execute:
```console
$ ml av NWChem
```
We recommend using version 6.5. Version 6.3 fails on Salomon nodes with accelerators because it attempts to communicate over the scif0 interface. In 6.5, this is avoided by setting ARMCI_OPENIB_DEVICE=mlx4_0; this setting is included in the module.
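Should you need the setting outside of the module environment, it can also be exported manually:

```console
$ export ARMCI_OPENIB_DEVICE=mlx4_0
```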
## Running
NWChem is compiled for parallel MPI execution. The normal procedure for MPI jobs applies. Sample jobscript:
```bash
#!/bin/bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=24:mpiprocs=24
cd $PBS_O_WORKDIR
module add NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
mpirun nwchem h2o.nw
```
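The jobscript above expects an input file h2o.nw, which is not shown here; a minimal illustrative NWChem input (geometry and basis chosen purely for demonstration) might look like this:

```
start h2o
title "Water SCF"
geometry units angstroms
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library 6-31g
end
task scf
```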
## Options
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
* MEMORY : controls the amount of memory NWChem will use
* SCRATCH_DIR : set this to a directory in the [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"; see the sketch below.
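A hedged sketch of how these directives might appear near the top of an input file (the memory size and scratch path are placeholders):

```
memory total 2000 mb
scratch_dir /scratch/username/nwchem
scf
  direct
end
```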
INCAR (VASP input parameters):

```
PREC = Accurate
IBRION = -1
ENCUT = 500
EDIFF = 1.0e-08
ISMEAR = 0
SIGMA = 0.01
IALGO = 38
LREAL = .FALSE.
ADDGRID = .TRUE.
LWAVE = .FALSE.
LCHARG = .FALSE.
NCORE = 8
KPAR = 8
```

KPOINTS:

```
Automatic mesh
0
Monkhorst Pack
3 3 3
0.5 0.5 0.5
```

POSCAR (8-atom Si cell):

```
Si
1.0
5.4335600309153529 0.0000000000000000 0.0000000000000000
0.0000000000000000 5.4335600309153529 0.0000000000000000
0.0000000000000000 0.0000000000000000 5.4335600309153529
Si
8
Direct
0.8750000000000000 0.8750000000000000 0.8750000000000000
0.8750000000000000 0.3750000000000000 0.3750000000000000
0.3750000000000000 0.8750000000000000 0.3750000000000000
0.3750000000000000 0.3750000000000000 0.8750000000000000
0.1250000000000000 0.1250000000000000 0.1250000000000000
0.1250000000000000 0.6250000000000000 0.6250000000000000
0.6250000000000000 0.1250000000000000 0.6250000000000000
0.6250000000000000 0.6250000000000000 0.1250000000000000
```
Jobscript running phono3py for one subset of grid points (the remaining subsets are kept as commented-out commands):

```bash
#!/bin/bash
#PBS -A OPEN-6-23
#PBS -N Si-test1
#PBS -q qfree
#PBS -l select=1:ncpus=24:mpiprocs=24:ompthreads=1
#PBS -l walltime=01:59:59
##PBS-l mem=6gb
#PBS -j oe
#PBS -S /bin/bash
module purge
module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
export OMP_NUM_THREADS=1
export I_MPI_COMPATIBILITY=4
##export OMP_STACKSIZE=10gb
##0 1 2 3 4 10 11 12 13 20 21 22 30 31 40 91 92 93 94 101 102 103 111 112 121 182 183 184 192 193 202 273 274 283 364
cd $PBS_O_WORKDIR
phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="0 1 3 4 10"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="11 12 13 20 21"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="21 22 30 31 40"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="91 92 93 94 101"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="102 103 111 112 121"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="182 183 184 192 193"
#phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write_gamma --gp="202 273 274 283 364"
```
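Once all grid-point subsets have been computed and their gamma files written, the final thermal conductivity can presumably be collected in a single run that reads them back, e.g. (assuming the --read_gamma option of this phono3py version):

```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --read_gamma
```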
Script preparing the displacement directories disp-00001 … disp-00111 with the VASP inputs:

```bash
#!/bin/bash
P=`pwd`
# number of displacements
poc=9
for i in `seq 1 $poc `;
do
cd $P
mkdir disp-0000"$i"
cd disp-0000"$i"
cp ../KPOINTS .
cp ../INCAR .
cp ../POTCAR .
cp ../POSCAR-0000"$i" POSCAR
echo $i
done
poc=99
for i in `seq 10 $poc `;
do
cd $P
mkdir disp-000"$i"
cd disp-000"$i"
cp ../KPOINTS .
cp ../INCAR .
cp ../POTCAR .
cp ../POSCAR-000"$i" POSCAR
echo $i
done
poc=111
for i in `seq 100 $poc `;
do
cd $P
mkdir disp-00"$i"
cd disp-00"$i"
cp ../KPOINTS .
cp ../INCAR .
cp ../POTCAR .
cp ../POSCAR-00"$i" POSCAR
echo $i
done
```
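The POSCAR-00001 … POSCAR-00111 displacement files copied above are presumably generated beforehand by phono3py, along the lines of:

```console
$ phono3py -d --dim="2 2 2" -c POSCAR
```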
VASP jobscript (run.sh) executed in each displacement directory:

```bash
#!/bin/bash
#PBS -A IT4I-9-11
#PBS -N Si-test
#PBS -q qprod
#PBS -l select=8:ncpus=16:mpiprocs=16:ompthreads=1
#PBS -l walltime=23:59:59
##PBS-l mem=6gb
#PBS -j oe
#PBS -S /bin/bash
module load impi/4.1.1.036 intel/13.5.192 fftw3-mpi/3.3.3-icc
export OMP_NUM_THREADS=1
export I_MPI_COMPATIBILITY=4
##export OMP_STACKSIZE=10gb
b=`basename $PBS_O_WORKDIR`
echo $b >log.vasp
SCRDIR=/scratch/$USER/$b
mkdir -p $SCRDIR
cd $SCRDIR || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/* .
mpirun ~/bin/vasp5.4.1 > log.exc
# copy output file to home
cp * $PBS_O_WORKDIR/. && cd ..
rm -rf "$SCRDIR"
exit
```
Script submitting run.sh in every displacement directory:

```bash
#!/bin/bash
P=`pwd`
# number of displacements
poc=9
for i in `seq 1 $poc `;
do
cd $P
cd disp-0000"$i"
cp ../run.sh .
qsub run.sh
echo $i
done
poc=99
for i in `seq 10 $poc `;
do
cd $P
cd disp-000"$i"
cp ../run.sh .
qsub run.sh
echo $i
done
poc=111
for i in `seq 100 $poc `;
do
cd $P
cd disp-00"$i"
cp ../run.sh .
qsub run.sh
echo $i
done
```
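After all displacement jobs have finished, the forces are typically collected from the individual vasprun.xml files into FORCES_FC3; a sketch, assuming the --cf3 option of this phono3py version:

```console
$ phono3py --cf3 disp-{00001..00111}/vasprun.xml
```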
Change in the COMSOL documentation (hunk `@@ -18,7 +18,7 @@`; the licensing link now points to `licensing-and-available-versions/`):

On the clusters COMSOL is available in the latest stable version. There are two …

* **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
* **Commercial** or so called **COM variant**, which can be used also for commercial activities. The **COM variant** has only a subset of features compared to the **EDU variant**. More about licensing [here](licensing-and-available-versions/).

To load COMSOL, load the module …
Change in the R documentation (hunk `@@ -146,7 +146,7 @@`; the OpenMPI link now points to `../../anselm/software/mpi/Running_OpenMPI/`):

Every evaluation of the integrand function runs in parallel on a different process.

The package Rmpi provides an interface (wrapper) to MPI APIs. It also provides an interactive R slave environment. On the cluster, Rmpi provides an interface to [OpenMPI](../../anselm/software/mpi/Running_OpenMPI/).

Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>.
Script checking relative links in Markdown files (the "script to check paths" from the commit message):

```bash
#!/bin/bash
# For every Markdown file given on the command line, extract relative links
# and report those that do not point to an existing file.
for file in "$@"; do
    check=$(grep -Eo "\[.*?\]\([^ ]*\)" "$file" | grep -v "#" | grep -vE "http|www|ftp|none" | sed 's/\[.*\]//g' | sed 's/[()]//g' | sed 's/\/$/.md/g')
    if [ -n "$check" ]; then
        wrong=0
        for line in $check; do
            pathtocheck=$(dirname "$file")/$line
            if [ ! -f "$pathtocheck" ]; then
                if [ "$wrong" -eq 0 ]; then
                    echo ""
                    echo "+++++ $file +++++"
                fi
                wrong=1
                echo "wrong link in $pathtocheck"
            fi
        done
    fi
done
echo ""
```
New file `todelete`, listing documentation pages:

```
docs.it4i/anselm/software/numerical-languages/introduction.md
docs.it4i/anselm/software/numerical-languages/matlab.md
docs.it4i/anselm/software/numerical-languages/matlab_1314.md
docs.it4i/anselm/software/numerical-languages/octave.md
docs.it4i/anselm/software/numerical-languages/r.md
docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
docs.it4i/salomon/software/java.md
docs.it4i/salomon/software/numerical-languages/introduction.md
docs.it4i/salomon/software/numerical-languages/matlab.md
docs.it4i/salomon/software/numerical-languages/octave.md
docs.it4i/salomon/software/numerical-languages/opencoarrays.md
docs.it4i/salomon/software/numerical-languages/r.md
```