Commit 207aef5d authored by Pavel Gajdušek

fixed links, changed structure

parent 0c3c2bf7
@@ -35,7 +35,7 @@ Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molp
!!! note
    The OpenMP parallelization in Molpro is limited and has been observed to scale poorly. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
-You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
+You are advised to use the -d option to point to a directory in the [SCRATCH file system - Salomon](../../salomon/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
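As an illustrative sketch only (the queue, jobscript name, input file, and scratch path are placeholders, not values prescribed by this page), the two recommendations combine as:

```console
$ qsub -q qprod -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 ./molpro_job.pbs
# inside molpro_job.pbs, point -d at the scratch file system:
molpro -d /scratch/$USER/$PBS_JOBID input.com
```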
### Example jobscript
......
@@ -33,4 +33,4 @@ mpirun nwchem h2o.nw
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
* MEMORY: controls the amount of memory NWChem will use
-* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
+* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem - Salomon](../../salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
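A minimal input-file sketch combining these directives (the memory size, scratch path, molecule, and basis set are illustrative placeholders, not values prescribed by this documentation):

```console
$ cat h2o.nw
start h2o
memory total 2000 mb
scratch_dir /scratch/temp
geometry
  O  0.000  0.000  0.000
  H  0.000  0.757  0.587
  H  0.000 -0.757  0.587
end
basis
  * library 6-31g
end
scf
  direct
end
task scf energy
```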
@@ -18,7 +18,7 @@ $ ml phono3py
### Calculating Force Constants
-One needs to calculate the second-order and third-order force constants for the diamond structure of silicon stored in [POSCAR](poscar-si) (the same format as in VASP), using single-displacement calculations within a supercell.
+One needs to calculate the second-order and third-order force constants for the diamond structure of silicon stored in [POSCAR](POSCAR) (the same format as in VASP), using single-displacement calculations within a supercell.
```console
$ cat POSCAR
......
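For context, phono3py generates the displaced supercells itself; a sketch of that step (the 2x2x2 supercell dimension is illustrative, not taken from this page) would be:

```console
$ phono3py -d --dim="2 2 2" -c POSCAR
```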
@@ -76,7 +76,7 @@ Working directory has to be created before sending the (comsol.pbs) job script i
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL API (Application Programming Interface) with the benefits of the MATLAB programming language and computing environment.
-LiveLink for MATLAB is available in both the **EDU** and **COM** variants of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../../../anselm/software/isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode (on Anselm, use 16 threads).
+LiveLink for MATLAB is available in both the **EDU** and **COM** variants of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode (on Anselm, use 16 threads).
```console
$ xhost +
......
@@ -21,9 +21,9 @@ $ ml av MATLAB
If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations, use Matlab on the compute nodes via the PBS Pro scheduler.
-If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
+If you require the Matlab GUI, please follow the general information about [running graphical applications](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
-The Matlab GUI is quite slow using the X forwarding built into PBS (qsub -X), so X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
+The Matlab GUI is quite slow using the X forwarding built into PBS (qsub -X), so X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
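A minimal sketch of the SSH route (the username and hostname are placeholders; the exact module name is whatever `ml av MATLAB` lists):

```console
$ ssh -X username@cluster.example.com
$ ml MATLAB
$ matlab &
```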
To run Matlab with GUI, use
@@ -68,7 +68,7 @@ With the new mode, MATLAB itself launches the workers via PBS, so you can either
### Parallel Matlab Interactive Session
-The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
+The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
```console
$ xhost +
@@ -249,11 +249,11 @@ delete(pool)
### Non-Interactive Session and Licenses
-If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../../../anselm/software/isv_licenses/).
+If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../isv_licenses/).
The licensing feature of PBS is currently disabled.
-In case of a non-interactive session, please read the [following information](../../../anselm/software/isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
+In case of a non-interactive session, please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
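Should the licensing feature be re-enabled, a sketch of such a request (the project ID, queue, node shape, and jobscript name are placeholders) might look like:

```console
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16 -l __feature__matlab__MATLAB=1 ./matlab_job.pbs
```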
### Matlab Distributed Computing Engines Start Up Time
......
@@ -46,7 +46,7 @@ Plots, images, etc... will be still available.
The recommended parallel mode for running parallel Matlab on Anselm is the MPIEXEC mode. In this mode, the user allocates resources through PBS prior to starting Matlab. Once resources are granted, the main Matlab instance is started on the first compute node assigned to the job by PBS, and workers are started on all remaining nodes. The user can use both interactive and non-interactive PBS sessions. This mode guarantees that data processing is not performed on the login nodes; all processing is done on the compute nodes.
-![Parallel Matlab](../../../img/Matlab.png "Parallel Matlab")
+![Parallel Matlab](../../img/Matlab.png "Parallel Matlab")
For performance reasons, Matlab should use the system MPI. On Anselm, the supported MPI implementation for Matlab is Intel MPI. To switch to the system MPI, the user has to override the default Matlab setting by creating a new configuration file in their home directory. The path and file name have to be exactly the same as in the following listing:
......
@@ -48,7 +48,7 @@ cp output.out $PBS_O_WORKDIR/.
exit
```
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../salomon/job-submission-and-execution/).
The Octave C compiler mkoctfile calls GNU gcc 4.8.1 to compile native C code. This is very useful for running native C subroutines in the Octave environment.
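As a sketch of that workflow (file and function names are illustrative), a native routine can be compiled into an oct-file with mkoctfile and called directly from Octave:

```console
$ cat myhello.cc
#include <octave/oct.h>

// minimal oct-file: prints a message and returns nothing
DEFUN_DLD (myhello, args, nargout, "Hello from native code")
{
  octave_stdout << "Hello from native code\n";
  return octave_value_list ();
}
$ mkoctfile myhello.cc
$ octave --eval "myhello"
Hello from native code
```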
@@ -60,7 +60,7 @@ Octave may use MPI for interprocess communication This functionality is currentl
## Xeon Phi Support
-Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).
+Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../../salomon/software/intel-xeon-phi/) [accelerated nodes](../../salomon/compute-nodes/).
### Automatic Offload Support
......
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
**-np 4** is the number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
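Since the parameters match, the same test could equivalently be launched with:

```console
$ cafrun -np 4 ./synchronization_test.x
```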
-For more information about running CAF programs, please follow [Running OpenMPI](../mpi/Running_OpenMPI.md)
+For more information about running CAF programs, please follow [Running OpenMPI - Salomon](../../salomon/software/mpi/Running_OpenMPI.md)
@@ -66,7 +66,7 @@ cp routput.out $PBS_O_WORKDIR/.
exit
```
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section - Anselm](../../anselm/job-submission-and-execution/).
## Parallel R
@@ -392,7 +392,7 @@ cp routput.out $PBS_O_WORKDIR/.
exit
```
-For more information about jobscripts and MPI execution refer to the [Job submission](../../anselm/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
+For more information about jobscripts and MPI execution refer to the [Job submission](../../anselm/job-submission-and-execution/) and general [MPI](../../anselm/software/mpi/mpi/) sections.
## Xeon Phi Offload
@@ -402,4 +402,4 @@ By leveraging MKL, R can accelerate certain computations, most notably linear al
$ export MKL_MIC_ENABLE=1
```
-[Read more about automatic offload](../intel-xeon-phi/)
+[Read more about automatic offload](../../anselm/software/intel-xeon-phi/)
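As an illustrative check only (the report level and the test expression are placeholders, not part of this documentation), one can ask MKL to report offload activity while timing a large matrix multiplication in R:

```console
$ export MKL_MIC_ENABLE=1
$ export OFFLOAD_REPORT=2
$ R -e "a <- matrix(rnorm(2.5e7), 5000); system.time(a %*% a)"
```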
@@ -4,7 +4,7 @@
for file in $@; do
-check=$(cat $file | grep -Po "\[.*?\]\(.*?\)" | grep -v "#" | grep -vE "http|www|ftp|none" | sed 's/\[.*\]//g' | sed 's/[()]//g' | sed 's/\/$/.md/g')
+check=$(cat $file | grep -Po "\[.*?\]\([^ ]*?\)" | grep -v "#" | grep -vE "http|www|ftp|none" | sed 's/\[.*\]//g' | sed 's/[()]//g' | sed 's/\/$/.md/g')
if [ ! -z "$check" ]; then
# echo "\n+++++ $file +++++\n"
......
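The tightened pattern `\([^ ]*?\)` no longer matches link targets containing spaces (such as image links carrying a title string), which the old `\(.*?\)` picked up; a quick illustration (the sample line is made up):

```console
$ echo '[storage](../../salomon/storage/) ![m](../../img/Matlab.png "Parallel Matlab")' | grep -Po "\[.*?\]\([^ ]*?\)"
[storage](../../salomon/storage/)
```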