Commit 4cd8b230 authored by Lukáš Krupčík

fix

parent c2ee2cdf
@@ -61,7 +61,7 @@ POSCAR-00006 POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051
POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106
```
For each displacement, the forces need to be calculated, i.e. provided in the form of the VASP output file (vasprun.xml). For a single VASP calculation you need [KPOINTS](software/chemistry/KPOINTS), [POTCAR](software/chemistry/POTCAR), and [INCAR](software/chemistry/INCAR) in your case directory (where you have the POSCAR files); the 111 displacement calculations can be generated by the [prepare.sh](software/chemistry/prepare.sh) script. Each of the 111 individual calculations ([run.sh](software/chemistry/run.sh)) is then submitted by [submit.sh](software/chemistry/submit.sh).
For each displacement, the forces need to be calculated, i.e. provided in the form of the VASP output file (vasprun.xml). For a single VASP calculation you need [KPOINTS](KPOINTS), [POTCAR](POTCAR), and [INCAR](INCAR) in your case directory (where you have the POSCAR files); the 111 displacement calculations can be generated by the [prepare.sh](prepare.sh) script. Each of the 111 individual calculations ([run.sh](run.sh)) is then submitted by [submit.sh](submit.sh).
```console
$ ./prepare.sh
```
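A rough sketch of what such a preparation step typically does (an assumption, not the contents of the actual `prepare.sh`; the per-displacement directory names are illustrative): one working directory is created per displaced POSCAR, with the shared VASP inputs copied alongside it.

```console
$ for f in POSCAR-*; do
>   d="disp-${f#POSCAR-}"           # e.g. disp-00001 (illustrative naming)
>   mkdir -p "$d"
>   cp "$f" "$d/POSCAR"             # the displaced structure for this run
>   cp INCAR KPOINTS POTCAR "$d/"   # shared single-point inputs
> done
```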
@@ -56,4 +56,4 @@ Now let's profile the code:
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p\*.txt](software/debuggers/mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](software/debuggers/mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU-bound.
Performance report files [mympiprog_32p\*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU-bound.
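The generated reports can then be inspected right away; a minimal sketch (assuming a text console for the `.txt` variant and a graphical session for the `.html` variant, with `firefox` as just one possible browser):

```console
$ cat mympiprog_32p_2014-10-15_16-56.txt
$ firefox mympiprog_32p_2014-10-15_16-56.html &
```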
@@ -30,7 +30,7 @@ $ traceanalyzer
The GUI will launch, and you can open the produced `*.stf` file.
![](../../img/Snmekobrazovky20151204v15.35.12.png)
![](../../../img/Snmekobrazovky20151204v15.35.12.png)
Please refer to the Intel documentation for usage of the GUI tool.
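If the name of the trace file is already known, it can also be passed directly to the GUI on start-up (a sketch; the file name is illustrative):

```console
$ traceanalyzer ./myprog.stf
```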
@@ -178,7 +178,7 @@ class Hello {
}
```
* C: [hello_c.c](../../../src/ompi/hello_c.c)
* C: [hello_c.c](../../src/ompi/hello_c.c)
* C++: [hello_cxx.cc](../../src/ompi/hello_cxx.cc)
* Fortran mpif.h: [hello_mpifh.f](../../src/ompi/hello_mpifh.f)
* Fortran use mpi: [hello_usempi.f90](../../src/ompi/hello_usempi.f90)
@@ -202,11 +202,11 @@ Additionally, there's one further example application, but this one only uses th
### Test the Connectivity Between All Processes
* C: [connectivity_c.c](src/ompi/connectivity_c.c)
* C: [connectivity_c.c](../../src/ompi/connectivity_c.c)
## Build Examples
Download [examples](src/ompi/ompi.tar.gz).
Download [examples](../../src/ompi/ompi.tar.gz).
The Makefile in this directory will build the examples for the supported languages (e.g., if you do not have the Fortran "use mpi" bindings compiled as part of OpenMPI, those examples will be skipped).
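A minimal sketch of building and running one of the C examples (assumptions: an OpenMPI module providing `mpicc`, `make`, and `mpirun` is loaded, the archive has been downloaded to the current directory, and `ompi` is the name of the extracted directory):

```console
$ tar -xzf ompi.tar.gz
$ cd ompi
$ make
$ mpirun -np 4 ./hello_c
```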
@@ -4,7 +4,7 @@
It is possible to run Workbench scripts in batch mode. You need to configure the solvers of the individual components to run in parallel mode. Open your project in Workbench. Then, for example, in Mechanical, go to Tools - Solve Process Settings ...
![](../../img/AMsetPar1.png)
![](../../../img/AMsetPar1.png)
Enable the Distribute Solution checkbox and enter the number of cores (e.g. 48 to run on two Salomon nodes). If you want the job to run on more than one node, you must also provide a so-called MPI appfile. In the Additional Command Line Arguments input field, enter:
......
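For illustration only (an assumption based on the common IBM Platform MPI appfile format; the node names, core counts, and solver path below are hypothetical), such an appfile lists one `-h <host> -np <cores> <program>` line per node:

```console
$ cat appfile
-h node001 -np 24 /path/to/solver
-h node002 -np 24 /path/to/solver
```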