PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of building blocks (data structures and routines) for the scalable solution of scientific and engineering applications modelled by partial differential equations. It lets you think in terms of high-level objects such as matrices and vectors instead of low-level raw arrays. PETSc is written in C, but it can also be called from Fortran, C++, Python and Java codes. It supports MPI, shared memory, and GPUs through CUDA or OpenCL, as well as hybrid MPI-shared memory or MPI-GPU parallelism.
- [project webpage](http://www.mcs.anl.gov/petsc/)
- [documentation](http://www.mcs.anl.gov/petsc/documentation/)
- [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
- [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
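To illustrate the high-level object model mentioned above, here is a minimal sketch of a PETSc program in C: it creates a distributed vector, fills it, and prints its 2-norm. The calls used (`PetscInitialize`, `VecCreate`, `VecNorm`, ...) are standard PETSc API; how to compile it against the loaded module is covered below.

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscReal      norm;
  PetscErrorCode ierr;

  /* sets up MPI (unless already initialized) and PETSc internals */
  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* a Vec is a distributed vector; PETSc handles the layout across MPI ranks */
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, 100);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);

  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecNorm(x, NORM_2, &norm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)norm);CHKERRQ(ierr);

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
```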
You can start using PETSc on Anselm by loading the PETSc module. Module names obey this pattern:
```sh
# module load petsc/version-compiler-mpi-blas-variant, e.g.
module load petsc/3.4.4-icc-impi-mkl-opt
```
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html).
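If it is not obvious which variant an executable was built against, PETSc's configuration macros can report it at run time. The following is a small sketch assuming the C API of the PETSc 3.4 series loaded above; `PETSC_USE_DEBUG` is defined only in builds configured with debugging, i.e. the `dbg` variants.

```c
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  char           version[256];

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = PetscGetVersion(version, sizeof(version));CHKERRQ(ierr);
#if defined(PETSC_USE_DEBUG)
  ierr = PetscPrintf(PETSC_COMM_WORLD, "%s -- debug (dbg) build\n", version);CHKERRQ(ierr);
#else
  ierr = PetscPrintf(PETSC_COMM_WORLD, "%s -- optimized (opt) build\n", version);CHKERRQ(ierr);
#endif
  ierr = PetscFinalize();
  return 0;
}
```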
PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently satisfied with Intel MPI and Intel MKL in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html) that extend its functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below for a list of the libraries currently included in the Anselm `petsc` modules.
All these libraries can also be used on their own, without PETSc. Their static or shared libraries are available in `$PETSC_DIR/$PETSC_ARCH/lib` and their header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to the specific PETSc instance selected by the loaded `petsc` module.
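As a sketch of such standalone use, the program below calls the HDF5 library directly, with no PETSc involved. The compile line is an assumption about how the module is laid out; something along the lines of `mpicc standalone.c -I$PETSC_DIR/$PETSC_ARCH/include -L$PETSC_DIR/$PETSC_ARCH/lib -lhdf5` should work, but check the actual library names present in `$PETSC_DIR/$PETSC_ARCH/lib`.

```c
#include <hdf5.h>

int main(void)
{
  /* create (truncating if it exists) and close an empty HDF5 file,
     using only the HDF5 library shipped inside the petsc module */
  hid_t file = H5Fcreate("standalone.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
  if (file < 0) return 1;
  return H5Fclose(file) < 0 ? 1 : 0;
}
```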
### Libraries linked to PETSc on Anselm (as of 11 April 2015)
- dense linear algebra
- sparse direct linear system solvers
    - [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
    - [MUMPS](http://mumps.enseeiht.fr/)
    - [PaStiX](http://pastix.gforge.inria.fr/)
    - [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
    - [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
    - [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
- input/output
    - [ExodusII](http://sourceforge.net/projects/exodusii/)
    - [HDF5](http://www.hdfgroup.org/HDF5/)
    - [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
- partitioning
    - [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
    - [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
    - [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
    - [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
- preconditioners & multigrid
    - [Hypre](http://acts.nersc.gov/hypre/)
    - [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
    - [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
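As an example of these external packages in action, the sketch below solves a trivial linear system with a direct LU factorization delegated to MUMPS. It assumes the PETSc 3.4 C API provided by the module above (e.g. the four-argument `KSPSetOperators` and `PCFactorSetMatSolverPackage`, which were changed or renamed in later releases); the same choice can also be made at run time with options such as `-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps`.

```c
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PC             pc;
  PetscInt       i, Istart, Iend, n = 10;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* assemble a trivial diagonal matrix so there is something to factor */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatGetVecs(A, &x, &b);CHKERRQ(ierr); /* MatCreateVecs() in newer PETSc */
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* direct solve: the LU factorization is computed by the external MUMPS package */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr); /* PETSc 3.4 signature */
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* PCFactorSetMatSolverType() in newer PETSc */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
```

Because the solver is selected through the `PC` object, swapping MUMPS for another direct solver from the list above (e.g. SuperLU_Dist) only requires changing the solver package argument or the corresponding run-time option.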