Commit 8b98c2ab authored by Lukáš Krupčík

* -> *

parent 2b9f45a7
@@ -69,8 +69,8 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 19.2 GFLOP/s per core
* caches:
  * L2: 256 KB per core
  * L3: 20 MB per processor
* memory bandwidth at the level of the processor: 51.2 GB/s
### Intel Sandy Bridge E5-2470 Processor
@@ -79,8 +79,8 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 18.4 GFLOP/s per core
* caches:
  * L2: 256 KB per core
  * L3: 20 MB per processor
* memory bandwidth at the level of the processor: 38.4 GB/s
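For reference, the per-core peaks above follow from the Sandy Bridge core executing 8 double-precision FLOPs per cycle under AVX (4-wide add plus 4-wide multiply): 2.4 GHz × 8 = 19.2 GFLOP/s for the E5-2665, and 2.3 GHz × 8 = 18.4 GFLOP/s for the E5-2470.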
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23.
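This attribute lets a job request a particular CPU type; a minimal sketch (the project ID, queue, and node count are illustrative):

```bash
# Request four 16-core nodes with the 2.4 GHz E5-2665 CPUs (cpu_freq = 24);
# the project ID, queue, and node count are illustrative.
qsub -A PROJECT-ID -q qprod -l select=4:ncpus=16:cpu_freq=24 ./myjob.sh
```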
@@ -103,28 +103,28 @@ Intel Turbo Boost Technology is used by default, you can disable it for all nod
* 2 sockets
* Memory Controllers are integrated into processors.
  * 8 DDR3 DIMMs per node
  * 4 DDR3 DIMMs per CPU
  * 1 DDR3 DIMM per channel
  * Data rate support: up to 1600 MT/s
* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
### Compute Node With GPU or MIC Accelerator
* 2 sockets
* Memory Controllers are integrated into processors.
  * 6 DDR3 DIMMs per node
  * 3 DDR3 DIMMs per CPU
  * 1 DDR3 DIMM per channel
  * Data rate support: up to 1600 MT/s
* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
### Fat Compute Node
* 2 sockets
* Memory Controllers are integrated into processors.
  * 16 DDR3 DIMMs per node
  * 8 DDR3 DIMMs per CPU
  * 2 DDR3 DIMMs per channel
  * Data rate support: up to 1600 MT/s
* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
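The populated memory and its NUMA layout can be verified directly on a node; a generic sketch using standard Linux tools (no site-specific commands assumed):

```bash
# Generic sketch: verify memory population and NUMA layout on a compute node.
free -h                          # total usable memory (e.g. ~64 GB on a standard node)
numactl --hardware               # memory per NUMA node (one per socket here)
sudo dmidecode -t memory | grep -E 'Size|Speed'   # DIMM sizes and speeds (requires root)
```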
@@ -99,7 +99,7 @@ We recommend that the startup script
* maps the Job Directory from the host (from the compute node)
* runs the script (we call it the "run script") from the Job Directory and waits for the application's exit
  * for management purposes, if the run script does not exist, wait for some time period (a few minutes)
* shuts down/quits the OS
For Windows operating systems, we suggest using a Local Group Policy startup script; for Linux operating systems, use rc.local, a runlevel init script, or a similar service.
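A minimal sketch of such a Linux startup script (e.g. `/etc/rc.local`), assuming the host exports the Job Directory as a virtio 9p share named `jobdir`; the share type, mount point, and `run.sh` name are illustrative:

```bash
#!/bin/bash
# Illustrative guest startup script following the recommendations above.
# Assumes the Job Directory is exported by the host as a virtio 9p share
# named "jobdir"; share type, mount point, and script name are examples.

mkdir -p /mnt/jobdir
mount -t 9p -o trans=virtio jobdir /mnt/jobdir    # map the Job Directory

if [ -x /mnt/jobdir/run.sh ]; then
    /mnt/jobdir/run.sh        # run the "run script" and wait for it to exit
else
    sleep 300                 # no run script: wait a few minutes (management access)
fi

poweroff                      # shut down the guest OS
```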
@@ -10,8 +10,8 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
* [project webpage](http://www.mcs.anl.gov/petsc/)
* [documentation](http://www.mcs.anl.gov/petsc/documentation/)
  * [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
  * [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
* PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
## Modules
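PETSc is typically made available through the modules system; a minimal sketch (the exact module names and versions are illustrative and should be taken from `module avail`):

```bash
# Illustrative: find and load a PETSc build; names/versions differ per site.
module avail petsc
module load petsc
```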
@@ -37,24 +37,24 @@ All these libraries can be used also alone, without PETSc. Their static or share
### Libraries Linked to PETSc on Anselm (As of 11 April 2015)
* dense linear algebra
  * [Elemental](http://libelemental.org/)
* sparse linear system solvers
  * [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
  * [MUMPS](http://mumps.enseeiht.fr/)
  * [PaStiX](http://pastix.gforge.inria.fr/)
  * [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
  * [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
  * [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
* input/output
  * [ExodusII](http://sourceforge.net/projects/exodusii/)
  * [HDF5](http://www.hdfgroup.org/HDF5/)
  * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
* partitioning
  * [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
  * [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
  * [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
  * [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
* preconditioners & multigrid
  * [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
  * [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
  * [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
@@ -96,12 +96,12 @@ BAM is the binary representation of SAM and keeps exactly the same information a
Some features:
* Quality control
  * reads with N errors
  * reads with multiple mappings
  * strand bias
  * paired-end insert
* Filtering: by number of errors, number of hits
* Comparator: stats, intersection, ...
**Input:** BAM file.
@@ -81,22 +81,22 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
Configuration of the storage systems:
* HOME Lustre object storage
  * One disk array NetApp E5400
  * 22 OSTs
  * 227 x 2 TB NL-SAS 7.2 krpm disks
  * 22 groups of 10 disks in RAID6 (8+2)
  * 7 hot-spare disks
* SCRATCH Lustre object storage
  * Two disk arrays NetApp E5400
  * 10 OSTs
  * 106 x 2 TB NL-SAS 7.2 krpm disks
  * 10 groups of 10 disks in RAID6 (8+2)
  * 6 hot-spare disks
* Lustre metadata storage
  * One disk array NetApp E2600
  * 12 x 300 GB SAS 15 krpm disks
  * 2 groups of 5 disks in RAID5
  * 2 hot-spare disks
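How this layout appears from a client node can be checked with the `lfs` utility; a brief sketch (the mount points are illustrative):

```bash
# List the OSTs backing each Lustre file system and their usage;
# the mount points are illustrative.
lfs df -h /home
lfs df -h /scratch
```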
### HOME
@@ -129,18 +129,18 @@ A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/
Follow these steps **only** if you cannot obtain your certificate in a standard way. If you choose this procedure, please attach a **scan of photo ID** (personal ID, passport, or driver's license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials).
* Go to [CAcert](https://www.cacert.org).
  * If there's a security warning, just acknowledge it.
* Click _Join_.
* Fill in the form and submit it with the _Next_ button.
  * Type in the e-mail address which you use for communication with us.
  * Don't forget your chosen _Pass Phrase_.
* You will receive an e-mail verification link. Follow it.
* After verifying, go to the CAcert homepage and log in using _Password Login_.
* Go to _Client Certificates_ → _New_.
* Tick _Add_ for your e-mail address and click the _Next_ button.
* Click the _Create Certificate Request_ button.
* You'll be redirected to a page from where you can download/install your certificate.
  * Simultaneously you'll get an e-mail with a link to the certificate.
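Before installing the certificate into your mail client (next section), you can inspect or repackage the downloaded files; a sketch using standard `openssl` commands (the file names are illustrative):

```bash
# Inspect the downloaded certificate (file names are illustrative).
openssl x509 -in cacert-cert.pem -noout -subject -dates
# Bundle the certificate and private key into PKCS#12 for mail-client import.
openssl pkcs12 -export -in cacert-cert.pem -inkey private-key.pem -out mycert.p12
```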
## Installation of the Certificate Into Your Mail Client
@@ -61,7 +61,7 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
* speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
* peak performance: 19.2 GFLOP/s per core
* caches:
  * Intel® Smart Cache: 30 MB
* memory bandwidth at the level of the processor: 68 GB/s
### MIC Accelerator Intel Xeon Phi 7120P Processor
@@ -71,7 +71,7 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
* speed: 1.238 GHz, up to 1.333 GHz using Turbo Boost Technology
* peak performance: 18.4 GFLOP/s per core
* caches:
  * L2: 30.5 MB
* memory bandwidth at the level of the processor: 352 GB/s
## Memory Architecture
@@ -82,9 +82,9 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
* 2 sockets
* Memory Controllers are integrated into processors.
  * 8 DDR4 DIMMs per node
  * 4 DDR4 DIMMs per CPU
  * 1 DDR4 DIMM per channel
* Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
### Compute Node With MIC Accelerator
@@ -102,6 +102,6 @@ MIC Accelerator Intel Xeon Phi 7120P Processor
* 2 sockets
* Memory Controllers are connected via an Interprocessor Network (IPN) ring.
  * 16 GDDR5 DIMMs per node
  * 8 GDDR5 DIMMs per CPU
  * 2 GDDR5 DIMMs per channel
@@ -35,14 +35,14 @@ The architecture of Lustre on Salomon is composed of two metadata servers (MDS)
Configuration of the SCRATCH Lustre storage:
* SCRATCH Lustre object storage
  * Disk array SFA12KX
  * 540 x 4 TB SAS 7.2 krpm disks
  * 54 x OST of 10 disks in RAID6 (8+2)
  * 15 x hot-spare disks
  * 4 x 400 GB SSD cache
* SCRATCH Lustre metadata storage
  * Disk array EF3015
  * 12 x 600 GB SAS 15 krpm disks
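Striping over these OSTs is controlled per file or directory with the `lfs` utility; an illustrative sketch (the path and stripe count are examples):

```bash
# Stripe new files in a directory across 10 OSTs; path and count are examples.
lfs setstripe -c 10 /scratch/work/user/$USER/large-files
lfs getstripe /scratch/work/user/$USER/large-files
```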
### Understanding the Lustre File Systems