Commit afd8f956 authored by Lukáš Krupčík

repair external and internal links

parent 7dd1632e
@@ -71,6 +71,6 @@ Load modules and compile:
$ mpicc testfftw3mpi.c -o testfftw3mpi.x -Wl,-rpath=$LIBRARY_PATH -lfftw3_mpi
```
Run the example as [Intel MPI program](../mpi-1/running-mpich2.html).
Run the example as [Intel MPI program](../mpi/running-mpich2/).
Read more on FFTW usage on the [FFTW website.](http://www.fftw.org/fftw3_doc/)
\ No newline at end of file
Read more on FFTW usage on the [FFTW website](http://www.fftw.org/fftw3_doc/)![external](../../../img/external.png).
\ No newline at end of file
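The compile step in the hunk above is only half of the workflow; the run step goes through the batch system. A jobscript sketch (the queue name, resource selection, and module names below are assumptions for illustration, not taken from the original):

```bash
#!/bin/bash
# Hypothetical jobscript for the FFTW MPI example above; queue name and
# resource string are illustrative, not the cluster's actual values.
#PBS -q qprod
#PBS -l select=2:ncpus=16:mpiprocs=16

# Module names are assumptions; match them to the compile-time modules.
module load intel impi fftw3-mpi

cd "$PBS_O_WORKDIR"
mpirun ./testfftw3mpi.x
```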
@@ -3,7 +3,7 @@ HDF5
Hierarchical Data Format library. Serial and MPI parallel version.
[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.
[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/)![external](../../../img/external.png) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.
Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
@@ -23,7 +23,7 @@ Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, comp
The module sets up environment variables, required for linking and running HDF5 enabled applications. Make sure that the choice of HDF5 module is consistent with your choice of MPI library. Mixing MPI of different implementations may have unpredictable results.
>Be aware, that GCC version of **HDF5 1.8.11** has serious performance issues, since it's compiled with -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more informations, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>
>Be aware that the GCC version of **HDF5 1.8.11** has serious performance issues, since it is compiled with the -O0 optimization flag. This version is provided only for testing of code compiled with GCC and is NOT recommended for production computations. For more information, see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>![external](../../../img/external.png)
GCC versions of **HDF5 1.8.13** are not affected by the bug; they are compiled with -O3 optimizations and are recommended for production computations.
Example
@@ -84,6 +84,6 @@ Load modules and compile:
$ mpicc hdf5test.c -o hdf5test.x -Wl,-rpath=$LIBRARY_PATH $HDF5_INC $HDF5_SHLIB
```
Run the example as [Intel MPI program](../anselm-cluster-documentation/software/mpi-1/running-mpich2.html).
Run the example as [Intel MPI program](../anselm-cluster-documentation/software/mpi/running-mpich2/).
For further informations, please see the website: <http://www.hdfgroup.org/HDF5/>
\ No newline at end of file
For further information, see the website: <http://www.hdfgroup.org/HDF5/>![external](../../../img/external.png)
\ No newline at end of file
@@ -11,7 +11,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
$ module load mkl
```
Read more at the [Intel MKL](../intel-suite/intel-mkl.html) page.
Read more at the [Intel MKL](../intel-suite/intel-mkl/) page.
Intel Integrated Performance Primitives
---------------------------------------
@@ -21,7 +21,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai
$ module load ipp
```
Read more at the [Intel IPP](../intel-suite/intel-integrated-performance-primitives.html) page.
Read more at the [Intel IPP](../intel-suite/intel-integrated-performance-primitives/) page.
Intel Threading Building Blocks
-------------------------------
@@ -31,4 +31,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable
$ module load tbb
```
Read more at the [Intel TBB](../intel-suite/intel-tbb.html) page.
\ No newline at end of file
Read more at the [Intel TBB](../intel-suite/intel-tbb/) page.
\ No newline at end of file
@@ -69,8 +69,8 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
**export OMP_NUM_THREADS=16**
See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/).
See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/)![external](../../../img/external.png).
References
----------
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. al, [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)
\ No newline at end of file
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et al., [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)![external](../../../img/external.png)
\ No newline at end of file
@@ -9,13 +9,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
Resources
---------
- [project webpage](http://www.mcs.anl.gov/petsc/)
- [documentation](http://www.mcs.anl.gov/petsc/documentation/)
- [PETSc Users
Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
- [index of all manual
pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
- [project webpage](http://www.mcs.anl.gov/petsc/)![external](../../../img/external.png)
- [documentation](http://www.mcs.anl.gov/petsc/documentation/)![external](../../../img/external.png)
- [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)![external](../../../img/external.png)
- [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)![external](../../../img/external.png)
- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY)![external](../../../img/external.png), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I)![external](../../../img/external.png), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw)![external](../../../img/external.png), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY)![external](../../../img/external.png), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)![external](../../../img/external.png)
Modules
-------
@@ -27,13 +25,13 @@ You can start using PETSc on Anselm by loading the PETSc module. Module names ob
module load petsc/3.4.4-icc-impi-mkl-opt
```
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html).
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html)![external](../../../img/external.png).
External libraries
------------------
PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently satisfied with Intel MPI and Intel MKL in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below a list of libraries currently included in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html)![external](../../../img/external.png), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below for a list of libraries currently included in Anselm `petsc` modules.
All these libraries can also be used on their own, without PETSc. Their static or shared program libraries are available in
`$PETSC_DIR/$PETSC_ARCH/lib` and header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to a specific PETSc instance based on the petsc module loaded.
@@ -41,24 +39,24 @@ All these libraries can be used also alone, without PETSc. Their static or share
### Libraries linked to PETSc on Anselm (as of 11 April 2015)
- dense linear algebra
- [Elemental](http://libelemental.org/)
- [Elemental](http://libelemental.org/)![external](../../../img/external.png)
- sparse linear system solvers
- [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
- [MUMPS](http://mumps.enseeiht.fr/)
- [PaStiX](http://pastix.gforge.inria.fr/)
- [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
- [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
- [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
- [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)![external](../../../img/external.png)
- [MUMPS](http://mumps.enseeiht.fr/)![external](../../../img/external.png)
- [PaStiX](http://pastix.gforge.inria.fr/)![external](../../../img/external.png)
- [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)![external](../../../img/external.png)
- [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)![external](../../../img/external.png)
- [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)![external](../../../img/external.png)
- input/output
- [ExodusII](http://sourceforge.net/projects/exodusii/)
- [HDF5](http://www.hdfgroup.org/HDF5/)
- [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
- [ExodusII](http://sourceforge.net/projects/exodusii/)![external](../../../img/external.png)
- [HDF5](http://www.hdfgroup.org/HDF5/)![external](../../../img/external.png)
- [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)![external](../../../img/external.png)
- partitioning
- [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
- [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
- [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
- [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
- [Chaco](http://www.cs.sandia.gov/CRF/chac.html)![external](../../../img/external.png)
- [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)![external](../../../img/external.png)
- [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)![external](../../../img/external.png)
- [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)![external](../../../img/external.png)
- preconditioners & multigrid
- [Hypre](http://acts.nersc.gov/hypre/)
- [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
- [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
\ No newline at end of file
- [Hypre](http://acts.nersc.gov/hypre/)![external](../../../img/external.png)
- [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)![external](../../../img/external.png)
- [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)![external](../../../img/external.png)
\ No newline at end of file
@@ -19,7 +19,7 @@ Current Trilinos installation on ANSELM contains (among others) the following ma
- **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
- **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov.](http://trilinos.sandia.gov)
For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov](http://trilinos.sandia.gov)![external](../../../img/external.png).
### Installed version
@@ -33,7 +33,7 @@ First, load the appropriate module:
$ module load trilinos
```
For the compilation of CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt>
For the compilation of a CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt>![external](../../../img/external.png)
For compiling using simple makefiles, Trilinos provides Makefile.export system, which allows users to include important Trilinos variables directly into their makefiles. This can be done simply by inserting the following line into the makefile:
@@ -47,4 +47,4 @@ or
include Makefile.export.<package>
```
if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>.
\ No newline at end of file
if you are interested only in a specific Trilinos package. This will give you access to variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For a detailed description and an example makefile, see <http://trilinos.sandia.gov/Export_Makefile.txt>![external](../../../img/external.png).
\ No newline at end of file
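The `Makefile.export` mechanism described above can be sketched as a minimal makefile (the target, source name, and the `Trilinos_LIBRARIES` variable are assumptions; the other variables are the ones the text lists):

```make
# Hypothetical makefile built on Trilinos' Makefile.export system.
include Makefile.export.Trilinos

# Reuse the compiler Trilinos was built with.
CXX = $(Trilinos_CXX_COMPILER)

# The exported *_DIRS variables typically already carry -I/-L prefixes;
# verify against the generated Makefile.export.Trilinos before relying on this.
myapp: myapp.cpp
	$(CXX) $(Trilinos_INCLUDE_DIRS) myapp.cpp -o myapp \
	       $(Trilinos_LIBRARY_DIRS) $(Trilinos_LIBRARIES)
```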
@@ -3,9 +3,9 @@ Diagnostic component (TEAM)
### Access
TEAM is available at the following address: <http://omics.it4i.cz/team/>
TEAM is available at the following address: <http://omics.it4i.cz/team/>![external](../../../img/external.png)
>The address is accessible only via [VPN. ](../../accessing-the-cluster/vpn-access.html)
>The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
### Diagnostic component (TEAM)
@@ -163,9 +163,9 @@ CellBase includes SNPs from dbSNP (16)^; SNP population frequencies from HapMap
We also import systems biology information like interactome information from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. BioPAX data exchange format enables the integration of diverse pathway
resources. We successfully solved the problem of storing data released in BioPAX format into a SQL relational schema, which allowed us importing Reactome in CellBase.
### [Diagnostic component (TEAM)](diagnostic-component-team.md)
### [Diagnostic component (TEAM)](diagnostic-component-team/)
### [Priorization component (BiERApp)](priorization-component-bierapp.md)
### [Priorization component (BiERApp)](priorization-component-bierapp/)
Usage
-----
@@ -181,7 +181,7 @@ If we launch ngsPipeline with ‘-h’, we will get the usage help:
```bash
$ ngsPipeline -h
Usage: ngsPipeline.py [-h] -i INPUT -o OUTPUT -p PED --project PROJECT --queue
Usage: ngsPipeline.py [-h] -i INPUT -o OUTPUT -p PED --project PROJECT --queue
                      QUEUE [--stages-path STAGES_PATH] [--email EMAIL]
[--prefix PREFIX] [-s START] [-e END] --log
@@ -235,7 +235,6 @@ Input, output and ped arguments are mandatory. If the output folder does not exi
Examples
---------------------
This is an example usage of NGSpipeline:
We have a folder with the following structure in
@@ -262,7 +261,7 @@ The ped file ( file.ped) contains the following info:
FAM sample_B 0 0 2 2
```
Now, lets load the NGSPipeline module and copy the sample data to a [scratch directory](../../storage.html):
Now, let's load the NGSPipeline module and copy the sample data to a [scratch directory](../../storage/storage/):
```bash
$ module load ngsPipeline
@@ -288,17 +287,16 @@ Details on the pipeline
------------------------------------
The pipeline calls the following tools:
- [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput
- [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/)![external](../../../img/external.png), quality control tool for high throughput
sequence data.
- [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
- [gatk](https://www.broadinstitute.org/gatk/)![external](../../../img/external.png), The Genome Analysis Toolkit or GATK is a software package developed at
the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
- [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed-up mapping high-quality reads, and *Smith-Waterman*> (SW) to increase sensitivity when reads cannot be mapped using BWT.
- [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
- [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
- [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
- [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
- [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
- [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads)![external](../../../img/external.png), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed up mapping high-quality reads, and *Smith-Waterman* (SW) to increase sensitivity when reads cannot be mapped using BWT.
- [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki)![external](../../../img/external.png), a quality control tool for high throughput sequence data.
- [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads)![external](../../../img/external.png), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
- [picard](http://picard.sourceforge.net/)![external](../../../img/external.png), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
- [samtools](http://samtools.sourceforge.net/samtools-c.shtml)![external](../../../img/external.png), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
- [snpEff](http://snpeff.sourceforge.net/)![external](../../../img/external.png), Genetic variant annotation and effect prediction toolbox.
This listing shows which tools are used in each step of the pipeline:
@@ -341,7 +339,7 @@ The output folder contains all the subfolders with the intermediate data. This f
**Figure 7**. *TEAM upload panel.* *Once the file has been uploaded, a panel must be chosen from the Panel* list. Then, pressing the Run button the diagnostic process starts.
Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts. TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29)^ and COSMIC (23).
Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button starts the diagnostic process. TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar)![external](../../../img/external.png), ClinVar (29) and COSMIC (23).
![The panel manager. The elements used to define a panel are (A) disease terms, (B) diagnostic mutations and (C) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the Primary Diagnostic box (action D). This action, in addition to defining the diseases in the Primary Diagnostic box, automatically adds the corresponding genes to the Genes box. The panels can be customized by adding new genes (action F) or removing undesired genes (action G). New disease mutations can be added independently or associated to an already existing disease term (action E). Disease terms can be removed by simply dragging themback (action H).](fig7x.png)
@@ -388,6 +386,6 @@ References
25. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res., 39, D691–D697.
26. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942.
27. Alemán Z, García-García F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7.
28. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)> (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")>42 :W88-93.
28. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)![external](../../../img/external.png), [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)![external](../../../img/external.png), [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)![external](../../../img/external.png), [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)![external](../../../img/external.png), [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)![external](../../../img/external.png) (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")![external](../../../img/external.png) 42:W88-93.
29. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985.
30. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46.
\ No newline at end of file
@@ -5,7 +5,7 @@ Priorization component (BiERApp)
BiERApp is available at the following address: <http://omics.it4i.cz/bierapp/>
>The address is accessible onlyvia [VPN. ](../../accessing-the-cluster/vpn-access.html)
>The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
###BiERApp
Operating System
===============
## The operating system deployed on Anselm
The operating system on Anselm is Linux - bullx Linux Server release 6.3.
The operating system on Anselm is Linux - **bullx Linux Server release 6.X**.
bullx Linux is based on Red Hat Enterprise Linux. bullx Linux is a Linux distribution provided by Bull and dedicated to HPC applications.
@@ -15,7 +15,7 @@ The Salomon cluster is accessed by SSH protocol via login nodes login1, login2,
|login1.salomon.it4i.cz|22|ssh|login1|
|login2.salomon.it4i.cz|22|ssh|login2|
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
>Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
@@ -35,7 +35,7 @@ If you see warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to
local $ chmod 600 /path/to/id_rsa
```
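The effect of that `chmod` can be verified locally; a throwaway sketch (a temporary file stands in for the key, and GNU coreutils `stat` is assumed):

```shell
# Use a temporary file as a stand-in for the private key (illustration only).
keyfile=$(mktemp)
# Owner read/write only, as ssh requires for private key files.
chmod 600 "$keyfile"
# Print the octal permission bits (GNU coreutils syntax).
stat -c %a "$keyfile"   # prints: 600
rm -f "$keyfile"
```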
On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty/).
After logging in, you will see the command prompt:
@@ -55,11 +55,11 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.salomon ~]$
```
>The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage.html).
>The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage/).
Data Transfer
-------------
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols.
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy)![external](../../img/external.png) and sftp protocols.
In case large volumes of data are transferred, use dedicated data mover nodes cedge[1-3].salomon.it4i.cz for increased performance.
@@ -73,7 +73,7 @@ HTML commented section #1 (removed cedge servers from the table)
|login3.salomon.it4i.cz|22|scp, sftp|
|login4.salomon.it4i.cz|22|scp, sftp|
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
HTML commented section #2 (ssh transfer performance data need to be verified)
@@ -93,7 +93,7 @@ or
local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
```
Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs)
A very convenient way to transfer files in and out of the Salomon computer is the FUSE filesystem [sshfs](http://linux.die.net/man/1/sshfs)![external](../../img/external.png)
```bash
local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
@@ -109,6 +109,6 @@ $ man scp
$ man sshfs
```
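The repeated `-o IdentityFile=/path/to/id_rsa` options in the scp/sftp/sshfs examples above can be set once in the client's `~/.ssh/config`; a sketch (the host alias is hypothetical):

```
Host salomon
    HostName salomon.it4i.cz
    User username
    IdentityFile /path/to/id_rsa
```

With this entry, `ssh salomon`, `scp file salomon:`, and `sshfs salomon:. mountpoint` pick up the key automatically.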
On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc.
On Windows, use [WinSCP client](http://winscp.net/eng/download.php)![external](../../img/external.png) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/)![external](../../img/external.png) provides a way to mount the Salomon filesystems directly as an external disc.
More information about the shared file systems is available [here](storage/storage.html).
\ No newline at end of file
More information about the shared file systems is available [here](storage/storage/).
\ No newline at end of file
@@ -47,7 +47,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Salomon Cluster.
First, establish the remote port forwarding form the login node, as [described above](outgoing-connections.html#port-forwarding-from-login-nodes).
First, establish the remote port forwarding from the login node, as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
......@@ -69,12 +69,12 @@ To establish local proxy server on your workstation, install and run SOCKS proxy
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/)![external](../../img/external.png) server.
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections.html#port-forwarding-from-login-nodes).
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
```bash
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections.html#port-forwarding-from-compute-nodes) as well.
\ No newline at end of file
Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well.
\ No newline at end of file
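To verify that a forwarded port is actually listening before pointing applications at it, bash's built-in `/dev/tcp` can be used. A sketch (the `check_port` helper is ours; port 6000 matches the example above):

```bash
# Probe a TCP port using bash's /dev/tcp; prints "open" or "closed".
check_port() {
  local host="$1" port="$2"
  if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# on the login node, after the ssh -R 6000:localhost:1080 forward is established:
# check_port localhost 6000
```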
......@@ -18,7 +18,7 @@ It is impossible to connect to VPN from other operating systems.
VPN client installation
------------------------------------
You can install the VPN client from the web interface after a successful login with LDAP credentials at <https://vpn.it4i.cz/user>
You can install the VPN client from the web interface after a successful login with LDAP credentials at <https://vpn.it4i.cz/user>![external](../../img/external.png)
![](vpn_web_login.png)
......@@ -47,7 +47,7 @@ Working with VPN client
You can use the graphical user interface or the command line interface to run the VPN client on all supported operating systems. We suggest using the GUI.
Before the first login to the VPN, you have to fill in the URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** in the text field.
Before the first login to the VPN, you have to fill in the URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)![external](../../img/external.png)** in the text field.
![](vpn_contacting_https_cluster.png)
......
......@@ -30,7 +30,7 @@ fi
In order to configure your shell for running a particular application on Salomon, we use the Module package interface.
Application modules on the Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
Application modules on the Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild")![external](../img/external.png). The modules are divided into the following structure:
```bash
base: Default module class
......@@ -120,5 +120,4 @@ On Salomon, we have currently following toolchains installed:
|gompi|GCC, OpenMPI|
|goolf|BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK|
|iompi|OpenMPI, icc, ifort|
|iccifort|icc, ifort|
|iccifort|icc, ifort|
\ No newline at end of file
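A typical session with the module interface then looks as follows; the version string below is an assumption, so check `module avail` for what is actually installed:

```bash
$ module avail                # list all available modules
$ module load iompi/2015.03   # hypothetical version; loads OpenMPI, icc and ifort together
$ module list                 # verify what is loaded in the current shell
```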
......@@ -5,7 +5,7 @@ Introduction
------------
The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with a capacity of 1.69 PB, which is available for the scratch project data. User access to the Salomon cluster is provided by four login nodes.
[More about schematic representation of the Salomon cluster compute nodes IB topology](../network/ib-single-plane-topology.md).
[More about schematic representation of the Salomon cluster compute nodes IB topology](../network/ib-single-plane-topology/).
![Salomon](salomon-2)
......@@ -19,7 +19,7 @@ General information
|Primary purpose|High Performance Computing|
|Architecture of compute nodes|x86-64|
|Operating system|CentOS 6.7 Linux|
|[**Compute nodes**](../compute-nodes.md)||
|[**Compute nodes**](../compute-nodes/)||
|Totally|1008|
|Processor|2x Intel Xeon E5-2680v3, 2.5GHz, 12cores|
|RAM|128GB, 5.3GB per core, DDR4@2133 MHz|
......@@ -39,7 +39,7 @@ Compute nodes
|w/o accelerator|576|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|-|
|MIC accelerated|432|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|2x Intel Xeon Phi 7120P, 61cores, 16GB RAM|
For more details please refer to the [Compute nodes](../compute-nodes.md).
For more details please refer to the [Compute nodes](../compute-nodes/).
Remote visualization nodes
--------------------------
......
Introduction
============
Welcome to the Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores and at least 128GB RAM. Nodes are interconnected by a 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The cluster comprises 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview-1/hardware-overview.html).
Welcome to the Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores and at least 128GB RAM. Nodes are interconnected by a 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The cluster comprises 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)![external](../img/external.png) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)![external](../img/external.png)
**Water-cooled Compute Nodes With MIC Accelerator**
......
7D Enhanced Hypercube
=====================
[More about Job submission - Placement by IB switch / Hypercube dimension.](../resource-allocation-and-job-execution/job-submission-and-execution.md)
[More about Job submission - Placement by IB switch / Hypercube dimension.](../resource-allocation-and-job-execution/job-submission-and-execution/)
Nodes may be selected via the PBS resource attribute ehc_[1-7]d.
......@@ -15,7 +15,7 @@ Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
|6D|ehc_6d|
|7D|ehc_7d|
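For illustration, the attribute is typically passed at submission time through a PBS placement statement; the exact syntax below is an assumption, so check the job submission documentation before relying on it:

```bash
# hypothetical: pin a 4-node job inside one 1D enhanced-hypercube group
$ qsub -A PROJECT-ID -q qprod -l select=4:ncpus=24 -l place=group=ehc_1d ./myjob.sh
```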
[Schematic representation of the Salomon cluster IB single-plane topology represents hypercube dimension 0](ib-single-plane-topology.md).
[Schematic representation of the Salomon cluster IB single-plane topology represents hypercube dimension 0](ib-single-plane-topology/).
### 7D Enhanced Hypercube {#d-enhanced-hypercube}
......
......@@ -17,7 +17,7 @@ Each colour in each physical IRU represents one dual-switch ASIC switch.
### IB single-plane topology - Accelerated nodes
Each of the 3 inter-connected D racks are equivalent to one half of Mcell rack. 18x D rack with MIC accelerated nodes [r21-r38] are equivalent to 3 Mcell racks as shown in a diagram [7D Enhanced Hypercube](7d-enhanced-hypercube.md).
Each of the 3 inter-connected D racks are equivalent to one half of Mcell rack. 18x D rack with MIC accelerated nodes [r21-r38] are equivalent to 3 Mcell racks as shown in a diagram [7D Enhanced Hypercube](7d-enhanced-hypercube/).
As shown in a diagram ![IB Topology](Salomon_IB_topology.png):
......
Network
=======
All compute and login nodes of Salomon are interconnected by the 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network and by a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)
network. Only the [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network may be used to transfer user data.
All compute and login nodes of Salomon are interconnected by the 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network and by a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)![external](../../img/external.png)
network. Only the [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand)![external](../../img/external.png) network may be used to transfer user data.
Infiniband Network
------------------
All compute and login nodes of Salomon are interconnected by the 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube.md).
All compute and login nodes of Salomon are interconnected by the 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand)![external](../../img/external.png) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/).
Read more about the schematic representation of the Salomon cluster [IB single-plane topology](ib-single-plane-topology.md)
([hypercube dimension](7d-enhanced-hypercube.md) 0).
Read more about the schematic representation of the Salomon cluster [IB single-plane topology](ib-single-plane-topology/)
([hypercube dimension](7d-enhanced-hypercube/) 0).
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.17.0.0 (mask 255.255.224.0). MPI may be used to establish native InfiniBand connections among the nodes.
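The mask 255.255.224.0 means the ib0 addresses span 10.17.0.0 through 10.17.31.255. A short sketch (the `in_ib_range` helper is ours) making the mask arithmetic explicit:

```bash
# Check whether an IPv4 address falls inside the ib0 range
# 10.17.0.0 with mask 255.255.224.0, i.e. 10.17.0.0 - 10.17.31.255.
in_ib_range() {
  local a b c oldifs="$IFS"
  IFS=.; set -- $1; IFS="$oldifs"   # split the dotted quad into $1..$4
  a=$1; b=$2; c=$3
  # the mask keeps the top 3 bits of the third octet: c & 224 must be 0
  [ "$a" -eq 10 ] && [ "$b" -eq 17 ] && [ $(( c & 224 )) -eq 0 ]
}

in_ib_range 10.17.5.200 && echo "in range"      # 5 & 224 == 0
in_ib_range 10.17.40.1  || echo "out of range"  # 40 & 224 == 32
```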
......
......@@ -3,22 +3,22 @@ PRACE User Support
Intro
-----
PRACE users coming to Salomon as a TIER-1 system offered through the DECI calls are in general treated as standard users, so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html) if the same level of access is required.
PRACE users coming to Salomon as a TIER-1 system offered through the DECI calls are in general treated as standard users, so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) if the same level of access is required.
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing with the local documentation here.
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/)![external](../img/external.png) should be read before continuing with the local documentation here.
Help and Support
------------------------
If you have any trouble, need information, want to request support or to install additional software, please use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
Information about the local services is provided in the [introduction of the general user documentation](introduction.html). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
Information about the local services is provided in the [introduction of the general user documentation](introduction/). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
Obtaining Login Credentials
---------------------------
In general, PRACE users already have a PRACE account set up through their HOMESITE (institution from their country) as a result of a rewarded PRACE project proposal. This includes a signed PRACE AuP, generated and registered certificates, etc.
If there is a special need, a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Salomon cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding [section of the general documentation here](../get-started-with-it4innovations/obtaining-login-credentials.html).
If there is a special need, a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Salomon cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding [section of the general documentation here](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
Accessing the cluster
---------------------
......@@ -31,11 +31,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here:
- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)![external](../img/external.png)
- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)![external](../img/external.png)
- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)![external](../img/external.png)
- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)![external](../img/external.png)
- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)![external](../img/external.png)
Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
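The command itself falls outside this hunk; with the Globus Toolkit a proxy certificate is typically created and inspected as follows (a sketch, not site-specific instructions):

```bash
$ grid-proxy-init     # creates a short-lived proxy from your certificate
$ grid-proxy-info     # shows the proxy subject and remaining lifetime
```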
......@@ -95,7 +95,7 @@ When logging from other PRACE system, the prace_service script can be used:
$ gsissh `prace_service -e -s salomon`
```
Although the preferred and recommended file transfer mechanism is [using GridFTP](prace.html#file-transfers), the GSI SSH
Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH
implementation on Salomon also supports SCP, so gsiscp can be used for small file transfers:
```bash
......@@ -110,9 +110,9 @@ implementation on Salomon supports also SCP, so for small files transfer gsiscp
### Access to X11 applications (VNC)
If the user needs to run an X11-based graphical application and does not have an X11 server, the applications can be run using the VNC service. If the user is using regular SSH-based access, please see the [section in the general documentation](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html).
If the user needs to run an X11-based graphical application and does not have an X11 server, the applications can be run using the VNC service. If the user is using regular SSH-based access, please see the [section in the general documentation](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
If the user uses GSI SSH-based access, then the procedure is similar to the SSH-based access ([look here](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)), only the port forwarding must be done using GSI SSH:
If the user uses GSI SSH-based access, then the procedure is similar to the SSH-based access ([look here](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)), only the port forwarding must be done using GSI SSH:
```bash
$ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
......@@ -120,11 +120,11 @@ If the user uses GSI SSH based access, then the procedure is similar to the SSH
### Access with SSH
After successfully obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see the [section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
After successfully obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see the [section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access/).
File transfers
------------------
PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access/).
Apart from the standard mechanisms, for PRACE users to transfer data to/from the Salomon cluster, a GridFTP server running the Globus Toolkit GridFTP service is available. The service is accessible from the public Internet as well as from the internal PRACE network (accessible only to other PRACE partners).
......@@ -203,7 +203,7 @@ Generally both shared file systems are available through GridFTP:
|/home|Lustre|Default HOME directories of users in format /home/prace/login/|
|/scratch|Lustre|Shared SCRATCH mounted on the whole cluster|
More information about the shared file systems is available [here](storage.html).
More information about the shared file systems is available [here](storage/storage/).
Please note that for PRACE users a "prace" directory is also used on the SCRATCH file system.
......@@ -216,13 +216,13 @@ Usage of the cluster
--------------------
There are some limitations for PRACE users when using the cluster. By default, PRACE users aren't allowed to access special queues in PBS Pro that give high priority or exclusive access to special equipment such as accelerated nodes and high memory (fat) nodes. There may also be restrictions on obtaining a working license for the commercial software installed on the cluster, mostly because of license agreements or an insufficient number of licenses.