diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 91877db5691bf8eff9548076e89bd85027e354e6..b5405032bc033d841de5fb6f3be27fb8c5d9b5b0 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -9,7 +9,7 @@ docs: image: davidhrbac/docker-mdcheck:latest allow_failure: true script: - - mdl -r ~MD013 *.md docs.it4i/ + - mdl -r ~MD013,~MD033,~MD014 *.md docs.it4i/ two spaces: stage: test diff --git a/README.md b/README.md index b68f738eb644b91457be9d28d183bf66b6075f17..443314c559cfe20f0fbb64151e22cc16c7d86a06 100644 --- a/README.md +++ b/README.md @@ -1,20 +1,24 @@ -# User documentation - TEST +User documentation +================== -## Environments +Environments +------------ -* https://docs-new.it4i.cz - master branch -* https://docs-new.it4i.cz/devel/$BRANCH_NAME - maps the branches +* [https://docs-new.it4i.cz](https://docs-new.it4i.cz) - master branch +* [https://docs-new.it4i.cz/devel/$BRANCH_NAME](https://docs-new.it4i.cz/devel/$BRANCH_NAME) - maps the branches -## URLs +URLs +---- -* http://facelessuser.github.io/pymdown-extensions/ -* http://squidfunk.github.io/mkdocs-material/ +* [http://facelessuser.github.io/pymdown-extensions/](http://facelessuser.github.io/pymdown-extensions/) +* [http://squidfunk.github.io/mkdocs-material/](http://squidfunk.github.io/mkdocs-material/) -## Rules +Rules +----- -* spellcheck https://github.com/lukeapage/node-markdown-spellcheck +* spellcheck [https://github.com/lukeapage/node-markdown-spellcheck](https://github.com/lukeapage/node-markdown-spellcheck) -* SI units http://physics.nist.gov/cuu/Units/checklist.html +* SI units [http://physics.nist.gov/cuu/Units/checklist.html](http://physics.nist.gov/cuu/Units/checklist.html) ``` fair-share diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md index 2d4f1707c960531d25e64c206d5895a7a4e10338..f7b65a0f137a065f3a39e12670497ca11ecc5592 100644 ---
a/docs.it4i/anselm-cluster-documentation/compute-nodes.md +++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md @@ -1,55 +1,53 @@ -Compute Nodes -============= +# Compute Nodes + +## Nodes Configuration -Nodes Configuration -------------------- Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes. -###Compute Nodes Without Accelerator - -- 180 nodes -- 2880 cores in total -- two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node -- 64 GB of physical memory per node -- one 500GB SATA 2,5” 7,2 krpm HDD per node -- bullx B510 blade servers -- cn[1-180] - -###Compute Nodes With GPU Accelerator - -- 23 nodes -- 368 cores in total -- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node -- 96 GB of physical memory per node -- one 500GB SATA 2,5” 7,2 krpm HDD per node -- GPU accelerator 1x NVIDIA Tesla Kepler K20 per node -- bullx B515 blade servers -- cn[181-203] - -###Compute Nodes With MIC Accelerator - -- 4 nodes -- 64 cores in total -- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node -- 96 GB of physical memory per node -- one 500GB SATA 2,5” 7,2 krpm HDD per node -- MIC accelerator 1x Intel Phi 5110P per node -- bullx B515 blade servers -- cn[204-207] - -###Fat Compute Nodes - -- 2 nodes -- 32 cores in total -- 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node -- 512 GB of physical memory per node -- two 300GB SAS 3,5”15krpm HDD (RAID1) per node -- two 100GB SLC SSD per node -- bullx R423-E3 servers -- cn[208-209] +### Compute Nodes Without Accelerator + +* 180 nodes +* 2880 cores in total +* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node +* 64 GB of physical memory per node +* one 500GB SATA 2,5” 7,2 krpm HDD per node +* bullx B510 blade servers +* cn[1-180] + +### Compute Nodes With GPU Accelerator + +* 23 nodes +* 368 cores in total +* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz 
processors per node +* 96 GB of physical memory per node +* one 500GB SATA 2,5” 7,2 krpm HDD per node +* GPU accelerator 1x NVIDIA Tesla Kepler K20 per node +* bullx B515 blade servers +* cn[181-203] + +### Compute Nodes With MIC Accelerator + +* 4 nodes +* 64 cores in total +* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node +* 96 GB of physical memory per node +* one 500GB SATA 2,5” 7,2 krpm HDD per node +* MIC accelerator 1x Intel Phi 5110P per node +* bullx B515 blade servers +* cn[204-207] + +### Fat Compute Nodes + +* 2 nodes +* 32 cores in total +* 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node +* 512 GB of physical memory per node +* two 300GB SAS 3,5”15krpm HDD (RAID1) per node +* two 100GB SLC SSD per node +* bullx R423-E3 servers +* cn[208-209]  - **Figure Anselm bullx B510 servers** ### Compute Nodes Summary @@ -61,31 +59,29 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu |Nodes with MIC accelerator|4|cn[204-207]|96GB|16 @ 2.3GHz|qmic, qprod| |Fat compute nodes|2|cn[208-209]|512GB|16 @ 2.4GHz|qfat, qprod| -Processor Architecture ----------------------- +## Processor Architecture + Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes without accelerator and fat nodes) and Intel Xeon E5-2470 (nodes with accelerator). Processors support Advanced Vector Extensions (AVX) 256-bit instruction set. 
### Intel Sandy Bridge E5-2665 Processor -- eight-core -- speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology -- peak performance: 19.2 GFLOP/s per - core -- caches: - - L2: 256 KB per core - - L3: 20 MB per processor -- memory bandwidth at the level of the processor: 51.2 GB/s +* eight-core +* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology +* peak performance: 19.2 GFLOP/s per core +* caches: + * L2: 256 KB per core + * L3: 20 MB per processor +* memory bandwidth at the level of the processor: 51.2 GB/s ### Intel Sandy Bridge E5-2470 Processor -- eight-core -- speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology -- peak performance: 18.4 GFLOP/s per - core -- caches: - - L2: 256 KB per core - - L3: 20 MB per processor -- memory bandwidth at the level of the processor: 38.4 GB/s +* eight-core +* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology +* peak performance: 18.4 GFLOP/s per core +* caches: + * L2: 256 KB per core + * L3: 20 MB per processor +* memory bandwidth at the level of the processor: 38.4 GB/s Nodes equipped with Intel Xeon E5-2665 CPU have set PBS resource attribute cpu_freq = 24, nodes equipped with Intel Xeon E5-2470 CPU have set PBS resource attribute cpu_freq = 23. @@ -101,35 +97,34 @@ Intel Turbo Boost Technology is used by default, you can disable it for all nod $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I ``` -Memory Architecture -------------------- +## Memory Architecture ### Compute Node Without Accelerator -- 2 sockets -- Memory Controllers are integrated into processors. - - 8 DDR3 DIMMs per node - - 4 DDR3 DIMMs per CPU - - 1 DDR3 DIMMs per channel - - Data rate support: up to 1600MT/s -- Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz +* 2 sockets +* Memory Controllers are integrated into processors. 
+ * 8 DDR3 DIMMs per node + * 4 DDR3 DIMMs per CPU + * 1 DDR3 DIMM per channel + * Data rate support: up to 1600MT/s +* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz ### Compute Node With GPU or MIC Accelerator -- 2 sockets -- Memory Controllers are integrated into processors. - - 6 DDR3 DIMMs per node - - 3 DDR3 DIMMs per CPU - - 1 DDR3 DIMMs per channel - - Data rate support: up to 1600MT/s -- Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz +* 2 sockets +* Memory Controllers are integrated into processors. + * 6 DDR3 DIMMs per node + * 3 DDR3 DIMMs per CPU + * 1 DDR3 DIMM per channel + * Data rate support: up to 1600MT/s +* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz ### Fat Compute Node -- 2 sockets -- Memory Controllers are integrated into processors. - - 16 DDR3 DIMMs per node - - 8 DDR3 DIMMs per CPU - - 2 DDR3 DIMMs per channel - - Data rate support: up to 1600MT/s -- Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz +* 2 sockets +* Memory Controllers are integrated into processors. + * 16 DDR3 DIMMs per node + * 8 DDR3 DIMMs per CPU + * 2 DDR3 DIMMs per channel + * Data rate support: up to 1600MT/s +* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md index ffdac28026d210c248e820eaee604733ae5af8e4..6cf377ecf0fe651ee18040170eeb29a5578f4907 100644 --- a/docs.it4i/anselm-cluster-documentation/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/introduction.md @@ -1,13 +1,11 @@ -Introduction -============ +# Introduction Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB hard disk drive.
Nodes are interconnected by a fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). -The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). +The cluster runs the bullx Linux [operating system](software/operating-system/), which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). User data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users. The PBS Professional workload manager provides [computing resources allocations and job execution](resources-allocation-policy/). Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources/), [obtain login credentials](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/), and [access the cluster](shell-and-data-access/).
- diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md index b500a5a2be7cc0275961858008136d8de903b1c7..a7dd204066f718ced9ca569f865c8cc5c9e4b9ff 100644 --- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md +++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md @@ -1,19 +1,18 @@ -Job submission and execution -============================ +# Job submission and execution + +## Job Submission -Job Submission --------------- When allocating computational resources for the job, please specify -1. suitable queue for your job (default is qprod) -2. number of computational nodes required -3. number of cores per node required -4. maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed -5. Project ID -6. Jobscript or interactive switch +1. suitable queue for your job (default is qprod) +1. number of computational nodes required +1. number of cores per node required +1. maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed +1. Project ID +1. Jobscript or interactive switch !!! Note "Note" - Use the **qsub** command to submit your job to a queue for allocation of the computational resources. + Use the **qsub** command to submit your job to a queue for allocation of the computational resources. Submit the job using the qsub command: @@ -61,8 +60,7 @@ By default, the PBS batch system sends an e-mail only when the job is aborted. D $ qsub -m n ``` -Advanced job placement ----------------------- +## Advanced job placement ### Placement by name @@ -103,8 +101,7 @@ We recommend allocating compute nodes of a single switch when best possible comp In this example, we request all the 18 nodes sharing the isw11 switch for 24 hours. Full chassis will be allocated. 
-Advanced job handling --------------------- +## Advanced job handling ### Selecting Turbo Boost off @@ -133,10 +130,10 @@ The MPI processes will be distributed differently on the nodes connected to the Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options. -Job Management -------------- +## Job Management + !!! Note "Note" - Check status of your jobs using the **qstat** and **check-pbs-jobs** commands + Check the status of your jobs using the **qstat** and **check-pbs-jobs** commands ```bash $ qstat -a @@ -217,7 +214,7 @@ Run loop 3 In this example, we see actual output (some iteration loops) of the job 35141.dm2 !!! Note "Note" - Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands + Manage your queued or running jobs using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands You may release your allocation at any time, using the qdel command @@ -237,18 +234,17 @@ Learn more by reading the pbs man page $ man pbs_professional ``` -Job Execution ------------- +## Job Execution ### Jobscript !!! Note "Note" - Prepare the jobscript to run batch jobs in the PBS queue system + Prepare the jobscript to run batch jobs in the PBS queue system The jobscript is a user-made script controlling the sequence of commands for executing the calculation. It is often written in bash; other scripts may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and executed by the PBS Professional workload manager. !!! Note "Note" - The jobscript or interactive shell is executed on first of the allocated nodes. + The jobscript or interactive shell is executed on the first of the allocated nodes. ```bash $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob @@ -278,7 +274,7 @@ $ pwd In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory. !!!
Note "Note" - All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to user. + All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user. The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well. @@ -310,7 +306,7 @@ In this example, the hostname program is executed via pdsh from the interactive ### Example Jobscript for MPI Calculation !!! Note "Note" - Production jobs must use the /scratch directory for I/O + Production jobs must use the /scratch directory for I/O The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy outputs to the home directory. @@ -342,12 +338,12 @@ exit In this example, some directory on /home holds the input file input and executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where the qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes. !!! Note "Note" - Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts. + Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts. In some cases, it may be impractical to copy the inputs to scratch and outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on shared /scratch before the job submission and retrieve the outputs manually, after all calculations are finished. !!! Note "Note" - Store the qsub options within the jobscript.
Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution. + Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution. Example jobscript for an MPI job with preloaded inputs and executables, options for qsub are stored within the script : @@ -380,7 +376,7 @@ sections. ### Example Jobscript for Single Node Calculation !!! Note "Note" - Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. + Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Example jobscript for single node calculation, using [local scratch](storage/) on the node: diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md index c48fc3e226262b265d2e7ba4761519540960d1a7..0b12438381379f4ce57dbeb46fc18720f745c74a 100644 --- a/docs.it4i/anselm-cluster-documentation/prace.md +++ b/docs.it4i/anselm-cluster-documentation/prace.md @@ -1,26 +1,24 @@ -PRACE User Support -================== +# PRACE User Support + +## Intro -Intro ------ PRACE users coming to Anselm as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required. 
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. -Help and Support --------------------- +## Help and Support + If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction/). Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz. -Obtaining Login Credentials ---------------------------- +## Obtaining Login Credentials + In general PRACE users already have a PRACE account setup through their HOMESITE (institution from their country) as a result of rewarded PRACE project proposal. This includes signed PRACE AuP, generated and registered certificates, etc. If there's a special need a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Anselm cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding section of the general documentation here. 
-Accessing the cluster ---------------------- +## Accessing the cluster ### Access with GSI-SSH @@ -30,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here: -- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) -- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) -- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) -- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) +* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) +* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) +* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) +* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) +* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) Before you start to use any of the services don't forget to create a proxy certificate from your certificate: @@ -116,8 +114,8 @@ If the user uses GSI SSH based access, then the procedure is similar to the SSH After successful obtainment of login credentials for the local IT4Innovations account, the PRACE users can access the cluster as regular users using SSH. For more information please see the section in general documentation. -File transfers ------------------- +## File transfers + PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see the section in the general documentation. Apart from the standard mechanisms, for PRACE users to transfer data to/from Anselm cluster, a GridFTP server running Globus Toolkit GridFTP service is available. 
The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners). @@ -199,9 +197,9 @@ Generally both shared file systems are available through GridFTP: More information about the shared file systems is available [here](storage/). -Usage of the cluster -------------------- - There are some limitations for PRACE user when using the cluster. By default PRACE users aren't allowed to access special queues in the PBS Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of insufficient amount of licenses. +## Usage of the cluster + +There are some limitations for PRACE users when using the cluster. By default, PRACE users aren't allowed to access special queues in PBS Pro that grant high priority or exclusive access to special equipment such as accelerated nodes and high-memory (fat) nodes. There may also be restrictions on obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of an insufficient number of licenses. For production runs always use scratch file systems, either the global shared or the local ones. The available file systems are described [here](hardware-overview/). @@ -225,7 +223,7 @@ For PRACE users, the default production run queue is "qprace".
PRACE users can a |---|---|---|---|---|---|---| |**qexp** Express queue|no|none required|2 reserved, 8 total|high|no|1 / 1h| |**qprace** Production queue|yes|> 0|178 w/o accelerator|medium|no|24 / 48 h| -|**qfree** Free resource queue|yes|none required|178 w/o accelerator|very low|no| 12 / 12 h| +|**qfree** Free resource queue|yes|none required|178 w/o accelerator|very low|no|12 / 12 h| **qprace**, the PRACE: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprace. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprace is 12 hours. If the job needs longer time, it must use checkpoint/restart functionality. @@ -238,7 +236,7 @@ PRACE users should check their project accounting using the [PRACE Accounting To Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree". Please note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours". !!! 
Note "Note" - The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> + The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> ```bash $ it4ifree diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md index 0de276a0a1f432b62d48844b5de5ec5eced32487..af1ef71fb5b044a1c227be8f3f3567491e7ce852 100644 --- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md +++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md @@ -1,8 +1,7 @@ -Remote visualization service -============================ +# Remote visualization service + +## Introduction -Introduction ------------- The goal of this service is to provide the users a GPU accelerated use of OpenGL applications, especially for pre- and post- processing work, where not only the GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM. The service is based on integration of open source tools VirtualGL and TurboVNC together with the cluster's job scheduler PBS Professional. @@ -18,17 +17,15 @@ Currently two compute nodes are dedicated for this service with following config |Local disk drive|yes - 500 GB| |Compute network|InfiniBand QDR| -Schematic overview ------------------- +## Schematic overview   -How to use the service ----------------------- +## How to use the service -### Setup and start your own TurboVNC server. +### Setup and start your own TurboVNC server TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/> @@ -36,11 +33,11 @@ TurboVNC is designed and implemented for cooperation with VirtualGL and availabl The procedure is: -#### 1. 
Connect to a login node. +#### 1. Connect to a login node Please [follow the documentation](shell-and-data-access/). -#### 2. Run your own instance of TurboVNC server. +#### 2. Run your own instance of TurboVNC server To have the OpenGL acceleration, **24 bit color depth must be used**. Otherwise only the geometry (desktop size) definition is needed. @@ -58,7 +55,7 @@ Starting applications specified in /home/username/.vnc/xstartup.turbovnc Log file is /home/username/.vnc/login2:1.log ``` -#### 3. Remember which display number your VNC server runs (you will need it in the future to stop the server). +#### 3. Remember which display number your VNC server runs (you will need it in the future to stop the server) ```bash $ vncserver -list @@ -71,7 +68,7 @@ X DISPLAY # PROCESS ID In this example the VNC server runs on display **:1**. -#### 4. Remember the exact login node, where your VNC server runs. +#### 4. Remember the exact login node, where your VNC server runs ```bash $ uname -n @@ -80,7 +77,7 @@ login2 In this example the VNC server runs on **login2**. -#### 5. Remember on which TCP port your own VNC server is running. +#### 5. Remember on which TCP port your own VNC server is running To get the port you have to look into the log file of your VNC server. @@ -91,22 +88,22 @@ $ grep -E "VNC.*port" /home/username/.vnc/login2:1.log In this example the VNC server listens on TCP port **5901**. -#### 6. Connect to the login node where your VNC server runs with SSH to tunnel your VNC session. +#### 6. Connect to the login node where your VNC server runs with SSH to tunnel your VNC session Tunnel the TCP port on which your VNC server is listening. ```bash $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901 ``` *If you use Windows and Putty, please refer to port forwarding setup in the documentation:* [x-window-and-vnc#section-12](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) -#### 7.
If you don't have Turbo VNC installed on your workstation. +#### 7. If you don't have Turbo VNC installed on your workstation Get it from: <http://sourceforge.net/projects/turbovnc/> -#### 8. Run TurboVNC Viewer from your workstation. +#### 8. Run TurboVNC Viewer from your workstation Mind that you should connect through the SSH tunneled port. In this example it is 5901 on your workstation (localhost). @@ -116,11 +114,11 @@ $ vncviewer localhost:5901 *If you use Windows version of TurboVNC Viewer, just run the Viewer and use address **localhost:5901**.* -#### 9. Proceed to the chapter "Access the visualization node." +#### 9. Proceed to the chapter "Access the visualization node" *Now you should have working TurboVNC session connected to your workstation.* -#### 10. After you end your visualization session. +#### 10. After you end your visualization session *Don't forget to correctly shutdown your own VNC server on the login node!* @@ -128,8 +126,8 @@ $ vncviewer localhost:5901 $ vncserver -kill :1 ``` -Access the visualization node ------------------------------ +### Access the visualization node + **To access the node use a dedicated PBS Professional scheduler queue qviz**. The queue has following properties: @@ -141,7 +139,7 @@ Currently when accessing the node, each user gets 4 cores of a CPU allocated, th To access the visualization node, follow these steps: -#### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command. +#### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command *This step is necessary to allow you to proceed with next steps.* @@ -168,7 +166,7 @@ srv8 In this example the visualization session was assigned to node **srv8**. -#### 2. In your VNC session open another terminal (keep the one with interactive PBSPro job open). +#### 2. 
In your VNC session open another terminal (keep the one with interactive PBSPro job open) Set up the VirtualGL connection to the node which PBSPro allocated for our job. @@ -178,13 +176,13 @@ $ vglconnect srv8 You will be connected through the created VirtualGL tunnel to the visualization node, where you will have a shell. -#### 3. Load the VirtualGL module. +#### 3. Load the VirtualGL module ```bash $ module load virtualgl/2.4 ``` -#### 4. Run your desired OpenGL accelerated application using VirtualGL script "vglrun". +#### 4. Run your desired OpenGL accelerated application using VirtualGL script "vglrun" ```bash $ vglrun glxgears @@ -197,26 +195,26 @@ $ module load marc/2013.1 $ vglrun mentat ``` -#### 5. After you end your work with the OpenGL application. +#### 5. After you end your work with the OpenGL application Just log out from the visualization node, exit both opened terminals and end your VNC server session as described above. -Tips and Tricks --------------- +## Tips and Tricks + If you want to increase the responsiveness of the visualization, please adjust your TurboVNC client settings in this way:  To have an idea how the settings affect the resulting picture quality, three levels of "JPEG image quality" are demonstrated: -1. JPEG image quality = 30 +**JPEG image quality = 30**  -2. JPEG image quality = 15 +**JPEG image quality = 15**  -3.
JPEG image quality = 10 +**JPEG image quality = 10**  diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md index 0576b4e4fed3b62e1a62339f4f961643e6e6428e..92d338d84563e33b39153afcf687f5e76fe9ba6d 100644 --- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md +++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md @@ -1,31 +1,30 @@ -Resources Allocation Policy =========================== +# Resources Allocation Policy + +## Introduction -Resources Allocation Policy --------------------------- The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The Fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information is in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview: !!!
Note "Note" - Check the queue status at https://extranet.it4i.cz/anselm/ + Check the queue status at [https://extranet.it4i.cz/anselm/](https://extranet.it4i.cz/anselm/) - |queue |active project |project resources |nodes|min ncpus|priority|authorization|walltime | - | --- | --- | --- | --- | --- | --- | --- | --- | - |qexp |no |none required |2 reserved, 31 totalincluding MIC, GPU and FAT nodes |1 |150 |no |1 h | - |qprod |yes |0 |178 nodes w/o accelerator |16 |0 |no |24/48 h | - |qlong |yes |0 |60 nodes w/o accelerator |16 |0 |no |72/144 h | - |qnvidia, qmic, qfat |yes |0 |23 total qnvidia4 total qmic2 total qfat |16 |200 |yes |24/48 h | - |qfree |yes |none required |178 w/o accelerator |16 |-1024 |no |12 h | +|queue|active project|project resources|nodes|min ncpus|priority|authorization|walltime| +|---|---|---|---|---|---|---|---| +|qexp|no|none required|2 reserved, 31 total including MIC, GPU and FAT nodes|1|150|no|1 h| +|qprod|yes|0|178 nodes w/o accelerator|16|0|no|24/48 h| +|qlong|yes|0|60 nodes w/o accelerator|16|0|no|72/144 h| +|qnvidia, qmic, qfat|yes|0|23 total qnvidia, 4 total qmic, 2 total qfat|16|200|yes|24/48 h| +|qfree|yes|none required|178 w/o accelerator|16|-1024|no|12 h| !!! Note "Note" - **The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue. + **The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply for Director's Discretion projects (DD projects) by default.
Usage of qfree after exhaustion of DD projects computational resources is allowed upon request for this queue. - **The qexp queue is equipped with the nodes not having the very same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during the PSB job submission. +**The qexp queue is equipped with nodes that do not all have the same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during the PBS job submission. -- **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB RAM (cn208-209). This enables to test and tune also accelerated code or code with higher RAM requirements. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour. -- **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours. -- **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated.
The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 * 48 h). -- **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project. -- **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours. +* **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB RAM (cn208-209). 
This also enables testing and tuning of accelerated code or code with higher RAM requirements. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour. +* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours. +* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 * 48 h). +* **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that an active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
+* **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request permission for qfree after exhaustion of computational resources.). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours. ### Notes @@ -37,7 +36,8 @@ Anselm users may check current queue configuration at <https://extranet.it4i.cz/ ### Queue status ->Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/> +!!! tip + Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>  @@ -105,8 +105,7 @@ Options: --incl-finished Include finished jobs ``` -Resources Accounting Policy ------------------------------- +## Resources Accounting Policy ### The Core-Hour @@ -115,7 +114,7 @@ The resources that are currently subject to accounting are the core-hours. The c ### Check consumed resources !!! Note "Note" - The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> + The **it4ifree** command is a part of the it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> Users may check at any time how many core-hours they and their projects have consumed. The command is available on clusters' login nodes.
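Since full nodes of 16 cores are allocated and accounting is in core-hours, the charge for a job can be estimated with a one-line shell calculation. The job size and walltime below are invented figures for illustration only:

```shell
# Core-hours charged for a full-node job: nodes x 16 cores/node x walltime (h).
# Example figures: 4 nodes for the 48-hour qprod maximum runtime.
nodes=4
cores_per_node=16
walltime_hours=48
core_hours=$(( nodes * cores_per_node * walltime_hours ))
echo "${core_hours} core-hours"   # prints: 3072 core-hours
```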
diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md index a72995bb0fb5e73c8518c31a6f94aedd56042d9d..1b318ba453cc225458d8f52150ff04b7697cd2cf 100644 --- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md +++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md @@ -1,8 +1,7 @@ -Accessing the Cluster -============================== +# Accessing the Cluster + +## Shell Access -Shell Access ------------------ The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address. |Login address|Port|Protocol|Login node| @@ -13,11 +12,11 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) -!!! Note "Note" - Please verify SSH fingerprints during the first logon. They are identical on all login nodes: +!!! note + Please verify SSH fingerprints during the first logon. They are identical on all login nodes: - 29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA) - d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA) + 29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA) + d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA) Private key authentication: @@ -55,10 +54,10 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com Example to the cluster login: !!! Note "Note" - The environment is **not** shared between login nodes, except for [shared filesystems](storage/#shared-filesystems). + The environment is **not** shared between login nodes, except for [shared filesystems](storage/#shared-filesystems). 
+ +## Data Transfer -Data Transfer -------------- Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use dedicated data mover node dm1.anselm.it4i.cz for increased performance. |Address|Port|Protocol| @@ -71,14 +70,14 @@ Data in and out of the system may be transferred by the [scp](http://en.wikipedi The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) !!! Note "Note" - Data transfer rates up to **160MB/s** can be achieved with scp or sftp. + Data transfer rates up to **160MB/s** can be achieved with scp or sftp. 1TB may be transferred in 1:50h. To achieve 160MB/s transfer rates, the end user must be connected by 10G line all the way to IT4Innovations and use computer with fast processor for the transfer. Using Gigabit ethernet connection, up to 110MB/s may be expected. Fast cipher (aes128-ctr) should be used. !!! Note "Note" - If you experience degraded data transfer performance, consult your local network provider. + If you experience degraded data transfer performance, consult your local network provider. On linux or Mac, use scp or sftp client to transfer the data to Anselm: @@ -116,9 +115,8 @@ On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer More information about the shared file systems is available [here](storage/). +## Connection restrictions -Connection restrictions ------------------------ Outgoing connections, from Anselm Cluster login nodes to the outside world, are restricted to following ports: |Port|Protocol| @@ -129,17 +127,16 @@ Outgoing connections, from Anselm Cluster login nodes to the outside world, are |9418|git| !!! Note "Note" - Please use **ssh port forwarding** and proxy servers to connect from Anselm to all other remote ports. 
+ Please use **ssh port forwarding** and proxy servers to connect from Anselm to all other remote ports. Outgoing connections from Anselm Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut. -Port forwarding --------------- +## Port forwarding ### Port forwarding from login nodes !!! Note "Note" - Port forwarding allows an application running on Anselm to connect to arbitrary remote host and port. + Port forwarding allows an application running on Anselm to connect to an arbitrary remote host and port. It works by tunneling the connection from Anselm back to the user's workstation and forwarding from the workstation to the remote host. @@ -159,7 +156,7 @@ Port forwarding may be established directly to the remote host. However, this re $ ssh -L 6000:localhost:1234 remote.host.com ``` -Note: Port number 6000 is chosen as an example only. Pick any free port. +!!! note + Port number 6000 is chosen as an example only. Pick any free port. ### Port forwarding from compute nodes @@ -180,7 +178,7 @@ In this example, we assume that port forwarding from login1:6000 to remote.host. Port forwarding is static, each single port is mapped to a particular port on the remote host. A connection to another remote host requires a new forward. !!! Note "Note" - Applications with inbuilt proxy support, experience unlimited access to remote hosts, via single proxy server. + Applications with inbuilt proxy support experience unlimited access to remote hosts via a single proxy server. To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides the functionality. To establish a SOCKS proxy server listening on port 1080, run: @@ -198,13 +196,11 @@ local $ ssh -R 6000:localhost:1080 anselm.it4i.cz Now configure the application's proxy settings to **localhost:6000**.
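The notes above stress that the local port numbers (6000, 1080) are examples only and that any free port will do. A sketch of a helper that picks the first unused port, assuming the ports already in use are passed in as a list (illustrative only, not part of the cluster tooling; a real check would query listening sockets, e.g. with ss or netstat):

```shell
# find_free_port USED_PORTS - print the first port >= 6000 not in USED_PORTS.
# USED_PORTS is a space-separated list supplied by the caller for illustration.
find_free_port() {
  used="$1"
  port=6000
  # -w matches whole words, so 6000 does not falsely match inside 16000.
  while echo "$used" | grep -qw "$port"; do
    port=$((port + 1))
  done
  echo "$port"
}

find_free_port "6000 6001 6003"   # prints 6002
```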
Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well. -Graphical User Interface ------------------------- +## Graphical User Interface -- The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters. -- The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer). +* The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters. +* The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer). -VPN Access ----------- +## VPN Access -- Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/). +* Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/). 
diff --git a/docs.it4i/anselm-cluster-documentation/software/gpi2.md b/docs.it4i/anselm-cluster-documentation/software/gpi2.md index d61fbed6f984945d0751ae4abf6e2e241ddffc1b..8818844f924a3274e69957abfcd9d70533dd8648 100644 --- a/docs.it4i/anselm-cluster-documentation/software/gpi2.md +++ b/docs.it4i/anselm-cluster-documentation/software/gpi2.md @@ -1,16 +1,13 @@ -GPI-2 ===== +# GPI-2 -##A library that implements the GASPI specification +## Introduction -Introduction ------------- Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable and fault tolerant interface for parallel applications. The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments. -Modules ------- +## Modules + The GPI-2, version 1.0.2 is available on Anselm via module gpi2: ```bash @@ -19,10 +16,10 @@ The GPI-2, version 1.0.2 is available on Anselm via module gpi2: The module sets up environment variables required for linking and running GPI-2 enabled applications. This particular command loads the default module, which is gpi2/1.0.2 -Linking ------- -!!! Note "Note" - Link with -lGPI2 -libverbs +## Linking + +!!! note + Link with -lGPI2 -libverbs Load the gpi2 module. Link using the **-lGPI2** and **-libverbs** switches to link your code against GPI-2. The GPI-2 requires the OFED InfiniBand communication library ibverbs. @@ -42,11 +39,10 @@ Load the gpi2 module. Link using **-lGPI2** and **-libverbs** switches to link y $ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs ``` -Running the GPI-2 codes ----------------------- -!!!
Note "Note" - gaspi_run starts the GPI-2 application +!!! note + gaspi_run starts the GPI-2 application The gaspi_run utility is used to start and run GPI-2 applications: @@ -54,7 +50,7 @@ $ gaspi_run -m machinefile ./myprog.x ``` -A machine file (**machinefile**) with the hostnames of nodes where the application will run, must be provided. The machinefile lists all nodes on which to run, one entry per node per process. This file may be hand created or obtained from standard $PBS_NODEFILE: +A machine file (**machinefile**) with the hostnames of nodes where the application will run must be provided. The machinefile lists all nodes on which to run, one entry per node per process. This file may be hand created or obtained from the standard $PBS_NODEFILE: ```bash $ cut -f1 -d"." $PBS_NODEFILE > machinefile @@ -80,8 +76,8 @@ machinefile: This machinefile will run 4 GPI-2 processes, 2 on node cn79 and 2 on node cn80. -!!! Note "Note" - Use the **mpiprocs** to control how many GPI-2 processes will run per node +!!! note + Use **mpiprocs** to control how many GPI-2 processes will run per node Example: @@ -93,13 +89,12 @@ This example will produce $PBS_NODEFILE with 16 entries per node. ### gaspi_logger -!!! Note "Note" - gaspi_logger views the output form GPI-2 application ranks +!!! note + gaspi_logger views the output from GPI-2 application ranks The gaspi_logger utility is used to view the output from all nodes except the master node (rank 0). The gaspi_logger is started, in another session, on the master node - the node where the gaspi_run is executed. The output of the application, when called with gaspi_printf(), will be redirected to the gaspi_logger. Other I/O routines (e.g. printf) will not.
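The machinefile construction above can be tried outside PBS by pointing the same cut command at a mock nodefile. The hostnames and the .bullx domain suffix below are assumed for illustration; on the cluster, $PBS_NODEFILE is supplied by the scheduler:

```bash
# Simulate $PBS_NODEFILE with two entries per node, as mpiprocs=2 would give.
PBS_NODEFILE=$(mktemp)
printf 'cn79.bullx\ncn79.bullx\ncn80.bullx\ncn80.bullx\n' > "$PBS_NODEFILE"

# Strip the domain part: one hostname per line, one line per process.
cut -f1 -d"." "$PBS_NODEFILE" > machinefile
cat machinefile
# cn79
# cn79
# cn80
# cn80
```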
-Example ------- +## Example Following is an example GPI-2 enabled code: @@ -169,4 +164,4 @@ At the same time, in another session, you may start the gaspi logger: [cn80:0] Hello from rank 1 of 2 ``` -In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs library. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes 1 core each. The GPI module must be loaded on the master compute node (in this example the cn79), gaspi_logger is used from different session to view the output of the second process. +In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs library. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes 1 core each. The GPI module must be loaded on the master compute node (in this example cn79); gaspi_logger is used from a different session to view the output of the second process. diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md index 47f3e6478f15d6070bb28df14ee3b2e3226eed31..908641a1974158f103ae123fa9eef08cf3755b40 100644 --- a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md @@ -1,14 +1,13 @@ -Diagnostic component (TEAM) =========================== +# Diagnostic component (TEAM) -### Access +## Access TEAM is available at the [following address](http://omics.it4i.cz/team/) -!!! Note "Note" - The address is accessible only via VPN. +!!! note + The address is accessible only via VPN. -### Diagnostic component (TEAM) +## Diagnostic component VCF files are scanned by this diagnostic tool for known diagnostic disease-associated variants.
When no diagnostic mutation is found, the file can be sent to the disease-causing gene discovery tool to see whether new disease-associated variants can be found. @@ -16,4 +15,4 @@ TEAM (27) is an intuitive and easy-to-use web tool that fills the gap between th  -**Figure 5.** Interface of the application. Panels for defining targeted regions of interest can be set up by just drag and drop known disease genes or disease definitions from the lists. Thus, virtual panels can be interactively improved as the knowledge of the disease increases. +**Figure 5.** Interface of the application. Panels for defining targeted regions of interest can be set up by just dragging and dropping known disease genes or disease definitions from the lists. Thus, virtual panels can be interactively improved as the knowledge of the disease increases. diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md index 2b3be2f52d915af055444920373ade4de68bc1d2..a02517c34f360d9197d5ab25ff09c9ad3d61a23c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md @@ -1,10 +1,9 @@ -Overview ======== +# Overview The human NGS data processing solution -Introduction ------------- +## Introduction + The scope of this OMICS MASTER solution is restricted to human genomics research (disease causing gene discovery in whole human genome or exome) or diagnosis (panel sequencing), although it could be extended in the future to other usages. The pipeline inputs the raw data produced by the sequencing machines and undergoes a processing procedure that consists of quality control, mapping and variant calling steps that result in a file containing the set of variants in the sample. From this point, the prioritization component or the diagnostic component can be launched.
@@ -12,7 +11,7 @@ The pipeline inputs the raw data produced by the sequencing machines and undergo  -**Figure 1.** OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). The data pre-processor converts raw data into a list of variants and annotations for each sequenced patient. These lists files together with primary and secondary (alignment) data files are stored in IT4I sequence DB and uploaded to the discovery (candidate prioritization) or diagnostic component where they can be analyzed directly by the user that produced them, depending of the experimental design carried out. +**Figure 1.** OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). The data pre-processor converts raw data into a list of variants and annotations for each sequenced patient. These list files together with primary and secondary (alignment) data files are stored in the IT4I sequence DB and uploaded to the discovery (candidate prioritization) or diagnostic component, where they can be analyzed directly by the user that produced them, depending on the experimental design carried out. Typical genomics pipelines are composed of several components that need to be launched manually. The advantage of the OMICS MASTER pipeline is that all these components are invoked sequentially in an automated way. @@ -20,8 +19,7 @@ OMICS MASTER pipeline inputs a FASTQ file and outputs an enriched VCF file. This Let’s see each of the OMICS MASTER solution components: -Components ---------- +## Components ### Processing @@ -37,26 +35,26 @@ FastQC& FastQC. These steps are carried out over the original FASTQ file with optimized scripts and include the following steps: sequence cleansing, estimation of base quality scores, elimination of duplicates and statistics. -Input: **FASTQ file.** +Input: **FASTQ file**.
-Output: **FASTQ file plus an HTML file containing statistics on the data.** +Output: **FASTQ file plus an HTML file containing statistics on the data**. FASTQ format: It represents the nucleotide sequence and its corresponding quality scores.  -**Figure 2.**FASTQ file. +**Figure 2.** FASTQ file. #### Mapping -Component:** Hpg-aligner.** Sequence reads are mapped over the human reference genome. SOLiD reads are not covered by this solution; they should be mapped with specific software (among the few available options, SHRiMP seems to be the best one). For the rest of NGS machine outputs we use HPG Aligner. HPG-Aligner is an innovative solution, based on a combination of mapping with BWT and local alignment with Smith-Waterman (SW), that drastically increases mapping accuracy (97% versus 62-70% by current mappers, in the most common scenarios). This proposal provides a simple and fast solution that maps almost all the reads, even those containing a high number of mismatches or indels. -Input: **FASTQ file.** +Input: **FASTQ file**. -Output:** Aligned file in BAM format.** +Output: **Aligned file in BAM format**. -**Sequence Alignment/Map (SAM)** +**Sequence Alignment/Map (SAM)** It is a human readable tab-delimited format in which each read and its alignment is represented on a single line. The format can represent unmapped reads, reads that are mapped to unique locations, and reads that are mapped to multiple locations. @@ -65,55 +63,61 @@ The SAM format (1) consists of one header section and one alignment section. The In SAM, each alignment line has 11 mandatory fields and a variable number of optional fields. The mandatory fields are briefly described in Table 1. They must be present but their value can be a '*' or a zero (depending on the field) if the corresponding information is unavailable.
- |**No.** |**Name** |**Description**| - |--|--| - |1 |QNAME |Query NAME of the read or the read pai | - |2 |FLAG |Bitwise FLAG (pairing,strand,mate strand,etc.) | - |3 |RNAME |<p>Reference sequence NAME | - |4 |POS |<p>1-Based leftmost POSition of clipped alignment | - |5 |MAPQ |<p>MAPping Quality (Phred-scaled) | - |6 |CIGAR |<p>Extended CIGAR string (operations:MIDNSHP) | - |7 |MRNM |<p>Mate REference NaMe ('=' if same RNAME) | - |8 |MPOS |<p>1-Based leftmost Mate POSition | - |9 |ISIZE |<p>Inferred Insert SIZE | - |10 |SEQ |<p>Query SEQuence on the same strand as the reference | - |11 |QUAL |<p>Query QUALity (ASCII-33=Phred base quality) | - -**Table 1.** Mandatory fields in the SAM format. + |**No.**|**Name**|**Description**| + |--|--|--| + |1|QNAME|Query NAME of the read or the read pair| + |2|FLAG|Bitwise FLAG (pairing,strand,mate strand,etc.)| + |3|RNAME|Reference sequence NAME| + |4|POS|1-Based leftmost POSition of clipped alignment| + |5|MAPQ|MAPping Quality (Phred-scaled)| + |6|CIGAR|Extended CIGAR string (operations:MIDNSHP)| + |7|MRNM|Mate REference NaMe ('=' if same RNAME)| + |8|MPOS|1-Based leftmost Mate POSition| + |9|ISIZE|Inferred Insert SIZE| + |10|SEQ|Query SEQuence on the same strand as the reference| + |11|QUAL|Query QUALity (ASCII-33=Phred base quality)| + +**Table 1.** Mandatory fields in the SAM format. The standard CIGAR description of pairwise alignment defines three operations: 'M' for match/mismatch, 'I' for insertion compared with the reference and 'D' for deletion. The extended CIGAR proposed in SAM added four more operations: 'N' for skipped bases on the reference, 'S' for soft clipping, 'H' for hard clipping and 'P' for padding. These support splicing, clipping, multi-part and padded alignments. Figure 3 shows examples of CIGAR strings for different types of alignments.  -**Figure 3.** SAM format file. The â€@SQ’ line in the header section gives the order of reference sequences.
Notably, r001 is the name of a read pair. According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is the second read in the pair (128) and regarded as properly paired (1 + 2); its mate is mapped to 37 on the reverse strand (32). Read r002 has three soft-clipped (unaligned) bases. The coordinate shown in SAM is the position of the first aligned base. The CIGAR string for this alignment contains a P (padding) operation which correctly aligns the inserted sequences. Padding operations can be absent when an aligner does not support multiple sequence alignment. The last six bases of read r003 map to position 9, and the first five to position 29 on the reverse strand. The hard clipping operation H indicates that the clipped sequence is not present in the sequence field. The NM tag gives the number of mismatches. Read r004 is aligned across an intron, indicated by the N operation. +**Figure 3.** SAM format file. The '@SQ' line in the header section gives the order of reference sequences. Notably, r001 is the name of a read pair. According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is the second read in the pair (128) and regarded as properly paired (1 + 2); its mate is mapped to 37 on the reverse strand (32). Read r002 has three soft-clipped (unaligned) bases. The coordinate shown in SAM is the position of the first aligned base. The CIGAR string for this alignment contains a P (padding) operation which correctly aligns the inserted sequences. Padding operations can be absent when an aligner does not support multiple sequence alignment. The last six bases of read r003 map to position 9, and the first five to position 29 on the reverse strand. The hard clipping operation H indicates that the clipped sequence is not present in the sequence field. The NM tag gives the number of mismatches. Read r004 is aligned across an intron, indicated by the N operation.
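Because the mandatory fields of Table 1 are plain tab-separated columns, a SAM alignment line can be inspected with ordinary text tools. The line below is a made-up example loosely modeled on the r001 read discussed in Figure 3:

```bash
# Build a single SAM alignment line; \t separates the 11 mandatory fields.
printf 'r001\t163\tref\t7\t30\t8M2I4M1D3M\t=\t37\t39\tAGCTAA\t*\n' |
  awk -F'\t' '{ print "QNAME="$1, "FLAG="$2, "RNAME="$3, "POS="$4, "CIGAR="$6 }'
# QNAME=r001 FLAG=163 RNAME=ref POS=7 CIGAR=8M2I4M1D3M
```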
-**Binary Alignment/Map (BAM)** +**Binary Alignment/Map (BAM)** BAM is the binary representation of SAM and keeps exactly the same information as SAM. BAM uses lossless compression to reduce the size of the data by about 75% and provides an indexing system that allows reads that overlap a region of the genome to be retrieved and rapidly traversed. #### Quality control, preprocessing and statistics for BAM -**Component:** Hpg-Fastq & FastQC. Some features: +**Component:** Hpg-Fastq & FastQC. + +Some features: -- Quality control: % reads with N errors, % reads with multiple mappings, strand bias, paired-end insert, ... -- Filtering: by number of errors, number of hits, … - - Comparator: stats, intersection, ... +* Quality control + * % reads with N errors + * % reads with multiple mappings + * strand bias + * paired-end insert +* Filtering: by number of errors, number of hits + * Comparator: stats, intersection, ... -**Input:** BAM file. +**Input:** BAM file. -**Output:** BAM file plus an HTML file containing statistics. +**Output:** BAM file plus an HTML file containing statistics. #### Variant Calling -Component:** GATK.** +Component: **GATK**. Identification of single nucleotide variants and indels on the alignments is performed using the Genome Analysis Toolkit (GATK). GATK (2) is a software package developed at the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. -**Input:** BAM +**Input:** BAM -**Output:** VCF +**Output:** VCF -**Variant Call Format (VCF)** +**Variant Call Format (VCF)** VCF (3) is a standardized format for storing the most prevalent types of sequence variation, including SNPs, indels and larger structural variants, together with rich annotations.
The format was developed with the primary intention to represent human genetic variation, but its use is not restricted to diploid genomes and can be used in different contexts as well. Its flexibility and user extensibility allows representation of a wide variety of genomic variation with respect to a single reference sequence. @@ -123,42 +127,42 @@ A VCF file consists of a header section and a data section. The header contains this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion. The REF column shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use the simplest representation possible and the lowest coordinate in cases where the position is ambiguous.](../../../img/fig4.png) -**Figure 4.** (a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/).
Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f ) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion. The REF columns shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use simplest representation possible and lowest coordinate in cases where the position is ambiguous. +**Figure 4.** (a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion.
The REF column shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use the simplest representation possible and the lowest coordinate in cases where the position is ambiguous. -###Annotating +### Annotating -**Component:** HPG-Variant +**Component:** HPG-Variant -The functional consequences of every variant found are then annotated using the HPG-Variant software, which extracts from CellBase**,** the Knowledge database, all the information relevant on the predicted pathologic effect of the variants. +The functional consequences of every variant found are then annotated using the HPG-Variant software, which extracts from CellBase, the knowledge database, all the relevant information on the predicted pathologic effect of the variants. VARIANT (VARIant Analysis Tool) (4) reports information on the variants found, including consequence type and annotations taken from different databases and repositories (SNPs and variants from dbSNP and 1000 genomes, disease-related variants from the Genome-Wide Association Study (GWAS) catalog, Online Mendelian Inheritance in Man (OMIM), Catalog of Somatic Mutations in Cancer (COSMIC) mutations, etc.). VARIANT also produces a rich variety of annotations that include information on the regulatory (transcription factor or miRNA-binding sites, etc.) or structural roles, or on the selective pressures on the sites affected by the variation. This information allows extending the conventional reports beyond the coding regions and expands the knowledge on the contribution of non-coding or synonymous variants to the phenotype studied. -**Input:** VCF +**Input:** VCF -**Output:** The output of this step is the Variant Calling Format (VCF) file, which contains changes with respect to the reference genome with the corresponding QC and functional annotations.
+**Output:** The output of this step is the Variant Calling Format (VCF) file, which contains changes with respect to the reference genome with the corresponding QC and functional annotations. #### CellBase CellBase (5) is a relational database that integrates biological information from different sources and includes: -**Core features:** +**Core features:** We took genome sequences, genes, transcripts, exons, cytobands or cross references (xrefs) identifiers (IDs) from Ensembl (6). Protein information including sequences, xrefs or protein features (natural variants, mutagenesis sites, post-translational modifications, etc.) were imported from UniProt (7). -**Regulatory:** +**Regulatory:** CellBase imports miRNA from miRBase (8); curated and non-curated miRNA targets from miRecords (9), miRTarBase (10), TargetScan (11) and microRNA.org (12) and CpG islands and conserved regions from the UCSC database (13). -**Functional annotation** +**Functional annotation** -OBO Foundry (14) develops many biomedical ontologies that are implemented in OBO format. We designed a SQL schema to store these OBO ontologies and >30 ontologies were imported. OBO ontology term annotations were taken from Ensembl (6). InterPro (15) annotations were also imported. +OBO Foundry (14) develops many biomedical ontologies that are implemented in OBO format. We designed a SQL schema to store these OBO ontologies and more than 30 ontologies were imported. OBO ontology term annotations were taken from Ensembl (6). InterPro (15) annotations were also imported. -**Variation** +**Variation** CellBase includes SNPs from dbSNP (16); SNP population frequencies from HapMap (17), 1000 genomes project (18) and Ensembl (6); phenotypically annotated SNPs were imported from NHGRI GWAS Catalog (19), HGMD (20), Open Access GWAS Database (21), UniProt (7) and OMIM (22); mutations from COSMIC (23) and structural variations from Ensembl (6).
-**Systems biology** +**Systems biology** We also import systems biology information like interactome information from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. BioPAX data exchange format enables the integration of diverse pathway resources. We successfully solved the problem of storing data released in BioPAX format into a SQL relational schema, which allowed us to import Reactome into CellBase. @@ -167,8 +171,8 @@ resources. We successfully solved the problem of storing data released in BioPAX ### [Priorization component (BiERApp)](priorization-component-bierapp/) -Usage ------ +## Usage + First of all, we should load the ngsPipeline module: ```bash @@ -182,27 +186,27 @@ If we launch ngsPipeline with '-h', we will get the usage help: ```bash $ ngsPipeline -h Usage: ngsPipeline.py [-h] -i INPUT -o OUTPUT -p PED --project PROJECT --queue - QUEUE [--stages-path STAGES_PATH] [--email EMAIL] + QUEUE [--stages-path STAGES_PATH] [--email EMAIL] [--prefix PREFIX] [-s START] [-e END] --log Python pipeline optional arguments: - -h, --help show this help message and exit + -h, --help show this help message and exit -i INPUT, --input INPUT -o OUTPUT, --output OUTPUT - Output Data directory + Output Data directory -p PED, --ped PED Ped file with all individuals --project PROJECT Project Id --queue QUEUE Queue Id --stages-path STAGES_PATH - Custom Stages path + Custom Stages path --email EMAIL Email --prefix PREFIX Prefix name for Queue Jobs name -s START, --start START - Initial stage + Initial stage -e END, --end END Final stage - --log Log to file + --log Log to file ``` @@ -233,8 +237,8 @@ second one. Input, output and ped arguments are mandatory. If the output folder does not exist, the pipeline will create it.
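The VCF genotype conventions described in the format section above (allele indexes into the REF/ALT list, with '|' marking phased and '/' unphased data) can be sketched in a few lines of Python. This is an illustrative helper, not part of the pipeline; `parse_genotype` is a hypothetical name:

```python
import re

def parse_genotype(gt):
    """Split a VCF GT value into allele indexes and a phasing flag.

    Allele indexes are 0-based positions into [REF] + ALT alleles
    (0 = reference haplotype); '.' means a missing call.
    '|' separates phased alleles, '/' unphased ones.
    """
    phased = "|" in gt
    alleles = [None if a == "." else int(a) for a in re.split(r"[|/]", gt)]
    return alleles, phased

print(parse_genotype("0|1"))   # ([0, 1], True)  - phased heterozygote
print(parse_genotype("1/1"))   # ([1, 1], False) - unphased homozygous ALT
```

With the phased SAMPLE1 genotypes from Figure 4, this makes explicit that alleles joined by '|' are reported on the same chromosome, which is exactly the property the caption uses to place the C and G alleles together.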
-Examples ---------------------- +## Examples + This is an example usage of NGSpipeline: We have a folder with the following structure in @@ -249,8 +253,8 @@ We have a folder with the following structure in │ ├── sample1_1.fq │ └── sample1_2.fq └── sample2 - ├── sample2_1.fq - └── sample2_2.fq + ├── sample2_1.fq + └── sample2_2.fq ``` The ped file (file.ped) contains the following info: @@ -283,109 +287,106 @@ If we want to re-launch the pipeline from stage 4 until stage 20 we should use t $ ngsPipeline -i /scratch/$USER/omics/sample_data/data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/data/file.ped -s 4 -e 20 --project OPEN-0-0 --queue qprod ``` -Details on the pipeline ------------------------------------- +## Details on the pipeline + +The pipeline calls the following tools: -The pipeline calls the following tools: -- [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput - sequence data. -- [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at +* [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data. +* [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size. -- [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed.
HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed-up mapping high-quality reads, and *Smith-Waterman*> (SW) to increase sensitivity when reads cannot be mapped using BWT. -- [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data. -- [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible. -- [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported. -- [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format. -- [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox. 
- -This listing show which tools are used in each step of the pipeline : - -- stage-00: fastqc -- stage-01: hpg_fastq -- stage-02: fastqc -- stage-03: hpg_aligner and samtools -- stage-04: samtools -- stage-05: samtools -- stage-06: fastqc -- stage-07: picard -- stage-08: fastqc -- stage-09: picard -- stage-10: gatk -- stage-11: gatk -- stage-12: gatk -- stage-13: gatk -- stage-14: gatk -- stage-15: gatk -- stage-16: samtools -- stage-17: samtools -- stage-18: fastqc -- stage-19: gatk -- stage-20: gatk -- stage-21: gatk -- stage-22: gatk -- stage-23: gatk -- stage-24: hpg-variant -- stage-25: hpg-variant -- stage-26: snpEff -- stage-27: snpEff -- stage-28: hpg-variant - -Interpretation --------------------------- +* [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed-up mapping high-quality reads, and *Smith-Waterman* (SW) to increase sensitivity when reads cannot be mapped using BWT. +* [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data. +* [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible. +* [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
+* [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format. +* [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox. + +This listing shows which tools are used in each step of the pipeline: + +* stage-00: fastqc +* stage-01: hpg_fastq +* stage-02: fastqc +* stage-03: hpg_aligner and samtools +* stage-04: samtools +* stage-05: samtools +* stage-06: fastqc +* stage-07: picard +* stage-08: fastqc +* stage-09: picard +* stage-10: gatk +* stage-11: gatk +* stage-12: gatk +* stage-13: gatk +* stage-14: gatk +* stage-15: gatk +* stage-16: samtools +* stage-17: samtools +* stage-18: fastqc +* stage-19: gatk +* stage-20: gatk +* stage-21: gatk +* stage-22: gatk +* stage-23: gatk +* stage-24: hpg-variant +* stage-25: hpg-variant +* stage-26: snpEff +* stage-27: snpEff +* stage-28: hpg-variant + +## Interpretation The output folder contains all the subfolders with the intermediate data. This folder contains the final VCF with all the variants. This file can be uploaded into [TEAM](diagnostic-component-team.html) by using the VCF file button. It is important to note here that the entire management of the VCF file is local: no patient’s sequence data is sent over the Internet thus avoiding any problem of data privacy or confidentiality.  -**Figure 7**. *TEAM upload panel.* *Once the file has been uploaded, a panel must be chosen from the Panel* list. Then, pressing the Run button the diagnostic process starts. +**Figure 7.** *TEAM upload panel. Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts.* -Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts.
TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29)^ and COSMIC (23). +Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts. TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29) and COSMIC (23).  -**Figure 7.** *The panel manager. The elements used to define a panel are (**A**) disease terms, (**B**) diagnostic mutations and (**C**) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the **Primary Diagnostic** box (action **D**). This action, in addition to defining the diseases in the **Primary Diagnostic** box, automatically adds the corresponding genes to the **Genes** box. The panels can be customized by adding new genes (action **F**) or removing undesired genes (action **G**). New disease mutations can be added independently or associated to an already existing disease term (action **E**). Disease terms can be removed by simply dragging them back (action **H**).* +**Figure 7.** The panel manager. The elements used to define a panel are (**A**) disease terms, (**B**) diagnostic mutations and (**C**) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the **Primary Diagnostic** box (action **D**). This action, in addition to defining the diseases in the **Primary Diagnostic** box, automatically adds the corresponding genes to the **Genes** box. The panels can be customized by adding new genes (action **F**) or removing undesired genes (action **G**).
New disease mutations can be added independently or associated with an already existing disease term (action **E**). Disease terms can be removed by simply dragging them back (action **H**). For variant discovery/filtering we should upload the VCF file into BierApp by using the following form: -**Figure 8.** *BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description.** +**Figure 8.** *BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description.* Each prioritization ('job') has three associated screens that facilitate the filtering steps. The first one, the 'Summary' tab, displays statistics of the data set analyzed, containing the samples analyzed, the number and types of variants found and their distribution according to consequence types. The second screen, in the 'Variants and effect' tab, is the actual filtering tool, and the third one, the 'Genome view' tab, offers a representation of the selected variants within the genomic context provided by an embedded version of the Genome Maps Tool (30). - - -**Figure 9.** This picture shows all the information associated to the variants. If a variant has an associated phenotype we could see it in the last column. In this case, the variant 7:132481242 C>T is associated to the phenotype: large intestine tumor. - -References ----------------------- - -1. Heng Li, Bob Handsaker, Alec Wysoker, Tim Fennell, Jue Ruan, Nils Homer, Gabor Marth5, Goncalo Abecasis6, Richard Durbin and 1000 Genome Project Data Processing Subgroup: The Sequence Alignment/Map format and SAMtools. Bioinformatics 2009, 25: 2078-2079. -2. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. *Genome Res* >2010, 20:1297-1303. -3. Petr Danecek, Adam Auton, Goncalo Abecasis, Cornelis A.
Albers, Eric Banks, Mark A. DePristo, Robert E. Handsaker, Gerton Lunter, Gabor T. Marth, Stephen T. Sherry, Gilean McVean, Richard Durbin, and 1000 Genomes Project Analysis Group. The variant call format and VCFtools. Bioinformatics 2011, 27: 2156-2158. -4. Medina I, De Maria A, Bleda M, Salavert F, Alonso R, Gonzalez CY, Dopazo J: VARIANT: Command Line, Web service and Web interface for fast and accurate functional characterization of variants found by Next-Generation Sequencing. Nucleic Acids Res 2012, 40:W54-58. -5. Bleda M, Tarraga J, de Maria A, Salavert F, Garcia-Alonso L, Celma M, Martin A, Dopazo J, Medina I: CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources. Nucleic Acids Res 2012, 40:W609-614. -6. Flicek,P., Amode,M.R., Barrell,D., Beal,K., Brent,S., Carvalho-Silva,D., Clapham,P., Coates,G., Fairley,S., Fitzgerald,S. et al. (2012) Ensembl 2012. Nucleic Acids Res., 40, D84–D90. -7. UniProt Consortium. (2012) Reorganizing the protein space at the Universal Protein Resource (UniProt). Nucleic Acids Res., 40, D71–D75. -8. Kozomara,A. and Griffiths-Jones,S. (2011) miRBase: integrating microRNA annotation and deep-sequencing data. Nucleic Acids Res., 39, D152–D157. -9. Xiao,F., Zuo,Z., Cai,G., Kang,S., Gao,X. and Li,T. (2009) miRecords: an integrated resource for microRNA-target interactions. Nucleic Acids Res., 37, D105–D110. -10. Hsu,S.D., Lin,F.M., Wu,W.Y., Liang,C., Huang,W.C., Chan,W.L., Tsai,W.T., Chen,G.Z., Lee,C.J., Chiu,C.M. et al. (2011) miRTarBase: a database curates experimentally validated microRNA-target interactions. Nucleic Acids Res., 39, D163–D169. -11. Friedman,R.C., Farh,K.K., Burge,C.B. and Bartel,D.P. (2009) Most mammalian mRNAs are conserved targets of microRNAs. Genome Res., 19, 92–105. 12. Betel,D., Wilson,M., Gabow,A., Marks,D.S. and Sander,C. (2008) The microRNA.org resource: targets and expression. Nucleic Acids Res., 36, D149–D153. -13. 
Dreszer,T.R., Karolchik,D., Zweig,A.S., Hinrichs,A.S., Raney,B.J., Kuhn,R.M., Meyer,L.R., Wong,M., Sloan,C.A., Rosenbloom,K.R. et al. (2012) The UCSC genome browser database: extensions and updates 2011. Nucleic Acids Res.,40, D918–D923. -14. Smith,B., Ashburner,M., Rosse,C., Bard,J., Bug,W., Ceusters,W., Goldberg,L.J., Eilbeck,K., Ireland,A., Mungall,C.J. et al. (2007) The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol., 25, 1251–1255. -15. Hunter,S., Jones,P., Mitchell,A., Apweiler,R., Attwood,T.K.,Bateman,A., Bernard,T., Binns,D., Bork,P., Burge,S. et al. (2012) InterPro in 2011: new developments in the family and domain prediction database. Nucleic Acids Res.,40, D306–D312. -16. Sherry,S.T., Ward,M.H., Kholodov,M., Baker,J., Phan,L., Smigielski,E.M. and Sirotkin,K. (2001) dbSNP: the NCBI database of genetic variation. Nucleic Acids Res., 29, 308–311. -17. Altshuler,D.M., Gibbs,R.A., Peltonen,L., Dermitzakis,E., Schaffner,S.F., Yu,F., Bonnen,P.E., de Bakker,P.I., Deloukas,P., Gabriel,S.B. et al. (2010) Integrating common and rare genetic variation in diverse human populations. Nature, 467, 52–58. -18. 1000 Genomes Project Consortium. (2010) A map of human genome variation from population-scale sequencing. Nature, 467, 1061–1073. -19. Hindorff,L.A., Sethupathy,P., Junkins,H.A., Ramos,E.M., Mehta,J.P., Collins,F.S. and Manolio,T.A. (2009) Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl Acad. Sci. USA, 106, 9362–9367. -20. Stenson,P.D., Ball,E.V., Mort,M., Phillips,A.D., Shiel,J.A., Thomas,N.S., Abeysinghe,S., Krawczak,M. and Cooper,D.N. (2003) Human gene mutation database (HGMD): 2003 update. Hum. Mutat., 21, 577–581. -21. Johnson,A.D. and O’Donnell,C.J. (2009) An open access database of genome-wide association results. BMC Med. Genet, 10, 6. -22. McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. 
John Hopkins University Press,Baltimore, MD. -23. Forbes,S.A., Bindal,N., Bamford,S., Cole,C., Kok,C.Y., Beare,D., Jia,M., Shepherd,R., Leung,K., Menzies,A. et al. (2011) COSMIC: mining complete cancer genomes in the catalogue of somatic mutations in cancer. Nucleic Acids Res., 39, D945–D950. -24. Kerrien,S., Aranda,B., Breuza,L., Bridge,A., Broackes-Carter,F., Chen,C., Duesbury,M., Dumousseau,M., Feuermann,M., Hinz,U. et al. (2012) The Intact molecular interaction database in 2012. Nucleic Acids Res., 40, D841–D846. -25. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res., 39, D691–D697. -26. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942. -27. Alemán Z, GarcĂa-GarcĂa F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7. -28. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)> (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. 
[Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")>42 :W88-93. -29. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985. -30. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46. + + +**Figure 9.** This picture shows all the information associated with the variants. If a variant has an associated phenotype, we can see it in the last column. In this case, the variant 7:132481242 C>T is associated with the phenotype: large intestine tumor. + +## References + +1. Heng Li, Bob Handsaker, Alec Wysoker, Tim Fennell, Jue Ruan, Nils Homer, Gabor Marth, Goncalo Abecasis, Richard Durbin and 1000 Genome Project Data Processing Subgroup: The Sequence Alignment/Map format and SAMtools. Bioinformatics 2009, 25: 2078-2079. +1. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. *Genome Res* 2010, 20:1297-1303. +1. Petr Danecek, Adam Auton, Goncalo Abecasis, Cornelis A. Albers, Eric Banks, Mark A. DePristo, Robert E. Handsaker, Gerton Lunter, Gabor T. Marth, Stephen T. Sherry, Gilean McVean, Richard Durbin, and 1000 Genomes Project Analysis Group. The variant call format and VCFtools. Bioinformatics 2011, 27: 2156-2158. +1. Medina I, De Maria A, Bleda M, Salavert F, Alonso R, Gonzalez CY, Dopazo J: VARIANT: Command Line, Web service and Web interface for fast and accurate functional characterization of variants found by Next-Generation Sequencing. Nucleic Acids Res 2012, 40:W54-58. +1.
Bleda M, Tarraga J, de Maria A, Salavert F, Garcia-Alonso L, Celma M, Martin A, Dopazo J, Medina I: CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources. Nucleic Acids Res 2012, 40:W609-614. +1. Flicek,P., Amode,M.R., Barrell,D., Beal,K., Brent,S., Carvalho-Silva,D., Clapham,P., Coates,G., Fairley,S., Fitzgerald,S. et al. (2012) Ensembl 2012. Nucleic Acids Res., 40, D84–D90. +1. UniProt Consortium. (2012) Reorganizing the protein space at the Universal Protein Resource (UniProt). Nucleic Acids Res., 40, D71–D75. +1. Kozomara,A. and Griffiths-Jones,S. (2011) miRBase: integrating microRNA annotation and deep-sequencing data. Nucleic Acids Res., 39, D152–D157. +1. Xiao,F., Zuo,Z., Cai,G., Kang,S., Gao,X. and Li,T. (2009) miRecords: an integrated resource for microRNA-target interactions. Nucleic Acids Res., 37, D105–D110. +1. Hsu,S.D., Lin,F.M., Wu,W.Y., Liang,C., Huang,W.C., Chan,W.L., Tsai,W.T., Chen,G.Z., Lee,C.J., Chiu,C.M. et al. (2011) miRTarBase: a database curates experimentally validated microRNA-target interactions. Nucleic Acids Res., 39, D163–D169. +1. Friedman,R.C., Farh,K.K., Burge,C.B. and Bartel,D.P. (2009) Most mammalian mRNAs are conserved targets of microRNAs. Genome Res., 19, 92–105. +1. Betel,D., Wilson,M., Gabow,A., Marks,D.S. and Sander,C. (2008) The microRNA.org resource: targets and expression. Nucleic Acids Res., 36, D149–D153. +1. Dreszer,T.R., Karolchik,D., Zweig,A.S., Hinrichs,A.S., Raney,B.J., Kuhn,R.M., Meyer,L.R., Wong,M., Sloan,C.A., Rosenbloom,K.R. et al. (2012) The UCSC genome browser database: extensions and updates 2011. Nucleic Acids Res.,40, D918–D923. +1. Smith,B., Ashburner,M., Rosse,C., Bard,J., Bug,W., Ceusters,W., Goldberg,L.J., Eilbeck,K., Ireland,A., Mungall,C.J. et al. (2007) The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol., 25, 1251–1255. +1.
Hunter,S., Jones,P., Mitchell,A., Apweiler,R., Attwood,T.K.,Bateman,A., Bernard,T., Binns,D., Bork,P., Burge,S. et al. (2012) InterPro in 2011: new developments in the family and domain prediction database. Nucleic Acids Res.,40, D306–D312. +1. Sherry,S.T., Ward,M.H., Kholodov,M., Baker,J., Phan,L., Smigielski,E.M. and Sirotkin,K. (2001) dbSNP: the NCBI database of genetic variation. Nucleic Acids Res., 29, 308–311. +1. Altshuler,D.M., Gibbs,R.A., Peltonen,L., Dermitzakis,E., Schaffner,S.F., Yu,F., Bonnen,P.E., de Bakker,P.I., Deloukas,P., Gabriel,S.B. et al. (2010) Integrating common and rare genetic variation in diverse human populations. Nature, 467, 52–58. +1. 1000 Genomes Project Consortium. (2010) A map of human genome variation from population-scale sequencing. Nature, 467, 1061–1073. +1. Hindorff,L.A., Sethupathy,P., Junkins,H.A., Ramos,E.M., Mehta,J.P., Collins,F.S. and Manolio,T.A. (2009) Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl Acad. Sci. USA, 106, 9362–9367. +1. Stenson,P.D., Ball,E.V., Mort,M., Phillips,A.D., Shiel,J.A., Thomas,N.S., Abeysinghe,S., Krawczak,M. and Cooper,D.N. (2003) Human gene mutation database (HGMD): 2003 update. Hum. Mutat., 21, 577–581. +1. Johnson,A.D. and O’Donnell,C.J. (2009) An open access database of genome-wide association results. BMC Med. Genet, 10, 6. +1. McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. Johns Hopkins University Press, Baltimore, MD. +1. Forbes,S.A., Bindal,N., Bamford,S., Cole,C., Kok,C.Y., Beare,D., Jia,M., Shepherd,R., Leung,K., Menzies,A. et al. (2011) COSMIC: mining complete cancer genomes in the catalogue of somatic mutations in cancer. Nucleic Acids Res., 39, D945–D950. +1. Kerrien,S., Aranda,B., Breuza,L., Bridge,A., Broackes-Carter,F., Chen,C., Duesbury,M., Dumousseau,M., Feuermann,M., Hinz,U. et al. (2012) The IntAct molecular interaction database in 2012. Nucleic Acids Res., 40, D841–D846. 
+1. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res., 39, D691–D697. +1. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942. +1. Alemán Z, García-García F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7. +1. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668) (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.") 42:W88-93. +1. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985. +1. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46. 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md index 3c2c24cd07fb3e995fb86ed7798f2131fb853e91..53f94e0125e713fa5f902f09047f295cb9fd9ff6 100644 --- a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md @@ -1,21 +1,20 @@ -Prioritization component (BiERapp) -================================ +# Prioritization component (BiERapp) -### Access +## Access BiERapp is available at the [following address](http://omics.it4i.cz/bierapp/) -!!! Note "Note" - The address is accessible onlyvia VPN. +!!! note + The address is accessible only via VPN. -###BiERapp +## BiERapp -**This tool is aimed to discover new disease genes or variants by studying affected families or cases and controls. It carries out a filtering process to sequentially remove: (i) variants which are not no compatible with the disease because are not expected to have impact on the protein function; (ii) variants that exist at frequencies incompatible with the disease; (iii) variants that do not segregate with the disease. The result is a reduced set of disease gene candidates that should be further validated experimentally.** +**This tool aims to discover new disease genes or variants by studying affected families or cases and controls. It carries out a filtering process to sequentially remove: (i) variants which are not compatible with the disease because they are not expected to have an impact on the protein function; (ii) variants that exist at frequencies incompatible with the disease; (iii) variants that do not segregate with the disease. The result is a reduced set of disease gene candidates that should be further validated experimentally.**
BiERapp (28) efficiently helps in the identification of causative variants in family and sporadic genetic diseases. The program reads lists of predicted variants (nucleotide substitutions and indels) in affected individuals or tumor samples and controls. In family studies, different modes of inheritance can easily be defined to filter out variants that do not segregate with the disease along the family. Moreover, BiERapp integrates additional information such as allelic frequencies in the general population and the most popular damaging scores to further narrow down the number of putative variants in successive filtering steps. BiERapp provides an interactive and user-friendly interface that implements the filtering strategy used in the context of a large-scale genomic project carried out by the Spanish Network for Research in Rare Diseases (CIBERER) and the Medical Genome Project, in which more than 800 exomes have been analyzed.  -**Figure 6**. Web interface to the prioritization tool. This figure shows the interface of the web tool for candidate gene +**Figure 6**. Web interface to the prioritization tool. This figure shows the interface of the web tool for candidate gene prioritization with the filters available. The tool includes a genomic viewer (Genome Maps 30) that enables the representation of the variants in the corresponding genomic coordinates. 
diff --git a/docs.it4i/colors.md b/docs.it4i/colors.md deleted file mode 100644 index 568ebd2e4d42ff5ab4514224e469a30c1d07cc6f..0000000000000000000000000000000000000000 --- a/docs.it4i/colors.md +++ /dev/null @@ -1,62 +0,0 @@ -## Primary colors - -Click on a tile to change the primary color of the theme: - -<button data-md-color-primary="red">Red</button> -<button data-md-color-primary="pink">Pink</button> -<button data-md-color-primary="purple">Purple</button> -<button data-md-color-primary="deep-purple">Deep Purple</button> -<button data-md-color-primary="indigo">Indigo</button> -<button data-md-color-primary="blue">Blue</button> -<button data-md-color-primary="light-blue">Light Blue</button> -<button data-md-color-primary="cyan">Cyan</button> -<button data-md-color-primary="teal">Teal</button> -<button data-md-color-primary="green">Green</button> -<button data-md-color-primary="light-green">Light Green</button> -<button data-md-color-primary="lime">Lime</button> -<button data-md-color-primary="yellow">Yellow</button> -<button data-md-color-primary="amber">Amber</button> -<button data-md-color-primary="orange">Orange</button> -<button data-md-color-primary="deep-orange">Deep Orange</button> -<button data-md-color-primary="brown">Brown</button> -<button data-md-color-primary="grey">Grey</button> -<button data-md-color-primary="blue-grey">Blue Grey</button> - -<script> - var buttons = document.querySelectorAll("button[data-md-color-primary]"); - Array.prototype.forEach.call(buttons, function(button) { - button.addEventListener("click", function() { - document.body.dataset.mdColorPrimary = this.dataset.mdColorPrimary; - }) - }) -</script> - -## Accent colors - -Click on a tile to change the accent color of the theme: - -<button data-md-color-accent="red">Red</button> -<button data-md-color-accent="pink">Pink</button> -<button data-md-color-accent="purple">Purple</button> -<button data-md-color-accent="deep-purple">Deep Purple</button> -<button 
data-md-color-accent="indigo">Indigo</button> -<button data-md-color-accent="blue">Blue</button> -<button data-md-color-accent="light-blue">Light Blue</button> -<button data-md-color-accent="cyan">Cyan</button> -<button data-md-color-accent="teal">Teal</button> -<button data-md-color-accent="green">Green</button> -<button data-md-color-accent="light-green">Light Green</button> -<button data-md-color-accent="lime">Lime</button> -<button data-md-color-accent="yellow">Yellow</button> -<button data-md-color-accent="amber">Amber</button> -<button data-md-color-accent="orange">Orange</button> -<button data-md-color-accent="deep-orange">Deep Orange</button> - -<script> - var buttons = document.querySelectorAll("button[data-md-color-accent]"); - Array.prototype.forEach.call(buttons, function(button) { - button.addEventListener("click", function() { - document.body.dataset.mdColorAccent = this.dataset.mdColorAccent; - }) - }) -</script> diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md index b136cf4a0ace74fe507c3fe6d94169d3cb8fc2eb..eea8cf0cb08c61f102304419fa5223098bd0e612 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md @@ -1,15 +1,13 @@ -VNC -=== +# VNC The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). 
It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network"). The recommended clients are [TightVNC](http://www.tightvnc.com) or [TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform). -Create VNC password ------------------- +## Create VNC password !!! Note "Note" - Local VNC password should be set before the first login. Do use a strong password. + Local VNC password should be set before the first login. Do use a strong password. ```bash [username@login2 ~]$ vncpasswd @@ -17,13 +15,12 @@ Password: Verify: ``` -Start vncserver --------------- +## Start vncserver !!! Note "Note" - To access VNC a local vncserver must be started first and also a tunnel using SSH port forwarding must be established. + To access VNC, a local vncserver must be started first, and a tunnel using SSH port forwarding must be established. - [See below](vnc.md#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61. +[See below](vnc.md#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61. You can check which ports are already occupied. Here you can see that ports " /usr/bin/Xvnc :79" and " /usr/bin/Xvnc :60" are occupied. @@ -67,10 +64,9 @@ username 10296 0.0 0.0 131772 21076 pts/29 SN 13:01 0:01 /usr/bin/Xvn To access the VNC server you have to create a tunnel between the login node using TCP **port 5961** and your machine using a free TCP port (for simplicity the very same, in this case). !!! 
Note "Note" - The tunnel must point to the same login node where you launched the VNC server, eg. login2. If you use just cluster-name.it4i.cz, the tunnel might point to a different node due to DNS round robin. + The tunnel must point to the same login node where you launched the VNC server, e.g. login2. If you use just cluster-name.it4i.cz, the tunnel might point to a different node due to DNS round robin. -Linux/Mac OS example of creating a tunnel ----------------------------------------- +## Linux/Mac OS example of creating a tunnel At your machine, create the tunnel: @@ -109,8 +105,7 @@ You have to destroy the SSH tunnel which is still running at the background afte kill 2022 ``` -Windows example of creating a tunnel ------------------------------------ +## Windows example of creating a tunnel Use PuTTY to log in on the cluster. @@ -133,29 +128,25 @@ Fill the Source port and Destination fields. **Do not forget to click the Add bu Run the VNC client of your choice, select VNC server 127.0.0.1, port 5961 and connect using your VNC password. -Example of starting TigerVNC viewer ----------------------------------- +## Example of starting TigerVNC viewer  In this example, we connect to the VNC server on port 5961, via the ssh tunnel, using the TigerVNC viewer. The connection is encrypted and secured. The VNC server listening on port 5961 provides a screen of 1600x900 pixels. -Example of starting TightVNC Viewer ----------------------------------- +## Example of starting TightVNC Viewer Use your VNC password to log in using the TightVNC Viewer and start a Gnome Session on the login node.  -Gnome session ------------- +## Gnome session You should see the following after a successful login.  
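As a side note on the port numbers used above: a VNC server on display :N listens on TCP port 5900+N, which is why display :61 is tunnelled through port 5961. A small sketch of the arithmetic and the resulting tunnel command (the exact ssh options and the login node name are illustrative assumptions, adapt them to your session):

```bash
# A VNC server on display :N listens on TCP port 5900+N
DISPLAY_NUM=61
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "display :${DISPLAY_NUM} -> TCP port ${VNC_PORT}"

# Hypothetical tunnel command for this port (login2 is an example node name)
echo "ssh -TN -f -L ${VNC_PORT}:localhost:${VNC_PORT} username@login2.cluster-name.it4i.cz"
```

The same arithmetic applies to any display number reported by vncserver; pick one whose port is free on both ends.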
-Disable your Gnome session screensaver -------------------------------------- +## Disable your Gnome session screensaver Open Screensaver preferences dialog: @@ -165,8 +156,7 @@ Uncheck both options below the slider:  -Kill screensaver if locked screen --------------------------------- +## Kill screensaver if locked screen If the screen gets locked you have to kill the screensaver. Do not forget to disable the screensaver then. @@ -178,8 +168,7 @@ username 24316 0.0 0.0 270564 3528 ? Ss 14:12 0:00 gnome-scree [username@login2 .vnc]$ kill 24316 ``` -Kill vncserver after finished work ---------------------------------- +## Kill vncserver after finished work You should kill your VNC server using the command: @@ -195,8 +184,7 @@ Or this way: [username@login2 .vnc]$ pkill vnc ``` -GUI applications on compute nodes over VNC ------------------------------------------ +## GUI applications on compute nodes over VNC The very same methods as described above may be used to run the GUI applications on compute nodes. However, for maximum performance, proceed following these steps: diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md index e4a3c023cbb78cc9071c8440501aacb277d43106..b275082163a3ed4a30785b5824ab0520c8819def 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md @@ -1,13 +1,11 @@ -X Window System -=============== +# X Window System The X Window system is a principal way to get GUI access to the clusters. 
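A convention worth keeping in mind when reading the forwarding examples that follow: an X display :N listens on TCP port 6000+N, and a forwarded DISPLAY value such as localhost:10.0 encodes display number 10 (the trailing .0 is the screen number). A quick sketch of that arithmetic (the sample DISPLAY value is only an illustration):

```bash
# Extract the display number from a sample DISPLAY value and compute the
# TCP port that X display :N listens on (6000 + N)
DISPLAY_SAMPLE="localhost:10.0"
NUM="${DISPLAY_SAMPLE#*:}"   # strip the host part   -> "10.0"
NUM="${NUM%%.*}"             # strip the screen part -> "10"
echo "display :${NUM} -> TCP port $((6000 + NUM))"
```

This is only background arithmetic; the ssh client sets DISPLAY for you when forwarding is active.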
The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network [protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)") that provides a basis for [graphical user interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface") (GUIs) and rich input device capability for [networked computers](http://en.wikipedia.org/wiki/Computer_network "Computer network"). -!!! Note "Note" - The X display forwarding must be activated and the X server running on client side +!!! tip + The X display forwarding must be activated and the X server must be running on the client side -X display ---------- +## X display In order to display the graphical user interface (GUI) of various software tools, you need to enable X display forwarding. On Linux and Mac, log in using the -X option to the ssh client: @@ -15,10 +13,9 @@ In order to display the graphical user interface (GUI) of various software tools, you local $ ssh -X username@cluster-name.it4i.cz ``` -X Display Forwarding on Windows ------------------------------- +## X Display Forwarding on Windows -On Windows use the PuTTY client to enable X11 forwarding. In PuTTY menu, go to Connection->SSH->X11, mark the Enable X11 forwarding checkbox before logging in. Then log in as usual. +On Windows use the PuTTY client to enable X11 forwarding. In the PuTTY menu, go to Connection->SSH->X11, mark the Enable X11 forwarding checkbox before logging in. Then log in as usual. To verify the forwarding, type @@ -34,18 +31,15 @@ localhost:10.0 then the X11 forwarding is enabled. -X Server -------- +## X Server In order to display the graphical user interface (GUI) of various software tools, you need a running X server on your desktop computer. For Linux users, no action is required as the X server is the default GUI environment on most Linux distributions. 
Mac and Windows users need to install and run the X server on their workstations. -X Server on OS X ---------------- +## X Server on OS X Mac OS users need to install [XQuartz server](https://www.xquartz.org). -X Server on Windows ------------------- +## X Server on Windows There is a variety of X servers available for the Windows environment. The commercial Xwin32 is very stable and feature-rich. The Cygwin environment provides the fully featured open-source XWin X server. For simplicity, we recommend the open-source X server by the [Xming project](http://sourceforge.net/projects/xming/). For stability and full features we recommend the [XWin](http://x.cygwin.com/) X server by Cygwin @@ -56,11 +50,10 @@ There is a variety of X servers available for the Windows environment. The commercial Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.shtml](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml) -Running GUI Enabled Applications -------------------------------- +## Running GUI Enabled Applications !!! Note "Note" - Make sure that X forwarding is activated and the X server is running. + Make sure that X forwarding is activated and the X server is running. Then launch the application as usual. Use the & to run the application in the background. @@ -75,8 +68,7 @@ $ xterm In this example, we activate the Intel programming environment tools, then start the graphical gvim editor. -GUI Applications on Compute Nodes -------------------------------- +## GUI Applications on Compute Nodes Allocate the compute nodes using the -X option of the qsub command @@ -94,13 +86,11 @@ $ ssh -X r24u35n680 In this example, we log in on the r24u35n680 compute node, with the X11 forwarding enabled. -The Gnome GUI Environment ----------------------- +## The Gnome GUI Environment The Gnome 2.28 GUI environment is available on the clusters. We recommend using a separate X server window for displaying the Gnome environment. 
-Gnome on Linux and OS X ----------------------- +## Gnome on Linux and OS X To run the remote Gnome session in a window on a Linux/OS X computer, you need to install Xephyr. The Ubuntu package is xserver-xephyr; on OS X it is part of [XQuartz](http://xquartz.macosforge.org/landing/). First, launch Xephyr on the local machine: @@ -126,8 +116,7 @@ xinit /usr/bin/ssh -XT -i .ssh/path_to_your_key yourname@cluster-name.it4i.cz g However, this method does not seem to work with recent Linux distributions and you will need to manually source /etc/profile to properly set environment variables for PBS. -Gnome on Windows --------------- +## Gnome on Windows Use Xlaunch to start the Xming server or run the XWin.exe. Select the "One window" mode. @@ -139,8 +128,7 @@ $ gnome-session & In this way, we run the remote gnome session on the cluster, displaying it in the local X server. -Use System->Log Out to close the gnome-session - +Use System->Log Out to close the gnome-session ### If not able to forward X11 using PuTTY to CygwinX @@ -162,7 +150,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect  -2. Check Putty settings: +1. Check Putty settings: Enable X11 forwarding  
Note "Note" - Read more on [Accessing the Salomon Cluster](../../salomon/shell-and-data-access.md) or [Accessing the Anselm Cluster](../../anselm-cluster-documentation/shell-and-data-access.md) pages. + Read more on [Accessing the Salomon Cluster](../../salomon/shell-and-data-access.md) or [Accessing the Anselm Cluster](../../anselm-cluster-documentation/shell-and-data-access.md) pages. -PuTTY ----- +## PuTTY On **Windows**, use [PuTTY ssh client](shell-access-and-data-transfer/putty/). -SSH keys -------- +## SSH keys Read more about [SSH keys management](shell-access-and-data-transfer/ssh-keys/). -Graphical User Interface ----------------------- +## Graphical User Interface Read more about [X Window System](./graphical-user-interface/x-window-system/). Read more about [Virtual Network Computing (VNC)](./graphical-user-interface/vnc/). - -Accessing IT4Innovations internal resources via VPN -------------------------------------------------- +## Accessing IT4Innovations internal resources via VPN Read more about [VPN Access](vpn-access/). diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md index 394184946dce39df963d18d64997f355843c7ebe..8957f3b1f9304d36db2363165cb748fac540f7ca 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md @@ -1,12 +1,11 @@ # PuTTY (Windows) -Windows PuTTY Installer ----------------------- +## Windows PuTTY Installer We recommend downloading "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator) which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). !!! 
Note "Note" - After installation you can proceed directly to private keys authentication using ["Putty"](putty#putty). + After installation you can proceed directly to private keys authentication using ["Putty"](putty#putty). "Change Password for Existing Private Key" is optional. @@ -14,53 +13,49 @@ We recommend downloading "**A Windows installer for everything except PuTTYt "Pageant" is optional. -PuTTY - how to connect to the IT4Innovations cluster ---------------------------------------------------- +## PuTTY - how to connect to the IT4Innovations cluster -- Run PuTTY -- Enter Host name and Save session fields with [Login address](../../../salomon/shell-and-data-access.md) and browse Connection - > SSH -> Auth menu. The *Host Name* input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**. +* Run PuTTY +* Enter the Host name and Save session fields with the [Login address](../../../salomon/shell-and-data-access.md) and browse the Connection->SSH->Auth menu. The *Host Name* input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time. In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.  -- Category -> Connection - > SSH -> Auth: +* Category->Connection->SSH->Auth: Select Attempt authentication using Pageant. Select Allow agent forwarding. Browse and select your [private key](ssh-keys/) file.  -- Return to Session page and Save selected configuration with *Save* button. +* Return to the Session page and save the selected configuration with the *Save* button.  -- Now you can log in using *Open* button. +* Now you can log in using the *Open* button.  -- Enter your username if the *Host Name* input is not in the format "username@salomon.it4i.cz". 
-- Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.** +* Enter your username if the *Host Name* input is not in the format "username@salomon.it4i.cz". +* Enter the passphrase for the selected [private key](ssh-keys/) file if the Pageant **SSH authentication agent is not used.** -Another PuTTY Settings ---------------------- +## Another PuTTY Settings -- Category -> Windows -> Translation -> Remote character set and select **UTF-8**. -- Category -> Terminal -> Features and select **Disable application keypad mode** (enable numpad) -- Save your configuration on Session page in to Default Settings with *Save* button. +* Category->Windows->Translation->Remote character set and select **UTF-8**. +* Category->Terminal->Features and select **Disable application keypad mode** (enable numpad) +* Save your configuration on the Session page into Default Settings with the *Save* button. -Pageant SSH agent ----------------- +## Pageant SSH agent Pageant holds your private key in memory without needing to retype a passphrase on every login. -- Run Pageant. -- On Pageant Key List press *Add key* and select your private key (id_rsa.ppk). -- Enter your passphrase. -- Now you have your private key in memory without needing to retype a passphrase on every login. +* Run Pageant. +* On the Pageant Key List press *Add key* and select your private key (id_rsa.ppk). +* Enter your passphrase. +* Now you have your private key in memory without needing to retype a passphrase on every login.  -PuTTY key generator ------------------- +## PuTTY key generator PuTTYgen is the PuTTY key generator. You can load in an existing private key and change your passphrase or generate a new public/private key pair. @@ -68,11 +63,11 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and You can change the password of your SSH key with "PuTTY Key Generator". Make sure to back up the key. 
-- Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with *Load* button. -- Enter your current passphrase. -- Change key passphrase. -- Confirm key passphrase. -- Save your private key with *Save private key* button. +* Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with the *Load* button. +* Enter your current passphrase. +* Change the key passphrase. +* Confirm the key passphrase. +* Save your private key with the *Save private key* button.  @@ -80,33 +75,33 @@ You can change the password of your SSH key with "PuTTY Key Generator". Make sur You can generate an additional public/private key pair and insert the public key into the authorized_keys file for authentication with your own private key. -- Start with *Generate* button. +* Start with the *Generate* button.  -- Generate some randomness. +* Generate some randomness.  -- Wait. +* Wait.  -- Enter a *comment* for your key using format 'username@organization.example.com'. +* Enter a *comment* for your key using the format 'username@organization.example.com'. Enter key passphrase. Confirm key passphrase. - Save your new private key `in "*.ppk" `format with *Save private key* button. + Save your new private key in "*.ppk" format with the *Save private key* button.  -- Save the public key with *Save public key* button. +* Save the public key with the *Save public key* button. You can copy the public key out of the 'Public key for pasting into authorized_keys file' box.  -- Export private key in OpenSSH format "id_rsa" using Conversion -> Export OpenSSH key +* Export the private key in OpenSSH format "id_rsa" using Conversion->Export OpenSSH key  -- Now you can insert additional public key into authorized_keys file for authentication with your own private key. +* Now you can insert the additional public key into the authorized_keys file for authentication with your own private key. You must log in using the ssh key received after registration. 
Then proceed to [How to add your own key](../shell-access-and-data-transfer/ssh-keys/). diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md index 9cb17c16b47f81a0e7022acbc48128e1b3acc51c..85fcd1da797fddb65f3258611f7a9de221f2817f 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md @@ -19,12 +19,11 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys !!! Hint Private keys in .ssh directory are without passphrase and allow you to connect within the cluster. -Access privileges on .ssh folder --------------------------------- +## Access privileges on .ssh folder -- .ssh directory: 700 (drwx------) -- Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--) -- Private key (id_rsa/id_rsa.ppk): 600 (-rw-------) +* .ssh directory: 700 (drwx------) +* Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--) +* Private key (id_rsa/id_rsa.ppk): 600 (-rw-------) ```bash cd /home/username/ @@ -36,8 +35,7 @@ Access privileges on .ssh folder chmod 600 .ssh/id_rsa.ppk ``` -Private key ------------ +## Private key !!! Note "Note" The path to a private key is usually /home/username/.ssh/ @@ -76,8 +74,7 @@ An example of private key format: -----END RSA PRIVATE KEY----- ``` -Public key ----------- +## Public key Public key file in "*.pub" format is used to verify a digital signature. Public key is present on the remote side and allows access to the owner of the matching private key. 
@@ -87,8 +84,7 @@ An example of public key format: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpujuOiTKCcGkbbBhrk0Hjmezr5QpM0swscXQE7fOZG0oQSURoapd9tjC9eVy5FvZ339jl1WkJkdXSRtjc2G1U5wQh77VE5qJT0ESxQCEw0S+CItWBKqXhC9E7gFY+UyP5YBZcOneh6gGHyCVfK6H215vzKr3x+/WvWl5gZGtbf+zhX6o4RJDRdjZPutYJhEsg/qtMxcCtMjfm/dZTnXeafuebV8nug3RCBUflvRb1XUrJuiX28gsd4xfG/P6L/mNMR8s4kmJEZhlhxpj8Th0iIc+XciVtXuGWQrbddcVRLxAmvkYAPGnVVOQeNj69pqAR/GXaFAhvjYkseEowQao1 username@organization.example.com ``` -How to add your own key ----------------------- +## How to add your own key First, generate a new keypair of your public and private key: @@ -97,7 +93,7 @@ First, generate a new keypair of your public and private key: ``` !!! Note "Note" - Please, enter **strong** **passphrase** for securing your private key. + Please enter a **strong passphrase** to secure your private key. You can insert an additional public key into the authorized_keys file for authentication with your own private key. Additional records in the authorized_keys file must be delimited by a new line. Users are not advised to remove the default public key from the authorized_keys file. @@ -109,7 +105,6 @@ Example: In this example, we add an additional public key, stored in the file additional_key.pub, into the authorized_keys file. Next time we log in, we will be able to use the private additional_key key to log in. -How to remove your own key -------------------------- +## How to remove your own key Removing your key from authorized_keys can be done simply by deleting the corresponding public key, which can be identified by a comment at the end of the line (e.g. *username@organization.example.com*). 
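The add and remove operations described above are plain line-level edits of the authorized_keys file; the following self-contained sketch demonstrates both steps in a scratch directory with placeholder records (the key strings are not real keys):

```bash
# Work in a scratch directory with placeholder key records (not real keys)
cd "$(mktemp -d)"
printf '%s\n' 'ssh-rsa AAAA...default username@it4i' > authorized_keys

# Add: append the additional public key (records are newline-delimited)
echo 'ssh-rsa AAAA...extra username@organization.example.com' > additional_key.pub
cat additional_key.pub >> authorized_keys

# Remove: keep every record except the one matching the key's comment,
# backing the file up first
cp authorized_keys authorized_keys.bak
grep -v 'username@organization.example.com' authorized_keys.bak > authorized_keys
cat authorized_keys
```

On the cluster, the same grep -v filter removes a key by its trailing comment without touching the remaining records.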
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md index 03e8702677432f344a816a28378c2a9780007ece..b70bfd7f9d3b1fb97fe2ac41599ecf8816db2e44 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md @@ -1,19 +1,17 @@ -VPN - Connection fail in Win 8.1 -================================ +# VPN - Connection fail in Win 8.1 -**Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch** +## Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch AnyConnect users on Windows 8.1 will receive a "Failed to initialize connection subsystem" error after installing the Windows 8.1 02/10/15 security patch. This OS defect, introduced with the 02/10/15 patch update, will also impact Windows 7 users with IE11. Windows Server 2008/2012 are also impacted by this defect, but neither is a supported OS for AnyConnect. -**Workaround:** +## Workaround -- Close the Cisco AnyConnect Window and the taskbar mini-icon -- Right click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder. (C:\Program Files (x86)\Cisco\Cisco AnyConnect Secure Mobility Client) -- Click on the 'Run compatibility troubleshooter' button -- Choose 'Try recommended settings' -- The wizard suggests Windows 8 compatibility. -- Click 'Test Program'. This will open the program. -- Close +* Close the Cisco AnyConnect window and the taskbar mini-icon +* Right-click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder.
(C:\Program Files (x86)\Cisco\Cisco AnyConnect Secure Mobility Client) +* Click on the 'Run compatibility troubleshooter' button +* Choose 'Try recommended settings' +* The wizard suggests Windows 8 compatibility. +* Click 'Test Program'. This will open the program. +* Close  - diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md index 7423e2b407adb81784c92ed6b334dfab0761bfb5..40960cee821bb85c826db8dd9f8a92828878c285 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md @@ -1,21 +1,20 @@ -VPN Access -========== +# VPN Access + +## Accessing IT4Innovations internal resources via VPN -Accessing IT4Innovations internal resources via VPN ---------------------------------------------------- For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems: -- Windows XP -- Windows Vista -- Windows 7 -- Windows 8 -- Linux -- MacOS +* Windows XP +* Windows Vista +* Windows 7 +* Windows 8 +* Linux +* MacOS It is impossible to connect to VPN from other operating systems. -VPN client installation ------------------------------------- +## VPN client installation + You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn.it4i.cz/user>  @@ -40,8 +39,7 @@ After you click on the link, download of installation file will start. After successful download of installation file, you have to execute this tool with administrator's rights and install VPN client manually.
-Working with VPN client ------------------------ +## Working with VPN client You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI. diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md index dc7f0c5ce628953259a709c6a035d8ff6fb6b17f..d49ab6b1d69dd6004886803dadb9862c25cc84e2 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md @@ -1,27 +1,24 @@ -VPN Access -========== +# VPN Access -Accessing IT4Innovations internal resources via VPN ---------------------------------------------------- +## Accessing IT4Innovations internal resources via VPN !!! Note "Note" - **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch** + **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch** - Workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.html) +Workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.html) For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems: -- Windows XP -- Windows Vista -- Windows 7 -- Windows 8 -- Linux -- MacOS +* Windows XP +* Windows Vista +* Windows 7 +* Windows 8 +* Linux +* MacOS It is impossible to connect to VPN from other operating systems. 
-VPN client installation ------------------------------------- +## VPN client installation You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn1.it4i.cz/anselm> @@ -49,12 +46,11 @@ After you click on the link, download of installation file will start. After successful download of installation file, you have to execute this tool with administrator's rights and install VPN client manually. -Working with VPN client ------------------------ +## Working with VPN client You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI. -Before the first login to VPN, you have to fill URL **https://vpn1.it4i.cz/anselm** into the text field. +Before the first login to VPN, you have to fill URL [**https://vpn1.it4i.cz/anselm**](https://vpn1.it4i.cz/anselm) into the text field.  diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md index eb9aedb5ed9758edec9ee583a6d2f495c7e6feae..2ab422dece0646d6565df0d38939305787615501 100644 --- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md +++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md @@ -1,46 +1,39 @@ -Certificates FAQ -================ +# Certificates FAQ FAQ about certificates in general -Q: What are certificates? -------------------------- +## Q: What are certificates? IT4Innovations employs X.509 certificates for secure communication (e. g. credentials exchange) and for grid services related to PRACE, as they present a single method of authentication for all PRACE services, where only one password is required. There are different kinds of certificates, each with a different scope of use. 
We mention here: -- User (Private) certificates -- Certificate Authority (CA) certificates -- Host certificates -- Service certificates +* User (Private) certificates +* Certificate Authority (CA) certificates +* Host certificates +* Service certificates However, users need only manage User and CA certificates. Note that your user certificate is protected by an associated private key, and this **private key must never be disclosed**. -Q: Which X.509 certificates are recognised by IT4Innovations? ------------------------------------------------------------- +## Q: Which X.509 certificates are recognised by IT4Innovations? [The Certificates for Digital Signatures](obtaining-login-credentials/#the-certificates-for-digital-signatures). -Q: How do I get a User Certificate that can be used with IT4Innovations? ------------------------------------------------------------------------ +## Q: How do I get a User Certificate that can be used with IT4Innovations? To get a certificate, you must make a request to your local, IGTF approved, Certificate Authority (CA). Usually you then must visit, in person, your nearest Registration Authority (RA) to verify your affiliation and identity (photo identification is required). Usually, you will then be emailed details on how to retrieve your certificate, although procedures can vary between CAs. If you are in Europe, you can locate [your trusted CA](https://www.eugridpma.org/members/worldmap). In some countries certificates can also be retrieved using the TERENA Certificate Service; see the FAQ below for the link. -Q: Does IT4Innovations support short lived certificates (SLCS)? --------------------------------------------------------------- +## Q: Does IT4Innovations support short lived certificates (SLCS)? Yes, provided that the CA which provides this service is also a member of IGTF. -Q: Does IT4Innovations support the TERENA certificate service?
--------------------------------------------------------------- +## Q: Does IT4Innovations support the TERENA certificate service? Yes, IT4Innovations supports TERENA eScience personal certificates. For more information, please visit [TCS - Trusted Certificate Service](https://tcs-escience-portal.terena.org/), where you can also find out whether your organisation/country can use this service. -Q: What format should my certificate take? ------------------------------------------ +## Q: What format should my certificate take? User Certificates come in many formats, the three most common being the ’PKCS12’, ’PEM’ and the JKS formats. @@ -54,8 +47,7 @@ JKS is the Java KeyStore and may contain both your personal certificate with you To convert your Certificate from p12 to JKS, IT4Innovations recommends using the keytool utility (see separate FAQ entry). -Q: What are CA certificates? ---------------------------- +## Q: What are CA certificates? Certification Authority (CA) certificates are used to verify the link between your user certificate and the authority which issued it. They are also used to verify the link between the host certificate of an IT4Innovations server and the CA which issued that certificate. In essence they establish a chain of trust between you and the target server. Thus, for some grid services, users must have a copy of all the CA certificates. @@ -71,17 +63,15 @@ Lastly, if you need the CA certificates for a personal Globus 5 installation, th If you run this command as ’root’, then it will install the certificates into /etc/grid-security/certificates. If you do not run this as ’root’, then the certificates will be installed into $HOME/.globus/certificates. For Globus, you can download the globuscerts.tar.gz packet [available here](https://winnetou.surfsara.nl/prace/certs/). -Q: What is a DN and how do I find mine? --------------------------------------- +## Q: What is a DN and how do I find mine?
DN stands for Distinguished Name and is part of your user certificate. IT4Innovations needs to know your DN to enable your account to use the grid services. You may use openssl (see below) to determine your DN or, if your browser contains your user certificate, you can extract your DN from your browser. -For Internet Explorer users, the DN is referred to as the "subject" of your certificate. Tools->Internet Options->Content->Certificates->View->Details->Subject. +For Internet Explorer users, the DN is referred to as the "subject" of your certificate. Tools > Internet Options > Content > Certificates > View > Details > Subject. -For users running Firefox under Windows, the DN is referred to as the "subject" of your certificate. Tools->Options->Advanced->Encryption->View Certificates. Highlight your name and then Click View->Details->Subject. +For users running Firefox under Windows, the DN is referred to as the "subject" of your certificate. Tools > Options > Advanced > Encryption > View Certificates. Highlight your name and then click View > Details > Subject. -Q: How do I use the openssl tool? --------------------------------- +## Q: How do I use the openssl tool? The following examples are for Unix/Linux operating systems only. @@ -116,8 +106,7 @@ To check your certificate (e.g., DN, validity, issuer, public key algorithm, etc To download openssl if not pre-installed, [please visit](https://www.openssl.org/source/). On Macintosh Mac OS X computers openssl is already pre-installed and can be used immediately. -Q: How do I create and then manage a keystore? ---------------------------------------------- +## Q: How do I create and then manage a keystore? IT4Innovations recommends the Java-based keytool utility to create and manage keystores, which themselves are stores of keys and certificates. For example, if you want to convert your pkcs12 formatted key pair into a Java keystore you can use the following command.
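The concrete keytool and openssl commands are elided from the diff context above. As a complement, the openssl operations discussed in this FAQ (inspecting a certificate's DN, converting between PEM and PKCS#12) can be sketched as follows. A throwaway self-signed certificate stands in for a real user certificate; file names and the password are illustrative:

```bash
# Generate a throwaway self-signed certificate purely for demonstration
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/O=Example Org/CN=Jane Doe" \
        -keyout "$tmp/userkey.pem" -out "$tmp/usercert.pem" 2>/dev/null

# Inspect the DN ("subject") of the certificate
openssl x509 -in "$tmp/usercert.pem" -noout -subject

# Bundle the PEM certificate and key into a PKCS#12 (p12) file
openssl pkcs12 -export -in "$tmp/usercert.pem" -inkey "$tmp/userkey.pem" \
        -out "$tmp/usercred.p12" -passout pass:demo-password

# Extract the certificate back out of the p12 bundle as PEM
openssl pkcs12 -in "$tmp/usercred.p12" -clcerts -nokeys \
        -passin pass:demo-password -out "$tmp/extracted.pem"
```

With a real certificate, only the inspection and conversion commands are needed; the `req` step merely provides test material.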
@@ -139,8 +128,7 @@ where $mydomain.crt is the certificate of a trusted signing authority (CA) and $ More information on the tool can be found [here](http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html) -Q: How do I use my certificate to access the different grid Services? --------------------------------------------------------------------- +## Q: How do I use my certificate to access the different grid Services? Most grid services require the use of your certificate; however, the format of your certificate depends on the grid Service you wish to employ. @@ -150,26 +138,22 @@ If the grid service is UNICORE, then you bind your certificate, in either the p1 If the grid service is part of Globus, such as GSI-SSH, GridFTP or GRAM5, then the certificates can be in either p12 or PEM format and must reside in the "$HOME/.globus" directory for Linux and Mac users or %HOMEPATH%\.globus for Windows users. (Windows users will have to use the DOS command ’cmd’ to create a directory which starts with a ’.’). Further, user certificates should be named either "usercred.p12" or "usercert.pem" and "userkey.pem", and the CA certificates must be kept in a pre-specified directory as follows. For Linux and Mac users, this directory is either $HOME/.globus/certificates or /etc/grid-security/certificates. For Windows users, this directory is %HOMEPATH%\.globus\certificates. (If you are using GSISSH-Term from prace-ri.eu then you do not have to create the .globus directory nor install CA certificates to use this tool alone). -Q: How do I manually import my certificate into my browser? ----------------------------------------------------------- +## Q: How do I manually import my certificate into my browser? -If you employ the Firefox browser, then you can import your certificate by first choosing the "Preferences" window. For Windows, this is Tools->Options. For Linux, this is Edit->Preferences. For Mac, this is Firefox->Preferences.
Then, choose the "Advanced" button; followed by the "Encryption" tab. Then, choose the "Certificates" panel; select the option "Select one automatically" if you have only one certificate, or "Ask me every time" if you have more then one. Then click on the "View Certificates" button to open the "Certificate Manager" window. You can then select the "Your Certificates" tab and click on button "Import". Then locate the PKCS12 (.p12) certificate you wish to import, and employ its associated password. +If you employ the Firefox browser, then you can import your certificate by first choosing the "Preferences" window. For Windows, this is Tools > Options. For Linux, this is Edit > Preferences. For Mac, this is Firefox > Preferences. Then, choose the "Advanced" button; followed by the "Encryption" tab. Then, choose the "Certificates" panel; select the option "Select one automatically" if you have only one certificate, or "Ask me every time" if you have more than one. Then click on the "View Certificates" button to open the "Certificate Manager" window. You can then select the "Your Certificates" tab and click on the "Import" button. Then locate the PKCS12 (.p12) certificate you wish to import, and employ its associated password. -If you are a Safari user, then simply open the "Keychain Access" application and follow "File->Import items". +If you are a Safari user, then simply open the "Keychain Access" application and follow "File > Import items". -If you are an Internet Explorer user, click Start->Settings->Control Panel and then double-click on Internet. On the Content tab, click Personal, and then click Import. In the Password box, type your password. NB you may be prompted multiple times for your password. In the "Certificate File To Import" box, type the filename of the certificate you wish to import, and then click OK. Click Close, and then click OK. +If you are an Internet Explorer user, click Start > Settings > Control Panel and then double-click on Internet.
On the Content tab, click Personal, and then click Import. In the Password box, type your password. NB you may be prompted multiple times for your password. In the "Certificate File To Import" box, type the filename of the certificate you wish to import, and then click OK. Click Close, and then click OK. -Q: What is a proxy certificate? -------------------------------- +## Q: What is a proxy certificate? A proxy certificate is a short-lived certificate which may be employed by UNICORE and the Globus services. The proxy certificate consists of a new user certificate and a newly generated proxy private key. This proxy typically has a rather short lifetime (normally 12 hours) and often only allows a limited delegation of rights. Its default location, for Unix/Linux, is /tmp/x509_u*uid* but can be set via the $X509_USER_PROXY environment variable. -Q: What is the MyProxy service? -------------------------------- +## Q: What is the MyProxy service? [The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/) , can be employed by gsissh-term and Globus tools, and is an online repository that allows users to store long lived proxy certificates remotely, which can then be retrieved for use at a later date. Each proxy is protected by a password provided by the user at the time of storage. This is beneficial to Globus users as they do not have to carry their private keys and certificates when travelling; nor do users have to install private keys and certificates on possibly insecure computers. -Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do? --------------------------------------------------------------------------------------------------------------------------------------------- +## Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do? 
Please ask the CA that issued your certificate to revoke this certificate and to supply you with a new one. In addition, please report this to IT4Innovations by contacting [the support team](https://support.it4i.cz/rt). diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md index 0f5f02bd1ce1f94c874bd34e1f11bdfdf6b8479b..8be289456e028d8602e67c8a0b119512a78672dc 100644 --- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md +++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md @@ -1,43 +1,39 @@ -Obtaining Login Credentials -=========================== +# Obtaining Login Credentials + +## Obtaining Authorization -Obtaining Authorization ------------------------ The computational resources of IT4I are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated to the Project. The Figure below is depicting the authorization chain:  -!!! Note "Note" - You need to either [become the PI](../applying-for-resources/) or [be named as a collaborator](obtaining-login-credentials/#authorization-by-web) by a PI in order to access and use the clusters. +!!! note + You need to either [become the PI](../applying-for-resources/) or [be named as a collaborator](obtaining-login-credentials/#authorization-by-web) by a PI in order to access and use the clusters. Head of Supercomputing Services acts as a PI of a project DD-13-5. 
Joining this project, you may **access and explore the clusters**, use software, development environment and computers via the qexp and qfree queues. You may use these resources for your own education/research; no paperwork is required. All IT4I employees may contact the Head of Supercomputing Services in order to obtain **free access to the clusters**. -Authorization of PI by Allocation Committee ------------------------------------------- +## Authorization of PI by Allocation Committee The PI is authorized to use the clusters by the allocation decision issued by the Allocation Committee. The PI will be informed by IT4I about the Allocation Committee decision. -Authorization by web -------------------- +## Authorization by web -!!! Note "Note" - **Only** for those who already have their IT4I HPC account. This is a preferred way of granting access to project resources. Please, use this method whenever it's possible. +!!! warning + **Only** for those who already have their IT4I HPC account. This is the preferred way of granting access to project resources. Please, use this method whenever it's possible. Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section. -- **Users:** Please, submit your requests for becoming a project member. -- **Primary Investigators:** Please, approve or deny users' requests in the same section. +* **Users:** Please, submit your requests for becoming a project member. +* **Primary Investigators:** Please, approve or deny users' requests in the same section.
-Authorization by e-mail (an alternative approach) -------------------------------------------------- +## Authorization by e-mail (an alternative approach) In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support[at]it4i.cz](mailto:support@it4i.cz)) and provide following information: 1. Identify your project by project ID -2. Provide list of people, including himself, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems. -3. Include "Authorization to IT4Innovations" into the subject line. +1. Provide list of people, including himself, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems. +1. Include "Authorization to IT4Innovations" into the subject line. Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us): @@ -59,17 +55,16 @@ Example (except the subject line which must be in English, you may use Czech or Should the above information be provided by e-mail, the e-mail **must be** digitally signed. Read more on [digital signatures](obtaining-login-credentials/#the-certificates-for-digital-signatures) below. -The Login Credentials ---------------------- +## The Login Credentials Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters, should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support[at]it4i.cz](mailto:support@it4i.cz)) providing following information: -1. Project ID -2. Full name and affiliation -3. 
Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). -4. Attach the AUP file. -5. Your preferred username, max 8 characters long. The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed. -6. In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of photo ID** (personal ID or passport or driver license) is required +1. Project ID +1. Full name and affiliation +1. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). +1. Attach the AUP file. +1. Your preferred username, max 8 characters long. The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed. +1. In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of photo ID** (personal ID or passport or driver license) is required Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us): @@ -99,14 +94,13 @@ For various reasons we do not accept PGP keys.** Please, use only X.509 PKI cert You will receive your personal login credentials by protected e-mail. The login credentials include: -1. username -2. ssh private key and private key passphrase -3. system password +1. username +1. ssh private key and private key passphrase +1. system password The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password is used for login to the [information systems](http://support.it4i.cz/). 
-Change Passphrase ----------------- +## Change Passphrase On Linux, use @@ -116,65 +110,60 @@ local $ ssh-keygen -f id_rsa -p On Windows, use [PuTTY Key Generator](../accessing-the-clusters/shell-access-and-data-transfer/putty/#putty-key-generator). -Change Password --------------- +## Change Password Change your password in [your user profile](https://extranet.it4i.cz/user/). -The Certificates for Digital Signatures --------------------------------------- +## The Certificates for Digital Signatures We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in [International Grid Trust Federation](http://www.igtf.net/), its European branch [EUGridPMA](https://www.eugridpma.org/) and its member organizations, e.g. the [CESNET certification authority](https://tcs.cesnet.cz). The Czech *"Qualified certificate" (Kvalifikovaný certifikát)*, provided by [PostSignum](http://www.postsignum.cz/) or [I.CA](http://www.ica.cz/Kvalifikovany-certifikat.aspx), which is used in electronic contact with Czech authorities, is accepted as well. The certificate generation process is well described here: -- [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) +* [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/). -Alternative Way to Personal Certificate --------------------------------------- +## Alternative Way to Personal Certificate Follow these steps **only** if you cannot obtain your certificate in a standard way. In case you choose this procedure, please attach a **scan of photo ID** (personal ID or passport or driver's license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials). -1.
Go to [CAcert](www.cacert.org). - - If there's a security warning, just acknowledge it. -2. Click *Join*. -3. Fill in the form and submit it by the *Next* button. - - Type in the e-mail address which you use for communication with us. - - Don't forget your chosen *Pass Phrase*. -4. You will receive an e-mail verification link. Follow it. -5. After verifying, go to the CAcert's homepage and login using *Password Login*. -6. Go to *Client Certificates* -> *New*. -7. Tick *Add* for your e-mail address and click the *Next* button. -8. Click the *Create Certificate Request* button. -9. You'll be redirected to a page from where you can download/install your certificate. - - Simultaneously you'll get an e-mail with a link to the certificate. - -Installation of the Certificate Into Your Mail Client ----------------------------------------------------- +* Go to [CAcert](https://www.cacert.org). + * If there's a security warning, just acknowledge it. +* Click *Join*. +* Fill in the form and submit it by the *Next* button. + * Type in the e-mail address which you use for communication with us. + * Don't forget your chosen *Pass Phrase*. +* You will receive an e-mail verification link. Follow it. +* After verifying, go to the CAcert's homepage and login using *Password Login*. +* Go to *Client Certificates* > *New*. +* Tick *Add* for your e-mail address and click the *Next* button. +* Click the *Create Certificate Request* button. +* You'll be redirected to a page from where you can download/install your certificate. + * Simultaneously you'll get an e-mail with a link to the certificate.
+ +## Installation of the Certificate Into Your Mail Client The procedure is similar to the following guides: MS Outlook 2010 -- [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380) -- [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp) +* [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380) +* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp) Mozilla Thunderbird -- [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate) -- [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp) +* [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate) +* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp) -End of User Account Lifecycle ----------------------------- +## End of User Account Lifecycle User accounts are supported by membership in active Project(s) or by affiliation to IT4Innovations. User accounts that lose this support (i.e. are not attached to an active project and are not affiliated with IT4I) will be deleted 1 year after the last project to which they were attached expires. Users will get 3 automatically generated warning e-mail messages of the pending removal: -- First message will be sent 3 months before the removal -- Second message will be sent 1 month before the removal -- Third message will be sent 1 week before the removal. +* First message will be sent 3 months before the removal +* Second message will be sent 1 month before the removal +* Third message will be sent 1 week before the removal.
The messages will inform about the projected removal date and will challenge the user to migrate her/his data diff --git a/docs.it4i/index.md b/docs.it4i/index.md index e962135248a6e396e1fcc27ca6787d8da3ec533c..2c45d7fdd23f25479ddcf9310d75bb921dc54116 100644 --- a/docs.it4i/index.md +++ b/docs.it4i/index.md @@ -3,7 +3,7 @@ Documentation Welcome to IT4Innovations documentation pages. The IT4Innovations national supercomputing center operates supercomputers [Salomon](/salomon/introduction/) and [Anselm](/anselm-cluster-documentation/introduction/). The supercomputers are [available](get-started-with-it4innovations/applying-for-resources/) to academic community within the Czech Republic and Europe and industrial community worldwide. The purpose of these pages is to provide a comprehensive documentation on hardware, software and usage of the computers. - How to read the documentation +How to read the documentation ------------------------------ 1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner. @@ -12,20 +12,22 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super Getting Help and Support ------------------------ -!!! Note "Note" - Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt). + +!!! note + Contact [support[at]it4i.cz](mailto:support@it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt). 
Use your IT4Innovations username and password to log in to the [support](http://support.it4i.cz/) portal. Required Proficiency -------------------- + !!! Note "Note" - You need basic proficiency in Linux environment. + You need basic proficiency in the Linux environment. In order to use the system for your calculations, you need basic proficiency in the Linux environment. To gain this proficiency, we recommend reading the [introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment and installing a Linux distribution on your personal computer. A good choice might be the [CentOS](http://www.centos.org/) distribution, as it is similar to the systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any distribution would do. !!! Note "Note" - Learn how to parallelize your code! + Learn how to parallelize your code! In many cases, you will run your own code on the cluster. In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficiency in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained via the [training provided by IT4Innovations](http://prace.it4i.cz). @@ -46,6 +48,7 @@ Terminology Frequently Used on These Pages Conventions ----------- + In this documentation, you will find a number of pages containing examples. We use the following conventions: Cluster command prompt @@ -62,4 +65,5 @@ local $ Errata ------- + Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in the text or the code, we would be grateful if you would report it to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this documentation.
If you find any errata, please report them by visiting [http://support.it4i.cz/rt](http://support.it4i.cz/rt), creating a new ticket, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website. \ No newline at end of file diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md index dfd53fb6d6e6a44d56262c2cb751e221f998e0ad..85148948682990fcac444657e49fa7856ea89e4a 100644 --- a/docs.it4i/salomon/storage.md +++ b/docs.it4i/salomon/storage.md @@ -1,54 +1,52 @@ -Storage -======= +# Storage -Introduction ------------- +## Introduction There are two main shared file systems on the Salomon cluster, the [HOME](#home) and [SCRATCH](#shared-filesystems). All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp file systems. -Policy (in a nutshell) ---------------------- +## Policy (in a nutshell) + !!! note - * Use [HOME](#home) for your most valuable data and programs. - * Use [WORK](#work) for your large project files. - * Use [TEMP](#temp) for large scratch data. + * Use [HOME](#home) for your most valuable data and programs. + * Use [WORK](#work) for your large project files. + * Use [TEMP](#temp) for large scratch data. + !!! warning - Do not use for [archiving](#archiving)! + Do not use for [archiving](#archiving)! -Archiving ------------- +## Archiving Please don't use the shared file systems as a backup for large amounts of data or as a long-term archiving medium. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS. -Shared File systems ---------------------- +## Shared File systems + The Salomon computer provides two main shared file systems, the [HOME file system](#home-filesystem) and the [SCRATCH file system](#scratch-filesystem).
The SCRATCH file system is partitioned to [WORK and TEMP workspaces](#shared-workspaces). The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control. -###HOME file system +### HOME file system The HOME file system is realized as a Tiered file system, exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The file system is available on all login and computational nodes. The HOME file system hosts the [HOME workspace](#home). -###SCRATCH file system +### SCRATCH file system The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces). Configuration of the SCRATCH Lustre storage -- SCRATCH Lustre object storage - - Disk array SFA12KX - - 540 x 4 TB SAS 7.2krpm disk - - 54 x OST of 10 disks in RAID6 (8+2) - - 15 x hot-spare disk - - 4 x 400 GB SSD cache -- SCRATCH Lustre metadata storage - - Disk array EF3015 - - 12 x 600 GB SAS 15 krpm disk +* SCRATCH Lustre object storage + * Disk array SFA12KX + * 540 x 4 TB SAS 7.2krpm disk + * 54 x OST of 10 disks in RAID6 (8+2) + * 15 x hot-spare disk + * 4 x 400 GB SSD cache +* SCRATCH Lustre metadata storage + * Disk array EF3015 + * 12 x 600 GB SAS 15 krpm disk ### Understanding the Lustre File systems -(source <http://www.nas.nasa.gov>) +(source: [http://www.nas.nasa.gov](http://www.nas.nasa.gov)) A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks).
The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. @@ -59,11 +57,11 @@ If multiple clients try to read and write the same part of a file at the same ti There is a default stripe configuration for the Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance: 1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre file systems -2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems one can specify -1 to use all OSTs in the file system. -3. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended. +1. stripe_count: the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system. +1. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended. !!! Note "Note" - Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. + Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. Use the lfs getstripe command to read the stripe parameters. Use the lfs setstripe command to set the stripe parameters for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns. @@ -97,21 +95,21 @@ $ man lfs ### Hints on Lustre Striping !!! Note "Note" - Increase the stripe_count for parallel I/O to the same file. + Increase the stripe_count for parallel I/O to the same file.
When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file. Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. !!! Note "Note" - Using a large stripe size can improve performance when accessing very large files + Using a large stripe size can improve performance when accessing very large files. A large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
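The hints above can be put into practice with the lfs commands; a minimal sketch, using an illustrative directory under the WORK workspace (exact option names may vary between Lustre versions; check man lfs on the cluster):

```bash
# show the current striping of a directory
$ lfs getstripe /scratch/work/user/username/mydir

# stripe new files in this directory across 16 OSTs
$ lfs setstripe -c 16 /scratch/work/user/username/mydir

# use a 4 MB stripe size and 16 OSTs for a directory holding large files
$ lfs setstripe -s 4m -c 16 /scratch/work/user/username/raw-data
```

New files created in the directory inherit these settings; files that already exist keep the striping they were created with.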
-Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html> +Read more on [http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html](http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html) + +## Disk usage and quota commands -Disk usage and quota commands ------------------------------------------ User quotas on the Lustre file systems (SCRATCH) can be checked and reviewed using the following command: ```bash @@ -124,10 +122,10 @@ Example for Lustre SCRATCH directory: $ lfs quota /scratch Disk quotas for user user001 (uid 1234): Filesystem kbytes quota limit grace files quota limit grace - /scratch 8 0 100000000000 - 3 0 0 - + /scratch 8 0 100000000000 * 3 0 0 - Disk quotas for group user001 (gid 1234): Filesystem kbytes quota limit grace files quota limit grace - /scratch 8 0 0 - 3 0 0 - + /scratch 8 0 0 * 3 0 0 - ``` In this example, we see a current quota limit of 100 TB, with 8 KB currently used by user001. @@ -178,8 +176,8 @@ $ man lfs $ man du ``` -Extended Access Control List (ACL) ---------------------------------- +## Extended Access Control List (ACL) + Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries. ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
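A minimal sketch of such a session, using the standard getfacl/setfacl tools (the directory name and the user name johnsm are illustrative):

```bash
# create a directory and grant read/write/execute access to one named user
$ mkdir test
$ setfacl -m u:johnsm:rwx test

# review the resulting ACL entries, including the mask entry
$ getfacl test
```

A default ACL (setfacl -d) can additionally be set on the directory so that newly created files inherit the same entries.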
@@ -215,15 +213,14 @@ Default ACL mechanism can be used to replace setuid/setgid permissions on direct [http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html) -Shared Workspaces --------------------- +## Shared Workspaces -###HOME +### HOME Users' home directories /home/username reside on the HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request. !!! Note "Note" - The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects. + The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects. The HOME should not be used to archive data of past Projects or other unrelated data. @@ -244,14 +241,14 @@ The workspace is backed up, such that it can be restored in case of catastrophic The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username is private to the user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid. !!! Note "Note" - The WORK workspace is intended to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up. + The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
- Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project. + Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project. The WORK workspace is hosted on the SCRATCH file system. The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system. !!! Note "Note" - Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. + Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. |WORK workspace|| |---|---| @@ -269,16 +266,16 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as The TEMP workspace resides on the SCRATCH file system. The TEMP workspace access point is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request. !!! Note "Note" - The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
- Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files. + Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files. Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**. The TEMP workspace is hosted on the SCRATCH file system. The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system. !!! Note "Note" - Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. + Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience. |TEMP workspace|| |---|---| @@ -291,19 +288,19 @@ The TEMP workspace is hosted on SCRATCH file system. The SCRATCH is realized as -RAM disk -------- +## RAM disk + Every computational node is equipped with a file system realized in memory, the so-called RAM disk. !!! Note "Note" - Use RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of RAM disk file system is at the expense of operational memory. + Use the RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of the RAM disk file system is at the expense of operational memory. The local RAM disk is mounted as /ramdisk and is accessible to the user in the /ramdisk/$PBS_JOBID directory. The local RAM disk file system is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk file system is limited.
Be very careful, use of the RAM disk file system is at the expense of operational memory. It is not recommended to allocate large amounts of memory and use large amounts of data in the RAM disk file system at the same time. -!!! Note "Note" - The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript. +!!! Note + The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript. |RAM disk|| |---|---| @@ -314,8 +311,7 @@ The local RAM disk file system is intended for temporary scratch data generated |User quota|none| -Summary ------- +## Summary |Mountpoint|Usage|Protocol|Net|Capacity|Throughput|Limitations|Access| |---|---|---|---|---|---|---|---| @@ -324,12 +320,12 @@ Summary |/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100 TB|Compute and login nodes|files older 90 days removed| |/ramdisk|job temporary data, node local|local|120GB|90 GB/s|none|Compute nodes|purged after job ends| -CESNET Data Storage ------------- +## CESNET Data Storage + Do not use shared file systems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes. !!! Note "Note" - The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/). + IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the [CESNET Storage service](https://du.cesnet.cz/). The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic. @@ -343,13 +339,12 @@ The procedure to obtain the CESNET access is quick and trouble-free.
(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) -CESNET storage access --------------------- +## CESNET storage access ### Understanding CESNET storage !!! Note "Note" - It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. + It is very important to understand the CESNET storage before uploading data. Please read [https://du.cesnet.cz/en/navody/home-migrace-plzen/start](https://du.cesnet.cz/en/navody/home-migrace-plzen/start) first. Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods. @@ -407,7 +402,7 @@ Rsync is a fast and extraordinarily versatile file copying tool. It is famous fo Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated. -More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele> +More about Rsync [here](https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele) Transfer large files to/from CESNET storage, assuming membership in the Storage VO diff --git a/docs.it4i/software/bioinformatics.md b/docs.it4i/software/bioinformatics.md index 9abc17c97d4f8408d0eb43e601e51a66e20eb687..ccb31b2a4461a47e154257f64d6af8b352997630 100644 --- a/docs.it4i/software/bioinformatics.md +++ b/docs.it4i/software/bioinformatics.md @@ -6,12 +6,11 @@ Introduction In addition to the many applications available through modules (deployed through the EasyBuild packaging system) we provide an alternative source of applications on our clusters inferred from [Gentoo Linux](https://www.gentoo.org/).
The user's environment is set up through a script which returns a bash instance to the user (you can think of it as starting a whole virtual machine, but inside your current namespace). The applications were optimized by the gcc compiler for the SandyBridge and IvyBridge platforms. The binaries use paths from the /apps/gentoo prefix to find the required runtime dependencies, config files, etc. Gentoo Linux is a standalone installation, not even relying on the glibc provided by the host operating system (Redhat). The trick which allowed us to install Gentoo Linux on the host Redhat system is called Gentoo::RAP and uses a modified loader with a hardcoded path ([links](https://wiki.gentoo.org/wiki/Prefix/libc)). - Starting the environment ------------------------ ```bash -$ /apps/gentoo/startprefix +mmokrejs@login2~$ /apps/gentoo/startprefix ``` Starting PBS jobs using the applications @@ -20,7 +19,7 @@ Starting PBS jobs using the applications Create a template file which can be used as an argument to the qsub command. Notably, the 'PBS -S' line specifies the full PATH to the Bourne shell of the Gentoo Linux environment.
```bash -$ cat myjob.pbs +mmokrejs@login2~$ cat myjob.pbs #PBS -S /apps/gentoo/bin/sh #PBS -l nodes=1:ppn=16,walltime=12:00:00 #PBS -q qfree @@ -44,15 +43,15 @@ Reading manual pages for installed applications ----------------------------------------------- ```bash -$ man -M /apps/gentoo/usr/share/man bwa -$ man -M /apps/gentoo/usr/share/man samtools +mmokrejs@login2~$ man -M /apps/gentoo/usr/share/man bwa +mmokrejs@login2~$ man -M /apps/gentoo/usr/share/man samtools ``` Listing of bioinformatics applications -------------------------------------- ```bash -mmokrejs@login2 ~ $ grep biology /scratch/mmokrejs/gentoo_rap/installed.txt +mmokrejs@login2~$ grep biology /scratch/mmokrejs/gentoo_rap/installed.txt sci-biology/ANGLE-bin-20080813-r1 sci-biology/AlignGraph-9999 sci-biology/Atlas-Link-0.01-r1 @@ -180,7 +179,7 @@ sci-biology/zmsort-110625 ``` ```bash -mmokrejs@login2 ~ $ grep sci-libs /scratch/mmokrejs/gentoo_rap/installed.txt +mmokrejs@login2~$ grep sci-libs /scratch/mmokrejs/gentoo_rap/installed.txt sci-libs/amd-2.3.1 sci-libs/blas-reference-20151113-r1 sci-libs/camd-2.3.1 @@ -213,7 +212,7 @@ sci-libs/umfpack-5.6.2 Classification of applications ------------------------------ -|Applications for bioinformatics at IT4I | +|Applications for bioinformatics at IT4I| |---|---| |error-correctors|6| |aligners|20| @@ -232,14 +231,13 @@ Classification of applications  - Other applications available through Gentoo Linux ------------------------------------------------- Gentoo Linux allows compilation of its applications from source code while using compiler and optimization flags set to the user's wishes. This facilitates the creation of optimized binaries for the host platform. Users may also use several versions of gcc, python and other tools.
```bash -$ gcc-config -l -$ java-config -L -$ eselect +mmokrejs@login2~$ gcc-config -l +mmokrejs@login2~$ java-config -L +mmokrejs@login2~$ eselect ``` diff --git a/docs.it4i/software/lmod.md b/docs.it4i/software/lmod.md new file mode 100644 index 0000000000000000000000000000000000000000..dc3609023db9a9ef6415cd5e2931fa001a59f7cc --- /dev/null +++ b/docs.it4i/software/lmod.md @@ -0,0 +1,331 @@ +Lmod Environment +================ + +Lmod is a modules tool, a modern alternative to the outdated and no longer actively maintained Tcl-based environment modules tool. + +Detailed documentation on Lmod is available [here](http://lmod.readthedocs.io). + +Benefits +-------- + + * significantly more responsive module commands, in particular module avail (ml av) + * easier to use interface + * module files can be written in either Tcl or Lua syntax (and both types of modules can be mixed together) + +Introduction +------------ + +Below you will find more details and examples. + +|command|equivalent/explanation| +|---|---| +|ml|module list| +|ml GCC/6.2.0-2.27|module load GCC/6.2.0-2.27| +|ml -GCC/6.2.0-2.27|module unload GCC/6.2.0-2.27| +|ml av|module avail| +|ml show GCC/6.2.0-2.27|module show GCC/6.2.0-2.27| +|ml spider gcc|searches (case-insensitive) for gcc in all available modules| +|ml spider GCC/6.2.0-2.27|show all information about the module GCC/6.2.0-2.27| +|ml save mycollection|stores the currently loaded modules to a collection| +|ml restore mycollection|restores a previously stored collection of modules| + +Listing loaded modules: ml (module list) +------------------------------------------- + +To get an overview of the currently loaded modules, use module list or ml (without specifying extra arguments). + +```bash +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) 2) lmod/7.2.2 + Where: + S: Module is Sticky, requires --force to unload or purge +``` + +!!!
tip + for more details on sticky modules, see the section on [ml purge](#resetting-by-unloading-all-modules-ml-purge-module-purge) + +Searching for available modules: ml av (module avail) and ml spider +---------------------------------------------------------------------- + +To get an overview of all available modules, you can use module avail or simply ml av: + +```bash +$ ml av +---------------------------------------- /apps/modules/compiler ---------------------------------------------- + GCC/5.2.0 GCCcore/6.2.0 (D) icc/2013.5.192 ifort/2013.5.192 LLVM/3.9.0-intel-2017.00 (D) + ... ... + +---------------------------------------- /apps/modules/devel ------------------------------------------------- + Autoconf/2.69-foss-2015g CMake/3.0.0-intel-2016.01 M4/1.4.17-intel-2016.01 pkg-config/0.27.1-foss-2015g + Autoconf/2.69-foss-2016a CMake/3.3.1-foss-2015g M4/1.4.17-intel-2017.00 pkg-config/0.27.1-intel-2015b + ... ... +``` + +In the current module naming scheme, each module name consists of two parts: + + * the part before the first /, corresponding to the software name; and + * the remainder, corresponding to the software version, the compiler toolchain that was used to install the software, and a possible version suffix + + +!!! tip + The (D) indicates that this particular version of the module is the default, but we strongly recommend not relying on this, as the default can change at any point. Usually, the default will point to the latest version available. + +Searching for modules: ml spider +-------------------------------- + +If you just provide a software name, for example gcc, it prints an overview of all available modules for GCC.
+ +```bash +$ ml spider gcc +--------------------------------------------------------------------------------- + GCC: +--------------------------------------------------------------------------------- + Description: + The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ + + Versions: + GCC/4.4.7-system + GCC/4.7.4 + GCC/4.8.3 + GCC/4.9.2-binutils-2.25 + GCC/4.9.2 + GCC/4.9.3-binutils-2.25 + GCC/4.9.3 + GCC/4.9.3-2.25 + GCC/5.1.0-binutils-2.25 + GCC/5.2.0 + GCC/5.3.0-binutils-2.25 + GCC/5.3.0-2.25 + GCC/5.3.0-2.26 + GCC/5.3.1-snapshot-20160419-2.25 + GCC/5.4.0-2.26 + GCC/6.2.0-2.27 + + Other possible modules matches: + GCCcore +--------------------------------------------------------------------------------- + To find other possible module matches do: + module -r spider '.*GCC.*' +--------------------------------------------------------------------------------- + For detailed information about a specific "GCC" module (including how to load the modules) use the module's full name. + For example: + $ module spider GCC/6.2.0-2.27 +--------------------------------------------------------------------------------- +``` + +!!! tip + spider is case-insensitive. + +If you use spider on a full module name like GCC/6.2.0-2.27, it will tell you on which cluster(s) the module is available: + +```bash +$ module spider GCC/6.2.0-2.27 +-------------------------------------------------------------------------------------------------------------- + GCC: GCC/6.2.0-2.27 +-------------------------------------------------------------------------------------------------------------- + Description: + The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
- Homepage: http://gcc.gnu.org/ + + This module can be loaded directly: module load GCC/6.2.0-2.27 + + Help: + The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, + as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ +``` + +This tells you what the module contains and a URL to the homepage of the software. + +Available modules for a particular software package: ml av <name> +------------------------------------------------------------------------------------------ + +To check which modules are available for a particular software package, you can provide the software name to ml av. +For example, to check which versions of git are available: + +```bash +$ ml av git + +-------------------------------------- /apps/modules/tools ---------------------------------------- + git/2.8.0-GNU-4.9.3-2.25 git/2.8.0-intel-2017.00 git/2.9.0 git/2.9.2 git/2.11.0 (D) + + Where: + D: Default Module + +Use "module spider" to find all possible modules. +Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". +``` +!!! tip + the specified software name is treated case-insensitively. + +Lmod does a partial match on the module name, so sometimes you need to use / to indicate the end of the software name you are interested in: + +```bash +$ ml av GCC/ + +------------------------------------------ /apps/modules/compiler ------------------------------------------- +GCC/4.4.7-system GCC/4.8.3 GCC/4.9.2 GCC/4.9.3 GCC/5.1.0-binutils-2.25 GCC/5.3.0-binutils-2.25 GCC/5.3.0-2.26 GCC/5.4.0-2.26 GCC/4.7.4 GCC/4.9.2-binutils-2.25 GCC/4.9.3-binutils-2.25 GCC/4.9.3-2.25 GCC/5.2.0 GCC/5.3.0-2.25 GCC/6.2.0-2.27 (D) + + Where: + D: Default Module + +Use "module spider" to find all possible modules. +Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
+ +``` + +Inspecting a module using ml show +-------------------------------------------------- + +To see how a module would change the environment, use module show or ml show: + +```bash +$ ml show Python/3.5.2 + +help([[Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/]]) +whatis("Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/") +conflict("Python") +load("bzip2/1.0.6") +load("zlib/1.2.8") +load("libreadline/6.3") +load("ncurses/5.9") +load("SQLite/3.8.8.1") +load("Tk/8.6.3") +load("GMP/6.0.0a") +load("XZ/5.2.2") +prepend_path("CPATH","/apps/all/Python/3.5.2/include") +prepend_path("LD_LIBRARY_PATH","/apps/all/Python/3.5.2/lib") +prepend_path("LIBRARY_PATH","/apps/all/Python/3.5.2/lib") +prepend_path("MANPATH","/apps/all/Python/3.5.2/share/man") +prepend_path("PATH","/apps/all/Python/3.5.2/bin") +prepend_path("PKG_CONFIG_PATH","/apps/all/Python/3.5.2/lib/pkgconfig") +setenv("EBROOTPYTHON","/apps/all/Python/3.5.2") +setenv("EBVERSIONPYTHON","3.5.2") +setenv("EBDEVELPYTHON","/apps/all/Python/3.5.2/easybuild/Python-3.5.2-easybuild-devel") +setenv("EBEXTSLISTPYTHON","setuptools-20.1.1,pip-8.0.2,nose-1.3.7") + +``` + +!!! tip + Note that both the direct changes to the environment as well as other modules that will be loaded are shown. + +If you're not sure what all of this means: don't worry, you don't have to know; just try loading the module and using the software. + +## Loading modules: ml <modname(s)> (module load <modname(s)>) + +To effectively apply the changes to the environment that are specified by a module, use module load or ml and specify the name of the module.
+For example, to set up your environment to use intel: + +```bash +$ ml intel/2017.00 +$ ml +Currently Loaded Modules: + 1) GCCcore/5.4.0 + 2) binutils/2.26-GCCcore-5.4.0 (H) + 3) icc/2017.0.098-GCC-5.4.0-2.26 + 4) ifort/2017.0.098-GCC-5.4.0-2.26 + 5) iccifort/2017.0.098-GCC-5.4.0-2.26 + 6) impi/2017.0.098-iccifort-2017.0.098-GCC-5.4.0-2.26 + 7) iimpi/2017.00-GCC-5.4.0-2.26 + 8) imkl/2017.0.098-iimpi-2017.00-GCC-5.4.0-2.26 + 9) intel/2017.00 + + Where: + H: Hidden Module +``` + +!!! tip + Note that even though we only loaded a single module, the output of ml shows that several other modules were loaded as well: these are required dependencies of intel/2017.00. + +Conflicting modules +------------------- + +!!! warning + It is important to note that **only modules that are compatible with each other can be loaded together**. In particular, modules must be installed either with the same toolchain as the modules that are already loaded, or with a compatible (sub)toolchain. + +For example, once you have loaded one or more modules that were installed with the intel/2017.00 toolchain, all other modules that you load should have been installed with the same toolchain. + +In addition, only **a single version** of each software package can be loaded at any given time. For example, once you have the Python/3.5.2-intel-2017.00 module loaded, you cannot load a different version of Python in the same session/job script; neither directly, nor indirectly as a dependency of another module you want to load. + +## Unloading modules: ml -<modname(s)> (module unload <modname(s)>) + +To revert the changes to the environment that were made by a particular module, you can use module unload or ml -<modname>. 
+For example: + +```bash +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) 2) lmod/7.2.2 +$ which gcc +/usr/bin/gcc +$ ml GCC/ +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) 2) lmod/7.2.2 3) GCCcore/6.2.0 4) binutils/2.27-GCCcore-6.2.0 (H) 5) GCC/6.2.0-2.27 +$ which gcc +/apps/all/GCCcore/6.2.0/bin/gcc +$ ml -GCC +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) 2) lmod/7.2.2 3) GCCcore/6.2.0 4) binutils/2.27-GCCcore-6.2.0 (H) +$ which gcc +/usr/bin/gcc +``` + +Resetting by unloading all modules: ml purge (module purge) +----------------------------------------------------------- + +To reset your environment back to a clean state, you can use module purge or ml purge: + +```bash +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) 2) lmod/7.2.2 3) GCCcore/6.2.0 4) binutils/2.27-GCCcore-6.2.0 (H) +$ ml purge +The following modules were not unloaded: + (Use "module --force purge" to unload all): + 1) EasyBuild/3.0.0 +$ ml +Currently Loaded Modules: + 1) EasyBuild/3.0.0 (S) +$ ml purge --force +$ ml +No modules loaded +``` + +Note that sticky modules, marked with (S) such as EasyBuild/3.0.0 above, are only unloaded when the --force option is used. + +Module collections: ml save, ml restore +--------------------------------------- + +If you have a set of modules that you need to load often, you can save these in a collection (this only works with Lmod). + +First, load all the modules you need, for example: + +```bash +ml intel/2017.00 Python/3.5.2-intel-2017.00 +``` + +Now store them in a collection using ml save: + +```bash +$ ml save my-collection +``` + +Later, for example in a job script, you can reload all these modules with ml restore: + +```bash +$ ml restore my-collection +``` + +With ml savelist you can get a list of all saved collections: + +```bash +$ ml savelist +Named collection list: + 1) my-collection + 2) my-test-collection +``` + +To inspect a collection, use ml describe. 
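+ +Lmod stores each saved collection as a plain file under $HOME/.lmod.d, one file per collection. The sketch below simulates that layout with a dummy file (no Lmod needed; the collection name "my-collection" is illustrative, a real collection file is created by ml save): + +```shell +# Simulate Lmod's collection storage: one file per collection in ~/.lmod.d. +# "my-collection" is a stand-in; "ml save my-collection" creates the real file. +mkdir -p "$HOME/.lmod.d" +touch "$HOME/.lmod.d/my-collection" + +# Listing the directory shows the saved collections (cf. "ml savelist"): +ls "$HOME/.lmod.d" + +# Deleting the file removes the collection: +rm "$HOME/.lmod.d/my-collection" +test ! -e "$HOME/.lmod.d/my-collection" && echo "collection removed" +``` + +This is also why removing a collection does not require any special Lmod command. 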
+ +To remove a module collection, remove the corresponding entry in $HOME/.lmod.d. diff --git a/mkdocs.yml b/mkdocs.yml index 476a365fcf8cc8a514a75bd70c012f6a80d35828..1898d58d05f1ed4636fca7f3794cf78c3b443764 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -44,7 +44,26 @@ pages: - IB Single-plane Topology: salomon/ib-single-plane-topology.md - 7D Enhanced Hypercube: salomon/7d-enhanced-hypercube.md - Storage: salomon/storage.md - - Software: + - PRACE User Support: salomon/prace.md + - Anselm Cluster: + - Introduction: anselm-cluster-documentation/introduction.md + - Hardware Overview: anselm-cluster-documentation/hardware-overview.md + - Accessing the Cluster: anselm-cluster-documentation/shell-and-data-access.md + - Environment and Modules: anselm-cluster-documentation/environment-and-modules.md + - Resource Allocation and Job Execution: + - Resource Allocation Policy: anselm-cluster-documentation/resources-allocation-policy.md + - Job Priority: anselm-cluster-documentation/job-priority.md + - Job Submission and Execution: anselm-cluster-documentation/job-submission-and-execution.md + - Capacity Computing: anselm-cluster-documentation/capacity-computing.md + - Compute Nodes: anselm-cluster-documentation/compute-nodes.md + - Storage: anselm-cluster-documentation/storage.md + - Network: anselm-cluster-documentation/network.md + - Remote Visualization: anselm-cluster-documentation/remote-visualization.md + - PRACE User Support: anselm-cluster-documentation/prace.md + - 'Software': + - Lmod Environment: software/lmod.md + - Modules Matrix: modules-matrix.md + - Salomon Software: - Available Modules: modules-salomon.md - Available Modules on UV: modules-salomon-uv.md - 'ANSYS': @@ -98,21 +117,7 @@ pages: - Octave: salomon/software/numerical-languages/octave.md - R: salomon/software/numerical-languages/r.md - Operating System: salomon/software/operating-system.md - - PRACE User Support: salomon/prace.md - - Anselm Cluster: - - Introduction: 
anselm-cluster-documentation/introduction.md - - Hardware Overview: anselm-cluster-documentation/hardware-overview.md - - Accessing the Cluster: anselm-cluster-documentation/shell-and-data-access.md - - Environment and Modules: anselm-cluster-documentation/environment-and-modules.md - - Resource Allocation and Job Execution: - - Resource Allocation Policy: anselm-cluster-documentation/resources-allocation-policy.md - - Job Priority: anselm-cluster-documentation/job-priority.md - - Job Submission and Execution: anselm-cluster-documentation/job-submission-and-execution.md - - Capacity Computing: anselm-cluster-documentation/capacity-computing.md - - Compute Nodes: anselm-cluster-documentation/compute-nodes.md - - Storage: anselm-cluster-documentation/storage.md - - Network: anselm-cluster-documentation/network.md - - Software: + - Anselm Software: - Available Modules: modules-anselm.md - 'ANSYS': - Introduction: anselm-cluster-documentation/software/ansys/ansys.md @@ -181,11 +186,7 @@ pages: - Operating System: anselm-cluster-documentation/software/operating-system.md - ParaView: anselm-cluster-documentation/software/paraview.md - Virtualization: anselm-cluster-documentation/software/kvirtualization.md - - Remote Visualization: anselm-cluster-documentation/remote-visualization.md - - PRACE User Support: anselm-cluster-documentation/prace.md -# - 'Software': -# - Modules Matrix: modules-matrix.md - - Modules Matrix: modules-matrix.md +# - Modules Matrix: modules-matrix.md - PBS Pro Documentation: pbspro.md # - Testing: # - Colors: colors.md diff --git a/scripts/preklopeni_dokumentace/html_md.sh b/scripts/preklopeni_dokumentace/html_md.sh index e4ab5df058f9ea62dc2e0a1fe79879ce9faab97e..cdb10e39e654c62fbe6cde3da96448bc4b294199 100755 --- a/scripts/preklopeni_dokumentace/html_md.sh +++ b/scripts/preklopeni_dokumentace/html_md.sh @@ -5,73 +5,12 @@ # version: 1.00 ### -if [ "$1" = "-d" ]; then - # remove pdf, md and epub files - STARTTIME=$(date +%s) - if [ "$2" = "pdf" ]; 
then - echo "$(tput setaf 9)*.pdf deleted$(tput setaf 15)" - if [ -d ./pdf ]; then - rm -rf ./pdf - fi - elif [ "$2" = "epub" ]; then - echo "$(tput setaf 9)*.epub deleted$(tput setaf 15)" - if [ -d ./epub ]; then - rm -rf ./epub - fi - elif [ "$2" = "md" ]; then - echo "$(tput setaf 9)*.md deleted$(tput setaf 15)" - if [ -d ./converted ]; then - rm -rf ./converted - fi - elif [ "$2" = "all" ]; then - echo "$(tput setaf 9)all files deleted$(tput setaf 15)" - if [ -d ./docs.it4i ]; then - rm -rf ./converted - fi - if [ -d ./epub ]; then - rm -rf ./epub - fi - if [ -d ./pdf ]; then - rm -rf ./pdf - fi - if [ -d ./info ]; then - rm -rf ./info - fi - if [ -d ./docs.it4i.cz ]; then - rm -rf ./docs.it4i.cz - fi - fi - ENDTIME=$(date +%s) - echo "It takes $(($ENDTIME - $STARTTIME)) seconds to complete this task..." -fi if [ "$1" = "-w" ]; then # download html pages - STARTTIME=$(date +%s) rm -rf docs-old.it4i.cz - wget -X pbspro-documentation,changelog,whats-new,portal_css,portal_javascripts,++resource++jquery-ui-themes,anselm-cluster-documentation/icon.jpg -R favicon.ico,pdf.png,logo.png,background.png,application.png,search_icon.png,png.png,sh.png,touch_icon.png,anselm-cluster-documentation/icon.jpg,*js,robots.txt,*xml,RSS,download_icon.png,pdf,*zip,*rar,@@*,anselm-cluster-documentation/icon.jpg.1 --mirror --convert-links --adjust-extension --page-requisites --no-parent https://docs-old.it4i.cz; - - # download images - wget --directory-prefix=./docs-old.it4i.cz/ http://verif.cs.vsb.cz/aislinn/doc/report.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization-job-workflow - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig1.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig2.png - wget 
--directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig3.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig4.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig5.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig6.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7x.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig8.png - wget --directory-prefix=./docs-old.it4i.cz/ https://docs-old.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig9.png - ENDTIME=$(date +%s) - echo "It takes $(($ENDTIME - $STARTTIME)) seconds to complete this task..." 
- + wget -X portal_css,portal_javascripts,++resource++jquery-ui-themes,anselm-cluster-documentation/icon.jpg -R favicon.ico,pdf.png,logo.png,background.png,application.png,search_icon.png,png.png,sh.png,touch_icon.png,anselm-cluster-documentation/icon.jpg,*js,robots.txt,*xml,RSS,download_icon.png,@@*,anselm-cluster-documentation/icon.jpg.1 --mirror --convert-links --adjust-extension --page-requisites --no-parent https://docs-old.it4i.cz; fi if [ "$1" = "-c" ]; then - ### convert html to md - STARTTIME=$(date +%s) - if [ -d ./docs.it4i.cz ]; then - # erasing the previous transfer if [ -d ./docs.it4i ]; then rm -rf ./docs.it4i @@ -80,16 +19,6 @@ if [ "$1" = "-c" ]; then rm -rf ./info; fi - # erasing duplicate files and unwanted files - (while read i; - do - if [ -f "$i" ]; - then - echo "$(tput setaf 9)$i deleted"; - rm "$i"; - fi - done) < ./source/list_rm - # counter for html and md files counter=1 count=$(find . -name "*.html" -type f | wc -l) @@ -100,7 +29,7 @@ if [ "$1" = "-c" ]; then # filtering html files echo "$(tput setaf 12)($counter/$count)$(tput setaf 11)$i"; counter=$((counter+1)) - printf "$(tput setaf 15)\t\tFiltering html files...\n"; + printf "\t\tFiltering html files...\n"; HEAD=$(grep -n -m1 '<h1' "$i" |cut -f1 -d: | tr --delete '\n') END=$(grep -n -m1 '<!-- <div tal:content=' "$i" |cut -f1 -d: | tr --delete '\n') @@ -110,47 +39,10 @@ if [ "$1" = "-c" ]; then sed '1,'"$((HEAD-1))"'d' "$i" | sed -n -e :a -e '1,'"$DOWN"'!{P;N;D;};N;ba' > "${i%.*}TMP.html" # converted .html to .md - printf "\t\t.html => $(tput setaf 13).md\n$(tput setaf 15)" + printf "\t\t.html => .md\n" pandoc -f html -t markdown+pipe_tables-grid_tables "${i%.*}TMP.html" -o "${i%.*}.md"; - rm "${i%.*}TMP.html"; - - # filtering html and css elements... 
- printf "\t\tFiltering html and css elements in md files...\n" - sed -e 's/``` /```/' "${i%.*}.md" | sed -e 's/<\/div>//g' | sed '/^<div/d' | sed -e 's/<\/span>//' | sed -e 's/^\*\*//' | sed -e 's/\\//g' | sed -e 's/^: //g' | sed -e 's/^Obsah//g' > "${i%.*}TMP.md"; - while read x ; do - arg1=`echo "$x" | cut -d"&" -f1 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - arg2=`echo "$x" | cut -d"&" -f2 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - - sed -e 's/'"$arg1"'/'"$arg2"'/' "${i%.*}TMP.md" > "${i%.*}TMP.TEST.md"; - cat -s "${i%.*}TMP.TEST.md" > "${i%.*}TMP.md"; - done < ./source/replace - - # repair formatting... - printf "\t\tFix formatting text...\n" - while read x ; do - arg1=`echo "$x" | cut -d"&" -f1 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - arg2=`echo "$x" | cut -d"&" -f2 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - - sed -e 's/'"$arg1"'/'"$arg2"'/' "${i%.*}TMP.md" | sed -e 's/^``//g' > "${i%.*}TMP.TEST.md"; - cat -s "${i%.*}TMP.TEST.md" > "${i%.*}TMP.md"; - done < ./source/formatting - - # last repair formatting... 
- printf "\t\tLatest fix formatting text...\n" - while read x ; do - arg1=`echo "$x" | cut -d"&" -f1 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - arg2=`echo "$x" | cut -d"&" -f2 | sed 's:[]\[\^\$\.\*\/\"]:\\\\&:g'`; - - sed -e 's/'"$arg1"'/'"$arg2"'/' "${i%.*}TMP.md" > "${i%.*}TMP.TEST.md"; - cat -s "${i%.*}TMP.TEST.md" > "${i%.*}TMP.md"; - done < ./source/lastFilter - - cat "${i%.*}TMP.md" > "${i%.*}.md"; - - # delete temporary files - rm "${i%.*}TMP.md"; - rm "${i%.*}TMP.TEST.md"; + rm "${i%.*}TMP.html" done # delete empty files @@ -160,64 +52,4 @@ if [ "$1" = "-c" ]; then rm "$i"; echo "$(tput setaf 9)$i deleted"; done - - ### create new folder and move converted files - # create folder info and list all files and folders - mkdir info; - echo "$(tput setaf 11)Create folder info and lists od files..."; - find ./docs.it4i.cz -name "*.png" -type f > ./info/list_image; - find ./docs.it4i.cz -name "*.jpg" -type f >> ./info/list_image; - find ./docs.it4i.cz -name "*.jpeg" -type f >> ./info/list_image; - find ./docs.it4i.cz -name "*.md" -type f> ./info/list_md; - find ./docs.it4i.cz -type d | sort > ./info/list_folder; - - count=$(find ./docs.it4i.cz -name "*.md" -type f | wc -l) - - echo "$count" - - if [ $count -eq 150 ]; then - mkdir docs.it4i; - (while read i; - do - mkdir "./docs.it4i/$i"; - done) < ./source/list_folder - - # move md files to folder converted - echo "$(tput setaf 11)Moved md files..."; - while read a b ; do - mv "$a" "./docs.it4i/$b"; - done < <(paste ./info/list_md ./source/list_md_mv) - - # copy jpg, jpeg and png to folder converted - echo "$(tput setaf 11)Copy image files..."; - while read a b ; do - cp "$a" "./docs.it4i/$b"; - done < <(paste ./info/list_image ./source/list_image_mv) - cp ./docs.it4i.cz/salomon/salomon ./docs.it4i/salomon/salomon - cp ./docs.it4i.cz/salomon/salomon-2 ./docs.it4i/salomon/salomon-2 - cp ./docs.it4i/salomon/resource-allocation-and-job-execution/fairshare_formula.png 
./docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png - cp ./docs.it4i/salomon/resource-allocation-and-job-execution/job_sort_formula.png ./docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png - cp ./docs.it4i/salomon/software/debuggers/vtune-amplifier.png ./docs.it4i/anselm-cluster-documentation/software/debuggers/vtune-amplifier.png - cp ./docs.it4i/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png ./docs.it4i/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20160708v12.33.35.png - cp ./docs.it4i.cz/virtualization-job-workflow ./docs.it4i/anselm-cluster-documentation/software/ - cp ./docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg ./docs.it4i/salomon/accessing-the-cluster/anyconnecticon.jpg - cp ./docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg ./docs.it4i/salomon/accessing-the-cluster/anyconnectcontextmenu.jpg - cp ./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/TightVNC_login.png ./docs.it4i/salomon/software/debuggers/TightVNC_login.png - - # list all files and folder converted - find ./docs.it4i -name "*.png" -type f > ./info/list_image_converted; - find ./docs.it4i -name "*.jpg" -type f >> ./info/list_image_converted; - find ./docs.it4i -name "*.jpeg" -type f >> ./info/list_image_converted; - find ./docs.it4i -name "*.md" -type f> ./info/list_md_converted; - find ./docs.it4i -type d | sort > ./info/list_folder_converted; - - echo "$(tput setaf 13)COMPLETED...$(tput setaf 15)"; - else - printf "\n\n$(tput setaf 9)Can not create a folder docs.it4i, because the number of MD files disagrees. 
The converted files remain in the folder docs.it4i.cz !!!!...$(tput setaf 15)\n\n"; - fi - else - printf "\n\n$(tput setaf 9)Folder docs.it4i.cz not exists!!!!...$(tput setaf 15)\n\nRun html_md.sh -w\n\n"; - fi - ENDTIME=$(date +%s) - echo "It takes $(($ENDTIME - $STARTTIME)) seconds to complete this task..." fi diff --git a/scripts/preklopeni_dokumentace/source/formatting b/scripts/preklopeni_dokumentace/source/formatting deleted file mode 100644 index 5ef039f1873c98e6f1e406b544bad0de5887ce03..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/formatting +++ /dev/null @@ -1,54 +0,0 @@ - []()& - **[]()&** -- - & -.[]()&. -<!-- -->& -### []()[]()&### -### []()&### -### **&### -### class="n">&### -### Gnome on Windows**&### Gnome on Windows -### Notes **&### Notes -**Summary&**Summary** -Tape Library T950B**&**Tape Library T950B** - ****The R version 3.0.1 is available on Anselm, along with GUI interface&**The R version 3.0.1 is available on Anselm, along with GUI interface -^& -Input:** FASTQ file.&Input: FASTQ file. -Output:** FASTQ file plus an HTML file containing statistics on the&Output: FASTQ file plus an HTML file containing statistics on the -*Figure 2.****** FASTQ file.***&*Figure 2.**FASTQ file.** -Component:** Hpg-aligner.****&Component:** Hpg-aligner.** -Input:** VCF&Input:** VCF** -the corresponding QC and functional annotations.&the corresponding QC and functional annotations.** -Core features:**&**Core features:** -Regulatory:**&**Regulatory:** -Functional annotation**&**Functional annotation** -Variation**&**Variation** -Systems biology**&**Systems biology** -[VNC](../../../salomon/accessing-the-cluster/graphical-user-interface/vnc.html)**.&**[VNC](../../../salomon/accessing-the-cluster/graphical-user-interface/vnc.html)**. -Workaround:**&**Workaround:** --g** : Generates extra debugging information usable by GDB. -g3&-g** : Generates extra debugging information usable by GDB. 
-g3** --O0** : Suppress all optimizations.&-O0** : Suppress all optimizations.** -nodes.****&nodes. -###Compute Nodes Without Accelerator**&###Compute Nodes Without Accelerator -###Compute Nodes With MIC Accelerator**&###Compute Nodes With MIC Accelerator -###Compute Nodes With GPU Accelerator**&###Compute Nodes With GPU Accelerator -###Fat Compute Nodes**&###Fat Compute Nodes -**Figure Anselm bullx B510 servers****&**Figure Anselm bullx B510 servers** -### Compute Nodes Summary********&### Compute Nodes Summary -<p><a href="x-window-system/cygwin-and-x11-forwarding.html" If no able to forward X11 using PuTTY to CygwinX</a>&[If no able to forward X11 using PuTTY to CygwinX](x-window-system/cygwin-and-x11-forwarding.html) -<p><a href="http://x.cygwin.com/" Install Cygwin</a>&[Install Cygwin](http://x.cygwin.com/) -Component:**> Hpg-Fastq & FastQC&Component:**> Hpg-Fastq & FastQC** -Input:** BAM file.&Input:** BAM file.** -Output:** BAM file plus an HTML file containing statistics. &Output:** BAM file plus an HTML file containing statistics.** -Component:** GATK.&Component:** GATK.** -Input:** BAM&Input:** BAM** -Output:** VCF&Output:** VCF** -Variant Call Format (VCF)**&**Variant Call Format (VCF)** -</span></span>& -/ansys_inc/shared_les/licensing/lic_admin/anslic_admin&/ansys_inc/shared_les/licensing/lic_admin/anslic_admin -<td align="left">& | -However, users need only manage User and CA certificates. Note that your&**However, users need only manage User and CA certificates. 
Note that your -X.509 PKI certificates for communication with us.&X.509 PKI certificates for communication with us.** -PuTTYgen**&**PuTTYgen** -key](../ssh-keys.html) file if Pageant ****SSH&key](../ssh-keys.html) file if Pageant **SSH -authentication agent is not used.&authentication agent is not used.** diff --git a/scripts/preklopeni_dokumentace/source/lastFilter b/scripts/preklopeni_dokumentace/source/lastFilter deleted file mode 100644 index 00641541ae06f6de1a53cafb4d66b091ce6d5c1e..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/lastFilter +++ /dev/null @@ -1,70 +0,0 @@ ->key.&key. -</span>& -`.ssh`&`.ssh`  directory: 700 (drwx------) -Authorized_keys,&Authorized_keys, known_hosts and public key (`.pub` file): `644 (-rw-r--r--)` -Private key&Private key (`id_rsa/id_rsa.ppk` ): `600 (-rw-------)` - (gnome-session:23691): WARNING **: Cannot open display:& (gnome-session:23691): WARNING **: Cannot open display:** - & - & -Search for the localhost and port number (in this case&**Search for the localhost and port number (in this case -Preferences](gdmscreensaver.png/@@images/8e80a92f-f691-4d92-8e62-344128dcc00b.png "Screensaver Preferences")](../../../../salomon/gnome_screen.jpg.1)& -maximum performance**&maximum performance -Better performance** is obtained by logging on the allocated compute&**Better performance** is obtained by logging on the allocated compute - class="discreet">& - id="result_box"& - class="hps alt-edited">stated &stated in the previous example. 
-Water-cooled Compute Nodes With MIC Accelerator**&**Water-cooled Compute Nodes With MIC Accelerator** -Access from PRACE network:**&**Access from PRACE network:** -Access from public Internet:**&**Access from public Internet:** -Access from PRACE network:**&**Access from PRACE network:** -Access from public Internet:**&**Access from public Internet:** ->VPN client installation&VPN client installation -Install](https://docs.it4i.cz/salomon/vpn_web_install_2.png/@@images/c2baba93-824b-418d-b548-a73af8030320.png "VPN Install")](../vpn_web_install_2.png)& - [](Salomon_IB_topology.png)& -###IB single-plane topology - ICEX Mcell**&###IB single-plane topology - ICEX Mcell -As shown in a diagram [IB&As shown in a diagram & -and [ & -SCRATCH](storage.html#shared-filesystems).& -There are two main shared file systems on Salomon cluster, the [&There are two main shared file systems on Salomon cluster, the [HOME](storage.html#home)and [SCRATCH](storage.html#shared-filesystems). ->Disk usage and quota commands&Disk usage and quota commands ->OpenCL&### OpenCL -Execution on host**&**Execution on host** -Execution on host - MPI processes distributed over multiple&**Execution on host - MPI processes distributed over multiple -Host only node-file:**&**Host only node-file:** -MIC only node-file**:&MIC only node-file: -Host and MIC node-file**:&Host and MIC node-file: -interface Rstudio&interface Rstudio** -### >&### -3. >>&3. > -- >>&- > ->Usage with MPI&Usage with MPI ->>&> - id="result_box"> & -<span& ->Introduction&Introduction ->Options&Options -***[ANSYS&**[ANSYS -Channel partner](http://www.ansys.com/)***&Channel partner](http://www.ansys.com/)** -Multiphysics)-**Commercial.**&Multiphysics)-**Commercial. ->1. Common way to run Fluent over pbs file&1. Common way to run Fluent over pbs file ->2. Fast way to run Fluent from command line&2. 
Fast way to run Fluent from command line -**Academic**&**Academic -***&** -id="result_box"& -</span>& -[](Fluent_Licence_2.jpg)& -Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS&**Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS diff --git a/scripts/preklopeni_dokumentace/source/list_folder b/scripts/preklopeni_dokumentace/source/list_folder deleted file mode 100644 index 8b4067625d434fc96dcfc90fa6d43fa549d6f30f..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/list_folder +++ /dev/null @@ -1,34 +0,0 @@ -anselm-cluster-documentation -anselm-cluster-documentation/accessing-the-cluster -anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access -anselm-cluster-documentation/resource-allocation-and-job-execution -anselm-cluster-documentation/software -anselm-cluster-documentation/software/ansys -anselm-cluster-documentation/software/chemistry -anselm-cluster-documentation/software/comsol -anselm-cluster-documentation/software/debuggers -anselm-cluster-documentation/software/intel-suite -anselm-cluster-documentation/software/mpi-1 -anselm-cluster-documentation/software/numerical-languages -anselm-cluster-documentation/software/numerical-libraries -anselm-cluster-documentation/software/omics-master-1 -anselm-cluster-documentation/storage-1 -get-started-with-it4innovations -get-started-with-it4innovations/accessing-the-clusters -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer -get-started-with-it4innovations/obtaining-login-credentials -salomon -salomon/accessing-the-cluster -salomon/hardware-overview-1 -salomon/network-1 -salomon/resource-allocation-and-job-execution -salomon/software -salomon/software/ansys -salomon/software/chemistry -salomon/software/comsol -salomon/software/debuggers -salomon/software/intel-suite -salomon/software/mpi-1 
-salomon/software/numerical-languages -salomon/storage diff --git a/scripts/preklopeni_dokumentace/source/list_image_mv b/scripts/preklopeni_dokumentace/source/list_image_mv deleted file mode 100644 index d543838d34c7422ee24e0a927e33c73f0f969416..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/list_image_mv +++ /dev/null @@ -1,102 +0,0 @@ -anselm-cluster-documentation/software/omics-master-1/fig2.png -anselm-cluster-documentation/software/omics-master-1/fig5.png -anselm-cluster-documentation/software/omics-master-1/fig6.png -anselm-cluster-documentation/software/omics-master-1/fig3.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/TightVNC_login.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/putty-tunnel.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/gnome-terminal.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/gdmscreensaver.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/gnome-compute-nodes-over-vnc.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/gdmdisablescreensaver.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwinX11forwarding.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/XWinlistentcp.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_004V.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_003V.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PageantV.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_001V.png 
-get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuTTY_host_Salomon.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuTTY_keyV.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_005V.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuTTY_save_Salomon.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygeneratorV.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuTTY_open_Salomon.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_002V.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/PuttyKeygenerator_006V.png -salomon/accessing-the-cluster/copy_of_vpn_web_install_3.png -salomon/accessing-the-cluster/vpn_contacting.png -salomon/resource-allocation-and-job-execution/rswebsalomon.png -salomon/accessing-the-cluster/vpn_successfull_connection.png -salomon/accessing-the-cluster/vpn_web_install_2.png -salomon/accessing-the-cluster/vpn_web_login_2.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/gnome_screen.png -salomon/network-1/IBsingleplanetopologyAcceleratednodessmall.png -salomon/network-1/IBsingleplanetopologyICEXMcellsmall.png -salomon/network-1/Salomon_IB_topology.png -salomon/network-1/7D_Enhanced_hypercube.png -salomon/accessing-the-cluster/vpn_web_login.png -salomon/accessing-the-cluster/vpn_login.png -salomon/software/debuggers/totalview2.png -salomon/software/debuggers/Snmekobrazovky20160211v14.27.45.png -salomon/software/debuggers/ddt1.png -salomon/software/debuggers/totalview1.png -salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png -salomon/software/intel-suite/Snmekobrazovky20151204v15.35.12.png -salomon/software/ansys/AMsetPar1.png 
-salomon/accessing-the-cluster/vpn_contacting_https_cluster.png -salomon/accessing-the-cluster/vpn_web_download.png -salomon/accessing-the-cluster/vpn_web_download_2.png -salomon/accessing-the-cluster/vpn_contacting_https.png -salomon/accessing-the-cluster/vpn_web_install_4.png -anselm-cluster-documentation/software/omics-master-1/fig7.png -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vncviewer.png -salomon/resource-allocation-and-job-execution/job_sort_formula.png -salomon/resource-allocation-and-job-execution/fairshare_formula.png -anselm-cluster-documentation/resource-allocation-and-job-execution/rsweb.png -anselm-cluster-documentation/quality2.png -anselm-cluster-documentation/turbovncclientsetting.png -get-started-with-it4innovations/obtaining-login-credentials/Authorization_chain.png -anselm-cluster-documentation/scheme.png -anselm-cluster-documentation/quality3.png -anselm-cluster-documentation/legend.png -anselm-cluster-documentation/bullxB510.png -salomon/software/debuggers/vtune-amplifier.png -anselm-cluster-documentation/software/debuggers/totalview2.png -anselm-cluster-documentation/software/debuggers/Snmekobrazovky20141204v12.56.36.png -anselm-cluster-documentation/software/debuggers/ddt1.png -anselm-cluster-documentation/software/debuggers/totalview1.png -anselm-cluster-documentation/software/numerical-languages/Matlab.png -anselm-cluster-documentation/quality1.png -anselm-cluster-documentation/software/omics-master-1/fig1.png -anselm-cluster-documentation/software/omics-master-1/fig8.png -salomon/software/debuggers/report.png -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpnuiV.png -anselm-cluster-documentation/software/omics-master-1/fig4.png -anselm-cluster-documentation/software/omics-master-1/fig7x.png -anselm-cluster-documentation/software/omics-master-1/fig9.png -salomon/software/ansys/Fluent_Licence_2.jpg -salomon/software/ansys/Fluent_Licence_4.jpg 
-salomon/software/ansys/Fluent_Licence_1.jpg -salomon/software/ansys/Fluent_Licence_3.jpg -anselm-cluster-documentation/accessing-the-cluster/Anselmprofile.jpg -anselm-cluster-documentation/accessing-the-cluster/anyconnecticon.jpg -anselm-cluster-documentation/accessing-the-cluster/anyconnectcontextmenu.jpg -anselm-cluster-documentation/accessing-the-cluster/logingui.jpg -anselm-cluster-documentation/software/ansys/Fluent_Licence_2.jpg -anselm-cluster-documentation/software/ansys/Fluent_Licence_4.jpg -anselm-cluster-documentation/software/ansys/Fluent_Licence_1.jpg -anselm-cluster-documentation/software/ansys/Fluent_Licence_3.jpg -anselm-cluster-documentation/accessing-the-cluster/firstrun.jpg -anselm-cluster-documentation/accessing-the-cluster/successfullconnection.jpg -salomon/sgi-c1104-gp1.jpeg -salomon/salomon-1.jpeg -salomon/hardware-overview-1/uv-2000.jpeg -salomon/salomon-3.jpeg -salomon/salomon-4.jpeg -anselm-cluster-documentation/accessing-the-cluster/loginwithprofile.jpeg -anselm-cluster-documentation/accessing-the-cluster/instalationfile.jpeg -anselm-cluster-documentation/accessing-the-cluster/successfullinstalation.jpeg -anselm-cluster-documentation/accessing-the-cluster/java_detection.jpeg -anselm-cluster-documentation/accessing-the-cluster/executionaccess.jpeg -anselm-cluster-documentation/accessing-the-cluster/downloadfilesuccessfull.jpeg -anselm-cluster-documentation/accessing-the-cluster/executionaccess2.jpeg -anselm-cluster-documentation/accessing-the-cluster/login.jpeg diff --git a/scripts/preklopeni_dokumentace/source/list_md_mv b/scripts/preklopeni_dokumentace/source/list_md_mv deleted file mode 100644 index f4a48544c0fdb926e6a3f96f54de9ba2701edaee..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/list_md_mv +++ /dev/null @@ -1,150 +0,0 @@ -get-started-with-it4innovations/accessing-the-clusters/introduction.md -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md 
-get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md -get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/introduction.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md -get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md -get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md -get-started-with-it4innovations/applying-for-resources.md -salomon/introduction.md -salomon/resource-allocation-and-job-execution/introduction.md -salomon/resource-allocation-and-job-execution/resources-allocation-policy.md -salomon/resource-allocation-and-job-execution/job-submission-and-execution.md -salomon/resource-allocation-and-job-execution/capacity-computing.md -salomon/resource-allocation-and-job-execution/job-priority.md -salomon/prace.md -salomon/environment-and-modules.md -salomon/network-1/7d-enhanced-hypercube.md -salomon/network-1/ib-single-plane-topology.md -salomon/network-1/network.md -salomon/accessing-the-cluster/outgoing-connections.md -salomon/accessing-the-cluster/vpn-access.md -salomon/software/debuggers/intel-vtune-amplifier.md -salomon/software/debuggers/summary.md -salomon/software/debuggers/allinea-performance-reports.md -salomon/software/debuggers/valgrind.md -salomon/software/debuggers/allinea-ddt.md -salomon/software/debuggers/aislinn.md 
-salomon/software/debuggers/vampir.md -salomon/software/debuggers/total-view.md -salomon/software/numerical-languages/introduction.md -salomon/software/numerical-languages/octave.md -salomon/software/numerical-languages/matlab.md -salomon/software/numerical-languages/r.md -salomon/software/operating-system.md -salomon/software/mpi-1/mpi.md -salomon/software/mpi-1/mpi4py-mpi-for-python.md -salomon/software/mpi-1/Running_OpenMPI.md -salomon/software/intel-xeon-phi.md -salomon/software/chemistry/phono3py.md -salomon/software/chemistry/molpro.md -salomon/software/chemistry/nwchem.md -salomon/software/ansys/ansys.md -salomon/software/compilers.md -salomon/software/intel-suite/intel-compilers.md -salomon/software/intel-suite/intel-inspector.md -salomon/software/intel-suite/intel-integrated-performance-primitives.md -salomon/software/intel-suite/intel-advisor.md -salomon/software/intel-suite/intel-trace-analyzer-and-collector.md -salomon/software/intel-suite/intel-mkl.md -salomon/software/intel-suite/intel-parallel-studio-introduction.md -salomon/software/intel-suite/intel-tbb.md -salomon/software/intel-suite/intel-debugger.md -salomon/software/debuggers.md -salomon/software/java.md -salomon/software/comsol/comsol-multiphysics.md -salomon/software/comsol/licensing-and-available-versions.md -salomon/software/ansys/ansys-mechanical-apdl.md -salomon/software/ansys/setting-license-preferences.md -salomon/software/ansys/workbench.md -salomon/software/ansys/licensing.md -salomon/software/ansys/ansys-fluent.md -salomon/software/ansys/ansys-cfx.md -salomon/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md -salomon/software/ansys/ansys-ls-dyna.md -salomon/storage/cesnet-data-storage.md -salomon/storage/storage.md -salomon/accessing-the-cluster.md -salomon/hardware-overview-1/hardware-overview.md -anselm-cluster-documentation/introduction.md -anselm-cluster-documentation/hardware-overview.md 
-anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md -anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md -anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md -anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md -anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md -anselm-cluster-documentation/prace.md -anselm-cluster-documentation/storage-1/cesnet-data-storage.md -anselm-cluster-documentation/storage-1/storage.md -anselm-cluster-documentation/environment-and-modules.md -anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md -anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md -anselm-cluster-documentation/accessing-the-cluster/vpn-access.md -anselm-cluster-documentation/software/nvidia-cuda.md -anselm-cluster-documentation/software/debuggers/papi.md -anselm-cluster-documentation/software/debuggers/scalasca.md -anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md -anselm-cluster-documentation/software/debuggers/summary.md -anselm-cluster-documentation/software/debuggers/cube.md -anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md -anselm-cluster-documentation/software/debuggers/valgrind.md -anselm-cluster-documentation/software/debuggers/allinea-ddt.md -anselm-cluster-documentation/software/debuggers/score-p.md -anselm-cluster-documentation/software/debuggers/vampir.md -anselm-cluster-documentation/software/debuggers/total-view.md -anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md -anselm-cluster-documentation/software/kvirtualization.md -anselm-cluster-documentation/software/gpi2.md -anselm-cluster-documentation/software/paraview.md -anselm-cluster-documentation/software/numerical-languages/introduction.md 
-anselm-cluster-documentation/software/numerical-languages/octave.md -anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md -anselm-cluster-documentation/software/numerical-languages/matlab.md -anselm-cluster-documentation/software/numerical-languages/r.md -anselm-cluster-documentation/software/operating-system.md -anselm-cluster-documentation/software/mpi-1/mpi.md -anselm-cluster-documentation/software/mpi-1/mpi4py-mpi-for-python.md -anselm-cluster-documentation/software/mpi-1/running-mpich2.md -anselm-cluster-documentation/software/mpi-1/Running_OpenMPI.md -anselm-cluster-documentation/software/intel-xeon-phi.md -anselm-cluster-documentation/software/chemistry/molpro.md -anselm-cluster-documentation/software/chemistry/nwchem.md -anselm-cluster-documentation/software/ansys.md -anselm-cluster-documentation/software/isv_licenses.md -anselm-cluster-documentation/software/compilers.md -anselm-cluster-documentation/software/intel-suite/intel-compilers.md -anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md -anselm-cluster-documentation/software/intel-suite/intel-mkl.md -anselm-cluster-documentation/software/intel-suite/intel-parallel-studio-introduction.md -anselm-cluster-documentation/software/intel-suite/intel-tbb.md -anselm-cluster-documentation/software/intel-suite/intel-debugger.md -anselm-cluster-documentation/software/debuggers.md -anselm-cluster-documentation/software/omics-master-1/priorization-component-bierapp.md -anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md -anselm-cluster-documentation/software/omics-master-1/overview.md -anselm-cluster-documentation/software/openfoam.md -anselm-cluster-documentation/software/java.md -anselm-cluster-documentation/software/comsol/comsol-multiphysics.md -anselm-cluster-documentation/software/intel-suite.md -anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md 
-anselm-cluster-documentation/software/ansys/ansys-fluent.md -anselm-cluster-documentation/software/ansys/ansys-cfx.md -anselm-cluster-documentation/software/ansys/ls-dyna.md -anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md -anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md -anselm-cluster-documentation/software/numerical-libraries/gsl.md -anselm-cluster-documentation/software/numerical-libraries/trilinos.md -anselm-cluster-documentation/software/numerical-libraries/hdf5.md -anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md -anselm-cluster-documentation/software/numerical-libraries/fftw.md -anselm-cluster-documentation/software/numerical-libraries/petsc.md -anselm-cluster-documentation/network.md -anselm-cluster-documentation/remote-visualization.md -anselm-cluster-documentation/compute-nodes.md -get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md -index.md diff --git a/scripts/preklopeni_dokumentace/source/list_rm b/scripts/preklopeni_dokumentace/source/list_rm deleted file mode 100644 index 8c1e208065c9b1dd5d32010852ffeaeaf3485f5b..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/list_rm +++ /dev/null @@ -1,101 +0,0 @@ -./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster.html -./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1.html -./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc.html -./docs.it4i.cz/anselm-cluster-documentation.html -./docs.it4i.cz/anselm-cluster-documentation/icon.jpg -./docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution.html -./docs.it4i.cz/anselm-cluster-documentation/software.1.html -./docs.it4i.cz/anselm-cluster-documentation/software/anselm-cluster-documentation/software/mpi-1/running-mpich2.html 
-./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx-pbs-file/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent-pbs-file/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna-pbs-file/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mapdl-pbs-file/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/licensing.html -./docs.it4i.cz/anselm-cluster-documentation/software/ansys/ls-dyna-pbs-file/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/chemistry.html -./docs.it4i.cz/anselm-cluster-documentation/software/comsol.html -./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/mympiprog_32p_2014-10-15_16-56.html -./docs.it4i.cz/anselm-cluster-documentation/software/mpi-1.html -./docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.1.html -./docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.html -./docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig1.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig1.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig2.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig2.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig3.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig3.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig4.png/image_view_fullscreen.html 
-./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig4.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig5.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig5.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig6.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig6.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7x.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7x.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig8.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig8.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig9.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig9.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/table1.png/image_view_fullscreen.html -./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/table1.png/view.html -./docs.it4i.cz/anselm-cluster-documentation/software/virtualization.html -./docs.it4i.cz/anselm-cluster-documentation/storage-1.html -./docs.it4i.cz/anselm-cluster-documentation/storage.html -./docs.it4i.cz/anselm.html -./docs.it4i.cz/changelog.html -./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface.html 
-./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding.html -./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html -./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1.html -./docs.it4i.cz/get-started-with-it4innovations/changelog.html -./docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials.html -./docs.it4i.cz/links.html -./docs.it4i.cz/pbspro-documentation.html -./docs.it4i.cz/robots.txt -./docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface.html -./docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc.html -./docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.html -./docs.it4i.cz/salomon/compute-nodes.html -./docs.it4i.cz/salomon/hardware-overview-1.1.html -./docs.it4i.cz/salomon.html -./docs.it4i.cz/salomon/list_of_modules.html -./docs.it4i.cz/salomon/network-1.html -./docs.it4i.cz/salomon/network-1/IB single-plane topology - Accelerated nodes.pdf/view.html -./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IB single-plane topology - ICEX Mcell.pdf/view.html -./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0.html -./docs.it4i.cz/salomon/resource-allocation-and-job-execution.html -./docs.it4i.cz/salomon/software/ansys/ansys-cfx-pbs-file/view.html -./docs.it4i.cz/salomon/software/ansys/ansys-fluent-pbs-file/view.html -./docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna-pbs-file/view.html -./docs.it4i.cz/salomon/software/ansys/ansys-mapdl-pbs-file/view.html -./docs.it4i.cz/salomon/software/ansys/ls-dyna-pbs-file/view.html -./docs.it4i.cz/salomon/software/chemistry.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/gofree-cond1.sh/view.html 
-./docs.it4i.cz/salomon/software/chemistry/phono3py-input.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/INCAR/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/KPOINTS/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/poscar-si/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/POTCAR/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/prepare.sh/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/run.sh/view.html -./docs.it4i.cz/salomon/software/chemistry/phono3py-input/submit.sh/view.html -./docs.it4i.cz/salomon/software/comsol.html -./docs.it4i.cz/salomon/software/debuggers/mympiprog_32p_2014-10-15_16-56.html -./docs.it4i.cz/salomon/software/debuggers/score-p.html -./docs.it4i.cz/salomon/software.html -./docs.it4i.cz/salomon/software/intel-suite.html -./docs.it4i.cz/salomon/software/isv_licenses.html -./docs.it4i.cz/salomon/software/mpi-1.html -./docs.it4i.cz/salomon/software/numerical-languages.1.html -./docs.it4i.cz/salomon/software/numerical-languages.html -./docs.it4i.cz/salomon/storage.html -./docs.it4i.cz/sitemap.html -./docs.it4i.cz/whats-new.html -./docs.it4i.cz/salomon/index.html -./docs.it4i.cz/get-started-with-it4innovations/introduction.html diff --git a/scripts/preklopeni_dokumentace/source/repairIMG b/scripts/preklopeni_dokumentace/source/repairIMG deleted file mode 100644 index 5a8d1763fd87d5d3e2e40aa52b0ab08aa6e43f75..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/repairIMG +++ /dev/null @@ -1,123 +0,0 @@ -& -& - -2](../executionaccess2.jpg/@@images/bed3998c-4b82-4b40-83bd-c3528dde2425.jpeg "Execution access 2")& -& -& -& -& -& -& -& -& -& -& -& -**&& -& -& - &increases.](fig5.png.1 "fig5.png") -out.](fig1.png "Fig 1")&out.](fig1.png "Fig 1") -& -operation.](images/fig3.png "fig3.png")&operation.](fig3.png "fig3.png")& -where the position is ambiguous.](images/fig4.png)&where the position is 
ambiguous.](fig4.png) -genomic coordinates.](images/fig6.png.1 "fig6.png")&genomic coordinates.](fig6.png.1 "fig6.png") - [](cygwin-and-x11-forwarding.html)& -[](putty-tunnel.png)& -& -[****](TightVNC_login.png)& -[](https://docs.it4i.cz/get-started-with-it4innovations/gnome_screen.jpg)& -[](gdmdisablescreensaver.png)& -[](../../../../salomon/gnome_screen.jpg.1)& -[](gnome-terminal.png)& -[](gnome-compute-nodes-over-vnc.png)& - [](PageantV.png)& - [](PuTTY_host_Salomon.png)& - [](PuTTY_keyV.png)& - [](PuTTY_save_Salomon.png)& - [](PuTTY_open_Salomon.png)& - [](PuttyKeygeneratorV.png)& - [](PuttyKeygenerator_001V.png)& - [](PuttyKeygenerator_002V.png)& - [](20150312_143443.png)& - [](PuttyKeygenerator_004V.png)& - [](PuttyKeygenerator_005V.png)& - [](PuttyKeygenerator_006V.png)& -[](../vpn_web_login.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_login_2.png/@@images/be923364-0175-4099-a363-79229b88e252.png "VPN Install")](../vpn_web_login_2.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_install_2.png/@@images/c2baba93-824b-418d-b548-a73af8030320.png "VPN Install")](../vpn_web_install_2.png)[ -Install](https://docs.it4i.cz/salomon/copy_of_vpn_web_install_3.png/@@images/9c34e8ad-64b1-4e1d-af3a-13c7a18fbca4.png "VPN Install")](../copy_of_vpn_web_install_3.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_install_4.png/@@images/4cc26b3b-399d-413b-9a6c-82ec47899585.png "VPN Install")](../vpn_web_install_4.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_download.png/@@images/06a88cce-5f51-42d3-8f0a-f615a245beef.png "VPN Install")](../vpn_web_download.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_download_2.png/@@images/3358d2ce-fe4d-447b-9e6c-b82285f9796e.png "VPN Install")](../vpn_web_download_2.png)& -& -[](../vpn_contacting_https_cluster.png)& -Cluster](https://docs.it4i.cz/salomon/vpn_contacting_https.png/@@images/ff365499-d07c-4baf-abb8-ce3e15559210.png "VPN Contacting Cluster")](../vpn_contacting_https.png)& 
-[](../../anselm-cluster-documentation/anyconnecticon.jpg)& -[](../../anselm-cluster-documentation/anyconnectcontextmenu.jpg)& -[](../vpn_contacting.png)& -login](https://docs.it4i.cz/salomon/vpn_login.png/@@images/5102f29d-93cf-4cfd-8f55-c99c18f196ea.png "VPN login")](../vpn_login.png)& -Connection](https://docs.it4i.cz/salomon/vpn_successfull_connection.png/@@images/45537053-a47f-48b2-aacd-3b519d6770e6.png "VPN Succesfull Connection")](../vpn_successfull_connection.png)& -[](../salomon-2)&& -& -[](salomon)& -& -& -& -[](7D_Enhanced_hypercube.png)& -[](https://docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IB%20single-plane%20topology%20-%20ICEX%20Mcell.pdf)& -[](https://docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IB%20single-plane%20topology%20-%20Accelerated%20nodes.pdf)& -[](Fluent_Licence_1.jpg)& -[](Fluent_Licence_2.jpg)& -[](Fluent_Licence_3.jpg)& -[](Fluent_Licence_4.jpg)& -& -& -[{.image-inline width="451"& -height="513"}](ddt1.png)& -& -[](vtune-amplifier)& -[](totalview1.png)& -[](totalview2.png)& -& -& -& -screensaver](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmdisablescreensaver.png/@@images/8a4758d9-3027-4ce4-9a90-2d5e88197451.png "Disable lock screen and screensaver")](gdmdisablescreensaver.png)& -[](gnome-terminal.png)& -[](gnome-compute-nodes-over-vnc.png)& -& -genomic coordinates.](fig6.png.1 "fig6.png")&genomic coordinates.](fig6.png) -out.](images/fig1.png "Fig 1")&out.](fig1.png) -operation.](fig3.png "fig3.png")&operation.](fig3.png) -starts.](images/fig7.png "fig7.png")&starts.](fig7.png) -H).](images/fig7x.png "fig7x.png")&H).](fig7x.png) -](images/fig8.png "fig8.png")*&](fig8.png)* -tumor.](images/fig9.png "fig9.png")**&tumor.](fig9.png) -increases.](fig5.png.1 "fig5.png")&increases.](fig5.png) diff --git a/scripts/preklopeni_dokumentace/source/replace b/scripts/preklopeni_dokumentace/source/replace deleted file mode 100644 index 
a95df81e42d694997a5e586e71a9b7630ecc0196..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/replace +++ /dev/null @@ -1,128 +0,0 @@ -style="text-align: left; float: none; ">& -class="anchor-link">& -class="Apple-converted-space">& -class="discreet visualHighlight">& -class="emphasis">& -class="glossaryItem">& -class="highlightedSearchTerm">& -class="highlightedSearchTerm">SSH</span><span>&highlightedSearchTerm -class="hps">& -class="hps">& -class="hps">More</span> <span class="hps">& -class="hps trans-target-highlight">& -class="internal-link">& -class="internal-link"><span id="result_box" class="short_text"><span& -class="monospace">& -class="monospace">LAPACKE</span> module, which includes Intel's LAPACKE&LAPACKE modelu, which includes Intel's LAPACKE -class="n">& -class="n">& -class="pre">& -class="pun">node_group_key& -class="short_text"><span& -class="smarterwiki-popup-bubble-body"><span& -class="smarterwiki-popup-bubble-links-container"><span& -class="smarterwiki-popup-bubble-links-row">[{.smarterwiki-popup-bubble-link-favicon}](http://maps.google.com/maps?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search Google Maps"){.smarterwiki-popup-bubble-link}[{.smarterwiki-popup-bubble-link-favicon}](http://www.google.com/search?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search 
Google"){.smarterwiki-popup-bubble-link}[](http://www.google.com/search?hl=com&btnI=I'm+Feeling+Lucky&q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB+wikipedia "Search Wikipedia"){.smarterwiki-popup-bubble-link}</span></span></span></span></span>& -class="smarterwiki-popup-bubble-links"><span& -class="smarterwiki-popup-bubble smarterwiki-popup-bubble-active smarterwiki-popup-bubble-flipped"><span&& -class="smarterwiki-popup-bubble-tip"></span><span -</div>& -<div>& -<div class="itemizedlist">& -<div id="d4841e18">& -<div id="d4841e21">& -<div id="d4841e24">& -<div id="d4841e27">& -<div id="d4841e30">& -<div id="d4841e34">& -<div id="d4841e37">& -{.external& -</span>& -<span& -[<span class="anchor-link">& -<span class="discreet">& -<span class="discreet"></span>& -<span class="glossaryItem">& -<span class="hps">& -<span class="hps alt-edited">& -<span class="listitem">& -<span class="n">& -<span class="s1">& -<span class="WYSIWYG_LINK">& -<span dir="auto">& -<span id="__caret">& -<span id="__caret"><span id="__caret"></span></span>& -(<span id="result_box">& -<span id="result_box" class="short_text"><span class="hps">& -<span id="result_box"><span class="hps">& -</span></span>& -</span> <span& -<span><span>& -</span> <span class="hps">& -</span> <span class="hps">who have a valid</span> <span& -<span><span class="monospace">& -<span><span>Introduction&###Introduction -</span></span><span><span>& -</span></span></span></span><span><span>& -</span></span><span><span><span><span>& -.<span style="text-align: left; "> </span>& -<span style="text-align: start; ">& -{.state-missing-value - style="text-align: left; float: none; ">& - style="text-align: left; float: none; ">& - style="text-align: left; float: none; 
">change it&change it to - style="text-align: left; float: none; ">Check Putty settings:& - style="text-align: left; float: none; ">Enable X11&Enable X11 - style="text-align: left; float: none; "> & -style="text-align: start; ">& -style="text-align: start; float: none; ">& -.text}.& -.text}& -ssh-connection style="text-alignstart; "}& -<div& -</div>& -{.anchor-link}& -{.code-basic style="text-align: start; "}& -{.code .highlight .white .shell}& -{.docutils .literal}& -{.email-link}& -{.external-link}& -{.fragment}& -{.image-inline}& -{.image-inline width="451" height="513"}& -{.internal-link}& -{.literal-block}& -{.mw-redirect}& -{#parent-fieldname-title}& -{#parent-fieldname-title .documentFirstHeading}& -{.prettyprint}& -{.prettyprint .lang-cpp}& -{.prettyprint .lang-sh}& -{.prettyprint .lang-sh .prettyprinted}& -{#putty---before-we-start-ssh-connection style="text-align: start; "}& -{#resources-allocation-policy}& -{#schematic-overview}& -{.screen}& -{#setup-and-start-your-own-turbovnc-server}& -{style="text-align: left; "}& -ssh-connection style="text-alignstart; "}& -{#putty---before-we-start-& -<span class="pln">& -class="pln">& -id="parent-fieldname-text-5739e5d4b93b40a6b3d987bd4047d4e0">& -id="content-core">& -id="viewlet-above-content-body">& -id="viewlet-below-content-title">& -^[>[1<span>]]& --link}& -<span& - id="Key_management" class="mw-headline">& -class="external-link">& - class="link-external">& -class="WYSIWYG_LINK">& - class="wide-view-wrapper">& - class="listitem">& - class="emphasis">& -class="visualHighlight">& -{.spip_in& -.external& - </span>& diff --git a/scripts/preklopeni_dokumentace/source/tab b/scripts/preklopeni_dokumentace/source/tab deleted file mode 100644 index bec6eca8f5bc1ba6a48cd28c2a1c38babbcf13ef..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/tab +++ /dev/null @@ -1,28 +0,0 @@ -<table>&\\ -</th>&\\ -</td>&\\ -<td align="left">& | -</tr>& | -<tr class="odd">&\\ -<tr class="even">&\\ 
-</tbody>&\\ -</table>&\\ -<p> class="s1">& -</p>&\\ -<colgroup>&\\ -<col width="50%" />&\\ -</colgroup>&\\ -<thead>&\\ -<tr class="header">&\\ -<tbody>&\\ -<th align="left">& | -</thead>& | --- | --- | -<th align="left">& | -<br />&\\ -</p>&\\ -)\&) -.\&. -<p>& -<table style="width:100%;">&\\ -<col width="16%" />&\\ -<th align="left">& | diff --git a/scripts/preklopeni_dokumentace/source/tabREPLACE b/scripts/preklopeni_dokumentace/source/tabREPLACE deleted file mode 100644 index 7ed3455feb6bc8c41119e9ab92ce8a67644abb74..0000000000000000000000000000000000000000 --- a/scripts/preklopeni_dokumentace/source/tabREPLACE +++ /dev/null @@ -1,147 +0,0 @@ - compute nodes number of workers start-up time[s]&|compute nodes|number of workers|start-up time[s]| - --------------- ------------------- --------------------&|---|---|---| - 16 384 831&|16|384|831| - 8 192 807&|8|192|807| - 4 96 483&|4|96|483| - 2 48 16&|2|48|16| - Node type Count Range Memory Cores [Access](resource-allocation-and-job-execution/resources-allocation-policy.html)&|Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy.html)| - ---------------------------- ------- --------------- -------- ------------- --------------------------------------------------------------------------------------------------&|---|---|---|---|---|---| - Nodes without accelerator 180 cn[1-180] 64GB 16 @ 2.4Ghz qexp, qprod, qlong, qfree&|Nodes without accelerator|180|cn[1-180]|64GB|16 @ 2.4Ghz|qexp, qprod, qlong, qfree| - Nodes with GPU accelerator 23 cn[181-203] 96GB 16 @ 2.3Ghz qgpu, qprod&|Nodes with GPU accelerator|23|cn[181-203]|96GB|16 @ 2.3Ghz|qgpu, qprod| - Nodes with MIC accelerator 4 cn[204-207] 96GB 16 @ 2.3GHz qmic, qprod&|Nodes with MIC accelerator|4|cn[204-207]|96GB|16 @ 2.3GHz|qmic, qprod| - Fat compute nodes 2 cn[208-209] 512GB 16 @ 2.4GHz qfat, qprod&|Fat compute nodes|2|cn[208-209]|512GB|16 @ 2.4GHz|qfat, qprod| - Node Processor Memory 
Accelerator&|Node|Processor|Memory|Accelerator| - ------------------ --------------------------------------- -------- ----------------------&|---|---|---|---| - w/o accelerator 2x Intel Sandy Bridge E5-2665, 2.4GHz 64GB -&|w/o accelerator|2x Intel Sandy Bridge E5-2665, 2.4GHz|64GB|-| - GPU accelerated 2x Intel Sandy Bridge E5-2470, 2.3GHz 96GB NVIDIA Kepler K20&|GPU accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|NVIDIA Kepler K20| - MIC accelerated 2x Intel Sandy Bridge E5-2470, 2.3GHz 96GB Intel Xeon Phi P5110&|MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110| - Fat compute node 2x Intel Sandy Bridge E5-2665, 2.4GHz 512GB -&|Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-| - Login address Port Protocol Login node&|Login address|Port|Protocol|Login node| - ------------------------ ------ ---------- -----------------------------------------&|---|---|---|---| - salomon.it4i.cz 22 ssh round-robin DNS record for login[1-4]&|salomon.it4i.cz|22|ssh|round-robin DNS record for login[1-4]| - login1.salomon.it4i.cz 22 ssh login1&|login1.salomon.it4i.cz|22|ssh|login1| - login2.salomon.it4i.cz 22 ssh login2&|login1.salomon.it4i.cz|22|ssh|login1| - login3.salomon.it4i.cz 22 ssh login3&|login1.salomon.it4i.cz|22|ssh|login1| - login4.salomon.it4i.cz 22 ssh login4&|login1.salomon.it4i.cz|22|ssh|login1| - Toolchain Module(s)&|Toolchain|Module(s)| - -------------------- ------------------------------------------------&|---|----| - GCC GCC&|GCC|GCC| - ictce icc, ifort, imkl, impi&|ictce|icc, ifort, imkl, impi| - intel GCC, icc, ifort, imkl, impi&|intel|GCC, icc, ifort, imkl, impi| - gompi GCC, OpenMPI&|gompi|GCC, OpenMPI| - goolf BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK&|goolf|BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK| - >iompi OpenMPI, icc, ifort&|iompi|OpenMPI, icc, ifort| - iccifort icc, ifort&|iccifort|icc, ifort| - Login address Port Protocol Login node& |Login address|Port|Protocol|Login node| - 
------------------------------ ------ ---------- ----------------------------------& |---|---| - salomon-prace.it4i.cz 2222 gsissh login1, login2, login3 or login4& |salomon-prace.it4i.cz|2222|gsissh|login1, login2, login3 or login4| - login1-prace.salomon.it4i.cz 2222 gsissh login1& |login1-prace.salomon.it4i.cz|2222|gsissh|login1| - login2-prace.salomon.it4i.cz 2222 gsissh login2& |login2-prace.salomon.it4i.cz|2222|gsissh|login2| - login3-prace.salomon.it4i.cz 2222 gsissh login3& |login3-prace.salomon.it4i.cz|2222|gsissh|login3| - login4-prace.salomon.it4i.cz 2222 gsissh login4& |login4-prace.salomon.it4i.cz|2222|gsissh|login4| - |Login address|Port|Protocol|Login node|& |Login address|Port|Protocol|Login node| - ------------------------ ------ ---------- ----------------------------------& |---|---| - salomon.it4i.cz 2222 gsissh login1, login2, login3 or login4& |salomon.it4i.cz|2222|gsissh|login1, login2, login3 or login4| - login1.salomon.it4i.cz 2222 gsissh login1& |login1.salomon.it4i.cz|2222|gsissh|login1| - login2.salomon.it4i.cz 2222 gsissh login2& |login2-prace.salomon.it4i.cz|2222|gsissh|login2| - login3.salomon.it4i.cz 2222 gsissh login3& |login3-prace.salomon.it4i.cz|2222|gsissh|login3| - login4.salomon.it4i.cz 2222 gsissh login4& |login4-prace.salomon.it4i.cz|2222|gsissh|login4| - Login address Port Node role& |Login address|Port|Node role| - ------------------------------- ------ -----------------------------& |---|---| - gridftp-prace.salomon.it4i.cz 2812 Front end /control server& |gridftp-prace.salomon.it4i.cz|2812|Front end /control server| - lgw1-prace.salomon.it4i.cz 2813 Backend / data mover server& |lgw1-prace.salomon.it4i.cz|2813|Backend / data mover server| - lgw2-prace.salomon.it4i.cz 2813 Backend / data mover server& |lgw2-prace.salomon.it4i.cz|2813|Backend / data mover server| - lgw3-prace.salomon.it4i.cz 2813 Backend / data mover server& |lgw3-prace.salomon.it4i.cz|2813|Backend / data mover server| - Login address Port Node role& 
|Login address|Port|Node role|
|---|---|---|
|gridftp.salomon.it4i.cz|2812|Front end / control server|
|lgw1.salomon.it4i.cz|2813|Backend / data mover server|
|lgw2.salomon.it4i.cz|2813|Backend / data mover server|
|lgw3.salomon.it4i.cz|2813|Backend / data mover server|

|File system mount point|Filesystem|Comment|
|---|---|---|
|/home|Lustre|Default HOME directories of users in format /home/prace/login/|
|/scratch|Lustre|Shared SCRATCH mounted on the whole cluster|

|Data type|Default path|
|---|---|
|large project files|/scratch/work/user/prace/login/|
|large scratch/temporary data|/scratch/temp/|

|queue|Active project|Project resources|Nodes|priority|authorization|walltime default/max|
|---|---|---|---|---|---|---|
|**qexp** Express queue|no|none required|32 nodes, max 8 per user|150|no|1 / 1h|
|**qprace** Production queue|yes|> 0|1006 nodes, max 86 per job|0|no|24 / 48h|
|**qfree** Free resource queue|yes|none required|752 nodes, max 86 per job|-1024|no|12 / 12h|

|Port|Protocol|
|---|---|
|22|ssh|
|80|http|
|443|https|
|9418|git|

|Node|Count|Processor|Cores|Memory|Accelerator|
|---|---|---|---|---|---|
|w/o accelerator|576|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|-|
|MIC accelerated|432|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|2x Intel Xeon Phi 7120P, 61 cores, 16GB RAM|

|Node|Count|Processor|Cores|Memory|GPU Accelerator|
|---|---|---|---|---|---|
|visualization|2|2x Intel Xeon E5-2695v3, 2.3GHz|28|512GB|NVIDIA QUADRO K5000, 4GB RAM|

|Hypercube|dimension|
|---|---|
|1D|ehc_1d|
|2D|ehc_2d|
|3D|ehc_3d|
|4D|ehc_4d|
|5D|ehc_5d|
|6D|ehc_6d|
|7D|ehc_7d|

|Node type|Count|Short name|Long name|Rack|
|---|---|---|---|---|
|M-Cell compute nodes w/o accelerator|576|cns1 - cns576|r1i0n0 - r4i7n17|1-4|
|compute nodes MIC accelerated|432|cns577 - cns1008|r21u01n577 - r37u31n1008|21-38|

|Mountpoint|Usage|Protocol|Net Capacity|Throughput|Limitations|Access|Services|
|---|---|---|---|---|---|---|---|
|/home|home directory|NFS, 2-Tier|0.5 PB|6 GB/s|Quota 250GB|Compute and login nodes|backed up|
|/scratch/work|large project files|Lustre|1.69 PB|30 GB/s|Quota 1TB|Compute and login nodes|none|
|/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed|
|/ramdisk|job temporary data, node local|local|120GB|90 GB/s|none|Compute nodes|purged after job ends|
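Since files on /scratch/temp older than 90 days are removed automatically, it can be useful to see which files are at risk before the cleanup runs. A minimal sketch; the helper name and the target path in the example are illustrative:

```shell
# List regular files older than 90 days under a directory -- on /scratch/temp
# these are candidates for automatic removal, per the storage table above.
# The function name is illustrative.
list_expiring() {
  find "$1" -type f -mtime +90 2>/dev/null
}

# Example: list_expiring /scratch/temp/$USER
```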
|Application|Version|module|
|---|---|---|
|**R**|R 3.1.1|R/3.1.1-intel-2015b|
|**Rstudio**|Rstudio 0.97|Rstudio|

|**Version**|**Module**|
|---|---|
|2016 Update 2|Advisor/2016_update2|
|2016 Update 1|Inspector/2016_update1|

|Interface|Integer type|
|---|---|
|LP64|32-bit, int, integer(kind=4), MPI_INT|
|ILP64|64-bit, long int, integer(kind=8), MPI_INT64|

|Parameter|Value|
|---|---|
|max number of atoms|200|
|max number of valence orbitals|300|
|max number of basis functions|4095|
|max number of states per symmetry|20|
|max number of state symmetries|16|
|max number of records|200|
|max number of primitives|maxbfn x [2]|

|Address|Port|Protocol|
|---|---|---|
|anselm.it4i.cz|22|scp, sftp|
|login1.anselm.it4i.cz|22|scp, sftp|
|login2.anselm.it4i.cz|22|scp, sftp|
|dm1.anselm.it4i.cz|22|scp, sftp|

|Login address|Port|Protocol|Login node|
|---|---|---|---|
|anselm.it4i.cz|22|ssh|round-robin DNS record for login1 and login2|
|login1.anselm.it4i.cz|22|ssh|login1|
|login2.anselm.it4i.cz|22|ssh|login2|
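All of the Anselm addresses in the tables above accept scp/sftp on port 22; a minimal sketch of assembling the transfer target (the helper name, username, and paths are illustrative):

```shell
# Build a user@host:path target for scp/sftp transfers to Anselm, per the
# address table above. Helper name and example values are illustrative.
scp_target() {
  printf '%s@%s:%s\n' "$1" "$2" "$3"
}

# Usage (requires a valid account, hence commented out):
# scp data.tar.gz "$(scp_target jsmith dm1.anselm.it4i.cz /home/jsmith/)"
```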