diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/Anselmprofile.jpg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/Anselmprofile.jpg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnectcontextmenu.jpg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnectcontextmenu.jpg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnecticon.jpg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnecticon.jpg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/downloadfilesuccessfull.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/downloadfilesuccessfull.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess2.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess2.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/instalationfile.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/instalationfile.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/java_detection.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/java_detection.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/java_detection.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/java_detection.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/login.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/login.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/login.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/login.jpeg diff 
--git a/converted/docs.it4i.cz/anselm-cluster-documentation/logingui.jpg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/logingui.jpg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/logingui.jpg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/logingui.jpg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/loginwithprofile.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/loginwithprofile.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md index e1a4a38dc2823aeee3bb07fc9c1d0baafa431f67..342fc28f43616c0e2c6b7b7f394e4551ba15cad5 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md @@ -44,7 +44,7 @@ local $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz ``` In this example, we establish port forwarding between port 6000 on -Anselm and port 1234 on the remote.host.com. By accessing +Anselm and port 1234 on the remote.host.com. By accessing localhost:6000 on Anselm, an application will see response of remote.host.com:1234. The traffic will run via users local workstation. @@ -55,7 +55,7 @@ Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open. Port forwarding may be established directly to the remote host. However, -this requires that user has ssh access to remote.host.com +this requires that user has ssh access to remote.host.com ``` $ ssh -L 6000:localhost:1234 remote.host.com ``` @@ -66,7 +66,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port. ### Port forwarding from compute nodes Remote port forwarding from compute nodes allows applications running on -the compute nodes to access hosts outside Anselm Cluster. +the compute nodes to access hosts outside Anselm Cluster. First, establish the remote port forwarding form the login node, as [described @@ -80,7 +80,7 @@ $ ssh -TN -f -L 6000:localhost:6000 login1 ``` In this example, we assume that port forwarding from login1:6000 to -remote.host.com:1234 has been established beforehand. By accessing +remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see response of remote.host.com:1234 @@ -90,7 +90,7 @@ Port forwarding is static, each single port is mapped to a particular port on remote host. Connection to other remote host, requires new forward. -Applications with inbuilt proxy support, experience unlimited access to +Applications with inbuilt proxy support, experience unlimited access to remote hosts, via single proxy server. To establish local proxy server on your workstation, install and run @@ -114,6 +114,6 @@ local $ ssh -R 6000:localhost:1080 anselm.it4i.cz ``` Now, configure the applications proxy settings to **localhost:6000**. -Use port forwarding to access the [proxy server from compute +Use port forwarding to access the [proxy server from compute nodes](outgoing-connections.html#port-forwarding-from-compute-nodes) as well .
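The following consolidated sketch pulls together the SOCKS proxy chain described in outgoing-connections.md above; the port numbers 1080 and 6000 are examples only and any free ports may be substituted:

```
# on the local workstation: start a local SOCKS proxy and expose it on Anselm
local $ ssh -D 1080 localhost
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz

# on a compute node: tunnel port 6000 from the login node, then point the
# application's proxy settings to localhost:6000
$ ssh -TN -f -L 6000:localhost:6000 login1
```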
diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md index 65d1c36ad2d0cd897ffd1c16e0956f7bd0d3b304..82f3ae4121b2b45c4f9d86253fcd0f1d0b683578 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md @@ -1,4 +1,4 @@ -Shell access and data transfer +Shell access and data transfer ============================== @@ -8,7 +8,7 @@ Shell access and data transfer Interactive Login ----------------- -The Anselm cluster is accessed by SSH protocol via login nodes login1 +The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address. @@ -19,7 +19,7 @@ login1.anselm.it4i.cz 22 ssh login1 login2.anselm.it4i.cz 22 ssh login2 The authentication is by the [private -key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html) +key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html) Please verify SSH fingerprints during the first logon. They are identical on all login nodes: @@ -44,7 +44,7 @@ local $ chmod 600 /path/to/id_rsa ``` On **Windows**, use [PuTTY ssh -client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html). +client](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html). After logging in, you will see the command prompt: @@ -79,10 +79,10 @@ Address Port anselm.it4i.cz 22 scp, sftp login1.anselm.it4i.cz 22 scp, sftp login2.anselm.it4i.cz 22 scp, sftp - class="discreet">dm1.anselm.it4i.cz class="discreet">22 <span class="discreet">scp, sftp</span> +dm1.anselm.it4i.cz 22 scp, sftp  The authentication is by the [private -key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html) +key](../../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html) Data transfer rates up to **160MB/s** can be achieved with scp or sftp. 1TB may be transferred in 1:50h.
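A minimal sketch of the data transfer described in shell-and-data-access.md above; the username, key path, and file name are placeholders:

```
# copy a local file to your Anselm home directory; sftp works analogously
local $ scp -i /path/to/id_rsa mydata.tar.gz username@anselm.it4i.cz:~/
local $ sftp -i /path/to/id_rsa username@anselm.it4i.cz
```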
diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpeg b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/successfullinstalation.jpeg similarity index 100% rename from converted/docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpeg rename to converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/successfullinstalation.jpeg diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md index 8f9b46013862cdf3290aa48ff3c924a1d8916974..e245539bcb98ad2b3b4613b5e866ea3b8c9d55c3 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md @@ -35,37 +35,37 @@ It is impossible to connect to VPN from other operating systems. You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn1.it4i.cz/anselm> - + According to the Java settings after login, the client either automatically installs, or downloads installation file for your operating system. It is necessary to allow start of installation tool for automatic installation. - - + + + +](../executionaccess.jpg/@@images/4d6e7cb7-9aa7-419c-9583-6dfd92b2c015.jpeg "Execution access") + + After successful installation, VPN connection will be established and you can use available resources from IT4I network. - + + If your Java setting doesn't allow automatic installation, you can download installation file and install VPN client manually. - + + After you click on the link, download of installation file will start. - + + After successful download of installation file, you have to execute this tool with administrator's rights and install VPN client manually. @@ -76,22 +76,22 @@ Working with VPN client You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI. - + Before the first login to VPN, you have to fill URL **https://vpn1.it4i.cz/anselm** into the text field. - + After you click on the Connect button, you must fill your login credentials. - + After a successful login, the client will minimize to the system tray. If everything works, you can see a lock in the Cisco tray icon. - If you right-click on this icon, you will see a context menu in which @@ -105,18 +105,18 @@ profile and creates a new item "ANSELM" in the connection list. For subsequent connections, it is not necessary to re-enter the URL address, but just select the corresponding item. - + Then AnyConnect automatically proceeds like in the case of first logon. - + + After a successful logon, you can see a green circle with a tick mark on the lock icon. - + + For disconnecting, right-click on the AnyConnect client icon in the system tray and select **VPN Disconnect**. 
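The vpn-access.md changes above describe the GUI workflow only. A command-line sketch is shown below, assuming the Cisco AnyConnect CLI client is installed (vpncli.exe on Windows; the exact binary name and path depend on the AnyConnect version and platform):

```
# connect to the Anselm VPN profile; LDAP credentials are prompted interactively
vpncli.exe connect vpn1.it4i.cz/anselm
# check the connection state and disconnect when finished
vpncli.exe state
vpncli.exe disconnect
```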
diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md b/converted/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md index ecae0fd750eef8f45a0efec6338f165bbbd64cef..46675a0775ca8a61a59bbcade72aaed9c9d0dcd8 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md @@ -15,38 +15,38 @@ nodes.**** ###Compute Nodes Without Accelerator** -- <div class="itemizedlist"> +- 180 nodes -- <div class="itemizedlist"> +- 2880 cores in total -- <div class="itemizedlist"> +- two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node -- <div class="itemizedlist"> +- 64 GB of physical memory per node - one 500GB SATA 2,5” 7,2 krpm HDD per node -- <div class="itemizedlist"> +- bullx B510 blade servers -- <div class="itemizedlist"> +- cn[1-180] @@ -54,44 +54,44 @@ nodes.**** ###Compute Nodes With GPU Accelerator** -- <div class="itemizedlist"> +- 23 nodes -- <div class="itemizedlist"> +- 368 cores in total -- <div class="itemizedlist"> +- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node -- <div class="itemizedlist"> +- 96 GB of physical memory per node - one 500GB SATA 2,5” 7,2 krpm HDD per node -- <div class="itemizedlist"> +- GPU accelerator 1x NVIDIA Tesla Kepler K20 per node -- <div class="itemizedlist"> +- bullx B515 blade servers -- <div class="itemizedlist"> +- cn[181-203] @@ -99,44 +99,44 @@ nodes.**** ###Compute Nodes With MIC Accelerator** -- <div class="itemizedlist"> +- 4 nodes -- <div class="itemizedlist"> +- 64 cores in total -- <div class="itemizedlist"> +- two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node -- <div class="itemizedlist"> +- 96 GB of physical memory per node - one 500GB SATA 2,5” 7,2 krpm HDD per node -- <div class="itemizedlist"> +- MIC accelerator 1x Intel Phi 5110P per node -- <div class="itemizedlist"> +- bullx B515 blade servers -- <div class="itemizedlist"> +- cn[204-207] @@ -144,44 +144,44 @@ nodes.**** ###Fat Compute Nodes** -- <div> +- 2 nodes -- <div> +- 32 cores in total -- <div> +- 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node -- <div> +- 512 GB of physical memory per node - two 300GB SAS 3,5”15krpm HDD (RAID1) per node -- <div> +- two 100GB SLC SSD per node -- <div> +- bullx R423-E3 servers -- <div> +- cn[208-209] @@ -220,10 +220,10 @@ with accelerator). Processors support Advanced Vector Extensions (AVX) ### Intel Sandy Bridge E5-2665 Processor - eight-core - speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology -- peak performance: class="emphasis">19.2 Gflop/s per +- peak performance: 19.2 Gflop/s per core - caches: - <div class="itemizedlist"> + - L2: 256 KB per core - L3: 20 MB per processor @@ -235,10 +235,10 @@ with accelerator). Processors support Advanced Vector Extensions (AVX) ### Intel Sandy Bridge E5-2470 Processor - eight-core - speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology -- peak performance: class="emphasis">18.4 Gflop/s per +- peak performance: 18.4 Gflop/s per core - caches: - <div class="itemizedlist"> + - L2: 256 KB per core - L3: 20 MB per processor @@ -277,7 +277,7 @@ Memory Architecture ### Compute Node Without Accelerator - 2 sockets - Memory Controllers are integrated into processors. - <div class="itemizedlist"> + - 8 DDR3 DIMMS per node - 4 DDR3 DIMMS per CPU @@ -291,7 +291,7 @@ Memory Architecture ### Compute Node With GPU or MIC Accelerator - 2 sockets - Memory Controllers are integrated into processors. 
- <div class="itemizedlist"> + - 6 DDR3 DIMMS per node - 3 DDR3 DIMMS per CPU @@ -305,7 +305,7 @@ Memory Architecture ### Fat Compute Node - 2 sockets - Memory Controllers are integrated into processors. - <div class="itemizedlist"> + - 16 DDR3 DIMMS per node - 8 DDR3 DIMMS per CPU diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md b/converted/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md index a301f19292f5262080771d8cc2e93c5f1138dee9..b4a02b88b8ef2d6f8bbc99808b9a4293f4c3d58c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md @@ -35,7 +35,7 @@ etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as id="result_box" class="short_text"> class="hps alt-edited">stated -class="hps">in the previous example. +in the previous example. ### Application Modules diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md b/converted/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md index 50ae62cbd9a4b5377a0fe58552b5cb0445d2f79a..39af24ff18fd30782488e6cb8783e51a794989a2 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md @@ -10,7 +10,7 @@ of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110 accelerated nodes and 2 fat nodes. Each node is a class="WYSIWYG_LINK">powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), -at least 64GB RAM, and local hard drive. The user access to the Anselm +at least 64GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320TB /home disk storage to store the user files. The 146TB shared @@ -19,7 +19,7 @@ share 320TB /home disk storage to store the user files. The 146TB shared The Fat nodes are equipped with large amount (512GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may -access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, +access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI. @@ -348,9 +348,9 @@ disk storage available on all compute nodes /lscratch. [More about class="WYSIWYG_LINK">Storage](storage.html). -The user access to the Anselm cluster is provided by two login nodes -login1, login2, and data mover node dm1. [More about accessing -cluster.](accessing-the-cluster.html) +The user access to the Anselm cluster is provided by two login nodes +login1, login2, and data mover node dm1. 
[More about accessing +cluster.](accessing-the-cluster.html)  The parameters are summarized in the following tables: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/introduction.md b/converted/docs.it4i.cz/anselm-cluster-documentation/introduction.md index cb25cce81b9eed12427ad908695327c84b5f6cec..39979bff8e9c120e08cd3a5833e8ebcdbb72d8f8 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/introduction.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/introduction.md @@ -23,7 +23,7 @@ class="WYSIWYG_LINK">Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of [software](software.1.html) packages targeted at -different scientific domains. These packages are accessible via the +different scientific domains. These packages are accessible via the [modules environment](environment-and-modules.html). User data shared file-system (HOME, 320TB) and job data shared @@ -37,4 +37,4 @@ Read more on how to [apply for resources](../get-started-with-it4innovations/applying-for-resources.html), [obtain login credentials,](../get-started-with-it4innovations/obtaining-login-credentials.html) -and [access the cluster](accessing-the-cluster.html). +and [access the cluster](accessing-the-cluster.html). diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/network.md b/converted/docs.it4i.cz/anselm-cluster-documentation/network.md index f212cbea7da35aab0d3411dd809a9be887b72768..673a16d793e7cc1e1a87bbc08beb76fd020a1020 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/network.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/network.md @@ -20,7 +20,7 @@ high-bandwidth, low-latency QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree. -The compute nodes may be accessed via the Infiniband network using ib0 +The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes. @@ -34,7 +34,7 @@ other nodes concurrently. Ethernet Network ---------------- -The compute nodes may be accessed via the regular Gigabit Ethernet +The compute nodes may be accessed via the regular Gigabit Ethernet network interface eth0, in address range 10.1.1.1-209, or by using aliases cn1-cn209. The network provides **114MB/s** transfer rates via the TCP connection. @@ -55,5 +55,5 @@ $ ssh 10.2.1.110 $ ssh 10.1.1.108 ``` -In this example, we access the node cn110 by Infiniband network via the +In this example, we access the node cn110 by Infiniband network via the ib0 interface, then from cn110 to cn108 by Ethernet network. diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/prace.md b/converted/docs.it4i.cz/anselm-cluster-documentation/prace.md index 7e6fe2800e4641edcc40c1def7102ee7005abf4d..22b199c7ee52c366b3d6cab076b50cd84568c57f 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/prace.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/prace.md @@ -14,12 +14,12 @@ general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will -not have a password and thus access to some services intended for +not have a password and thus access to some services intended for regular users.
This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html), -if the same level of access is required. +if the same level of access is required. All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) @@ -35,7 +35,7 @@ Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction.html). Please keep in mind, that standard PRACE accounts don't have a password -to access the web interface of the local (IT4Innovations) request +to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz. @@ -59,7 +59,7 @@ Accessing the cluster ### Access with GSI-SSH -For all PRACE users the method for interactive access (login) and data +For all PRACE users the method for interactive access (login) and data transfer based on grid services from Globus Toolkit (GSI SSH and GridFTP) is supported. @@ -67,14 +67,14 @@ The user will need a valid certificate and to be present in the PRACE LDAP (please contact your HOME SITE or the primary investigator of your project for LDAP account creation). -Most of the information needed by PRACE users accessing the Anselm +Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here: - [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) - [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using +- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) - [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) @@ -95,9 +95,9 @@ valid 12 hours), use:  -To access Anselm cluster, two login nodes running GSI SSH service are +To access Anselm cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from -the internal PRACE network (accessible only from other PRACE partners). +the internal PRACE network (accessible only from other PRACE partners). Access from PRACE network:** @@ -162,12 +162,12 @@ gsiscp can be used: If the user needs to run X11 based graphical application and does not have a X11 server, the applications can be run using VNC service. If the -user is using regular SSH based access, please see the [section in +user is using regular SSH based access, please see the [section in general documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/11e53ad0d2fd4c5187537f4baeedff33). -If the user uses GSI SSH based access, then the procedure is similar to -the SSH based access ([look +If the user uses GSI SSH based access, then the procedure is similar to +the SSH based access ([look here](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)), only the port forwarding must be done using GSI SSH: @@ -176,7 +176,7 @@ only the port forwarding must be done using GSI SSH: ### Access with SSH After successful obtainment of login credentials for the local -IT4Innovations account, the PRACE users can access the cluster as +IT4Innovations account, the PRACE users can access the cluster as regular users using SSH. For more information please see the [section in general documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/5d3d6f3d873a42e584cbf4365c4e251b).
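A minimal sketch of the GSI-SSH login flow referenced in the prace.md changes above, assuming the Globus Toolkit clients are in the PATH; the port shown is an example only — use the one published for the Anselm PRACE entry points:

```
# obtain a short-lived proxy certificate from your grid certificate
$ grid-proxy-init
# interactive login and file copy over GSI SSH (example port)
$ gsissh -p 2222 anselm.it4i.cz
$ gsiscp -P 2222 local_file anselm.it4i.cz:~/
```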
@@ -192,7 +192,7 @@ documentation](https://docs.it4i.cz/anselm-cluster-documentation/resolveuid/5d3d Apart from the standard mechanisms, for PRACE users to transfer data to/from Anselm cluster, a GridFTP server running Globus Toolkit GridFTP service is available. The service is available from public Internet as -well as from the internal PRACE network (accessible only from other +well as from the internal PRACE network (accessible only from other PRACE partners). There's one control server and three backend servers for striping and/or @@ -268,8 +268,8 @@ Usage of the cluster -------------------- There are some limitations for PRACE user when using the cluster. By -default PRACE users aren't allowed to access special queues in the PBS -Pro to have high priority or exclusive access to some special equipment +default PRACE users aren't allowed to access special queues in the PBS +Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md b/converted/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md index 8ca4bf3e7e8b75a5d4c9050cfd32512654a03a96..8cd088240eee54b66d4e89032f3de4f200b05ea7 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md @@ -6,7 +6,7 @@ Introduction The goal of this service is to provide the users a GPU accelerated use of OpenGL applications, especially for pre- and post- processing work, -where not only the GPU performance is needed but also fast access to the +where not only the GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM. The service is based on integration of open source tools VirtualGL and @@ -32,9 +32,9 @@ InfiniBand QDR Schematic overview ------------------ - + - + How to use the service ---------------------- @@ -120,7 +120,7 @@ $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901 *If you use Windows and Putty, please refer to port forwarding setup in the documentation:* -[https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc#section-12](accessing-the-cluster/x-window-and-vnc.html#section-12) +[https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc#section-12](accessing-the-cluster/x-window-and-vnc.html#section-12) #### 7. If you don't have Turbo VNC installed on your workstation. {#7-if-you-don-t-have-turbo-vnc-installed-on-your-workstation} @@ -138,7 +138,7 @@ $ vncviewer localhost:5901 *If you use Windows version of TurboVNC Viewer, just run the Viewer and use address **localhost:5901**.* -#### 9. Proceed to the chapter "Access the visualization node." {#9-proceed-to-the-chapter-access-the-visualization-node} +#### 9. Proceed to the chapter "Access the visualization node." {#9-proceed-to-the-chapter-access-the-visualization-node} *Now you should have working TurboVNC session connected to your workstation.* @@ -155,7 +155,7 @@ $ vncserver -kill :1 Access the visualization node ----------------------------- -To access the node use a dedicated PBS Professional scheduler queue +To access the node use a dedicated PBS Professional scheduler queue qviz**.
The queue has following properties: <table> @@ -196,7 +196,7 @@ default/max</th> </tbody> </table> -Currently when accessing the node, each user gets 4 cores of a CPU +Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 16 GB of RAM and 1/4 of the GPU capacity. *If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, whole RAM and whole GPU is @@ -204,7 +204,7 @@ exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.* -To access the visualization node, follow these steps: +To access the visualization node, follow these steps: #### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command. {#1-in-your-vnc-session-open-a-terminal-and-allocate-a-node-using-pbspro-qsub-command} @@ -281,22 +281,22 @@ Tips and Tricks If you want to increase the responsibility of the visualization, please adjust your TurboVNC client settings in this way: - + To have an idea how the settings are affecting the resulting picture quality three levels of "JPEG image quality" are demonstrated: 1. JPEG image quality = 30 - + 2. JPEG image quality = 15 - + 3. JPEG image quality = 10 - + diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png new file mode 100644 index 0000000000000000000000000000000000000000..6a5a1443fa08cd9d3c62bea52bbb48136b2501dc Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md index 06d093ce6f266d317868007e2c45e51e92cfb5a9..4ba64d17e38dbb961b98d3975b9b17165e994f94 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md @@ -22,8 +22,8 @@ The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. [The Fairshare](job-priority.html) at Anselm ensures that individual users may consume approximately equal amount of -resources per week. The resources are accessible via several queues for -queueing the jobs. The queues provide prioritized and exclusive access +resources per week. The resources are accessible via several queues for +queueing the jobs. The queues provide prioritized and exclusive access to the computational resources.
Following queues are available to Anselm users: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md index 9bb39151d678be66ad23bf47b835ac79ca393762..41d61d31a128b689bd5317178335280ff99feef6 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md @@ -118,7 +118,7 @@ Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nodes equipped with Intel Xeon E5-2470 CPU have base frequency 2.3 GHz (see section Compute Nodes for details). Nodes may be selected via the PBS resource attribute -class="highlightedSearchTerm">cpu_freq . +cpu_freq . CPU Type base freq. Nodes cpu_freq attribute -------------------- ------------ ---------------------------- --------------------- @@ -369,11 +369,11 @@ $ pwd In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory. -All nodes within the allocation may be accessed via ssh. Unallocated -nodes are not accessible to user. +All nodes within the allocation may be accessed via ssh. Unallocated +nodes are not accessible to user. -The allocated nodes are accessible via ssh from login nodes. The nodes -may access each other via ssh as well. +The allocated nodes are accessible via ssh from login nodes. The nodes +may access each other via ssh as well. Calculations on allocated nodes may be executed remotely via the MPI, ssh, pdsh or clush. You may find out which nodes belong to the diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png new file mode 100644 index 0000000000000000000000000000000000000000..6078911559aa56effb4b342fa4ffd074cfaed46f Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md index 23c8050fa39ffce846574d24cfcbcf7fcec28807..881fa3149d70c6236022cbf7ecc94859b222808c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md @@ -13,8 +13,8 @@ to constraints set by the queue and resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority.html) section. The -resources are accessible via several queues for queueing the jobs. The -queues provide prioritized and exclusive access to the computational +resources are accessible via several queues for queueing the jobs. The +queues provide prioritized and exclusive access to the computational resources.
Following table provides the queue partitioning overview:   @@ -135,7 +135,7 @@ select the proper nodes during the PSB job submission. - **qprod**, the Production queue****: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All - nodes may be accessed via the qprod queue, except the reserved ones. + nodes may be accessed via the qprod queue, except the reserved ones. >*>178 nodes without accelerator are included.* Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special @@ -144,14 +144,14 @@ select the proper nodes during the PSB job submission. - **qlong**, the Long queue****: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes - without acceleration may be accessed via the qlong queue. Full + without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it.> *The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 * 48 h).* - **qnvidia, qmic, qfat**, the Dedicated queues****: The queue qnvidia - is dedicated to access the Nvidia accelerated nodes, the qmic to - access MIC nodes and qfat the Fat nodes. It is required that active + is dedicated to access the Nvidia accelerated nodes, the qmic to + access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with> @@ -168,12 +168,12 @@ select the proper nodes during the PSB job submission. after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to - the Project. Only 178 nodes without accelerator may be accessed from + the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours. -### Notes ** +### Notes The job wall clock time defaults to **half the maximum time**, see table @@ -193,7 +193,7 @@ Anselm users may check current queue configuration at Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/> - + Display the queue status on Anselm: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md index c34ea3094c159950bec97fc5c0b6a136f192e483..e1c2bba19ffee599b72d221151761dc1877428e1 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md @@ -14,9 +14,9 @@ are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products).
[ id="result_box" -class="short_text"> class="hps">More class="hps">about +class="short_text"> More about licensing -class="hps">here](ansys/licensing.html) +here](ansys/licensing.html) To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md index 515285d8ed20b3b74b75c3b1e1a570e231573efd..6e40eaa7d90ea7ac9fc413ae080d1a6b27708de9 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md @@ -79,9 +79,9 @@ License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial.** -[ id="result_box" class="short_text"> class="hps">More - class="hps">about licensing -class="hps">here](licensing.html) +[ id="result_box" More + about licensing +here](licensing.html)  diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md index 1db2e6eb0be2a514fa91a85848c6b7b44d2b5581..e725ae6a0825b0163d7f45a36845fb7378ac9a23 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md @@ -92,13 +92,13 @@ This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of -class="emphasis">*job_ID.hostname*. This job ID can then be used +*job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o> -class="emphasis">*job_ID*.     +*job_ID*.     3. Running Fluent via user's config file ---------------------------------------- diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md index 48cba5f35b8c3c80e04cefb9d3f7b4c6e567f150..0369b68dc705ed8169c19fc7c541c98f7a12bf37 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md @@ -3,7 +3,7 @@ ANSYS LS-DYNA [ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA) -software provides convenient and easy-to-use access to the +software provides convenient and easy-to-use to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. 
Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md index 1c0aab2a5b77a5fbde4338214c73b5b4ccd8cd4a..37287cff5f75425ad7389c50c9a76ed0b9ba6cd5 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md @@ -75,9 +75,9 @@ License** should be selected by parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN)> -[ id="result_box" class="short_text"> class="hps">More - class="hps">about licensing -class="hps">here](licensing.html) +[ id="result_box" More + about licensing +here](licensing.html) diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md index 7cc4223a88c2ed9e9a5afdb08a43a04aa3334580..ade13a15fb0009d539bbcbd947aec8d052723a2c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md @@ -14,7 +14,7 @@ License ------- Molpro software package is available only to users that have a valid -license. Please contact support to enable access to Molpro if you have a +license. Please contact support to enable to Molpro if you have a valid license appropriate for running on our cluster (eg. >academic research group licence, parallel execution). diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md index 5a862a79ce8c7d925a7f6b5ec532165ceb01cab1..dc8f123ef3d895c64576e841769332f48993fbcc 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md @@ -32,7 +32,7 @@ GNU C/C++ and Fortran Compilers For compatibility reasons there are still available the original (old 4.4.6-4) versions of GNU compilers as part of the OS. These are -accessible in the search path by default. +ible in the search path by default. It is strongly recommended to use the up to date version (4.8.1) which comes with the module gcc: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md index 297236be2542569b2cdc2415872e3895d6d0b3a6..8019d80332d82bdb4aee198eaa920f2794c9845b 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md @@ -9,7 +9,7 @@ COMSOL Multiphysics® ------------------------- ->>[COMSOL](http://www.comsol.com)<span><span> +>>[COMSOL](http://www.comsol.com)><span> is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many @@ -37,9 +37,9 @@ applications. 
others](http://www.comsol.com/products) >>COMSOL also allows an ->><span><span>interface support for +>>><span>interface support for equation-based modelling of -</span></span>>>partial differential +>>partial differential equations. >>Execution @@ -49,21 +49,21 @@ equations. >>On the Anselm cluster COMSOL is available in the latest stable version. There are two variants of the release: -- >>**Non commercial**<span><span> or so +- >>**Non commercial**><span> or so called >>**EDU variant**>>, which can be used for research and educational purposes. -- >>**Commercial**<span><span> or so called - >>**COM variant**</span></span><span><span>, +- >>**Commercial**><span> or so called + >>**COM variant**><span>, which can used also for commercial activities. - >>**COM variant**</span></span><span><span> + >>**COM variant**><span> has only subset of features compared to the >>**EDU - variant**>> available. <span - id="result_box" class="short_text"> - class="hps">More class="hps">about - licensing will be posted class="hps">here + variant**>> available. + id="result_box" + More about + licensing will be posted here soon.</span> @@ -73,7 +73,7 @@ stable version. There are two variants of the release: $ module load comsol ``` ->>By default the <span><span>**EDU +>>By default the ><span>**EDU variant**>> will be loaded. If user needs other version or variant, load the particular version. To obtain the list of available versions use @@ -136,19 +136,19 @@ LiveLink™* *for MATLAB^®^ >>COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the -COMSOL>>^<span><span><span><span><span><span><span>**®**</span></span></span></span></span></span></span>^</span></span><span><span> +COMSOL>>^><span><span><span><span>**®**</span></span></span></span></span>^ API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB. >>LiveLink for MATLAB is available in both ->>**EDU**</span></span><span><span> and ->>**COM**</span></span><span><span> ->>**variant**</span></span><span><span> of the +>>**EDU**><span> and +>>**COM**><span> +>>**variant**><span> of the COMSOL release. On Anselm 1 commercial -(>>**COM**</span></span><span><span>) license +(>>**COM**><span>) license and the 5 educational -(>>**EDU**</span></span><span><span>) licenses +(>>**EDU**><span>) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../isv_licenses.html)) are available. 
Following example shows how to start COMSOL model from MATLAB via diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20160708v12.33.35.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20160708v12.33.35.png new file mode 100644 index 0000000000000000000000000000000000000000..d8ea15508f0714eeacfadff6d85fe8cafe5c406b Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20160708v12.33.35.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md index d909877d1cfb77e240d7a9c9122998748fafadb2..6397524fa0e33cef52a94842136be5f410bf676a 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md @@ -77,7 +77,7 @@ forwarding enabled. This could mean using the -X in the ssh:  $ ssh -X username@anselm.it4i.cz -Other options is to access login node using VNC. Please see the detailed +Other options is to login node using VNC. Please see the detailed information on how to [use graphic user interface on Anselm](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) . diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/cube.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/cube.md index 1b8957ed751873683a506f73dc51ece8beffa64b..d84e0da10ea5efee6fa7c4c797ff04a863b79841 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/cube.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/cube.md @@ -21,8 +21,8 @@ dimension is organized by files and routines in your source code etc. - + + diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md index cdd497466fdc33757e5cec1be21a5aef06077ece..7ad500b8770fee3c14343c8c50b7941468ed9872 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md @@ -85,7 +85,7 @@ Usage: > pcm-power.x <delay> | ### pcm This command provides an overview of performance counters and memory -usage. >Usage: > <span pcm.x +usage. >Usage: > pcm.x <delay> | <external program> Sample output : @@ -207,7 +207,7 @@ installed on Anselm. API --- -In a similar fashion to PAPI, PCM provides a C++ API to access the +In a similar fashion to PAPI, PCM provides a C++ API to the performance counter from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API. 
diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md index 26ab8794f660f011a54079b13b6aca3e7028f383..b4569b2834638dce7a55e929789c2ecee656aed9 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md @@ -19,7 +19,7 @@ highlight of the features: bandwidth - Power usage analysis - frequency and sleep states. - + diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md index 385f2c266ce5cb877d6da2e5d35ffa9066617518..580197aeba698265dc808e83147290662457b5b9 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md @@ -9,7 +9,7 @@ Introduction ------------ dir="auto">Performance Application Programming -Interface >(PAPI)  is a portable interface to access +Interface >(PAPI)  is a portable interface to access hardware performance counters (such as instruction counts and cache misses) found in most modern architectures. With the new component framework, PAPI is not limited only to CPU counters, but offers also diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md index 1474446d0f068975d7144bd1ac25d6dc9abbfad7..e7bb3eb1d6b0352b49d93ab9d906a72d3f091f7e 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md @@ -65,7 +65,7 @@ using the -X in the ssh: ssh -X username@anselm.it4i.cz -Other options is to access login node using VNC. Please see the detailed +Other options is to access login node using VNC. Please see the detailed information on how to use graphic user interface on Anselm [here](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33#VNC). diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md index d76f3cd3e72ce9fb37ace70b8fec131984150e4e..77fcfe6acfa31ef2d2ff2bd7c33c9bcb5a9c06ca 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md @@ -20,8 +20,8 @@ Valgrind run 5-100 times slower. The main tools available in Valgrind are : - **Memcheck**, the original, must used and default tool. Verifies - memory access in you program and can detect use of unitialized - memory, out of bounds memory access, memory leaks, double free, etc. + memory access in you program and can detect use of unitialized + memory, out of bounds memory access, memory leaks, double free, etc. - **Massif**, a heap profiler. - **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications. @@ -116,7 +116,7 @@ description of command line options.
==12652== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6) In the output we can see that Valgrind has detected both errors - the -off-by-one memory access at line 5 and a memory leak of 40 bytes. If we +off-by-one memory access at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with --leak-check=full option : diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md index d6d3243302b19239d16c2e1fa5ce1152a906c732..f073038c9a837b1a9bfbaa2edc258fbcc0ff280c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md @@ -8,7 +8,7 @@ functionality to collect traces, you need to use a trace collection tool as [Score-P](../../../salomon/software/debuggers/score-p.html)) first to collect the traces. - + ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Installed versions diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vtune-amplifier.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vtune-amplifier.png new file mode 100644 index 0000000000000000000000000000000000000000..75ee99d84b87649151f22edad65de021ec348f1c Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vtune-amplifier.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md index 3b220c7c046373441fb15fcd655e1df4f1e77966..1b6fc1c222bbc0bbda21bee492b52f8fa29116c3 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md @@ -23,7 +23,7 @@ The debugger may run in text mode. To debug in text mode, use $ idbc To debug on the compute nodes, module intel must be loaded. -The GUI on compute nodes may be accessed using the same way as in [the +The GUI on compute nodes may be accessed using the same way as in [the GUI section](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) @@ -40,7 +40,7 @@ Example: In this example, we allocate 1 full compute node, compile program myprog.c with debugging options -O0 -g and run the idb debugger -interactively on the myprog.x executable. The GUI access is via X11 port +interactively on the myprog.x executable. The GUI access is via X11 port forwarding provided by the PBS workload manager. Debugging parallel applications diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md index c599d5d2969e5a5f57f8da4f1e47023dd48b5f43..2a84fe485e65fd935efe94267cb24a80346b1ddd 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md @@ -14,20 +14,20 @@ subroutines, extensively threaded and optimized for maximum performance.
Intel MKL provides these basic math kernels: -- <div id="d4841e18"> +- BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations. -- <div id="d4841e21"> +- The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations. -- <div id="d4841e24"> +- @@ -35,7 +35,7 @@ Intel MKL provides these basic math kernels: Linux* and Windows* operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS). -- <div id="d4841e27"> +- @@ -43,13 +43,13 @@ Intel MKL provides these basic math kernels: dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions. -- <div id="d4841e30"> +- Vector Math Library (VML) routines for optimized mathematical operations on vectors. -- <div id="d4841e34"> +- @@ -57,7 +57,7 @@ Intel MKL provides these basic math kernels: high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions. -- <div id="d4841e37"> +- diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md index f82effb17a7d696edea35e67f18badf413d5123f..21ac1ca8dc07ee5e3db8decedc6d389cb4b8d1e8 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md @@ -13,7 +13,7 @@ supported. Intel Utilities for Xeon Phi ---------------------------- -To get access to a compute node with Intel Xeon Phi accelerator, use the +To get to a compute node with Intel Xeon Phi accelerator, use the PBS interactive session $ qsub -I -q qmic -A NONE-0-0 @@ -404,7 +404,7 @@ Phi: ### Execution of the Program in Native Mode on Intel Xeon Phi -The user access to the Intel Xeon Phi is through the SSH. Since user +The user to the Intel Xeon Phi is through the SSH. Since user home directories are mounted using NFS on the accelerator, users do not have to copy binary files or libraries between the host and accelerator.  @@ -431,7 +431,7 @@ was used to compile the code on the host computer. For your information the list of libraries and their location required for execution of an OpenMP parallel code on Intel Xeon Phi is: -class="discreet visualHighlight"> + >/apps/intel/composer_xe_2013.5.192/compiler/lib/mic @@ -662,7 +662,7 @@ Similarly to execution of OpenMP programs in native mode, since the environmental module are not supported on MIC, user has to setup paths to Intel MPI libraries and binaries manually. One time setup can be done by creating a "**.profile**" file in user's home directory. This file -sets up the environment on the MIC automatically once user access to the +sets up the environment on the MIC automatically once user to the accelerator through the SSH. $ vim ~/.profile @@ -684,7 +684,7 @@ libraries. library and particular version of an Intel compiler. These versions have to match with loaded modules. 
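As a hedged sketch of such a `~/.profile` on the MIC card: the compiler library path is the one quoted above, while MKL and Intel MPI paths (mentioned in the text but not listed here) would have to be added to match your loaded modules.

```
# illustrative ~/.profile sourced on the MIC after ssh login
export LD_LIBRARY_PATH=/apps/intel/composer_xe_2013.5.192/compiler/lib/mic:$LD_LIBRARY_PATH
```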
-To access a MIC accelerator located on a node that user is currently +To a MIC accelerator located on a node that user is currently connected to, use: $ ssh mic0 @@ -756,7 +756,7 @@ execute: Execution on host - MPI processes distributed over multiple accelerators on multiple nodes** ->To get access to multiple nodes with MIC accelerator, user has to +>To get to multiple nodes with MIC accelerator, user has to use PBS to allocate the resources. To start interactive session, that allocates 2 compute nodes = 2 MIC accelerators run qsub command with following parameters: @@ -777,7 +777,7 @@ immediately. To see the other nodes that have been allocated use: cn205.bullx >This output means that the PBS allocated nodes cn204 and cn205, -which means that user has direct access to "**cn204-mic0**" and +which means that user has direct to "**cn204-mic0**" and "**cn-205-mic0**" accelerators. >Please note: At this point user can connect to any of the @@ -953,7 +953,7 @@ Optimization For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") -. +  diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md index e3bc7b71e23e9984e6db0d57308e74e2562ae110..667b7c11659c80f838ec9df43f62260933c6af1f 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md @@ -24,7 +24,7 @@ non-commercial version. Overview of the licenses usage ------------------------------ -The overview is generated every minute and is accessible from web or +The overview is generated every minute and is ible from web or command line interface. ### Web interface @@ -40,7 +40,7 @@ number of free license features For each license there is a unique text file, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features. The text files are -accessible from the Anselm command prompt. +ible from the Anselm command prompt. Product File with license state Note ------------ ----------------------------------------------------- --------------------- @@ -80,7 +80,7 @@ Each feature of each license is accounted and checked by the scheduler of PBS Pro. If you ask for certain licences, the scheduler won't start the job until the asked licenses are free (available). This prevents to crash batch jobs, just because of id="result_box" -class="short_text"> class="hps">unavailability of the +class="short_text"> unavailability of the needed licenses. The general format of the name is: diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/kvirtualization.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/kvirtualization.md index 0eec6016681350d406141600d390b1945e7246ef..38885466d057805df8e46874ca59ff0f65bfd075 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/kvirtualization.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/kvirtualization.md @@ -18,7 +18,7 @@ needs. 
tools - Application requires specific setup (installation, configuration) of complex software stack -- Application requires privileged access to operating system +- Application requires privileged to operating system - ... and combinations of above cases  We offer solution for these cases - **virtualization**. Anselm's @@ -51,7 +51,7 @@ efficient solution. Solution described in chapter [HOWTO](virtualization.html#howto) -class="anchor-link"> is suitable for single node tasks, does not + is suitable for single node tasks, does not introduce virtual machine clustering. Please consider virtualization as last resort solution for your needs. @@ -68,7 +68,7 @@ Licensing IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are ( id="result_box"> -class="hps">in accordance with [Acceptable use policy +in accordance with [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in @@ -84,8 +84,8 @@ running in their virtual machines. We propose this job workflow: -  + + Our recommended solution is that job script creates distinct shared job directory, which makes a central point for data exchange between @@ -143,16 +143,16 @@ information see [Virtio Linux](http://www.linux-kvm.org/page/Virtio) and [Virtio Windows](http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers). -Disable all id="result_box" class="short_text"> -class="hps trans-target-highlight">unnecessary services +Disable all id="result_box" +unnecessary services and tasks. Restrict all unnecessary operating system operations. -Remove all id="result_box" class="short_text"> -class="hps trans-target-highlight">unnecessary software and +Remove all id="result_box" +unnecessary software and files. - id="result_box" class="short_text"> -class="hps trans-target-highlight">Remove all paging + id="result_box" +Remove all paging space, swap files, partitions, etc. Shrink your image. (It is recommended to zero all free space and @@ -224,7 +224,7 @@ for 5 minutes, then shutdown virtual machine. Create job script according recommended id="result_box" class="short_text"> -class="hps trans-target-highlight">[Virtual Machine Job +[Virtual Machine Job Workflow](virtualization.html#virtual-machine-job-workflow). Example job for Windows virtual machine: @@ -299,9 +299,9 @@ Run virtual machine (simple) $ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -vnc :0 -You can access virtual machine by VNC viewer (option -vnc) connecting to +You can virtual machine by VNC viewer (option -vnc) connecting to IP address of compute node. For VNC you must use [VPN -network](../../accessing-the-cluster/vpn-access.html). +network](../../ing-the-cluster/vpn-access.html). Install virtual machine from iso file @@ -316,10 +316,10 @@ sharing and port forwarding, in snapshot mode $ qemu-system-x86_64 -drive file=win.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389 -vnc :0 -snapshot -Thanks to port forwarding you can access virtual machine via SSH (Linux) +Thanks to port forwarding you can virtual machine via SSH (Linux) or RDP (Windows) connecting to IP address of compute node (and port 2222 for SSH). 
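A usage sketch of the forwarded ports described above; the user name and compute node name are placeholders, and `xfreerdp` is just one possible RDP client:

```
$ ssh -p 2222 user@cn204        # Linux guest over the forwarded SSH port
$ xfreerdp /v:cn204:3389        # Windows guest over the forwarded RDP port
```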
You must use [VPN -network](../../accessing-the-cluster/vpn-access.html). +network](../../ing-the-cluster/vpn-access.html). Keep in mind, that if you use virtio devices, you must have virtio drivers installed on your virtual machine. @@ -334,7 +334,7 @@ SMB sharing, port forwarding. In default configuration IP network 10.0.2.0/24 is used, host has IP address 10.0.2.2, DNS server 10.0.2.3, SMB server 10.0.2.4 and virtual machines obtain address from range 10.0.2.15-10.0.2.31. Virtual machines -have access to Anselm's network via NAT on compute node (host). +have to Anselm's network via NAT on compute node (host). Simple network setup @@ -353,9 +353,9 @@ Optimized network setup with sharing and port forwarding ### Advanced networking -Internet access** +Internet ** -Sometime your virtual machine needs access to internet (install +Sometime your virtual machine needs to internet (install software, updates, software activation, etc). We suggest solution using Virtual Distributed Ethernet (VDE) enabled QEMU with SLIRP running on login node tunnelled to compute node. Be aware, this setup has very low diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md index 826a565810cd9d30536ee4ceed8dbe8b1346fe95..c74b65cb3a84d8305ff638138371cdfbb4896f3c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md @@ -150,7 +150,7 @@ scratch /lscratch filesystem. ### Ways to run MPI programs Optimal way to run an MPI program depends on its memory requirements, -memory access pattern and communication pattern. +memory pattern and communication pattern. Consider these ways to run an MPI program: 1. One MPI process per node, 16 threads per process @@ -161,13 +161,13 @@ One MPI** process per node, using 16 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound. This is also a preferred way for communication intensive applications as one process per node enjoys full -bandwidth access to the network interface. +bandwidth to the network interface. Two MPI** processes per node, using 8 threads each, bound to processor socket is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two -processes will share access to the network interface. The 8 threads and -socket binding should ensure maximum memory access bandwidth and +processes will share to the network interface. The 8 threads and +socket binding should ensure maximum memory bandwidth and minimize communication, migration and numa effect overheads. Important! Bind every OpenMP thread to a core! diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md index b14053ba6089d4b072795d088f3c969dc0b87d26..e228281a02b74b370943ba2598b3632fc6130d17 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md @@ -10,7 +10,7 @@ MPICH2 program execution The MPICH2 programs use mpd daemon or ssh connection to spawn processes, no PBS support is needed. However the PBS allocation is required to -access compute nodes. 
On Anselm, the **Intel MPI** and **mpich2 1.9** + compute nodes. On Anselm, the **Intel MPI** and **mpich2 1.9** are MPICH2 based MPI implementations. ### Basic usage diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md index d7b9b61c25b1fe1df6388249cd5a95ce99135228..05905cd8be261ffabd8b40d69d2a8f102aaa6fd5 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md @@ -36,12 +36,12 @@ Matlab on the compute nodes via PBS Pro scheduler. If you require the Matlab GUI, please follow the general informations about [running graphical -applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +applications](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part -[here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)) +[here](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)) is recommended. To run Matlab with GUI, use @@ -97,7 +97,7 @@ Anselm. Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this -page](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +page](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). $ xhost + $ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A NONE-0-0 -q qexp -l select=1 -l walltime=00:30:00 @@ -109,7 +109,7 @@ The second part of the command shows how to request all necessary licenses. In this case 1 Matlab-EDU license and 48 Distributed Computing Engines licenses. -Once the access to compute nodes is granted by PBS, user can load +Once the to compute nodes is granted by PBS, user can load following modules and start Matlab: r1i0n17$ module load MATLAB/2015b-EDU diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md index 00799872748ab1d2803ab4e7e9a9b45f0fb4dffd..ed43615d1e0f697ca6182e31f54b4a3474d6b695 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md @@ -71,7 +71,7 @@ non-interactive PBS sessions. This mode guarantees that the data processing is not performed on login nodes, but all processing is on compute nodes. -  + For the performance reasons Matlab should use system MPI. On Anselm the supported MPI implementation for Matlab is Intel MPI. To switch to @@ -124,7 +124,7 @@ The second part of the command shows how to request all necessary licenses. 
In this case 1 Matlab-EDU license and 32 Distributed Computing Engines licenses. -Once the access to compute nodes is granted by PBS, user can load +Once the to compute nodes is granted by PBS, user can load following modules and start Matlab: cn79$ module load matlab/R2013a-EDU diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md index bde86cc5d5a081443efba4b2c6f50969b6a3d0d4..c507ac4f273457dc647052c568c23a74a4b341fc 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md @@ -34,12 +34,12 @@ Modules ------- **The R version 3.0.1 is available on Anselm, along with GUI interface -Rstudio +Rstudio** Application Version module ------------- -------------- --------- R** R 3.0.1 R -Rstudio** Rstudio 0.97 Rstudio +Rstudio**** Rstudio 0.97 Rstudio $ module load R @@ -55,10 +55,10 @@ OMP_NUM_THREADS environment variable. ### Interactive execution -To run R interactively, using Rstudio GUI, log in with ssh -X parameter +To run R interactively, using Rstudio** GUI, log in with ssh -X parameter for X11 forwarding. Run rstudio: - $ module load Rstudio + $ module load Rstudio** $ rstudio ### Batch execution @@ -390,7 +390,7 @@ mpi.apply Rmpi example: The above is the mpi.apply MPI example for calculating the number Ď€. Only the slave processes carry out the calculation. Note the mpi.parSapply(), ** function call. The package -class="anchor-link">parallel +parallel [example](r.html#package-parallel)[above](r.html#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply(). diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/hdf5.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/hdf5.md index 45983dad867fed66af00b70b3a62f17ea9230762..13b8f36700085de894f1ea2b551d9f84c83a8c48 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/hdf5.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/hdf5.md @@ -154,10 +154,10 @@ Example status = H5Dread(dataset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, dset_data); - /* End access to the dataset and release resources used by it. */ + /* End to the dataset and release resources used by it. */ status = H5Dclose(dataset_id); - /* Terminate access to the data space. */ + /* Terminate to the data space. */ status = H5Sclose(dataspace_id); /* Close the file. 
*/ @@ -189,4 +189,4 @@ class="smarterwiki-popup-bubble-tip"> class="smarterwiki-popup-bubble-body"> class="smarterwiki-popup-bubble-links-container"> class="smarterwiki-popup-bubble-links"> -class="smarterwiki-popup-bubble-links-row">[{.smarterwiki-popup-bubble-link-favicon}](http://maps.google.com/maps?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search Google Maps"){.smarterwiki-popup-bubble-link}[{.smarterwiki-popup-bubble-link-favicon}](http://www.google.com/search?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search Google"){.smarterwiki-popup-bubble-link}[](http://www.google.com/search?hl=com&btnI=I'm+Feeling+Lucky&q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB+wikipedia "Search Wikipedia"){.smarterwiki-popup-bubble-link}</span></span></span> +btnI=I'm+Feeling+Lucky&btnI=I'm+Feeling+Lucky&q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB+wikipedia "Search Wikipedia"){.smarterwiki-popup-bubble-link}</span> diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/trilinos.md index 63d2afd82a00f0a778215f139cca41a9d35feae7..81b90738dde4c7bcaade4303d5d719d10b80271c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/trilinos.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/trilinos.md @@ -66,7 +66,7 @@ or include Makefile.export.<package> if you are interested only in a specific Trilinos package. This will -give you access to the variables such as Trilinos_CXX_COMPILER, +give you to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>. 
diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md index b185b86cd6bae5f7a315f011b3c262e86b8402c9..38f69e3838897254a1effc0d7d50752f7d956e30 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md @@ -41,7 +41,7 @@ the example used is deviceQuery) and run "make" to start the compilation $ cd ~/cuda-samples/1_Utilities/deviceQuery $ make -To run the code user can use PBS interactive session to get access to a +To run the code user can use PBS interactive session to get to a node from qnvidia queue (note: use your project name with parameter -A in the qsub command) and execute the binary file @@ -185,7 +185,7 @@ This code can be compiled using following command $ nvcc test.cu -o test_cuda -To run the code use interactive PBS session to get access to one of the +To run the code use interactive PBS session to get to one of the GPU accelerated nodes $ qsub -I -q qnvidia -A OPEN-0-0 diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md index c5fd0a79c8db47644b28dcffff812507a62ea979..994fc86aa850c6722b7e1c369cdd1c8c7dc2c380 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md @@ -11,8 +11,8 @@ Diagnostic component (TEAM) TEAM is available at the following address : <http://omics.it4i.cz/team/> -The address is accessible only via -[VPN. ](../../accessing-the-cluster/vpn-access.html) +The address is ible only via +[VPN. ](../../ing-the-cluster/vpn-access.html) ### Diagnostic component (TEAM) @@ -33,11 +33,11 @@ diagnostic is produced. TEAM also provides with an interface for the definition of and customization of panels, by means of which, genes and mutations can be added or discarded to adjust panel definitions. - + + + + + @@ -46,7 +46,7 @@ increases.](images/fig5.png.1 "fig5.png") *Figure 5. ****Interface of the application. Panels for defining targeted regions of interest can be set up by just drag and drop known disease genes or disease definitions from the lists. 
Thus, virtual -panels can be interactively improved as the knowledge of the disease +panels can be increases.* diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig1.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig1.png new file mode 100644 index 0000000000000000000000000000000000000000..0b5670a4e570c385eccc5d83e8cefc8c93e38e03 Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig1.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig2.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig2.png new file mode 100644 index 0000000000000000000000000000000000000000..f5bc24d65e435dbd869f873a3c88c6926fe5b466 Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig2.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig3.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig3.png new file mode 100644 index 0000000000000000000000000000000000000000..911f443a5a175ca36073dd17944589a37c7dec6a Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig3.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig4.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig4.png new file mode 100644 index 0000000000000000000000000000000000000000..8aa39d6aa924e3a567a135334e7305ccd14ce05d Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig4.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig5.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig5.png new file mode 100644 index 0000000000000000000000000000000000000000..4e87c6f45b1e69d053663a539ab67176d166b094 Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig5.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig6.png b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig6.png new file mode 100644 index 0000000000000000000000000000000000000000..43987a78a007e9489ad5e103db8c80a6749ec259 Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig6.png differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md index 3b08c53001785bc601285d0cb0574fe7c12d1612..a7b30eed8da0041c681d9066c831eb19eb3bd96b 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md @@ -20,15 +20,15 @@ mapping and variant calling steps that result in a file containing the set of variants in the sample. From this point, the prioritization component or the diagnostic component can be launched. - + + + + + + + + + *Figure 1.*** *OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). @@ -37,7 +37,7 @@ annotations for each sequenced patient. 
These lists files together with primary and secondary (alignment) data files are stored in IT4I sequence DB and uploaded to the discovery (candidate prioritization) or diagnostic component where they can be analyzed directly by the user -that produced them, depending of the experimental design carried +that produced out*. style="text-align: left; "> Typical genomics pipelines are composed by several components that need @@ -94,7 +94,7 @@ FASTQ format >It represents the nucleotide sequence and its corresponding quality scores. - + *Figure 2.****** FASTQ file.*** @@ -220,21 +220,21 @@ soft clipping, â€H’ for hard clipping and â€P’ for padding. These support splicing, clipping, multi-part and padded alignments. Figure 3 shows examples of CIGAR strings for different types of alignments. - + + + + + + + + + + + + + + + * Figure 3.*** *SAM format file. The â€@SQ’ line in the header section gives the order of reference sequences. Notably, r001 is the name of a @@ -328,26 +328,26 @@ delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section. - + + + + + + + + + + + + + + + + + + + + Figure 4.**> (a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but @@ -363,11 +363,11 @@ SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f ) Alignments -and VCF representations of different sequence variants: SNP, insertion, -deletion, replacement, and a large deletion. The REF columns shows the -reference bases replaced by the haplotype in the ALT column. The -coordinate refers to the first reference base. (g) Users are advised to -use simplest representation possible and lowest coordinate in cases +and VCF + + + + where the position is ambiguous. ### >Annotating @@ -610,28 +610,28 @@ should use the next command: speed-up mapping high-quality reads, and *Smith-Waterman*> (SW) to increase sensitivity when reads cannot be mapped using BWT. -- >><span>[hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), <span> a +- >>>[hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), <span> a quality control tool for high throughput - sequence data.</span></span> -- >><span><span>[hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), <span>The + sequence data. +- >>><span>[hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), <span>The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide - the best performance possible.</span></span></span> -- >><span><span>[picard](http://picard.sourceforge.net/), <span>Picard + the best performance possible.</span> +- >>><span>[picard](http://picard.sourceforge.net/), <span>Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. 
Both SAM text format and SAM binary (BAM) - format are supported.</span></span></span> -- >><span><span>[samtools](http://samtools.sourceforge.net/samtools-c.shtml), <span>SAM + format are supported.</span> +- >>><span>[samtools](http://samtools.sourceforge.net/samtools-c.shtml), <span>SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a - per-position format.</span></span></span> -- >><span><span><span>[snpEff](http://snpeff.sourceforge.net/), <span>Genetic + per-position format.</span> +- >>>[snpEff](http://snpeff.sourceforge.net/), <span>Genetic variant annotation and effect - prediction toolbox.</span></span></span></span> + prediction toolbox.</span></span> This listing show which tools are used in each step of the pipeline : @@ -850,7 +850,7 @@ associated to the phenotype: large intestine tumor.*** and Cooper,D.N. (2003) Human gene mutation database (HGMD): 2003 update. Hum. Mutat., 21, 577–581. 21. class="discreet">Johnson,A.D. and O’Donnell,C.J. (2009) An - open access database of genome-wide association results. BMC Med. + open database of genome-wide association results. BMC Med. Genet, 10, 6. 22. class="discreet">McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. John Hopkins University diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/priorization-component-bierapp.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/priorization-component-bierapp.md index 0474925e999d50a5e290e951806fa42b14892098..7235c5914e9be6750e6216a42a03a21e40866a7d 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/priorization-component-bierapp.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/priorization-component-bierapp.md @@ -7,8 +7,8 @@ Priorization component (BiERApp) BiERApp is available at the following address : <http://omics.it4i.cz/bierapp/> -The address is accessible only -via [VPN. ](../../accessing-the-cluster/vpn-access.html) +The address is ible only +via [VPN. ](../../ing-the-cluster/vpn-access.html) ### >BiERApp @@ -31,11 +31,11 @@ the context of a large-scale genomic project carried out by the Spanish Network for Research, in Rare Diseases (CIBERER) and the Medical Genome Project. in which more than 800 exomes have been analyzed. - + + + + + *Figure 6***. *Web interface to the prioritization tool.* *This figure* *shows the interface of the web tool for candidate gene diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md index e458aebc392201979ba1a46b071c3e52e276f379..cc9609e7cbb0266f0454a7126a2a4727bfbcfd90 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md @@ -29,21 +29,21 @@ For example syntax of available OpenFOAM module is: >< openfoam/2.2.1-icc-openmpi1.6.5-DP > this means openfoam version >2.2.1 compiled by ->ICC compiler with >openmpi1.6.5 in<span> double +>ICC compiler with >openmpi1.6.5 in> double precision. 
Naming convection of the installed versions is following: >  -openfoam/<>VERSION>>-<</span><span>COMPILER</span><span>>-<</span><span>openmpiVERSION</span><span>>-<</span><span>PRECISION</span><span>></span> +openfoam/<>VERSION>>-<</span>>COMPILER</span><span>>-<</span><span>openmpiVERSION</span><span>>-<</span><span>PRECISION</span><span>></span> -- ><>VERSION<span>> - version of +- ><>VERSION>> - version of openfoam - ><>COMPILER> - version of used compiler - ><>openmpiVERSION> - version of used openmpi/impi -- ><>PRECISION> - DP/<span>SP – +- ><>PRECISION> - DP/>SP – double/single precision ###Available OpenFOAM modules** @@ -206,7 +206,7 @@ preparation of parallel computation. >>This job create simple block mesh and domain decomposition. Check your decomposition, and submit parallel computation: ->>Create a PBS script<span> +>>Create a PBS script> testParallel.pbs:</span> diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md b/converted/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md index 757e07cfb4f61d0653bf4e3a3735bea8dd309133..32970cd18379a79e2db0c02662394ccb2f107ca9 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md @@ -81,7 +81,7 @@ replace username with your login and cn77 with the name of compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load Anselm connection configuration, t>hen go to Connection-> -class="highlightedSearchTerm">SSH>->Tunnels to set up the +SSH>->Tunnels to set up the port forwarding. Click Remote radio button. Insert 12345 to Source port textbox. Insert cn77:11111. Click Add button, then Open. [Read more about port diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/software/virtualization-job-workflow b/converted/docs.it4i.cz/anselm-cluster-documentation/software/virtualization-job-workflow new file mode 100644 index 0000000000000000000000000000000000000000..f5602dd43de3879f6599a84170a7156100c27302 Binary files /dev/null and b/converted/docs.it4i.cz/anselm-cluster-documentation/software/virtualization-job-workflow differ diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md b/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md index a5dfb47922cded184c2b07602830603efd4d70c4..be82b6fac217d54c30a7ea235c88437d4229ac0c 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md @@ -23,7 +23,7 @@ Republic. User of data storage CESNET (DU) association can become organizations or an individual person who is either in the current employment relationship (employees) or the current study relationship (students) to -a legal entity (organization) that meets the “Principles for access to +a legal entity (organization) that meets the “Principles for to CESNET Large infrastructure (Access Policy)”. User may only use data storage CESNET for data transfer and storage @@ -37,12 +37,12 @@ The service is documented at please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). -The procedure to obtain the CESNET access is quick and trouble-free. +The procedure to obtain the CESNET is quick and trouble-free. 
(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) -CESNET storage access +CESNET storage --------------------- ### Understanding Cesnet storage @@ -51,7 +51,7 @@ It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. -Once registered for CESNET Storage, you may [access the +Once registered for CESNET Storage, you may [ the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods. @@ -59,7 +59,7 @@ number of ways. We recommend the SSHFS and RSYNC methods. SSHFS: The storage will be mounted like a local hard drive -The SSHFS provides a very convenient way to access the CESNET Storage. +The SSHFS provides a very convenient way to the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable harddrive. Files can be than copied in and out in a usual fashion. @@ -74,7 +74,7 @@ Mount tier1_home **(only 5120M !)**: $ sshfs username@ssh.du1.cesnet.cz:. cesnet/ -For easy future access from Anselm, install your public key +For easy future from Anselm, install your public key $ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys @@ -92,7 +92,7 @@ Once done, please remember to unmount the storage $ fusermount -u cesnet -### Rsync access +### Rsync Rsync provides delta transfer for best performance, can resume interrupted transfers diff --git a/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md b/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md index f8ee9e6a4e95930030fd5abe918afbc1a75e3def..5fa2b975c6d6fbcf34c669cd57bdb54b0a2a57b0 100644 --- a/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md +++ b/converted/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md @@ -8,7 +8,7 @@ Storage There are two main shared file systems on Anselm cluster, the [HOME](../storage.html#home) and [SCRATCH](../storage.html#scratch). All login and compute -nodes may access same data on shared filesystems. Compute nodes are also +nodes may same data on shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems. Archiving @@ -27,7 +27,7 @@ Anselm computer provides two main shared filesystems, the [HOME filesystem](../storage.html#home) and the [SCRATCH filesystem](../storage.html#scratch). Both HOME and SCRATCH filesystems are realized as a parallel Lustre filesystem. Both -shared file systems are accessible via the Infiniband network. Extended +shared file systems are ible via the Infiniband network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users using fine-grained control. @@ -40,15 +40,15 @@ A user file on the Lustre filesystem can be divided into multiple chunks (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. 
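For example, a directory intended for large parallel writes can be checked and given a higher stripe count with the `lfs` commands covered later in this section (the count of 8 and the directory name are illustrative):

```
$ lfs getstripe /scratch/$USER/mydir        # show current striping
$ lfs setstripe -c 8 /scratch/$USER/mydir   # new files in mydir are striped over 8 OSTs
```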
-When a client (a class="glossaryItem">compute -class="glossaryItem">node from your job) needs to create -or access a file, the client queries the metadata server ( -class="glossaryItem">MDS) and the metadata target ( -class="glossaryItem">MDT) for the layout and location of the +When a client (a compute +node from your job) needs to create +or a file, the client queries the metadata server ( +MDS) and the metadata target ( +MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, -the class="glossaryItem">MDS is no longer involved in the +the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. @@ -69,7 +69,7 @@ directories or files to get optimum I/O performance: Anselm Lustre filesystems one can specify -1 to use all OSTs in the filesystem. 3.stripe_offset The index of the - class="glossaryItem">OST where the first stripe is to be + OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended. @@ -81,7 +81,7 @@ significantly impact the I/O performance you experience. Use the lfs getstripe for getting the stripe parameters. Use the lfs setstripe command for setting the stripe parameters to get optimal I/O performance The correct stripe setting depends on your needs and file -access patterns. + patterns. ``` $ lfs getstripe dir|filename @@ -122,7 +122,7 @@ When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set -to 1. While this default setting provides for efficient access of +to 1. While this default setting provides for efficient of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A @@ -134,10 +134,10 @@ of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. -Using a large stripe size can improve performance when accessing very +Using a large stripe size can improve performance when ing very large files -Large stripe size allows each client to have exclusive access to its own +Large stripe size allows each client to have exclusive to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. @@ -152,11 +152,11 @@ servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH. - class="emphasis">Configuration of the storages + Configuration of the storages -- class="emphasis">HOME Lustre object storage - <div class="itemizedlist"> +- HOME Lustre object storage + - One disk array NetApp E5400 - 22 OSTs @@ -166,8 +166,8 @@ storage servers are used for file system SCRATCH. 
-- class="emphasis">SCRATCH Lustre object storage - <div class="itemizedlist"> +- SCRATCH Lustre object storage + - Two disk arrays NetApp E5400 - 10 OSTs @@ -177,8 +177,8 @@ storage servers are used for file system SCRATCH. -- class="emphasis">Lustre metadata storage - <div class="itemizedlist"> +- Lustre metadata storage + - One disk array NetApp E2600 - 12 300GB SAS 15krpm disks @@ -249,7 +249,7 @@ prove as insufficient for particular user, please contact lifted upon request. The Scratch filesystem is intended for temporary scratch data generated -during the calculation as well as for high performance access to input +during the calculation as well as for high performance to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory. @@ -257,7 +257,7 @@ Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files. -Files on the SCRATCH filesystem that are **not accessed for more than 90 +Files on the SCRATCH filesystem that are **not ed for more than 90 days** will be automatically **deleted**. The SCRATCH filesystem is realized as Lustre parallel filesystem and is @@ -370,7 +370,7 @@ number of named user and named group entries. ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard -manner. Below, we create a directory and allow a specific user access. +manner. Below, we create a directory and allow a specific user . ``` [vop999@login1.anselm ~]$ umask 027 @@ -414,18 +414,18 @@ Local Filesystems Every computational node is equipped with 330GB local scratch disk. -Use local scratch in case you need to access large amount of small files +Use local scratch in case you need to large amount of small files during your calculation. -The local scratch disk is mounted as /lscratch and is accessible to +The local scratch disk is mounted as /lscratch and is ible to user at /lscratch/$PBS_JOBID directory. The local scratch filesystem is intended for temporary scratch data -generated during the calculation as well as for high performance access -to input and output files. All I/O intensive jobs that access large +generated during the calculation as well as for high performance +to input and output files. All I/O intensive jobs that large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance -reasons, as frequent access to number of small files may overload the +reasons, as frequent to number of small files may overload the metadata servers (MDS) of the Lustre filesystem. The local scratch directory /lscratch/$PBS_JOBID will be deleted @@ -448,16 +448,16 @@ none Every computational node is equipped with filesystem realized in memory, so called RAM disk. -Use RAM disk in case you need really fast access to your data of limited +Use RAM disk in case you need really fast to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory. -The local RAM disk is mounted as /ramdisk and is accessible to user +The local RAM disk is mounted as /ramdisk and is ible to user at /ramdisk/$PBS_JOBID directory. The local RAM disk filesystem is intended for temporary scratch data -generated during the calculation as well as for high performance access +generated during the calculation as well as for high performance to input and output files. 
Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory. It is not recommended to allocate large amount of diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md index 234b2cc0b0b62de3146151d5116105b8471a6c7a..b8c46a60e5dd0e7d2272c55898245a142bd7f7a2 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md @@ -12,7 +12,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect (gnome-session:23691): WARNING **: Cannot open display: ``` - +  1. Locate and modify Cygwin shortcut that @@ -30,10 +30,10 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect -2. +2. Check Putty settings: Check Putty settings: Enable X11 - forwarding + diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md index e51a82cddb0ea5efdd5f6abca11fdee84745000a..454ee5cd4ee844041a6cb3b7f0bc9faa678c3c13 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md @@ -8,7 +8,7 @@ Graphical User Interface X Window System --------------- -The X Window system is a principal way to get GUI access to the +The X Window system is a principal way to get GUI to the clusters. Read more about configuring [**X Window @@ -27,6 +27,6 @@ to remotely control another class="link-external">[computer](http://en.wikipedia.org/wiki/Computer "Computer"). Read more about configuring -[VNC](../../../salomon/accessing-the-cluster/graphical-user-interface/vnc.html)**. +[VNC](../../../salomon/ing-the-cluster/graphical-user-interface/vnc.html)**. diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md index 4c14334193b53458451663a8848123bdb7117f16..fa6db5525fa7e11d71874ddf9e609677218e8681 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md @@ -40,7 +40,7 @@ Verify: Start vncserver --------------- -To access VNC a local vncserver must be started first and also a tunnel +To VNC a local vncserver must be started first and also a tunnel using SSH port forwarding must be established. [See below](vnc.html#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61. 
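A minimal sketch of starting the server on that display, with geometry and colour depth matching the Xvnc process shown further below:

```
$ vncserver :61 -geometry 1600x900 -depth 16
```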
@@ -86,7 +86,7 @@ Another command: username   10296 0.0 0.0 131772 21076 pts/29  SN  13:01  0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn ``` -To access the VNC server you have to create a tunnel between the login +To the VNC server you have to create a tunnel between the login node using TCP **port 5961** and your machine using a free TCP port (for simplicity the very same, in this case). @@ -163,7 +163,7 @@ Fill the Source port and Destination fields. **Do not forget to click the Add button**. - +tunnel](https://docs.it4i.cz/get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/vnc/putty-tunnel.png/@@images/4c66cd51-c858-473b-98c2-8d901aea7118.png "PuTTY Tunnel")](putty-tunnel.png) Run the VNC client of your choice, select VNC server 127.0.0.1, port 5961 and connect using VNC password. @@ -239,7 +239,7 @@ GUI applications on compute nodes over VNC ------------------------------------------ The very [same methods as described -above](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-and-vnc#gui-applications-on-compute-nodes), +above](https://docs.it4i.cz/get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-and-vnc#gui-applications-on-compute-nodes), may be used to run the GUI applications on compute nodes. However, for maximum performance**, proceed following these steps: diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md index f3bf43d32c515b1f0a046c79adfaba0fb247d291..1f0b4eba84c91acdd76aae9a837293a7d430814b 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md @@ -5,7 +5,7 @@ X Window System -The X Window system is a principal way to get GUI access to the +The X Window system is a principal way to get GUI to the clusters. The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/introduction.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/introduction.md index 5616c4d470b77a39ade11f6fe72ef36e06c243e2..1bbcf5094d1d9c3a05588c92c17c35d62a688b4d 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/introduction.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/introduction.md @@ -1,21 +1,21 @@ Accessing the Clusters ====================== -The IT4Innovations clusters are accessed by SSH protocol via login +The IT4Innovations clusters are ed by SSH protocol via login nodes. Read more on [Accessing the Salomon -Cluste](../salomon/accessing-the-cluster.html)r or +Cluste](../salomon/ing-the-cluster.html)r or [Accessing the Anselm -Cluster](../anselm-cluster-documentation/accessing-the-cluster.html) +Cluster](../anselm-cluster-documentation/ing-the-cluster.html) pages. 
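For example, a plain SSH login from a Linux workstation might look as follows (replace username with your own login; salomon.it4i.cz works the same way):

```
local $ ssh username@anselm.it4i.cz
```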
### PuTTY On **Windows**, use [PuTTY ssh -client](accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html). +client](ing-the-clusters/shell-access-and-data-transfer/putty/putty.html). ### SSH keys Read more about [SSH keys -management](accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html). +management](ing-the-clusters/shell-access-and-data-transfer/ssh-keys.html). diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/introduction.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/introduction.md index 10f87cdf293bb3c41ed569df114c5521caaf76cc..cbfc29c5e95351cdbea0454da6c6620b1b8b8dbd 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/introduction.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/introduction.md @@ -1,13 +1,13 @@ Accessing the Clusters ====================== -The IT4Innovations clusters are accessed by SSH protocol via login +The IT4Innovations clusters are ed by SSH protocol via login nodes. Read more on [Accessing the Salomon -Cluste](../../../salomon/accessing-the-cluster.html)r or +Cluste](../../../salomon/ing-the-cluster.html)r or [Accessing the Anselm -Cluster](../../../anselm-cluster-documentation/accessing-the-cluster.html) +Cluster](../../../anselm-cluster-documentation/ing-the-cluster.html) pages. ### PuTTY diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md index d6a8cf8081544656ca5ed20ed4d41b109a14e534..6e004088c57715819a313e4bae6ff1421185b309 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md @@ -15,6 +15,6 @@ passphrase on every login. - Now you have your private key in memory without needing to retype a passphrase on every login. -  + [](PageantV.png)  diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md index 03e17904173f4d295377ea8d3bc7d644fb185ae7..72e0950e6d2ce3a3f447a6a765605ec7b601fcb1 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md @@ -41,7 +41,7 @@ PuTTY - how to connect to the IT4Innovations cluster - Run PuTTY - Enter Host name and Save session fields with [Login - address](../../../../salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.html) + address](../../../../salomon/ing-the-cluster/shell-and-data-access/shell-and-data-access.html) and browse Connection - > SSH -> Auth menu. 
The *Host Name* input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your @@ -49,7 +49,7 @@ PuTTY - how to connect to the IT4Innovations cluster In this example we will connect to the Salomon cluster using  **"salomon.it4i.cz"**. -  + [](PuTTY_host_Salomon.png)  @@ -59,16 +59,16 @@ PuTTY - how to connect to the IT4Innovations cluster Browse and select your [private key](../ssh-keys.html) file. -  + [](PuTTY_keyV.png) - Return to Session page and Save selected configuration with *Save* button. -  + [](PuTTY_save_Salomon.png) - Now you can log in using *Open* button. -  + [](PuTTY_open_Salomon.png) - Enter your username if the *Host Name* input is not in the format "username@salomon.it4i.cz". diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md index 8bc7590212421c6ee323e47d87b16b02a89c518f..f6e3ea2462bfb3b679905c9de01f1e44b7522b05 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md @@ -21,7 +21,7 @@ Make sure to backup the key. - Confirm key passphrase. - Save your private key with *Save private key* button. -  + [](PuttyKeygeneratorV.png)  @@ -33,15 +33,15 @@ key into authorized_keys file for authentication with your own private - Start with *Generate* button. -  + [](PuttyKeygenerator_001V.png) - Generate some randomness. -  + [](PuttyKeygenerator_002V.png) - Wait. -  + [](20150312_143443.png) - Enter a *comment* for your key using format 'username@organization.example.com'. @@ -50,18 +50,18 @@ key into authorized_keys file for authentication with your own private Save your new private key `in "*.ppk" `format with *Save private key* button. -  + [](PuttyKeygenerator_004V.png) - Save the public key with *Save public key* button. You can copy public key out of the â€Public key for pasting into authorized_keys file’ box. -  + [](PuttyKeygenerator_005V.png) - Export private key in OpenSSH format "id_rsa" using Conversion -> Export OpenSSH key -  + [](PuttyKeygenerator_006V.png) - Now you can insert additional public key into authorized_keys file for authentication with your own private >key. diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md index 5e87ffbd430137384a21b506e0d29108983536b2..1fafc36c858ee4da7f81c5ae69727af74b6eeeda 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md @@ -41,7 +41,7 @@ the cluster.  file): - `644 (-rw-r--r--)`</span></span> + `644 (-rw-r--r--)` - `` @@ -113,7 +113,7 @@ verify a  digital signature. Public key is present on the remote -side and allows access to +side and allows to the owner of the matching private key. 
An example of public key diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md b/converted/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md index bc922be6eeb7c554a5be15ca228eba3beae436f1..8b592cb500b14f5dd8b6cc7d16269929252314c0 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md @@ -12,7 +12,7 @@ mechanisms. Academic researchers can apply for computational resources via [Open Access -Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en). +Competitions](http://www.it4i.cz/open--competition/?lang=en&lang=en). Anyone is welcomed to apply via the [Directors Discretion.](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en) @@ -21,7 +21,7 @@ Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects). -In all cases, IT4Innovations’ access mechanisms are aimed at +In all cases, IT4Innovations’ mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md index e3b321a6442faf5ccdf4a23ac18b054ecd555871..493abbdafcaf9f578bbd72ea0ce1e1bc58206a1f 100644 --- a/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md +++ b/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md @@ -222,7 +222,7 @@ where $mydomain.crt is the certificate of a trusted signing authority More information on the tool can be found at:<http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html> -Q: How do I use my certificate to access the different grid Services? +Q: How do I use my certificate to the different grid Services? --------------------------------------------------------------------- Most grid services require the use of your certificate; however, the @@ -303,7 +303,7 @@ to carry their private keys and certificates when travelling; nor do users have to install private keys and certificates on possibly insecure computers. -Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do? +Q: Someone may have copied or had to the private key of my certificate either in a separate file or in the browser. What should I do? 
--------------------------------------------------------------------------------------------------------------------------------------------
Please ask the CA that issued your certificate to revoke this certifcate
diff --git a/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index 9f3f8d85678030e096fa34f772dc567cb7d33235..82b0df2d5e8e610a4430ac32c9beda74e33d6c59 100644
--- a/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/converted/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -11,14 +11,14 @@ Obtaining Authorization
The computational resources of IT4I  are allocated by the
Allocation Committee to a [Project](../introduction.html), investigated
by a Primary Investigator. By allocating the computational
-resources, the Allocation Committee is authorizing the PI to access and
+resources, the Allocation Committee is authorizing the PI to access and
use the clusters. The PI may decide to authorize a number of her/his
-Collaborators to access and use the clusters, to consume the resources
+Collaborators to access and use the clusters, to consume the resources
allocated to her/his Project. These collaborators will be associated to
the Project. The Figure below is depicting the authorization chain:
-
+
@@ -27,14 +27,14 @@ the Project. The Figure below is depicting the authorization chain:
 
You need to either [become the
PI](../applying-for-resources.html) or [be named as a
collaborator](obtaining-login-credentials.html#authorization-of-collaborator-by-pi)
-by a PI in order to access and use the clusters.
+by a PI in order to access and use the clusters.
Head of Supercomputing Services acts as a PI of a project DD-13-5.
-Joining this project, you may **access and explore the clusters**, use
+Joining this project, you may **access and explore the clusters**, use
software, development environment and computers via the qexp and qfree
queues. You may use these resources for own education/research, no
paperwork is required. All IT4I employees may contact the Head of
-Supercomputing Services in order to obtain **free access to the
+Supercomputing Services in order to obtain **free access to the
clusters**.
### Authorization of PI by Allocation Committee
@@ -45,7 +45,7 @@ the Allocation Committee decision.
### Authorization by web
-This is a preferred way of granting access to project resources.
+This is the preferred way of granting access to project resources.
Please, use this method whenever it's possible.
Log in to the [IT4I Extranet
@@ -69,7 +69,7 @@ following information:
2.Provide list of people, including himself, who are authorized to use
   the resources allocated to the project. The list must include full
   name, e-mail and affiliation. Provide usernames as well, if
- collaborator login access already exists on the IT4I systems.
+ collaborator login access already exists on the IT4I systems.
3.Include "Authorization to IT4Innovations" into the subject line.
Example (except the subject line which must be in English, you may use
@@ -98,7 +98,7 @@ The Login Credentials
-------------------------
Once authorized by PI, every person (PI or Collaborator) wishing to
-access the clusters, should contact the [IT4I
+access the clusters, should contact the [IT4I
support](https://support.it4i.cz/rt/) (E-mail: [support [at]
it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) providing following
information:
@@ -144,7 +144,7 @@ Digital signature allows us to confirm your identity in remote
electronic communication and provides an encrypted channel to exchange
sensitive information such as login credentials. After receiving your
signed e-mail with the requested information, we will send you your
-login credentials (user name, key, passphrase and password) to access
+login credentials (user name, key, passphrase and password) to access
the IT4I systems.
We accept certificates issued by any widely respected certification
@@ -160,8 +160,8 @@ The login credentials include:
2.ssh private key and private key passphrase
3.system password
-The clusters are accessed by the [private
-key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
+The clusters are accessed by the [private
+key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
and username. Username and password is used for login to the information
systems listed on <http://support.it4i.cz/>.
@@ -175,7 +175,7 @@ local $ ssh-keygen -f id_rsa -p
```
On Windows, use [PuTTY Key
-Generator](../accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.html).
+Generator](../accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.html).
### Change Password
diff --git a/converted/docs.it4i.cz/salomon/accessing-the-cluster.md b/converted/docs.it4i.cz/salomon/accessing-the-cluster.md
index a6b63b5c58da6b955765fe557dac221881277e22..ecd7e8a605585f4edd6afd4d249e11e3c6597913 100644
--- a/converted/docs.it4i.cz/salomon/accessing-the-cluster.md
+++ b/converted/docs.it4i.cz/salomon/accessing-the-cluster.md
@@ -1,4 +1,4 @@
-Shell access and data transfer
+Shell access and data transfer
==============================
@@ -8,7 +8,7 @@ Shell access and data transfer
Interactive Login
-----------------
-The Salomon cluster is accessed by SSH protocol via login nodes login1,
+The Salomon cluster is accessed by the SSH protocol via login nodes login1,
login2, login3 and login4 at address salomon.it4i.cz. The login nodes
may be addressed specifically, by prepending the login node name to the
address.
@@ -26,7 +26,7 @@ login3.salomon.it4i.cz 22 ssh login3
login4.salomon.it4i.cz 22 ssh login4
The authentication is by the [private
-key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
+key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
Please verify SSH fingerprints during the first logon. They are
identical on all login nodes:
@@ -51,7 +51,7 @@ local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh
-client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
+client](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.html).
After logging in, you will see the command prompt:
@@ -95,7 +95,7 @@ login2.salomon.it4i.cz 22 scp, sftp
login4.salomon.it4i.cz 22 scp, sftp
 
The authentication is by the [private
-key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
+key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.html)
HTML commented section #2 (ssh transfer performance data need to be
verified)
diff --git a/converted/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md b/converted/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
index 4ba8f15efa9635820cca99b64aed3d522bd74eb4..8d1d51104ff4f40e99475df6a22aeed5722a7f0d 100644
--- a/converted/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
+++ b/converted/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
@@ -44,7 +44,7 @@ local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz
```
In this example, we establish port forwarding between port 6000 on
-Salomon and port 1234 on the remote.host.com. By accessing
+Salomon and port 1234 on remote.host.com. By accessing
localhost:6000 on Salomon, an application will see response of
remote.host.com:1234. The traffic will run via users local workstation.
@@ -55,7 +55,7 @@ Remote radio button. Insert 6000 to Source port textbox. Insert
remote.host.com:1234. Click Add button, then Open.
Port forwarding may be established directly to the remote host. However,
-this requires that user has ssh access to remote.host.com
+this requires that the user has ssh access to remote.host.com
```
$ ssh -L 6000:localhost:1234 remote.host.com
@@ -66,7 +66,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
### Port forwarding from compute nodes
Remote port forwarding from compute nodes allows applications running on
-the compute nodes to access hosts outside Salomon Cluster.
+the compute nodes to access hosts outside the Salomon Cluster.
First, establish the remote port forwarding form the login node, as
[described
@@ -80,7 +80,7 @@ $ ssh -TN -f -L 6000:localhost:6000 login1
```
In this example, we assume that port forwarding from login1:6000 to
-remote.host.com:1234 has been established beforehand. By accessing
+remote.host.com:1234 has been established beforehand. By accessing
localhost:6000, an application running on a compute node will see
response of remote.host.com:1234
@@ -90,7 +90,7 @@ Port forwarding is static, each single port is mapped to a particular
port on remote host. Connection to other remote host, requires new
forward.
-Applications with inbuilt proxy support, experience unlimited access to
+Applications with inbuilt proxy support experience unlimited access to
remote hosts, via single proxy server.
To establish local proxy server on your workstation, install and run
@@ -114,7 +114,7 @@ local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure the applications proxy settings to **localhost:6000**.
-Use port forwarding to access the [proxy server from compute
+Use port forwarding to access the [proxy server from compute
nodes](outgoing-connections.html#port-forwarding-from-compute-nodes)
as well .
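To make the proxy chain above concrete, a sketch of the full sequence follows. The port numbers 1080 and 6000 are examples, as in the text; the SOCKS proxy here is ssh's built-in -D mode, which assumes an SSH server is running on the local workstation and is only one of several possible proxy servers:

```
# on the local workstation: run a SOCKS proxy on port 1080 and expose it
# on the cluster login node as port 6000
local $ ssh -D 1080 localhost
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz

# on a compute node: pull port 6000 over from the login node, then point the
# application's proxy settings to localhost:6000
$ ssh -TN -f -L 6000:localhost:6000 login1
```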
diff --git a/converted/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md b/converted/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md index 7d6b7500f509702efec187a547d3af27498830cd..b12c1d7669b946286da6c4a89f8c494f7e04c749 100644 --- a/converted/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md +++ b/converted/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md @@ -40,7 +40,7 @@ for automatic installation.  -Install](https://docs.it4i.cz/salomon/vpn_web_install_2.png/@@images/c2baba93-824b-418d-b548-a73af8030320.png "VPN Install")](../vpn_web_install_2.png) +  After successful installation, VPN connection will be established and @@ -86,7 +86,7 @@ credentials. After a successful login, the client will minimize to the system tray. If everything works, you can see a lock in the Cisco tray icon. - +[  If you right-click on this icon, you will see a context menu in which diff --git a/converted/docs.it4i.cz/salomon/environment-and-modules.md b/converted/docs.it4i.cz/salomon/environment-and-modules.md index 53ff50906134f85f629407e964cea76a8bf2eeaa..b8323d2f5ec2ccaae0e5b034d6bc2bd09c2be2e7 100644 --- a/converted/docs.it4i.cz/salomon/environment-and-modules.md +++ b/converted/docs.it4i.cz/salomon/environment-and-modules.md @@ -35,7 +35,7 @@ etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as id="result_box" class="short_text"> class="hps alt-edited">stated -class="hps">in the previous example. +in the previous example. ### Application Modules diff --git a/converted/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md b/converted/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md index 14fed67250cfc55391a5e05b3a9fdba0dc2325e0..9f246a4c281a1fb78bbb380f2f77e520a4e1f260 100644 --- a/converted/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md +++ b/converted/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md @@ -15,7 +15,7 @@ with 24 cores (two twelve-core Intel Xeon processors) and 128GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which -is available for the scratch project data. The user access to the +is available for the scratch project data. The user to the Salomon cluster is provided by four login nodes. [More about schematic representation of the Salomon cluster compute @@ -23,7 +23,7 @@ nodes IB topology](../network-1/ib-single-plane-topology.html). 
-[](../salomon-2) + The parameters are summarized in the following tables: @@ -118,6 +118,6 @@ available: </tbody> </table> - + + diff --git a/converted/docs.it4i.cz/salomon/uv-2000.jpeg b/converted/docs.it4i.cz/salomon/hardware-overview-1/uv-2000.jpeg similarity index 100% rename from converted/docs.it4i.cz/salomon/uv-2000.jpeg rename to converted/docs.it4i.cz/salomon/hardware-overview-1/uv-2000.jpeg diff --git a/converted/docs.it4i.cz/salomon/index.md b/converted/docs.it4i.cz/salomon/index.md index 0f83c38f8a48c519c0f8cfef4aa37062f10cce69..90a83ecda0d54f7ec8fad3b9c9bb954ed1b26736 100644 --- a/converted/docs.it4i.cz/salomon/index.md +++ b/converted/docs.it4i.cz/salomon/index.md @@ -21,13 +21,13 @@ family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_T **Water-cooled Compute Nodes With MIC Accelerator** - +  **Tape Library T950B** -![]](salomon-3.jpeg) +  diff --git a/converted/docs.it4i.cz/salomon/introduction.md b/converted/docs.it4i.cz/salomon/introduction.md index 0f83c38f8a48c519c0f8cfef4aa37062f10cce69..90a83ecda0d54f7ec8fad3b9c9bb954ed1b26736 100644 --- a/converted/docs.it4i.cz/salomon/introduction.md +++ b/converted/docs.it4i.cz/salomon/introduction.md @@ -21,13 +21,13 @@ family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_T **Water-cooled Compute Nodes With MIC Accelerator** - +  **Tape Library T950B** -![]](salomon-3.jpeg) +  diff --git a/converted/docs.it4i.cz/salomon/network-1/network.md b/converted/docs.it4i.cz/salomon/network-1/network.md index dc1e8ac5e874112fc46f81bbc545b20b593841de..235fb38c295e350e19bb2032e62ddf4ee770b516 100644 --- a/converted/docs.it4i.cz/salomon/network-1/network.md +++ b/converted/docs.it4i.cz/salomon/network-1/network.md @@ -31,7 +31,7 @@ single-plain topology](ib-single-plane-topology.html) -The compute nodes may be accessed via the Infiniband network using ib0 +The compute nodes may be ed via the Infiniband network using ib0 network interface, in address range 10.17.0.0 (mask 255.255.224.0). The MPI may be used to establish native Infiniband connection among the nodes. @@ -54,7 +54,7 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time r4i1n0/0*24+r4i1n1/0*24+r4i1n2/0*24+r4i1n3/0*24 ``` -In this example, we access the node r4i1n0 by Infiniband network via the +In this example, we the node r4i1n0 by Infiniband network via the ib0 interface. ``` diff --git a/converted/docs.it4i.cz/salomon/prace.md b/converted/docs.it4i.cz/salomon/prace.md index ecbac35e56165c3973a87bb5153e771a05275ac7..d2f8a2ad24d0b508c8830dc55b48cec7bb7ae7e0 100644 --- a/converted/docs.it4i.cz/salomon/prace.md +++ b/converted/docs.it4i.cz/salomon/prace.md @@ -14,12 +14,12 @@ general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will -not have a password and thus access to some services intended for +not have a password and thus to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html), -if the same level of access is required. +if the same level of is required. 
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) @@ -35,7 +35,7 @@ Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction.html). Please keep in mind, that standard PRACE accounts don't have a password -to access the web interface of the local (IT4Innovations) request +to the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz. @@ -59,7 +59,7 @@ Accessing the cluster ### Access with GSI-SSH -For all PRACE users the method for interactive access (login) and data +For all PRACE users the method for interactive (login) and data transfer based on grid services from Globus Toolkit (GSI SSH and GridFTP) is supported. @@ -67,14 +67,14 @@ The user will need a valid certificate and to be present in the PRACE LDAP (please contact your HOME SITE or the primary investigator of your project for LDAP account creation). -Most of the information needed by PRACE users accessing the Salomon +Most of the information needed by PRACE users ing the Salomon TIER-1 system can be found here: - [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) - [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using +- [Interactive using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) - [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) @@ -95,9 +95,9 @@ valid 12 hours), use:  -To access Salomon cluster, two login nodes running GSI SSH service are +To Salomon cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from -the internal PRACE network (accessible only from other PRACE partners). +the internal PRACE network (ible only from other PRACE partners). Access from PRACE network:** @@ -166,13 +166,13 @@ gsiscp can be used: If the user needs to run X11 based graphical application and does not have a X11 server, the applications can be run using VNC service. If the -user is using regular SSH based access, please see the [section in +user is using regular SSH based , please see the [section in general -documentation](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +documentation](../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). -If the user uses GSI SSH based access, then the procedure is similar to -the SSH based access ([look -here](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)), +If the user uses GSI SSH based , then the procedure is similar to +the SSH based ([look +here](../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)), only the port forwarding must be done using GSI SSH: $ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961 @@ -180,10 +180,10 @@ only the port forwarding must be done using GSI SSH: ### Access with SSH After successful obtainment of login credentials for the local -IT4Innovations account, the PRACE users can access the cluster as +IT4Innovations account, the PRACE users can the cluster as regular users using SSH. 
For more information please see the [section in general
-documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
+documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
File transfers
------------------
@@ -191,12 +191,12 @@ File transfers
PRACE users can use the same transfer mechanisms as regular users (if
they've undergone the full registration procedure). For information
about this, please see [the section in the general
-documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
+documentation](accessing-the-cluster/shell-and-data-access/shell-and-data-access.html).
Apart from the standard mechanisms, for PRACE users to transfer data
to/from Salomon cluster, a GridFTP server running Globus Toolkit GridFTP
service is available. The service is available from public Internet as
-well as from the internal PRACE network (accessible only from other
+well as from the internal PRACE network (accessible only from other
PRACE partners).
There's one control server and three backend servers for striping and/or
@@ -280,8 +280,8 @@ Usage of the cluster
--------------------
There are some limitations for PRACE user when using the cluster. By
-default PRACE users aren't allowed to access special queues in the PBS
-Pro to have high priority or exclusive access to some special equipment
+default PRACE users aren't allowed to access special queues in the PBS
+Pro to have high priority or exclusive access to some special equipment
like accelerated nodes and high memory (fat) nodes. There may be also
restrictions obtaining a working license for the commercial software
installed on the cluster, mostly because of the license agreement or
diff --git a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
index 3fb197f639f8b26b1e2bc024b2ca65fc3539a28d..08c8e4151330890c54152951deae9b12711e6689 100644
--- a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
+++ b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
@@ -23,8 +23,8 @@ The resources are allocated to the job in a fairshare fashion, subject
to constraints set by the queue and resources available to the Project.
[The Fairshare](job-priority.html) at Salomon ensures that
individual users may consume approximately equal amount of
-resources per week. The resources are accessible via several queues for
-queueing the jobs. The queues provide prioritized and exclusive access
+resources per week. The resources are accessible via several queues for
+queueing the jobs. The queues provide prioritized and exclusive access
to the computational resources.
Following queues are available to Anselm
users:
@@ -32,7 +32,7 @@ users:
- **qprod**, the Production queue****
- **qlong**, the Long queue
- **qmpp**, the Massively parallel queue
-- **qfat**, the queue to access SMP UV2000 machine
+- **qfat**, the queue to access the SMP UV2000 machine
- **qfree,** the Free resource utilization queue
Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
diff --git a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
index fb8a5a1b01efe259017eff72d1de77a3d853251d..d0af553ee539725b43dfb53622bc7c62f88100b5 100644
--- a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
+++ b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
@@ -90,7 +90,7 @@ approach through attributes "accelerator", "naccelerators" and
"accelerator_model" is used. The "accelerator_model" can be omitted,
since on Salomon only one type of accelerator type/model is available.
-The absence of specialized queue for accessing the nodes with cards
+The absence of a specialized queue for accessing the nodes with cards
means, that the Phi cards can be utilized in any queue, including qexp
for testing/experiments, qlong for longer jobs, qfree after the project
resources have been spent, etc. The Phi cards are thus also available to
@@ -447,11 +447,11 @@ $ pwd
In this example, 4 nodes were allocated interactively for 1 hour via the
qexp queue. The interactive shell is executed in the home directory.
-All nodes within the allocation may be accessed via ssh. Unallocated
-nodes are not accessible to user.
+All nodes within the allocation may be accessed via ssh. Unallocated
+nodes are not accessible to the user.
-The allocated nodes are accessible via ssh from login nodes. The nodes
-may access each other via ssh as well.
+The allocated nodes are accessible via ssh from login nodes. The nodes
+may access each other via ssh as well.
Calculations on allocated nodes may be executed remotely via the MPI,
ssh, pdsh or clush. You may find out which nodes belong to the
diff --git a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
index d75ca58423cd5c23a793da977493b2031f096e00..0281059a85aa10184fedfebff45540e90e8a9feb 100644
--- a/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
+++ b/converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
@@ -13,8 +13,8 @@ to constraints set by the queue and resources available to the Project.
The Fairshare at Anselm ensures that individual users may consume
approximately equal amount of resources per week. Detailed information
in the [Job scheduling](job-priority.html) section. The
-resources are accessible via several queues for queueing the jobs. The
-queues provide prioritized and exclusive access to the computational
+resources are accessible via several queues for queueing the jobs. The
+queues provide prioritized and exclusive access to the computational
resources. Following table provides the queue partitioning overview:
 
@@ -148,14 +148,14 @@ is allowed after request for this queue.
- **qprod**, the Production queue****: This queue is intended for
  normal production runs.
  It is required that active project with nonzero
  remaining resources is specified to enter the qprod. All
-  nodes may be accessed via the qprod queue, however only 86 per job.
+  nodes may be accessed via the qprod queue, however only 86 per job.
  ** Full nodes, 24 cores per node are allocated. The queue runs with
  medium priority and no special authorization is required to use it.
  The maximum runtime in qprod is 48 hours.
- **qlong**, the Long queue****: This queue is intended for long
  production runs. It is required that active project with nonzero
  remaining resources is specified to enter the qlong. Only 336 nodes
-  without acceleration may be accessed via the qlong queue. Full
+  without acceleration may be accessed via the qlong queue. Full
  nodes, 24 cores per node are allocated. The queue runs with medium
  priority and no special authorization is required to use it.>
  *The maximum runtime in qlong is 144 hours (three times of the
@@ -163,7 +163,7 @@ is allowed after request for this queue.
- >****qmpp**, the massively parallel queue. This queue is
  intended for massively parallel runs. It is required that active
  project with nonzero remaining resources is specified to enter
-  the qmpp. All nodes may be accessed via the qmpp queue. ** Full
+  the qmpp. All nodes may be accessed via the qmpp queue. ** Full
  nodes, 24 cores per node are allocated. The queue runs with medium
  priority and no special authorization is required to use it. The
  maximum runtime in qmpp is 4 hours. An PI> *needs explicitly*
@@ -172,7 +172,7 @@ is allowed after request for this queue.
  her/his Project.
- >****qfat**, the UV2000 queue. This queue is dedicated
-  to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112
+  to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112
  Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. An PI> *needs
  explicitly* ask [support](https://support.it4i.cz/rt/) for
@@ -185,13 +185,13 @@ is allowed after request for this queue.
  after exhaustion of computational resources.). It is required that
  active project is specified to enter the queue, however no remaining
  resources are required. Consumed resources will be accounted to
-  the Project. Only 178 nodes without accelerator may be accessed from
+  the Project. Only 178 nodes without accelerator may be accessed from
  this queue. Full nodes, 24 cores per node are allocated. The queue
  runs with very low priority and no special authorization is required
  to use it. The maximum runtime in qfree is 12 hours.
- **qviz**, the Visualization queue****: Intended for
  pre-/post-processing using OpenGL accelerated graphics. Currently
-  when accessing the node, each user gets 4 cores of a CPU allocated,
+  when accessing the node, each user gets 4 cores of a CPU allocated,
  thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default
  "chunk"). *If more GPU power or RAM is required, it is recommended
  to allocate more chunks (with 4 cores each) up to one
@@ -202,7 +202,7 @@ is allowed after request for this queue.
 
-To access node with Xeon Phi co-processor user needs to specify that in
+To access a node with a Xeon Phi co-processor, the user needs to specify that in
[job submission select statement](job-submission-and-execution.html).
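As an illustration of such a select statement, the following interactive request mirrors the accelerator attributes named in the job submission section (the project ID is a placeholder):

```
$ qsub -I -q qprod -A PROJECT_ID -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120
```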
diff --git a/converted/docs.it4i.cz/salomon/salomon b/converted/docs.it4i.cz/salomon/salomon new file mode 100644 index 0000000000000000000000000000000000000000..9365ab931a49eec462a6f2c24d3a86e5eaa7d9d1 Binary files /dev/null and b/converted/docs.it4i.cz/salomon/salomon differ diff --git a/converted/docs.it4i.cz/salomon/salomon-2 b/converted/docs.it4i.cz/salomon/salomon-2 new file mode 100644 index 0000000000000000000000000000000000000000..00283bcbb639d32788f9e1171bda7d43f8e486bc Binary files /dev/null and b/converted/docs.it4i.cz/salomon/salomon-2 differ diff --git a/converted/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md b/converted/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md index dec9fca12ccf802eedb06f590801761b99d2a1f5..8006674affb72aa794003bb4ef4dd0eb5c7fb3d3 100644 --- a/converted/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md +++ b/converted/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md @@ -88,13 +88,13 @@ This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of -class="emphasis">*job_ID.hostname*. This job ID can then be used +*job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o> -class="emphasis">*job_ID*.     +*job_ID*.     3. Running Fluent via user's config file ---------------------------------------- diff --git a/converted/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md b/converted/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md index 7b94cf848b907b642a25b99c2ebcee6c0792348d..9ff80553032d906a5f863d5faae1a4979029bd19 100644 --- a/converted/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md +++ b/converted/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md @@ -3,7 +3,7 @@ ANSYS LS-DYNA [ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA) -software provides convenient and easy-to-use access to the +software provides convenient and easy-to-use to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in diff --git a/converted/docs.it4i.cz/salomon/software/ansys/licensing.md b/converted/docs.it4i.cz/salomon/software/ansys/licensing.md index 4e7cdebe4649ad9b68a996478b8450365caed786..5d09c5aeda8dc396ef1004bc7a3c11763a9b44f8 100644 --- a/converted/docs.it4i.cz/salomon/software/ansys/licensing.md +++ b/converted/docs.it4i.cz/salomon/software/ansys/licensing.md @@ -10,11 +10,11 @@ ANSYS licence can be used by: CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.) 
-- id="result_box" class="short_text"> class="hps">all - persons class="hps">who have a valid - class="hps">license -- id="result_box" class="short_text"> class="hps">students - of class="hps">the Technical University</span> +- id="result_box" all + persons who have a valid + license +- id="result_box" students + of the Technical University</span> ANSYS Academic Research ----------------------- diff --git a/converted/docs.it4i.cz/salomon/software/ansys/setting-license-preferences.md b/converted/docs.it4i.cz/salomon/software/ansys/setting-license-preferences.md index 5338e55ced9d7217de8952066c724173bfa7ecc9..b3c44f8f74a9bd9a8c684dafe078b086b274488c 100644 --- a/converted/docs.it4i.cz/salomon/software/ansys/setting-license-preferences.md +++ b/converted/docs.it4i.cz/salomon/software/ansys/setting-license-preferences.md @@ -17,11 +17,11 @@ Launch the ANSLIC_ADMIN utility in a graphical environment: ANSLIC_ADMIN Utility will be run -[](Fluent_Licence_1.jpg) + -[](Fluent_Licence_2.jpg) + -[](Fluent_Licence_3.jpg) +  @@ -30,7 +30,7 @@ the bottom of the list.  -[](Fluent_Licence_4.jpg) + diff --git a/converted/docs.it4i.cz/salomon/software/ansys/workbench.md b/converted/docs.it4i.cz/salomon/software/ansys/workbench.md index dfa1cb805f42d7e9a7ab7972dc579330b5d459c7..11091bd109f2d9db029501d3cbf459b3124603f5 100644 --- a/converted/docs.it4i.cz/salomon/software/ansys/workbench.md +++ b/converted/docs.it4i.cz/salomon/software/ansys/workbench.md @@ -10,7 +10,7 @@ your project in Workbench. Then, for example, in Mechanical, go to Tools - Solve Process Settings ..., click Advanced button as shown on the screenshot. - + Enable Distribute Solution checkbox and enter number of cores (eg. 48 to run on two Salomon nodes). If you want the job to run on more then 1 diff --git a/converted/docs.it4i.cz/salomon/software/chemistry/molpro.md b/converted/docs.it4i.cz/salomon/software/chemistry/molpro.md index 844869e13be3d2aa1eefeb34de18ac59cd1daaab..77cf1643123f5d7ba4697a95eefa3e44353a0b71 100644 --- a/converted/docs.it4i.cz/salomon/software/chemistry/molpro.md +++ b/converted/docs.it4i.cz/salomon/software/chemistry/molpro.md @@ -14,7 +14,7 @@ License ------- Molpro software package is available only to users that have a valid -license. Please contact support to enable access to Molpro if you have a +license. Please contact support to enable to Molpro if you have a valid license appropriate for running on our cluster (eg. >academic research group licence, parallel execution). diff --git a/converted/docs.it4i.cz/salomon/software/compilers.md b/converted/docs.it4i.cz/salomon/software/compilers.md index 294b3784c272924bdfda1204df3304a761cc3fcb..5dca15f3a6f31ad0b84f366ff7f749d2a6e12649 100644 --- a/converted/docs.it4i.cz/salomon/software/compilers.md +++ b/converted/docs.it4i.cz/salomon/software/compilers.md @@ -63,7 +63,7 @@ GNU For compatibility reasons there are still available the original (old 4.4.7-11) versions of GNU compilers as part of the OS. These are -accessible in the search path by default. +ible in the search path by default. 
It is strongly recommended to use the up to date version which comes with the module GCC: diff --git a/converted/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md b/converted/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md index b444af48f2a132562f9df7ca5d271767e1349c8d..a4446f4149168757c1c627ddbe18193d9aebc88c 100644 --- a/converted/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md +++ b/converted/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md @@ -9,7 +9,7 @@ COMSOL Multiphysics® ------------------------- ->>[COMSOL](http://www.comsol.com)<span><span> +>>[COMSOL](http://www.comsol.com)><span> is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many @@ -37,9 +37,9 @@ applications. others](http://www.comsol.com/products) >>COMSOL also allows an ->><span><span>interface support for +>>><span>interface support for equation-based modelling of -</span></span>>>partial differential +>>partial differential equations. >>Execution @@ -49,21 +49,21 @@ equations. >>On the clusters COMSOL is available in the latest stable version. There are two variants of the release: -- >>**Non commercial**<span><span> or so +- >>**Non commercial**><span> or so called >>**EDU variant**>>, which can be used for research and educational purposes. -- >>**Commercial**<span><span> or so called - >>**COM variant**</span></span><span><span>, +- >>**Commercial**><span> or so called + >>**COM variant**><span>, which can used also for commercial activities. - >>**COM variant**</span></span><span><span> + >>**COM variant**><span> has only subset of features compared to the >>**EDU - variant**>> available. <span - id="result_box" class="short_text"> - class="hps">More class="hps">about - licensing will be posted class="hps">here + variant**>> available. + id="result_box" + More about + licensing will be posted here soon.</span> @@ -73,7 +73,7 @@ version. There are two variants of the release: $ module load COMSOL/51-EDU ``` ->>By default the <span><span>**EDU +>>By default the ><span>**EDU variant**>> will be loaded. If user needs other version or variant, load the particular version. To obtain the list of available versions use @@ -86,7 +86,7 @@ $ module avail COMSOL it is recommend to use COMSOL on the compute nodes via PBS Pro scheduler. In order run the COMSOL Desktop GUI on Windows is recommended to use the [Virtual Network Computing -(VNC)](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +(VNC)](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). ``` $ xhost + @@ -136,19 +136,19 @@ LiveLink™* *for MATLAB^®^ >>COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the -COMSOL>>^<span><span><span><span><span><span><span>**®**</span></span></span></span></span></span></span>^</span></span><span><span> +COMSOL>>^><span><span><span><span>**®**</span></span></span></span></span>^ API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB. 
>>LiveLink for MATLAB is available in both ->>**EDU**</span></span><span><span> and ->>**COM**</span></span><span><span> ->>**variant**</span></span><span><span> of the +>>**EDU**><span> and +>>**COM**><span> +>>**variant**><span> of the COMSOL release. On the clusters 1 commercial -(>>**COM**</span></span><span><span>) license +(>>**COM**><span>) license and the 5 educational -(>>**EDU**</span></span><span><span>) licenses +(>>**EDU**><span>) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../isv_licenses.html)) are available. Following example shows how to start COMSOL model from MATLAB via diff --git a/converted/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md b/converted/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md index c12b4071016e0b938bfa6268be33ded0d9860364..4cd91829119228c196ce5d8c9569f01d0e79d211 100644 --- a/converted/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md +++ b/converted/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md @@ -10,11 +10,11 @@ Comsol licence can be used by: CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.) -- id="result_box" class="short_text"> class="hps">all - persons class="hps">who have a valid - class="hps">license -- id="result_box" class="short_text"> class="hps">students - of class="hps">the Technical University</span> +- id="result_box" all + persons who have a valid + license +- id="result_box" students + of the Technical University</span> Comsol EDU Network Licence -------------------------- @@ -27,11 +27,11 @@ Comsol COM Network Licence The licence intended to be used for science and research, publications, students’ projects, commercial research with no commercial use -restrictions. id="result_box"> class="hps">E<span -class="hps">nables class="hps">the solution -class="hps">of at least class="hps">one job -class="hps">by one user class="hps">in one -class="hps">program start. +restrictions. id="result_box"> E +nables the solution +of at least one job +by one user in one +program start. Available Versions ------------------ diff --git a/converted/docs.it4i.cz/salomon/software/debuggers.md b/converted/docs.it4i.cz/salomon/software/debuggers.md index 6ff12150489d2616f140cf0b59f654e00ab3e85e..49d0dfc938e6a792bf7c471e2ed321efba319742 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers.md @@ -22,7 +22,7 @@ The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X -display](../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +display](../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) for running the GUI. 
$ module load intel diff --git a/converted/docs.it4i.cz/salomon/software/debuggers/aislinn.md b/converted/docs.it4i.cz/salomon/software/debuggers/aislinn.md index fa876d1ca4a9ba0057df1a4f96efed267c2d13b3..451b73a0f81b07483b9960f496e6b5b24d33bdd7 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers/aislinn.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers/aislinn.md @@ -5,7 +5,7 @@ Aislinn covers all possible runs with respect to nondeterminism introduced by MPI. It allows to detect bugs (for sure) that occurs very rare in normal runs. -- Aislinn detects problems like invalid memory accesses, deadlocks, +- Aislinn detects problems like invalid memory es, deadlocks, misuse of MPI, and resource leaks. - Aislinn is open-source software; you can use it without any licensing limitations. diff --git a/converted/docs.it4i.cz/salomon/software/debuggers/allinea-ddt.md b/converted/docs.it4i.cz/salomon/software/debuggers/allinea-ddt.md index 014d2e7dcd9d95ca5884809e853da44b8415ed8b..4e1b2df7e873ca9553332a021af0ed7654fa5830 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers/allinea-ddt.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers/allinea-ddt.md @@ -72,15 +72,15 @@ Direct starting a Job with Forge Be sure to log in with an [ X window forwarding -enabled](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +enabled](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). This could mean using the -X in the ssh:  $ ssh -X username@clustername.it4i.cz -Other options is to access login node using VNC. Please see the detailed +Other options is to login node using VNC. Please see the detailed information on [how to use graphic user interface on the -clusters](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +clusters](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) . From the login node an interactive session **with X windows forwarding** @@ -154,7 +154,7 @@ It is recommended to set the following environment values on the offload host: export MYO_WATCHDOG_MONITOR=-1 # To make sure the host process isn't killed when we enter a debugging session - export AMPLXE_COI_DEBUG_SUPPORT=true # To make sure that debugging symbols are accessible on the host and the card + export AMPLXE_COI_DEBUG_SUPPORT=true # To make sure that debugging symbols are ible on the host and the card unset OFFLOAD_MAIN # To make sure allinea DDT can attach to offloaded codes Then use one of the above mentioned methods to launch Forge. (Reverse diff --git a/converted/docs.it4i.cz/salomon/software/debuggers/summary.md b/converted/docs.it4i.cz/salomon/software/debuggers/summary.md index 0739ba35eb0bd0a8ff9947f43b0ee7f94f627f70..d649fa53c4a2d17b8ecb4cb3b24d93be00d2ac01 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers/summary.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers/summary.md @@ -22,7 +22,7 @@ The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. 
Use [X -display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +display](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) for running the GUI. $ module load intel diff --git a/converted/docs.it4i.cz/salomon/software/debuggers/total-view.md b/converted/docs.it4i.cz/salomon/software/debuggers/total-view.md index 5502fa7cf920f6baa1711bfee75d00c3264d59c7..db26eae9977e5fb5f38441c1f43ca12f5b310f35 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers/total-view.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers/total-view.md @@ -56,7 +56,7 @@ using the -X in the ssh: ssh -X username@salomon.it4i.cz -Other options is to access login node using VNC. Please see the detailed +Other options is to login node using VNC. Please see the detailed information on how to use graphic user interface on Anselm [here](https://docs.it4i.cz/salomon/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33#VNC). diff --git a/converted/docs.it4i.cz/salomon/software/debuggers/valgrind.md b/converted/docs.it4i.cz/salomon/software/debuggers/valgrind.md index 281244908b180a0e2cc66691a5fc71fe386550fd..27c51bce32b23ee1db384fe5ce5b83ceb96709ed 100644 --- a/converted/docs.it4i.cz/salomon/software/debuggers/valgrind.md +++ b/converted/docs.it4i.cz/salomon/software/debuggers/valgrind.md @@ -20,8 +20,8 @@ Valgrind run 5-100 times slower. The main tools available in Valgrind are : - **Memcheck**, the original, must used and default tool. Verifies - memory access in you program and can detect use of unitialized - memory, out of bounds memory access, memory leaks, double free, etc. + memory in you program and can detect use of unitialized + memory, out of bounds memory , memory leaks, double free, etc. - **Massif**, a heap profiler. - **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications. @@ -121,7 +121,7 @@ description of command line options. ==12652== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6) In the output we can see that Valgrind has detected both errors - the -off-by-one memory access at line 5 and a memory leak of 40 bytes. If we +off-by-one memory at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with --leak-check=full option : diff --git a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md index 6ffaab545926b4d888555054de75ec6e0711e22c..82f38d171627b66ce9e765e29f2b3f0ca50bf7b5 100644 --- a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md +++ b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md @@ -14,7 +14,7 @@ The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X -display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +display](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) for running the GUI. $ module load intel/2014.06 @@ -26,9 +26,9 @@ The debugger may run in text mode. To debug in text mode, use $ idbc To debug on the compute nodes, module intel must be loaded. 
-The GUI on compute nodes may be accessed using the same way as in [the +The GUI on compute nodes may be ed using the same way as in [the GUI -section](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +section](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) Example: @@ -43,7 +43,7 @@ Example: In this example, we allocate 1 full compute node, compile program myprog.c with debugging options -O0 -g and run the idb debugger -interactively on the myprog.x executable. The GUI access is via X11 port +interactively on the myprog.x executable. The GUI is via X11 port forwarding provided by the PBS workload manager. Debugging parallel applications @@ -56,7 +56,7 @@ programs as well. For debugging small number of MPI ranks, you may execute and debug each rank in separate xterm terminal (do not forget the [X -display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)). +display](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)). Using Intel MPI, this may be done in following way: $ qsub -q qexp -l select=2:ncpus=24 -X -I diff --git a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md index 69deedba6b0e9c44e08b576f45ba4a796a325ced..43cb59ba1248fd8d93ea9eac1a579ef3c4c3a7e0 100644 --- a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md +++ b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md @@ -14,20 +14,20 @@ subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels: -- <div id="d4841e18"> +- BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations. -- <div id="d4841e21"> +- The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations. -- <div id="d4841e24"> +- @@ -35,7 +35,7 @@ Intel MKL provides these basic math kernels: Linux* and Windows* operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS). -- <div id="d4841e27"> +- @@ -43,13 +43,13 @@ Intel MKL provides these basic math kernels: dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions. -- <div id="d4841e30"> +- Vector Math Library (VML) routines for optimized mathematical operations on vectors. -- <div id="d4841e34"> +- @@ -57,7 +57,7 @@ Intel MKL provides these basic math kernels: high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions. 
-- <div id="d4841e37"> +- diff --git a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md index adeb405a8af2bc8357f6b2f87fe32faa2bab4717..498b7c9d1a6f1367f77a40f41a370385627631ad 100644 --- a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md +++ b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md @@ -42,7 +42,7 @@ IDB is no longer available since Parallel Studio 2015. debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X -display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +display](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) for running the GUI. $ module load intel diff --git a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md index 9fbc281c05262b9fc7af4d66f8141bc5579a58d4..9f6cb95bb5edfe46b0f59dc4bcc29aed28009421 100644 --- a/converted/docs.it4i.cz/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md +++ b/converted/docs.it4i.cz/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md @@ -32,7 +32,7 @@ Viewing traces -------------- To view and analyze the trace, open ITAC GUI in a [graphical -environment](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) +environment](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html) : $ module load itac/9.1.2.024 @@ -40,8 +40,8 @@ environment](../../../get-started-with-it4innovations/accessing-the-clusters/gra The GUI will launch and you can open the produced *.stf file. - + + Please refer to Intel documenation about usage of the GUI tool. diff --git a/converted/docs.it4i.cz/salomon/software/intel-xeon-phi.md b/converted/docs.it4i.cz/salomon/software/intel-xeon-phi.md index dcdaa2091784a97cd88522d2c02dee6d63546032..c5684fdb9ba0c257211c190d3b04b2fb08dcf003 100644 --- a/converted/docs.it4i.cz/salomon/software/intel-xeon-phi.md +++ b/converted/docs.it4i.cz/salomon/software/intel-xeon-phi.md @@ -13,7 +13,7 @@ this document are supported. Intel Utilities for Xeon Phi ---------------------------- -To get access to a compute node with Intel Xeon Phi accelerator, use the +To get to a compute node with Intel Xeon Phi accelerator, use the PBS interactive session $ qsub -I -q qprod -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120 -A NONE-0-0 @@ -510,7 +510,7 @@ Phi: ### Execution of the Program in Native Mode on Intel Xeon Phi -The user access to the Intel Xeon Phi is through the SSH. Since user +The user to the Intel Xeon Phi is through the SSH. Since user home directories are mounted using NFS on the accelerator, users do not have to copy binary files or libraries between the host and accelerator.  @@ -543,7 +543,7 @@ compiler module that was used to compile the code on the host computer. 
For your information the list of libraries and their location required for execution of an OpenMP parallel code on Intel Xeon Phi is: -class="discreet visualHighlight"> + >/apps/all/icc/2015.3.187-GNU-5.1.0-2.25/composer_xe_2015.3.187/compiler/lib/mic @@ -799,7 +799,7 @@ Similarly to execution of OpenMP programs in native mode, since the environmental module are not supported on MIC, user has to setup paths to Intel MPI libraries and binaries manually. One time setup can be done by creating a "**.profile**" file in user's home directory. This file -sets up the environment on the MIC automatically once user access to the +sets up the environment on the MIC automatically once user to the accelerator through the SSH. At first get the LD_LIBRARY_PATH for currenty used Intel Compiler and @@ -828,7 +828,7 @@ libraries. library and particular version of an Intel compiler. These versions have to match with loaded modules. -To access a MIC accelerator located on a node that user is currently +To a MIC accelerator located on a node that user is currently connected to, use: $ ssh mic0 @@ -898,7 +898,7 @@ execute: Execution on host - MPI processes distributed over multiple accelerators on multiple nodes** ->To get access to multiple nodes with MIC accelerator, user has to +>To get to multiple nodes with MIC accelerator, user has to use PBS to allocate the resources. To start interactive session, that allocates 2 compute nodes = 2 MIC accelerators run qsub command with following parameters: @@ -919,7 +919,7 @@ immediately. To see the other nodes that have been allocated use: r38u32n1001.bullx >This output means that the PBS allocated nodes r38u31n1000 and -r38u32n1001, which means that user has direct access to +r38u32n1001, which means that user has direct to "**r38u31n1000-mic0**" and "**>r38u32n1001-mic0**" accelerators. @@ -1083,4 +1083,4 @@ Optimization For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") -. + diff --git a/converted/docs.it4i.cz/salomon/software/mpi-1/mpi.md b/converted/docs.it4i.cz/salomon/software/mpi-1/mpi.md index 22a1f72ed72340f755062d1e662b73fc943abbc0..990d65d85d444d618bf87ef18a1a099437fb56f0 100644 --- a/converted/docs.it4i.cz/salomon/software/mpi-1/mpi.md +++ b/converted/docs.it4i.cz/salomon/software/mpi-1/mpi.md @@ -141,7 +141,7 @@ scratch /lscratch filesystem. ### Ways to run MPI programs Optimal way to run an MPI program depends on its memory requirements, -memory access pattern and communication pattern. +memory pattern and communication pattern. Consider these ways to run an MPI program: 1. One MPI process per node, 24 threads per process @@ -152,13 +152,13 @@ One MPI** process per node, using 24 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound. This is also a preferred way for communication intensive applications as one process per node enjoys full -bandwidth access to the network interface. +bandwidth to the network interface. Two MPI** processes per node, using 12 threads each, bound to processor socket is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. 
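The intel-xeon-phi.md hunk above describes creating a "**.profile**" on the MIC so that the Intel runtime libraries are available on every SSH login to the accelerator; a minimal sketch is given below, reusing the `composer_xe` library path quoted in the hunk (adjust it to whatever compiler module is actually loaded on the host).

```
  # Hypothetical ~/.profile on the MIC card: export the MIC runtime libraries
  # of the Intel compiler used on the host. The path below is the one listed
  # in the documentation hunk above; other modules will use a different path.
  export LD_LIBRARY_PATH=/apps/all/icc/2015.3.187-GNU-5.1.0-2.25/composer_xe_2015.3.187/compiler/lib/mic:$LD_LIBRARY_PATH
```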
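As a concrete illustration of the two placement strategies just described in the mpi.md hunk (one MPI process per node with 24 threads, or two processes per node with 12 threads each bound to a socket), a hedged sketch in Intel MPI style; the executable name, the `-ppn` counts for a two-node job and the pinning variable are assumptions to be checked against the MPI module actually loaded.

```
  # One MPI process per node, 24 OpenMP threads per process (sketch, 2 nodes)
  $ export OMP_NUM_THREADS=24
  $ mpirun -n 2 -ppn 1 ./mympiprog.x

  # Two MPI processes per node, 12 threads each, pinned to a socket (sketch)
  $ export OMP_NUM_THREADS=12
  $ export I_MPI_PIN_DOMAIN=socket
  $ mpirun -n 4 -ppn 2 ./mympiprog.x
```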
However, note that -the two processes will share access to the network interface. The 12 -threads and socket binding should ensure maximum memory access bandwidth +the two processes will share to the network interface. The 12 +threads and socket binding should ensure maximum memory bandwidth and minimize communication, migration and numa effect overheads. Important! Bind every OpenMP thread to a core! diff --git a/converted/docs.it4i.cz/salomon/software/numerical-languages/matlab.md b/converted/docs.it4i.cz/salomon/software/numerical-languages/matlab.md index 0e2275e041b2fa63ebd5c4ac4c75beab900280f7..282ebca40f4b34ba664d5b7dc27a09ec511065c4 100644 --- a/converted/docs.it4i.cz/salomon/software/numerical-languages/matlab.md +++ b/converted/docs.it4i.cz/salomon/software/numerical-languages/matlab.md @@ -36,12 +36,12 @@ Matlab on the compute nodes via PBS Pro scheduler. If you require the Matlab GUI, please follow the general informations about [running graphical -applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +applications](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part -[here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)) +[here](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)) is recommended. To run Matlab with GUI, use @@ -94,7 +94,7 @@ code on just a single node. Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this -page](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). +page](../../../get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html). $ xhost + $ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A NONE-0-0 -q qexp -l select=1 -l walltime=00:30:00 @@ -106,7 +106,7 @@ The second part of the command shows how to request all necessary licenses. In this case 1 Matlab-EDU license and 48 Distributed Computing Engines licenses. -Once the access to compute nodes is granted by PBS, user can load +Once the to compute nodes is granted by PBS, user can load following modules and start Matlab: r1i0n17$ module load MATLAB/2015a-EDU diff --git a/converted/docs.it4i.cz/salomon/software/numerical-languages/r.md b/converted/docs.it4i.cz/salomon/software/numerical-languages/r.md index f75613e70d239aa24ed31ecfe8c9741031f95b1a..58bf5bb8c24d5e3eea5a049993cf702f085c8fb6 100644 --- a/converted/docs.it4i.cz/salomon/software/numerical-languages/r.md +++ b/converted/docs.it4i.cz/salomon/software/numerical-languages/r.md @@ -34,12 +34,12 @@ Modules ------- **The R version 3.1.1 is available on the cluster, along with GUI -interface Rstudio +interface Rstudio** Application Version module ------------- -------------- --------------------- R** R 3.1.1 R/3.1.1-intel-2015b -Rstudio** Rstudio 0.97 Rstudio +Rstudio**** Rstudio 0.97 Rstudio $ module load R @@ -55,10 +55,10 @@ OMP_NUM_THREADS environment variable. 
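Since the r.md hunk above notes that the R build is threaded and controlled through the OMP_NUM_THREADS environment variable, a small hedged sketch of a non-interactive single-node run follows; the script and output file names are placeholders.

```
  # Illustrative only: load R and set the number of threads it may use,
  # then run a script in batch mode.
  $ module load R
  $ export OMP_NUM_THREADS=24
  $ R CMD BATCH myscript.R myscript.Rout
```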
### Interactive execution -To run R interactively, using Rstudio GUI, log in with ssh -X parameter +To run R interactively, using Rstudio** GUI, log in with ssh -X parameter for X11 forwarding. Run rstudio: - $ module load Rstudio + $ module load Rstudio** $ rstudio ### Batch execution @@ -396,7 +396,7 @@ mpi.apply Rmpi example: The above is the mpi.apply MPI example for calculating the number Ď€. Only the slave processes carry out the calculation. Note the mpi.parSapply(), ** function call. The package -class="anchor-link">parallel +parallel [example](r.html#package-parallel)[above](r.html#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply(). diff --git a/converted/docs.it4i.cz/salomon/storage/cesnet-data-storage.md b/converted/docs.it4i.cz/salomon/storage/cesnet-data-storage.md index a5dfb47922cded184c2b07602830603efd4d70c4..be82b6fac217d54c30a7ea235c88437d4229ac0c 100644 --- a/converted/docs.it4i.cz/salomon/storage/cesnet-data-storage.md +++ b/converted/docs.it4i.cz/salomon/storage/cesnet-data-storage.md @@ -23,7 +23,7 @@ Republic. User of data storage CESNET (DU) association can become organizations or an individual person who is either in the current employment relationship (employees) or the current study relationship (students) to -a legal entity (organization) that meets the “Principles for access to +a legal entity (organization) that meets the “Principles for to CESNET Large infrastructure (Access Policy)”. User may only use data storage CESNET for data transfer and storage @@ -37,12 +37,12 @@ The service is documented at please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). -The procedure to obtain the CESNET access is quick and trouble-free. +The procedure to obtain the CESNET is quick and trouble-free. (source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) -CESNET storage access +CESNET storage --------------------- ### Understanding Cesnet storage @@ -51,7 +51,7 @@ It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. -Once registered for CESNET Storage, you may [access the +Once registered for CESNET Storage, you may [ the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods. @@ -59,7 +59,7 @@ number of ways. We recommend the SSHFS and RSYNC methods. SSHFS: The storage will be mounted like a local hard drive -The SSHFS provides a very convenient way to access the CESNET Storage. +The SSHFS provides a very convenient way to the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable harddrive. Files can be than copied in and out in a usual fashion. @@ -74,7 +74,7 @@ Mount tier1_home **(only 5120M !)**: $ sshfs username@ssh.du1.cesnet.cz:. 
cesnet/ -For easy future access from Anselm, install your public key +For easy future from Anselm, install your public key $ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys @@ -92,7 +92,7 @@ Once done, please remember to unmount the storage $ fusermount -u cesnet -### Rsync access +### Rsync Rsync provides delta transfer for best performance, can resume interrupted transfers diff --git a/converted/docs.it4i.cz/salomon/storage/storage.md b/converted/docs.it4i.cz/salomon/storage/storage.md index d0a381ed39b91a740d2d24315d3124bf4bbe7e27..8cd9de885abc4336dfad2443b677e95ede76fa8f 100644 --- a/converted/docs.it4i.cz/salomon/storage/storage.md +++ b/converted/docs.it4i.cz/salomon/storage/storage.md @@ -9,11 +9,11 @@ Introduction ------------ There are two main shared file systems on Salomon cluster, the [ -class="anchor-link"> -class="anchor-link">HOME](storage.html#home) -and [ class="anchor-link"> -class="anchor-link">SCRATCH](storage.html#shared-filesystems). -All login and compute nodes may access same data on shared filesystems. + +HOME](storage.html#home) +and [ +SCRATCH](storage.html#shared-filesystems). +All login and compute nodes may same data on shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems. @@ -41,14 +41,14 @@ Shared Filesystems ---------------------- Salomon computer provides two main shared filesystems, the [ -class="anchor-link">HOME +HOME filesystem](storage.html#home-filesystem) and the [SCRATCH filesystem](storage.html#scratch-filesystem). The SCRATCH filesystem is partitioned to [WORK and TEMP workspaces](storage.html#shared-workspaces). The HOME filesystem is realized as a tiered NFS disk storage. The SCRATCH filesystem is realized as a parallel Lustre filesystem. Both shared file -systems are accessible via the Infiniband network. Extended ACLs are +systems are ible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH filesystems for the purpose of sharing data with other users using fine-grained control. @@ -72,9 +72,9 @@ workspaces](storage.html#shared-workspaces). - class="emphasis"> class="emphasis"> -- class="emphasis">SCRATCH Lustre object storage - <div class="itemizedlist"> + class="emphasis"> +- SCRATCH Lustre object storage + - Disk array SFA12KX - 540 4TB SAS 7.2krpm disks @@ -84,8 +84,8 @@ workspaces](storage.html#shared-workspaces). -- class="emphasis">SCRATCH Lustre metadata storage - <div class="itemizedlist"> +- SCRATCH Lustre metadata storage + - Disk array EF3015 - 12 600GB SAS 15krpm disks @@ -103,15 +103,15 @@ A user file on the Lustre filesystem can be divided into multiple chunks (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. -When a client (a class="glossaryItem">compute -class="glossaryItem">node from your job) needs to create -or access a file, the client queries the metadata server ( -class="glossaryItem">MDS) and the metadata target ( -class="glossaryItem">MDT) for the layout and location of the +When a client (a compute +node from your job) needs to create +or a file, the client queries the metadata server ( +MDS) and the metadata target ( +MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, -the class="glossaryItem">MDS is no longer involved in the +the MDS is no longer involved in the file I/O process. 
The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. @@ -132,7 +132,7 @@ directories or files to get optimum I/O performance: Salomon Lustre filesystems one can specify -1 to use all OSTs in the filesystem. 3.stripe_offset The index of the - class="glossaryItem">OST where the first stripe is to be + OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended. @@ -144,7 +144,7 @@ significantly impact the I/O performance you experience. Use the lfs getstripe for getting the stripe parameters. Use the lfs setstripe command for setting the stripe parameters to get optimal I/O performance The correct stripe setting depends on your needs and file -access patterns. + patterns. ``` $ lfs getstripe dir|filename @@ -185,7 +185,7 @@ When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set -to 1. While this default setting provides for efficient access of +to 1. While this default setting provides for efficient of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A @@ -197,10 +197,10 @@ of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. -Using a large stripe size can improve performance when accessing very +Using a large stripe size can improve performance when ing very large files -Large stripe size allows each client to have exclusive access to its own +Large stripe size allows each client to have exclusive to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. @@ -293,7 +293,7 @@ number of named user and named group entries. ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard -manner. Below, we create a directory and allow a specific user access. +manner. Below, we create a directory and allow a specific user . ``` [vop999@login1.salomon ~]$ umask 027 @@ -377,11 +377,11 @@ The WORK workspace resides on SCRATCH filesystem. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The -/scratch/work/project/projectid is accessible to all users involved in +/scratch/work/project/projectid is ible to all users involved in project projectid. > The WORK workspace is intended to store users project data as well as -for high performance access to input and output files. All project data +for high performance to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up. @@ -417,7 +417,7 @@ Lustre ### TEMP The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace -accesspoint is /scratch/temp. 
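To make the striping guidance in the storage.md hunk above concrete, a hedged example of raising the stripe count on a directory meant for large, parallel-written files; the directory path and the count of 16 are illustrative, the count echoing the 64-writer example in the text.

```
  # Illustrative only: stripe new files in this directory across 16 OSTs,
  # keeping the default stripe size, then verify the setting.
  $ lfs setstripe -c 16 /scratch/temp/$USER/mydir
  $ lfs getstripe /scratch/temp/$USER/mydir
```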
Users may freely create subdirectories +point is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6P, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this @@ -428,14 +428,14 @@ insufficient for particular user, please contact lifted upon request. The TEMP workspace is intended for temporary scratch data generated -during the calculation as well as for high performance access to input +during the calculation as well as for high performance to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory. Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files. -Files on the TEMP filesystem that are **not accessed for more than 90 +Files on the TEMP filesystem that are **not ed for more than 90 days** will be automatically **deleted**. The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is @@ -471,16 +471,16 @@ RAM disk Every computational node is equipped with filesystem realized in memory, so called RAM disk. -Use RAM disk in case you need really fast access to your data of limited +Use RAM disk in case you need really fast to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory. -The local RAM disk is mounted as /ramdisk and is accessible to user +The local RAM disk is mounted as /ramdisk and is ible to user at /ramdisk/$PBS_JOBID directory. The local RAM disk filesystem is intended for temporary scratch data -generated during the calculation as well as for high performance access +generated during the calculation as well as for high performance to input and output files. Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory. It is not recommended to allocate large amount of diff --git a/html_md.sh b/html_md.sh index c7ae05a53739ea08e7d51f6d4302fcdf70270b17..cb24f7b7f1c05dfd05797b392893b50035130b41 100755 --- a/html_md.sh +++ b/html_md.sh @@ -2,8 +2,8 @@ ### DOWNLOAD AND CONVERT DOCUMENTATION # autor: kru0052 -# version: 0.35 -# change: repair images bugs, change version number -1 - beta +# version: 0.36 +# change: repair images bugs and add new filtering html and css elements # bugs: bad formatting tables, bad links for other files, stayed a few html elements, formatting bugs... 
### @@ -34,6 +34,42 @@ if [ "$1" = "-t" ]; then while read a b ; do cp "$a" "./converted/$b"; done < <(paste ./info/list_image.txt ./source/list_image_mv.txt) + cp ./docs.it4i.cz/salomon/salomon ./converted/docs.it4i.cz/salomon/salomon + cp ./docs.it4i.cz/salomon/salomon-2 ./converted/docs.it4i.cz/salomon/salomon-2 + cp ./converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/fairshare_formula.png ./converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png + cp ./converted/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job_sort_formula.png ./converted/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png + cp ./converted/docs.it4i.cz/salomon/software/debuggers/vtune-amplifier.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vtune-amplifier.png + cp ./converted/docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20160708v12.33.35.png + + wget https://docs.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization-job-workflow + mv ./virtualization-job-workflow ./converted/docs.it4i.cz/anselm-cluster-documentation/software/ + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig6.png + mv ./fig6.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig6.png + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig5.png + mv ./fig5.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig5.png + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig1.png + mv ./fig1.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig1.png + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig2.png + mv ./fig2.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig2.png + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig4.png + mv ./fig4.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig4.png + wget https://docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig3.png + mv ./fig3.png ./converted/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/fig3.png + + + + else + echo "list_md.txt not exists!!!!!" + fi + + +fi +if [ "$1" = "-t1" ]; then + # testing new function + + echo "Testing 1..." + + while read a ; do echo "$a"; @@ -54,9 +90,6 @@ if [ "$1" = "-t" ]; then rm "./converted/${a%TMP.*}.TEST.md"; done <./source/list_md_mv.txt - else - echo "list_md.txt not exists!!!!!" 
- fi fi diff --git a/source/list_image_mv.txt b/source/list_image_mv.txt index 3447b7aa4d9d2531c52637e92bd1810182621ece..bb78bf58628ec319d36b9fc944c53ab8c1ff6e19 100644 --- a/source/list_image_mv.txt +++ b/source/list_image_mv.txt @@ -66,10 +66,10 @@ ./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_4.jpg ./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_1.jpg ./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_3.jpg -./docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg -./docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg -./docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg -./docs.it4i.cz/anselm-cluster-documentation/logingui.jpg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/Anselmprofile.jpg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnecticon.jpg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/anyconnectcontextmenu.jpg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/logingui.jpg ./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_2.jpg ./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_4.jpg ./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_1.jpg @@ -78,14 +78,14 @@ ./docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg ./docs.it4i.cz/salomon/sgi-c1104-gp1.jpeg ./docs.it4i.cz/salomon/salomon-1.jpeg -./docs.it4i.cz/salomon/uv-2000.jpeg +./docs.it4i.cz/salomon/hardware-overview-1/uv-2000.jpeg ./docs.it4i.cz/salomon/salomon-3.jpeg ./docs.it4i.cz/salomon/salomon-4.jpeg -./docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpeg -./docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpeg -./docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpeg -./docs.it4i.cz/anselm-cluster-documentation/java_detection.jpeg -./docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpeg -./docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpeg -./docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpeg -./docs.it4i.cz/anselm-cluster-documentation/login.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/loginwithprofile.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/instalationfile.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/successfullinstalation.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/java_detection.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/downloadfilesuccessfull.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/executionaccess2.jpeg +./docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/login.jpeg diff --git a/source/replace.txt b/source/replace.txt index 074b420e9a27ada00a9adaac7d4ce731c94b7499..5eaf20e394371db52e3e95aa748214e4208add27 100644 --- a/source/replace.txt +++ b/source/replace.txt @@ -1,113 +1,280 @@ -[](putty-tunnel.png)& -& -[****](TightVNC_login.png)& + []()& + **[]()&** +- - & +.[]()&. +### []()[]()&### +### []()&### +### **&### +1. style="text-align: left; float: none; ">Locate and modify&1. Locate and modify +2000](../uv-2000/@@images/04ce7514-8d27-4cdb-bf0f-45d875c75df0.jpeg "UV 2000")& +2](../executionaccess2.jpg/@@images/bed3998c-4b82-4b40-83bd-c3528dde2425.jpeg "Execution access 2")& +2); its mate is mapped to 37 on the reverse strand (32). Read r002 has& +2. 
style="text-align: left; float: none; ">&2. Check Putty settings: +[](7D_Enhanced_hypercube.png)& +access& +access](../executionaccess.jpg/@@images/4d6e7cb7-9aa7-419c-9583-6dfd92b2c015.jpeg "Execution access") +According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is& +& +Analyzer](Snmekobrazovky20151204v15.35.12.png/@@images/fb3b3ac2-a88f-4e55-a25e-23f1da2200cb.png "Intel Trace Analyzer")& +and comes to IT4I (represented by the blue dashed line). The data& +Anselm](../../anselm-cluster-documentation/Authorization_chain.png "Authorization chain")& +& +& +& +are mandatory, the rest is optional but strongly recommended. Each line& +class="anchor-link">& +class="Apple-converted-space">& +class="discreet visualHighlight">& +class="emphasis">& +class="glossaryItem">& +class="highlightedSearchTerm">& +class="highlightedSearchTerm">SSH</span><span>&highlightedSearchTerm + class="hps">& +class="hps">& + class="hps">More</span> <span class="hps">& +class="hps trans-target-highlight">& +class="internal-link">& + class="internal-link"><span id="result_box" class="short_text"><span& +class="monospace">& +class="monospace">LAPACKE</span> module, which includes Intel's LAPACKE&LAPACKE modelu, which includes Intel's LAPACKE + class="n">& +### class="n">&### +class="n">& + class="pre">& +class="pun">node_group_key& +class="short_text"><span& +class="smarterwiki-popup-bubble-body"><span& +class="smarterwiki-popup-bubble-links-container"><span& +class="smarterwiki-popup-bubble-links-row">[{.smarterwiki-popup-bubble-link-favicon}](http://maps.google.com/maps?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search Google Maps"){.smarterwiki-popup-bubble-link}[{.smarterwiki-popup-bubble-link-favicon}](http://www.google.com/search?q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB "Search Google"){.smarterwiki-popup-bubble-link}[](http://www.google.com/search?hl=com&btnI=I'm+Feeling+Lucky&q=HDF5%20icc%20serial%09pthread%09hdf5%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09%24HDF5_INC%20%24HDF5_CPP_LIB%09%24HDF5_INC%20%24HDF5_F90_LIB%0A%0AHDF5%20icc%20parallel%20MPI%0A%09pthread%2C%20IntelMPI%09hdf5-parallel%2F1.8.13%09%24HDF5_INC%20%24HDF5_SHLIB%09Not%20supported%09%24HDF5_INC%20%24HDF5_F90_LIB+wikipedia "Search Wikipedia"){.smarterwiki-popup-bubble-link}</span></span></span></span></span>& +class="smarterwiki-popup-bubble-links"><span& +class="smarterwiki-popup-bubble smarterwiki-popup-bubble-active smarterwiki-popup-bubble-flipped"><span&& +class="smarterwiki-popup-bubble-tip"></span><span +Cluster](https://docs.it4i.cz/salomon/vpn_contacting_https_cluster.png/@@images/22b15d8c-5d5f-4c5c-8973-fbc4e9a32128.png "VPN Contacting Cluster")](../vpn_contacting_https_cluster.png)& +Cluster](https://docs.it4i.cz/salomon/vpn_contacting_https.png/@@images/ff365499-d07c-4baf-abb8-ce3e15559210.png "VPN Contacting Cluster")](../vpn_contacting_https.png)& +Cluster](https://docs.it4i.cz/salomon/vpn_contacting.png/@@images/9ccabccf-581a-476a-8c24-ce9842c3e657.png "VPN Contacting 
Cluster")](../vpn_contacting.png)&& +column and referenced from the genotype fields as 1-based indexes to& +component where they can be analysed directly by the user that produced& +connection](https://docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg/@@images/ebdd0e41-e839-497e-ab33-45162d00b03b.jpeg "Successfull connection")](../../anselm-cluster-documentation/anyconnecticon.jpg)& +Connection](https://docs.it4i.cz/salomon/vpn_successfull_connection.png/@@images/45537053-a47f-48b2-aacd-3b519d6770e6.png "VPN Succesfull Connection")](../vpn_successfull_connection.png)&& +Connect](Snmekobrazovky20160211v14.27.45.png/@@images/3550e4ae-2eab-4571-8387-11a112dd6ca8.png "Allinea Reverse Connect")& +contains a P (padding) operation which correctly aligns the inserted& +.contenttype-file}& +[](cygwin-and-x11-forwarding.html)& +data, the separator indicates whether the data are phased (|) or& +[{.image-inline width="451"& [](https://docs.it4i.cz/get-started-with-it4innovations/gnome_screen.jpg)& -[](../../../../salomon/gnome_screen.jpg.1)& +deletion, replacement, and a large deletion. The REF columns shows the& +described by the annotation in the INFO column, the coordinate is that& +detection](../java_detection.jpg/@@images/5498e1ba-2242-4b9c-a799-0377a73f779e.jpeg "Java detection")& [](gdmdisablescreensaver.png)& -[](gnome-terminal.png)& +disease definitions from the lists. Thus, virtual panels can be& +</div>& +<div>& +<div class="itemizedlist">& +<div id="d4841e18">& +<div id="d4841e21">& +<div id="d4841e24">& +<div id="d4841e27">& +<div id="d4841e30">& +<div id="d4841e34">& +<div id="d4841e37">& +& +file](../instalationfile.jpg/@@images/202d14e9-e2e1-450b-a584-e78c018d6b6a.jpeg "Installation file")& +filters available. The tool includes a genomic viewer (Genome Maps 30)& +& +[](Fluent_Licence_1.jpg)& +[](Fluent_Licence_2.jpg)& +[](Fluent_Licence_3.jpg)& +[](Fluent_Licence_4.jpg)& +for each sequenced patient. These lists files together with primary and& + forwarding style="text-align: left; float: none; ">& +genomic coordinates.](images/fig6.png.1 "fig6.png")& +genomic position or region. 
All alternate alleles are listed in the ALT& [](gnome-compute-nodes-over-vnc.png)& -### **&### -### []()[]()&### - [](PuttyKeygeneratorV.png)& +### Gnome on Windows**&### Gnome on Windows +[](gnome-terminal.png)& +height="513"}](ddt1.png)& +[****](TightVNC_login.png)& + [](PuTTY_host_Salomon.png)& + [](PuTTY_open_Salomon.png)& + [](PuTTY_save_Salomon.png)& +[](totalview1.png)& +& +& + id="Key_management" class="mw-headline">Key management& +increases.](images/fig5.png.1 "fig5.png")& +insertion; the third a SNP; the fourth a large structural variant& +instalation](../successfullinstalation.jpg/@@images/c6d69ffe-da75-4cb6-972a-0cf4c686b6e1.jpeg "Successfull instalation")& +](../copy_of_vpn_web_install_3.png)& +Install](https://docs.it4i.cz/salomon/vpn_web_download_2.png/@@images/3358d2ce-fe4d-447b-9e6c-b82285f9796e.png "VPN Install")](../vpn_web_download_2.png)& +Install](https://docs.it4i.cz/salomon/vpn_web_download.png/@@images/06a88cce-5f51-42d3-8f0a-f615a245beef.png "VPN Install")](../vpn_web_download.png)& +Install](https://docs.it4i.cz/salomon/vpn_web_install_2.png/@@images/c2baba93-824b-418d-b548-a73af8030320.png "VPN Install")](../vpn_web_install_2.png)[ +Install](https://docs.it4i.cz/salomon/vpn_web_install_4.png/@@images/4cc26b3b-399d-413b-9a6c-82ec47899585.png "VPN Install")](../vpn_web_install_4.png)& +Install](https://docs.it4i.cz/salomon/vpn_web_login_2.png/@@images/be923364-0175-4099-a363-79229b88e252.png "VPN Install")](../vpn_web_login_2.png)& +Install](https://docs.it4i.cz/salomon/vpn_web_login.png/@@images/5eac6b9c-22e4-4abe-ab38-e4ccbe87b710.png "VPN Install")](../vpn_web_login.png)& +& +login](https://docs.it4i.cz/salomon/vpn_login.png/@@images/5102f29d-93cf-4cfd-8f55-c99c18f196ea.png "VPN login")](../vpn_login.png)&& +& +login](../successfullconnection.jpg "successful login")& +](../../anselm-cluster-documentation/anyconnectcontextmenu.jpg)&& +mismatches. Read r004 is aligned across an intron, indicated by the N& +### Notes **&### Notes +of reference sequences. Notably, r001 is the name of a read pair.& +of the base before the variant. (b–f ) Alignments and VCF& +of the body describes variants present in the sampled population at one& +& +out.](images/fig1.png "Fig 1")& +[Pageant (for Windows& + [](PageantV.png)& +& +position of the first aligned base. The CIGAR string for this alignment& +Preferences](gdmscreensaver.png/@@images/44048cfa-e854-4cb4-902b-c173821c2db1.png "Screensaver Preferences")](../../../../salomon/gnome_screen.jpg.1)& +pre-processor converts raw data into a list of variants and annotations& +present in the sequence field. 
The NM tag gives the number of& +profile](../loginwithprofile.jpg/@@images/a6fd5f3f-bce4-45c9-85e1-8d93c6395eee.jpeg "Login with profile")& +PuTTY - class="Apple-converted-space"> before we start SSH connection ssh-connection style="text-align: start; "}& +[](PuttyKeygenerator_001V.png)& [](PuttyKeygenerator_002V.png)& [](20150312_143443.png)& [](PuttyKeygenerator_004V.png)& [](PuttyKeygenerator_005V.png)& [](PuttyKeygenerator_006V.png)& - **[]()&** - []()& - & - [](cygwin-and-x11-forwarding.html)& -### Gnome on Windows**&### Gnome on Windows -### []()&### - id="Key_management" class="mw-headline">Key management& -style="text-align: start; float: none; ">& -class="Apple-converted-space">& -style="text-align: start; ">& -[Pageant (for Windows& -users)](putty/PageantV.png)& -PuTTY - class="Apple-converted-space"> before we start SSH connection ssh-connection style="text-align: start; "}& - [](PuTTY_host_Salomon.png)& + [](PuttyKeygeneratorV.png)& [](PuTTY_keyV.png)& - [](PuTTY_save_Salomon.png)& - [](PuTTY_open_Salomon.png)& - [](PageantV.png)& -& +& +& +& +& +& +representations of different sequence variants: SNP, insertion,& resources on& -Anselm](../../anselm-cluster-documentation/Authorization_chain.png "Authorization chain")& -Water-cooled Compute Nodes With MIC Accelerator**&**Water-cooled Compute Nodes With MIC Accelerator** +& +Rstudio&Rstudio** +& +[](../salomon-2)& +[](salomon)& [](salomon)& & -Tape Library T950B**&**Tape Library T950B** +& &![]](salomon-3.jpeg) & -& -& -& -class="pun">node_group_key& -class="internal-link">& -[](7D_Enhanced_hypercube.png)& - []&![] -{.state-missing-value -.contenttype-file}& -- - & -[](../vpn_web_login.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_login_2.png/@@images/be923364-0175-4099-a363-79229b88e252.png "VPN Install")](../vpn_web_login_2.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_install_2.png/@@images/c2baba93-824b-418d-b548-a73af8030320.png "VPN Install")](../vpn_web_install_2.png)[ -Install](https://docs.it4i.cz/salomon/vpn_web_install_4.png/@@images/4cc26b3b-399d-413b-9a6c-82ec47899585.png "VPN Install")](../vpn_web_install_4.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_download.png/@@images/06a88cce-5f51-42d3-8f0a-f615a245beef.png "VPN Install")](../vpn_web_download.png)& -Install](https://docs.it4i.cz/salomon/vpn_web_download_2.png/@@images/3358d2ce-fe4d-447b-9e6c-b82285f9796e.png "VPN Install")](../vpn_web_download_2.png)& -& -Install](https://docs.it4i.cz/salomon/copy_of_vpn_web_install_3.png/@@images/9c34e8ad-64b1-4e1d-af3a-13c7a18fbca4.png "VPN Install")](../copy_of_vpn_web_install_3.png)& -[](../vpn_contacting_https_cluster.png)& -Cluster](https://docs.it4i.cz/salomon/vpn_contacting_https.png/@@images/ff365499-d07c-4baf-abb8-ce3e15559210.png "VPN Contacting Cluster")](../vpn_contacting_https.png)& -[](../../anselm-cluster-documentation/anyconnecticon.jpg)& -[](../../anselm-cluster-documentation/anyconnectcontextmenu.jpg)&& -Cluster](https://docs.it4i.cz/salomon/vpn_contacting.png/@@images/9ccabccf-581a-476a-8c24-ce9842c3e657.png "VPN Contacting Cluster")](../vpn_contacting.png)&& -login](https://docs.it4i.cz/salomon/vpn_login.png/@@images/5102f29d-93cf-4cfd-8f55-c99c18f196ea.png "VPN login")](../vpn_login.png)&& -Connection](https://docs.it4i.cz/salomon/vpn_successfull_connection.png/@@images/45537053-a47f-48b2-aacd-3b519d6770e6.png "VPN Succesfull Connection")](../vpn_successfull_connection.png)&& +](gdmdisablescreensaver.png)& [](vtune-amplifier)& -<span& -class="internal-link">& 
+Screenshot](Snmekobrazovky20141204v12.56.36.png "CUBE Screenshot")& +& +secondary (alignment) data files are stored in IT4I sequence DB and& +sequences. Padding operations can be absent when an aligner does not& +session](../../../../salomon/gnome_screen.jpg/@@images/7758b792-24eb-48dc-bf72-618cda100fda.png "Default Gnome session")](https://docs.it4i.cz/get-started-with-it4innovations/gnome_screen.jpg)& +shows an example of a deletion (present in SAMPLE1) and a replacement of& </span>& -[{.image-inline width="451"& -height="513"}](ddt1.png)& -& -& -& -[](totalview1.png)& -[](totalview2.png)& -class="monospace">& -{.external& -.text}& - class="n">& -class="n">& +<span& + <span& + [<span class="anchor-link">& +<span class="discreet">& +<span class="discreet"></span>& +<span class="glossaryItem">& + <span class="glossaryItem">& +<span class="hps">& +<span class="hps alt-edited">& +<span class="listitem">& <span class="n">& - class="pre">& -### class="n">&### - style="text-align: left; float: none; "> & -1. style="text-align: left; float: none; ">Locate and modify&1. Locate and modify - style="text-align: left; float: none; ">& +<span class="s1">& + <span class="WYSIWYG_LINK">& +<span dir="auto">& +<span id="__caret">& +<span id="__caret"><span id="__caret"></span></span>& +(<span id="result_box">& +<span id="result_box" class="short_text"><span class="hps">& + <span id="result_box"><span class="hps">& +</span></span>& +</span> <span& +<span><span>& +</span> <span class="hps">& + </span> <span class="hps">& +</span> <span class="hps">who have a valid</span> <span& +<span><span class="monospace">& +<span><span>Introduction&###Introduction +</span></span><span><span>& +</span></span></span></span><span><span>& +</span></span><span><span><span><span>& +.<span style="text-align: left; "> </span>& +<span style="text-align: start; ">& +{.state-missing-value style="text-align: left; float: none; ">& style="text-align: left; float: none; ">& style="text-align: left; float: none; ">change it&change it to - to</span>& - style="text-align: left; float: none; ">& - style="text-align: left; float: none; ">& -2. style="text-align: left; float: none; ">&2. Check Putty settings: style="text-align: left; float: none; ">Check Putty settings:& style="text-align: left; float: none; ">Enable X11&Enable X11 - forwarding style="text-align: left; float: none; ">& - style="text-align: left; float: none; ">& + style="text-align: left; float: none; "> & +style="text-align: start; ">& +style="text-align: start; float: none; ">& +& +**Summary&**Summary** +support multiple sequence alignment. The last six bases of read r003 map& +Tape Library T950B**&**Tape Library T950B** +.text}.& +.text}& +that enables the representation of the variants in the corresponding& +]&![] +to position 9, and the first five to position 29 on the reverse strand.& + to</span>& +[](totalview2.png)& +tunnel](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/putty-tunnel.png/@@images/4c66cd51-c858-473b-98c2-8d901aea7118.png "PuTTY Tunnel")](putty-tunnel.png)& +two bases by another base (SAMPLE2); the second line shows a SNP and an& +unphased (/). 
Thus, the two alleles C and G at the positions 2 and 5 in& +uploaded to the discovery (candidate priorization) or diagnostic& +users)](putty/PageantV.png)& +use simplest representation possible and lowest coordinate in cases& +& +& +vncviewer](../../../../anselm-cluster-documentation/vncviewer.png/@@images/bb4cedff-4cb6-402b-ac79-039186fe5df3.png "Vncviewer")& +[& +Workflow](virtualization-job-workflow "Virtualization Job Workflow")& + & +[]() & + & + [**](TightVNC_login.png)& +[](gnome-terminal.png)& +[](gnome-compute-nodes-over-vnc.png)& +screensaver](https://docs.it4i.cz/get-started-with-it4innovations/ing-the-clusters/graphical-user-interface/vnc/gdmdisablescreensaver.png/@@images/8a4758d9-3027-4ce4-9a90-2d5e88197451.png "Disable lock screen and screensaver")](gdmdisablescreensaver.png)&