diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md index a4d2ed19453be74114c3b4e5ba4cb8e6ed659106..c127c9b0b956fffa494d5f0d769b360a2733c01c 100644 --- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md +++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md @@ -70,7 +70,7 @@ To establish local proxy server on your workstation, install and run SOCKS proxy local $ ssh -D 1080 localhost ``` -On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server. +On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server. Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](outgoing-connections.html#port-forwarding-from-login-nodes). diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md index 18d077a50ea859e30acf9eb2ac392fab82ea1358..4ceaee780818a6023d43ea5a0f3ee431dfa926ee 100644 --- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md +++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md @@ -50,11 +50,11 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com [username@login2.anselm ~]$ ``` ->The environment is **not** shared between login nodes, except for [shared filesystems](../storage-1.html#section-1). +>The environment is **not** shared between login nodes, except for [shared filesystems](../storage/storage/#section-1). Data Transfer ------------- -Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) 
In case large volumes of data are transferred, use dedicated data mover node dm1.anselm.it4i.cz for increased performance. +Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) If large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance. |Address|Port|Protocol| |---|---| @@ -104,7 +104,7 @@ $ man scp $ man sshfs ``` -On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc. +On Windows, use the [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disk. More information about the shared file systems is available [here](../../storage.html). diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md index a4e7e0fc465ce7200d9a4376a9dfb8b36fa0265b..aba2a1b7bc16087a2db782ae8f8e58fbcc9d7e19 100644 --- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md +++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md @@ -5,7 +5,7 @@ Accessing IT4Innovations internal resources via VPN --------------------------------------------------- >**Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch** -Workaround can be found at [https://docs.it4i.cz/vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html) +A workaround can be found at [vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html) For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. 
We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems: diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md index e5a31f724d407ab091dfdc7f2e5960eee2aa1af1..62a7da453c7dec182cd116777a0fdc81f3f88684 100644 --- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md +++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md @@ -76,7 +76,7 @@ PrgEnv-intel sets up the INTEL development environment in conjunction with the I ### Application Modules Path Expansion -All application modules on Salomon cluster (and further) will be build using tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). In case that you want to use some applications that are build by EasyBuild already, you have to modify your MODULEPATH environment variable. +All application modules on the Salomon cluster (and beyond) will be built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). If you want to use applications that have already been built by EasyBuild, you have to modify your MODULEPATH environment variable. ```bash export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/ diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md index 9253e435a834779b2a7a9f4948da982a183f6fef..fe790d9e256e4cfa02b3ffd6a90bbee20ef6bf80 100644 --- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md +++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md @@ -3,7 +3,7 @@ Hardware Overview The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110 accelerated nodes and 2 fat nodes. 
Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320TB /home disk storage to store the user files. The 146TB shared /scratch storage is available for the scratch data. -The Fat nodes are equipped with large amount (512GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI. +The Fat nodes are equipped with a large amount (512GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI. Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity: @@ -518,5 +518,4 @@ The parameters are summarized in the following tables: |MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110| |Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-| -For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/). - +For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/). 
\ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md index 04de0b2bba766960901a352a160009df2fdf71d0..75113f0a3638d316d87c05e1d2972d70eeab96cb 100644 --- a/docs.it4i/anselm-cluster-documentation/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/introduction.md @@ -3,7 +3,7 @@ Introduction Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected by fully non-blocking fat-tree Infiniband network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). -The cluster runs bullx Linux [](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)[operating system](software/operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). +The cluster runs the bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). 
User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users. diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md index 38e969fad63ecb1fc95256e50d1aa87fdaf454a3..11bbcbb5c05f8f395b951b7cb50985a7a0b946c1 100644 --- a/docs.it4i/anselm-cluster-documentation/network.md +++ b/docs.it4i/anselm-cluster-documentation/network.md @@ -1,11 +1,11 @@ Network ======= -All compute and login nodes of Anselm are interconnected by [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data. +All compute and login nodes of Anselm are interconnected by [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data. Infiniband Network ------------------ -All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree. +All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree. The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes. 
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md index be57d0808636d0baf81f8a5bc2bd2767c0d0da2e..f2641aa702dba17d2059bed250a2ae8c98d8456d 100644 --- a/docs.it4i/anselm-cluster-documentation/prace.md +++ b/docs.it4i/anselm-cluster-documentation/prace.md @@ -5,11 +5,11 @@ Intro ----- PRACE users coming to Anselm as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required. -All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. +All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. Help and Support -------------------- -If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). +If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction/). 
Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz. @@ -30,11 +30,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here: -- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) -- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) -- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) -- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) +- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) +- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) +- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) +- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) +- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) Before you start to use any of the services don't forget to create a proxy certificate from your certificate: @@ -209,7 +209,7 @@ For production runs always use scratch file systems, either the global shared or All system wide installed software on the cluster is made available to the users via the modules. The information about the environment and modules usage is in this [section of general documentation](environment-and-modules/). -PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). +PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). 
```bash $ module load prace @@ -231,13 +231,13 @@ qprace**, the PRACE \***: This queue is intended for normal production runs. It ### Accounting & Quota -The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/). +The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/). -PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/). +PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/). Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree". Please note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours". 
->The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> +>The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> ```bash $ it4ifree diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md index 7c877aa4cfcaacb8e2d2d7aad26ade86b5225cc9..b21396878c7f6eecb3c2d6a22399380eeff713c6 100644 --- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md +++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md @@ -30,7 +30,7 @@ How to use the service ### Setup and start your own TurboVNC server. -TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/> +TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/> **Always use TurboVNC on both sides** (server and client) **don't mix TurboVNC and other VNC implementations** (TightVNC, TigerVNC, ...) as the VNC protocol implementation may slightly differ and diminish your user experience by introducing picture artifacts, etc. 
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md index 9717d44bcd4daa6fc93c86ba90600a23c4e93cca..0514284c9cd53f174f64c060f085b458b63631a6 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md @@ -9,13 +9,13 @@ However, executing huge number of jobs via the PBS queue may strain the system. >Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time. -- Use [Job arrays](capacity-computing.html#job-arrays) when running huge number of [multithread](capacity-computing.html#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs -- Use [GNU parallel](capacity-computing.html#gnu-parallel) when running single core jobs -- Combine[GNU parallel with Job arrays](capacity-computing.html#combining-job-arrays-and-gnu-parallel) when running huge number of single core jobs +- Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs +- Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs +- Combine [GNU parallel with Job arrays](capacity-computing/#combining-job-arrays-and-gnu-parallel) when running a huge number of single core jobs Policy ------ -1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing.html#job-arrays). +1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays). 2. The array size is at most 1000 subjobs. 
Job arrays @@ -75,7 +75,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M ### Submit the job array -To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing.html#array_example) may be submitted like this: +To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this: ```bash $ qsub -N JOBNAME -J 1-900 jobscript @@ -144,7 +144,7 @@ Display status information for all user's subjobs. $ qstat -u $USER -tJ ``` -Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation.html). +Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/sitemap/). GNU parallel ---------------- @@ -205,7 +205,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The job ### Submit the job -To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing.html#gp_example) may be submitted like this: +To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing/#gp_example) may be submitted like this: ```bash $ qsub -N JOBNAME jobscript @@ -286,7 +286,7 @@ When deciding this values, think about following guiding rules: ### Submit the job array -To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing.html#combined_example) may be submitted like this: +To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing/#combined_example) may be submitted like this: ```bash $ qsub -N JOBNAME -J 1-992:32 jobscript @@ -299,7 +299,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo Examples -------- -Download the examples in [capacity.zip](capacity-computing-examples), illustrating the above listed ways to run huge number of jobs. 
We recommend to try out the examples, before using this for running production jobs. +Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using them for production jobs. Unzip the archive in an empty directory on Anselm and follow the instructions in the README file diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity.zip b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity.zip new file mode 100644 index 0000000000000000000000000000000000000000..747453323cdc23fabbf0105771445d560fd9ae93 Binary files /dev/null and b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity.zip differ diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md index 6d2c19fe4d7674af7cfed9367bed710db0e9ba66..04d79d5e54907b2987dda39427f0fa486234df1b 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md @@ -18,7 +18,7 @@ Queue priority is priority of queue where job is queued before execution. Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority. 
-Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues> +Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues> ### Fairshare priority @@ -34,7 +34,7 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb Usage counts allocated corehours (ncpus*walltime). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week). Jobs queued in queue qexp are not calculated to project's usage. ->Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>. +>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>. Calculated fairshare priority can be also seen as Resource_List.fairshare attribute of a job. diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md index 7645699bceafa3f9825efc6ddd6afc4ce57aac0a..bfa3393237dcea854b6ec9c674f1b81780d349a4 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md @@ -48,7 +48,7 @@ $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation. -All qsub options may be [saved directly into the jobscript](job-submission-and-execution.html#PBSsaved). In such a case, no options to qsub are needed. +All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). 
In such a case, no options to qsub are needed. ```bash $ qsub ./myjob @@ -90,9 +90,9 @@ In this example, we allocate 4 nodes, 16 cores, selecting only the nodes with In ### Placement by IB switch -Groups of computational nodes are connected to chassis integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network.html) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication. +Groups of computational nodes are connected to chassis integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication. -Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen at [Hardware Overview](../hardware-overview.html) section. +Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen at [Hardware Overview](../hardware-overview/) section. We recommend allocating compute nodes of a single switch when best possible computational network performance is required to run the job efficiently: @@ -334,7 +334,7 @@ exit In this example, some directory on the /home holds the input file input and executable mympiprog.x . We create a directory myjob on the /scratch filesystem, copy input and executable files from the /home directory where the qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI programm mympiprog.x and copy the output file back to the /home directory. 
The mympiprog.x is executed as one process per node, on all allocated nodes. ->Consider preloading inputs and executables onto [shared scratch](../storage.html) before the calculation starts. +>Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts. In some cases, it may be impractical to copy the inputs to scratch and outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is users responsibility to preload the input files on shared /scratch before the job submission and retrieve the outputs manually, after all calculations are finished. @@ -365,14 +365,14 @@ exit In this example, input and executable files are assumed preloaded manually in /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options, controlling behavior of the MPI execution. The mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node. -More information is found in the [Running OpenMPI](../software/mpi-1/Running_OpenMPI.html) and [Running MPICH2](../software/mpi-1/running-mpich2.html) +More information is found in the [Running OpenMPI](../software/mpi/Running_OpenMPI/) and [Running MPICH2](../software/mpi/running-mpich2/) sections. ### Example Jobscript for Single Node Calculation >Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. 
-Example jobscript for single node calculation, using [local scratch](../storage.html) on the node: +Example jobscript for single node calculation, using [local scratch](../storage/storage/) on the node: ```bash #!/bin/bash @@ -398,4 +398,4 @@ In this example, some directory on the home holds the input file input and execu ### Other Jobscript Examples -Further jobscript examples may be found in the [Software](../software.1.html) section and the [Capacity computing](capacity-computing.html) section. \ No newline at end of file +Further jobscript examples may be found in the software section and the [Capacity computing](capacity-computing/) section. \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md index 8224ce4d02f04ed2ccab8953d2b08b495fd57dea..629556916472154abe690c2ea1508a1b0e76e28d 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md @@ -3,7 +3,7 @@ Resources Allocation Policy Resources Allocation Policy --------------------------- -The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority.html) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. 
Following table provides the queue partitioning overview: +The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and resources available to the Project. The Fairshare at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information is in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview: |queue |active project |project resources |nodes|min ncpus*|priority|authorization|>walltime | | --- | --- | @@ -25,15 +25,15 @@ The resources are allocated to the job in a fairshare fashion, subject to constr ### Notes -The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution.html). +The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/). Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R). -Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>. +Anselm users may check the current queue configuration at <https://extranet.it4i.cz/anselm/queues>. ### Queue status ->Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/> +>Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>  @@ -106,12 +106,12 @@ Resources Accounting Policy ### The Core-Hour -The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. 
The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts to 16 core-hours. See example in the [Job submission and execution](job-submission-and-execution.html) section.
+The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts to 16 core-hours. See example in the [Job submission and execution](job-submission-and-execution/) section.
### Check consumed resources
>The **it4ifree** command is a part of it4i.portal.clients package, located here:
-<https://pypi.python.org/pypi/it4i.portal.clients>
+<https://pypi.python.org/pypi/it4i.portal.clients>
User may check at any time, how many core-hours have been consumed by himself/herself and his/her projects. The command is available on clusters' login nodes.
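As an aside to the hunk above: the accounting rule it describes (allocated cores × wall-clock hours, charged whether or not the cores do any work) can be sketched as a small calculation. The helper function below is illustrative only, not part of any IT4I tooling; the 16-cores-per-node figure is Anselm's, taken from the text:

```python
def core_hours(nodes: int, cores_per_node: int, walltime_hours: float) -> float:
    """Core-hours accounted by PBS Pro: allocated cores times wall-clock time.

    Accounting runs from allocation, regardless of actual CPU utilization.
    """
    return nodes * cores_per_node * walltime_hours

# One full Anselm node (16 cores) held for 1 hour -> 16 core-hours,
# matching the example in the text above.
print(core_hours(nodes=1, cores_per_node=16, walltime_hours=1.0))
```

The same arithmetic explains why requesting a whole node for a single-core job is expensive: the unused 15 cores are still charged.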
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
index f5cf8a8bfa652bc1f2bc23b4c703894979c0f3a0..a450ed452af93df8aeffe089c0019775ab52b6e3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
@@ -1,7 +1,7 @@
ANSYS CFX
=========
-[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
+[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
@@ -49,9 +49,9 @@ echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). SVS FEM recommends to utilize sources by keywords: nodes, ppn.
These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified.
>Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def
**License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**.
-[More about licensing here](licensing.md)
\ No newline at end of file
+[More about licensing here](licensing/)
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
index 458ac3295687f60c3cfc4259aa8286951277fb96..09acb0b0945d802369eaa7c37b570191e506a67f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
@@ -1,7 +1,7 @@
ANSYS Fluent
============
-[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)
+[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)
software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
1.
Common way to run Fluent over pbs file
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
index c2b18864670a33be6c60b2e6daf768f1731ad863..47c66dd254b0e00bc5964a382d5cfcf92c7a7ab8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
@@ -1,7 +1,7 @@
ANSYS LS-DYNA
=============
-**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
+**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program.
Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
@@ -51,6 +51,6 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
Working directory has to be created before sending pbs job into the queue.
Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common LS-DYNA .**k** file which is attached to the ansys solver via parameter i=
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index 56d5f78aa795a93e244e9209c2f6ff82ebc241a1..20c751e0ded0da6e5791e98f79ec86deb21a0c42 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -1,7 +1,7 @@
ANSYS MAPDL
===========
-**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
+**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
@@ -50,9 +50,9 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job.
Also the rest of code assumes such structure of allocated resources.
+Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common APDL file which is attached to the ansys solver via parameter -i
**License** should be selected by parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN)
-[More about licensing here](licensing.md)
\ No newline at end of file
+[More about licensing here](licensing/)
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
index 53e175f0fe1ade524602d72648f550e5c35a6308..8faf1827ee0d3385b7a7919e625d04c70f5b7532 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
@@ -1,9 +1,9 @@
Overview of ANSYS Products
==========================
-**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users.
If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
+**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
-Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More about licensing here](ansys/licensing.html)
+Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More about licensing here](ansys/licensing/)
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...)
load the module:
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
index e2eb2c12ac41b5df6997743034f410608c8ea34e..774a62e9c33eaa1c9a293d19b840c3edb4d4bbbe 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
@@ -1,7 +1,7 @@
LS-DYNA
=======
-[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing.
+[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems.
Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing.
Anselm provides **1 commercial license of LS-DYNA without HPC** support now.
@@ -31,6 +31,6 @@ module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common LS-DYNA **.k** file which is attached to the LS-DYNA solver via parameter i=
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
index f993cf58b57de39054e4f898531bfaf1d54ccede..5d0b5aec424f8025ffb29f3065325eba2f8cb94e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -5,13 +5,13 @@ Molpro is a complete system of ab initio programs for molecular electronic struc
About Molpro
------------
-Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
+Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
License
-------
Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (eg. academic research group licence, parallel execution).
-To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
+To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
Installed version
-----------------
@@ -31,11 +31,11 @@ Compilation parameters are default:
Running
------
-Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node.
On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
+Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
>The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
-You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage.md). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
+You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
### Example jobscript
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index c079a16835f53b1309fba3247441701eb4ef0d9f..a0beff73a3d4b908447dba9822cea119079332cd 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -7,7 +7,7 @@ Introduction
-------------------------
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
-[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
+[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
Installed versions
------------------
@@ -40,7 +40,7 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
Options
--------------------
-Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
+Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
- MEMORY : controls the amount of memory NWChem will use
-- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage.md#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
\ No newline at end of file
+- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg.
"scf direct" \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md index 2aaf9714afcbf74f65fa3260d34bcb4e15389cec..171891d655eb6f26cb9373089d71f2d67142beea 100644 --- a/docs.it4i/anselm-cluster-documentation/software/compilers.md +++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md @@ -15,7 +15,7 @@ The C/C++ and Fortran compilers are divided into two main groups GNU and Intel. Intel Compilers --------------- -For information about the usage of Intel Compilers and other Intel products, please read the [Intel Parallel studio](intel-suite.html) page. +For information about the usage of Intel Compilers and other Intel products, please read the [Intel Parallel studio](intel-suite/) page. GNU C/C++ and Fortran Compilers ------------------------------- @@ -148,8 +148,8 @@ For more informations see the man pages. Java ---- -For information how to use Java (runtime and/or compiler), please read the [Java page](java.html). +For information how to use Java (runtime and/or compiler), please read the [Java page](java/). nVidia CUDA ----------- -For information how to work with nVidia CUDA, please read the [nVidia CUDA page](nvidia-cuda.html). \ No newline at end of file +For information how to work with nVidia CUDA, please read the [nVidia CUDA page](nvidia-cuda/). 
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index fba0656f32dfbf42190d9c2fb0c8e576ecd4f518..bbc63027e54c5a7c58e3a2567d3f0162d562d2e8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -3,15 +3,15 @@ COMSOL Multiphysics®
Introduction
-------------------------
-[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many
+[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many
standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications.
-- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-- [CFD Module](http://www.comsol.com/cfd-module),
-- [Acoustics Module](http://www.comsol.com/acoustics-module),
-- and [many others](http://www.comsol.com/products)
+- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+- [CFD Module](http://www.comsol.com/cfd-module),
+- [Acoustics Module](http://www.comsol.com/acoustics-module),
+- and [many others](http://www.comsol.com/products)
COMSOL also allows an interface support for equation-based modelling of partial differential equations.
@@ -34,7 +34,7 @@ By default the **EDU variant** will be loaded.
If user needs other version or va
$ module avail comsol
```
-If user needs to prepare COMSOL jobs in the interactive mode it is recommend to use COMSOL on the compute nodes via PBS Pro scheduler. In order run the COMSOL Desktop GUI on Windows is recommended to use the [Virtual Network Computing (VNC)](https://docs.it4i.cz/anselm-cluster-documentation/software/comsol/resolveuid/11e53ad0d2fd4c5187537f4baeedff33).
+If user needs to prepare COMSOL jobs in the interactive mode it is recommend to use COMSOL on the compute nodes via PBS Pro scheduler. In order run the COMSOL Desktop GUI on Windows is recommended to use the Virtual Network Computing (VNC).
```bash
$ xhost +
@@ -76,7 +76,7 @@ LiveLink™* *for MATLAB®^
-------------------------
COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the COMSOL**®** API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB.
-LiveLink for MATLAB is available in both **EDU** and **COM** **variant** of the COMSOL release. On Anselm 1 commercial (**COM**) license and the 5 educational (**EDU**) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../isv_licenses.html)) are available.
+LiveLink for MATLAB is available in both **EDU** and **COM** **variant** of the COMSOL release. On Anselm 1 commercial (**COM**) license and the 5 educational (**EDU**) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../isv_licenses/)) are available.
Following example shows how to start COMSOL model from MATLAB via LiveLink in the interactive mode.
```bash
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index e929ddf3b08d4996f0a246b31b1ac16a750fd09c..53ebc20d48fae86208c5cf43af0125839ec98de3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -94,4 +94,4 @@ Users can find original User Guide after loading the DDT module:
$DDTPATH/doc/userguide.pdf
```
-[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)
\ No newline at end of file
+[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
index c58e94b3a3a71ba8ab3e9025a20350c56ca2ff0e..8ae1a2f066a8e691ee8716d09f1904a22103e176 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
@@ -13,7 +13,6 @@ Our license is limited to 64 MPI processes.
Modules
-------
-
Allinea Performance Reports version 6.0 is available
```bash
@@ -26,13 +25,13 @@ Usage
-----
>Use the the perf-report wrapper on your (MPI) program.
-Instead of [running your MPI program the usual way](../mpi-1.md), use the the perf report wrapper:
+Instead of [running your MPI program the usual way](../mpi/), use the the perf report wrapper:
```bash
$ perf-report mpirun ./mympiprog.x
```
-The mpi program will run as usual. The perf-report creates two additional files, in *.txt and *.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution.md).
+The mpi program will run as usual. The perf-report creates two additional files, in *.txt and *.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
Example
-------
@@ -59,4 +58,4 @@ Now lets profile the code:
$ perf-report mpirun ./mympiprog.x
```
-Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bounded.
\ No newline at end of file
+Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bounded.
\ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md index 96d237f7291022e29d8390f7b1114508a7c32313..ccc237e4411d64c20f95f674314174094fbcb3d8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md @@ -19,19 +19,19 @@ Each node in the tree is colored by severity (the color scheme is displayed at t Installed versions ------------------ -Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules.html) : +Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/) : - cube/4.2.3-gcc, compiled with GCC - cube/4.2.3-icc, compiled with Intel compiler Usage ----- -CUBE is a graphical application. Refer to [Graphical User Interface documentation](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for a list of methods to launch graphical applications on Anselm. +CUBE is a graphical application. Refer to Graphical User Interface documentation for a list of methods to launch graphical applications on Anselm. >Analyzing large data sets can consume large amount of CPU and RAM. Do not perform large analysis on login nodes. After loading the apropriate module, simply launch cube command, or alternatively you can use scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before to opening them with CUBE, not all performance data will be available. References -1. <http://www.scalasca.org/software/cube-4.x/download.html> +1. 
<http://www.scalasca.org/software/cube-4.x/download.html>
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
index 04a6a0b5ebed669d5b92333e1cf8149762173bca..eba68ad3907d5bce1f3879d4e6bb70d0aacc266a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
@@ -7,14 +7,14 @@ We provide state of the art programms and tools to develop, profile and debug HP
Intel debugger
--------------
-The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for running the GUI.
+The Intel debugger version 13.0 is available via the module intel. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use X display for running the GUI.

```bash
$ module load intel
$ idb
```

-Read more at the [Intel Debugger](intel-suite/intel-debugger.html) page.
+Read more at the [Intel Debugger](intel-suite/intel-debugger/) page.

Allinea Forge (DDT/MAP)
-----------------------
@@ -25,7 +25,7 @@ Allinea DDT, is a commercial debugger primarily for debugging parallel MPI or Op
$ forge
```

-Read more at the [Allinea DDT](debuggers/allinea-ddt.html) page.
+Read more at the [Allinea DDT](debuggers/allinea-ddt/) page.
Allinea Performance Reports --------------------------- @@ -36,7 +36,7 @@ Allinea Performance Reports characterize the performance of HPC application runs $ perf-report mpirun -n 64 ./my_application argument01 argument02 ``` -Read more at the [Allinea Performance Reports](debuggers/allinea-performance-reports.html) page. +Read more at the [Allinea Performance Reports](debuggers/allinea-performance-reports/) page. RougeWave Totalview ------------------- @@ -47,7 +47,7 @@ TotalView is a source- and machine-level debugger for multi-process, multi-threa $ totalview ``` -Read more at the [Totalview](debuggers/total-view.html) page. +Read more at the [Totalview](debuggers/total-view/) page. Vampir trace analyzer --------------------- @@ -58,4 +58,4 @@ Vampir is a GUI trace analyzer for traces in OTF format. $ vampir ``` -Read more at the [Vampir](../../salomon/software/debuggers/vampir.html) page. \ No newline at end of file +Read more at the [Vampir](../../salomon/software/debuggers/vampir/) page. \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md index b552be9ca6fc356cc7359f51e007e48c2f44c54d..a5ff9fde5776bdd767fdb23bc4f11ce3e003d7d5 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md @@ -3,11 +3,11 @@ Intel Performance Counter Monitor Introduction ------------ -Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel>® processors, similar to [PAPI](papi.html). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can monitor also uncore metrics, like memory controllers and >QuickPath Interconnect links. 
+Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel® processors, similar to [PAPI](papi/). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can also monitor uncore metrics, like memory controllers and QuickPath Interconnect links.

Installed version
------------------------------
-Currently installed version 2.6. To load the [module](../../environment-and-modules.html), issue:
+Currently, version 2.6 is installed. To load the [module](../../environment-and-modules/), issue:

```bash
$ module load intelpcm
@@ -191,7 +191,7 @@ Can be used as a sensor for ksysguard GUI, which is currently not installed on A
API
---
-In a similar fashion to PAPI, PCM provides a C++ API to access the performance counter from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API.
+In a similar fashion to PAPI, PCM provides a C++ API to access the performance counters from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API.

>Due to security limitations, using PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as root user)

@@ -276,7 +276,7 @@ Sample output:
References
----------
-1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
-2. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
-3. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
+1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
+2. 
<https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide. +3. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md index 67ddd11fe19e4af5b370bbb0c90662014c039f02..0966634cb7d643246ed7a3cf78b2b4ecf855c8f3 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md @@ -27,7 +27,7 @@ and launch the GUI : $ amplxe-gui ``` ->To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on [using GUI applications](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33). +>To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications. The GUI will open in new window. Click on "*New Project...*" to create a new project. After clicking *OK*, a new window with project properties will appear.  At "*Application:*", select the bath to your binary you want to profile (the binary should be compiled with -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created. 
@@ -55,7 +55,7 @@ Application: ssh Application parameters: mic0 source ~/.profile && /path/to/your/bin -Note that we include source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi.html). +Note that we include source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi/). >If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card. @@ -68,4 +68,4 @@ You may also use remote analysis to collect data from the MIC and then analyze i References ---------- -1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors \ No newline at end of file +1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md index e7465fbc5b99504cbc36cd7fcebb7cb34b5d599c..83149f3b6961a2a2d00f89a370a228884522067c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md @@ -12,7 +12,7 @@ PAPI can be used with parallel as well as serial programs. Usage ----- -To use PAPI, load [module](../../environment-and-modules.html) papi: +To use PAPI, load [module](../../environment-and-modules/) papi: ```bash $ module load papi @@ -92,19 +92,19 @@ The include path is automatically added by papi module to $INCLUDE. ### High level API -Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API. 
+Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API.

### Low level API

-Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API.
+Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API.

### Timers

-PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers>
+PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers>

### System information

-PAPI can be used to query some system infromation, such as CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information>
+PAPI can be used to query some system information, such as CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information>

Example
-------
@@ -195,7 +195,7 @@ Now the compiler won't remove the multiplication loop. (However it is still not
>PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon, for example the floating point operations counter is missing.

-To use PAPI in [Intel Xeon Phi](../intel-xeon-phi.html) native applications, you need to load module with " -mic" suffix, for example " papi/5.3.2-mic" :
+To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load the module with the " -mic" suffix, for example " papi/5.3.2-mic" :

```bash
$ module load papi/5.3.2-mic
@@ -234,6 +234,6 @@ To use PAPI in offload mode, you need to provide both host and MIC versions of P
References
----------
-1. <http://icl.cs.utk.edu/papi/> Main project page
-2. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
-3. <http://icl.cs.utk.edu/papi/docs/> API Documentation
+1. <http://icl.cs.utk.edu/papi/> Main project page
+2. 
<http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki +3. <http://icl.cs.utk.edu/papi/docs/> API Documentation \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md index a91e07b4d8158f16f0986834a164a072ee2e5403..615eb9ee409a6f7851c3894b5284a93667b21d61 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md @@ -3,16 +3,16 @@ Scalasca Introduction ------------------------- -[Scalasca](http://www.scalasca.org/) is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes. +[Scalasca](http://www.scalasca.org/) is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes. Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications. Installed versions ------------------ -There are currently two versions of Scalasca 2.0 [modules](../../environment-and-modules.html) installed on Anselm: +There are currently two versions of Scalasca 2.0 [modules](../../environment-and-modules/) installed on Anselm: -- scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers.html) and [OpenMPI](../mpi-1/Running_OpenMPI.html), -- scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi-1/running-mpich2.html). 
+- scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
+- scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers/) and [Intel MPI](../mpi/running-mpich2/).

Usage
-----
@@ -24,7 +24,7 @@ Profiling a parallel application with Scalasca consists of three steps:
### Instrumentation

-Instrumentation via " scalasca -instrument" is discouraged. Use [Score-P instrumentation](score-p.html).
+Instrumentation via " scalasca -instrument" is discouraged. Use [Score-P instrumentation](score-p/).

### Runtime measurement

@@ -43,11 +43,11 @@ Some notable Scalsca options are:
**-t Enable trace data collection. By default, only summary data are collected.**
**-e <directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep_, followed by name of the executable and launch configuration.**

->Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage.html).
+>Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).

### Analysis of reports

-For the analysis, you must have [Score-P](score-p.html) and [CUBE](cube.html) modules loaded. The analysis is done in two steps, first, the data is preprocessed and then CUBE GUI tool is launched.
+For the analysis, you must have [Score-P](score-p/) and [CUBE](cube/) modules loaded. The analysis is done in two steps: first the data is preprocessed, then the CUBE GUI tool is launched.

To launch the analysis, run :

@@ -63,8 +63,8 @@ scalasca -examine -s <experiment_directory>

Alternatively you can open CUBE and load the data directly from here. Keep in mind that in that case the preprocessing is not done and not all metrics will be shown in the viewer.
-Refer to [CUBE documentation](cube.html) on usage of the GUI viewer. +Refer to [CUBE documentation](cube/) on usage of the GUI viewer. References ---------- -1. <http://www.scalasca.org/> \ No newline at end of file +1. <http://www.scalasca.org/> \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md index 4453d0d121641d6fba326f556d6b88737ca6800c..bf551f73c0dfbffc78eeabaa085bd3826ad890da 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md @@ -3,16 +3,16 @@ Score-P Introduction ------------ -The [Score-P measurement infrastructure](http://www.vi-hps.org/projects/score-p/) is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. +The [Score-P measurement infrastructure](http://www.vi-hps.org/projects/score-p/) is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. -Score-P can be used as an instrumentation tool for [Scalasca](scalasca.html). +Score-P can be used as an instrumentation tool for [Scalasca](scalasca/). Installed versions ------------------ -There are currently two versions of Score-P version 1.2.6 [modules](../../environment-and-modules.html) installed on Anselm : +There are currently two versions of Score-P version 1.2.6 [modules](../../environment-and-modules/) installed on Anselm : -- scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers.html) and [OpenMPI](../mpi-1/Running_OpenMPI.html) -- scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html)> and [Intel MPI](../mpi-1/running-mpich2.html)>. 
+- scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
+- scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers/) and [Intel MPI](../mpi/running-mpich2/).

Instrumentation
---------------
@@ -75,7 +75,7 @@ An example in C/C++ :
end subroutine foo
```

-Please refer to the [documentation for description of the API](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf).
+Please refer to the [documentation for description of the API](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf).

###Manual instrumentation using directives

@@ -115,4 +115,4 @@ and in Fortran :
end subroutine foo
```

-The directives are ignored if the program is compiled without Score-P. Again, please refer to the [documentation](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf) for a more elaborate description. \ No newline at end of file
+The directives are ignored if the program is compiled without Score-P. Again, please refer to the [documentation](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf) for a more elaborate description. \ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index cc7f78ef4b80046b97294d8657b2db923ccdacaf..782aff8f35b5ac65d709fcd8f86cb9b2c8f0f53b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -70,7 +70,7 @@ Be sure to log in with an X window forwarding enabled. This could mean using the
ssh -X username@anselm.it4i.cz
```

-Other options is to access login node using VNC. Please see the detailed information on how to use graphic user interface on Anselm [here](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33#VNC).
+Another option is to access the login node using VNC. Please see the detailed information on how to use the graphical user interface on Anselm.

From the login node an interactive session with X windows forwarding (-X option) can be started by following command:

@@ -156,5 +156,5 @@ More information regarding the command line parameters of the TotalView can be f
Documentation
-------------
-[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features.
+[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
index b70e955d178b41166ecaf1b7619737354d3424ac..ca68b44d02f3ccb6635b4b20e3da8bbd1fde17bb 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
@@ -8,7 +8,7 @@ About Valgrind
Valgrind is an open-source tool, used mainly for debuggig memory-related problems, such as memory leaks, use of uninitalized memory etc. in C/C++ applications. The toolchain was however extended over time with more functionality, such as debugging of threaded applications, cache profiling, not limited only to C/C++.

-Valgind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). Valgrind uses a virtual machine and dynamic recompilation of binary code, because of that, you can expect that programs being debugged by Valgrind run 5-100 times slower.
+Valgrind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). Valgrind uses a virtual machine and dynamic recompilation of binary code; because of that, you can expect that programs being debugged by Valgrind run 5-100 times slower.

The main tools available in Valgrind are :

@@ -17,14 +17,14 @@ The main tools available in Valgrind are :
- **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
- **Cachegrind**, a cache profiler.
- **Callgrind**, a callgraph analyzer.
-- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).

Installed versions
------------------
There are two versions of Valgrind available on Anselm.

- Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
-- Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules.html) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
+- Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.

Usage
-----
@@ -261,5 +261,4 @@ Prints this output : (note that there is output printed for every launched MPI p
==31319== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
```

-We can see that Valgrind has reported use of unitialised memory on the master process (which reads the array to be broadcasted) and use of unaddresable memory on both processes.
-
+We can see that Valgrind has reported use of uninitialised memory on the master process (which reads the array to be broadcasted) and use of unaddressable memory on both processes. \ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
index 1575596251bdbb2d1fe719ae4faa34fb48c67dc2..8be78cd36ba4129fb94160622821226fac64b915 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
@@ -1,7 +1,7 @@
Vampir
======

-Vampir is a commercial trace analysis and visualisation tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces, you need to use a trace collection tool (such as [Score-P](../../../salomon/software/debuggers/score-p.html)) first to collect the traces.
+Vampir is a commercial trace analysis and visualisation tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces; you need to use a trace collection tool (such as [Score-P](../../../salomon/software/debuggers/score-p/)) first to collect the traces.



@@ -20,4 +20,4 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir
References
----------
-[1]. <https://www.vampir.eu> \ No newline at end of file
+[1]. <https://www.vampir.eu> \ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/gpi2.md b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
index 4e49e6b61fee8e2694bddfd0385df838b191671e..ca1950dc8439b84b45d08ee1809454b455abfe4d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/gpi2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
@@ -7,7 +7,7 @@ Introduction
------------
Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication.
It provides a flexible, scalable and fault tolerant interface for parallel applications. -The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments. +The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments. Modules ------- diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md index a0fc0a341a657d9c1e61342f08438883093cb0ce..23eb1d13cc1cb5c42c2f938ea951d4dd6fe852ac 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md @@ -27,7 +27,7 @@ The compiler recognizes the omp, simd, vector and ivdep pragmas for OpenMP paral $ ifort -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.f mysubroutines.f -o myprog.x ``` -Read more at <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm> +Read more at <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm> Sandy Bridge/Haswell binary compatibility ----------------------------------------- diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md index 
e0fce8a66fd615247a5b3773957b4105084c475f..dcef17d86acffda145018a4cadca2a137ac10e2e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md
@@ -3,7 +3,7 @@ Intel Debugger
Debugging serial applications
-----------------------------
-The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for running the GUI.
+The Intel debugger version 13.0 is available via the module intel. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use X display for running the GUI.

```bash
$ module load intel
@@ -16,7 +16,7 @@ The debugger may run in text mode. To debug in text mode, use
$ idbc
```

-To debug on the compute nodes, module intel must be loaded. The GUI on compute nodes may be accessed using the same way as in [the GUI section](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)
+To debug on the compute nodes, the module intel must be loaded. The GUI on compute nodes may be accessed in the same way as in the GUI section.

Example:

@@ -39,7 +39,7 @@ Intel debugger is capable of debugging multithreaded and MPI parallel programs a
### Small number of MPI ranks

-For debugging small number of MPI ranks, you may execute and debug each rank in separate xterm terminal (do not forget the [X display](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)).
Using Intel MPI, this may be done in following way:
+For debugging a small number of MPI ranks, you may execute and debug each rank in a separate xterm terminal (do not forget the X display). Using Intel MPI, this may be done in the following way:

```bash
$ qsub -q qexp -l select=2:ncpus=16 -X -I
@@ -71,5 +71,5 @@ Run the idb debugger in GUI mode. The menu Parallel contains number of tools for
Further information
-------------------
-Exhaustive manual on idb features and usage is published at [Intel website](http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/debugger/user_guide/index.htm)
+An exhaustive manual on idb features and usage is published at the [Intel website](http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/debugger/user_guide/index.htm).
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
index c88980f6f796a31ad44d00010ee4b1cb9f443b96..05839d105ac015a16e8b9f76cd1fb56aa45b7c4a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
@@ -77,6 +77,6 @@ You will need the ipp module loaded to run the ipp enabled executable. This may
Code samples and documentation
------------------------------
-Intel provides number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating use of IPP.
+Intel provides a number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating the use of IPP.
-Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) \ No newline at end of file +Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md index a4792b029cd92198f8758d0cff58640731fa7615..eb51c824ff5733555cd1c6e6511ed45980936cfb 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md @@ -14,7 +14,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e - Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search. - Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver. -For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). +For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). Intel MKL version 13.5.192 is available on Anselm @@ -37,7 +37,7 @@ The MKL library provides number of interfaces. The fundamental once are the LP64 ### Linking -Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. 
See also [examples](intel-mkl.html#examples) below. +Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below. You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include rpath on the compile line: @@ -114,8 +114,8 @@ In this example, we compile, link and run the cblas_dgemm example, using LP64 MKL and MIC accelerators ------------------------ -The MKL is capable to automatically offload the computations o the MIC accelerator. See section [Intel XeonPhi](../intel-xeon-phi.html) for details. +The MKL is capable of automatically offloading the computations to the MIC accelerator. See section [Intel XeonPhi](../intel-xeon-phi/) for details. Further reading --------------- -Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). \ No newline at end of file +Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md index 973e31bdfa8c6924222b85392090c673ea68ce85..a6ef96ba0934f3826308e6f946689f91ca2dcd1b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md @@ -4,7 +4,7 @@ Intel TBB Intel Threading Building Blocks ------------------------------- Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. 
To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may -be offloaded to [MIC accelerator](../intel-xeon-phi.html). +be offloaded to [MIC accelerator](../intel-xeon-phi/). Intel TBB version 4.1 is available on Anselm @@ -39,5 +39,5 @@ You will need the tbb module loaded to run the tbb enabled executable. This may Further reading --------------- -Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> +Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md index 99867b30bad4389576b5ece35048719b2fb4d365..f5270fe2ca0c8e3526aa390ffdb434b69f1eb788 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md @@ -21,18 +21,18 @@ The Intel compilers version 13.1.3 are available, via module intel. The compiler $ ifort -v ``` -Read more at the [Intel Compilers](intel-compilers.html) page. +Read more at the [Intel Compilers](intel-compilers/) page. Intel debugger -------------- -The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for running the GUI. +The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. 
Use X display for running the GUI. ```bash $ module load intel $ idb ``` -Read more at the [Intel Debugger](intel-debugger.html) page. +Read more at the [Intel Debugger](intel-debugger/) page. Intel Math Kernel Library ------------------------- @@ -42,7 +42,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e $ module load mkl ``` -Read more at the [Intel MKL](intel-mkl.html) page. +Read more at the [Intel MKL](intel-mkl/) page. Intel Integrated Performance Primitives --------------------------------------- @@ -52,7 +52,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai $ module load ipp ``` -Read more at the [Intel IPP](intel-integrated-performance-primitives.html) page. +Read more at the [Intel IPP](intel-integrated-performance-primitives/) page. Intel Threading Building Blocks ------------------------------- @@ -62,4 +62,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable $ module load tbb ``` -Read more at the [Intel TBB](intel-tbb.html) page. \ No newline at end of file +Read more at the [Intel TBB](intel-tbb/) page. \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md index d7dd7454e9085bf903534be6be9059adf74afa85..e5ef60196501d18a0d0e48b3de96952bf150cfdc 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md @@ -241,7 +241,7 @@ Automatic Offload using Intel MKL Library ----------------------------------------- Intel MKL includes an Automatic Offload (AO) feature that enables computationally intensive MKL functions called in user code to benefit from attached Intel Xeon Phi coprocessors automatically and transparently. -Behavioral of automatic offload mode is controlled by functions called within the program or by environmental variables. 
Complete list of controls is listed [ here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm). +Behavior of the automatic offload mode is controlled by functions called within the program or by environment variables. A complete list of controls is available [here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm). The Automatic Offload may be enabled by either an MKL function call within the code: @@ -255,7 +255,7 @@ or by setting environment variable $ export MKL_MIC_ENABLE=1 ``` -To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation). +For more information about automatic offload, please refer to the "[Using Intel® MKL Automatic Offload on Intel® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or the [Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation). ### Automatic offload example @@ -511,7 +511,7 @@ The compilation command for this example is: $ g++ cmdoptions.cpp gemm.cpp ../common/basic.cpp ../common/cmdparser.cpp ../common/oclobject.cpp -I../common -lOpenCL -o gemm -I/apps/intel/opencl/include/ ``` -To see the performance of Intel Xeon Phi performing the DGEMM run the example as follows: +To see the performance of the Intel Xeon Phi performing the DGEMM, run the example as follows: ```bash ./gemm -d 1 ``` @@ -884,4 +884,4 @@ Please note each host or accelerator is listed only per files. 
User has to speci Optimization ------------ -For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") \ No newline at end of file +For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md index 9cce45ebbd2127f8cea56ae89865317a1b18edb7..7d165b42210ae739105b0f9e379f19f54ff70ac2 100644 --- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md +++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md @@ -5,7 +5,7 @@ ISV Licenses On Anselm cluster there are also installed commercial software applications, also known as ISV (Independent Software Vendor), which are subjects to licensing. The licenses are limited and their usage may be restricted only to some users or user groups. -Currently Flex License Manager based licensing is supported on the cluster for products Ansys, Comsol and Matlab. More information about the applications can be found in the general [Software](../software.1.html) section. +Currently Flex License Manager based licensing is supported on the cluster for products Ansys, Comsol and Matlab. 
More information about the applications can be found in the general software section. If an ISV application was purchased for educational (research) purposes and also for commercial purposes, then there are always two separate versions maintained and suffix "edu" is used in the name of the non-commercial version. @@ -16,7 +16,7 @@ Overview of the licenses usage ### Web interface For each license there is a table, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features -<https://extranet.it4i.cz/anselm/licenses> +<https://extranet.it4i.cz/anselm/licenses> ### Text interface @@ -69,7 +69,7 @@ Names of applications (APP): - matlab - matlab-edu -To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses.html#Licence)), or use: +To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use: **Application and List of provided features** - **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' ' diff --git a/docs.it4i/anselm-cluster-documentation/software/java.md b/docs.it4i/anselm-cluster-documentation/software/java.md index 4755ee2ba4eebaf3919be8fb93209ef16cd98831..29998d001318672709b1646629966c32155fc2b2 100644 --- a/docs.it4i/anselm-cluster-documentation/software/java.md +++ b/docs.it4i/anselm-cluster-documentation/software/java.md @@ -25,5 +25,5 @@ With the module loaded, not only the runtime environment (JRE), but also the dev $ which javac ``` -Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/). 
+Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on the Anselm cluster. In case you require the Java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/). diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md index 1f1043692226c6404be06d049fce26aab7e86a99..c28561cebbd8d363d73e7bd72cede1e42fab12dc 100644 --- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md +++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md @@ -13,7 +13,7 @@ There are situations when Anselm's environment is not suitable for user needs. - Application requires privileged access to operating system - ... and combinations of above cases -We offer solution for these cases - **virtualization**. Anselm's environment gives the possibility to run virtual machines on compute nodes. Users can create their own images of operating system with specific software stack and run instances of these images as virtual machines on compute nodes. Run of virtual machines is provided by standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction.html). +We offer a solution for these cases - **virtualization**. Anselm's environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Running virtual machines is provided by the standard [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/) mechanism. Solution is based on QEMU-KVM software stack and provides hardware-assisted x86 virtualization. 
@@ -26,7 +26,7 @@ Anselm's virtualization does not provide performance and all features of native Virtualization has also some drawbacks, it is not so easy to setup efficient solution. -Solution described in chapter [HOWTO](virtualization.html#howto) is suitable for single node tasks, does not introduce virtual machine clustering. +Solution described in chapter [HOWTO](virtualization/#howto) is suitable for single node tasks, does not introduce virtual machine clustering. >Please consider virtualization as last resort solution for your needs. @@ -37,7 +37,7 @@ Solution described in chapter [HOWTO](virtualization.html#howto) is suitable fo Licensing --------- -IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are ( in accordance with [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in virtual environments. +IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are ( in accordance with [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in virtual environments. >Users are responsible for licensing OS e.g. MS Windows and all software running in their virtual machines. @@ -50,7 +50,7 @@ We propose this job workflow:  -Our recommended solution is that job script creates distinct shared job directory, which makes a central point for data exchange between Anselm's environment, compute node (host) (e.g HOME, SCRATCH, local scratch and other local or cluster filesystems) and virtual machine (guest). 
Job script links or copies input data and instructions what to do (run script) for virtual machine to job directory and virtual machine process input data according instructions in job directory and store output back to job directory. We recommend, that virtual machine is running in so called [snapshot mode](virtualization.html#snapshot-mode), image is immutable - image does not change, so one image can be used for many concurrent jobs. +Our recommended solution is that the job script creates a distinct shared job directory, which makes a central point for data exchange between Anselm's environment, the compute node (host) (e.g. HOME, SCRATCH, local scratch and other local or cluster filesystems) and the virtual machine (guest). The job script links or copies input data and instructions (the run script) for the virtual machine into the job directory; the virtual machine processes the input data according to the instructions in the job directory and stores the output back to the job directory. We recommend running the virtual machine in so-called [snapshot mode](virtualization/#snapshot-mode): the image is immutable - it does not change, so one image can be used for many concurrent jobs. ### Procedure @@ -78,11 +78,11 @@ You can convert your existing image using qemu-img convert command. Supported fo We recommend using advanced QEMU native image format qcow2. -[More about QEMU Images](http://en.wikibooks.org/wiki/QEMU/Images) +[More about QEMU Images](http://en.wikibooks.org/wiki/QEMU/Images) 
+Use virtio devices (for disk/drive and network adapter) and install virtio drivers (paravirtualized drivers) into the virtual machine. There is a significant performance gain when using virtio drivers. For more information see [Virtio Linux](http://www.linux-kvm.org/page/Virtio) and [Virtio Windows](http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers). Disable all unnecessary services and tasks. Restrict all unnecessary operating system operations. @@ -98,7 +98,7 @@ Your image should run some kind of operating system startup script. Startup scri We recommend, that startup script -- maps Job Directory from host (from compute node) +- maps Job Directory from host (from compute node) - runs script (we call it "run script") from Job Directory and waits for application's exit - for management purposes if run script does not exist wait for some time period (few minutes) - shutdowns/quits OS @@ -151,7 +151,7 @@ Example startup script maps shared job script as drive z: and looks for run scri Create job script according recommended -[Virtual Machine Job Workflow](virtualization.html#virtual-machine-job-workflow). +[Virtual Machine Job Workflow](virtualization/#virtual-machine-job-workflow). Example job for Windows virtual machine: @@ -203,7 +203,7 @@ Run script runs application from shared job directory (mapped as drive z:), proc ### Run jobs -Run jobs as usual, see [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction.html). Use only full node allocation for virtualization jobs. +Run jobs as usual, see [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/). Use only full node allocation for virtualization jobs. 
### Running Virtual Machines @@ -228,7 +228,7 @@ Run virtual machine (simple) $ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -vnc :0 ``` -You can access virtual machine by VNC viewer (option -vnc) connecting to IP address of compute node. For VNC you must use [VPN network](../../accessing-the-cluster/vpn-access.html). +You can access virtual machine by VNC viewer (option -vnc) connecting to IP address of compute node. For VNC you must use [VPN network](../../accessing-the-cluster/vpn-access/). Install virtual machine from iso file @@ -246,7 +246,7 @@ Run virtual machine using optimized devices, user network backend with sharing a $ qemu-system-x86_64 -drive file=win.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389 -vnc :0 -snapshot ``` -Thanks to port forwarding you can access virtual machine via SSH (Linux) or RDP (Windows) connecting to IP address of compute node (and port 2222 for SSH). You must use [VPN network](../../accessing-the-cluster/vpn-access.html). +Thanks to port forwarding you can access virtual machine via SSH (Linux) or RDP (Windows) connecting to IP address of compute node (and port 2222 for SSH). You must use [VPN network](../../accessing-the-cluster/vpn-access/). >Keep in mind, that if you use virtio devices, you must have virtio drivers installed on your virtual machine. 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md index b853d1360a5e3474ee3c173e2d9995733ea1180a..4beed6d6e540857609560087f68bf49b647d147e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md @@ -9,7 +9,7 @@ The Anselm cluster provides several implementations of the MPI library: | --- | --- | |The highly optimized and stable **bullxmpi 1.2.4.1** |Partial thread support up to MPI_THREAD_SERIALIZED | |The **Intel MPI 4.1** |Full thread support up to MPI_THREAD_MULTIPLE | - |The [OpenMPI 1.6.5](href="http://www.open-mpi.org)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | + |The [OpenMPI 1.6.5](http://www.open-mpi.org/)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | |The OpenMPI 1.8.1 |Full thread support up to MPI_THREAD_MULTIPLE, MPI-3.0 support | |The **mpich2 1.9** |Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | @@ -135,10 +135,10 @@ In the previous two cases with one or two MPI processes per node, the operating ### Running OpenMPI -The **bullxmpi-1.2.4.1** and [**OpenMPI 1.6.5**](http://www.open-mpi.org/) are both based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI.html) based MPI. +The **bullxmpi-1.2.4.1** and [**OpenMPI 1.6.5**](http://www.open-mpi.org/) are both based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI. ### Running MPICH2 -The **Intel MPI** and **mpich2 1.9** are MPICH2 based implementations. Read more on [how to run MPICH2](running-mpich2.html) based MPI. +The **Intel MPI** and **mpich2 1.9** are MPICH2 based implementations. Read more on [how to run MPICH2](running-mpich2/) based MPI. -The Intel MPI may run on the Intel Xeon Phi accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi.html). 
\ No newline at end of file +The Intel MPI may run on the Intel Xeon Phi accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/). \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md index 9025af6c90774bee2d007db0aa391b476d86e7c0..504dca09be2a12afcfe4db4c4e4ca3ba9d1b3008 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md @@ -28,7 +28,7 @@ You need to import MPI to your python program. Include the following line to the from mpi4py import MPI ``` -The MPI4Py enabled python programs [execute as any other OpenMPI](Running_OpenMPI.html) code.The simpliest way is to run +The MPI4Py enabled python programs [execute as any other OpenMPI](Running_OpenMPI/) code. The simplest way is to run ```bash $ mpiexec python <script>.py @@ -94,5 +94,4 @@ Execute the above code as: $ mpiexec -bycore -bind-to-core python hello_world.py ``` -In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (total of 64 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.html). - +In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (64 processes in total), with each python process bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.html). 
\ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md index 13ce6296e70e8e4ba5905d372f14305508d44306..96e73118392ccf9075d612c362c11397663db42b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md @@ -145,7 +145,7 @@ In this example, we see that ranks have been mapped on nodes according to the or ### Process Binding -The Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node, no options needed. The binding is primarily controlled by environment variables. Read more about mpi process binding on [Intel website](https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm). The MPICH2 uses the -bind-to option Use -bind-to numa or -bind-to core to bind the process on single core or entire socket. +The Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node, no options needed. The binding is primarily controlled by environment variables. Read more about MPI process binding on [Intel website](https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm). MPICH2 uses the -bind-to option. Use -bind-to numa or -bind-to core to bind the process to a single core or an entire socket. ### Bindings verification @@ -159,4 +159,4 @@ In all cases, binding and threading may be verified by executing Intel MPI on Xeon Phi --------------------- -The[MPI section of Intel Xeon Phi chapter](../intel-xeon-phi.html) provides details on how to run Intel MPI code on Xeon Phi architecture. 
\ No newline at end of file +The [MPI section of the Intel Xeon Phi chapter](../intel-xeon-phi/) provides details on how to run Intel MPI code on the Xeon Phi architecture. \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md index 26594af551c0d33f51e72b56d5373fd90da1d060..d963631218af2e64a581ccdf96525f68ac8a4585 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md @@ -16,7 +16,7 @@ MATLAB®^ is a high-level language and interactive environment for numerical com $ matlab ``` -Read more at the [Matlab page](matlab.md). +Read more at the [Matlab page](matlab/). Octave ------ @@ -27,7 +27,7 @@ GNU Octave is a high-level interpreted language, primarily intended for numerica $ octave ``` -Read more at the [Octave page](octave.md). +Read more at the [Octave page](octave/). R --- @@ -39,4 +39,4 @@ The R is an interpreted language and environment for statistical computing and g $ R ``` -Read more at the [R page](r.md). \ No newline at end of file +Read more at the [R page](r/). \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md index 3ccc8a4f114dd286e1df2441bd31207a5d304323..ad7f4980fbfb414cfa2209904c26ae939948b216 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md @@ -3,7 +3,7 @@ Matlab 2013-2014 Introduction ------------ ->This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](copy_of_matlab.html). 
+>This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/). Matlab is available in the latest stable version. There are always two variants of the release: @@ -24,9 +24,9 @@ By default the EDU variant is marked as default. If you need other version or va If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations use Matlab on the compute nodes via PBS Pro scheduler. -If you require the Matlab GUI, please follow the general informations about [running graphical applications](https://docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/resolveuid/11e53ad0d2fd4c5187537f4baeedff33). +If you require the Matlab GUI, please follow the general information about running graphical applications. -Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](https://docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)) is recommended. +The Matlab GUI is quite slow using the X forwarding built into PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part) is recommended. To run Matlab with GUI, use @@ -73,7 +73,7 @@ System MPI library allows Matlab to communicate through 40Gbps Infiniband QDR in ### Parallel Matlab interactive session -Once this file is in place, user can request resources from PBS. Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this page](https://docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/resolveuid/11e53ad0d2fd4c5187537f4baeedff33). 
+Once this file is in place, the user can request resources from PBS. The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see the documentation on running graphical applications.
```bash
$ xhost +
@@ -189,9 +189,9 @@ You can copy and paste the example in a .m file and execute. Note that the matla
### Non-interactive Session and Licenses
-If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the " -l _feature_matlab_MATLAB=1" for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, please [look here](../isv_licenses.html).
+If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the " -l _feature_matlab_MATLAB=1" for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../isv_licenses/).
-In case of non-interactive session please read the [following information](../isv_licenses.html) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
+In case of a non-interactive session, please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
### Matlab Distributed Computing Engines start up time
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index b57aa7ee2ba31b78dba59595f3347abe21cc12c5..e9a817d67017395cd735d441b6113a83e54d281f
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -22,9 +22,9 @@ By default the EDU variant is marked as default.
If you need other version or va
If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations use Matlab on the compute nodes via PBS Pro scheduler.
-If you require the Matlab GUI, please follow the general informations about [running graphical applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html).
+If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
-Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html)) is recommended.
+The Matlab GUI is quite slow when using the X forwarding built into PBS (qsub -X), so X11 display redirection either via SSH or directly via xauth is recommended (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-system/)).
To run Matlab with GUI, use
@@ -44,7 +44,7 @@ Running parallel Matlab using Distributed Computing Toolbox / Engine
------------------------------------------------------------------------
>Distributed toolbox is available only for the EDU variant
-The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
+The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015.
Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
Delete previously used file mpiLibConf.m, we have observed crashes when using Intel MPI.
@@ -68,7 +68,7 @@ With the new mode, MATLAB itself launches the workers via PBS, so you can either
### Parallel Matlab interactive session
-Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this page](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.html).
+The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-system/).
```bash
$ xhost +
@@ -216,7 +216,7 @@ This method is a "hack" invented by us to emulate the mpiexec functionality foun
Please note that this method is experimental.
-For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](copy_of_matlab.html#running-parallel-matlab-using-distributed-computing-toolbox---engine)
+For this method, you need to use the SalomonDirect profile; import it [the same way as SalomonPBSPro](matlab/#running-parallel-matlab-using-distributed-computing-toolbox---engine).
This is an example of m-script using direct mode:
@@ -247,9 +247,9 @@ This is an example of m-script using direct mode:
### Non-interactive Session and Licenses
-If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the " -l _feature_matlab_MATLAB=1" for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, please [look here](../isv_licenses.html).
+If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the " -l _feature_matlab_MATLAB=1" for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../isv_licenses/).
-In case of non-interactive session please read the [following information](../isv_licenses.html) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
+In case of a non-interactive session, please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
### Matlab Distributed Computing Engines start up time
@@ -274,4 +274,4 @@ Since this is a SMP machine, you can completely avoid using Parallel Toolbox and
### Local cluster mode
-You can also use Parallel Toolbox on UV2000. Use l[ocal cluster mode](copy_of_matlab.html#parallel-matlab-batch-job-in-local-mode), "SalomonPBSPro" profile will not work.
\ No newline at end of file
+You can also use Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 91b59910cbe8af9c0f087832bc97d03804f674dd..ea4edf294772f53cd9e5cf93e34f1b22121f9ce1
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -3,7 +3,7 @@ Octave
Introduction
------------
-GNU Octave is a high-level interpreted language, primarily intended for numerical computations.
It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>
+GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>
Two versions of octave are available on Anselm, via module
@@ -50,7 +50,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
exit
```
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, outputs in the output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution).
The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c code.
This is very useful for running native c subroutines in octave environment.
@@ -58,15 +58,15 @@ The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c c
$ mkoctfile -v
```
-Octave may use MPI for interprocess communication This functionality is currently not supported on Anselm cluster. In case you require the octave interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).
+Octave may use MPI for interprocess communication. This functionality is currently not supported on the Anselm cluster. In case you require the octave interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).
Xeon Phi Support
----------------
-Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi.html) [accelerated nodes](../../compute-nodes.html).
+Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).
### Automatic offload support
-Octave can accelerate BLAS type operations (in particular the Matrix Matrix multiplications] on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel-xeon-phi.html#section-3)
+Octave can accelerate BLAS type operations (in particular the Matrix-Matrix multiplications) on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel-xeon-phi/#section-3).
Example
@@ -90,7 +90,7 @@ In this example, the calculation was automatically divided among the CPU cores a
### Native support
-A version of [native](../intel-xeon-phi.html#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
+A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
- Only command line support. GUI, graph plotting etc. is not supported.
- Command history in interactive mode is not supported.
@@ -98,7 +98,7 @@ A version of [native](../intel-xeon-phi.html#section-4) Octave is compiled for X
Octave is linked with parallel Intel MKL, so it best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, number of threads is set to 120, you can control this with > OMP_NUM_THREADS environment variable.
->Calculations that do not employ parallelism (either by using parallel MKL eg. via matrix operations, fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU.
+>Calculations that do not employ parallelism (either by using parallel MKL, e.g. via matrix operations, the fork() function, the [parallel package](http://octave.sourceforge.net/parallel/) or another mechanism) will actually run slower than on the host CPU.
To use Octave on a node with Xeon Phi:
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/parallel.pdf b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/parallel.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d33d5a99c6e37258bec707a3d2f0aa4e20f5f4a5
Binary files /dev/null and b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/parallel.pdf differ
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
index 779b24cc605063b4da9ee4f69bebd535ba316876..5557ce01989621126dfd0925e2ec9e1da353dd67
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -11,7 +11,7 @@ Another convenience is the ease with which the C code or third party libraries m
Extensive support for parallel computing is available within R.
-Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html>
+Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html>
Modules
-------
@@ -67,11 +67,11 @@ Example jobscript:
exit
```
-This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, outputs in the routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
Parallel R
----------
-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r.html#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
Package parallel
--------------------
@@ -92,7 +92,7 @@ More information and examples may be obtained directly by reading the documentat
> vignette("parallel")
```
-Download the package [parallell](package-parallel-vignette) vignette.
+Download the package [parallel](package-parallel-vignette.pdf) vignette.
The forking is the most simple to use.
Forking family of functions provide parallelized, drop in replacement for the serial apply() family of functions.
@@ -147,9 +147,9 @@ Package Rmpi
------------
>package Rmpi provides an interface (wrapper) to MPI APIs.
-It also provides interactive R slave environment. On Anselm, Rmpi provides interface to the [OpenMPI](../mpi-1/Running_OpenMPI.html).
+It also provides an interactive R slave environment. On Anselm, Rmpi provides an interface to [OpenMPI](../mpi-1/Running_OpenMPI/).
-Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>
+Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>
When using package Rmpi, both openmpi and R modules must be loaded
@@ -347,7 +347,7 @@ mpi.apply Rmpi example:
mpi.quit()
```
-The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()**, function call. The package parallel [example](r.html#package-parallel)[above](r.html#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
+The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()** function call. The package parallel [example above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using mclapply() in place of mpi.parSapply().
Execute the example as:
@@ -365,7 +365,7 @@ Parallel execution
The R parallel jobs are executed via the PBS queue system exactly as any other parallel jobs.
User must create an appropriate jobscript and submit via the **qsub** -Example jobscript for [static Rmpi](r.html#static-rmpi) parallel R execution, running 1 process per core: +Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, running 1 process per core: ```bash #!/bin/bash @@ -394,4 +394,4 @@ Example jobscript for [static Rmpi](r.html#static-rmpi) parallel R execution, ru exit ``` -For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution.html) and general [MPI](../mpi-1.html) sections. \ No newline at end of file +For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/) sections. \ No newline at end of file diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md index dfa94b296da1f33ededea9a9b6ea0d7e5c74ac8e..f1ef09a3e0804b1b6c6eae0525e4c9b5653299e8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md +++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md @@ -198,11 +198,11 @@ CUDA Libraries ### CuBLAS -The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS"). +The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS"). 
**CuBLAS example: SAXPY** -SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification. +SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification. ```cpp /* Includes, system */ @@ -283,8 +283,8 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve ``` >Please note: cuBLAS has its own function for data transfers between CPU and GPU memory: - - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory - - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory + - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory + - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory To compile the code using NVCC compiler a "-lcublas" compiler flag has to be specified: diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md index a6e7465967928bcfd98c070e9e30e41c76d2aefa..108ab8da2de27ab6d27a49e2fd625b6a113cb3a8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md +++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md @@ -5,9 +5,9 @@ OpenFOAM 
Introduction
----------------
-OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation **](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations.
+OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation**](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations.
-Homepage: <http://www.openfoam.com/>
+Homepage: <http://www.openfoam.com/>
###Installed version
@@ -46,7 +46,7 @@ In /opt/modules/modulefiles/engineering you can see installed engineering softwa
lsdyna/7.x.x              openfoam/2.2.1-gcc481-openmpi1.6.5-SP
```
-For information how to use modules please [look here](../environment-and-modules.html "Environment and Modules ").
+For information on how to use modules, please [look here](../environment-and-modules/ "Environment and Modules").
Getting Started
-------------------
@@ -113,11 +113,10 @@ Job submission
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
```
-For information about job submission please [look here](../resource-allocation-and-job-execution/job-submission-and-execution.html "Job submission").
+For information about job submission, please [look here](../resource-allocation-and-job-execution/job-submission-and-execution/ "Job submission").
Running applications in parallel
-------------------------------------------------
-
Run the second case for example external incompressible turbulent flow - case - motorBike.
First we must run serial application blockMesh and decomposePar for preparation of parallel computation.
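The serial preparation steps just described (blockMesh, then decomposePar) can be collected into one jobscript. The sketch below is illustrative only: the case directory name is an assumption, and the module version is copied from the listing earlier in this file; adjust both to your installation.

```bash
#!/bin/bash
# Hypothetical preparation jobscript for the motorBike case; submit e.g. with:
#   qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=01:00:00 prepare.sh
module load openfoam/2.2.1-gcc481-openmpi1.6.5-SP  # version from the module listing above
cd "$PBS_O_WORKDIR"/motorBike   # assumed case directory
blockMesh                       # serial mesh generation
decomposePar                    # split the domain for the parallel solver run
```

The decomposition itself is controlled by system/decomposeParDict inside the case directory; the number of subdomains configured there should match the number of cores requested from PBS.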
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md
index 3448b506e0ddc96b8efeff9e256f5ab79e255b84..26a13716573b00ed645cd52e137bffbbd405aae1
--- a/docs.it4i/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md
@@ -10,7 +10,7 @@ Introduction
ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of exascale size as well as on laptops for smaller data.
-Homepage : <http://www.paraview.org/>
+Homepage: <http://www.paraview.org/>
Installed version
-----------------
@@ -18,7 +18,7 @@ Currently, version 4.0.1 compiled with GCC 4.8.1 against Bull MPI library and OS
Usage
-----
-On Anselm, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and client is launched on your desktop PC to control and view the visualization. Download ParaView client application for your OS here: <http://paraview.org/paraview/resources/software.php>. Important : **your version must match the version number installed on Anselm** ! (currently v4.0.1)
+On Anselm, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and a client is launched on your desktop PC to control and view the visualization. Download the ParaView client application for your OS here: <http://paraview.org/paraview/resources/software.php>. Important: **your version must match the version number installed on Anselm**! (currently v4.0.1)
### Launching server
@@ -28,7 +28,7 @@ To launch the server, you must first allocate compute nodes, for example
```bash
$ qsub -I -q qprod -A OPEN-0-0 -l select=2
```
-to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../resource-allocation-and-job-execution/introduction.html) for details.
+to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../resource-allocation-and-job-execution/introduction/) for details.
After the interactive session is opened, load the ParaView module :
@@ -55,7 +55,7 @@ Because a direct connection is not allowed to compute nodes on Anselm, you must
ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
```
-replace username with your login and cn77 with the name of compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load Anselm connection configuration, t>hen go to Connection-> SSH>->Tunnels to set up the port forwarding. Click Remote radio button. Insert 12345 to Source port textbox. Insert cn77:11111. Click Add button, then Open. [Read more about port forwarding.](https://docs.it4i.cz/anselm-cluster-documentation/software/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)
+replace username with your login and cn77 with the name of the compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load the Anselm connection configuration, then go to Connection -> SSH -> Tunnels to set up the port forwarding. Click the Remote radio button. Insert 12345 into the Source port textbox. Insert cn77:11111. Click the Add button, then Open.
Now launch ParaView client installed on your desktop PC. Select File->Connect..., click Add Server. Fill in the following :
diff --git a/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md b/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md
index 8185c1bdd05c63070d3cf2c5686804af5222828a..e7e2c02930982e719cc701d4858e2b88c62518c3
--- a/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md
@@ -5,7 +5,7 @@ Introduction
------------
Do not use shared filesystems at IT4Innovations as a backup for large amount of data or long-term archiving purposes.
->The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/).
+>IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the [CESNET Storage service](https://du.cesnet.cz/).
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
@@ -13,20 +13,20 @@ User of data storage CESNET (DU) association can become organizations or an indi
User may only use data storage CESNET for data transfer and storage which are associated with activities in science, research, development, the spread of education, culture and prosperity. In detail see “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”.
-The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
+The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements, please contact the CESNET Storage Department directly via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
The procedure to obtain the CESNET access is quick and trouble-free.
-(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
+(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
CESNET storage access
---------------------
### Understanding Cesnet storage
->It is very important to understand the Cesnet storage before uploading data.
Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
+>It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
-Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods.
+Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods.
### SSHFS Access
@@ -80,7 +80,7 @@ Rsync is a fast and extraordinarily versatile file copying tool. It is famous fo
Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
-More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
+More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
Transfer large files to/from Cesnet storage, assuming membership in the Storage VO
diff --git a/docs.it4i/anselm-cluster-documentation/storage/storage.md b/docs.it4i/anselm-cluster-documentation/storage/storage.md
index cba4781863899c65f1654ec8c4162ad660a27c1f..6fa8f0330ea90c031e811cfe31d197ff4e43b20b
--- a/docs.it4i/anselm-cluster-documentation/storage/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage/storage.md
@@ -1,25 +1,25 @@
Storage
=======
-There are two main shared file systems on Anselm cluster, the [HOME](../storage.html#home) and [SCRATCH](../storage.html#scratch).
+There are two main shared file systems on Anselm cluster, the [HOME](../storage/#home) and [SCRATCH](../storage/#scratch).
All login and compute nodes may access same data on shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
Archiving
---------
-Please don't use shared filesystems as a backup for large amount of data or long-term archiving mean. The academic staff and students of research institutions in the Czech Republic can use [CESNET storage service](cesnet-data-storage.html), which is available via SSHFS.
+Please don't use shared filesystems as a backup for large amounts of data or as a long-term archiving means. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](cesnet-data-storage/), which is available via SSHFS.
Shared Filesystems
------------------
-Anselm computer provides two main shared filesystems, the [HOME filesystem](../storage.html#home) and the [SCRATCH filesystem](../storage.html#scratch). Both HOME and SCRATCH filesystems are realized as a parallel Lustre filesystem. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users using fine-grained control.
+Anselm computer provides two main shared filesystems, the [HOME filesystem](../storage/#home) and the [SCRATCH filesystem](../storage/#scratch). Both HOME and SCRATCH filesystems are realized as a parallel Lustre filesystem. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users using fine-grained control.
### Understanding the Lustre Filesystems
-(source <http://www.nas.nasa.gov>)
+(source <http://www.nas.nasa.gov>)
A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
-When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
+When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.

If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.

@@ -72,7 +72,7 @@ Another good practice is to make the stripe count be an integral factor of the n

Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
-Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>
+Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>

### Lustre on Anselm

@@ -100,13 +100,13 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS)

###HOME

-The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The HOME filesystem is mounted in directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.

>The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects. The HOME filesystem should not be used to archive data of past Projects or other unrelated data.

-The files on HOME filesystem will not be deleted until end of the [users lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.html).
+The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).

The filesystem is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.
@@ -127,7 +127,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t

###SCRATCH

-The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.

>The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.

@@ -250,7 +250,7 @@ other::---

Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory.
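The default-ACL inheritance just described can be sketched as a toy model (illustration only, not a real ACL implementation; the `alice` entry and the permission strings are hypothetical examples): entries set with `setfacl -d` on a directory are copied to the access ACL of anything created inside it.

```python
# Toy model of POSIX default-ACL inheritance (not a real ACL implementation).
# A directory carries an "access" ACL (its own) and a "default" ACL; new
# entries created inside it inherit the default ACL as their access ACL.

def inherit_acl(parent_acl):
    """Return the access ACL a newly created file inherits from its parent."""
    return dict(parent_acl.get("default", {}))

parent = {
    "access":  {"user::": "rwx", "group::": "r-x", "other::": "---"},
    "default": {"user::": "rw-", "user:alice:": "rw-", "group::": "r--",
                "mask::": "rw-", "other::": "---"},
}

new_file_acl = inherit_acl(parent)
print(new_file_acl["user:alice:"])  # rw-  (inherited from the default ACL)
```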
Refer to this page for more information on Linux ACL:

-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)

Local Filesystems
-----------------

@@ -309,7 +309,4 @@ Summary

|/scratch|cluster shared jobs' data|Lustre|146 TiB|6 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed|
|/lscratch|node local jobs' data|local|330 GB|100 MB/s|none|Compute nodes|purged after job ends|
|/ramdisk|node local jobs' data|local|60, 90, 500 GB|5-50 GB/s|none|Compute nodes|purged after job ends|
-|/tmp|local temporary files|local|9.5 GB|100 MB/s|none|Compute and login nodes|auto| purged
-
-
-
+|/tmp|local temporary files|local|9.5 GB|100 MB/s|none|Compute and login nodes|auto purged|
\ No newline at end of file
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
index 0d1de9f9d31fbaad49fd56c11921d10b06f1132b..5a2841520fb30afb272c0f3d9c2c0bf426095fd9 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
@@ -12,7 +12,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect

(gnome-session:23691): WARNING **: Cannot open display:**
```

-1. Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
+1.
Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
locate C:\cygwin64\bin\XWin.exe
change it

diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
index c382147321b8a83d918d54f751d7827b48d075b9..0eb37ff947a8f41b101fa2e576fe1462205251ae 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
@@ -11,7 +11,7 @@ Read more about configuring [**X Window System**](x-window-system/).

VNC
---

-The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer").
+The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer").
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md index 4b6b4eb72caf1401f66fcd20908d356eb95f5ffa..3129ac6043626a89f05be9a56db73e1eac42bfda 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md @@ -1,9 +1,9 @@ VNC === -The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse] http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network"). +The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). 
It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network").

-The recommended clients are [TightVNC](http://www.tightvnc.com) or[TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform).
+The recommended clients are [TightVNC](http://www.tightvnc.com) or [TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform).

Create VNC password
-------------------

diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
index deb75a0ae81091959dbd1a23010a2e198b208a8a..d5b1f973d7dce9f0f67782a6f8163eb82c8dfaf5 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
@@ -1,7 +1,7 @@
X Window System
===============

-The X Window system is a principal way to get GUI access to the clusters.
The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network [protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)") that provides a basis for [graphical user interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface") (GUIs) and rich input device capability for [networked computers](http://en.wikipedia.org/wiki/Computer_network "Computer network"). +The X Window system is a principal way to get GUI access to the clusters. The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network [protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)") that provides a basis for [graphical user interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface") (GUIs) and rich input device capability for [networked computers](http://en.wikipedia.org/wiki/Computer_network "Computer network"). >The X display forwarding must be activated and the X server running on client side @@ -37,18 +37,18 @@ In order to display graphical user interface GUI of various software tools, you ### X Server on OS X -Mac OS users need to install [XQuartz server](http://xquartz.macosforge.org/landing/). +Mac OS users need to install [XQuartz server](http://xquartz.macosforge.org/landing/). ### X Server on Windows -There are variety of X servers available for Windows environment. The commercial Xwin32 is very stable and rich featured. The Cygwin environment provides fully featured open-source XWin X server. For simplicity, we recommend open-source X server by the [Xming project](http://sourceforge.net/projects/xming/). 
For stability and full features we recommend the
-[XWin](http://x.cygwin.com/) X server by Cygwin
+There is a variety of X servers available for the Windows environment. The commercial Xwin32 is very stable and feature-rich. The Cygwin environment provides the fully featured open-source XWin X server. For simplicity, we recommend the open-source X server from the [Xming project](http://sourceforge.net/projects/xming/). For stability and full features we recommend the
+[XWin](http://x.cygwin.com/) X server by Cygwin.

|How to use Xwin |How to use Xming |
| --- | --- |
 - |[Install Cygwin](http://x.cygwin.com/)Find and execute XWin.exeto start the X server on Windows desktop computer.[If no able to forward X11 using PuTTY to CygwinX](x-window-system/cygwin-and-x11-forwarding.html) |<p>Use Xlaunch to configure the Xming.<p>Run Xmingto start the X server on Windows desktop computer.|
 + |[Install Cygwin](http://x.cygwin.com/) Find and execute XWin.exe to start the X server on Windows desktop computer. [If not able to forward X11 using PuTTY to CygwinX](cygwin-and-x11-forwarding/) |<p>Use Xlaunch to configure the Xming.<p>Run Xming to start the X server on Windows desktop computer.|

-Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.html](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml)
+Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.shtml](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml)

### Running GUI Enabled Applications

@@ -92,7 +92,7 @@ The Gnome 2.28 GUI environment is available on the clusters. We recommend to use

### Gnome on Linux and OS X

To run the remote Gnome session in a window on Linux/OS X computer, you need to install Xephyr. Ubuntu package is
-xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/landing/).
+xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/landing/).
First, launch Xephyr on the local machine:

```bash
local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 &
```

diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index 1b2f8afd12b741b9945a7d478685b919dfe63dd0..3732e64efdc3eaad55749a1e77db1dd164a28a9a 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -6,10 +6,10 @@ PuTTY - before we start SSH connection

### Windows PuTTY Installer

-We recommned you to download "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator) which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
+We recommend you download "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator), which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).

>After installation you can proceed directly to private keys authentication using ["Putty"](putty#putty).
-"Change Password for Existing Private Key" is optional.
+"Change Password for Existing Private Key" is optional.
"Generate a New Public/Private key pair" is intended for users without Public/Private key in the initial email containing login credentials.
"Pageant" is optional.
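On Linux and OS X, the OpenSSH counterpart of PuTTYgen's "Generate a New Public/Private key pair" is `ssh-keygen`. A minimal sketch (the key file name `mykey` and the passphrase are placeholders, not prescribed values):

```shell
# Generate a 2048-bit RSA key pair protected by a passphrase
# (OpenSSH equivalent of PuTTYgen's key-pair generation).
ssh-keygen -t rsa -b 2048 -N 'my-passphrase' -f mykey -q

# The private key stays on your machine; the .pub half is what gets uploaded.
ls -l mykey mykey.pub
```

PuTTYgen can import and export OpenSSH-format private keys, so a pair generated this way can also be converted for use with PuTTY on Windows.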
diff --git a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
index e54e47f0db4ed67c1e6e4dabdb90cb40a9f69a6d..125f4e686eb750cad94e927d1eeced9f84ae0971 100644
--- a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
+++ b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
@@ -1,12 +1,12 @@
Applying for Resources
======================

-Computational resources may be allocated by any of the following [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) mechanisms.
+Computational resources may be allocated by any of the following [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) mechanisms.

-Academic researchers can apply for computational resources via [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).
+Academic researchers can apply for computational resources via [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).

-Anyone is welcomed to apply via the [Directors Discretion.](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en)
+Anyone is welcome to apply via the [Director's Discretion](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en).

-Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).
+Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).

-In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal.
In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by scientific excellence of the proposal, its computational maturity and expected impacts. Proposals do undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information at [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](obtaining-login-credentials/obtaining-login-credentials/) page.
\ No newline at end of file
+In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by the scientific excellence of the proposal, its computational maturity and expected impacts. Proposals undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information is available on the [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](obtaining-login-credentials/obtaining-login-credentials/) pages.
\ No newline at end of file
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index a3b157f60ec9ef57e19d3df0b6542e0c7095b107..1b19273adb13a218fe9ce37a07c9e5bdfc19f267 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -19,7 +19,7 @@ There are different kinds of certificates, each with a different scope of use. W

Q: Which X.509 certificates are recognised by IT4Innovations?
-------------------------------------------------------------

-Any certificate that has been issued by a Certification Authority (CA) from a member of the IGTF ([http:www.igtf.net](http://www.igtf.net/)) is recognised by IT4Innovations: European certificates are issued by members of the EUGridPMA ([https://www.eugridmpa.org](https://www.eugridpma.org/)), which is part if the IGTF and coordinates the trust fabric for e-Science Grid authentication within Europe. Further the Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech public authorities is accepted.
+Any certificate that has been issued by a Certification Authority (CA) from a member of the IGTF ([http://www.igtf.net](http://www.igtf.net/)) is recognised by IT4Innovations: European certificates are issued by members of the EUGridPMA ([https://www.eugridpma.org](https://www.eugridpma.org/)), which is part of the IGTF and coordinates the trust fabric for e-Science Grid authentication within Europe.
Further, the Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), which is used in electronic contact with Czech public authorities, is accepted.

Q: How do I get a User Certificate that can be used with IT4Innovations?
------------------------------------------------------------------------

@@ -33,7 +33,7 @@ Yes, provided that the CA which provides this service is also a member of IGTF.

Q: Does IT4Innovations support the TERENA certificate service?
--------------------------------------------------------------

- Yes, ITInnovations supports TERENA eScience personal certificates. For more information, please visit [https://tcs-escience-portal.terena.org](https://tcs-escience-portal.terena.org/), where you also can find if your organisation/country can use this service
+ Yes, IT4Innovations supports TERENA eScience personal certificates. For more information, please visit [https://tcs-escience-portal.terena.org](https://tcs-escience-portal.terena.org/), where you can also find whether your organisation/country can use this service.

Q: What format should my certificate take?
------------------------------------------

@@ -53,7 +53,7 @@ Q: What are CA certificates?
----------------------------

Certification Authority (CA) certificates are used to verify the link between your user certificate and the authority which issued it. They are also used to verify the link between the host certificate of a IT4Innovations server and the CA which issued that certificate. In essence they establish a chain of trust between you and the target server. Thus, for some grid services, users must have a copy of all the CA certificates.

-To assist users, SURFsara (a member of PRACE) provides a complete and up-to-date bundle of all the CA certificates that any PRACE user (or IT4Innovations grid services user) will require.
Bundle of certificates, in either p12, PEM or JKS formats, are available from <http://winnetou.sara.nl/prace/certs/>.
+To assist users, SURFsara (a member of PRACE) provides a complete and up-to-date bundle of all the CA certificates that any PRACE user (or IT4Innovations grid services user) will require. Bundles of certificates, in either p12, PEM or JKS format, are available from <http://winnetou.sara.nl/prace/certs/>.

It is worth noting that gsissh-term and DART automatically updates their CA certificates from this SURFsara website. In other cases, if you receive a warning that a server’s certificate can not be validated (not trusted), then please update your CA certificates via the SURFsara website. If this fails, then please contact the IT4Innovations helpdesk.

@@ -63,7 +63,7 @@ Lastly, if you need the CA certificates for a personal Globus 5 installation, th

myproxy-get-trustroots -s myproxy-prace.lrz.de
```

-If you run this command as ’root’, then it will install the certificates into /etc/grid-security/certificates. If you run this not as ’root’, then the certificates will be installed into $HOME/.globus/certificates. For Globus, you can download the globuscerts.tar.gz packet from <http://winnetou.sara.nl/prace/certs/>.
+If you run this command as ’root’, then it will install the certificates into /etc/grid-security/certificates. If you do not run it as ’root’, then the certificates will be installed into $HOME/.globus/certificates. For Globus, you can download the globuscerts.tar.gz package from <http://winnetou.sara.nl/prace/certs/>.

Q: What is a DN and how do I find mine?
---------------------------------------

@@ -106,7 +106,7 @@ To check your certificate (e.g., DN, validity, issuer, public key algorithm, etc

openssl x509 -in usercert.pem -text -noout
```

-To download openssl for both Linux and Windows, please visit <http://www.openssl.org/related/binaries.html>. On Macintosh Mac OS X computers openssl is already pre-installed and can be used immediately.
+To download openssl for both Linux and Windows, please visit <http://www.openssl.org/related/binaries.html>. On Mac OS X computers, openssl is already pre-installed and can be used immediately.

Q: How do I create and then manage a keystore?
----------------------------------------------

@@ -128,7 +128,7 @@ You also can import CA certificates into your java keystore with the tool, e.g.:

where $mydomain.crt is the certificate of a trusted signing authority (CA) and $mydomain is the alias name that you give to the entry.

-More information on the tool can be found at:<http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html>
+More information on the tool can be found at: <http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html>

Q: How do I use my certificate to access the different grid Services?
---------------------------------------------------------------------

@@ -136,7 +136,7 @@ Most grid services require the use of your certificate; however, the format of y

If employing the PRACE version of GSISSH-term (also a Java Web Start Application), you may use either the PEM or p12 formats. Note that this service automatically installs up-to-date PRACE CA certificates.

-If the grid service is UNICORE, then you bind your certificate, in either the p12 format or JKS, to UNICORE during the installation of the client on your local machine.
For more information, please visit [UNICORE6 in PRACE](http://www.prace-ri.eu/UNICORE6-in-PRACE)
+If the grid service is UNICORE, then you bind your certificate, in either the p12 format or JKS, to UNICORE during the installation of the client on your local machine. For more information, please visit [UNICORE6 in PRACE](http://www.prace-ri.eu/UNICORE6-in-PRACE).

If the grid service is part of Globus, such as GSI-SSH, GridFTP or GRAM5, then the certificates can be in either p12 or PEM format and must reside in the "$HOME/.globus" directory for Linux and Mac users or %HOMEPATH%\.globus for Windows users. (Windows users will have to use the DOS command ’cmd’ to create a directory which starts with a ’.’). Further, user certificates should be named either "usercred.p12" or "usercert.pem" and "userkey.pem", and the CA certificates must be kept in a pre-specified directory as follows. For Linux and Mac users, this directory is either $HOME/.globus/certificates or /etc/grid-security/certificates. For Windows users, this directory is %HOMEPATH%\.globus\certificates. (If you are using GSISSH-Term from prace-ri.eu then you do not have to create the .globus directory nor install CA certificates to use this tool alone).

@@ -154,8 +154,8 @@ A proxy certificate is a short-lived certificate which may be employed by UNICOR

Q: What is the MyProxy service?
-------------------------------

-[The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/) , can be employed by gsissh-term and Globus tools, and is an online repository that allows users to store long lived proxy certificates remotely, which can then be retrieved for use at a later date.
+[The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/) can be employed by gsissh-term and Globus tools, and is an online repository that allows users to store long-lived proxy certificates remotely, which can then be retrieved for use at a later date.
Each proxy is protected by a password provided by the user at the time of storage. This is beneficial to Globus users as they do not have to carry their private keys and certificates when travelling; nor do users have to install private keys and certificates on possibly insecure computers.

Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do?

-Please ask the CA that issued your certificate to revoke this certifcate and to supply you with a new one. In addition, please report this to IT4Innovations by contacting [the support team](https://support.it4i.cz/rt).
\ No newline at end of file
+Please ask the CA that issued your certificate to revoke this certificate and to supply you with a new one. In addition, please report this to IT4Innovations by contacting [the support team](https://support.it4i.cz/rt).
\ No newline at end of file
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index 450b58cf981349faa41854de496526bcd15dc448..ed36270e581f42543a96d9359c72a1bafd8a65ee 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -19,14 +19,14 @@ The PI is authorized to use the clusters by the allocation decision issued by th

This is a preferred way of granting access to project resources. Please, use this method whenever it's possible.

-Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.
+Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.

- **Users:** Please, submit your requests for becoming a project member.
- **Primary Investigators:** Please, approve or deny users' requests in the same section. ### Authorization by e-mail (an alternative approach) - In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) and provide following information: + In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) and provide the following information: 1. Identify your project by project ID 2. Provide list of people, including himself, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems. @@ -54,11 +54,11 @@ Should the above information be provided by e-mail, the e-mail **must be** digit The Login Credentials ------------------------- -Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters, should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) providing following information: +Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) providing the following information: 1. Project ID 2. Full name and affiliation -3. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). +3. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). 4. Attach the AUP file. 5. Your preferred username, max 8 characters long.
The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed. 6. In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), @@ -96,7 +96,7 @@ You will receive your personal login credentials by protected e-mail. The login 2. ssh private key and private key passphrase 3. system password -The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password is used for login to the information systems listed on <http://support.it4i.cz/>. +The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. The username and password are used for login to the information systems listed on <http://support.it4i.cz/>. ### Change Passphrase @@ -110,15 +110,15 @@ On Windows, use [PuTTY Key Generator](../accessing-the-clusters/shell-access-and ### Change Password -Change password in your user profile at <https://extranet.it4i.cz/user/> +Change password in your user profile at <https://extranet.it4i.cz/user/> The Certificates for Digital Signatures ------------------------------------------- -We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech authorities is accepted as well.
+We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), which is used in electronic contact with Czech authorities, is accepted as well. Certificate generation process is well-described here: -- [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) +- [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/). @@ -126,7 +126,7 @@ Alternative Way to Personal Certificate ------------------------------------------- Follow these steps **only** if you can not obtain your certificate in a standard way. In case you choose this procedure, please attach a **scan of photo ID** (personal ID or passport or drivers license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials). -1. Go to <https://www.cacert.org/>. +1. Go to <https://www.cacert.org/>. - If there's a security warning, just acknowledge it. 2. Click *Join*. 3. Fill in the form and submit it by the *Next* button.
@@ -145,11 +145,11 @@ Installation of the Certificate Into Your Mail Client The procedure is similar to the following guides: - MS Outlook 2010 - - [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380) - - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp) + - [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380) + - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp) - Mozilla Thunderbird - - [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate) - - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp) + - [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate) + - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp) End of User Account Lifecycle ----------------------------- diff --git a/docs.it4i/img/external.png b/docs.it4i/img/external.png new file mode 100644 index 0000000000000000000000000000000000000000..16f9b92db47a1f1cd9d2320cc7d03122155a5200 Binary files /dev/null and b/docs.it4i/img/external.png differ diff --git a/docs.it4i/img/pdf.png b/docs.it4i/img/pdf.png new file mode 100644 index 0000000000000000000000000000000000000000..64fcbead36b4e6352527f37daf3b2c2d7dcc87a7 Binary files /dev/null and b/docs.it4i/img/pdf.png differ diff --git a/docs.it4i/index.md b/docs.it4i/index.md index d903f07ce2c408126cfd5a447cc87f7ddfd3eef8..8de3e75471f5d922119da63027894de9ab921551 100644 --- a/docs.it4i/index.md +++ b/docs.it4i/index.md @@ -1,7 +1,7 @@ Documentation ============= -Welcome to IT4Innovations documentation pages.
The IT4Innovations national supercomputing center operates supercomputers [Salomon](/salomon/introduction/){:target="_blank"} and [Anselm](/anselm-cluster-documentation/introduction/){:target="_blank"}. The supercomputers are [ available](get-started-with-it4innovations/applying-for-resources/) to academic community within the Czech Republic and Europe and industrial community worldwide. The purpose of these pages is to provide a comprehensive documentation on hardware, software and usage of the computers. +Welcome to IT4Innovations documentation pages. The IT4Innovations national supercomputing center operates the supercomputers [Salomon](/salomon/introduction/) and [Anselm](/anselm-cluster-documentation/introduction/). The supercomputers are [available](get-started-with-it4innovations/applying-for-resources/) to the academic community within the Czech Republic and Europe and to the industrial community worldwide. The purpose of these pages is to provide comprehensive documentation on the hardware, software and usage of the computers. How to read the documentation ----------------------------------------- @@ -16,19 +16,19 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super Getting Help and Support ------------------------ ->Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt). +>Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).
-Use your IT4Innotations username and password to log in to the [support](http://support.it4i.cz/) portal. +Use your IT4Innovations username and password to log in to the [support](http://support.it4i.cz/) portal. Required Proficiency -------------------- >You need basic proficiency in Linux environment. -In order to use the system for your calculations, you need basic proficiency in Linux environment. To gain the proficiency, we recommend you reading the [ introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment and installing a Linux distribution on your personal computer. A good choice might be the [ Fedora](http://fedoraproject.org/) distribution, as it is similar to systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any distribution would do. +In order to use the system for your calculations, you need basic proficiency in the Linux environment. To gain this proficiency, we recommend reading the [introduction to the Linux operating system environment](http://www.tldp.org/LDP/intro-linux/html/) and installing a Linux distribution on your personal computer. A good choice might be the [Fedora](http://fedoraproject.org/) distribution, as it is similar to the systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any distribution would do. >Learn how to parallelize your code! -In many cases, you will run your own code on the cluster.
In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficiency in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained via the [training provided by IT4Innovations](http://prace.it4i.cz). Terminology Frequently Used on These Pages ------------------------------------------ @@ -63,5 +63,4 @@ local $  Errata ------- - -Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in the text or the code we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this documentation. If you find any errata, please report them by visiting http://support.it4i.cz/rt, creating a new ticket, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website. \ No newline at end of file +Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in the text or the code, we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this documentation. If you find any errata, please report them by visiting [http://support.it4i.cz/rt](http://support.it4i.cz/rt), creating a new ticket, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website.
\ No newline at end of file diff --git a/docs.it4i/pbspro-documentation/sitemap.md b/docs.it4i/pbspro-documentation/sitemap.md index 48cb13acc7d47615d8eb162cf6bf861e29fd22df..9374e051cab08425f36f7a065584758e411b9d63 100644 --- a/docs.it4i/pbspro-documentation/sitemap.md +++ b/docs.it4i/pbspro-documentation/sitemap.md @@ -1,4 +1,4 @@ -* [PBSPro Programmer's Guide](pbspro-programmers-guide.pdf) -* [PBSPro Quick Start Guide](pbspro-quick-start-guide.pdf) -* [PBSPro Reference Guide](pbspro-reference-guide.pdf) -* [PBSPro User's Guide](pbspro-users-guide.pdf) \ No newline at end of file +* [PBSPro Programmer's Guide](pbspro-programmers-guide.pdf) +* [PBSPro Quick Start Guide](pbspro-quick-start-guide.pdf) +* [PBSPro Reference Guide](pbspro-reference-guide.pdf) +* [PBSPro User's Guide](pbspro-users-guide.pdf) \ No newline at end of file