diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
index 658374a27617d7a4ec6276e61155815fd4ca76f2..6e60ad22349c953cfc9adc496699e031d58bf9a0 100644
--- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
+++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
@@ -73,7 +73,7 @@ To establish local proxy server on your workstation, install and run SOCKS proxy
local $ ssh -D 1080 localhost
```
-On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
+On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](outgoing-connections/#port-forwarding-from-login-nodes).
@@ -81,4 +81,4 @@ Once the proxy server is running, establish ssh port forwarding from Anselm to t
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
-Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well .
\ No newline at end of file
+Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well.
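The outgoing-connections page patched above chains two steps: a local SOCKS proxy (`ssh -D 1080 localhost`) and a reverse forward from Anselm (`ssh -R 6000:localhost:1080`). The same setup can be kept in `~/.ssh/config`; this is a sketch, the `Host` aliases are invented, and only the hostname and port numbers come from the page:

```
# Hypothetical aliases; hostname and ports (1080, 6000) are the documented defaults.
Host socks-proxy
    HostName localhost
    DynamicForward 1080                  # equivalent of: ssh -D 1080 localhost

Host anselm
    HostName anselm.it4i.cz
    RemoteForward 6000 localhost:1080    # equivalent of: ssh -R 6000:localhost:1080 anselm.it4i.cz
```

With this in place, `ssh socks-proxy` and `ssh anselm` reproduce the two commands from the page, and applications on Anselm can then point their proxy settings at localhost:6000 as described.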
diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md index 49dbfec6b9d12441767d6dbcbc3578068b0fe931..000774923f9ff9d8a43123d3424bb16388e62cfe 100644 --- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md +++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access.md @@ -56,7 +56,7 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com Data Transfer ------------- -Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use dedicated data mover node dm1.anselm.it4i.cz for increased performance. +Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use dedicated data mover node dm1.anselm.it4i.cz for increased performance. |Address|Port|Protocol| |---|---| @@ -109,6 +109,6 @@ $ man scp $ man sshfs ``` -On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc. +On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc. -More information about the shared file systems is available [here](../../storage/storage/). \ No newline at end of file +More information about the shared file systems is available [here](../../storage/storage/). 
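The shell-and-data-access page above names scp and sftp as the transfer protocols and dm1.anselm.it4i.cz as the dedicated data mover node. For unattended transfers, sftp's batch mode can drive the same endpoint; a sketch with hypothetical file names (only the host comes from the page):

```
# transfer.batch -- hypothetical file names; run as:
#   sftp -b transfer.batch username@dm1.anselm.it4i.cz
put large-input.tar.gz
get results.tar.gz
bye
```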
diff --git a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md index 0231f61a1e6b2e576b391601f57ba1db7ad1ebeb..09068d696049234fecf4c7c0cb80cf5376cf7ba1 100644 --- a/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md +++ b/docs.it4i/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md @@ -7,7 +7,7 @@ Accessing IT4Innovations internal resources via VPN !!! Note "Note" **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch** - Workaround can be found at [vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html) + Workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.html) For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems: @@ -83,4 +83,4 @@ After a successful logon, you can see a green circle with a tick mark on the loc  -For disconnecting, right-click on the AnyConnect client icon in the system tray and select **VPN Disconnect**. \ No newline at end of file +For disconnecting, right-click on the AnyConnect client icon in the system tray and select **VPN Disconnect**. 
diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md index f1bcc05718de34cdd9e2a4b4521532e9187aa68f..85cf3c05bae514cc1a06b79127196affde8f1575 100644 --- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md +++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md @@ -63,7 +63,6 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu Processor Architecture ---------------------- - Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes without accelerator and fat nodes) and Intel Xeon E5-2470 (nodes with accelerator). Processors support Advanced Vector Extensions (AVX) 256-bit instruction set. ### Intel Sandy Bridge E5-2665 Processor @@ -133,4 +132,4 @@ Memory Architecture - 8 DDR3 DIMMS per CPU - 2 DDR3 DIMMS per channel - Data rate support: up to 1600MT/s -- Populated memory: 16x 32GB DDR3 DIMM 1600Mhz \ No newline at end of file +- Populated memory: 16x 32GB DDR3 DIMM 1600Mhz diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md index fc263ee7637dfeab012ee26765f08cced759000b..a056c909203ca64b89d229c4b05bd3ed3ef36ba1 100644 --- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md +++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md @@ -78,10 +78,10 @@ PrgEnv-intel sets up the INTEL development environment in conjunction with the I ### Application Modules Path Expansion -All application modules on Salomon cluster (and further) will be build using tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). In case that you want to use some applications that are build by EasyBuild already, you have to modify your MODULEPATH environment variable. +All application modules on Salomon cluster (and further) will be build using tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). 
+In case that you want to use some applications that are built by EasyBuild already, you have to modify your MODULEPATH environment variable.
```bash
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
```
-This command expands your searched paths to modules. You can also add this command to the .bashrc file to expand paths permanently. After this command, you can use same commands to list/add/remove modules as is described above.
\ No newline at end of file
+This command expands the paths searched for modules. You can also add this command to the .bashrc file to make the expansion permanent. After this command, you can use the same commands to list/add/remove modules as described above.
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index fe790d9e256e4cfa02b3ffd6a90bbee20ef6bf80..3a72330a38fa13cc4d165cf4e8a2add9f4c1dfce 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -3,7 +3,7 @@ Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110 accelerated nodes and 2 fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320TB /home disk storage to store the user files. The 146TB shared /scratch storage is available for the scratch data.
-The Fat nodes are equipped with large amount (512GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage.
-Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
+The Fat nodes are equipped with large amount (512GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
@@ -518,4 +518,4 @@ The parameters are summarized in the following tables:
|MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110|
|Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
-For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
\ No newline at end of file
+For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index 75113f0a3638d316d87c05e1d2972d70eeab96cb..6db81fd060f288f14e5f238fa19bd493dcb290f9 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -3,7 +3,7 @@ Introduction
Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64GB RAM, and 500GB harddrive. Nodes are interconnected by fully non-blocking fat-tree Infiniband network and equipped with Intel Sandy Bridge processors.
A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). -The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). +The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users. diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md index 498ba52e697535a7dcd7abac853e7be08186b790..7f8a989eb0c91704703f7c09888ab9f646c830e1 100644 --- a/docs.it4i/anselm-cluster-documentation/network.md +++ b/docs.it4i/anselm-cluster-documentation/network.md @@ -1,11 +1,11 @@ Network ======= -All compute and login nodes of Anselm are interconnected by [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data. 
+All compute and login nodes of Anselm are interconnected by [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data. Infiniband Network ------------------ -All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree. +All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree. The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes. @@ -34,4 +34,4 @@ $ ssh 10.2.1.110 $ ssh 10.1.1.108 ``` -In this example, we access the node cn110 by Infiniband network via the ib0 interface, then from cn110 to cn108 by Ethernet network. \ No newline at end of file +In this example, we access the node cn110 by Infiniband network via the ib0 interface, then from cn110 to cn108 by Ethernet network. diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md index 6e854b72f2a23996317a265dee52c930c51d0862..8bf2f5c696785727b4fb0e6ba79971d3b2ebbdad 100644 --- a/docs.it4i/anselm-cluster-documentation/prace.md +++ b/docs.it4i/anselm-cluster-documentation/prace.md @@ -5,11 +5,11 @@ Intro ----- PRACE users coming to Anselm as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. 
PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required. -All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. +All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. Help and Support -------------------- -If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). +If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction/). Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz. 
@@ -30,11 +30,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here: -- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) -- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) -- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) -- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) +- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) +- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) +- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) +- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) +- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) Before you start to use any of the services don't forget to create a proxy certificate from your certificate: @@ -209,7 +209,7 @@ For production runs always use scratch file systems, either the global shared or All system wide installed software on the cluster is made available to the users via the modules. The information about the environment and modules usage is in this [section of general documentation](environment-and-modules/). -PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). +PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). ```bash $ module load prace @@ -231,14 +231,14 @@ qprace**, the PRACE \***: This queue is intended for normal production runs. It ### Accounting & Quota -The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. 
-The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/).
+The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/).
-PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
+PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree". Please note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
!!! Note "Note"
-    The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
+    The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
```bash
$ it4ifree
@@ -256,4 +256,4 @@ By default file system quota is applied.
To check the current status of the quota, use
```bash
$ lfs quota -u USER_LOGIN /scratch
```
-If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase.
\ No newline at end of file
+If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase.
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index bf415fb3623ae7890f460eb11d8fa5ac93ca5828..a0f18e30baea6921413903fa2c1c4cbd7a3fd08a 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -30,7 +30,7 @@ How to use the service
### Setup and start your own TurboVNC server.
-TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/>
+TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/>
**Always use TurboVNC on both sides** (server and client) **don't mix TurboVNC and other VNC implementations** (TightVNC, TigerVNC, ...) as the VNC protocol implementation may slightly differ and diminish your user experience by introducing picture artifacts, etc.
@@ -219,4 +219,4 @@ To have an idea how the settings are affecting the resulting picture quality thre
3. 
JPEG image quality = 10 - \ No newline at end of file + diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md index 20dd27a856b3cd9738ad97c59bdf646ea4019ed7..3addeac9ee24bef2d48e5da71c8e4fd3d23edd2c 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md @@ -312,4 +312,4 @@ Unzip the archive in an empty directory on Anselm and follow the instructions in ```bash $ unzip capacity.zip $ cat README -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md index 99c0219a02eaf93b65f765750d1480f9b6ebb57d..7a0c3fdd757cd3483c3dc200ba5e72b33fb9681c 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md @@ -36,4 +36,4 @@ Use GNU Parallel and/or Job arrays when running (many) single core jobs. In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization. In this chapter, we discuss the the recommended way to run huge number of jobs, including **ways to run huge number of single core jobs**. -Read more on [Capacity computing](capacity-computing/) page. \ No newline at end of file +Read more on [Capacity computing](capacity-computing/) page. 
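The capacity-computing material above recommends job arrays or GNU Parallel for running large numbers of single-core jobs. The usual tasklist pattern behind both can be sketched locally; the file names here are hypothetical, and on the cluster the index would come from `$PBS_ARRAY_INDEX` rather than being hard-coded:

```shell
#!/bin/bash
# Sketch of the tasklist pattern: one input per line, each array sub-job
# selects its own line by index. File names are hypothetical.
set -e

# Build a task list, one task (input file) per line.
printf '%s\n' input1.dat input2.dat input3.dat > tasklist

# Each array sub-job would read its index from $PBS_ARRAY_INDEX;
# here we emulate sub-job number 2.
IDX=2
TASK=$(sed -n "${IDX}p" tasklist)
echo "processing $TASK"
```

The same tasklist also drives the GNU Parallel variant, where `parallel` reads the file and hands one line to each worker instead of PBS handing one index to each sub-job.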
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md index b871d1228c7e45259fcd05d888c853775e2ac2af..d86e3a8f184facd524c8bf39cee0ffc27751262e 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-priority.md @@ -18,7 +18,7 @@ Queue priority is priority of queue where job is queued before execution. Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority. -Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues> +Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues> ### Fairshare priority @@ -34,7 +34,7 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb Usage counts allocated corehours (ncpus*walltime). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week). Jobs queued in queue qexp are not calculated to project's usage. ->Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>. +>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>. Calculated fairshare priority can be also seen as Resource_List.fairshare attribute of a job. @@ -65,4 +65,4 @@ It means, that jobs with lower execution priority can be run before jobs with hi !!! Note "Note" It is **very beneficial to specify the walltime** when submitting jobs. -Specifying more accurate walltime enables better schedulling, better execution times and better resource usage. 
Jobs with suitable (small) walltime could be backfilled - and overtake job(s) with higher priority. \ No newline at end of file +Specifying more accurate walltime enables better schedulling, better execution times and better resource usage. Jobs with suitable (small) walltime could be backfilled - and overtake job(s) with higher priority. diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md index b89eb1f17827d2fdfb40f443df95bfa62a512a20..4f91a18ea914db14f5398bad2b478d365d072d8c 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md @@ -408,4 +408,4 @@ In this example, some directory on the home holds the input file input and execu ### Other Jobscript Examples -Further jobscript examples may be found in the software section and the [Capacity computing](capacity-computing/) section. \ No newline at end of file +Further jobscript examples may be found in the software section and the [Capacity computing](capacity-computing/) section. diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md index d7f49f6a3724482d4893432c9d5a35ce699158f0..f84fedbe5d78db31ad662e585efb362ad293b0e3 100644 --- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md +++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md @@ -30,11 +30,11 @@ The job wall clock time defaults to **half the maximum time**, see table above. 
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R). -Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>. +Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>. ### Queue status ->Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/> +>Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>  @@ -112,7 +112,7 @@ The resources that are currently subject to accounting are the core-hours. The c ### Check consumed resources !!! Note "Note" - The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> + The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> User may check at any time, how many core-hours have been consumed by himself/herself and his/her projects. The command is available on clusters' login nodes. 
@@ -123,4 +123,4 @@ Password: -------- ------- ------ -------- ------- OPEN-0-0 1500000 400644 225265 1099356 DD-13-1 10000 2606 2606 7394 -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md index a450ed452af93df8aeffe089c0019775ab52b6e3..93dfb1d2bdb6b1d0f259f0713ab31fcb9444a226 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md @@ -1,8 +1,7 @@ ANSYS CFX ========= -[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX) -software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language. +[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX) software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. 
The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language. To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command. @@ -54,4 +53,4 @@ Header of the pbs file (above) is common and description can be find on [this s Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. >Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def **License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**. -[More about licensing here](licensing/) \ No newline at end of file +[More about licensing here](licensing/) diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md index 57177b84a70468bf0b23080ff38a20530a650b50..569a60075262f09a514e9405412a70730f04c11e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md @@ -1,11 +1,11 @@ ANSYS Fluent ============ -[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent) +[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent) software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to 
semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach. 1. Common way to run Fluent over pbs file ------------------------------------------------------- +----------------------------------------- To run ANSYS Fluent in batch mode you can utilize/modify the default fluent.pbs script and execute it via the qsub command. ```bash @@ -161,4 +161,4 @@ ANSLIC_ADMIN Utility will be run ANSYS Academic Research license should be moved up to the top of the list. - \ No newline at end of file + diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md index 47c66dd254b0e00bc5964a382d5cfcf92c7a7ab8..24a16a848199c9049b98a6112fecadc2bdb68364 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md @@ -1,7 +1,7 @@ ANSYS LS-DYNA ============= -**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. 
CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. +**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command. @@ -51,6 +51,6 @@ echo Machines: $hl /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized in the job. The rest of the script also assumes this structure of allocated resources. -Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ansys solver via parameter i= \ No newline at end of file +A working directory has to be created before submitting the PBS job into the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the ANSYS solver via the parameter i= diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md index 20c751e0ded0da6e5791e98f79ec86deb21a0c42..22f8e5a2608a0cf5b6d13a269f1a1b700865d95a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md @@ -1,7 +1,7 @@ ANSYS MAPDL =========== -**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)** +**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)** software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis.
The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver. To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command. @@ -50,9 +50,9 @@ echo Machines: $hl /ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources. +The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized in the job. The rest of the script also assumes this structure of allocated resources. Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ansys solver via parameter -i **License** should be selected by parameter -p.
Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN) -[More about licensing here](licensing/) \ No newline at end of file +[More about licensing here](licensing/) diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md index 8faf1827ee0d3385b7a7919e625d04c70f5b7532..ad4aa8865bedac8f0b5569ed4d76c39306a7617c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md @@ -1,7 +1,7 @@ Overview of ANSYS Products ========================== -**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) +**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products).
[ More about licensing here](ansys/licensing/) diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md index 774a62e9c33eaa1c9a293d19b840c3edb4d4bbbe..c3ae365595af698c705a1c00b4726ac9937c095c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md @@ -1,7 +1,7 @@ LS-DYNA ======= -[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing. +[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. 
Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing. Anselm provides **1 commercial license of LS-DYNA without HPC** support now. @@ -31,6 +31,6 @@ module load lsdyna /apps/engineering/lsdyna/lsdyna700s i=input.k ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources. +The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn, which directly specify the number of nodes (computers) and cores per node (ppn) that will be utilized in the job. The rest of the script also assumes this structure of allocated resources. -Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common LS-DYNA **.k** file which is attached to the LS-DYNA solver via parameter i= \ No newline at end of file +A working directory has to be created before submitting the PBS job into the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the parameter i= diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md index 15c62090c39e8638f4156e3469e85271148541f4..526dbd6624aea1dbc142e48f33822087575e4dab 100644 --- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md +++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md @@ -5,13 +5,13 @@ Molpro is a complete system of ab initio programs for molecular electronic struc About Molpro ------------ -Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/). +Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/). License ------- Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (eg. academic research group licence, parallel execution). -To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee). +To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
Installed version ----------------- @@ -31,7 +31,7 @@ Compilation parameters are default: Running ------ -Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details. +Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details. !!! Note "Note" The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS. 
@@ -61,4 +61,4 @@ You are advised to use the -d option to point to a directory in [SCRATCH filesys # delete scratch directory rm -rf /scratch/$USER/$PBS_JOBID -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md index a0beff73a3d4b908447dba9822cea119079332cd..2694f43540ebf008b624fe086e88151f7676e752 100644 --- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md +++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md @@ -7,11 +7,10 @@ Introduction ------------------------- NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters. -[Homepage](http://www.nwchem-sw.org/index.php/Main_Page) +[Homepage](http://www.nwchem-sw.org/index.php/Main_Page) Installed versions ------------------ - The following versions are currently installed: - 6.1.1, not recommended, problems have been observed with this version @@ -40,7 +39,7 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app Options -------------------- -Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives : +Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives : - MEMORY : controls the amount of memory NWChem will use -- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. 
"scf direct" \ No newline at end of file +- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct" diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md index bbc63027e54c5a7c58e3a2567d3f0162d562d2e8..28fa2fe85034999d650ba621c6102d3af81465a2 100644 --- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md +++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md @@ -3,15 +3,15 @@ COMSOL Multiphysics® Introduction ------------------------- -[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many +[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications. 
-- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module), -- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module), -- [CFD Module](http://www.comsol.com/cfd-module), -- [Acoustics Module](http://www.comsol.com/acoustics-module), -- and [many others](http://www.comsol.com/products) +- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module), +- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module), +- [CFD Module](http://www.comsol.com/cfd-module), +- [Acoustics Module](http://www.comsol.com/acoustics-module), +- and [many others](http://www.comsol.com/products) COMSOL also allows an interface support for equation-based modelling of partial differential equations. @@ -118,4 +118,4 @@ cd /apps/engineering/comsol/comsol43b/mli matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job" ``` -This example shows how to run Livelink for MATLAB with following configuration: 3 nodes and 16 cores per node. Working directory has to be created before submitting (comsol_matlab.pbs) job script into the queue. Input file (test_job.m) has to be in working directory or full path to input file has to be specified. The Matlab command option (-r ”mphstart”) created a connection with a COMSOL server using the default port number. \ No newline at end of file +This example shows how to run LiveLink for MATLAB with the following configuration: 3 nodes and 16 cores per node. The working directory has to be created before submitting the job script (comsol_matlab.pbs) into the queue. The input file (test_job.m) has to be in the working directory, or the full path to the input file has to be specified. The MATLAB command option (-r "mphstart") creates a connection with a COMSOL server using the default port number.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md index efca43d2e5b858615f600f2c8b72741f731e52b3..e09cd64a3f7724af581315d1e3a936b84faa381f 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md @@ -94,4 +94,4 @@ Users can find original User Guide after loading the DDT module: $DDTPATH/doc/userguide.pdf ``` -[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html) \ No newline at end of file +[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html) diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md index 34307fa3c98e5ad9a0ca0506133a11ca9588e3ef..9264e27d82a4673ca69bd864e3c50194117806c3 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md @@ -59,4 +59,4 @@ Now lets profile the code: $ perf-report mpirun ./mympiprog.x ``` -Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bounded. \ No newline at end of file +Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. 
We can see that the code is very efficient on MPI and is CPU bound. diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md index 4bfc6f68360e20b08a2d0ce0f6642bca9fbf9667..95faa664a69b0683a5159ed0eef90d075ab6ecef 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md @@ -19,7 +19,7 @@ Each node in the tree is colored by severity (the color scheme is displayed at t Installed versions ------------------ -Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/) : +Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/): - cube/4.2.3-gcc, compiled with GCC - cube/4.2.3-icc, compiled with Intel compiler @@ -34,5 +34,5 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation After loading the apropriate module, simply launch cube command, or alternatively you can use scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before to opening them with CUBE, not all performance data will be available. References -1. <http://www.scalasca.org/software/cube-4.x/download.html> +1. <http://www.scalasca.org/software/cube-4.x/download.html> diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md index eba68ad3907d5bce1f3879d4e6bb70d0aacc266a..e8c678789f81fb3c593801f67dda182e203fd531 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md @@ -58,4 +58,4 @@ Vampir is a GUI trace analyzer for traces in OTF format. $ vampir ``` -Read more at the [Vampir](../../salomon/software/debuggers/vampir/) page.
\ No newline at end of file +Read more at the [Vampir](../../salomon/software/debuggers/vampir/) page. diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md index 97d46065b217b6bcf0289978e2dc37e2f48cd9d1..ff6c5a426f23d7f128e66ec242dafdf802f662f7 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md @@ -191,7 +191,7 @@ Can be used as a sensor for ksysguard GUI, which is currently not installed on A API --- -In a similar fashion to PAPI, PCM provides a C++ API to access the performance counter from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API. +In a similar fashion to PAPI, PCM provides a C++ API to access the performance counter from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API. !!! Note "Note" Due to security limitations, using PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as root user) @@ -277,6 +277,6 @@ Sample output: References ---------- -1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization> -2. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide. -3. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation \ No newline at end of file +1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization> +2. 
<https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide. +3. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md index c1f2801837c61801fb4a3aec3c797020e5736983..a3796c229299fc51a24eff3673613c319621e738 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md @@ -71,4 +71,4 @@ You may also use remote analysis to collect data from the MIC and then analyze i References ---------- -1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors \ No newline at end of file +1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md index 83d9a00147760dd63468815d63c4fd51f92487a4..c3e16ad32da249dcb739c72257f2e34b868858e9 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md @@ -3,7 +3,6 @@ PAPI Introduction ------------ - Performance Application Programming Interface (PAPI) is a portable interface to access hardware performance counters (such as instruction counts and cache misses) found in most modern architectures. With the new component framework, PAPI is not limited only to CPU counters, but offers also components for CUDA, network, Infiniband etc. 
PAPI provides two levels of interface - a simpler, high level interface and more detailed low level interface. @@ -92,19 +91,19 @@ The include path is automatically added by papi module to $INCLUDE. ### High level API -Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API. +Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API. ### Low level API -Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API. +Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API. ### Timers -PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers> +PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers> ### System information -PAPI can be used to query some system infromation, such as CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information> +PAPI can be used to query some system information, such as CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information> Example ------- @@ -235,6 +234,6 @@ To use PAPI in offload mode, you need to provide both host and MIC versions of P References ---------- -1. <http://icl.cs.utk.edu/papi/> Main project page -2. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki -3. <http://icl.cs.utk.edu/papi/docs/> API Documentation \ No newline at end of file +1. <http://icl.cs.utk.edu/papi/> Main project page +2. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki +3.
<http://icl.cs.utk.edu/papi/docs/> API Documentation diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md index d144998313d7efbe809d588933cd7374cf69d28a..34c49f79f05ef4572f22ce78666c7f74ab8ba379 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md @@ -3,7 +3,7 @@ Scalasca Introduction ------------------------- -[Scalasca](http://www.scalasca.org/) is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes. +[Scalasca](http://www.scalasca.org/) is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes. Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications. @@ -68,4 +68,4 @@ Refer to [CUBE documentation](cube/) on usage of the GUI viewer. References ---------- -1. <http://www.scalasca.org/> \ No newline at end of file +1. 
<http://www.scalasca.org/> diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md index 2e401424a3177060649e85d795230b71c7fc3bc9..29ecc08820cef2344662cbdf58652aee6d68474b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md @@ -3,7 +3,7 @@ Score-P Introduction ------------ -The [Score-P measurement infrastructure](http://www.vi-hps.org/projects/score-p/) is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. +The [Score-P measurement infrastructure](http://www.vi-hps.org/projects/score-p/) is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. Score-P can be used as an instrumentation tool for [Scalasca](scalasca/). @@ -75,7 +75,7 @@ An example in C/C++ : end subroutine foo ``` -Please refer to the [documentation for description of the API](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf). +Please refer to the [documentation for description of the API](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf). ###Manual instrumentation using directives @@ -115,4 +115,4 @@ and in Fortran : end subroutine foo ``` -The directives are ignored if the program is compiled without Score-P. Again, please refer to the [documentation](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf) for a more elaborate description. \ No newline at end of file +The directives are ignored if the program is compiled without Score-P. Again, please refer to the [documentation](https://silc.zih.tu-dresden.de/scorep-current/pdf/scorep.pdf) for a more elaborate description. 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md index a4f33d610f910d392c077e5da0cf17a888de100e..8e448a8c4c2225abf03fead61d138507073b27c9 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md @@ -159,4 +159,4 @@ More information regarding the command line parameters of the TotalView can be f Documentation ------------- -[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features. \ No newline at end of file +[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features. diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md index ca68b44d02f3ccb6635b4b20e3da8bbd1fde17bb..3ac747aa5762eea4b91533c0485eb94b9509f6a8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md @@ -5,10 +5,9 @@ Valgrind is a tool for memory debugging and profiling. About Valgrind -------------- - Valgrind is an open-source tool, used mainly for debuggig memory-related problems, such as memory leaks, use of uninitalized memory etc. in C/C++ applications. The toolchain was however extended over time with more functionality, such as debugging of threaded applications, cache profiling, not limited only to C/C++. -Valgind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). 
Valgrind uses a virtual machine and dynamic recompilation of binary code, because of that, you can expect that programs being debugged by Valgrind run 5-100 times slower. +Valgrind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). Valgrind uses a virtual machine and dynamic recompilation of binary code, because of that, you can expect that programs being debugged by Valgrind run 5-100 times slower. The main tools available in Valgrind are : @@ -17,14 +16,14 @@ The main tools available in Valgrind are : - **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications. - **Cachegrind**, a cache profiler. - **Callgrind**, a callgraph analyzer. -- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/). +- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/). Installed versions ------------------ There are two versions of Valgrind available on Anselm. - Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support. -- Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind. +- Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind. 
Usage ----- @@ -261,4 +260,4 @@ Prints this output : (note that there is output printed for every launched MPI process) ==31319== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4) ``` -We can see that Valgrind has reported use of unitialised memory on the master process (which reads the array to be broadcasted) and use of unaddresable memory on both processes. \ No newline at end of file +We can see that Valgrind has reported use of uninitialised memory on the master process (which reads the array to be broadcast) and use of unaddressable memory on both processes. diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md index 780b38f8dcb075cf0c13ea711774505aeb5b17b1..f2a0558e50da67982ea5f29fe11e51f1de3a3126 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md @@ -1,4 +1,4 @@ -Vampir +Vampir ====== Vampir is a commercial trace analysis and visualisation tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces, you need to use a trace collection tool (such as [Score-P](../../../salomon/software/debuggers/score-p/)) first to collect the traces. @@ -20,4 +20,4 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir References ---------- -[1]. <https://www.vampir.eu> \ No newline at end of file +[1]. 
<https://www.vampir.eu> diff --git a/docs.it4i/anselm-cluster-documentation/software/gpi2.md b/docs.it4i/anselm-cluster-documentation/software/gpi2.md index 7de6e02815e7d3048a2e220ff138599cc3be1a74..ab600fedf603e71f5ad855b6242bb46f47696445 100644 --- a/docs.it4i/anselm-cluster-documentation/software/gpi2.md +++ b/docs.it4i/anselm-cluster-documentation/software/gpi2.md @@ -7,7 +7,7 @@ Introduction ------------ Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable and fault tolerant interface for parallel applications. -The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments. +The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments. Modules ------- @@ -169,4 +169,4 @@ At the same time, in another session, you may start the gaspi logger: [cn80:0] Hello from rank 1 of 2 ``` -In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs library. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes 1 core each. The GPI module must be loaded on the master compute node (in this example the cn79), gaspi_logger is used from different session to view the output of the second process. 
\ No newline at end of file +In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs library. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes 1 core each. The GPI module must be loaded on the master compute node (in this example the cn79), gaspi_logger is used from a different session to view the output of the second process. diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md index ab52b214871d56c008f157112e3610a08ef387f3..a209a0d1786c4da41d165006cb257d52da1b96db 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md @@ -27,11 +27,11 @@ The compiler recognizes the omp, simd, vector and ivdep pragmas for OpenMP paral $ ifort -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.f mysubroutines.f -o myprog.x ``` -Read more at <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm> +Read more at <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm> Sandy Bridge/Haswell binary compatibility ----------------------------------------- Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use Haswell architecture. >The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. >To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags >designated to invoke these instructions. 
For the Intel compiler suite, there are two ways of doing this: - Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes. -- Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. \ No newline at end of file +- Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md index dcef17d86acffda145018a4cadca2a137ac10e2e..92e19f9c03e985fc8660d19a4a5df5a09942f4be 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-debugger.md @@ -71,5 +71,5 @@ Run the idb debugger in GUI mode. 
The menu Parallel contains number of tools for Further information ------------------- -Exhaustive manual on idb features and usage is published at [Intel website](http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/debugger/user_guide/index.htm) +Exhaustive manual on idb features and usage is published at [Intel website](http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/debugger/user_guide/index.htm) diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md index f450284383c5639c30be2895d63f0aaa36744be3..08067b718f70cd6ad18bd7d62ceb3997c5799223 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md @@ -78,6 +78,6 @@ You will need the ipp module loaded to run the ipp enabled executable. This may Code samples and documentation ------------------------------ -Intel provides number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating use of IPP. +Intel provides number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating use of IPP. 
-Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) \ No newline at end of file +Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md index b8ea29b8faabce069e9872afae1f85d1610de164..cb6a8ea1d5b082c412d22f54423f6fad9a3661a0 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md @@ -14,7 +14,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e - Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search. - Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver. -For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). +For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). Intel MKL version 13.5.192 is available on Anselm @@ -38,7 +38,7 @@ The MKL library provides number of interfaces. The fundamental once are the LP64 ### Linking -Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. 
See also [examples](intel-mkl/#examples) below. +Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below. You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include rpath on the compile line: @@ -120,4 +120,4 @@ The MKL is capable to automatically offload the computations o the MIC accelerat Further reading --------------- -Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). \ No newline at end of file +Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md index 75238c7da6642934d3cacaa2d2b116cf2f352fde..ab6defdbe8aa6cded4d12376dce93a93a8b9dccb 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md @@ -40,5 +40,5 @@ You will need the tbb module loaded to run the tbb enabled executable. 
This may Further reading --------------- -Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> +Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md index f5270fe2ca0c8e3526aa390ffdb434b69f1eb788..206c70444ca5dcb00613409a3ef27a17f9730ceb 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/introduction.md @@ -62,4 +62,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable $ module load tbb ``` -Read more at the [Intel TBB](intel-tbb/) page. \ No newline at end of file +Read more at the [Intel TBB](intel-tbb/) page. diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md index e55b49f5a640627a0a13a8066884ca48983aed35..8ea1ba38a2798804b67e164d9e97eeed623bb5d8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md @@ -243,7 +243,7 @@ Automatic Offload using Intel MKL Library ----------------------------------------- Intel MKL includes an Automatic Offload (AO) feature that enables computationally intensive MKL functions called in user code to benefit from attached Intel Xeon Phi coprocessors automatically and transparently. -Behavioral of automatic offload mode is controlled by functions called within the program or by environmental variables. Complete list of controls is listed [ here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm). 
+Behavior of automatic offload mode is controlled by functions called within the program or by environment variables. A complete list of controls is available [here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm). The Automatic Offload may be enabled by either an MKL function call within the code: @@ -257,7 +257,7 @@ or by setting environment variable $ export MKL_MIC_ENABLE=1 ``` -To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation). +For more information about automatic offload, refer to the "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or the [Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation). ### Automatic offload example @@ -901,4 +901,4 @@ Please note each host or accelerator is listed only per files. 
User has to speci Optimization ------------ -For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") \ No newline at end of file +For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization") diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md index c9ae13dfd1c63da235e946c6002b99cd5b577917..bfadff58d378b7cf893e74acf78a7fbb9d6d206b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md +++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md @@ -17,7 +17,7 @@ Overview of the licenses usage ### Web interface For each license there is a table, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features -<https://extranet.it4i.cz/anselm/licenses> +<https://extranet.it4i.cz/anselm/licenses> ### Text interface @@ -104,4 +104,4 @@ Run an interactive PBS job with 1 Matlab EDU license, 1 Distributed Computing To $ qsub -I -q qprod -A PROJECT_ID -l select=2:ncpus=16 -l feature__matlab-edu__MATLAB=1 -l feature__matlab-edu__Distrib_Computing_Toolbox=1 -l feature__matlab-edu__MATLAB_Distrib_Comp_Engine=32 ``` -The license is used and accounted only with the real usage of the product. 
So in this example, the general Matlab is used after Matlab is run vy the user and not at the time, when the shell of the interactive job is started. Also the Distributed Computing licenses are used at the time, when the user uses the distributed parallel computation in Matlab (e. g. issues pmode start, matlabpool, etc.). \ No newline at end of file +The license is used and accounted only with the real usage of the product. So in this example, the general Matlab license is used only after Matlab is run by the user, not when the shell of the interactive job starts. Similarly, the Distributed Computing licenses are used only when the user actually starts distributed parallel computation in Matlab (e.g. issues pmode start, matlabpool, etc.). diff --git a/docs.it4i/anselm-cluster-documentation/software/java.md b/docs.it4i/anselm-cluster-documentation/software/java.md index 29998d001318672709b1646629966c32155fc2b2..4755ee2ba4eebaf3919be8fb93209ef16cd98831 100644 --- a/docs.it4i/anselm-cluster-documentation/software/java.md +++ b/docs.it4i/anselm-cluster-documentation/software/java.md @@ -25,5 +25,5 @@ With the module loaded, not only the runtime environment (JRE), but also the dev $ which javac ``` -Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/). +Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/). 
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md index 6cacfbf5e6317660657a30c4e34cea53afc94e82..4d7c290df1a8663db253d545ad8521876ccd3d23 100644 --- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md +++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md @@ -38,7 +38,7 @@ Solution described in chapter [HOWTO](virtualization/#howto) is suitable for si Licensing --------- -IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are ( in accordance with [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in virtual environments. +IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in virtual environments. !!! Note "Note" Users are responsible for licensing OS e.g. MS Windows and all software running in their virtual machines. @@ -80,11 +80,11 @@ You can convert your existing image using qemu-img convert command. Supported fo We recommend using advanced QEMU native image format qcow2. -[More about QEMU Images](http://en.wikibooks.org/wiki/QEMU/Images) +[More about QEMU Images](http://en.wikibooks.org/wiki/QEMU/Images) ### Optimize image of your virtual machine -Use virtio devices (for disk/drive and network adapter) and install virtio drivers (paravirtualized drivers) into virtual machine. There is significant performance gain when using virtio drivers. 
For more information see [Virtio Linux](http://www.linux-kvm.org/page/Virtio) and [Virtio Windows](http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers). +Use virtio devices (for disk/drive and network adapter) and install virtio drivers (paravirtualized drivers) into virtual machine. There is significant performance gain when using virtio drivers. For more information see [Virtio Linux](http://www.linux-kvm.org/page/Virtio) and [Virtio Windows](http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers). Disable all unnecessary services and tasks. Restrict all unnecessary operating system operations. @@ -153,7 +153,7 @@ Example startup script maps shared job script as drive z: and looks for run scri Create job script according recommended -[Virtual Machine Job Workflow](virtualization.html#virtual-machine-job-workflow). +[Virtual Machine Job Workflow](virtualization.html#virtual-machine-job-workflow). Example job for Windows virtual machine: @@ -410,4 +410,4 @@ For Windows guests we recommend these options, life will be easier: ```bash $ qemu-system-x86_64 ... -localtime -usb -usbdevice tablet -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md index ac353f47e239b1370f4cacd32bea59c2f123fff8..0ca699766fa71ebb3d70dee5940ca37c29460feb 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md @@ -214,4 +214,4 @@ Some options have changed in OpenMPI version 1.8. 
|--bind-to-socket |--bind-to socket | |-bysocket |--map-by socket | |-bycore |--map-by core | - |-pernode |--map-by ppr:1:node | \ No newline at end of file + |-pernode |--map-by ppr:1:node | diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md index b7179fa477f29c7683d0abc6fade06881edf56d8..c5c9baa5ef99a316a309ff7cc1b642d7189c638a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md @@ -9,7 +9,7 @@ The Anselm cluster provides several implementations of the MPI library: | --- | --- | |The highly optimized and stable **bullxmpi 1.2.4.1** |Partial thread support up to MPI_THREAD_SERIALIZED | |The **Intel MPI 4.1** |Full thread support up to MPI_THREAD_MULTIPLE | - |The [OpenMPI 1.6.5](href="http://www.open-mpi.org)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | + |The [OpenMPI 1.6.5](http://www.open-mpi.org/)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | |The OpenMPI 1.8.1 |Full thread support up to MPI_THREAD_MULTIPLE, MPI-3.0 support | |The **mpich2 1.9** |Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support | @@ -140,10 +140,10 @@ In the previous two cases with one or two MPI processes per node, the operating 
\ No newline at end of file +The Intel MPI may run on the Intel Xeon Phi accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/). diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md index c4955467bdb5aeabd5728ab54dd37604f819435f..08ec7b4689d47006c323b885846b4812cb110c0a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md @@ -92,4 +92,4 @@ Execute the above code as: $ mpiexec -bycore -bind-to-core python hello_world.py ``` -In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (total of 64 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.html). \ No newline at end of file +In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (total of 64 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.html). diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md index 7045165e8aab0df0316546e6bb9d926f3853fbdc..52653f0290c66bba3605d5ae3a10176310afcde1 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md @@ -148,7 +148,7 @@ In this example, we see that ranks have been mapped on nodes according to the or ### Process Binding -The Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node, no options needed. 
The binding is primarily controlled by environment variables. Read more about mpi process binding on [Intel website](https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm). The MPICH2 uses the -bind-to option Use -bind-to numa or -bind-to core to bind the process on single core or entire socket. +The Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node; no options are needed. The binding is primarily controlled by environment variables. Read more about MPI process binding on the [Intel website](https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm). MPICH2 uses the -bind-to option. Use -bind-to core to bind each process to a single core, or -bind-to numa to bind it to an entire NUMA node (socket). ### Bindings verification @@ -162,4 +162,4 @@ In all cases, binding and threading may be verified by executing Intel MPI on Xeon Phi --------------------- -The[MPI section of Intel Xeon Phi chapter](../intel-xeon-phi/) provides details on how to run Intel MPI code on Xeon Phi architecture. \ No newline at end of file +The [MPI section of the Intel Xeon Phi chapter](../intel-xeon-phi/) provides details on how to run Intel MPI code on the Xeon Phi architecture. diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md index d963631218af2e64a581ccdf96525f68ac8a4585..aa0cd2515ad189e3c21f9c0a4aba57f1c72aae58 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/introduction.md @@ -39,4 +39,4 @@ The R is an interpreted language and environment for statistical computing and g $ R ``` -Read more at the [R page](r/).
\ No newline at end of file +Read more at the [R page](r/). diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md index 1e863daf76d14146007feeb186713b03ee17c81f..b8a9f59011cbf7c02324bf34641580114e28e1ba 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md @@ -45,7 +45,7 @@ Running parallel Matlab using Distributed Computing Toolbox / Engine !!! Note "Note" Distributed toolbox is available only for the EDU variant -The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1). +The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1). Delete previously used file mpiLibConf.m, we have observed crashes when using Intel MPI. @@ -277,4 +277,4 @@ Since this is a SMP machine, you can completely avoid using Parallel Toolbox and ### Local cluster mode -You can also use Parallel Toolbox on UV2000. Use l[ocal cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode), "SalomonPBSPro" profile will not work. \ No newline at end of file +You can also use Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.
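The mapping and binding hunks above (`-bycore`/`-bysocket`, `--map-by`, `-bind-to-core`) all describe the same idea: which rank lands on which core. The following is a minimal illustrative sketch of the two placement policies, assuming 2 sockets with 8 cores each (the Anselm node layout described in the docs); it is not part of the patched documentation and does not use any real MPI API.

```python
# Illustrative sketch: how OpenMPI's -bycore vs -bysocket placement
# policies assign MPI ranks to (socket, core) slots on a single node.
# Assumes 2 sockets x 8 cores, as on Anselm compute nodes.

def place_ranks(n_ranks, sockets=2, cores_per_socket=8, policy="bycore"):
    """Return a list of (socket, core) slots, one per MPI rank."""
    slots = []
    for rank in range(n_ranks):
        if policy == "bycore":
            # fill socket 0 core by core, then move to socket 1
            socket, core = divmod(rank, cores_per_socket)
        elif policy == "bysocket":
            # alternate sockets round-robin, which the docs recommend
            # for memory-bandwidth-bound codes
            socket = rank % sockets
            core = rank // sockets
        else:
            raise ValueError("unknown policy: %s" % policy)
        slots.append((socket, core))
    return slots

print(place_ranks(4, policy="bycore"))    # [(0, 0), (0, 1), (0, 2), (0, 3)]
print(place_ranks(4, policy="bysocket"))  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

With `-bysocket`, consecutive ranks spread across both sockets, so a 2-rank job gets the memory bandwidth of both NUMA nodes instead of one.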
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md index 4334ee786f01d339f86925adbfc3cbc1900b3b12..f5b62facdb304d30c8cfdeb2b806872f8c4b527c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md @@ -205,4 +205,4 @@ Starting Matlab workers is an expensive process that requires certain amount of |16|256|1008| |8|128|534| |4|64|333| - |2|32|210| \ No newline at end of file + |2|32|210| diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md index 330f10ad9f6851bcdd85e04f08c0181f7afd6f78..2bc6b01dc63055d7713db14e6b19564fc2cb296c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md @@ -3,7 +3,7 @@ Octave Introduction ------------ -GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/> +GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. 
It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/> Two versions of octave are available on Anselm, via module @@ -50,7 +50,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript exit ``` -This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution). +This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution). The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c code. This is very useful for running native c subroutines in octave environment. @@ -58,7 +58,7 @@ The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c c $ mkoctfile -v ``` -Octave may use MPI for interprocess communication This functionality is currently not supported on Anselm cluster. In case you require the octave interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/). +Octave may use MPI for interprocess communication. This functionality is currently not supported on the Anselm cluster. In case you require the Octave interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).
Xeon Phi Support ---------------- @@ -99,7 +99,7 @@ Octave is linked with parallel Intel MKL, so it best suited for batch processing variable. !!! Note "Note" - Calculations that do not employ parallelism (either by using parallel MKL eg. via matrix operations, fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU. + Calculations that do not employ parallelism (either by using parallel MKL, e.g. via matrix operations, the fork() function, the [parallel package](http://octave.sourceforge.net/parallel/) or another mechanism) will actually run slower than on the host CPU. To use Octave on a node with Xeon Phi: @@ -107,4 +107,4 @@ To use Octave on a node with Xeon Phi: $ ssh mic0 # login to the MIC card $ source /apps/tools/octave/3.8.2-mic/bin/octave-env.sh # set up environment variables $ octave -q /apps/tools/octave/3.8.2-mic/example/test0.m # run an example -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md index 35661aad7e5d316d18d9ab74c6718990e914f036..627ce6c53c74c099815e5fd24968fe93be3248cb 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md @@ -11,7 +11,7 @@ Another convenience is the ease with which the C code or third party libraries m Extensive support for parallel computing is available within R. -Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html> +Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html> Modules ------- @@ -152,7 +152,7 @@ Package Rmpi It also provides interactive R slave environment. On Anselm, Rmpi provides interface to the [OpenMPI](../mpi-1/Running_OpenMPI/).
-Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf> +Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf> When using package Rmpi, both openmpi and R modules must be loaded @@ -398,4 +398,4 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin exit ``` -For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections. \ No newline at end of file +For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections. diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md index e27676c6702ff264207f8a6a34d0bfc94619efdb..3964da323565c1000b47a7111251f29ae3a24754 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md @@ -73,4 +73,4 @@ Load modules and compile: Run the example as [Intel MPI program](../mpi/running-mpich2/).
-Read more on FFTW usage on the [FFTW website.](http://www.fftw.org/fftw3_doc/) \ No newline at end of file +Read more on FFTW usage on the [FFTW website](http://www.fftw.org/fftw3_doc/). diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md index 0258e866cdd560234b95313479c1dd42f4163d1f..17285809258ab79c51107c027a0a9ce5e4154d01 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md @@ -14,7 +14,7 @@ The library covers a wide range of topics in numerical computing. Routines are a Special Functions Vectors and Matrices - Permutations Combinations + Permutations Combinations Sorting BLAS Support @@ -144,4 +144,4 @@ Load modules and compile: icc dwt.c -o dwt.x -Wl,-rpath=$LIBRARY_PATH -mkl -lgsl ``` -In this example, we compile the dwt.c code using the Intel compiler and link it to the MKL and GSL library, note the -mkl and -lgsl options. The library search path is compiled in, so that no modules are necessary to run the code. \ No newline at end of file +In this example, we compile the dwt.c code using the Intel compiler and link it to the MKL and GSL libraries; note the -mkl and -lgsl options. The library search path is compiled in, so that no modules are necessary to run the code. diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md index 53b95fe595dbb25d73e9d43131e00f4942b3f51c..aceb866ff90059ecef4f9777248ad3e14041ecec 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md @@ -3,7 +3,7 @@ HDF5 Hierarchical Data Format library. Serial and MPI parallel version.
-[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs. +[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs. Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules: @@ -24,7 +24,7 @@ Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, comp The module sets up environment variables, required for linking and running HDF5 enabled applications. Make sure that the choice of HDF5 module is consistent with your choice of MPI library. Mixing MPI of different implementations may have unpredictable results. !!! Note "Note" - Be aware, that GCC version of **HDF5 1.8.11** has serious performance issues, since it's compiled with -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. 
For more informations, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811> + Be aware that the GCC version of **HDF5 1.8.11** has serious performance issues, since it is compiled with the -O0 optimization flag. This version is provided only for testing of code compiled by GCC and IS NOT recommended for production computations. For more information, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811> All GCC versions of **HDF5 1.8.13** are not affected by the bug, are compiled with -O3 optimizations and are recommended for production computations. @@ -88,4 +88,4 @@ Load modules and compile: Run the example as [Intel MPI program](../anselm-cluster-documentation/software/mpi/running-mpich2/). -For further informations, please see the website: <http://www.hdfgroup.org/HDF5/> \ No newline at end of file +For further information, please see the website: <http://www.hdfgroup.org/HDF5/> diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md index 0a9c25c505a656ef26a1b08c1d8b1e04c98421f5..0683f6bd1a65e784f3277bf980be437ecf07a21d 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md @@ -31,4 +31,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable $ module load tbb ``` -Read more at the [Intel TBB](../intel-suite/intel-tbb/) page. \ No newline at end of file +Read more at the [Intel TBB](../intel-suite/intel-tbb/) page.
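The HDF5 hunks above describe the two primary HDF5 objects: groups (containers) and datasets (multidimensional arrays) addressed by POSIX-like paths. As a conceptual sketch only — this deliberately does not use the real HDF5 API — the data model can be pictured as nested dictionaries:

```python
# Conceptual sketch (NOT the real HDF5 API): an HDF5 file stores groups
# (containers, modelled here as dicts) and datasets (n-dimensional
# arrays, modelled as nested lists), addressed by POSIX-like paths.

def create(root, path, data):
    """Create a dataset at 'path', making intermediate groups as needed."""
    parts = path.strip("/").split("/")
    node = root
    for part in parts[:-1]:          # descend into / create groups
        node = node.setdefault(part, {})
    node[parts[-1]] = data           # the leaf is the dataset
    return root

f = {}                               # an empty "file"
create(f, "/results/run1/temperature", [[21.5, 21.7], [22.0, 22.1]])
create(f, "/results/run1/pressure", [101.3, 101.2])
print(f["results"]["run1"]["pressure"])   # [101.3, 101.2]
```

The path and data values here are invented for illustration; in a real program the same structure would be built with the HDF5 library linked via the modules described above.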
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md index 32558f85f284e6a6d41af8a205015114c7f2a63e..dd88fef723d642e04133914dbb0d13830c245b0e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md @@ -76,8 +76,8 @@ To test if the MAGMA server runs properly we can run one of examples that are pa **export OMP_NUM_THREADS=16** -See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/). +See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/). References ---------- -[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. al, [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf) \ No newline at end of file +[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. 
al, [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf) diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md index f08618e77d3372504d6a83a89e3c2342d9febe3a..e914c36fce429151eaa28adb6caad0f0e3126ee1 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md @@ -9,11 +9,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu Resources --------- -- [project webpage](http://www.mcs.anl.gov/petsc/) -- [documentation](http://www.mcs.anl.gov/petsc/documentation/) - - [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf) - - [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html) -- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM) +- [project webpage](http://www.mcs.anl.gov/petsc/) +- [documentation](http://www.mcs.anl.gov/petsc/documentation/) + - [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf) + - [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html) +- PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM) Modules ------- @@ -25,13 +25,13 @@ You can start using PETSc on Anselm by loading the PETSc module. 
Module names ob module load petsc/3.4.4-icc-impi-mkl-opt ``` -where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html). +where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](http://www.mcs.anl.gov/petsc/features/threads.html). External libraries ------------------ PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently satisfied with Intel MPI and Intel MKL in Anselm `petsc` modules. -PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below a list of libraries currently included in Anselm `petsc` modules. 
+PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below a list of libraries currently included in Anselm `petsc` modules. All these libraries can be used also alone, without PETSc. Their static or shared program libraries are available in `$PETSC_DIR/$PETSC_ARCH/lib` and header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to a specific PETSc instance based on the petsc module loaded. @@ -39,24 +39,24 @@ All these libraries can be used also alone, without PETSc. Their static or share ### Libraries linked to PETSc on Anselm (as of 11 April 2015) - dense linear algebra - - [Elemental](http://libelemental.org/) + - [Elemental](http://libelemental.org/) - sparse linear system solvers - - [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282) - - [MUMPS](http://mumps.enseeiht.fr/) - - [PaStiX](http://pastix.gforge.inria.fr/) - - [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html) - - [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu) - - [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist) + - [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282) + - [MUMPS](http://mumps.enseeiht.fr/) + - [PaStiX](http://pastix.gforge.inria.fr/) + - [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html) + - [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu) + - [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist) - input/output - - [ExodusII](http://sourceforge.net/projects/exodusii/) - - [HDF5](http://www.hdfgroup.org/HDF5/) - - [NetCDF](http://www.unidata.ucar.edu/software/netcdf/) + - [ExodusII](http://sourceforge.net/projects/exodusii/) + - [HDF5](http://www.hdfgroup.org/HDF5/) + - [NetCDF](http://www.unidata.ucar.edu/software/netcdf/) - partitioning - - 
[Chaco](http://www.cs.sandia.gov/CRF/chac.html) - - [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview) - - [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview) - - [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/) + - [Chaco](http://www.cs.sandia.gov/CRF/chac.html) + - [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview) + - [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview) + - [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/) - preconditioners & multigrid - - [Hypre](http://acts.nersc.gov/hypre/) - - [Trilinos ML](http://trilinos.sandia.gov/packages/ml/) - - [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai) \ No newline at end of file + - [Hypre](http://acts.nersc.gov/hypre/) + - [Trilinos ML](http://trilinos.sandia.gov/packages/ml/) + - [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai) diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md index c49c005f8835128d085b88df6b9846011c251013..beee8c20ef89f01d1bf38742e7b513828bfb17d6 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md @@ -19,7 +19,7 @@ Current Trilinos installation on ANSELM contains (among others) the following ma - **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization) - **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc. 
-For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov.](http://trilinos.sandia.gov) +For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov](http://trilinos.sandia.gov). ### Installed version @@ -33,7 +33,7 @@ First, load the appropriate module: $ module load trilinos ``` -For the compilation of CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt> +For the compilation of a CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt> For compiling using simple makefiles, Trilinos provides Makefile.export system, which allows users to include important Trilinos variables directly into their makefiles. This can be done simply by inserting the following line into the makefile: @@ -47,4 +47,4 @@ or include Makefile.export.<package> ``` -if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>. \ No newline at end of file +if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>.
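The Makefile.export mechanism patched above boils down to a flat file of NAME = VALUE assignments (Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, ...) that a makefile can include. A small sketch of that idea, with entirely invented variable values — not taken from a real Trilinos installation:

```python
# Sketch only: what including Makefile.export amounts to -- a flat file
# of NAME = VALUE assignments. The values below are hypothetical and
# for illustration only.

def parse_export_file(text):
    """Parse 'NAME = VALUE' lines into a dict, skipping blanks/comments."""
    variables = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name, _, value = line.partition("=")
        variables[name.strip()] = value.strip()
    return variables

sample = """
# exported by a (hypothetical) Trilinos install
Trilinos_CXX_COMPILER = mpicxx
Trilinos_INCLUDE_DIRS = -I/apps/trilinos/include
Trilinos_LIBRARY_DIRS = -L/apps/trilinos/lib
"""
exports = parse_export_file(sample)
print(exports["Trilinos_CXX_COMPILER"])   # mpicxx
```

In a real build these variables are consumed directly by make after the `include Makefile.export.Trilinos` line, not parsed by hand.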
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md index 2bd26525cf6d724354c1068bcfb9237d68e9931f..fab15f43356f5da42e9c19f90f40368578e78efe 100644 --- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md +++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md @@ -198,11 +198,11 @@ CUDA Libraries ### CuBLAS -The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS"). +The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS"). **CuBLAS example: SAXPY** -SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification. +The SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y, overwriting the latter with the result. The description of the cuBLAS function can be found in the [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). The code can be pasted in the file and compiled without any modification.
```cpp /* Includes, system */ @@ -285,8 +285,8 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve !!! Note "Note" Please note: cuBLAS has its own function for data transfers between CPU and GPU memory: - - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory - - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory + - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory + - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory To compile the code using NVCC compiler a "-lcublas" compiler flag has to be specified: @@ -307,4 +307,4 @@ To compile the same code with Intel compiler: ```bash $ module load cuda intel $ icc -std=c99 test_cublas.c -o test_cublas_icc -lcublas -lcudart -``` \ No newline at end of file +``` diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md index 58051b735498cb64c12b0b6409a453ac776e42bd..4c24b3245bab051ae2208caa89e564f1e13d497a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md @@ -3,7 +3,7 @@ Diagnostic component (TEAM) ### Access -TEAM is available at the following address: <http://omics.it4i.cz/team/> +TEAM is available at the following address: <http://omics.it4i.cz/team/> !!! Note "Note" The address is accessible only via [VPN. ](../../accessing-the-cluster/vpn-access/) @@ -16,4 +16,4 @@ TEAM (27) is an intuitive and easy-to-use web tool that fills the gap between  -**Figure 5.** Interface of the application. 
Panels for defining targeted regions of interest can be set up by just drag and drop known disease genes or disease definitions from the lists. Thus, virtual panels can be interactively improved as the knowledge of the disease increases. \ No newline at end of file +**Figure 5.** Interface of the application. Panels for defining targeted regions of interest can be set up by just drag and drop known disease genes or disease definitions from the lists. Thus, virtual panels can be interactively improved as the knowledge of the disease increases. diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md index 22abd458949c52b652915bbf6ac5160ff86ead59..83179e3c4a692dbe9a0237f360cb2be65a09a77e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md @@ -287,16 +287,16 @@ Details on the pipeline ------------------------------------ The pipeline calls the following tools: -- [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput +- [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data. -- [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at +- [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size. 
-- [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed-up mapping high-quality reads, and *Smith-Waterman*> (SW) to increase sensitivity when reads cannot be mapped using BWT. -- [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data. -- [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible. -- [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported. -- [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format. -- [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox. +- [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. 
HPG Aligner implements and combines two well-known algorithms: *Burrows-Wheeler Transform* (BWT) to speed up mapping high-quality reads, and *Smith-Waterman* (SW) to increase sensitivity when reads cannot be mapped using BWT. +- [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data. +- [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible. +- [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported. +- [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format. +- [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox. This listing shows which tools are used in each step of the pipeline: @@ -339,7 +339,7 @@ The output folder contains all the subfolders with the intermediate data. This f **Figure 7**. *TEAM upload panel.* *Once the file has been uploaded, a panel must be chosen from the Panel* list. Then, pressing the Run button the diagnostic process starts. -Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts. 
TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29)^ and COSMIC (23). +Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button starts the diagnostic process. TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29) and COSMIC (23).  @@ -386,6 +386,6 @@ References 25. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res., 39, D691–D697. 26. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942. 27. Alemán Z, García-García F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7. -28. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)> (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. 
[Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")>42 :W88-93. +28. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668) (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.") 42:W88-93. 29. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985. -30. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46. \ No newline at end of file +30. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46. 
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md index 24019c9ee801811970c989450b2c0c36076f6336..437a3640737a4ea2d6c8aa5a50683a6af5df35af 100644 --- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md +++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md @@ -5,9 +5,9 @@ OpenFOAM Introduction ---------------- -OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation **](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations. +OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation**](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations. -Homepage: <http://www.openfoam.com/> +Homepage: <http://www.openfoam.com/> ###Installed version 
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md index f7235ff4750814af887043f63ac1c705657ace4c..6ed62410f5c91138b3df27bbdcc53b446d8a79ca 100644 --- a/docs.it4i/anselm-cluster-documentation/software/paraview.md +++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md @@ -10,7 +10,7 @@ Introduction ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of exascale size as well as on laptops for smaller data. 
-Homepage : <http://www.paraview.org/> +Homepage: <http://www.paraview.org/> Installed version ----------------- @@ -18,7 +18,7 @@ Currently, version 4.0.1 compiled with GCC 4.8.1 against Bull MPI library and OS Usage ----- -On Anselm, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and client is launched on your desktop PC to control and view the visualization. Download ParaView client application for your OS here: <http://paraview.org/paraview/resources/software.php>. Important : **your version must match the version number installed on Anselm** ! (currently v4.0.1) +On Anselm, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and the client is launched on your desktop PC to control and view the visualization. Download the ParaView client application for your OS here: <http://paraview.org/paraview/resources/software.php>. Important: **your version must match the version number installed on Anselm** (currently v4.0.1)! ### Launching server @@ -78,4 +78,4 @@ Remember to close the interactive session after you finish working with ParaView GPU support ----------- -Currently, GPU acceleration is not supported in the server and ParaView will not take advantage of accelerated nodes on Anselm. Support for GPU acceleration might be added in the future. \ No newline at end of file +Currently, GPU acceleration is not supported in the server and ParaView will not take advantage of accelerated nodes on Anselm. Support for GPU acceleration might be added in the future. 
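Since the ParaView client reaches the server through an SSH tunnel, a sketch of how the client-side forwarding command is assembled may help. This is only illustrative: the node name `cn77`, the username, and port `11111` (ParaView's conventional default server port) are placeholder assumptions, and the command is printed rather than executed here.

```shell
#!/bin/sh
# Sketch: compose (but do not run) the client-side SSH tunnel that forwards
# a local port to the ParaView server port on a compute node.
# NODE, PORT and the username are illustrative placeholders.
NODE=cn77
PORT=11111   # conventional default ParaView server port
TUNNEL="ssh -TN -L ${PORT}:${NODE}:${PORT} username@anselm.it4i.cz"
echo "$TUNNEL"
```

With such a tunnel in place, the ParaView client would be pointed at `localhost:${PORT}` instead of the compute node directly.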
diff --git a/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md b/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md index c4764a91878165083b1da1039b38808c9c702e84..35dd71ba778f5b1e45a2acc8c979323a5041da49 100644 --- a/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md +++ b/docs.it4i/anselm-cluster-documentation/storage/cesnet-data-storage.md @@ -6,7 +6,7 @@ Introduction Do not use shared filesystems at IT4Innovations as a backup for large amounts of data or long-term archiving purposes. !!! Note "Note" - The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/). + IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the [CESNET Storage service](https://du.cesnet.cz/). The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic. @@ -14,11 +14,11 @@ User of data storage CESNET (DU) association can become organizations or an indi User may only use data storage CESNET for data transfer and storage which are associated with activities in science, research, development, the spread of education, culture and prosperity. In detail see “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”. -The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). +The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements, please contact the CESNET Storage Department directly via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). 
The procedure to obtain CESNET access is quick and trouble-free. -(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) +(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) CESNET storage access --------------------- @@ -26,9 +26,9 @@ CESNET storage access ### Understanding Cesnet storage !!! Note "Note" - It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. + It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. -Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods. +Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods. ### SSHFS Access @@ -84,7 +84,7 @@ Rsync is a fast and extraordinarily versatile file copying tool. It is famous fo Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated. -More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele> +More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele> Transfer large files to/from Cesnet storage, assuming membership in the Storage VO @@ -100,4 +100,4 @@ Transfer large directories to/from Cesnet storage, assuming membership in the St $ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder . 
``` -Transfer rates of about 28MB/s can be expected. \ No newline at end of file +Transfer rates of about 28 MB/s can be expected. 
diff --git a/docs.it4i/anselm-cluster-documentation/storage/storage.md b/docs.it4i/anselm-cluster-documentation/storage/storage.md index eb76ccc243883a67fecbf0f07a4396de5cfb9e37..72486eb6c463ddcb93cb1bf87a269282a3edbbd7 100644 --- a/docs.it4i/anselm-cluster-documentation/storage/storage.md +++ b/docs.it4i/anselm-cluster-documentation/storage/storage.md @@ -15,11 +15,11 @@ Anselm computer provides two main shared filesystems, the [HOME filesystem](../s ### Understanding the Lustre Filesystems -(source <http://www.nas.nasa.gov>) +(source <http://www.nas.nasa.gov>) A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. -When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. +When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. 
The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results. @@ -75,7 +75,7 @@ Another good practice is to make the stripe count be an integral factor of the n Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. -Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html> +Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html> ### Lustre on Anselm @@ -103,7 +103,7 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) ###HOME -The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. +The HOME filesystem is mounted in directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request. !!! Note "Note" The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects. 
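The striping parameters discussed above are controlled per file or directory with the `lfs` utility on Lustre clients. A minimal sketch follows; because `lfs` exists only on Lustre client nodes (not on an ordinary workstation), the commands are composed and printed rather than executed, and the directory path and values are illustrative placeholders, not recommendations from this documentation.

```shell
#!/bin/sh
# Sketch: compose the lfs commands that would set and inspect striping
# for a scratch directory. 'lfs' is only available on Lustre clients,
# so the commands are printed, not run; path and values are placeholders.
DIR=/scratch/username/mydir
STRIPE_COUNT=4   # spread each new file over 4 OSTs
STRIPE_SIZE=1m   # 1 MB stripes (the documented default stripe size)
SET_CMD="lfs setstripe -c ${STRIPE_COUNT} -s ${STRIPE_SIZE} ${DIR}"
GET_CMD="lfs getstripe ${DIR}"
echo "$SET_CMD"
echo "$GET_CMD"
```

Files created under the directory afterwards inherit these striping settings, which is how the "stripe count as a factor of the number of processes" advice above is usually applied in practice.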
@@ -132,7 +132,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t ###SCRATCH -The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. +The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request. !!! Note "Note" The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory. @@ -257,7 +257,7 @@ other::--- Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. 
Refer to this page for more information on Linux ACL: -[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html) +[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html) Local Filesystems ----------------- @@ -319,4 +319,4 @@ Summary |/scratch|cluster shared jobs' data|Lustre|146 TiB|6 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed| |/lscratch|node local jobs' data|local|330 GB|100 MB/s|none|Compute nodes|purged after job ends| |/ramdisk|node local jobs' data|local|60, 90, 500 GB|5-50 GB/s|none|Compute nodes|purged after job ends| -|/tmp|local temporary files|local|9.5 GB|100 MB/s|none|Compute and login nodes|auto| purged \ No newline at end of file +|/tmp|local temporary files|local|9.5 GB|100 MB/s|none|Compute and login nodes|auto purged| 
diff --git a/docs.it4i/css/extra.css deleted file mode 100644 index e7c2377a013043583b70293d4abbd6ad2a32f474..0000000000000000000000000000000000000000 --- a/docs.it4i/css/extra.css +++ /dev/null @@ -1 +0,0 @@ -img[alt=external] { width: 12px; height: 12px;} 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md index ffd66e9747645b8e16897566e4e7204bd7760f65..4fb6482b8d769df2142b04831240151c28d99fed 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md @@ -12,7 +12,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect (gnome-session:23691): WARNING **: Cannot open display:** ``` -1. 
Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html) +1. Locate and modify the Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html) locate C:\cygwin64\bin\XWin.exe change it @@ -24,4 +24,4 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect 2. Check Putty settings: Enable X11 forwarding -  \ No newline at end of file +  
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md index 7472bb4a3570023495eeb713a2f39527a372fe93..5a96ef084a1a4c835805b4cf841b82da388d362d 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md @@ -9,7 +9,7 @@ Read more about configuring [**X Window System**](x-window-system/). VNC --- -The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). +The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). Read more about configuring **[VNC](vnc/)**. 
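One recurring detail when working with VNC as described here is the mapping between display numbers and TCP ports: display `:N` listens on port `5900+N`, which is the port an SSH tunnel must forward. A small sketch (the hostname and username are placeholders, and only the arithmetic and the command string are computed here):

```shell
#!/bin/sh
# Sketch: VNC display :N maps to TCP port 5900+N. Compute the port for
# display :1 and print the tunnel command (hostname is a placeholder).
DISPLAY_NUM=1
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "display :${DISPLAY_NUM} -> port ${VNC_PORT}"
echo "ssh -TN -L ${VNC_PORT}:localhost:${VNC_PORT} username@cluster.example.com"
```

The VNC client on the workstation would then connect to `localhost:${VNC_PORT}`.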
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md index 9dc8493787654cb7b71c73a8e90fe30e9465e764..bdb995576f194f98f64bbc072d319d20827af7fd 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md @@ -1,9 +1,9 @@ VNC === -The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network"). +The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer"). 
It transmits the [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard") and [mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse") events from one computer to another, relaying the graphical [screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen") updates back in the other direction, over a [network](http://en.wikipedia.org/wiki/Computer_network "Computer network"). -The recommended clients are [TightVNC](http://www.tightvnc.com) or[TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform). +The recommended clients are [TightVNC](http://www.tightvnc.com) or [TigerVNC](http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page) (free, open source, available for almost any platform). Create VNC password ------------------- @@ -214,4 +214,4 @@ $ xterm Example described above: - \ No newline at end of file + 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md index f554c5d01c4e7d774d72b8ff80c0b2a55dfe28ad..ed0ad9f0defe2873e8728e1186ffb7a20fde4e4b 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md @@ -1,7 +1,7 @@ X Window System =============== -The X Window system is a principal way to get GUI access to the clusters. 
The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network [protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)") that provides a basis for [graphical user interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface") (GUIs) and rich input device capability for [networked computers](http://en.wikipedia.org/wiki/Computer_network "Computer network"). +The X Window system is a principal way to get GUI access to the clusters. The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network [protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)") that provides a basis for [graphical user interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface") (GUIs) and rich input device capability for [networked computers](http://en.wikipedia.org/wiki/Computer_network "Computer network"). !!! Note "Note" The X display forwarding must be activated and the X server running on client side @@ -38,18 +38,18 @@ In order to display graphical user interface GUI of various software tools, you ### X Server on OS X -Mac OS users need to install [XQuartz server](http://xquartz.macosforge.org/landing/). +Mac OS users need to install [XQuartz server](http://xquartz.macosforge.org/landing/). ### X Server on Windows -There are variety of X servers available for Windows environment. The commercial Xwin32 is very stable and rich featured. The Cygwin environment provides fully featured open-source XWin X server. For simplicity, we recommend open-source X server by the [Xming project](http://sourceforge.net/projects/xming/). 
For stability and full features we recommend the -[XWin](http://x.cygwin.com/) X server by Cygwin +There are a variety of X servers available for the Windows environment. The commercial Xwin32 is very stable and feature-rich. The Cygwin environment provides a fully featured open-source XWin X server. For simplicity, we recommend the open-source X server from the [Xming project](http://sourceforge.net/projects/xming/). For stability and full features we recommend the +[XWin](http://x.cygwin.com/) X server by Cygwin |How to use Xwin |How to use Xming | | --- | --- | - |[Install Cygwin](http://x.cygwin.com/) Find and execute XWin.exeto start the X server on Windows desktop computer.[If no able to forward X11 using PuTTY to CygwinX](cygwin-and-x11-forwarding/) |<p>Use Xlaunch to configure the Xming.<p>Run Xmingto start the X server on Windows desktop computer.| + |[Install Cygwin](http://x.cygwin.com/) Find and execute XWin.exe to start the X server on the Windows desktop computer. [If not able to forward X11 using PuTTY to CygwinX](cygwin-and-x11-forwarding/) |<p>Use Xlaunch to configure Xming.<p>Run Xming to start the X server on the Windows desktop computer.| -Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.html](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml) +Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.shtml](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml) ### Running GUI Enabled Applications @@ -94,7 +94,7 @@ The Gnome 2.28 GUI environment is available on the clusters. We recommend to use ### Gnome on Linux and OS X To run the remote Gnome session in a window on Linux/OS X computer, you need to install Xephyr. Ubuntu package is -xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/landing/). First, launch Xephyr on local machine: +xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/landing/). 
First, launch Xephyr on local machine: ```bash local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 & @@ -129,4 +129,4 @@ $ gnome-session & In this way, we run remote gnome session on the cluster, displaying it in the local X server -Use System->Log Out to close the gnome-session \ No newline at end of file +Use System->Log Out to close the gnome-session 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md index 24416dbb3577c974f95a388d6689a2e8d9454e00..1a0545c7914437c6e463bcf98c1723d4676e9d82 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md @@ -6,7 +6,7 @@ PuTTY - before we start SSH connection ### Windows PuTTY Installer -We recommned you to download "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator) which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). +We recommend downloading "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator), which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). !!! Note "Note" After installation you can proceed directly to private keys authentication using ["Putty"](putty#putty). @@ -56,4 +56,4 @@ Another PuTTY Settings - Category -> Windows -> Translation -> Remote character set and select **UTF-8**. - Category -> Terminal -> Features and select **Disable application keypad mode** (enable numpad) -- Save your configuration on Session page in to Default Settings with *Save* button. 
\ No newline at end of file
+- Save your configuration on the Session page into Default Settings with the *Save* button.
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index de26dfc187fc4a8791b547b9b85a734472efbba3..78d9bd2734e7180aff763ddf850cf8199b5a0357 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -3,7 +3,6 @@ SSH keys
Key management
-------------------------------------------------------------------
-
After logging in, you can see the .ssh/ directory with SSH keys and the authorized_keys file:

```bash
@@ -110,4 +109,4 @@ In this example, we add an additional public key, stored in file additional_key.
### How to remove your own key

-Removing your key from authorized_keys can be done simply by deleting the corresponding public key which can be identified by a comment at the end of line (eg. username@organization.example.com).
\ No newline at end of file
+Removing your key from authorized_keys can be done simply by deleting the line with the corresponding public key, which can be identified by the comment at the end of the line (e.g. username@organization.example.com).
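The deletion described above can also be scripted. A minimal sketch, assuming the key to remove is identified by its trailing comment; the file contents below are made-up placeholders, not real keys:

```shell
# Work on a throwaway copy of authorized_keys; the keys are fake placeholders.
KEYS=$(mktemp)
cat > "$KEYS" <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABfake1 username@organization.example.com
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABfake2 colleague@other.example.org
EOF

# Keep every line except the one whose trailing comment matches the key
# to remove; -F treats the comment as a fixed string, not a pattern.
grep -vF 'username@organization.example.com' "$KEYS" > "$KEYS.new" && mv "$KEYS.new" "$KEYS"

cat "$KEYS"
```

On the cluster the file would be ~/.ssh/authorized_keys; keep a backup copy before editing it.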
diff --git a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
index 125f4e686eb750cad94e927d1eeced9f84ae0971..c15173236f1be4532b01779c79a8d9583e1d7fc6 100644
--- a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
+++ b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
@@ -1,12 +1,12 @@
Applying for Resources
======================

-Computational resources may be allocated by any of the following [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) mechanisms.
+Computational resources may be allocated by any of the following [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) mechanisms.

-Academic researchers can apply for computational resources via [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).
+Academic researchers can apply for computational resources via [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).

-Anyone is welcomed to apply via the [Directors Discretion.](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en)
+Anyone is welcome to apply via the [Director's Discretion](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en).

-Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).
+Foreign (mostly European) users can obtain computational resources via the [PRACE (DECI) program](http://www.prace-ri.eu/DECI-Projects).

-In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal.
In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by scientific excellence of the proposal, its computational maturity and expected impacts. Proposals do undergo a scientific, technical and economic evaluation. The allocation decisions are based on this evaluation. More information at [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](obtaining-login-credentials/obtaining-login-credentials/) page.
\ No newline at end of file
+In all cases, IT4Innovations’ access mechanisms are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The applicants are expected to submit a proposal. In the proposal, the applicants **apply for a particular amount of core-hours** of computational resources. The requested core-hours should be substantiated by the scientific excellence of the proposal, its computational maturity and expected impacts. Proposals undergo a scientific, technical and economic evaluation, and the allocation decisions are based on this evaluation. More information is available on the [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) and [Obtaining Login Credentials](obtaining-login-credentials/obtaining-login-credentials/) pages.
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index 1b19273adb13a218fe9ce37a07c9e5bdfc19f267..b4f91f12589eb7e2d9d8f27929e6aa016d372a7c 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -19,7 +19,7 @@ There are different kinds of certificates, each with a different scope of use. W
Q: Which X.509 certificates are recognised by IT4Innovations?
-------------------------------------------------------------

-Any certificate that has been issued by a Certification Authority (CA) from a member of the IGTF ([http:www.igtf.net](http://www.igtf.net/)) is recognised by IT4Innovations: European certificates are issued by members of the EUGridPMA ([https://www.eugridmpa.org](https://www.eugridpma.org/)), which is part if the IGTF and coordinates the trust fabric for e-Science Grid authentication within Europe.
+Any certificate that has been issued by a Certification Authority (CA) from a member of the IGTF ([http://www.igtf.net](http://www.igtf.net/)) is recognised by IT4Innovations: European certificates are issued by members of the EUGridPMA ([https://www.eugridpma.org](https://www.eugridpma.org/)), which is part of the IGTF and coordinates the trust fabric for e-Science Grid authentication within Europe.
Further, the Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), which is used in electronic contact with Czech public authorities, is accepted.

Q: How do I get a User Certificate that can be used with IT4Innovations?
------------------------------------------------------------------------
@@ -33,7 +33,7 @@ Yes, provided that the CA which provides this service is also a member of IGTF.
Q: Does IT4Innovations support the TERENA certificate service?
--------------------------------------------------------------

- Yes, ITInnovations supports TERENA eScience personal certificates. For more information, please visit [https://tcs-escience-portal.terena.org](https://tcs-escience-portal.terena.org/), where you also can find if your organisation/country can use this service
+ Yes, IT4Innovations supports TERENA eScience personal certificates. For more information, please visit [https://tcs-escience-portal.terena.org](https://tcs-escience-portal.terena.org/), where you can also find out whether your organisation/country can use this service.

Q: What format should my certificate take?
------------------------------------------
@@ -53,7 +53,7 @@ Q: What are CA certificates?
----------------------------
Certification Authority (CA) certificates are used to verify the link between your user certificate and the authority which issued it. They are also used to verify the link between the host certificate of an IT4Innovations server and the CA which issued that certificate. In essence they establish a chain of trust between you and the target server. Thus, for some grid services, users must have a copy of all the CA certificates.
-To assist users, SURFsara (a member of PRACE) provides a complete and up-to-date bundle of all the CA certificates that any PRACE user (or IT4Innovations grid services user) will require.
Bundle of certificates, in either p12, PEM or JKS formats, are available from <http://winnetou.sara.nl/prace/certs/>.
+To assist users, SURFsara (a member of PRACE) provides a complete and up-to-date bundle of all the CA certificates that any PRACE user (or IT4Innovations grid services user) will require. Bundles of certificates, in p12, PEM or JKS format, are available from <http://winnetou.sara.nl/prace/certs/>.

It is worth noting that gsissh-term and DART automatically update their CA certificates from this SURFsara website. In other cases, if you receive a warning that a server’s certificate cannot be validated (not trusted), then please update your CA certificates via the SURFsara website. If this fails, then please contact the IT4Innovations helpdesk.
@@ -63,7 +63,7 @@ Lastly, if you need the CA certificates for a personal Globus 5 installation, th
myproxy-get-trustroots -s myproxy-prace.lrz.de
```

-If you run this command as ’root’, then it will install the certificates into /etc/grid-security/certificates. If you run this not as ’root’, then the certificates will be installed into $HOME/.globus/certificates. For Globus, you can download the globuscerts.tar.gz packet from <http://winnetou.sara.nl/prace/certs/>.
+If you run this command as ’root’, it will install the certificates into /etc/grid-security/certificates. If you do not run it as ’root’, the certificates will be installed into $HOME/.globus/certificates. For Globus, you can download the globuscerts.tar.gz package from <http://winnetou.sara.nl/prace/certs/>.

Q: What is a DN and how do I find mine?
---------------------------------------
@@ -106,7 +106,7 @@ To check your certificate (e.g., DN, validity, issuer, public key algorithm, etc
openssl x509 -in usercert.pem -text -noout
```

-To download openssl for both Linux and Windows, please visit <http://www.openssl.org/related/binaries.html>. On Macintosh Mac OS X computers openssl is already pre-installed and can be used immediately.
+To download openssl for both Linux and Windows, please visit <http://www.openssl.org/related/binaries.html>. On Mac OS X computers, openssl is already pre-installed and can be used immediately.

Q: How do I create and then manage a keystore?
----------------------------------------------
@@ -128,7 +128,7 @@ You also can import CA certificates into your java keystore with the tool, e.g.:
where $mydomain.crt is the certificate of a trusted signing authority (CA) and $mydomain is the alias name that you give to the entry.

-More information on the tool can be found at:<http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html>
+More information on the tool can be found at: <http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html>

Q: How do I use my certificate to access the different grid Services?
---------------------------------------------------------------------
@@ -136,7 +136,7 @@ Most grid services require the use of your certificate; however, the format of y
If employing the PRACE version of GSISSH-term (also a Java Web Start Application), you may use either the PEM or p12 formats. Note that this service automatically installs up-to-date PRACE CA certificates.

-If the grid service is UNICORE, then you bind your certificate, in either the p12 format or JKS, to UNICORE during the installation of the client on your local machine. For more information, please visit [UNICORE6 in PRACE](http://www.prace-ri.eu/UNICORE6-in-PRACE)
+If the grid service is UNICORE, then you bind your certificate, in either the p12 or JKS format, to UNICORE during the installation of the client on your local machine.
For more information, please visit [UNICORE6 in PRACE](http://www.prace-ri.eu/UNICORE6-in-PRACE).

If the grid service is part of Globus, such as GSI-SSH, GridFTP or GRAM5, then the certificates can be in either p12 or PEM format and must reside in the "$HOME/.globus" directory for Linux and Mac users or %HOMEPATH%\.globus for Windows users. (Windows users will have to use the DOS command ’cmd’ to create a directory which starts with a ’.’). Further, user certificates should be named either "usercred.p12" or "usercert.pem" and "userkey.pem", and the CA certificates must be kept in a pre-specified directory as follows. For Linux and Mac users, this directory is either $HOME/.globus/certificates or /etc/grid-security/certificates. For Windows users, this directory is %HOMEPATH%\.globus\certificates. (If you are using GSISSH-Term from prace-ri.eu then you do not have to create the .globus directory nor install CA certificates to use this tool alone).
@@ -154,8 +154,8 @@ A proxy certificate is a short-lived certificate which may be employed by UNICOR
Q: What is the MyProxy service?
-------------------------------

-[The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/) , can be employed by gsissh-term and Globus tools, and is an online repository that allows users to store long lived proxy certificates remotely, which can then be retrieved for use at a later date.
+[The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/) can be employed by gsissh-term and Globus tools, and is an online repository that allows users to store long-lived proxy certificates remotely, which can then be retrieved for use at a later date.
Each proxy is protected by a password provided by the user at the time of storage. This is beneficial to Globus users as they do not have to carry their private keys and certificates when travelling; nor do users have to install private keys and certificates on possibly insecure computers.

Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do?

-Please ask the CA that issued your certificate to revoke this certifcate and to supply you with a new one. In addition, please report this to IT4Innovations by contacting [the support team](https://support.it4i.cz/rt).
\ No newline at end of file
+Please ask the CA that issued your certificate to revoke this certificate and to supply you with a new one. In addition, please report this to IT4Innovations by contacting [the support team](https://support.it4i.cz/rt).
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index 40c2ad36b45c21d760afbf57d7008f83d07ccf4b..083dd2cd7116ccbf3a9ebcc1b10a48992148857a 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -19,14 +19,14 @@ The PI is authorized to use the clusters by the allocation decision issued by th
This is a preferred way of granting access to project resources. Please, use this method whenever it's possible.

-Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.
+Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.

- **Users:** Please, submit your requests for becoming a project member.
- **Primary Investigators:** Please, approve or deny users' requests in the same section. ### Authorization by e-mail (an alternative approach) - In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) and provide following information: + In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) and provide following information: 1. Identify your project by project ID 2. Provide list of people, including himself, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems. @@ -54,11 +54,11 @@ Should the above information be provided by e-mail, the e-mail **must be** digit The Login Credentials ------------------------- -Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters, should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) providing following information: +Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters, should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)) providing following information: 1. Project ID 2. Full name and affiliation -3. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). +3. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP). 4. Attach the AUP file. 5. Your preferred username, max 8 characters long. 
The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed. 6. In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), @@ -96,7 +96,7 @@ You will receive your personal login credentials by protected e-mail. The login 2. ssh private key and private key passphrase 3. system password -The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password is used for login to the information systems listed on <http://support.it4i.cz/>. +The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password is used for login to the information systems listed on <http://support.it4i.cz/>. ### Change Passphrase @@ -110,15 +110,15 @@ On Windows, use [PuTTY Key Generator](../accessing-the-clusters/shell-access-and ### Change Password -Change password in your user profile at <https://extranet.it4i.cz/user/> +Change password in your user profile at <https://extranet.it4i.cz/user/> The Certificates for Digital Signatures ------------------------------------------- -We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech authorities is accepted as well. 
+We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech authorities is accepted as well. Certificate generation process is well-described here: -- [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) +- [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen) A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/). @@ -126,7 +126,7 @@ Alternative Way to Personal Certificate ------------------------------------------- Follow these steps **only** if you can not obtain your certificate in a standard way. In case you choose this procedure, please attach a **scan of photo ID** (personal ID or passport or drivers license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials). -1. Go to <https://www.cacert.org/>. +1. Go to <https://www.cacert.org/>. - If there's a security warning, just acknowledge it. 2. Click *Join*. 3. Fill in the form and submit it by the *Next* button. 
@@ -145,11 +145,11 @@ Installation of the Certificate Into Your Mail Client
The procedure is similar to the following guides:

- MS Outlook 2010
-  - [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
-  - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
+  - [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
+  - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
- Mozilla Thunderbird
-  - [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
-  - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
+  - [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
+  - [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)

End of User Account Lifecycle
-----------------------------
@@ -161,4 +161,4 @@ User will get 3 automatically generated warning e-mail messages of the pending r
- Second message will be sent 1 month before the removal
- Third message will be sent 1 week before the removal.

-The messages will inform about the projected removal date and will challenge the user to migrate her/his data
\ No newline at end of file
+The messages will inform about the projected removal date and will prompt the user to migrate her/his data.
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index 733d00109520caa207c44a6d435f1005ac5c5e6b..03a0369704394515c443ed80d7991fca9b5527c0 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -23,21 +23,21 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super
Getting Help and Support
------------------------
!!!
Note "Note"
-    Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).
+    Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).

-Use your IT4Innotations username and password to log in to the [support](http://support.it4i.cz/) portal.
+Use your IT4Innovations username and password to log in to the [support](http://support.it4i.cz/) portal.

Required Proficiency
--------------------
!!! Note "Note"
    You need basic proficiency in Linux environment.

-In order to use the system for your calculations, you need basic proficiency in Linux environment. To gain the proficiency, we recommend you reading the [ introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment and installing a Linux distribution on your personal computer. A good choice might be the [ Fedora](http://fedoraproject.org/) distribution, as it is similar to systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any distribution would do.
+In order to use the system for your calculations, you need basic proficiency in the Linux environment. To gain this proficiency, we recommend reading the [introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment and installing a Linux distribution on your personal computer. A good choice might be the [Fedora](http://fedoraproject.org/) distribution, as it is similar to systems on the clusters at IT4Innovations.
It's easy to install and use. In fact, any distribution would do.

!!! Note "Note"
    Learn how to parallelize your code!

-In many cases, you will run your own code on the cluster. In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficieny in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained via the [training provided by IT4Innovations.](http://prace.it4i.cz)
+In many cases, you will run your own code on the cluster. In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficiency in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained via the [training provided by IT4Innovations](http://prace.it4i.cz).

Terminology Frequently Used on These Pages
------------------------------------------
@@ -72,4 +72,4 @@ local $
Errata
-------

-Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in the text or the code we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this documentation. If you find any errata, please report them by visiting [http://support.it4i.cz/rt](http://support.it4i.cz/rt), creating a new ticket, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website.
+Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in the text or the code, we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this documentation.
If you find any errata, please report them by visiting [http://support.it4i.cz/rt](http://support.it4i.cz/rt), creating a new ticket, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website. diff --git a/docs.it4i/salomon/accessing-the-cluster/accessing-the-cluster.md b/docs.it4i/salomon/accessing-the-cluster/accessing-the-cluster.md index af5e36a46bea39e96385164221342d4ea5afba60..f97ef1b3d7b7a35fcaece92c4a48366cb3abcaf5 100644 --- a/docs.it4i/salomon/accessing-the-cluster/accessing-the-cluster.md +++ b/docs.it4i/salomon/accessing-the-cluster/accessing-the-cluster.md @@ -6,7 +6,7 @@ Interactive Login The Salomon cluster is accessed by SSH protocol via login nodes login1, login2, login3 and login4 at address salomon.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address. !!! Note "Note" - The alias >salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN. + The alias salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN. |Login address|Port|Protocol|Login node| |---|---|---|---| @@ -63,7 +63,7 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com Data Transfer ------------- -Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. +Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. In case large volumes of data are transferred, use dedicated data mover nodes cedge[1-3].salomon.it4i.cz for increased performance. 
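As a sketch of the transfer recommended above: the command below only prints the scp invocation it would run (a dry run), since the username, key path, source file and target directory are placeholders; the data mover node name comes from the cedge[1-3].salomon.it4i.cz nodes mentioned in the text.

```shell
#!/bin/sh
# Dry-run sketch: build and print the scp command for a large transfer via
# a dedicated data mover node. USER, SRC, the key path and the target
# directory ("~/") are placeholders; adjust them to your own account.
USER=username
NODE=cedge1.salomon.it4i.cz    # one of cedge[1-3], per the text above
SRC=mydata.tar.gz
CMD="scp -i ~/.ssh/id_rsa $SRC $USER@$NODE:~/"
echo "$CMD"
```

Dropping the `echo` and running the command directly performs the actual copy, authenticated by the same private key used for login.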
@@ -97,7 +97,7 @@ or local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz ``` -Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs) +Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs) ```bash local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint @@ -113,6 +113,6 @@ $ man scp $ man sshfs ``` -On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc. +On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc. -More information about the shared file systems is available [here](storage/storage/). \ No newline at end of file +More information about the shared file systems is available [here](storage/storage/). diff --git a/docs.it4i/salomon/accessing-the-cluster/outgoing-connections.md b/docs.it4i/salomon/accessing-the-cluster/outgoing-connections.md index 0cb2968f78992e5ce137be359277d3963ddbc607..c6cdc119fbd917b5fe8044953dd82ea8dc11811c 100644 --- a/docs.it4i/salomon/accessing-the-cluster/outgoing-connections.md +++ b/docs.it4i/salomon/accessing-the-cluster/outgoing-connections.md @@ -72,7 +72,7 @@ To establish local proxy server on your workstation, install and run SOCKS proxy local $ ssh -D 1080 localhost ``` -On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server. +On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server. 
Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](outgoing-connections/#port-forwarding-from-login-nodes). @@ -80,4 +80,4 @@ Once the proxy server is running, establish ssh port forwarding from Salomon to local $ ssh -R 6000:localhost:1080 salomon.it4i.cz ``` -Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well . \ No newline at end of file +Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](outgoing-connections/#port-forwarding-from-compute-nodes) as well . diff --git a/docs.it4i/salomon/accessing-the-cluster/vpn-access.md b/docs.it4i/salomon/accessing-the-cluster/vpn-access.md index 4b45bc5b236e52fd67f3c39d580071ceb5c34e0a..e0b10b579b1e7b40164400db06710e6c2f27aa7d 100644 --- a/docs.it4i/salomon/accessing-the-cluster/vpn-access.md +++ b/docs.it4i/salomon/accessing-the-cluster/vpn-access.md @@ -16,7 +16,7 @@ It is impossible to connect to VPN from other operating systems. VPN client installation ------------------------------------ -You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn.it4i.cz/user> +You can install VPN client from web interface after successful login with LDAP credentials on address <https://vpn.it4i.cz/user>  @@ -45,7 +45,7 @@ Working with VPN client You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI. -Before the first login to VPN, you have to fill URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** into the text field. +Before the first login to VPN, you have to fill URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** into the text field.  
@@ -73,4 +73,4 @@ After a successful logon, you can see a green circle with a tick mark on the loc  -For disconnecting, right-click on the AnyConnect client icon in the system tray and select **VPN Disconnect**. \ No newline at end of file +For disconnecting, right-click on the AnyConnect client icon in the system tray and select **VPN Disconnect**. diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md index b8845ff256642ee1d005a979aebaffc6061a5da7..8c4ed9298fbaf20183e04fdcb1bbcf5edc09d31a 100644 --- a/docs.it4i/salomon/compute-nodes.md +++ b/docs.it4i/salomon/compute-nodes.md @@ -3,7 +3,6 @@ Compute Nodes Nodes Configuration ------------------- - Salomon is cluster of x86-64 Intel based nodes. The cluster contains two types of compute nodes of the same processor type and memory size. Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerators.** @@ -107,4 +106,4 @@ MIC Accelerator Intel Xeon Phi 7120P Processor Interprocessor Network (IPN) ring. - 16 GDDR5 DIMMS per node - 8 GDDR5 DIMMS per CPU - - 2 GDDR5 DIMMS per channel \ No newline at end of file + - 2 GDDR5 DIMMS per channel diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md index a6f38593d132b252f34feb0fdc31b4c2504b0088..854c2e20e9ec4c01fb49308ab9bb31758c754e48 100644 --- a/docs.it4i/salomon/environment-and-modules.md +++ b/docs.it4i/salomon/environment-and-modules.md @@ -31,7 +31,7 @@ fi In order to configure your shell for running particular application on Salomon we use Module package interface. -Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure: +Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). 
The modules are divided into the following structure: ```bash base: Default module class @@ -122,4 +122,4 @@ On Salomon, we have currently following toolchains installed: |gompi|GCC, OpenMPI| |goolf|BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK| |iompi|OpenMPI, icc, ifort| - |iccifort|icc, ifort| \ No newline at end of file + |iccifort|icc, ifort| diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md index 088117ac899b950a91cfe53c9962d3695e6bd4e0..ca6a26750b605c10a7ab9f8b3bd206e33386a714 100644 --- a/docs.it4i/salomon/hardware-overview.md +++ b/docs.it4i/salomon/hardware-overview.md @@ -57,4 +57,4 @@ For large memory computations a special SMP/NUMA SGI UV 2000 server is available | --- | --- | |UV2000 |1 |14x Intel Xeon E5-4627v2, 3.3GHz, 8cores |112 |3328GB DDR3@1866MHz |2x 400GB local SSD1x NVIDIA GM200(GeForce GTX TITAN X),12GB RAM\ | - \ No newline at end of file + diff --git a/docs.it4i/salomon/introduction.md b/docs.it4i/salomon/introduction.md index 5a698254fa2ede99f0e444543b40b3ebf191a7c9..bda0fd83cb2bcbc7f6dee039bab51f5e6878ada6 100644 --- a/docs.it4i/salomon/introduction.md +++ b/docs.it4i/salomon/introduction.md @@ -3,7 +3,7 @@ Introduction Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128GB RAM. Nodes are interconnected by 7D Enhanced hypercube Infiniband network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). 
-The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) +The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) **Water-cooled Compute Nodes With MIC Accelerator** @@ -15,4 +15,4 @@ The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-expl  - \ No newline at end of file + diff --git a/docs.it4i/salomon/network/ib-single-plane-topology.md b/docs.it4i/salomon/network/ib-single-plane-topology.md index 2ee89058e1750e6972e69e096f18f821f8b244c1..e3896525898082a6034c9befe9a1fc2bb144d938 100644 --- a/docs.it4i/salomon/network/ib-single-plane-topology.md +++ b/docs.it4i/salomon/network/ib-single-plane-topology.md @@ -25,4 +25,4 @@ As shown in a diagram  - Racks 27, 28, 29, 30, 31, 32 are equivalent to one Mcell rack. - Racks 33, 34, 35, 36, 37, 38 are equivalent to one Mcell rack. - \ No newline at end of file + diff --git a/docs.it4i/salomon/network/network.md b/docs.it4i/salomon/network/network.md index 32f35b72dc99a670ada0483008ec4af912eccd5d..beb5b7edce25dc0a9994b1e1abf5551cf75e3979 100644 --- a/docs.it4i/salomon/network/network.md +++ b/docs.it4i/salomon/network/network.md @@ -1,12 +1,12 @@ Network ======= -All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) -network. Only [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network may be used to transfer user data. 
+All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) +network. Only [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network may be used to transfer user data. Infiniband Network ------------------ -All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/). +All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/). Read more about schematic representation of the Salomon cluster [IB single-plain topology](ib-single-plane-topology/) ([hypercube dimension](7d-enhanced-hypercube/) 0). @@ -48,4 +48,4 @@ $ ip addr show ib0 .... inet 10.17.35.19.... .... -``` \ No newline at end of file +``` diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md index 1025fb3ddbb0d75ddead8d44bb31bdd9129599cc..f8162731d527bae379e5bcd9bf050d098651ff60 100644 --- a/docs.it4i/salomon/prace.md +++ b/docs.it4i/salomon/prace.md @@ -5,11 +5,11 @@ Intro ----- PRACE users coming to Salomon as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. 
Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required. -All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here. +All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before reading the local documentation here. Help and Support ------------------------ -If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). +If you have any trouble, need information, want to request support, or want to install additional software, please use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/). Information about the local services are provided in the [introduction of general user documentation](introduction/). Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
@@ -30,11 +30,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here: -- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) -- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) -- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) -- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) -- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) +- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs) +- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ) +- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh) +- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details) +- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer) Before you start to use any of the services don't forget to create a proxy certificate from your certificate: @@ -221,7 +221,7 @@ For production runs always use scratch file systems. The available file systems All system wide installed software on the cluster is made available to the users via the modules. The information about the environment and modules usage is in this [section of general documentation](environment-and-modules/). -PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). +PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production). ```bash $ module load prace @@ -246,12 +246,12 @@ For PRACE users, the default production run queue is "qprace". PRACE users can a The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. 
The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/). -PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/). +PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/). Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree". Please note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours". !!! Note "Note" - The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> + The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients> ```bash $ it4ifree @@ -269,4 +269,4 @@ By default file system quota is applied. To check the current status of the quot $ lfs quota -u USER_LOGIN /scratch ``` -If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase. \ No newline at end of file +If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase. 
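The prace.md hunk above says usage is counted as allocated core hours (ncpus*walltime) on a wall-clock basis, charged whether or not the cores actually compute. A minimal sketch of that arithmetic; the helper name is illustrative, not an IT4I tool:

```shell
# Illustrative core-hour calculation as described in prace.md:
# usage = nodes * cores_per_node * walltime, regardless of actual CPU load.
core_hours() {
  local nodes=$1 cores_per_node=$2 walltime_hours=$3
  echo $(( nodes * cores_per_node * walltime_hours ))
}

core_hours 4 24 12   # 4 whole Salomon nodes (24 cores each) for 12 hours -> 1152
```

Note these are the "system core hours" reported by it4ifree, which differ from PRACE "standardized core hours".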
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution/capacity-computing.md b/docs.it4i/salomon/resource-allocation-and-job-execution/capacity-computing.md index 3f929064b932b60878192cd87c18b21672667485..28c57a64e27567b5670b59a84330782fd47bd563 100644 --- a/docs.it4i/salomon/resource-allocation-and-job-execution/capacity-computing.md +++ b/docs.it4i/salomon/resource-allocation-and-job-execution/capacity-computing.md @@ -315,4 +315,4 @@ Unzip the archive in an empty directory on Anselm and follow the instructions in $ unzip capacity.zip $ cd capacity $ cat README -``` \ No newline at end of file +``` diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution/introduction.md b/docs.it4i/salomon/resource-allocation-and-job-execution/introduction.md index 2e226402918ae4d262d2ee4c9f8a95bb4e964149..d4b62ec69124e9bad9f061a5f79df35190400cc0 100644 --- a/docs.it4i/salomon/resource-allocation-and-job-execution/introduction.md +++ b/docs.it4i/salomon/resource-allocation-and-job-execution/introduction.md @@ -26,4 +26,4 @@ Job submission and execution The qsub submits the job into the queue. The qsub command creates a request to the PBS Job manager for allocation of specified resources. The **smallest allocation unit is entire node, 24 cores**, with exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.** -Read more on the [Job submission and execution](job-submission-and-execution/) page. \ No newline at end of file +Read more on the [Job submission and execution](job-submission-and-execution/) page. 
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution/job-priority.md b/docs.it4i/salomon/resource-allocation-and-job-execution/job-priority.md index c6ba19b458bcfe1bd6c2b7f97e54c814a336693d..c3ef759507de447e6c7d8a76848acfbc9e9e58ce 100644 --- a/docs.it4i/salomon/resource-allocation-and-job-execution/job-priority.md +++ b/docs.it4i/salomon/resource-allocation-and-job-execution/job-priority.md @@ -17,7 +17,7 @@ Queue priority is priority of queue where job is queued before execution. Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority. -Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues> +Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues> ### Fairshare priority @@ -34,7 +34,7 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb Usage counts allocated corehours (ncpus*walltime). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week). Jobs queued in queue qexp are not calculated to project's usage. !!! Note "Note" - Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>. + Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>. Calculated fairshare priority can be also seen as Resource_List.fairshare attribute of a job. @@ -69,4 +69,4 @@ Specifying more accurate walltime enables better schedulling, better execution t ### Job placement -Job [placement can be controlled by flags during submission](job-submission-and-execution/#job_placement). 
\ No newline at end of file +Job [placement can be controlled by flags during submission](job-submission-and-execution/#job_placement). diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md b/docs.it4i/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md index c845affd9aae198cabf4d25ae0ea5115676ec105..4be50e35c11cd0f6ce1c1718ccdf8827ec186e61 100644 --- a/docs.it4i/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md +++ b/docs.it4i/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md @@ -463,4 +463,4 @@ cp output $PBS_O_WORKDIR/. exit ``` -In this example, some directory on the home holds the input file input and executable myprog.x . We copy input and executable files from the home directory where the qsub was invoked ($PBS_O_WORKDIR) to local scratch /lscratch/$PBS_JOBID, execute the myprog.x and copy the output file back to the /home directory. The myprog.x runs on one node only and may use threads. \ No newline at end of file +In this example, some directory on the home holds the input file input and executable myprog.x . We copy input and executable files from the home directory where the qsub was invoked ($PBS_O_WORKDIR) to local scratch /lscratch/$PBS_JOBID, execute the myprog.x and copy the output file back to the /home directory. The myprog.x runs on one node only and may use threads. 
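The jobscript hunk above stages the input and executable into node-local scratch, runs there, and copies the output back. A portable sketch of the same pattern; `mktemp -d` stands in for /lscratch/$PBS_JOBID and a plain `cp` stands in for the real myprog.x, so all file names here are illustrative:

```shell
# Re-creation of the local-scratch jobscript pattern, runnable anywhere.
SCRATCH=$(mktemp -d)                  # stands in for /lscratch/$PBS_JOBID
echo "42" > input                     # example input file
cp input "$SCRATCH"/                  # stage input in, as the jobscript does
( cd "$SCRATCH" && cp input output )  # "compute" step in local scratch (fake myprog.x)
cp "$SCRATCH"/output .                # stage the result back to the submit directory
rm -rf "$SCRATCH"                     # clean up local scratch
cat output                            # prints 42
```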
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md b/docs.it4i/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md index 366dad1cf21742bb8320aef06ec23fbc92a62bdb..9bbdb062eb7cccad3d827c2687af47f6c20a4e99 100644 --- a/docs.it4i/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md +++ b/docs.it4i/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md @@ -35,12 +35,12 @@ The job wall clock time defaults to **half the maximum time**, see table above. Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R). -Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>. +Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>. ### Queue status !!! 
Note "Note" - Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon) + Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)  @@ -129,4 +129,4 @@ Password: -------- ------- ------ -------- ------- OPEN-0-0 1500000 400644 225265 1099356 DD-13-1 10000 2606 2606 7394 -``` \ No newline at end of file +``` diff --git a/docs.it4i/salomon/software/ansys/ansys-cfx.md b/docs.it4i/salomon/software/ansys/ansys-cfx.md index a450ed452af93df8aeffe089c0019775ab52b6e3..93dfb1d2bdb6b1d0f259f0713ab31fcb9444a226 100644 --- a/docs.it4i/salomon/software/ansys/ansys-cfx.md +++ b/docs.it4i/salomon/software/ansys/ansys-cfx.md @@ -1,8 +1,7 @@ ANSYS CFX ========= -[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX) -software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language. +[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX) software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. 
At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language. To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command. @@ -54,4 +53,4 @@ Header of the pbs file (above) is common and description can be find on [this s Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. >Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def **License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**. 
-[More about licensing here](licensing/) \ No newline at end of file +[More about licensing here](licensing/) diff --git a/docs.it4i/salomon/software/ansys/ansys-fluent.md b/docs.it4i/salomon/software/ansys/ansys-fluent.md index aa43f6a9b8ad1229ee37fabb6bcf41366d9f4c36..9011b7d31798a676814ce4d03c490a36bd01a2bd 100644 --- a/docs.it4i/salomon/software/ansys/ansys-fluent.md +++ b/docs.it4i/salomon/software/ansys/ansys-fluent.md @@ -1,7 +1,7 @@ ANSYS Fluent ============ -[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent) +[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent) software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach. 1. Common way to run Fluent over pbs file @@ -39,7 +39,7 @@ NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'` /ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources. 
+Header of the pbs file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends specifying resources via the keywords nodes and ppn. These keywords directly address the number of nodes (computers) and cores per node (ppn) to be utilized in the job; the rest of the code assumes this structure of allocated resources. Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common Fluent journal file which is attached to the Fluent solver via parameter -i fluent.jou @@ -161,4 +161,4 @@ ANSLIC_ADMIN Utility will be run ANSYS Academic Research license should be moved up to the top of the list. - \ No newline at end of file + diff --git a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md index 47c66dd254b0e00bc5964a382d5cfcf92c7a7ab8..24a16a848199c9049b98a6112fecadc2bdb68364 100644 --- a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md +++ b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md @@ -1,7 +1,7 @@ ANSYS LS-DYNA ============= -**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well.
The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. +**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command. @@ -51,6 +51,6 @@ echo Machines: $hl /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn.
These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources. +Header of the pbs file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends specifying resources via the keywords nodes and ppn. These keywords directly address the number of nodes (computers) and cores per node (ppn) to be utilized in the job; the rest of the code assumes this structure of allocated resources. -Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ansys solver via parameter i= \ No newline at end of file +Working directory has to be created before sending the pbs job into the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common LS-DYNA .**k** file, which is passed to the ansys solver via the parameter i= diff --git a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md index 9bb3cdb458d5f14fd0fd13f237cf8edec20d0394..1bbaeae605c9bfbe13f5fbdadaecd9659e8362fc 100644 --- a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md +++ b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md @@ -1,7 +1,7 @@ ANSYS MAPDL =========== -**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)** +**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)** software offers a comprehensive product solution for both multiphysics and single-physics analysis. 
The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver. To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command. @@ -50,8 +50,8 @@ echo Machines: $hl /ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR ``` -Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources. +Header of the pbs file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends specifying resources via the keywords nodes and ppn. These keywords directly address the number of nodes (computers) and cores per node (ppn) to be utilized in the job; the rest of the code assumes this structure of allocated resources. Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ansys solver via parameter -i -**License** should be selected by parameter -p. 
Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN) [More about licensing here](licensing/) \ No newline at end of file +**License** should be selected by parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN) [More about licensing here](licensing/) diff --git a/docs.it4i/salomon/software/ansys/ansys.md b/docs.it4i/salomon/software/ansys/ansys.md index c58517a6cc5327002ebc8a0c471ec4fdb75de349..4140db0e993a6c862fe2695090113d38c8309563 100644 --- a/docs.it4i/salomon/software/ansys/ansys.md +++ b/docs.it4i/salomon/software/ansys/ansys.md @@ -1,7 +1,7 @@ Overview of ANSYS Products ========================== -**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) +**[SVS FEM](http://www.svsfem.cz/)**, as the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of license or by two letter preposition "**aa_**" in the license feature name. 
Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More about licensing here](licensing/)
diff --git a/docs.it4i/salomon/software/ansys/workbench.md b/docs.it4i/salomon/software/ansys/workbench.md
index 929019f1459affda975c4c9f6e93660defd0cf45..d6592d5351fd6c499886af5286059379ed102a05 100644
--- a/docs.it4i/salomon/software/ansys/workbench.md
+++ b/docs.it4i/salomon/software/ansys/workbench.md
@@ -60,4 +60,4 @@ Now, save the project and close Workbench. We will use this script to launch the
 runwb2 -R jou6.wbjn -B -F test9.wbpj
 ```
-The solver settings are saved in file solvehandlers.xml, which is not located in the project directory. Verify your solved settings when uploading a project from your local computer.
\ No newline at end of file
+The solver settings are saved in the file solvehandlers.xml, which is not located in the project directory. Verify your solver settings when uploading a project from your local computer.
diff --git a/docs.it4i/salomon/software/chemistry/molpro.md b/docs.it4i/salomon/software/chemistry/molpro.md
index 15c62090c39e8638f4156e3469e85271148541f4..526dbd6624aea1dbc142e48f33822087575e4dab 100644
--- a/docs.it4i/salomon/software/chemistry/molpro.md
+++ b/docs.it4i/salomon/software/chemistry/molpro.md
@@ -5,13 +5,13 @@ Molpro is a complete system of ab initio programs for molecular electronic struc
 About Molpro
 ------------
-Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
+Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
 License
 -------
 Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (eg.
academic research group licence, parallel execution).
-To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
+To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
 Installed version
 -----------------
@@ -31,7 +31,7 @@ Compilation parameters are default:
 Running
 ------
-Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
+Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using the -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
 !!! Note "Note"
     The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
@@ -61,4 +61,4 @@ You are advised to use the -d option to point to a directory in [SCRATCH filesys # delete scratch directory rm -rf /scratch/$USER/$PBS_JOBID -``` \ No newline at end of file +``` diff --git a/docs.it4i/salomon/software/chemistry/nwchem.md b/docs.it4i/salomon/software/chemistry/nwchem.md index dd257070c2b043549933cdb33db428b0a8ed5f78..8dfa682235e6ecf0dd64734b26a8b29c72ea1000 100644 --- a/docs.it4i/salomon/software/chemistry/nwchem.md +++ b/docs.it4i/salomon/software/chemistry/nwchem.md @@ -7,7 +7,7 @@ Introduction ------------------------- NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters. -[Homepage](http://www.nwchem-sw.org/index.php/Main_Page) +[Homepage](http://www.nwchem-sw.org/index.php/Main_Page) Installed versions ------------------ @@ -41,7 +41,7 @@ Running Options -------------------- -Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives : +Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives : - MEMORY : controls the amount of memory NWChem will use -- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct" \ No newline at end of file +- SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. 
"scf direct"
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index c05ebc563bb65f38fcd9b79037f9e2b7f0c81e35..d23a8b04bda061932e0268d14fbaa59df2978e4b 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -3,7 +3,7 @@ Phono3py
 Introduction
 -------------
-This GPL software calculates phonon-phonon interactions via the third order force constants. It allows to obtain lattice thermal conductivity, phonon lifetime/linewidth, imaginary part of self energy at the lowest order, joint density of states (JDOS) and weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and [http://atztogo.github.io/phono3py/index.html](http://atztogo.github.io/phono3py/index.html)
+This GPL software calculates phonon-phonon interactions via the third-order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetimes/linewidths, the imaginary part of the self-energy at the lowest order, the joint density of states (JDOS) and weighted JDOS. For details see Phys. Rev. B 91, 094306 (2015) and [http://atztogo.github.io/phono3py/index.html](http://atztogo.github.io/phono3py/index.html)
 !!! Note "Note"
     Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
@@ -164,4 +164,4 @@ Finally the thermal conductivity result is produced by grouping single conductiv
 ```bash
 $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --br --read_gamma
-```
\ No newline at end of file
+```
diff --git a/docs.it4i/salomon/software/compilers.md b/docs.it4i/salomon/software/compilers.md
index 03b33da7e84fbb8cb30d35cf0ab259a4fb06d6ce..a5703fd4b70ce95172e11a9e002faeab2424adcc 100644
--- a/docs.it4i/salomon/software/compilers.md
+++ b/docs.it4i/salomon/software/compilers.md
@@ -53,7 +53,7 @@ PGDBG OpenMP/MPI debugger and PGPROF OpenMP/MPI profiler are available
 $ pgprof &
 ```
-For more information, see the [PGI page](http://www.pgroup.com/products/pgicdk.htm).
+For more information, see the [PGI page](http://www.pgroup.com/products/pgicdk.htm). GNU --- @@ -190,4 +190,4 @@ For information how to use Java (runtime and/or compiler), please read the [Java ##nVidia CUDA -For information how to work with nVidia CUDA, please read the [nVidia CUDA page](../../anselm-cluster-documentation/software/nvidia-cuda/). \ No newline at end of file +For information how to work with nVidia CUDA, please read the [nVidia CUDA page](../../anselm-cluster-documentation/software/nvidia-cuda/). diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md index da096283fe339a2c21aed4c4335fa1e4c1285c84..e40a879be096e93cef62514092be8938ee504128 100644 --- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md +++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md @@ -3,13 +3,13 @@ COMSOL Multiphysics® Introduction ------------------------- -[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications. +[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications. 
-- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-- [CFD Module](http://www.comsol.com/cfd-module),
-- [Acoustics Module](http://www.comsol.com/acoustics-module),
-- and [many others](http://www.comsol.com/products)
+- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+- [CFD Module](http://www.comsol.com/cfd-module),
+- [Acoustics Module](http://www.comsol.com/acoustics-module),
+- and [many others](http://www.comsol.com/products)
 COMSOL also allows an interface support for equation-based modelling of partial differential equations.
@@ -117,4 +117,4 @@ cd /apps/cae/COMSOL/51/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/work/user/$USER/work; test_job"
 ```
-This example shows how to run Livelink for MATLAB with following configuration: 3 nodes and 16 cores per node. Working directory has to be created before submitting (comsol_matlab.pbs) job script into the queue. Input file (test_job.m) has to be in working directory or full path to input file has to be specified. The Matlab command option (-r ”mphstart”) created a connection with a COMSOL server using the default port number.
\ No newline at end of file
+This example shows how to run LiveLink for MATLAB with the following configuration: 3 nodes and 16 cores per node. The working directory has to be created before the job script (comsol_matlab.pbs) is submitted into the queue. The input file (test_job.m) has to be in the working directory, or the full path to the input file has to be specified. The Matlab command option (-r "mphstart") creates a connection with a COMSOL server using the default port number.
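The LiveLink for MATLAB run described in the hunk above (3 nodes, 16 cores per node, a pre-created working directory, test_job.m as input) can be sketched as a PBS job script. This is a hedged sketch only — the job name, queue name, module names and versions are assumptions, not a verbatim copy of the comsol_matlab.pbs script mentioned in the text; only the final two commands come from the documentation itself:

```bash
#!/bin/bash
#PBS -N COMSOL_MATLAB          # hypothetical job name
#PBS -l select=3:ncpus=16      # 3 nodes, 16 cores per node, as described above
#PBS -q qprod                  # assumed queue name

# assumed module names; adjust to the versions installed on the cluster
module load COMSOL MATLAB

# the working directory must exist before the job is submitted
cd /scratch/work/user/$USER/work || exit 1

# connect MATLAB to a COMSOL server; test_job.m must be in the working
# directory (or referenced by full path) -- these two lines are from the docs
cd /apps/cae/COMSOL/51/mli
matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/work/user/$USER/work; test_job"
```

The script is submitted with qsub in the usual way; mphstart connects to a running COMSOL server on the default port, so a COMSOL server must be started first as described in the COMSOL section of this documentation.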
diff --git a/docs.it4i/salomon/software/debuggers/Introduction.md b/docs.it4i/salomon/software/debuggers/Introduction.md index 6db621b1c14f8589bdd5b77d5e2d063484e15afa..c85157da67777964414ed2133e82df75fc1a797d 100644 --- a/docs.it4i/salomon/software/debuggers/Introduction.md +++ b/docs.it4i/salomon/software/debuggers/Introduction.md @@ -62,4 +62,4 @@ Vampir is a GUI trace analyzer for traces in OTF format. $ vampir ``` -Read more at the [Vampir](vampir/) page. \ No newline at end of file +Read more at the [Vampir](vampir/) page. diff --git a/docs.it4i/salomon/software/debuggers/aislinn.md b/docs.it4i/salomon/software/debuggers/aislinn.md index 59dfd3bd84138b17cfaee8dbd36e1e54375ff23d..c2a9982448b0bee936940655f406615075d60301 100644 --- a/docs.it4i/salomon/software/debuggers/aislinn.md +++ b/docs.it4i/salomon/software/debuggers/aislinn.md @@ -4,7 +4,7 @@ Aislinn - Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. It allows to detect bugs (for sure) that occurs very rare in normal runs. - Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks. - Aislinn is open-source software; you can use it without any licensing limitations. -- Web page of the project: <http://verif.cs.vsb.cz/aislinn/> +- Web page of the project: <http://verif.cs.vsb.cz/aislinn/> !!! Note "Note" Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experienced any problems, please contact the author: <stanislav.bohm@vsb.cz>. @@ -99,4 +99,4 @@ There are also some limitations bounded to the current version and they will be - All files containing MPI calls have to be recompiled by MPI implementation provided by Aislinn. The files that does not contain MPI calls, they do not have to recompiled. 
Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-side communication is not implemented yet. - Each MPI can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1). -- There are some limitations for using files, but if the program just reads inputs and writes results, it is ok. \ No newline at end of file +- There are some limitations for using files, but if the program just reads inputs and writes results, it is ok. diff --git a/docs.it4i/salomon/software/debuggers/allinea-ddt.md b/docs.it4i/salomon/software/debuggers/allinea-ddt.md index f09f506293fb3d7e157128b45de7730a43a40891..0693e6504c24fb5a2c6b69a76500f9ec36f4ed64 100644 --- a/docs.it4i/salomon/software/debuggers/allinea-ddt.md +++ b/docs.it4i/salomon/software/debuggers/allinea-ddt.md @@ -94,4 +94,4 @@ Users can find original User Guide after loading the DDT module: $DDTPATH/doc/userguide.pdf ``` -[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html) \ No newline at end of file +[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html) diff --git a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md index 63b6b15a370150f655f804d17ad9f27cf342c6ad..6ab49b2d779ee27eef400e8ecbf227d58d01aa68 100644 --- a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md +++ b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md @@ -58,4 +58,4 @@ Now lets profile the code: $ perf-report mpirun ./mympiprog.x ``` -Performance report files 
[mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bounded.
\ No newline at end of file
+Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
diff --git a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
index b4c1a4ad9a962fa12475df6afcd361a300a40c35..c9a1473ef272151d2f47169e6bb29d59bc179306 100644
--- a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -90,6 +90,6 @@ You can obtain this command line by pressing the "Command line..." button on Ana
 References
 ----------
-1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
-2. <https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation> >Intel® VTune™ Amplifier Support
-3. <https://software.intel.com/en-us/amplifier_help_linux>
\ No newline at end of file
+1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
+2. <https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation> Intel® VTune™ Amplifier Support
+3.
<https://software.intel.com/en-us/amplifier_help_linux> diff --git a/docs.it4i/salomon/software/debuggers/total-view.md b/docs.it4i/salomon/software/debuggers/total-view.md index ea7df97257837a03dbd7215a06fb17f4cb04e063..daff0abb0bf8ab4f0078ba6ecdf2511578ac9883 100644 --- a/docs.it4i/salomon/software/debuggers/total-view.md +++ b/docs.it4i/salomon/software/debuggers/total-view.md @@ -14,7 +14,7 @@ On the cluster users can debug OpenMP or MPI code that runs up to 64 parallel pr Debugging of GPU accelerated codes is also supported. -You can check the status of the licenses [here](https://extranet.it4i.cz/rsweb/anselm/license/totalview). +You can check the status of the licenses [here](https://extranet.it4i.cz/rsweb/anselm/license/totalview). Compiling Code to run with TotalView ------------------------------------ @@ -116,7 +116,7 @@ the entire function: **source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl** -You need to do this step only once. See also [OpenMPI FAQ entry](https://www.open-mpi.org/faq/?category=running#run-with-tv) +You need to do this step only once. See also [OpenMPI FAQ entry](https://www.open-mpi.org/faq/?category=running#run-with-tv) Now you can run the parallel debugger using: @@ -151,4 +151,4 @@ More information regarding the command line parameters of the TotalView can be f Documentation ------------- -[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features. \ No newline at end of file +[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview) web page is a good resource for learning more about some of the advanced TotalView features. 
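The TotalView workflow covered by the hunks above (compile with debug info, source the OpenMPI message-queue script once, then launch the parallel debugger) can be summarized in one shell sketch. The module names and the mpirun -tv launch flag are assumptions based on the surrounding documentation and the linked OpenMPI FAQ, not verified commands; only the source line is quoted from the text:

```bash
# hypothetical sketch of a TotalView debugging session on the cluster
module load TotalView
module load OpenMPI/1.10.1-GNU-4.9.3-2.25

# compile with debugging symbols and without optimization
mpicc -g -O0 -o test_debug test_debug.c

# one-time step from the docs: enable MPI message-queue display for OpenMPI
source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl

# launch the parallel debugger under TotalView (assumed flag; see the OpenMPI FAQ)
mpirun -tv -np 4 ./test_debug
```

Running under X forwarding (ssh -X) is needed for the TotalView GUI to display; check the license status page referenced above before starting a session.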
diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md
index c7ab91aeef77006b6c51ae8d657391fe7b38cf88..9c0798d20d79497ffc515b749bf48df3790f436e 100644
--- a/docs.it4i/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i/salomon/software/debuggers/valgrind.md
@@ -5,7 +5,7 @@ About Valgrind
 --------------
 Valgrind is an open-source tool, used mainly for debuggig memory-related problems, such as memory leaks, use of uninitalized memory etc. in C/C++ applications. The toolchain was however extended over time with more functionality, such as debugging of threaded applications, cache profiling, not limited only to C/C++.
-Valgind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). Valgrind uses a virtual machine and dynamic recompilation of binary code, because of that, you can expect that programs being debugged by Valgrind run 5-100 times slower.
+Valgrind is an extremely useful tool for debugging memory errors such as [off-by-one](http://en.wikipedia.org/wiki/Off-by-one_error). Valgrind uses a virtual machine and dynamic recompilation of binary code; because of that, you can expect programs being debugged by Valgrind to run 5-100 times slower.
 The main tools available in Valgrind are :
@@ -14,7 +14,7 @@ The main tools available in Valgrind are :
 - **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
 - **Cachegrind**, a cache profiler.
 - **Callgrind**, a callgraph analyzer.
-- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+- For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
Installed versions
------------------
@@ -263,4 +263,4 @@ Prints this output : (note that there is output printed for every launched MPI p
 ==31319== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
 ```
-We can see that Valgrind has reported use of unitialised memory on the master process (which reads the array to be broadcasted) and use of unaddresable memory on both processes.
\ No newline at end of file
+We can see that Valgrind has reported use of uninitialised memory on the master process (which reads the array to be broadcast) and use of unaddressable memory on both processes.
diff --git a/docs.it4i/salomon/software/debuggers/vampir.md b/docs.it4i/salomon/software/debuggers/vampir.md
index 60d9c4e12c1c66341cc7a8747782d1dc7527d610..c19f105f006d40733b80443f42b11b119db6a626 100644
--- a/docs.it4i/salomon/software/debuggers/vampir.md
+++ b/docs.it4i/salomon/software/debuggers/vampir.md
@@ -20,5 +20,5 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-
 References
 ----------
-1. <https://www.vampir.eu>
+1. <https://www.vampir.eu>
diff --git a/docs.it4i/salomon/software/intel-suite/intel-advisor.md b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
index f8e86de141b212aaeec40e96671a3dc84072d60c..f02d18643c4fed2dc0b61bfbcefb635c45951632 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
@@ -27,6 +27,6 @@ In the left pane, you can switch between Vectorization and Threading workflows.
 References
 ----------
-1. [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/advisorxe_2015_tut_lin_c)
-2. [Product page](https://software.intel.com/en-us/intel-advisor-xe)
-3. [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
\ No newline at end of file
+1.
[Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/advisorxe_2015_tut_lin_c) +2. [Product page](https://software.intel.com/en-us/intel-advisor-xe) +3. [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux) diff --git a/docs.it4i/salomon/software/intel-suite/intel-compilers.md b/docs.it4i/salomon/software/intel-suite/intel-compilers.md index 2ad30c5fae0af41c4b22849cef724f3dd53f2a10..8185db79f5282a63f704b2158ece3d1f177fdc55 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-compilers.md +++ b/docs.it4i/salomon/software/intel-suite/intel-compilers.md @@ -27,11 +27,11 @@ The compiler recognizes the omp, simd, vector and ivdep pragmas for OpenMP paral $ ifort -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec -openmp myprog.f mysubroutines.f -o myprog.x ``` -Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide> +Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide> Sandy Bridge/Ivy Bridge/Haswell binary compatibility ---------------------------------------------------- Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with Haswell based architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only smaller manufacturing technology). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this: - Using compiler flag (both for Fortran and C): -xCORE-AVX2. 
This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes. -- Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. \ No newline at end of file +- Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. diff --git a/docs.it4i/salomon/software/intel-suite/intel-debugger.md b/docs.it4i/salomon/software/intel-suite/intel-debugger.md index bf3005ddd24cdd597b931be77e5c00e412b2f366..c447a62778cfb36bfffc92d5e359d4a905946ef3 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-debugger.md +++ b/docs.it4i/salomon/software/intel-suite/intel-debugger.md @@ -74,4 +74,4 @@ Run the idb debugger in GUI mode. 
The menu Parallel contains number of tools for Further information ------------------- -Exhaustive manual on idb features and usage is published at Intel website, <https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/> \ No newline at end of file +Exhaustive manual on idb features and usage is published at Intel website, <https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/> diff --git a/docs.it4i/salomon/software/intel-suite/intel-inspector.md b/docs.it4i/salomon/software/intel-suite/intel-inspector.md index 81c6abb2b2b9ff785c0424c6865ff3fd216eb2e2..992b7bd15450d1d523b2f149d1fd1e0c2f99b206 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-inspector.md +++ b/docs.it4i/salomon/software/intel-suite/intel-inspector.md @@ -35,6 +35,6 @@ Results obtained from batch mode can be then viewed in the GUI by selecting File References ---------- -1. [Product page](https://software.intel.com/en-us/intel-inspector-xe) -2. [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation) -3. [Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials) \ No newline at end of file +1. [Product page](https://software.intel.com/en-us/intel-inspector-xe) +2. [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation) +3. 
[Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials) diff --git a/docs.it4i/salomon/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i/salomon/software/intel-suite/intel-integrated-performance-primitives.md index 8f57de9bbd9886304c8bfa67183d198c764727d7..b324e2339143e63a43e6de72a86cd0d83682b9db 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-integrated-performance-primitives.md +++ b/docs.it4i/salomon/software/intel-suite/intel-integrated-performance-primitives.md @@ -77,6 +77,6 @@ You will need the ipp module loaded to run the ipp enabled executable. This may Code samples and documentation ------------------------------ -Intel provides number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating use of IPP. +Intel provides number of [Code Samples for IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library), illustrating use of IPP. 
-Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) \ No newline at end of file +Read full documentation on IPP [on Intel website,](http://software.intel.com/sites/products/search/search.php?q=&x=15&y=6&product=ipp&version=7.1&docos=lin) in particular the [IPP Reference manual.](http://software.intel.com/sites/products/documentation/doclib/ipp_sa/71/ipp_manual/index.htm) diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md index b84ad87d8b00346e2fc5ff6d68a93b5368dd4c11..4050e14a882ebb20494a0e7ea84390083bcb571d 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md +++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md @@ -15,7 +15,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e - Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search. - Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver. -For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). +For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). Intel MKL version 11.2.3.187 is available on the cluster @@ -38,7 +38,7 @@ Intel MKL library provides number of interfaces. The fundamental once are the LP ### Linking -Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below. +Linking Intel MKL libraries may be complex. 
Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below. You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include rpath on the compile line: @@ -123,4 +123,4 @@ MKL includes LAPACKE C Interface to LAPACK. For some reason, although Intel is t Further reading --------------- -Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). \ No newline at end of file +Read more on [Intel website](http://software.intel.com/en-us/intel-mkl), in particular the [MKL users guide](https://software.intel.com/en-us/intel-mkl/documentation/linux). diff --git a/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md b/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md index 88787b2443787145d2ac4bc0f21e1d26a2cc8de8..1fc21927a1f199d775f1894c5405069a3ea50069 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md +++ b/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md @@ -67,4 +67,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable $ module load tbb ``` -Read more at the [Intel TBB](intel-tbb/) page. \ No newline at end of file +Read more at the [Intel TBB](intel-tbb/) page. diff --git a/docs.it4i/salomon/software/intel-suite/intel-tbb.md b/docs.it4i/salomon/software/intel-suite/intel-tbb.md index 80e0541c8893e6959d3c9e89d3ac328b70a98ace..3a83e9698bca2eede19a6805ff6435dcbe622ac7 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-tbb.md +++ b/docs.it4i/salomon/software/intel-suite/intel-tbb.md @@ -38,4 +38,4 @@ You will need the tbb module loaded to run the tbb enabled executable. 
This may Further reading --------------- -Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> \ No newline at end of file +Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm> diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md index f90d945e291cedf2fcc42db4c68fdf2f0c0695b6..6d0703ed0929d8436c2c57867aa174cf199f7746 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md +++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md @@ -37,6 +37,6 @@ Please refer to Intel documenation about usage of the GUI tool. References ---------- -1. [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux) -2. [Intel® Trace Analyzer and Collector - Documentation](http://Intel®%20Trace%20Analyzer%20and%20Collector%20-%20Documentation) +1. [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux) +2. [Intel® Trace Analyzer and Collector - Documentation](http://Intel®%20Trace%20Analyzer%20and%20Collector%20-%20Documentation) diff --git a/docs.it4i/salomon/software/intel-xeon-phi.md b/docs.it4i/salomon/software/intel-xeon-phi.md index eba38dca8a6600f0ad8f06e2d29bf01e33b9718d..90b099d99ac7c53180ba1058fb6d7d456160eb06 100644 --- a/docs.it4i/salomon/software/intel-xeon-phi.md +++ b/docs.it4i/salomon/software/intel-xeon-phi.md @@ -242,7 +242,7 @@ Automatic Offload using Intel MKL Library ----------------------------------------- Intel MKL includes an Automatic Offload (AO) feature that enables computationally intensive MKL functions called in user code to benefit from attached Intel Xeon Phi coprocessors automatically and transparently. 
 
-Behavioral of automatic offload mode is controlled by functions called within the program or by environmental variables. Complete list of controls is listed [here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm). 
+Behavioral of automatic offload mode is controlled by functions called within the program or by environmental variables. Complete list of controls is listed [here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm).
 
 The Automatic Offload may be enabled by either an MKL function call within the code:
@@ -256,7 +256,7 @@ or by setting environment variable
 $ export MKL_MIC_ENABLE=1
 ```
 
-To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation). 
+To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation).
 
 ### Automatic offload example
@@ -500,7 +500,7 @@ After executing the complied binary file, following output should be displayed.
 ```
 
 !!! Note "Note"
-    More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/> 
+    More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
 
 The second example that can be found in "/apps/intel/opencl-examples" directory is General Matrix Multiply. You can follow the the same procedure to download the example to your directory and compile it.
@@ -901,4 +901,4 @@ Please note each host or accelerator is listed only per files. User has to speci
 
 Optimization
 ------------
-For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization")
\ No newline at end of file
+For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization")
diff --git a/docs.it4i/salomon/software/java.md b/docs.it4i/salomon/software/java.md
index 66df04d052fbfb76072c71b5eeb574ebc23b3eb3..5600b32c608d0dadd328ce160128587dc1b8dabc 100644
--- a/docs.it4i/salomon/software/java.md
+++ b/docs.it4i/salomon/software/java.md
@@ -25,4 +25,4 @@ With the module loaded, not only the runtime environment (JRE), but also the dev
 $ which javac
 ```
 
-Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [cluster support](https://support.it4i.cz/rt/).
\ No newline at end of file
+Java applications may use MPI for interprocess communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [cluster support](https://support.it4i.cz/rt/).
diff --git a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
index 8a4aa6f2de7f8946b4ee2b7b487cbff2a7e5e218..92c8d475c8123dfa12de47be2be9fc56feb393a6 100644
--- a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
@@ -210,4 +210,4 @@ Some options have changed in OpenMPI version 1.8.
  |--bind-to-socket |--bind-to socket |
  |-bysocket |--map-by socket |
  |-bycore |--map-by core |
- |-pernode |--map-by ppr:1:node |
\ No newline at end of file
+ |-pernode |--map-by ppr:1:node |
diff --git a/docs.it4i/salomon/software/mpi/mpi.md b/docs.it4i/salomon/software/mpi/mpi.md
index fb7826659ba3dc5cbdabef927fc00bc862175e8e..0af8a881a8d37b2210b27b381abcaf44dfd8e8da 100644
--- a/docs.it4i/salomon/software/mpi/mpi.md
+++ b/docs.it4i/salomon/software/mpi/mpi.md
@@ -138,6 +138,6 @@ In the previous two cases with one or two MPI processes per node, the operating
 
 ### Running OpenMPI
 
-The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI. 
+The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI.
 
-The Intel MPI may run on the[Intel Xeon Ph](../intel-xeon-phi/)i accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/).
\ No newline at end of file
+The Intel MPI may run on the[Intel Xeon Ph](../intel-xeon-phi/)i accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/).
diff --git a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
index 74ce3b56fade2949fff0464439ccc3e98e370509..21adfada029ab9f32e459e386bd16bfdab2540f0 100644
--- a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
@@ -93,4 +93,4 @@ Execute the above code as:
 $ mpiexec --map-by core --bind-to core python hello_world.py
 ```
 
-In this example, we run MPI4Py enabled code on 4 nodes, 24 cores per node (total of 96 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.md).
\ No newline at end of file
+In this example, we run MPI4Py enabled code on 4 nodes, 24 cores per node (total of 96 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.md).
diff --git a/docs.it4i/salomon/software/numerical-languages/introduction.md b/docs.it4i/salomon/software/numerical-languages/introduction.md
index e3a66c3111db0d1dfe20a8f3e8699c73646c3dc8..fdd9c0b404da6bc03bd3d5b607b5219957f75eab 100644
--- a/docs.it4i/salomon/software/numerical-languages/introduction.md
+++ b/docs.it4i/salomon/software/numerical-languages/introduction.md
@@ -39,4 +39,4 @@ The R is an interpreted language and environment for statistical computing and g
 $ R
 ```
 
-Read more at the [R page](r/).
\ No newline at end of file
+Read more at the [R page](r/).
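The mpi4py hunk above runs 96 ranks with `--map-by core --bind-to core` on 4 nodes of 24 cores. The placement arithmetic behind that statement can be sketched in plain Python (an illustration of sequential core mapping only, not OpenMPI's actual implementation; the node and core counts are taken from the text):

```python
# Sketch of --map-by core placement: consecutive ranks fill consecutive
# cores of node 0, then node 1, and so on. Counts follow the text above
# (4 nodes, 24 cores per node, 96 ranks) and are illustrative only.

NODES = 4
CORES_PER_NODE = 24

def placement(rank):
    """Return the (node, core) pair a rank is bound to under --map-by core."""
    return rank // CORES_PER_NODE, rank % CORES_PER_NODE

if __name__ == "__main__":
    for rank in (0, 23, 24, 95):
        node, core = placement(rank)
        print(f"rank {rank:2d} -> node {node}, core {core}")
```

Ranks 0-23 fill node 0, rank 24 starts node 1, and so on, which is why binding each of the 96 processes to a distinct core works out exactly.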
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index c3feda0ccba6b841cdb5fc031514cee76cb69d61..f1631d175530f40e4da9b35ee4a9502eee84c540 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -44,7 +44,7 @@ Running parallel Matlab using Distributed Computing Toolbox / Engine
 ------------------------------------------------------------------------
 Distributed toolbox is available only for the EDU variant
 
-The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1). 
+The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
 
 Delete previously used file mpiLibConf.m, we have observed crashes when using Intel MPI.
@@ -215,7 +215,7 @@ This method is a "hack" invented by us to emulate the mpiexec functionality foun
 
 Please note that this method is experimental.
 
-For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine) 
+For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine)
 
 This is an example of m-script using direct mode:
@@ -275,4 +275,4 @@ Since this is a SMP machine, you can completely avoid using Parallel Toolbox and
 
 ### Local cluster mode
 
-You can also use Parallel Toolbox on UV2000. Use l[ocal cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode), "SalomonPBSPro" profile will not work.
\ No newline at end of file
+You can also use Parallel Toolbox on UV2000. Use l[ocal cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode), "SalomonPBSPro" profile will not work.
diff --git a/docs.it4i/salomon/software/numerical-languages/octave.md b/docs.it4i/salomon/software/numerical-languages/octave.md
index 21587e3f4b186afa25c5778d870d6f7aad90663d..a827c9d813e9c5060c730bd7be1bc0de25314eee 100644
--- a/docs.it4i/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i/salomon/software/numerical-languages/octave.md
@@ -1,7 +1,7 @@
 Octave
 ======
 
-GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/> 
+GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>
 
 Two versions of octave are available on the cluster, via module
@@ -54,4 +54,4 @@ The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c c
 $ mkoctfile -v
 ```
 
-Octave may use MPI for interprocess communication This functionality is currently not supported on the cluster cluster. In case you require the octave interface to MPI, please contact our [cluster support](https://support.it4i.cz/rt/).
\ No newline at end of file
+Octave may use MPI for interprocess communication This functionality is currently not supported on the cluster cluster. In case you require the octave interface to MPI, please contact our [cluster support](https://support.it4i.cz/rt/).
diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index 6407d3c16a30e221268553417ace907b62687a79..838b43f86cb0f2a9932ee093b7e5c96ed91ceddc 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -11,7 +11,7 @@ Another convenience is the ease with which the C code or third party libraries m
 
 Extensive support for parallel computing is available within R.
 
-Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html> 
+Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html>
 
 Modules
 -------
@@ -150,7 +150,7 @@ package Rmpi provides an interface (wrapper) to MPI APIs.
 It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](../mpi/Running_OpenMPI/).
 
-Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf> 
+Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>
 
 When using package Rmpi, both openmpi and R modules must be loaded
@@ -404,4 +404,4 @@ By leveraging MKL, R can accelerate certain computations, most notably linear al
 $ export MKL_MIC_ENABLE=1
 ```
 
-[Read more about automatic offload](../intel-xeon-phi/)
\ No newline at end of file
+[Read more about automatic offload](../intel-xeon-phi/)
diff --git a/docs.it4i/salomon/storage/cesnet-data-storage.md b/docs.it4i/salomon/storage/cesnet-data-storage.md
index ff73c3b0ee01746f00b818ee1329e90631d439c1..ab5cd020c1520f2d208c50e8836838d4bd3e3e65 100644
--- a/docs.it4i/salomon/storage/cesnet-data-storage.md
+++ b/docs.it4i/salomon/storage/cesnet-data-storage.md
@@ -6,7 +6,7 @@ Introduction
 Do not use shared filesystems at IT4Innovations as a backup for large amount of data or long-term archiving purposes.
 
 !!! Note "Note"
-../../img/The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/). 
+../../img/The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/).
 
 The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
@@ -14,11 +14,11 @@ User of data storage CESNET (DU) association can become organizations or an indi
 User may only use data storage CESNET for data transfer and storage which are associated with activities in science, research, development, the spread of education, culture and prosperity. In detail see “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”.
 
-The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). 
+The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements please contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
 
 The procedure to obtain the CESNET access is quick and trouble-free.
 
-(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) 
+(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
 
 CESNET storage access
 ---------------------
@@ -26,9 +26,9 @@ CESNET storage access
 ### Understanding Cesnet storage
 
 !!! Note "Note"
-    It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first. 
+    It is very important to understand the Cesnet storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
 
-Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods. 
+Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods.
 
 ### SSHFS Access
@@ -84,7 +84,7 @@ Rsync is a fast and extraordinarily versatile file copying tool. It is famous fo
 Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
 
-More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele> 
+More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
 
 Transfer large files to/from Cesnet storage, assuming membership in the Storage VO
@@ -100,4 +100,4 @@ Transfer large directories to/from Cesnet storage, assuming membership in the St
 $ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
 ```
 
-Transfer rates of about 28MB/s can be expected.
\ No newline at end of file
+Transfer rates of about 28MB/s can be expected.
diff --git a/docs.it4i/salomon/storage/storage.md b/docs.it4i/salomon/storage/storage.md
index 3f96799587aca0fa93babce92da5f8df55bb8c41..00de22d4a31a7eb9cb0989b59b295de516def4b0 100644
--- a/docs.it4i/salomon/storage/storage.md
+++ b/docs.it4i/salomon/storage/storage.md
@@ -46,11 +46,11 @@ Configuration of the SCRATCH Lustre storage
 
 ### Understanding the Lustre Filesystems
 
-(source <http://www.nas.nasa.gov>) 
+(source <http://www.nas.nasa.gov>)
 
 A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
 
-When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. 
+When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
 
 If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
@@ -106,7 +106,7 @@ Another good practice is to make the stripe count be an integral factor of the n
 Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
 
-Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html> 
+Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>
 
 Disk usage and quota commands
 ------------------------------------------
@@ -211,14 +211,14 @@ other::---
 
 Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
 
-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html) 
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
 
 Shared Workspaces
 ---------------------
 
 ###HOME
 
-Users home directories /home/username reside on HOME filesystem. Accessible capacity is 0.5PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. 
+Users home directories /home/username reside on HOME filesystem. Accessible capacity is 0.5PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
 
 !!! Note "Note"
     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
@@ -264,7 +264,7 @@ The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as L
 
 ### TEMP
 
-The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6P, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. >If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. 
+The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6P, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. >If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
 
 !!! Note "Note"
     The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
@@ -320,4 +320,4 @@ Summary
 | /home|home directory|NFS, 2-Tier|0.5 PB|6 GB/s|Quota 250GB|Compute and login nodes|backed up|
 |/scratch/work|large project files|Lustre|1.69 PB|30 GB/s|Quota|Compute and login nodes|none|
 |/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed|
-|/ramdisk|job temporary data, node local|local|120GB|90 GB/s|none|Compute nodes|purged after job ends|
\ No newline at end of file
+|/ramdisk|job temporary data, node local|local|120GB|90 GB/s|none|Compute nodes|purged after job ends|
diff --git a/it4i_theme/assets/stylesheets/application.css b/it4i_theme/assets/stylesheets/application.css
index 28a20a708b927d56155868673bc2a2f1e2c484c3..4366af6fca1814c9b7b70944e5690d8828b12c5b 100644
--- a/it4i_theme/assets/stylesheets/application.css
+++ b/it4i_theme/assets/stylesheets/application.css
@@ -1244,3 +1244,14 @@ h1 .article .headerlink {
   max-height: 600px;
   overflow: auto;
 }
+a:not([href*="//"]) {
+  /* CSS for internal links */
+}
+
+a[href*="//"]:not([href*="https://gitlab.it4i.cz/kru0052/docs.it4i"]) {
+  /*CSS for external links */
+  background: transparent url("/img/external.png") no-repeat right 0px top 1px;
+  background-size: 12px;
+  padding: 1px 16px 1px 0px;
+}
+
diff --git a/it4i_theme/base.html b/it4i_theme/base.html
index addc28a9410305aee03e854befeebb8b5ee22437..c49699ad3a7212c99f275275b76ea88753340977 100644
--- a/it4i_theme/base.html
+++ b/it4i_theme/base.html
@@ -34,7 +34,7 @@
     {% if config.extra.logo %}
       <link rel="apple-touch-icon" href="{{ base_url }}/{{ config.extra.logo }}">
     {% endif %}
-    {% set favicon = favicon | default("assets/images/favicon.ico", true) %}
+    {% set favicon = favicon | default("img/favicon.ico", true) %}
     <link rel="shortcut icon" type="image/x-icon" href="{{ base_url }}/{{ favicon }}">
     <link rel="icon" type="image/x-icon" href="{{ base_url }}/{{ favicon }}">
     <style>
diff --git a/mkdocs.yml b/mkdocs.yml
index 7284593c68aa6fa59cdd8d2508e2bce494b93d57..19872589b2acc1d006ff487516df11836ae6d50d 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,6 +1,5 @@
 site_name: Documentation
 theme_dir: 'it4i_theme'
-extra_css: [css/extra.css]
 docs_dir: docs.it4i
 
 # Copyright
@@ -12,7 +11,7 @@ pages:
 - History of Downtimes: downtimes_history.md
 - Get Started with IT4Innovations:
     - Applying for Resources: get-started-with-it4innovations/applying-for-resources.md
-    - Get Started with IT4Innovations-Obtaining Login Credentials: 
+    - Get Started with IT4Innovations-Obtaining Login Credentials:
        - Obtaining Login Credentials: get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
        - Certificates FAQ: get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
     - Get Started with IT4Innovations-Accessing the Clusters:
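The stylesheet hunk above marks external links by matching `//` in the `href` attribute while excluding the project's own GitLab URL. The same heuristic can be sketched in Python (the helper name and sample hrefs are illustrative, not part of the theme):

```python
# Mirror of the CSS selector logic added above:
#   a:not([href*="//"])  -> internal link (relative URL), no decoration
#   a[href*="//"]:not([href*="https://gitlab.it4i.cz/kru0052/docs.it4i"])
#                        -> external link, gets the marker icon
PROJECT_PREFIX = "https://gitlab.it4i.cz/kru0052/docs.it4i"

def is_external(href):
    """True when the external-link CSS rule above would match this href."""
    return "//" in href and PROJECT_PREFIX not in href

if __name__ == "__main__":
    for href in ("../../storage/storage/",
                 "http://www.open-mpi.org/",
                 PROJECT_PREFIX + "/blob/master/mkdocs.yml"):
        print(href, "->", "external" if is_external(href) else "internal")
```

Protocol-relative and absolute URLs contain `//`, relative documentation links do not — the same distinction the two CSS attribute selectors draw.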