Commit 7dd1632e authored by Lukáš Krupčík

Fix internal links (test)

parent 8b9eb2ce
Pipeline #1083 failed in 19 seconds
......@@ -70,7 +70,7 @@ To establish local proxy server on your workstation, install and run SOCKS proxy
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/)![external](../../img/external.png) server.
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](outgoing-connections.html#port-forwarding-from-login-nodes).
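For instance, a remote forward opened from the workstation might look like this (the choice of port 6000 on the Anselm side is arbitrary, and username is a placeholder):
```bash
local $ ssh -R 6000:localhost:1080 username@anselm.it4i.cz
```
Applications running on Anselm can then use localhost:6000 as the SOCKS proxy endpoint.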
......
......@@ -50,11 +50,11 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.anselm ~]$
```
>The environment is **not** shared between login nodes, except for [shared filesystems](../storage-1.html#section-1).
>The environment is **not** shared between login nodes, except for [shared filesystems](../storage/storage/#section-1).
Data Transfer
-------------
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy)![external](../../img/external.png) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
|Address|Port|Protocol|
|---|---|---|
......@@ -104,7 +104,7 @@ $ man scp
$ man sshfs
```
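For example, a bulk transfer to the dedicated data mover node mentioned above might look like this (the local path and username are placeholders):
```bash
local $ scp -r ./mydata username@dm1.anselm.it4i.cz:/home/username/
```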
On Windows, use the [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disk.
On Windows, use the [WinSCP client](http://winscp.net/eng/download.php)![external](../../img/external.png) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/)![external](../../img/external.png) provides a way to mount the Anselm filesystems directly as an external disk.
More information about the shared file systems is available [here](../../storage.html).
......@@ -5,7 +5,7 @@ Accessing IT4Innovations internal resources via VPN
---------------------------------------------------
>**"Failed to initialize connection subsystem" on Win 8.1 after the 02-10-15 MS patch**
A workaround can be found at [https://docs.it4i.cz/vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html)
A workaround can be found at [vpn-connection-fail-in-win-8.1](../../vpn-connection-fail-in-win-8.1.html)
To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
......
......@@ -76,7 +76,7 @@ PrgEnv-intel sets up the INTEL development environment in conjunction with the I
### Application Modules Path Expansion
All application modules on the Salomon cluster (and beyond) will be built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). If you want to use applications that are already built with EasyBuild, you have to modify your MODULEPATH environment variable.
All application modules on the Salomon cluster (and beyond) will be built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild")![external](../img/external.png). If you want to use applications that are already built with EasyBuild, you have to modify your MODULEPATH environment variable.
```bash
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
......
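Once the path is appended, the EasyBuild-built modules become visible to the module tool alongside the defaults; for instance (the module name below is purely hypothetical):
```bash
$ module avail
$ module load Python/2.7.9-intel-2015b   # hypothetical EasyBuild-built module
```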
......@@ -3,7 +3,7 @@ Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209], of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 320TB of /home disk storage for user files. The 146TB shared /scratch storage is available for scratch data.
The fat nodes are equipped with a large amount (512GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
The fat nodes are equipped with a large amount (512GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request](https://support.it4i.cz/rt)![external](../img/external.png) made by a PI.
Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
......@@ -518,5 +518,4 @@ The parameters are summarized in the following tables:
|MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi 5110P|
|Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/storage/), and [Network](network/).
\ No newline at end of file
......@@ -3,7 +3,7 @@ Introduction
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15TB RAM, giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64GB RAM, and a 500GB hard drive. Nodes are interconnected by a fully non-blocking fat-tree Infiniband network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
The cluster runs the [bullx Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) [operating system](software/operating-system/), which is compatible with the [RedHat Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
The cluster runs the bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)![external](../img/external.png)) [operating system](software/operating-system/), which is compatible with the [RedHat Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)![external](../img/external.png). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
User data shared file-system (HOME, 320TB) and job data shared file-system (SCRATCH, 146TB) are available to users.
......
Network
=======
All compute and login nodes of Anselm are interconnected by an [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data.
All compute and login nodes of Anselm are interconnected by an [Infiniband](http://en.wikipedia.org/wiki/InfiniBand)![external](../img/external.png) QDR network and by a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)![external](../img/external.png) network. Both networks may be used to transfer user data.
Infiniband Network
------------------
All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand)![external](../img/external.png) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
The compute nodes may be accessed via the Infiniband network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish native Infiniband connections among the nodes.
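For illustration, assuming the range above maps node cnNN to address 10.2.1.NN (an assumption, not a documented guarantee), a node could be reached over the Infiniband fabric like this:
```bash
$ ssh 10.2.1.17   # hypothetical ib0 address of compute node cn17
```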
......
......@@ -5,11 +5,11 @@ Intro
-----
PRACE users coming to Anselm as a TIER-1 system offered through the DECI calls are in general treated as standard users, and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/) if the same level of access is required.
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing with the local documentation here.
All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/)![external](../img/external.png) should be read before continuing with the local documentation here.
Help and Support
--------------------
If you have any trouble, need information, want to request support, or want to install additional software, please use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
If you have any trouble, need information, want to request support, or want to install additional software, please use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/)![external](../img/external.png).
Information about the local services is provided in the [introduction of the general user documentation](introduction/). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
......@@ -30,11 +30,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
- [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)![external](../img/external.png)
- [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)![external](../img/external.png)
- [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)![external](../img/external.png)
- [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)![external](../img/external.png)
- [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)![external](../img/external.png)
Before you start using any of the services, don't forget to create a proxy certificate from your certificate:
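With standard Globus tooling (an assumption here; the PRACE FAQs linked above are authoritative), the proxy certificate would typically be created with:
```bash
$ grid-proxy-init
```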
......@@ -209,7 +209,7 @@ For production runs always use scratch file systems, either the global shared or
All system-wide installed software on the cluster is made available to the users via modules. Information about the environment and module usage is in this [section of the general documentation](environment-and-modules/).
PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production).
PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/PRACE-common-production)![external](../img/external.png).
```bash
$ module load prace
......@@ -231,13 +231,13 @@ qprace**, the PRACE \***: This queue is intended for normal production runs. It
### Accounting & Quota
The resources that are currently subject to accounting are the core hours. The core hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See the [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/).
The resources that are currently subject to accounting are the core hours. The core hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See the [example in the general documentation](resource-allocation-and-job-execution/resources-allocation-policy/)![external](../img/external.png).
PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/)![external](../img/external.png).
Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time how many core-hours have been consumed by themselves and their projects, using the command "it4ifree". Please note that you need to know your user password to use the command, and that the displayed core hours are "system core hours", which differ from PRACE "standardized core hours".
>The **it4ifree** command is a part of the it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
>The **it4ifree** command is a part of the it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>![external](../img/external.png)
```bash
$ it4ifree
......
......@@ -30,7 +30,7 @@ How to use the service
### Setup and start your own TurboVNC server.
TurboVNC is designed and implemented for cooperation with VirtualGL and is available free of charge for all major platforms. For more information and downloads, please refer to: <http://sourceforge.net/projects/turbovnc/>
TurboVNC is designed and implemented for cooperation with VirtualGL and is available free of charge for all major platforms. For more information and downloads, please refer to: <http://sourceforge.net/projects/turbovnc/>![external](../img/external.png)
**Always use TurboVNC on both sides** (server and client) and **don't mix TurboVNC with other VNC implementations** (TightVNC, TigerVNC, ...), as the VNC protocol implementations may slightly differ and diminish your user experience by introducing picture artifacts, etc.
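A minimal sketch of starting the server, assuming TurboVNC's vncserver is on the PATH (the display number and geometry are arbitrary choices):
```bash
$ vncserver :1 -geometry 1600x900 -depth 24
```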
......
......@@ -9,13 +9,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
>Please follow one of the procedures below in case you wish to schedule more than 100 jobs at a time.
- Use [Job arrays](capacity-computing.html#job-arrays) when running a huge number of [multithread](capacity-computing.html#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
- Use [GNU parallel](capacity-computing.html#gnu-parallel) when running single-core jobs
- Combine [GNU parallel with Job arrays](capacity-computing.html#combining-job-arrays-and-gnu-parallel) when running a huge number of single-core jobs
- Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
- Use [GNU parallel](capacity-computing/#gnu-parallel) when running single-core jobs
- Combine [GNU parallel with Job arrays](capacity-computing/#combining-job-arrays-and-gnu-parallel) when running a huge number of single-core jobs
Policy
------
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing.html#job-arrays).
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
Job arrays
......@@ -75,7 +75,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
### Submit the job array
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing.html#array_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this:
```bash
$ qsub -N JOBNAME -J 1-900 jobscript
......@@ -144,7 +144,7 @@ Display status information for all user's subjobs.
$ qstat -u $USER -tJ
```
Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation.html).
Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/sitemap/).
GNU parallel
----------------
......@@ -205,7 +205,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The job
### Submit the job
To submit the job, use the qsub command. The 101-task job of the [example above](capacity-computing.html#gp_example) may be submitted like this:
To submit the job, use the qsub command. The 101-task job of the [example above](capacity-computing/#gp_example) may be submitted like this:
```bash
$ qsub -N JOBNAME jobscript
......@@ -286,7 +286,7 @@ When deciding this values, think about following guiding rules:
### Submit the job array
To submit the job array, use the qsub -J command. The 992-task job of the [example above](capacity-computing.html#combined_example) may be submitted like this:
To submit the job array, use the qsub -J command. The 992-task job of the [example above](capacity-computing/#combined_example) may be submitted like this:
```bash
$ qsub -N JOBNAME -J 1-992:32 jobscript
......@@ -299,7 +299,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Examples
--------
Download the examples in [capacity.zip](capacity-computing-examples), illustrating the ways listed above to run a huge number of jobs. We recommend trying out the examples before using this approach for production jobs.
Download the examples in [capacity.zip](capacity.zip), illustrating the ways listed above to run a huge number of jobs. We recommend trying out the examples before using this approach for production jobs.
Unzip the archive in an empty directory on Anselm and follow the instructions in the README file.
......
......@@ -18,7 +18,7 @@ Queue priority is priority of queue where job is queued before execution.
Queue priority has the biggest impact on job execution priority. The execution priority of jobs in higher priority queues is always greater than the execution priority of jobs in lower priority queues. Other properties of a job used for determining job execution priority (fairshare priority, eligible time) cannot compete with queue priority.
Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues>
Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues>![external](../../img/external.png)
### Fairshare priority
......@@ -34,7 +34,7 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb
Usage counts allocated core-hours (ncpus*walltime). Usage is decayed, or cut in half, periodically at an interval of 168 hours (one week). Jobs queued in the qexp queue are not counted toward a project's usage.
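For example, a job holding 2 nodes (32 cores) for 6 hours adds 32 × 6 = 192 core-hours to the project's usage; one week later that contribution has decayed to 96 core-hours, after two weeks to 48, and so on.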
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/anselm/projects>.![external](../../img/external.png)
The calculated fairshare priority can also be seen as the Resource_List.fairshare attribute of a job.
......
......@@ -48,7 +48,7 @@ $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. The jobscript myjob will be executed on the first node of the allocation.
All qsub options may be [saved directly into the jobscript](job-submission-and-execution.html#PBSsaved). In such a case, no options to qsub are needed.
All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed.
```bash
$ qsub ./myjob
......@@ -90,9 +90,9 @@ In this example, we allocate 4 nodes, 16 cores, selecting only the nodes with In
### Placement by IB switch
Groups of computational nodes are connected to chassis-integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network.html) fat-tree topology. Nodes sharing a leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
Groups of computational nodes are connected to chassis-integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network/) fat-tree topology. Nodes sharing a leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](../hardware-overview.html) section.
Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](../hardware-overview/) section.
We recommend allocating the compute nodes of a single switch when the best possible computational network performance is required to run the job efficiently:
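For example (isw20 is a hypothetical switch number; consult the node-switch mapping first):
```bash
$ qsub -A OPEN-0-0 -q qfree -l select=4:ncpus=16:ibswitch=isw20 ./myjob
```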
......@@ -334,7 +334,7 @@ exit
In this example, some directory on /home holds the input file input and the executable mympiprog.x. We create the directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
>Consider preloading inputs and executables onto [shared scratch](../storage.html) before the calculation starts.
>Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on shared /scratch before the job submission and to retrieve the outputs manually after all calculations are finished.
......@@ -365,14 +365,14 @@ exit
In this example, input and executable files are assumed preloaded manually in /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options, controlling behavior of the MPI execution. The mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.
More information is found in the [Running OpenMPI](../software/mpi-1/Running_OpenMPI.html) and [Running MPICH2](../software/mpi-1/running-mpich2.html)
More information is found in the [Running OpenMPI](../software/mpi/Running_OpenMPI/) and [Running MPICH2](../software/mpi/running-mpich2/)
sections.
### Example Jobscript for Single Node Calculation
>The local scratch directory is often useful for single-node jobs. Local scratch will be deleted immediately after the job ends.
Example jobscript for a single-node calculation, using [local scratch](../storage.html) on the node:
Example jobscript for a single-node calculation, using [local scratch](../storage/storage/) on the node:
```bash
#!/bin/bash
......@@ -398,4 +398,4 @@ In this example, some directory on the home holds the input file input and execu
### Other Jobscript Examples
Further jobscript examples may be found in the [Software](../software.1.html) section and the [Capacity computing](capacity-computing.html) section.
\ No newline at end of file
Further jobscript examples may be found in the software section and the [Capacity computing](capacity-computing/) section.
\ No newline at end of file
......@@ -3,7 +3,7 @@ Resources Allocation Policy
Resources Allocation Policy
---------------------------
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and the resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority.html) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
The resources are allocated to the job in a fairshare fashion, subject to constraints set by the queue and the resources available to the Project. The Fairshare at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
|queue|active project|project resources|nodes|min ncpus*|priority|authorization|walltime|
|---|---|---|---|---|---|---|---|
......@@ -25,15 +25,15 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
### Notes
The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution.html).
The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queued jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
Anselm users may check the current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
Anselm users may check the current queue configuration at <https://extranet.it4i.cz/anselm/queues>![external](../../img/external.png).
### Queue status
>Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>
>Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>![external](../../img/external.png)
![rspbs web interface](rsweb.png)
......@@ -106,12 +106,12 @@ Resources Accounting Policy
### The Core-Hour
The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts for 16 core-hours. See the example in the [Job submission and execution](job-submission-and-execution.html) section.
The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts for 16 core-hours. See the example in the [Job submission and execution](job-submission-and-execution/) section.
### Check consumed resources
>The **it4ifree** command is a part of the it4i.portal.clients package, located here:
<https://pypi.python.org/pypi/it4i.portal.clients>
<https://pypi.python.org/pypi/it4i.portal.clients>![external](../../img/external.png)
Users may check at any time how many core-hours have been consumed by themselves and their projects. The command is available on the clusters' login nodes.
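Since the package is published on PyPI (see the link above), a per-user installation might look like this (a sketch, not an official installation procedure):
```bash
$ pip install --user it4i.portal.clients
$ it4ifree
```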
......
ANSYS CFX
=========
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)![external](../../../img/external.png)
software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode, you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
......@@ -49,9 +49,9 @@ echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
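The script is then submitted as described above:
```bash
$ qsub cfx.pbs
```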
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). SVS FEM recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common CFX .def file, which is passed to the CFX solver via the -def parameter.
The **license** should be selected via the -P parameter (capital **P**). The licensed products are: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics, **commercial**).
[More about licensing here](licensing.md)
\ No newline at end of file
[More about licensing here](licensing/)
\ No newline at end of file
ANSYS Fluent
============
[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)
[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)![external](../../../img/external.png)
software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
1. Common way to run Fluent via a PBS file
......
ANSYS LS-DYNA
=============
**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single, fully interactive, modern graphical user environment.
**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)![external](../../../img/external.png)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single, fully interactive, modern graphical user environment.
To run ANSYS LS-DYNA in batch mode, you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
......@@ -51,6 +51,6 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz)![external](../../../img/external.png) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common LS-DYNA **.k** file, which is passed to the ANSYS solver via the parameter i=
\ No newline at end of file
ANSYS MAPDL
===========
**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)![external](../../../img/external.png)**
software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
To run ANSYS MAPDL in batch mode, you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
......@@ -50,9 +50,9 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz)![external](../../../img/external.png) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common APDL file, which is passed to the ANSYS solver via the -i parameter.
The **license** should be selected via the -p parameter. The licensed products are: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics, **commercial**), and aa_r_dy (ANSYS **Academic** AUTODYN).
[More about licensing here](licensing.md)
\ No newline at end of file
[More about licensing here](licensing/)
\ No newline at end of file
Overview of ANSYS Products
==========================
**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA, ...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
**[SVS FEM](http://www.svsfem.cz/)![external](../../../img/external.png)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA, ...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name, or by the two-letter prefix "**aa_**" in the license feature name. The license is changed on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing.html)
Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name, or by the two-letter prefix "**aa_**" in the license feature name. The license is changed on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing/)
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
......
LS-DYNA
=======
[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to successfully solve many complex problems. Additionally, LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing.
[LS-DYNA](http://www.lstc.com/)![external](../../../img/external.png) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to successfully solve many complex problems. Additionally, LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing.
Anselm currently provides **1 commercial license of LS-DYNA without HPC support**.
......@@ -31,6 +31,6 @@ module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The header of the PBS file (above) is common, and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz)![external](../../../img/external.png) recommends utilizing resources via the keywords nodes and ppn. These keywords allow directly addressing the number of nodes (computers) and cores (ppn) to be utilized by the job. The rest of the code also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the parameter i=
\ No newline at end of file
......@@ -5,13 +5,13 @@ Molpro is a complete system of ab initio programs for molecular electronic struc
About Molpro
------------
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/)![external](../../../img/external.png).
License
-------
The Molpro software package is available only to users who have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group license, parallel execution).
To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee)![external](../../../img/external.png).
Installed version
-----------------
......@@ -31,11 +31,11 @@ Compilation parameters are default:
Running
------
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html)![external](../../../img/external.png) for more details.
>The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory on the [SCRATCH filesystem](../../storage.md). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
You are advised to use the -d option to point to a directory on the [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
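A sketch of such an invocation (the input file name and scratch path are placeholders; the molpro launcher is assumed to be on the PATH after loading the module):
```bash
$ molpro -d /scratch/$USER/molpro_tmp input.inp
```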
### Example jobscript
......
......@@ -7,7 +7,7 @@ Introduction
-------------------------
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)![external](../../../img/external.png)
Installed versions
------------------
......@@ -40,7 +40,7 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
Options
--------------------
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level)![external](../../../img/external.png) and set the following directives in the input file:
- MEMORY: controls the amount of memory NWChem will use
- SCRATCH_DIR: set this to a directory on the [SCRATCH filesystem](../../storage.md#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
\ No newline at end of file
- SCRATCH_DIR: set this to a directory on the [SCRATCH filesystem](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (a sketch of both directives follows below)
\ No newline at end of file
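A minimal sketch of the two directives at the top of an input file (the values are placeholders):
```bash
# hypothetical: write memory/scratch directives into an NWChem input file
cat > directives.nw <<EOF
memory 1000 mb
scratch_dir /scratch/$USER/nwchem
EOF
```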
......@@ -15,7 +15,7 @@ The C/C++ and Fortran compilers are divided into two main groups GNU and Intel.
Intel Compilers
---------------
For information about the usage of Intel compilers and other Intel products, please read the [Intel Parallel Studio](intel-suite.html) page.
For information about the usage of Intel compilers and other Intel products, please read the [Intel Parallel Studio](intel-suite/) page.
GNU C/C++ and Fortran Compilers
-------------------------------
......@@ -148,8 +148,8 @@ For more informations see the man pages.
Java
----
For information on how to use Java (runtime and/or compiler), please read the [Java page](java.html).
For information on how to use Java (runtime and/or compiler), please read the [Java page](java/).
nVidia CUDA
-----------
For information on how to work with nVidia CUDA, please read the [nVidia CUDA page](nvidia-cuda.html).
\ No newline at end of file
For information on how to work with nVidia CUDA, please read the [nVidia CUDA page](nvidia-cuda/).
\ No newline at end of file
......@@ -3,15 +3,15 @@ COMSOL Multiphysics®
Introduction
-------------------------
[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many
[COMSOL](http://www.comsol.com)![external](../../img/external.png) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many
standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical
applications.
- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
- [CFD Module](http://www.comsol.com/cfd-module),
- [Acoustics Module](http://www.comsol.com/acoustics-module),
- and [many others](http://www.comsol.com/products)
- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module)![external](../../img/external.png),
- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module)![external](../../img/external.png),
- [CFD Module](http://www.comsol.com/cfd-module)![external](../../img/external.png),
- [Acoustics Module](http://www.comsol.com/acoustics-module)![external](../../img/external.png),
- and [many others](http://www.comsol.com/products)![external](../../img/external.png)
COMSOL also allows an interface support for equation-based modelling of partial differential equations.
......@@ -34,7 +34,7 @@ By default the **EDU variant** will be loaded. If user needs other version or va
$ module avail comsol
```
If the user needs to prepare COMSOL jobs interactively, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use the [Virtual Network Computing (VNC)](https://docs.it4i.cz/anselm-cluster-documentation/software/comsol/resolveuid/11e53ad0d2fd4c5187537f4baeedff33).
If the user needs to prepare COMSOL jobs interactively, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use Virtual Network Computing (VNC).
```bash
$ xhost +
......@@ -76,7 +76,7 @@ LiveLink™* *for MATLAB®^
-------------------------
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL**®** API (Application Programming Interface) with the benefits of the programming language and computing environment of MATLAB.
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On Anselm, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../isv_licenses.html)).
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On Anselm, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../isv_licenses/)).
The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode.
```bash
......
......@@ -94,4 +94,4 @@ Users can find original User Guide after loading the DDT module:
$DDTPATH/doc/userguide.pdf
```
[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)
\ No newline at end of file
[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)![external](../../../img/external.png)
\ No newline at end of file
......@@ -13,7 +13,6 @@ Our license is limited to 64 MPI processes.
Modules
-------
Allinea Performance Reports version 6.0 is available
```bash
......@@ -26,13 +25,13 @@ Usage
-----
>Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi-1.md), use the perf-report wrapper:
Instead of [running your MPI program the usual way](../mpi/), use the perf-report wrapper:
```bash
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. perf-report creates two additional files, in *.txt and *.html formats, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution.md).
The MPI program will run as usual. perf-report creates two additional files, in *.txt and *.html formats, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
Example
-------
......@@ -59,4 +58,4 @@ Now lets profile the code:
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
\ No newline at end of file
Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt)![external](../../../img/external.png) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html)![external](../../../img/external.png) were created. We can see that the code is very efficient on MPI and is CPU bound.
\ No newline at end of file
......@@ -19,19 +19,19 @@ Each node in the tree is colored by severity (the color scheme is displayed at t
Installed versions
------------------
Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules.html):
Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/):
- cube/4.2.3-gcc, compiled with GCC
- cube/4.2.3-icc, compiled with Intel compiler
Usage
-----
CUBE is a graphical application. Refer to [Graphical User Interface documentation](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for a list of methods to launch graphical applications on Anselm.
CUBE is a graphical application. Refer to Graphical User Interface documentation for a list of methods to launch graphical applications on Anselm.
>Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on the login nodes.
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
References
1. <http://www.scalasca.org/software/cube-4.x/download.html>
1. <http://www.scalasca.org/software/cube-4.x/download.html>![external](../../../img/external.png)
......@@ -7,14 +7,14 @@ We provide state of the art programms and tools to develop, profile and debug HP