Commit d0e6f7a6 authored by Lukáš Krupčík

repair external and internal links

parent e52e6213
Showing 98 additions and 68 deletions
......@@ -5,7 +5,8 @@ Interactive Login
-----------------
The Salomon cluster is accessed by SSH protocol via login nodes login1, login2, login3 and login4 at address salomon.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
>The alias >salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN.
!!! Note "Note"
The alias >salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN.
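For illustration, a connection might look like this (a sketch; `username` is a placeholder for your IT4I login):

```bash
# connect via the round-robin alias
$ ssh username@salomon.it4i.cz

# or address a particular login node directly (e.g. when connected via VPN)
$ ssh username@login2.salomon.it4i.cz
```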
|Login address|Port|Protocol|Login node|
|---|---|---|---|
......@@ -55,7 +56,8 @@ Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.salomon ~]$
```
>The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage/).
!!! Note "Note"
The environment is **not** shared between login nodes, except for [shared filesystems](storage/storage/).
Data Transfer
-------------
......
......@@ -12,7 +12,8 @@ Outgoing connections, from Salomon Cluster login nodes to the outside world, are
|443|https|
|9418|git|
>Please use **ssh port forwarding** and proxy servers to connect from Salomon to all other remote ports.
!!! Note "Note"
Please use **ssh port forwarding** and proxy servers to connect from Salomon to all other remote ports.
Outgoing connections from Salomon Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
......@@ -21,7 +22,8 @@ Port forwarding
### Port forwarding from login nodes
>Port forwarding allows an application running on Salomon to connect to arbitrary remote host and port.
!!! Note "Note"
Port forwarding allows an application running on Salomon to connect to arbitrary remote host and port.
It works by tunneling the connection from Salomon back to the user's workstation and forwarding from the workstation to the remote host.
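A minimal sketch of such a tunnel, set up from your workstation (the host remote.host.com and ports 6000/1234 match the example referenced below; the exact command in the documentation may differ):

```bash
# on your workstation: expose remote.host.com:1234 as port 6000 on login1,
# tunnelled back through your workstation (remote port forwarding)
$ ssh -R 6000:remote.host.com:1234 username@login1.salomon.it4i.cz
```

An application on Salomon can then connect to localhost:6000 on login1 instead of the blocked remote port.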
......@@ -61,7 +63,8 @@ In this example, we assume that port forwarding from login1:6000 to remote.host.
Port forwarding is static: each port is mapped to a particular port on the remote host. A connection to another remote host requires a new forward.
>Applications with inbuilt proxy support, experience unlimited access to remote hosts, via single proxy server.
!!! Note "Note"
    Applications with built-in proxy support have unlimited access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:
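With OpenSSH this can be done, for example, as follows (a sketch; the documentation's own command is not shown in this excerpt):

```bash
# on your workstation: start a SOCKS proxy listening on local port 1080,
# backed by the local sshd
$ ssh -D 1080 localhost
```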
......
......@@ -3,7 +3,6 @@ VPN Access
Accessing IT4Innovations internal resources via VPN
---------------------------------------------------
To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
- Windows XP
......@@ -17,7 +16,6 @@ It is impossible to connect to VPN from other operating systems.
VPN client installation
------------------------------------
You can install the VPN client from the web interface after a successful login with LDAP credentials at <https://vpn.it4i.cz/user>![external](../../img/external.png)
![](vpn_web_login.png)
......
......@@ -24,7 +24,8 @@ then
fi
```
>Do not run commands outputing to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
!!! Note "Note"
    Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Test for SSH session interactivity before running such commands, as shown in the previous example.
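A minimal sketch of such a guard (one common pattern; the exact test used in the documentation's example may differ):

```bash
# ~/.bashrc
# produce output and list modules only in interactive sessions
if [ -n "$PS1" ]; then
    echo "Welcome on Salomon"
    module list
fi
```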
### Application Modules
......@@ -56,7 +57,8 @@ Application modules on Salomon cluster are built using [EasyBuild](http://hpcuge
vis: Visualization, plotting, documentation and typesetting
```
>The modules set up the application paths, library paths and environment variables for running particular application.
!!! Note "Note"
The modules set up the application paths, library paths and environment variables for running particular application.
The modules may be loaded, unloaded and switched, according to momentary needs.
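For illustration (the module name and version below are placeholders, not necessarily available on the cluster):

```bash
# list currently loaded modules
$ module list

# load, switch and unload an application module
$ module load GCC
$ module swap GCC GCC/5.4.0    # hypothetical version string
$ module unload GCC
```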
......
......@@ -19,7 +19,7 @@ Each colour in each physical IRU represents one dual-switch ASIC switch.
Each of the 3 inter-connected D racks is equivalent to one half of an Mcell rack. The 18 D racks with MIC accelerated nodes [r21-r38] are equivalent to 3 Mcell racks, as shown in the diagram [7D Enhanced Hypercube](7d-enhanced-hypercube/).
As shown in a diagram ![IB Topology](Salomon_IB_topology.png):
As shown in a diagram ![IB Topology](Salomon_IB_topology.png)
- Racks 21, 22, 23, 24, 25, 26 are equivalent to one Mcell rack.
- Racks 27, 28, 29, 30, 31, 32 are equivalent to one Mcell rack.
......
......@@ -55,13 +55,13 @@ To access Salomon cluster, two login nodes running GSI SSH service are available
It is recommended to use the single DNS name salomon-prace.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
|Login address|Port|Protocol|Login node|
|---|---|
|salomon-prace.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
|login1-prace.salomon.it4i.cz|2222|gsissh|login1|
|login2-prace.salomon.it4i.cz|2222|gsissh|login2|
|login3-prace.salomon.it4i.cz|2222|gsissh|login3|
|login4-prace.salomon.it4i.cz|2222|gsissh|login4|
|Login address|Port|Protocol|Login node|
|---|---|---|---|
|salomon-prace.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
|login1-prace.salomon.it4i.cz|2222|gsissh|login1|
|login2-prace.salomon.it4i.cz|2222|gsissh|login2|
|login3-prace.salomon.it4i.cz|2222|gsissh|login3|
|login4-prace.salomon.it4i.cz|2222|gsissh|login4|
```bash
$ gsissh -p 2222 salomon-prace.it4i.cz
......@@ -251,7 +251,8 @@ PRACE users should check their project accounting using the [PRACE Accounting To
Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time how many core-hours have been consumed by themselves and their projects, using the command "it4ifree". Please note that you need to know your user password to use the command, and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
>The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>![external](../img/external.png)
!!! Note "Note"
The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>![external](../img/external.png)
```bash
$ it4ifree
......
......@@ -7,7 +7,8 @@ In many cases, it is useful to submit huge (100+) number of computational jobs i
However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**.
>Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
!!! Note "Note"
Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
- Use [Job arrays](capacity-computing.md#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
- Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
......@@ -20,7 +21,8 @@ Policy
Job arrays
--------------
>Huge number of jobs may be easily submitted and managed as a job array.
!!! Note "Note"
Huge number of jobs may be easily submitted and managed as a job array.
A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
......@@ -150,7 +152,8 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
GNU parallel
----------------
>Use GNU parallel to run many single core tasks on one node.
!!! Note "Note"
Use GNU parallel to run many single core tasks on one node.
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single core jobs via the queue system on Salomon.
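For example, a generic invocation might look like this (a sketch; myprog.x and tasklist stand for your own program and task list):

```bash
# run ./myprog.x once for every line in tasklist,
# keeping all cores of the node busy
$ cat tasklist | parallel ./myprog.x {}
```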
......@@ -220,11 +223,13 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Job arrays and GNU parallel
-------------------------------
>Combine the Job arrays and GNU parallel for best throughput of single core jobs
!!! Note "Note"
Combine the Job arrays and GNU parallel for best throughput of single core jobs
While job arrays are able to utilize all available computational nodes, GNU parallel can be used to efficiently run multiple single-core jobs on a single node. The two approaches may be combined to utilize all available (current and future) resources to execute single core jobs.
>Every subjob in an array runs GNU parallel to utilize all cores on the node
!!! Note "Note"
Every subjob in an array runs GNU parallel to utilize all cores on the node
### GNU parallel, shared jobscript
......@@ -278,7 +283,8 @@ cp output $PBS_O_WORKDIR/$TASK.out
In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
>Select subjob walltime and number of tasks per subjob carefully
!!! Note "Note"
Select subjob walltime and number of tasks per subjob carefully
When deciding these values, consider the following guiding rules:
......
......@@ -14,13 +14,15 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
- **qfat**, the queue to access SMP UV2000 machine
- **qfree**, the Free resource utilization queue
>Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
!!! Note "Note"
Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
Job submission and execution
----------------------------
>Use the **qsub** command to submit your jobs.
!!! Note "Note"
Use the **qsub** command to submit your jobs.
The qsub command submits the job into the queue, i.e. it creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 24 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
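A minimal submission sketch (the project ID, queue and jobscript name are placeholders):

```bash
# request one full node (24 cores) for 1 hour in the production queue
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24,walltime=01:00:00 ./myjob
```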
......
......@@ -33,7 +33,8 @@ where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all memb
Usage counts allocated core-hours (ncpus*walltime). Usage is decayed, or cut in half periodically, at an interval of 168 hours (one week). Jobs queued in the qexp queue are not counted towards a project's usage.
>Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>![external](../../img/external.png).
!!! Note "Note"
Calculated usage and fairshare priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>![external](../../img/external.png).
Calculated fairshare priority can be also seen as Resource_List.fairshare attribute of a job.
......@@ -61,7 +62,8 @@ The scheduler makes a list of jobs to run in order of execution priority. Schedu
This means that jobs with lower execution priority can be run before jobs with higher execution priority.
>It is **very beneficial to specify the walltime** when submitting jobs.
!!! Note "Note"
It is **very beneficial to specify the walltime** when submitting jobs.
Specifying a more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with a suitable (small) walltime may be backfilled - and overtake job(s) with higher priority.
......
......@@ -12,7 +12,8 @@ When allocating computational resources for the job, please specify
5. Project ID
6. Jobscript or interactive switch
>Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
!!! Note "Note"
Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
Submit the job using the qsub command:
......@@ -22,7 +23,8 @@ $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] job
The qsub command submits the job into the queue; in other words, it creates a request to the PBS Job manager for allocation of the specified resources. The resources will be allocated when available, subject to the above described policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.**
>PBS statement nodes (qsub -l nodes=nodespec) is not supported on Salomon cluster.
!!! Note "Note"
    The PBS nodes statement (qsub -l nodes=nodespec) is not supported on the Salomon cluster.
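In other words, use the select syntax instead (a sketch with placeholder values):

```bash
# not supported on Salomon:
#   $ qsub -l nodes=4:ppn=24 ./myjob
# use the select statement instead:
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24 ./myjob
```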
### Job Submission Examples
......@@ -70,9 +72,10 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
### UV2000 SMP
>14 NUMA nodes available on UV2000
Per NUMA node allocation.
Jobs are isolated by cpusets.
!!! Note "Note"
14 NUMA nodes available on UV2000
Per NUMA node allocation.
Jobs are isolated by cpusets.
The UV2000 (node uv1) offers 3328 GB of RAM and 112 cores, distributed over 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236 GB RAM. In PBS, the UV2000 provides 14 chunks, one chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Always, full chunks are allocated; a job may only use resources of the NUMA nodes allocated to itself.
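As a sketch, allocation on the UV2000 via the qfat queue mentioned above might look like this (project ID is a placeholder):

```bash
# allocate one chunk = one NUMA node (8 cores, approx. 236 GB RAM)
$ qsub -A OPEN-0-0 -q qfat -l select=1 ./myjob

# allocate the whole machine (14 chunks = 112 cores)
$ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
```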
......@@ -193,7 +196,8 @@ HTML commented section #1 (turbo boost is to be implemented)
Job Management
--------------
>Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
!!! Note "Note"
Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
```bash
$ qstat -a
......@@ -271,7 +275,8 @@ Run loop 3
In this example, we see actual output (some iteration loops) of the job 35141.dm2
>Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel,** **qsig** or **qalter** commands
!!! Note "Note"
Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel,** **qsig** or **qalter** commands
You may release your allocation at any time, using qdel command
......@@ -296,11 +301,13 @@ Job Execution
### Jobscript
>Prepare the jobscript to run batch jobs in the PBS queue system
!!! Note "Note"
Prepare the jobscript to run batch jobs in the PBS queue system
The jobscript is a user-made script controlling the sequence of commands for executing the calculation. It is often written in bash, but other scripting languages may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and executed by the PBS Professional workload manager.
>The jobscript or interactive shell is executed on first of the allocated nodes.
!!! Note "Note"
The jobscript or interactive shell is executed on first of the allocated nodes.
```bash
$ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
......@@ -316,7 +323,8 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node r21u01n577, while the nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
>The jobscript or interactive shell is by default executed in home directory
!!! Note "Note"
The jobscript or interactive shell is by default executed in home directory
```bash
$ qsub -q qexp -l select=4:ncpus=24 -I
......@@ -329,7 +337,8 @@ $ pwd
In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
>All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to user.
!!! Note "Note"
All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to user.
The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
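For example (node names taken from the qexp example above; availability of pdsh on the cluster is assumed):

```bash
# ssh from the first allocated node to another node of the allocation
$ ssh r21u02n578

# or run a command on several allocated nodes in parallel with pdsh
$ pdsh -w r21u01n577,r21u02n578,r21u03n579,r21u04n580 hostname
```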
......@@ -360,7 +369,8 @@ In this example, the hostname program is executed via pdsh from the interactive
### Example Jobscript for MPI Calculation
>Production jobs must use the /scratch directory for I/O
!!! Note "Note"
Production jobs must use the /scratch directory for I/O
The recommended way to run production jobs is to change to /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy outputs to home directory.
......@@ -391,11 +401,13 @@ exit
In this example, some directory on /home holds the input file input and the executable mympiprog.x. We create a directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where the qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
>Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
!!! Note "Note"
Consider preloading inputs and executables onto [shared scratch](../storage/storage/) before the calculation starts.
In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on shared /scratch before the job submission and retrieve the outputs manually, after all calculations are finished.
>Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
!!! Note "Note"
Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
Example jobscript for an MPI job with preloaded inputs and executables; options for qsub are stored within the script:
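The full example is not shown in this excerpt; a minimal sketch of such a jobscript header (values are placeholders) might be:

```bash
#!/bin/bash
#PBS -q qprod
#PBS -N MYJOB
#PBS -A OPEN-0-0
#PBS -l select=4:ncpus=24:mpiprocs=1:ompthreads=24

# the calculation then works directly on data preloaded on /scratch
```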
......@@ -426,7 +438,8 @@ HTML commented section #2 (examples need to be reworked)
### Example Jobscript for Single Node Calculation
>Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
!!! Note "Note"
Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
Example jobscript for single node calculation, using [local scratch](../storage/storage/) on the node:
......
......@@ -15,7 +15,8 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
|**qfree** Free resource queue|yes |none required |752 nodes, max 86 per job |24 |-1024 |no |12 / 12h |
|**qviz** Visualization queue |yes |none required |2 (with NVIDIA Quadro K5000) |4 |150 |no |1 / 2h |
>**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue.
!!! Note "Note"
    **The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
- **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
- **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
......@@ -25,7 +26,8 @@ The resources are allocated to the job in a fairshare fashion, subject to constr
- **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
- **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
>To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/).
!!! Note "Note"
    To access a node with a Xeon Phi co-processor, the user needs to specify this in the [job submission select statement](job-submission-and-execution/).
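A sketch of what such a select statement might look like (the accelerator resource attribute below is an assumption; see the linked job submission page for the authoritative form):

```bash
# request one node equipped with Xeon Phi co-processors (attribute name assumed)
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:accelerator=True ./myjob
```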
### Notes
......@@ -37,7 +39,8 @@ Salomon users may check current queue configuration at <https://extranet.it4i.cz
### Queue status
>Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)![external](../../img/external.png)
!!! Note "Note"
Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)![external](../../img/external.png)
![RSWEB Salomon](rswebsalomon.png "RSWEB Salomon")
......
Overview of ANSYS Products
==========================
**[SVS FEM](http://www.svsfem.cz/)![external](../../img/external.png)** as **[ANSYS Channel partner](http://www.ansys.com/)![external](../../img/external.png)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)![external](../../img/external.png)
**[SVS FEM](http://www.svsfem.cz/)![external](../../../img/external.png)**, as the **[ANSYS Channel partner](http://www.ansys.com/)![external](../../../img/external.png)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)![external](../../../img/external.png)
Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the name of the license or by the two letter prefix "**aa_**" in the license feature name. The license can be changed on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](licensing/)
......
......@@ -33,7 +33,8 @@ Running
------
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html)![external](../../../img/external.png) for more details.
>The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
!!! Note "Note"
    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
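A sketch combining both recommendations (the project ID, input file name and scratch path variable are placeholders, not taken from the documentation):

```bash
# request MPI-only parallelization: 16 MPI processes, 1 thread each
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:mpiprocs=16:ompthreads=1 ./molpro_job

# inside the jobscript: point Molpro's temporary data to the scratch filesystem
# ($SCRATCHDIR stands for your directory on the SCRATCH filesystem)
molpro -d $SCRATCHDIR my_input.inp
```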
......
......@@ -5,7 +5,8 @@ Introduction
-------------
This GPL software calculates phonon-phonon interactions via the third order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetime/linewidth, imaginary part of the self-energy at the lowest order, joint density of states (JDOS) and weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and [http://atztogo.github.io/phono3py/index.html](http://atztogo.github.io/phono3py/index.html)![external](../../../img/external.png)
>Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
!!! Note "Note"
Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
```bash
$ module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
......
......@@ -22,13 +22,7 @@ Read more at the [Intel Debugger](../intel-suite/intel-debugger/) page.
Allinea Forge (DDT/MAP)
-----------------------
Allinea DDT, is a commercial debugger primarily for debugging parallel
MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel
Xeon Phi accelerators. DDT provides all the standard debugging features
(stack trace, breakpoints, watches, view variables, threads etc.) for
every thread running as part of your program, or for every process -
even if these processes are distributed across a cluster using an MPI
implementation.
Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
```bash
$ module load Forge
......
......@@ -6,8 +6,8 @@ Aislinn
- Aislinn is open-source software; you can use it without any licensing limitations.
- Web page of the project: <http://verif.cs.vsb.cz/aislinn/>![external](../../../img/external.png)
>Note
Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experienced any problems, please contact the author: <stanislav.bohm@vsb.cz>.
!!! Note "Note"
    Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experience any problems, please contact the author: <stanislav.bohm@vsb.cz>.
### Usage
......
......@@ -49,10 +49,10 @@ $ mpif90 -g -O0 -o test_debug test.f
Before debugging, you need to compile your code with these flags:
>- **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for
GNU and INTEL C/C++ and Fortran compilers.
!!! Note "Note"
    - **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
>- **O0** : Suppress all optimizations.
    - **-O0** : Suppress all optimizations.
Starting a Job with DDT
-----------------------
......
......@@ -3,7 +3,7 @@ Intel VTune Amplifier XE
Introduction
------------
Intel*® *VTune™ >Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
- Hotspot analysis
- Locks and waits analysis
......@@ -69,7 +69,8 @@ This mode is useful for native Xeon Phi applications launched directly on the ca
This mode is useful for applications that are launched from the host and use offload, OpenCL or mpirun. In the *Analysis Target* window, select *Intel Xeon Phi coprocessor (native)*, choose the path to the binary and the MIC card to run on.
>If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
!!! Note "Note"
If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later :
......
......@@ -47,9 +47,10 @@ Compile the code:
Before debugging, you need to compile your code with these flags:
>**-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
!!! Note "Note"
**-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
>**-O0** : Suppress all optimizations.
**-O0** : Suppress all optimizations.
Starting a Job with TotalView
-----------------------------
......@@ -81,7 +82,8 @@ To debug a serial code use:
To debug a parallel code compiled with **OpenMPI** you need to setup your TotalView environment:
>**Please note:** To be able to run parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code you have to add the following function to your **~/.tvdrc** file:
!!! Note "Note"
    **Please note:** To be able to run the parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code, you have to add the following function to your **~/.tvdrc** file:
```bash
proc mpi_auto_run_starter {loaded_id} {
......
Valgrind
========
Valgrind is a tool for memory debugging and profiling.
About Valgrind
--------------
Valgrind is an open-source tool, used mainly for debugging memory-related problems such as memory leaks and use of uninitialized memory in C/C++ applications. The toolchain has, however, been extended over time with more functionality, such as debugging of threaded applications and cache profiling, and is not limited to C/C++.
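Basic usage is straightforward (the module name and binary below are illustrative assumptions):

```bash
# load Valgrind and check a program for memory errors and leaks
$ module load Valgrind
$ valgrind --leak-check=full ./myprog
```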
......@@ -263,5 +261,6 @@ Prints this output : (note that there is output printed for every launched MPI p
==31319==
==31319== For counts of detected and suppressed errors, rerun with: -v
==31319== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
```
We can see that Valgrind has reported use of uninitialised memory on the master process (which reads the array to be broadcast) and use of unaddressable memory on both processes.
\ No newline at end of file