Commit 2d8ba497 authored by Lukáš Krupčík

3. -> 3.0

parent 899a6f1d
Pipeline #1886 passed with stages in 1 minute and 7 seconds
......@@ -15,8 +15,8 @@ However, executing huge number of jobs via the PBS queue may strain the system.
## Policy
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
1. The array size is at most 1000 subjobs.
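For illustration, a single job array covering 1000 tasks stays within both limits and might be submitted along these lines (the queue, job name, and jobscript are placeholders):

```bash
# One job containing a 1000-subjob array: within the policy limits above.
# The queue, the job name, and the jobscript name are placeholders.
qsub -q qprod -N JOBNAME -J 1-1000 jobscript
```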
## Job Arrays
......@@ -286,9 +286,9 @@ In this example, the jobscript executes in multiple instances in parallel, on al
When deciding these values, consider the following guiding rules:
1. Let n=N/16. The inequality (n+1) \* T < W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime, and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
2. The number of tasks should be a multiple of 16.
3. These rules are valid only when all tasks have similar walltimes T.
1. Let n=N/16. The inequality (n+1) \* T < W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime, and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
1. The number of tasks should be a multiple of 16.
1. These rules are valid only when all tasks have similar walltimes T.
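A quick check of these rules with made-up numbers might look like this:

```bash
# Illustrative numbers only: 64 tasks per subjob, roughly 10 minutes each
N=64                                  # tasks per subjob (a multiple of 16)
T=10                                  # expected single-task walltime [minutes]
n=$((N / 16))                         # rounds of 16 concurrent tasks -> 4
echo "subjob walltime W must exceed $(( (n + 1) * T )) minutes"   # -> 50
```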
### Submit the Job Array
......
......@@ -6,9 +6,9 @@ Scheduler gives each job an execution priority and then uses this job execution
Job execution priority on Anselm is determined by these job properties (in order of importance):
1. queue priority
2. fair-share priority
3. eligible time
1. queue priority
1. fair-share priority
1. eligible time
### Queue Priority
......
......@@ -4,12 +4,12 @@
When allocating computational resources for the job, please specify
1. suitable queue for your job (default is qprod)
2. number of computational nodes required
3. number of cores per node required
4. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
5. Project ID
6. Jobscript or interactive switch
1. suitable queue for your job (default is qprod)
1. number of computational nodes required
1. number of cores per node required
1. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
1. Project ID
1. Jobscript or interactive switch
!!! note
Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
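For illustration, a submission combining the items above might look as follows (the project ID, node count, walltime, and jobscript are placeholders):

```bash
# Sketch only: 4 nodes with 16 cores each for 3 hours in the qprod queue,
# charged to the project OPEN-0-0 (placeholder), running the jobscript ./myjob
qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16,walltime=03:00:00 ./myjob
```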
......
......@@ -276,6 +276,6 @@ Sample output:
## References
1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
2. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
3. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
1. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
1. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
......@@ -70,4 +70,4 @@ You may also use remote analysis to collect data from the MIC and then analyze i
## References
1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
......@@ -232,6 +232,6 @@ To use PAPI in offload mode, you need to provide both host and MIC versions of P
## References
1. <http://icl.cs.utk.edu/papi/> Main project page
2. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
3. <http://icl.cs.utk.edu/papi/docs/> API Documentation
1. <http://icl.cs.utk.edu/papi/> Main project page
1. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
1. <http://icl.cs.utk.edu/papi/docs/> API Documentation
......@@ -17,9 +17,9 @@ There are currently two versions of Scalasca 2.0 [modules](../../environment-and
Profiling a parallel application with Scalasca consists of three steps:
1. Instrumentation: compiling the application in such a way that profiling data can be generated.
2. Runtime measurement: running the application with the Scalasca profiler to collect performance data.
3. Analysis of the generated reports.
1. Instrumentation: compiling the application in such a way that profiling data can be generated.
1. Runtime measurement: running the application with the Scalasca profiler to collect performance data.
1. Analysis of the generated reports.
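In shell terms, the three steps can be sketched roughly as follows (the compiler wrapper, core count, program name, and experiment directory name are examples; the exact procedure is described in the sections below):

```bash
scorep mpicc -o myprog myprog.c              # 1. instrument during compilation
scalasca -analyze mpirun -np 16 ./myprog     # 2. run under the Scalasca profiler
scalasca -examine scorep_myprog_16_sum       # 3. examine the collected report
```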
### Instrumentation
......@@ -67,4 +67,4 @@ Refer to [CUBE documentation](cube/) on usage of the GUI viewer.
## References
1. <http://www.scalasca.org/>
1. <http://www.scalasca.org/>
......@@ -17,9 +17,9 @@ There are currently two versions of Score-P version 1.2.6 [modules](../../enviro
There are three ways to instrument your parallel applications in order to enable performance data collection:
1. Automated instrumentation using compiler
2. Manual instrumentation using API calls
3. Manual instrumentation using directives
1. Automated instrumentation using compiler
1. Manual instrumentation using API calls
1. Manual instrumentation using directives
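For instance, the first option usually amounts to prefixing the ordinary compile command with the scorep wrapper (the compiler and source file here are examples; details follow below):

```bash
# Automated compiler instrumentation: let Score-P wrap the compile/link step
scorep mpicc -o myapp myapp.c
```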
### Automated Instrumentation
......
......@@ -53,11 +53,11 @@ Our recommended solution is that job script creates distinct shared job director
### Procedure
1. Prepare image of your virtual machine
2. Optimize image of your virtual machine for Anselm's virtualization
3. Modify your image for running jobs
4. Create job script for executing virtual machine
5. Run jobs
1. Prepare image of your virtual machine
1. Optimize image of your virtual machine for Anselm's virtualization
1. Modify your image for running jobs
1. Create job script for executing virtual machine
1. Run jobs
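As a very rough sketch (not the exact supported invocation, which is described below), the core of such a job script boots the prepared image with KVM on the allocated node; all paths, sizes, and options here are illustrative:

```bash
# Boot the prepared image with hardware virtualization; JOBDIR and the image
# name are placeholders, and the resource sizes are examples only.
qemu-system-x86_64 -enable-kvm -cpu host \
  -smp 16 -m 32768 \
  -drive file=$JOBDIR/image.qcow2,if=virtio \
  -nographic
```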
### Prepare Image of Your Virtual Machine
......
......@@ -22,9 +22,9 @@ If multiple clients try to read and write the same part of a file at the same ti
There is a default stripe configuration for Anselm Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1 MB for all Anselm Lustre filesystems
2. stripe_count: the number of OSTs to stripe across; the default is 1 for Anselm Lustre filesystems; one can specify -1 to use all OSTs in the filesystem
3. stripe_offset: the index of the OST where the first stripe is to be placed; the default is -1, which results in random selection; using a non-default value is NOT recommended
1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1 MB for all Anselm Lustre filesystems
1. stripe_count: the number of OSTs to stripe across; the default is 1 for Anselm Lustre filesystems; one can specify -1 to use all OSTs in the filesystem
1. stripe_offset: the index of the OST where the first stripe is to be placed; the default is -1, which results in random selection; using a non-default value is NOT recommended
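These parameters are typically set with the lfs utility; for example, a sketch like the following stripes new files in a directory across 4 OSTs with a 1 MB stripe size (the target path is illustrative):

```bash
# Set striping for a directory: 1 MB stripe size across 4 OSTs
lfs setstripe -s 1m -c 4 /scratch/$USER/mydir
```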
!!! note
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
......
......@@ -141,7 +141,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect
(gnome-session:23691): WARNING **: Cannot open display:**
```
1. Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
1. Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
locate
C:\cygwin64\bin\XWin.exe
change it
......@@ -150,7 +150,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect
![XWin-listen-tcp.png](../../../img/XWinlistentcp.png "XWin-listen-tcp.png")
1. Check PuTTY settings:
1. Check PuTTY settings:
Enable X11 forwarding
![](../../../img/cygwinX11forwarding.png)
......@@ -31,9 +31,9 @@ Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I creden
In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support\[at\]it4i.cz](mailto:support@it4i.cz)) and provide the following information:
1. Identify your project by project ID
2. Provide a list of people, including the PI, who are authorized to use the resources allocated to the project. The list must include full name, e-mail, and affiliation. Provide usernames as well if collaborator login access already exists on the IT4I systems.
3. Include "Authorization to IT4Innovations" into the subject line.
1. Identify your project by project ID
1. Provide a list of people, including the PI, who are authorized to use the resources allocated to the project. The list must include full name, e-mail, and affiliation. Provide usernames as well if collaborator login access already exists on the IT4I systems.
1. Include "Authorization to IT4Innovations" into the subject line.
Example (except for the subject line, which must be in English, you may use Czech or Slovak for communication with us):
......@@ -59,12 +59,12 @@ Should the above information be provided by e-mail, the e-mail **must be** digit
Once authorized by the PI, every person (PI or Collaborator) wishing to access the clusters should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support\[at\]it4i.cz](mailto:support@it4i.cz)), providing the following information:
1. Project ID
2. Full name and affiliation
3. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP).
4. Attach the AUP file.
5. Your preferred username, at most 8 characters long. The preferred username must be based on your surname and name or otherwise derived from them. Only alphanumeric characters, dashes, and underscores are allowed.
6. If you choose the [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of a photo ID** (personal ID, passport, or driver's license) is required
1. Project ID
1. Full name and affiliation
1. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP).
1. Attach the AUP file.
1. Your preferred username, at most 8 characters long. The preferred username must be based on your surname and name or otherwise derived from them. Only alphanumeric characters, dashes, and underscores are allowed.
1. If you choose the [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of a photo ID** (personal ID, passport, or driver's license) is required
Example (except for the subject line, which must be in English, you may use Czech or Slovak for communication with us):
......@@ -94,9 +94,9 @@ For various reasons we do not accept PGP keys.** Please, use only X.509 PKI cert
You will receive your personal login credentials by protected e-mail. The login credentials include:
1. username
2. ssh private key and private key passphrase
3. system password
1. username
1. ssh private key and private key passphrase
1. system password
The clusters are accessed with the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. The username and password are used to log in to the [information systems](http://support.it4i.cz/).
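For example, a first login with the received key might look like this (the key path, username, and cluster address are placeholders):

```bash
# Log in using the provided private key; replace the address with the cluster
# you were granted access to
ssh -i ~/.ssh/id_rsa username@salomon.it4i.cz
```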
......
......@@ -4,9 +4,9 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super
## How to Read the Documentation
1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner.
2. Scan for all the notes and reminders on the page.
3. Read the details if still more information is needed. **Look for examples** illustrating the concepts.
1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner.
1. Scan for all the notes and reminders on the page.
1. Read the details if still more information is needed. **Look for examples** illustrating the concepts.
## Getting Help and Support
......
......@@ -15,8 +15,8 @@ However, executing huge number of jobs via the PBS queue may strain the system.
## Policy
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
1. The array size is at most 1000 subjobs.
## Job Arrays
......@@ -288,9 +288,9 @@ In this example, the jobscript executes in multiple instances in parallel, on al
When deciding these values, consider the following guiding rules:
1. Let n = N / 24. The inequality (n + 1) x T < W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime, and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
2. The number of tasks should be a multiple of 24.
3. These rules are valid only when all tasks have similar walltimes T.
1. Let n = N / 24. The inequality (n + 1) x T < W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime, and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
1. The number of tasks should be a multiple of 24.
1. These rules are valid only when all tasks have similar walltimes T.
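A quick check of these rules with made-up numbers:

```bash
# Illustrative numbers only: 48 tasks per subjob, roughly 15 minutes each
N=48                                  # tasks per subjob (a multiple of 24)
T=15                                  # expected single-task walltime [minutes]
n=$((N / 24))                         # rounds of 24 concurrent tasks -> 2
echo "subjob walltime W must exceed $(( (n + 1) * T )) minutes"   # -> 45
```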
### Submit the Job Array
......
......@@ -6,9 +6,9 @@ Scheduler gives each job an execution priority and then uses this job execution
Job execution priority is determined by these job properties (in order of importance):
1. queue priority
2. fair-share priority
3. eligible time
1. queue priority
1. fair-share priority
1. eligible time
### Queue Priority
......
......@@ -4,12 +4,12 @@
When allocating computational resources for the job, please specify
1. suitable queue for your job (default is qprod)
2. number of computational nodes required
3. number of cores per node required
4. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
5. Project ID
6. Jobscript or interactive switch
1. suitable queue for your job (default is qprod)
1. number of computational nodes required
1. number of cores per node required
1. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
1. Project ID
1. Jobscript or interactive switch
!!! note
Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
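For illustration, a submission specifying all of the above might look as follows (the project ID, node count, walltime, and jobscript are placeholders):

```bash
# Sketch only: 4 nodes with 24 cores each for 3 hours in the qprod queue,
# charged to the project OPEN-0-0 (placeholder), running the jobscript ./myjob
qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24,walltime=03:00:00 ./myjob
```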
......
......@@ -3,7 +3,7 @@
[ANSYS Fluent](http://www.ansys.com/products/fluids/ansys-fluent)
software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
1. Common way to run Fluent over a PBS file
1. Common way to run Fluent over a PBS file
To run ANSYS Fluent in batch mode you can utilize/modify the default fluent.pbs script and execute it via the qsub command.
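Once adapted, the script is submitted in the usual way, for example:

```bash
# Submit the (modified) default jobscript to the queue
qsub fluent.pbs
```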
......@@ -56,7 +56,7 @@ Journal file with definition of the input geometry and boundary conditions and d
The appropriate dimension of the problem has to be set by parameter (2d/3d).
2. Fast way to run Fluent from the command line
1. Fast way to run Fluent from the command line
```bash
fluent solver_version [FLUENT_options] -i journal_file -pbs
......@@ -64,7 +64,7 @@ fluent solver_version [FLUENT_options] -i journal_file -pbs
This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
3. Running Fluent via user's config file
1. Running Fluent via user's config file
The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
......@@ -141,7 +141,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
It runs the jobs out of the directory from which they are submitted (PBS_O_WORKDIR).
4. Running Fluent in parallel
1. Running Fluent in parallel
Fluent can be run in parallel only under the Academic Research license. To do so, the ANSYS Academic Research license must be placed before the ANSYS CFD license in the user preferences. To make this change, the anslic_admin utility should be run
......
......@@ -19,4 +19,4 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-
## References
1. <https://www.vampir.eu>
1. <https://www.vampir.eu>
......@@ -26,6 +26,6 @@ In the left pane, you can switch between Vectorization and Threading workflows.
## References
1. [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
2. [Product page](https://software.intel.com/en-us/intel-advisor-xe)
3. [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
1. [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
1. [Product page](https://software.intel.com/en-us/intel-advisor-xe)
1. [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
......@@ -34,6 +34,6 @@ Results obtained from batch mode can be then viewed in the GUI by selecting File
## References
1. [Product page](https://software.intel.com/en-us/intel-inspector-xe)
2. [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation)
3. [Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials)
1. [Product page](https://software.intel.com/en-us/intel-inspector-xe)
1. [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation)
1. [Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials)
......@@ -36,5 +36,5 @@ Please refer to Intel documenation about usage of the GUI tool.
## References
1. [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
2. [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
1. [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
1. [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
......@@ -9,8 +9,8 @@ All login and compute nodes may access same data on shared file systems. Compute
## Policy (In a Nutshell)
!!! note
_ Use [HOME](#home) for your most valuable data and programs.
_ Use [WORK](#work) for your large project files.
\* Use [HOME](#home) for your most valuable data and programs.
\* Use [WORK](#work) for your large project files.
\* Use [TEMP](#temp) for large scratch data.
!!! warning
......@@ -56,9 +56,9 @@ If multiple clients try to read and write the same part of a file at the same ti
There is a default stripe configuration for Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1 MB for all Salomon Lustre file systems
2. stripe_count: the number of OSTs to stripe across; the default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system
3. stripe_offset: the index of the OST where the first stripe is to be placed; the default is -1, which results in random selection; using a non-default value is NOT recommended
1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1 MB for all Salomon Lustre file systems
1. stripe_count: the number of OSTs to stripe across; the default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system
1. stripe_offset: the index of the OST where the first stripe is to be placed; the default is -1, which results in random selection; using a non-default value is NOT recommended
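The current layout of a directory or file can be checked with lfs getstripe, for example (the path is illustrative):

```bash
# Show the stripe size, count, and offset currently in effect
lfs getstripe /scratch/work/user/$USER/mydir
```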
!!! note
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
......