Commit 461f7d2a authored by Lukáš Krupčík

Merge branch 'add_page_actions' into 'master'

Add page actions

See merge request !168
parents adf98a01 c86bb7d0
Showing 100 additions and 54 deletions
......@@ -6,17 +6,18 @@ stages:
docs:
stage: test
image: davidhrbac/docker-mdcheck:latest
allow_failure: true
script:
- mdl -r ~MD013,~MD033,~MD014,~MD026,~MD037 *.md docs.it4i/
- mdl -r ~MD013,~MD033,~MD014,~MD026,~MD037 *.md docs.it4i
two spaces:
stage: test
image: davidhrbac/docker-mdcheck:latest
allow_failure: true
script:
before_script:
- echo "== Files having more than one space between two characters =="
- find docs.it4i/ -name '*.md' -exec grep "[[:alpha:]]  [[:alpha:]]" -l {} + || true
- find docs.it4i/ -name '*.md' -exec grep "[[:alpha:]]  [[:alpha:]]" {} +
script:
- find docs.it4i/ -name '*.md' -exec grep "[[:alpha:]]  [[:alpha:]]" -l {} + | grep docs.it4i/ -v
capitalize:
stage: test
......
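The double-space check from the CI job above can be reproduced locally. A minimal sketch (the file name is illustrative; note the pattern contains two literal spaces between the character classes):

```shell
# Create a sample Markdown file; the second line contains a double space
printf 'This line is fine.\nThis line  has a double space.\n' > sample.md

# Same character-class pattern the CI job runs: letter, two spaces, letter
grep "[[:alpha:]]  [[:alpha:]]" sample.md
# prints: This line  has a double space.
```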
## Resources Allocation Policy
# Resources Allocation Policy
### Job queue policies
## Job queue policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. Fair-share at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
......@@ -27,7 +27,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
* **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, qmic to accessing the MIC nodes, and qfat the Fat nodes. An active project with nonzero remaining resources is required to enter these queues. 23 nvidia, 4 mic, and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhausting their computational resources). An active project must be specified to enter the queue; however, no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerators may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
### Queue notes
## Queue notes
The job wall-clock time defaults to **half the maximum time**; see the table above. Longer wall-time limits can be [set manually, see examples](job-submission-and-execution/).
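Such a longer wall time is requested at submission time, either on the qsub command line or as PBS directives in the job-script header. This header is only a sketch (the queue name, node count, and 24-hour limit are illustrative, not cluster defaults):

```shell
#PBS -q qprod              # target queue (illustrative)
#PBS -l select=2:ncpus=16  # two full nodes, 16 cores each
#PBS -l walltime=24:00:00  # raise the wall-clock limit above the default
```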
......@@ -35,7 +35,7 @@ Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatica
Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
### Queue status
## Queue status
!!! tip
Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>
......
......@@ -183,7 +183,7 @@ Entries: 0
Entries limit: 0
```
In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the particular user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that the user is allowed to create.
To better understand where exactly the space is used, you can use the following command to find out.
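As a generic stand-in (a sketch using GNU `du`, not necessarily the exact command the docs recommend), per-directory usage can be summarized like this:

```shell
# Per-directory usage one level below $HOME, largest first, top 10 entries
du -h --max-depth=1 "$HOME" 2>/dev/null | sort -hr | head -n 10
```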
......
......@@ -4,7 +4,7 @@ To run a [job](/#terminology-frequently-used-on-these-pages), [computational res
## Resources Allocation Policy
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. [The Fair-share](/salomon/job-priority/#fair-share-priority) ensures that individual users may consume approximately equal amounts of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are the most important:
* **qexp**, the Express queue
* **qprod**, the Production queue
......@@ -36,4 +36,4 @@ Use GNU Parallel and/or Job arrays when running (many) single core jobs.
In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single-core jobs**.
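The fan-out pattern can be illustrated locally with `xargs -P` as a stand-in (on the clusters, the recommended tools are GNU Parallel or job arrays; the tasks below are dummies):

```shell
# Run 8 single-core dummy tasks, keeping at most 4 running concurrently
seq 1 8 | xargs -P 4 -I{} sh -c 'echo "task {} done"'
```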
Read more on [Capacity computing](/salomon/capacity-computing) page.
\ No newline at end of file
Read more on [Capacity computing](/salomon/capacity-computing) page.
......@@ -59,7 +59,7 @@ local $
## Errata
Although we have taken every care to ensure the accuracy of the content, mistakes do happen.
If you find an inconsistency or error, please report it by visiting <http://support.it4i.cz/rt>, creating a new ticket, and entering the details.
By doing so, you can save other readers from frustration and help us improve.
We will fix the problem as soon as possible.
## Resources Allocation Policy
# Resources Allocation Policy
### Job queue policies
## Job queue policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. Fair-share at Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
......@@ -31,7 +31,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
!!! note
To access a node with the Xeon Phi co-processor, the user needs to specify this in the [job submission select statement](job-submission-and-execution/).
### Queue notes
## Queue notes
The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
......@@ -39,7 +39,7 @@ Jobs that exceed the reserved wall-clock time (Req'd Time) get killed automatica
Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>.
### Queue Status
## Queue Status
!!! note
Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
......
......@@ -38,7 +38,7 @@ An example of Clp enabled application follows. In this example, the library solv
int main (int argc, const char *argv[])
{
ClpSimplex model;
int status;
if (argc<2)
status=model.readMps("/apps/all/Clp/1.16.10-intel-2017a/lib/p0033.mps");
......
......@@ -154,7 +154,7 @@ Entries: 6
# based on Robinhood
```
In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the particular user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that the user is allowed to create.
To better understand where exactly the space is used, you can use the following command to find out.
......
## Resource Accounting Policy
# Resource Accounting Policy
### Wall-clock Core-Hours WCH
## Wall-clock Core-Hours WCH
The wall-clock core-hours (WCH) are the basic metric of computer utilization time.
1 wall-clock core-hour is defined as 1 processor core allocated for 1 hour of wall-clock time. Allocating a full node (16 cores Anselm, 24 cores Salomon)
for 1 hour amounts to 16 wall-clock core-hours (Anselm) or 24 wall-clock core-hours (Salomon).
### Normalized Core-Hours NCH
## Normalized Core-Hours NCH
The resources subject to accounting are the normalized core-hours (NCH).
The normalized core-hours are obtained from WCH by applying a normalization factor:
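As a worked example of that relation (the factor value 1.0 is purely illustrative; the real factors F depend on the system):

```shell
# One full Salomon node (24 cores) allocated for 2 hours of wall-clock time
wch=$((24 * 2))         # wall-clock core-hours = cores x hours
f=1.0                   # hypothetical normalization factor
nch=$(awk -v w="$wch" -v f="$f" 'BEGIN { print w * f }')
echo "WCH=$wch NCH=$nch"
# prints: WCH=48 NCH=48
```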
......@@ -37,7 +37,7 @@ In future, the factors F will be updated, as new systems are installed. Factors
See examples in the [Job submission and execution](job-submission-and-execution/) section.
### Consumed Resources
## Consumed Resources
To check how many core-hours have been consumed, use the `it4ifree` command, available on the cluster login nodes.
......
......@@ -29,7 +29,7 @@ $ ml COMSOL
By default, the **EDU variant** will be loaded. If a user needs another version or variant, load that particular version. To obtain the list of available versions, use
```console
$ ml av COMSOL
```
If a user needs to prepare COMSOL jobs in interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use [Virtual Network Computing (VNC)](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
......
......@@ -7,8 +7,8 @@ Conda as a package manager helps you find and install packages. If you need a pa
Conda treats Python the same as any other package, so it is easy to manage and update multiple installations.
Anaconda supports Python 2.7, 3.4, 3.5 and 3.6. The default is Python 2.7 or 3.6, depending on which installer you used:
* For the installers “Anaconda” and “Miniconda,” the default is 2.7.
* For the installers “Anaconda3” or “Miniconda3,” the default is 3.6.
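To confirm which default a given installation actually provides, ask the interpreter directly (shown with `python3`; substitute whichever interpreter your installer puts on PATH):

```shell
# Print the major.minor version of the active interpreter
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```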
## Conda on the IT4Innovations Clusters
......
......@@ -72,6 +72,7 @@ false
```
Show all files modified in the last 5 days:
```csc
csharp> using System.IO;
csharp> from f in Directory.GetFiles ("mydirectory")
......
......@@ -12,35 +12,35 @@ $ ml av Tensorflow
Anselm provides, among others, these three different TensorFlow modules:

* Tensorflow/1.1.0 (CPU only, not recommended), module built with:
    * GCC/4.9.3
    * Python/3.6.1
* Tensorflow/1.1.0-CUDA-7.5.18-Python-3.6.1 (GPU enabled), module built with:
    * GCC/4.9.3
    * Python/3.6.1
    * CUDA/7.5.18
    * cuDNN/5.1-CUDA-7.5.18
* Tensorflow/1.1.0-CUDA-8.0.44-Python-3.6.1 (GPU enabled), module built with:
    * GCC/4.9.3
    * Python/3.6.1
    * CUDA/8.0.44
    * cuDNN/5.1-CUDA-8.0.44
## Salomon modules
Salomon provides, among others, these three different TensorFlow modules:

* Tensorflow/1.1.0 (not recommended), module built with:
    * GCC/4.9.3
    * Python/3.6.1
* Tensorflow/1.2.0-GCC-7.1.0-2.28 (default, recommended), module built with:
    * TensorFlow 1.2 with SIMD support, taking advantage of the Salomon CPU architecture
    * GCC/7.1.0-2.28
    * Python/3.6.1
    * protobuf/3.2.0-GCC-7.1.0-2.28-Python-3.6.1
* Tensorflow/1.2.0-intel-2017.05-mkl (TensorFlow 1.2 with MKL support), module built with:
    * icc/2017.4.196-GCC-7.1.0-2.28
    * Python/3.6.1
    * protobuf/3.2.0-GCC-7.1.0-2.28-Python-3.6.1
## TensorFlow application example
......
......@@ -239,7 +239,7 @@ $ spack edit git
```
!!! note
To change source link (ftp:// to http://) use `spack create URL -f` to regenerate the rules.
To change the source link (`ftp://` to `http://`), use `spack create URL -f` to regenerate the rules.
#### **Example**
......
.md-icon--edit:before {
content: "edit";
}
.md-icon--check:before {
content: "check";
}
.md-icon--help:before {
content: "help";
}
a:not([href*="//"]) {
/* CSS for internal links */
}
......@@ -6,7 +18,7 @@ a.md-footer-social__link.fa.fa-globe {
!background: none;
}
a[href*="//"]:not( [href*='gitlab.it4i.cz'] ):not( [href*='code.it4i.cz'] ):not( [href*='https://www.it4i.cz'] ) {
a[href*="//"]:not( [href*='gitlab.it4i.cz'] ):not( [href*='code.it4i.cz'] ):not( [href*='https://www.it4i.cz'] ):not( [href*='https://support.it4i.cz'] ) {
/*CSS for external links */
background: transparent url("/img/external.png") no-repeat right 0px top 1px;
background-size: 12px;
......
......@@ -4,5 +4,5 @@
"footer.next": "Next",
"search.placeholder": "Search",
"source.link.title": "Go to repository",
"toc.title": "Table of contents"
"toc.title": "On this page"
}[key] }}{% endmacro %}
......@@ -4,12 +4,44 @@
{% if toc_ | first is defined and "\x3ch1 id=" in page.content %}
{% set toc_ = (toc_ | first).children %}
{% endif %}
{% if page.abs_url.rstrip('index.html').rstrip('/') == '' %}
{% set it4i_link = config.repo_url + '/edit/master/docs.it4i/index.md' %}
{% set it4i_page = '/index.md' %}
{% set it4i_url = 'https://docs.it4i.cz' %}
{% else %}
{% set it4i_link = config.repo_url + '/edit/master/docs.it4i' + page.abs_url.rstrip('index.html').rstrip('/') + '.md' %}
{% set it4i_page = page.abs_url.rstrip('index.html').rstrip('/') + '.md' %}
{% set it4i_url = 'https://docs.it4i.cz' + page.abs_url %}
{% endif %}
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="{{ it4i_link }}" title="Edit This Page" class="md-nav__link" target="_blank">
<i class="md-icon md-icon--edit">
</i>
Edit This Page
</a>
</li>
<li class="md-nav__item">
<a href="https://code.it4i.cz/sccs/docs.it4i.cz/issues/new?issue%5Bdescription%5D=Requested change in page [{{ it4i_page }}]({{ it4i_url }}) /cc @hrb33 @kru0052" title="Request Change" class="md-nav__link" target="_blank">
<i class="md-icon md-icon--check">
</i>
Request Change
</a>
</li>
<li class="md-nav__item">
<a href="https://support.it4i.cz/rt" title="Get Support" class="md-nav__link" target="_blank">
<i class="md-icon md-icon--help">
</i>
Get Support
</a>
</li>
</ul>
{% if toc_ | first is defined %}
<label class="md-nav__title" for="toc">{{ lang.t('toc.title') }}</label>
<ul class="md-nav__list" data-md-scrollfix>
{% for toc_item in toc_ %}
{% include "partials/toc-item.html" %}
{% endfor %}
</ul>
{% endif %}
</nav>
......@@ -25,7 +25,7 @@ pages:
- X Window System: general/accessing-the-clusters/graphical-user-interface/x-window-system.md
- VNC: general/accessing-the-clusters/graphical-user-interface/vnc.md
- VPN Access: general/accessing-the-clusters/vpn-access.md
- Resource allocation and job execution: general/resource_allocation_and_job_execution.md
- Resource Allocation and Job Execution: general/resource_allocation_and_job_execution.md
- Salomon Cluster:
- Introduction: salomon/introduction.md
- Hardware Overview: salomon/hardware-overview.md
......