Commit be418f73 authored by David Hrbáč

Links OK

parent 2c8516ae
Pipeline #5181 passed with stages in 1 minute and 20 seconds
@@ -70,7 +70,7 @@ Another good practice is to make the stripe count be an integral factor of the n
Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
Read more [here][c].
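On Lustre, the stripe settings of a file or directory can be inspected and changed with the `lfs` utility. A sketch only; the directory path, stripe count, and stripe size below are illustrative:

```console
$ lfs getstripe /scratch/work/user/$USER/mydir
$ lfs setstripe -c 4 -S 1M /scratch/work/user/$USER/mydir
```

Files created in the directory afterwards inherit these settings; existing files keep their original striping.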
### Lustre on Anselm
@@ -2,10 +2,10 @@
## Job Queue Policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling][1] section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides an overview of the queue partitioning:
!!! note
Check the queue status [here][a].
| queue | active project | project resources | nodes | min ncpus | priority | authorization | walltime |
| ------------------------------- | -------------- | -------------------- | ------------------------------------------------------------- | --------- | -------- | ------------- | --------- |
@@ -19,7 +19,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
| **qmic** Intel Xeon Phi cards | yes | > 0 | 864 Intel Xeon Phi cards, max 8 mic per job | 0 | 0 | no | 24 / 48h |
!!! note
**The qfree queue is not free of charge**. [Normal accounting][2] applies. However, it allows for utilization of free resources once a Project has exhausted all of its allocated computational resources. This does not apply to Director's Discretion (DD) projects, but may be allowed upon request.
* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter qexp. There are always 2 nodes (without accelerators) reserved for this queue; a maximum of 8 nodes is available via qexp to a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use the queue. The maximum runtime in qexp is 1 hour.
* **qprod**, the Production queue: This queue is intended for normal production runs. An active project with nonzero remaining resources must be specified to enter qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
@@ -31,20 +31,20 @@ The resources are allocated to the job in a fair-share fashion, subject to const
* **qmic**, the queue for accessing the MIC nodes: An active project with nonzero remaining resources must be specified to enter qmic. All 864 MICs are included.
!!! note
To access a node with a Xeon Phi co-processor, the user needs to specify this in the [job submission select statement][3].
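Jobs enter these queues via the standard PBS `qsub` interface. A sketch only; the project ID `OPEN-XX-XX`, node counts, and select options are illustrative placeholders:

```console
$ qsub -q qexp -l select=2:ncpus=24 ./myjob
$ qsub -q qprod -A OPEN-XX-XX -l select=4:ncpus=24,walltime=24:00:00 ./myjob
```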
## Queue Notes
The job wall-clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples][3].
Jobs that exceed the reserved wall-clock time (Req'd Time) are killed automatically. The wall-clock time limit can be changed for queued jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
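For example, the wall-clock limit of a queued job could be changed like this (a sketch; the job ID and new limit are illustrative):

```console
$ qalter -l walltime=12:00:00 123456
```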
Salomon users may check the current queue configuration [here][b].
## Queue Status
!!! note
Check the status of jobs, queues and compute nodes [here][a].
![RSWEB Salomon](../img/rswebsalomon.png "RSWEB Salomon")
@@ -116,3 +116,10 @@ Options:
---8<--- "resource_accounting.md"
---8<--- "mathjax.md"
[1]: job-priority.md
[2]: #resource-accounting-policy
[3]: job-submission-and-execution.md
[a]: https://extranet.it4i.cz/rsweb/salomon/
[b]: https://extranet.it4i.cz/rsweb/salomon/queues
@@ -15,7 +15,7 @@ The Salomon cluster is accessed by SSH protocol via login nodes login1, login2,
| login3.salomon.it4i.cz | 22 | ssh | login3 |
| login4.salomon.it4i.cz | 22 | ssh | login4 |
The authentication is by the [private key][1] only.
!!! note
Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
@@ -44,7 +44,7 @@ If you see warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to
local $ chmod 600 /path/to/id_rsa
```
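To confirm the fix took effect, check that the octal mode of the key file is now `600`. A minimal sketch using a scratch file in place of the real key path (substitute your own /path/to/id_rsa):

```shell
# Create a scratch file standing in for the private key (illustrative path).
keyfile=$(mktemp)
# Restrict the permissions exactly as above: read/write for the owner only.
chmod 600 "$keyfile"
# Print the octal mode; ssh accepts the key only when this is 600 or stricter.
perms=$(stat -c '%a' "$keyfile")
echo "$perms"
rm -f "$keyfile"
```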
On **Windows**, use [PuTTY ssh client][2].
After logging in, you will see the command prompt:
@@ -65,11 +65,11 @@ Last login: Tue Jul 9 15:57:38 2018 from your-host.example.com
```
!!! note
The environment is **not** shared between login nodes, except for [shared filesystems][3].
## Data Transfer
Data in and out of the system may be transferred by the [scp][a] and sftp protocols.
| Address | Port | Protocol |
| ---------------------- | ---- | --------- |
@@ -79,7 +79,7 @@ Data in and out of the system may be transferred by the [scp][a] and sftp protocols.
| login3.salomon.it4i.cz | 22 | scp, sftp |
| login4.salomon.it4i.cz | 22 | scp, sftp |
The authentication is by the [private key][1] only.
On Linux or Mac, use an scp or sftp client to transfer the data to Salomon:
@@ -97,7 +97,7 @@ or
local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
```
A very convenient way to transfer files in and out of Salomon is via the FUSE filesystem [sshfs][b]:
```console
local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
@@ -113,9 +113,9 @@ $ man scp
$ man sshfs
```
On Windows, use the [WinSCP client][c] to transfer the data. The [win-sshfs client][d] provides a way to mount the Salomon filesystems directly as an external disk.
More information about the shared file systems is available [here][3].
## Connection Restrictions
@@ -164,7 +164,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port.
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside the Salomon Cluster.
First, establish the remote port forwarding from the login node, as [described above][4].
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell.
@@ -187,21 +187,38 @@ To establish local proxy server on your workstation, install and run SOCKS proxy
local $ ssh -D 1080 localhost
```
On Windows, install and run the free, open-source [Sock Puppet][e] server.
Once the proxy server is running, establish the ssh port forwarding from Salomon to the proxy server on port 1080, exactly as [described above][4].
```console
local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
```
Now, configure your application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes][4] as well.
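To verify the whole chain, a SOCKS-aware client on the cluster side can be pointed at the forwarded port. A sketch only; the target URL is illustrative:

```console
$ curl --socks5 localhost:6000 http://example.com/
```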
## Graphical User Interface
* The [X Window system][5] is the principal way to get GUI access to the clusters.
* The [Virtual Network Computing][6] is a graphical [desktop sharing][f] system that uses the [Remote Frame Buffer protocol][g] to remotely control another [computer][h].
## VPN Access
* Access to IT4Innovations internal resources via [VPN][7].
[1]: ../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
[2]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
[3]: storage.md
[4]: #port-forwarding-from-login-nodes
[5]: ../general/accessing-the-clusters/graphical-user-interface/x-window-system.md
[6]: ../general/accessing-the-clusters/graphical-user-interface/vnc.md
[7]: ../general/accessing-the-clusters/vpn-access.md
[a]: http://en.wikipedia.org/wiki/Secure_copy
[b]: http://linux.die.net/man/1/sshfs
[c]: http://winscp.net/eng/download.php
[d]: http://code.google.com/p/win-sshfs/
[e]: http://sockspuppet.com/
[f]: http://en.wikipedia.org/wiki/Desktop_sharing
[g]: http://en.wikipedia.org/wiki/RFB_protocol
[h]: http://en.wikipedia.org/wiki/Computer
# Clp
## Introduction
Clp (COIN-OR linear programming) is an open-source linear programming solver written in C++. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available.
Clp ([projects.coin-or.org/Clp][1]) is a part of the COIN-OR (The Computational Infrastructure for Operations Research) project ([projects.coin-or.org/][2]).
## Modules
@@ -59,3 +58,6 @@ icc lp.c -o lp.x -Wl,-rpath=$LIBRARY_PATH -lClp
```
In this example, the lp.c code is compiled using the Intel compiler and linked with Clp. To run the code, the Intel module has to be loaded.
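Put together, a typical session might look like this. A sketch only; the module names are illustrative and the installed versions should be checked with `ml av`:

```console
$ ml intel Clp
$ icc lp.c -o lp.x -Wl,-rpath=$LIBRARY_PATH -lClp
$ ./lp.x
```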
[1]: https://projects.coin-or.org/Clp
[2]: https://projects.coin-or.org/
@@ -14,18 +14,18 @@ Remote visualization with NICE DCV software is available on two nodes.
## References
* [Graphical User Interface][1]
* [VPN Access][2]
## Install and Run
**Install NICE DCV 2016** (user-computer)
* [Overview][a]
* [Linux download][b]
* [Windows download][c]
**Install VPN client** [VPN Access][3] (user-computer)
!!! note
The visualisation server is a compute node, so you are not able to SSH to it with your private key. There are two solutions available to solve the login issue.
@@ -46,7 +46,7 @@ Remote visualization with NICE DCV software is availabe on two nodes.
**Solution 2 - Copy private key from the cluster**
* Install WinSCP client (user-computer) [Download WinSCP installer][d]
* Add credentials
![](../img/viz1-win.png)
@@ -65,8 +65,8 @@ Remote visualization with NICE DCV software is availabe on two nodes.
**Install PuTTY**
* [Overview][4]
* [Download PuTTY installer][e]
* Configure PuTTY
![](../img/viz3-win.png)
@@ -78,7 +78,7 @@ Remote visualization with NICE DCV software is availabe on two nodes.
* Save
**Run VPN client** [VPN IT4Innovations][f] (user-computer)
**Login to Salomon via PuTTY** (user-computer)
@@ -138,7 +138,7 @@ $ ssh -i ~/salomon_key -TN -f user@vizserv1.salomon.it4i.cz -L 5901:localhost:59
$ ssh -i ~/salomon_key -TN -f user@vizserv2.salomon.it4i.cz -L 5902:localhost:5902 -L 7300:localhost:7300 -L 7301:localhost:7301 -L 7302:localhost:7302 -L 7303:localhost:7303 -L 7304:localhost:7304 -L 7305:localhost:7305
```
**Run VPN client** [VPN IT4Innovations][f] (user-computer)
**Login to Salomon** (user-computer)
@@ -180,3 +180,15 @@ $ qsub -I -q qviz -A OPEN-XX-XX -l select=1:ncpus=4:host=vizserv2,walltime=04:00
![](../img/viz3.png)
**LOGOUT FROM MENU: System->Logout**
[1]: shell-and-data-access.md#graphical-user-interface
[2]: shell-and-data-access.md#vpn-access
[3]: ../general/accessing-the-clusters/vpn-access.md
[4]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
[a]: https://www.nice-software.com/download/nice-dcv-2016
[b]: http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/linux/nice-dcv-endstation-2016.0-17066.run
[c]: http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/win/nice-dcv-endstation-2016.0-17066-Release.msi
[d]: https://winscp.net/download/WinSCP-5.13.3-Setup.exe
[e]: https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.70-installer.msi
[f]: https://vpn.it4i.cz/user