From be418f73b919a71efea013c255ba32e33fa05c90 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?David=20Hrb=C3=A1=C4=8D?= <david@hrbac.cz> Date: Wed, 31 Oct 2018 14:46:23 +0100 Subject: [PATCH] Links OK --- docs.it4i/anselm/storage.md | 2 +- .../salomon/resources-allocation-policy.md | 21 ++++-- docs.it4i/salomon/shell-and-data-access.md | 47 +++++++++---- .../software/numerical-libraries/Clp.md | 6 +- docs.it4i/salomon/storage.md | 68 ++++++++++++------- docs.it4i/salomon/visualization.md | 34 +++++++--- 6 files changed, 116 insertions(+), 62 deletions(-) diff --git a/docs.it4i/anselm/storage.md b/docs.it4i/anselm/storage.md index 6ab7ba919..3b0e0ae28 100644 --- a/docs.it4i/anselm/storage.md +++ b/docs.it4i/anselm/storage.md @@ -70,7 +70,7 @@ Another good practice is to make the stripe count be an integral factor of the n Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. -Read more on [here][c]. +Read more [here][c]. ### Lustre on Anselm diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md index 54a231a94..9182d657e 100644 --- a/docs.it4i/salomon/resources-allocation-policy.md +++ b/docs.it4i/salomon/resources-allocation-policy.md @@ -2,10 +2,10 @@ ## Job Queue Policies -The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](salomon/job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview: +The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling][1] section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview: !!! note - Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/> + Check the queue status [here][a]. | queue | active project | project resources | nodes | min ncpus | priority | authorization | walltime | | ------------------------------- | -------------- | -------------------- | ------------------------------------------------------------- | --------- | -------- | ------------- | --------- | @@ -19,7 +19,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const | **qmic** Intel Xeon Phi cards | yes | > 0 | 864 Intel Xeon Phi cards, max 8 mic per job | 0 | 0 | no | 24 / 48h | !!! note - **The qfree queue is not free of charge**. [Normal accounting](#resource-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply to Directors Discretion (DD projects) but may be allowed upon request. + **The qfree queue is not free of charge**. [Normal accounting][2] applies. 
However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply to Directors Discretion (DD projects) but may be allowed upon request. * **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour. * **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours. @@ -31,20 +31,20 @@ The resources are allocated to the job in a fair-share fashion, subject to const * **qmic**, the queue qmic to access MIC nodes. It is required that active project with nonzero remaining resources is specified to enter the qmic. All 864 MICs are included. !!! note - To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/). + To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement][3]. ## Queue Notes -The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](salomon/job-submission-and-execution/). +The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples][3]. Jobs that exceed the reserved wall-clock time (Req'd Time) get killed automatically. Wall-clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R). -Salomon users may check current queue configuration at [https://extranet.it4i.cz/rsweb/salomon/queues](https://extranet.it4i.cz/rsweb/salomon/queues). +Salomon users may check current queue configuration [here][b]. ## Queue Status !!! note - Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon) + Check the status of jobs, queues and compute nodes [here][a].  @@ -116,3 +116,10 @@ Options: ---8<--- "resource_accounting.md" ---8<--- "mathjax.md" + +[1]: job-priority.md +[2]: #resource-accounting-policy +[3]: job-submission-and-execution.md + +[a]: https://extranet.it4i.cz/rsweb/salomon/ +[b]: https://extranet.it4i.cz/rsweb/salomon/queues diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md index e7fd6b57f..1ee44fcd8 100644 --- a/docs.it4i/salomon/shell-and-data-access.md +++ b/docs.it4i/salomon/shell-and-data-access.md @@ -15,7 +15,7 @@ The Salomon cluster is accessed by SSH protocol via login nodes login1, login2, | login3.salomon.it4i.cz | 22 | ssh | login3 | | login4.salomon.it4i.cz | 22 | ssh | login4 | -The authentication is by the [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) +The authentication is by the [private key][1] only. !!! 
note Please verify SSH fingerprints during the first logon. They are identical on all login nodes: @@ -44,7 +44,7 @@ If you see warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to local $ chmod 600 /path/to/id_rsa ``` -On **Windows**, use [PuTTY ssh client](../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md). +On **Windows**, use [PuTTY ssh client][2]. After logging in, you will see the command prompt: @@ -65,11 +65,11 @@ Last login: Tue Jul 9 15:57:38 2018 from your-host.example.com ``` !!! note - The environment is **not** shared between login nodes, except for [shared filesystems](salomon/storage/). + The environment is **not** shared between login nodes, except for [shared filesystems][3]. ## Data Transfer -Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. +Data in and out of the system may be transferred by the [scp][a] and sftp protocols. | Address | Port | Protocol | | ---------------------- | ---- | --------- | @@ -79,7 +79,7 @@ Data in and out of the system may be transferred by the [scp](http://en.wikipedi | login3.salomon.it4i.cz | 22 | scp, sftp | | login4.salomon.it4i.cz | 22 | scp, sftp | -The authentication is by the [private key](general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) +The authentication is by the [private key][1] only. On linux or Mac, use scp or sftp client to transfer the data to Salomon: @@ -97,7 +97,7 @@ or local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz ``` -Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs) +Very convenient way to transfer files in and out of the Salomon computer is via the fuse filesystem [sshfs][b]. ```console local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint @@ -113,9 +113,9 @@ $ man scp $ man sshfs ``` -On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Salomon filesystems directly as an external disc. +On Windows, use [WinSCP client][c] to transfer the data. The [win-sshfs client][d] provides a way to mount the Salomon filesystems directly as an external disc. -More information about the shared file systems is available [here](salomon/storage/). +More information about the shared file systems is available [here][3]. ## Connection Restrictions @@ -164,7 +164,7 @@ Note: Port number 6000 is chosen as an example only. Pick any free port. Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Salomon Cluster. -First, establish the remote port forwarding form the login node, as [described above](#port-forwarding-from-login-nodes). +First, establish the remote port forwarding form the login node, as [described above][4]. Second, invoke port forwarding from the compute node to the login node. Insert following line into your jobscript or interactive shell @@ -187,21 +187,38 @@ To establish local proxy server on your workstation, install and run SOCKS proxy local $ ssh -D 1080 localhost ``` -On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server. +On Windows, install and run the free, open source [Sock Puppet][e] server. 
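A quick way to confirm that the local SOCKS proxy is actually listening on port 1080, assuming curl is available on your workstation (the URL is only an arbitrary test target, not a required endpoint):

```console
local $ curl --socks5-hostname localhost:1080 https://www.it4i.cz
```

If the page is returned, the proxy is ready to be forwarded to the cluster as described below.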
-Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes). +Once the proxy server is running, establish ssh port forwarding from Salomon to the proxy server, port 1080, exactly as [described above][4]. ```console local $ ssh -R 6000:localhost:1080 salomon.it4i.cz ``` -Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well. +Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes][4] as well. ## Graphical User Interface -* The [X Window system](general/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters. -* The [Virtual Network Computing](../general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer). +* The [X Window system][5] is a principal way to get GUI access to the clusters. +* The [Virtual Network Computing][6] is a graphical [desktop sharing][f] system that uses the [Remote Frame Buffer protocol][g] to remotely control another [computer][h]. ## VPN Access -* Access to IT4Innovations internal resources via [VPN](general/accessing-the-clusters/vpn-access/). +* Access to IT4Innovations internal resources via [VPN][7]. + +[1]: ../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md +[2]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md +[3]: storage.md +[4]: #port-forwarding-from-login-nodes +[5]: ../general/accessing-the-clusters/graphical-user-interface/x-window-system.md +[6]: ../general/accessing-the-clusters/graphical-user-interface/vnc.md +[7]: ../general/accessing-the-clusters/vpn-access.md + +[a]: http://en.wikipedia.org/wiki/Secure_copy +[b]: http://linux.die.net/man/1/sshfs +[c]: http://winscp.net/eng/download.php +[d]: http://code.google.com/p/win-sshfs/ +[e]: http://sockspuppet.com/ +[f]: http://en.wikipedia.org/wiki/Desktop_sharing +[g]: http://en.wikipedia.org/wiki/RFB_protocol +[h]: http://en.wikipedia.org/wiki/Computer diff --git a/docs.it4i/salomon/software/numerical-libraries/Clp.md b/docs.it4i/salomon/software/numerical-libraries/Clp.md index d74af1dc2..5f2335e54 100644 --- a/docs.it4i/salomon/software/numerical-libraries/Clp.md +++ b/docs.it4i/salomon/software/numerical-libraries/Clp.md @@ -1,11 +1,10 @@ - # Clp ## Introduction Clp (Coin-or linear programming) is an open-source linear programming solver written in C++. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available. -Clp ([projects.coin-or.org/Clp](https://projects.coin-or.org/Clp)) is a part of the COIN-OR (The Computational Infrastracture for Operations Research) project ([projects.coin-or.org/](https://projects.coin-or.org/)). +Clp ([projects.coin-or.org/Clp][1]) is a part of the COIN-OR (The Computational Infrastracture for Operations Research) project ([projects.coin-or.org/][2]). ## Modules @@ -59,3 +58,6 @@ icc lp.c -o lp.x -Wl,-rpath=$LIBRARY_PATH -lClp ``` In this example, the lp.c code is compiled using the Intel compiler and linked with Clp. 
To run the code, the Intel module has to be loaded. + +[1]: https://projects.coin-or.org/Clp +[2]: https://projects.coin-or.org/ diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md index 53336d648..f043973cb 100644 --- a/docs.it4i/salomon/storage.md +++ b/docs.it4i/salomon/storage.md @@ -2,35 +2,35 @@ ## Introduction -There are two main shared file systems on Salomon cluster, the [HOME](#home) and [SCRATCH](#shared-filesystems). +There are two main shared file systems on Salomon cluster, the [HOME][1] and [SCRATCH][2]. All login and compute nodes may access same data on shared file systems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp file systems. ## Policy (In a Nutshell) !!! note - \* Use [HOME](#home) for your most valuable data and programs. - \* Use [WORK](#work) for your large project files. - \* Use [TEMP](#temp) for large scratch data. + \* Use [HOME][1] for your most valuable data and programs. + \* Use [WORK][3] for your large project files. + \* Use [TEMP][4] for large scratch data. !!! warning - Do not use for [archiving](#archiving)! + Do not use for [archiving][5]! ## Archiving -Please don't use shared file systems as a backup for large amount of data or long-term archiving mean. The academic staff and students of research institutions in the Czech Republic can use [CESNET storage service](#cesnet-data-storage), which is available via SSHFS. +Please don't use shared file systems as a backup for large amount of data or long-term archiving mean. The academic staff and students of research institutions in the Czech Republic can use [CESNET storage service][6], which is available via SSHFS. ## Shared File Systems -Salomon computer provides two main shared file systems, the [HOME file system](#home-filesystem) and the [SCRATCH file system](#scratch-filesystem). The SCRATCH file system is partitioned to [WORK and TEMP workspaces](#shared-workspaces). The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control. +Salomon computer provides two main shared file systems, the [HOME file system][7] and the [SCRATCH file system][8]. The SCRATCH file system is partitioned to [WORK and TEMP workspaces][9]. The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control. ### HOME File System -The HOME file system is realized as a Tiered file system, exported via NFS. The first tier has capacity 100 TB, second tier has capacity 400 TB. The file system is available on all login and computational nodes. The Home file system hosts the [HOME workspace](#home). +The HOME file system is realized as a Tiered file system, exported via NFS. The first tier has capacity 100 TB, second tier has capacity 400 TB. The file system is available on all login and computational nodes. The Home file system hosts the [HOME workspace][1]. ### SCRATCH File System -The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). 
Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces). +The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces][9]. Configuration of the SCRATCH Lustre storage @@ -46,11 +46,9 @@ Configuration of the SCRATCH Lustre storage ### Understanding the Lustre File Systems -[http://www.nas.nasa.gov](http://www.nas.nasa.gov) - A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. -When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. +When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes][a]. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results. @@ -106,7 +104,7 @@ Another good practice is to make the stripe count be an integral factor of the n Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. -Read more on [http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html](http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html) +Read more [here][b]. ## Disk Usage and Quota Commands @@ -220,22 +218,20 @@ mask::rwx other::--- ``` -Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL: - -[redhat guide](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s05.html) +Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL at [RedHat guide][c]. 
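As a minimal sketch (the group name and directory below are placeholders, not existing objects on Salomon), a default ACL that lets a collaborating group read and write everything created under a shared directory could be set and verified like this:

```console
$ setfacl -d -m g:projectgroup:rwx /scratch/work/user/username/shared
$ getfacl /scratch/work/user/username/shared
```

Newly created files and subdirectories then inherit the group ACL entry automatically (subject to the usual ACL mask), without relying on setuid/setgid bits.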
## Shared Workspaces ### Home -Users home directories /home/username reside on HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB should prove as insufficient for particular user, contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. +Users home directories /home/username reside on HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request. !!! note The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects. The HOME should not be used to archive data of past Projects or other unrelated data. -The files on HOME will not be deleted until end of the [users lifecycle](general/obtaining-login-credentials/obtaining-login-credentials/). +The files on HOME will not be deleted until end of the [users lifecycle][10]. The workspace is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files. @@ -274,7 +270,7 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as ### Temp -The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. >If 100 TB should prove as insufficient for particular user, contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request. +The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. If 100 TB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request. !!! note The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory. @@ -395,7 +391,7 @@ N = number of compute nodes in the job. Do not use shared file systems at IT4Innovations as a backup for large amount of data or long-term archiving purposes. !!! note - The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/). + The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service][e]. 
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic. @@ -403,20 +399,18 @@ User of data storage CESNET (DU) association can become organizations or an indi User may only use data storage CESNET for data transfer and storage which are associated with activities in science, research, development, the spread of education, culture and prosperity. In detail see “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”. -The service is documented [here](https://du.cesnet.cz/en/start). For special requirements contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz). +The service is documented [here][f]. For special requirements contact directly CESNET Storage Department via e-mail [du-support(at)cesnet.cz][g]. The procedure to obtain the CESNET access is quick and trouble-free. -(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage")) - ## CESNET Storage Access ### Understanding CESNET Storage !!! note - It is very important to understand the CESNET storage before uploading data. [Please read](https://du.cesnet.cz/en/navody/home-migrace-plzen/start) first. + It is very important to understand the CESNET storage before uploading data. [Please read][h] first. -Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in number of ways. We recommend the SSHFS and RSYNC methods. +Once registered for CESNET Storage, you may [access the storage][i] in number of ways. We recommend the SSHFS and RSYNC methods. ### SSHFS Access @@ -472,7 +466,7 @@ Rsync is a fast and extraordinarily versatile file copying tool. It is famous fo Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated. -More about Rsync at [here](https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele) +More about Rsync [here][j]. Transfer large files to/from CESNET storage, assuming membership in the Storage VO @@ -489,3 +483,25 @@ $ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafold ``` Transfer rates of about 28 MB/s can be expected. 
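For very large uploads it can also help to keep partially transferred files so that an interrupted run can resume instead of starting over. A sketch using the same Storage VO path as above (the `--partial` option is standard rsync behaviour, not a CESNET-specific feature):

```console
$ rsync --progress --partial -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/
```

Re-running the same command after a dropped connection reuses the partially transferred data instead of sending it again from scratch.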
+ +[1]: #home +[2]: #shared-filesystems +[3]: #work +[4]: #temp +[5]: #archiving +[6]: #cesnet-data-storage +[7]: #home-filesystem +[8]: #scratch-filesystem +[9]: #shared-workspaces +[10]: ../general/obtaining-login-credentials/obtaining-login-credentials.md + +[a]: http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping +[b]: http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html +[c]: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s05.html +[d]: https://support.it4i.cz/rt +[e]: https://du.cesnet.cz/ +[f]: https://du.cesnet.cz/en/start +[g]: mailto:du-support@cesnet.cz +[h]: https://du.cesnet.cz/en/navody/home-migrace-plzen/start +[i]: https://du.cesnet.cz/en/navody/faq/start) +[j]: https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele diff --git a/docs.it4i/salomon/visualization.md b/docs.it4i/salomon/visualization.md index 769f7a024..0cd031b8a 100644 --- a/docs.it4i/salomon/visualization.md +++ b/docs.it4i/salomon/visualization.md @@ -14,18 +14,18 @@ Remote visualization with NICE DCV software is availabe on two nodes. ## References -* [Graphical User Interface](salomon/shell-and-data-access/#graphical-user-interface) -* [VPN Access](salomon/shell-and-data-access/#vpn-access) +* [Graphical User Interface][1] +* [VPN Access][2] ## Install and Run **Install NICE DCV 2016** (user-computer) -* [Overview](https://www.nice-software.com/download/nice-dcv-2016) -* [Linux download](http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/linux/nice-dcv-endstation-2016.0-17066.run) -* [Windows download](http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/win/nice-dcv-endstation-2016.0-17066-Release.msi) +* [Overview][a] +* [Linux download][b] +* [Windows download][c] -**Install VPN client** [VPN Access](general/accessing-the-clusters/vpn-access/) (user-computer) +**Install VPN client** [VPN Access][3] !!! note Visualisation server is a compute node. You are not able to SSH with your private key. There are two solutions available to solve login issue. @@ -46,7 +46,7 @@ Remote visualization with NICE DCV software is availabe on two nodes. **Solution 2 - Copy private key from the cluster** -* Install WinSCP client (user-computer) [Download WinSCP installer](https://winscp.net/download/WinSCP-5.13.3-Setup.exe) +* Install WinSCP client (user-computer) [Download WinSCP installer][d] * Add credentials  @@ -65,8 +65,8 @@ Remote visualization with NICE DCV software is availabe on two nodes. **Install PuTTY** -* [Overview](https://docs.it4i.cz/general/accessing-the-clusters/shell-access-and-data-transfer/putty/) -* [Download PuTTY installer](https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.70-installer.msi) +* [Overview][4] +* [Download PuTTY installer][e] * Configure PuTTY  @@ -78,7 +78,7 @@ Remote visualization with NICE DCV software is availabe on two nodes. 
* Save -**Run VPN client** [VPN IT4Innovations](https://vpn.it4i.cz/user) (user-computer) +**Run VPN client** [VPN IT4Innovations][f] (user-computer) **Login to Salomon via PuTTY** (user-computer) @@ -138,7 +138,7 @@ $ ssh -i ~/salomon_key -TN -f user@vizserv1.salomon.it4i.cz -L 5901:localhost:59 $ ssh -i ~/salomon_key -TN -f user@vizserv2.salomon.it4i.cz -L 5902:localhost:5902 -L 7300:localhost:7300 -L 7301:localhost:7301 -L 7302:localhost:7302 -L 7303:localhost:7303 -L 7304:localhost:7304 -L 7305:localhost:7305 ``` -**Run VPN client** [VPN IT4Innovations](https://vpn.it4i.cz/user) (user-computer) +**Run VPN client** [VPN IT4Innovations][f] (user-computer) **Login to Salomon** (user-computer) @@ -180,3 +180,15 @@ $ qsub -I -q qviz -A OPEN-XX-XX -l select=1:ncpus=4:host=vizserv2,walltime=04:00  **LOGOUT FROM MENU: System->Logout** + +[1]: shell-and-data-access.md#graphical-user-interface +[2]: shell-and-data-access.md#vpn-access +[3]: ../general/accessing-the-clusters/vpn-access.md +[4]: ../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md + +[a]: https://www.nice-software.com/download/nice-dcv-2016 +[b]: http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/linux/nice-dcv-endstation-2016.0-17066.run +[c]: http://www.nice-software.com/storage/nice-dcv/2016.0/endstation/win/nice-dcv-endstation-2016.0-17066-Release.msi +[d]: https://winscp.net/download/WinSCP-5.13.3-Setup.exe +[e]: https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.70-installer.msi +[f]: https://vpn.it4i.cz/user -- GitLab