Commit 991aeb25 authored by Pavel Jirásek

Table formatting, links

parent 5433dd02
Pipeline #1638 passed with stages in 57 seconds
@@ -55,7 +55,7 @@ To access Salomon cluster, two login nodes running GSI SSH service are available
It is recommended to use the single DNS name salomon-prace.it4i.cz, which is distributed among the four login nodes. If needed, users can log in directly to one of the login nodes. The addresses are:
|Login address|Port|Protocol|Login node|
-|---|---|
+|---|---|---|---|
|salomon-prace.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
|login1-prace.salomon.it4i.cz|2222|gsissh|login1|
|login2-prace.salomon.it4i.cz|2222|gsissh|login2|
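As a minimal sketch, a login through the first table row could be assembled as below; the script only builds the command string, since an actual connection requires PRACE credentials and an initialized Grid proxy certificate (an assumption not covered by this table):

```shell
# Assemble the GSI SSH command for the PRACE-network alias on port 2222.
# Actually running it requires a valid Grid proxy (e.g. grid-proxy-init).
login_host="salomon-prace.it4i.cz"
gsi_port=2222
gsissh_cmd="gsissh -p ${gsi_port} ${login_host}"
echo "${gssh_cmd:-$gsissh_cmd}"
```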
@@ -77,7 +77,7 @@ When logging from other PRACE system, the prace_service script can be used:
It is recommended to use the single DNS name salomon.it4i.cz, which is distributed among the four login nodes. If needed, users can log in directly to one of the login nodes. The addresses are:
|Login address|Port|Protocol|Login node|
-|---|---|
+|---|---|---|---|
|salomon.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
|login1.salomon.it4i.cz|2222|gsissh|login1|
|login2.salomon.it4i.cz|2222|gsissh|login2|
@@ -132,7 +132,7 @@ There's one control server and three backend servers for striping and/or backup
**Access from PRACE network:**
|Login address|Port|Node role|
-|---|---|
+|---|---|---|
|gridftp-prace.salomon.it4i.cz|2812|Front end / control server|
|lgw1-prace.salomon.it4i.cz|2813|Backend / data mover server|
|lgw2-prace.salomon.it4i.cz|2813|Backend / data mover server|
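As a hedged sketch, a GridFTP transfer through the front-end / control server above could be assembled as follows; the source file and the target path under the /home/prace/login/ layout are illustrative assumptions:

```shell
# Build a globus-url-copy command targeting the control server on
# port 2812. Both paths are hypothetical examples; a real transfer
# also needs a valid proxy certificate.
src_url="file:///tmp/example.dat"
dst_url="gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/login/example.dat"
copy_cmd="globus-url-copy ${src_url} ${dst_url}"
echo "${copy_cmd}"
```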
@@ -165,7 +165,7 @@ Or by using prace_service script:
**Access from public Internet:**
|Login address|Port|Node role|
-|---|---|
+|---|---|---|
|gridftp.salomon.it4i.cz|2812|Front end / control server|
|lgw1.salomon.it4i.cz|2813|Backend / data mover server|
|lgw2.salomon.it4i.cz|2813|Backend / data mover server|
@@ -198,11 +198,11 @@ Or by using prace_service script:
Generally both shared file systems are available through GridFTP:
|File system mount point|Filesystem|Comment|
-|---|---|
+|---|---|---|
|/home|Lustre|Default HOME directories of users in format /home/prace/login/|
|/scratch|Lustre|Shared SCRATCH mounted on the whole cluster|
-More information about the shared file systems is available [here](storage/storage/).
+More information about the shared file systems is available [here](storage/).
Note that for PRACE users, a "prace" directory is also used on the SCRATCH file system.
@@ -234,7 +234,7 @@ General information about the resource allocation, job queuing and job execution
For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
|Queue|Active project|Project resources|Nodes|Priority|Authorization|Walltime (default/max)|
-|---|---|
+|---|---|---|---|---|---|---|
|**qexp** Express queue|no|none required|32 nodes, max 8 per user|150|no|1 / 1h|
|**qprace** Production queue|yes|>0|1006 nodes, max 86 per job|0|no|24 / 48h|
|**qfree** Free resource queue|yes|none required|752 nodes, max 86 per job|-1024|no|12 / 12h|
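A submission to the default PRACE production queue from the table above might be sketched like this; the job script name and node count are illustrative assumptions, and the walltime follows the 24-hour qprace default (48h maximum):

```shell
# Assemble a PBS submission to the "qprace" production queue.
# Node count and script name are hypothetical placeholders.
queue="qprace"
select_nodes=4
walltime="24:00:00"
submit_cmd="qsub -q ${queue} -l select=${select_nodes},walltime=${walltime} ./myjob.sh"
echo "${submit_cmd}"
```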