
Content revision

Merged David Hrbáč requested to merge content_revision into master
1 file  +7 −7
@@ -55,7 +55,7 @@ To access Salomon cluster, two login nodes running GSI SSH service are available
 It is recommended to use the single DNS name salomon-prace.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
 |Login address|Port|Protocol|Login node|
-|---|---|
+|---|---|---|---|
 |salomon-prace.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
 |login1-prace.salomon.it4i.cz|2222|gsissh|login1|
 |login2-prace.salomon.it4i.cz|2222|gsissh|login2|
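
For reference, a login using the values from the table above might look like this (a sketch assuming a GSI SSH client and a valid grid proxy certificate; the account name is a placeholder):

```shell
# Log in via GSI SSH on port 2222 using the round-robin DNS name.
# "prace_login" is a placeholder for your PRACE account name.
gsissh -p 2222 prace_login@salomon-prace.it4i.cz

# Or target a specific login node directly:
gsissh -p 2222 prace_login@login1-prace.salomon.it4i.cz
```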
@@ -77,7 +77,7 @@ When logging from other PRACE system, the prace_service script can be used:
 It is recommended to use the single DNS name salomon.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
 |Login address|Port|Protocol|Login node|
-|---|---|
+|---|---|---|---|
 |salomon.it4i.cz|2222|gsissh|login1, login2, login3 or login4|
 |login1.salomon.it4i.cz|2222|gsissh|login1|
 |login2-prace.salomon.it4i.cz|2222|gsissh|login2|
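
The hunk header above mentions the prace_service script, which resolves a site's endpoint so addresses need not be hard-coded. A login from the public Internet might be sketched as follows (the `-e -s` flags are assumed from PRACE tooling conventions; verify with `prace_service -h` on your system):

```shell
# prace_service prints the GSI SSH address for a site; here -e is assumed
# to select the external (public Internet) endpoint and -s the SSH service.
gsissh -p 2222 prace_login@`prace_service -e -s salomon`
```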
@@ -132,7 +132,7 @@ There's one control server and three backend servers for striping and/or backup
 **Access from PRACE network:**
 |Login address|Port|Node role|
-|---|---|
+|---|---|---|
 |gridftp-prace.salomon.it4i.cz|2812|Front end /control server|
 |lgw1-prace.salomon.it4i.cz|2813|Backend / data mover server|
 |lgw2-prace.salomon.it4i.cz|2813|Backend / data mover server|
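
A transfer against the front end from the table above might be sketched like this (assuming the standard globus-url-copy client and a valid proxy; the local path and account name are placeholders):

```shell
# Copy a local file to Salomon over GridFTP via the PRACE-network
# front end / control server on port 2812 (from the table above).
globus-url-copy \
  file:///home/localuser/data.tar \
  gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/prace_login/data.tar
```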
@@ -165,7 +165,7 @@ Or by using prace_service script:
 **Access from public Internet:**
 |Login address|Port|Node role|
-|---|---|
+|---|---|---|
 |gridftp.salomon.it4i.cz|2812|Front end /control server|
 |lgw1.salomon.it4i.cz|2813|Backend / data mover server|
 |lgw2.salomon.it4i.cz|2813|Backend / data mover server|
@@ -198,11 +198,11 @@ Or by using prace_service script:
 Generally both shared file systems are available through GridFTP:
 |File system mount point|Filesystem|Comment|
-|---|---|
+|---|---|---|
 |/home|Lustre|Default HOME directories of users in format /home/prace/login/|
 |/scratch|Lustre|Shared SCRATCH mounted on the whole cluster|
-More information about the shared file systems is available [here](storage/storage/).
+More information about the shared file systems is available [here](storage/).
 Please note, that for PRACE users a "prace" directory is used also on the SCRATCH file system.
@@ -234,7 +234,7 @@ General information about the resource allocation, job queuing and job execution
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
 |queue|Active project|Project resources|Nodes|priority|authorization|walltime|
-|---|---|
+|---|---|---|---|---|---|---|
 |**qexp** Express queue|no|none required|32 nodes, max 8 per user|150|no|1 / 1h|
 |**qprace** Production queue|yes|>0|1006 nodes, max 86 per job|0|no|24 / 48h|
 |**qfree** Free resource queue|yes|none required|752 nodes, max 86 per job|-1024|no|12 / 12h|
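
A submission to the default PRACE production queue from the table above might be sketched as follows (a minimal example assuming PBS-style `qsub`; the node/core counts and script name are illustrative placeholders, not values mandated by this page):

```shell
# Submit a job to the "qprace" production queue, requesting 4 nodes
# for up to 24 hours; ncpus and the script path are placeholders.
qsub -q qprace -l select=4:ncpus=24 -l walltime=24:00:00 ./myjob.sh
```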