Commit da85c165 authored 3 years ago by Jan Siwiec (parent 7e5d14a7)

Update storage.md
Showing 1 changed file: docs.it4i/salomon/storage.md (+2 −46)
@@ -261,52 +261,7 @@ It is not recommended to allocate large amount of memory and use large amount of
### Global RAM Disk

The Global RAM disk spans the local RAM disks of all the nodes within a single job.
It deploys the BeeGFS On Demand parallel filesystem, using the local RAM disks as a storage backend.
The Global RAM disk is mounted at /mnt/global_ramdisk.

!!! note
    The Global RAM disk is on-demand. It has to be activated with **global_ramdisk=true** in the qsub command.
```console
$ qsub -q qprod -l select=4,global_ramdisk=true ./jobscript
```
This command submits a 4-node job to the qprod queue; once the job is running, a 440 GB RAM disk shared across the 4 nodes will be created.
The RAM disk will be accessible at /mnt/global_ramdisk, and files written to it will be visible on all 4 nodes.
The file system is private to the job and shared among its nodes; it is created when the job starts and deleted when the job ends.
!!! note
    The Global RAM disk will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
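As an illustration of the note above, a jobscript can stage data into the Global RAM disk and copy the results back to durable storage before the job ends. The following is only a sketch: the application `my_app`, the input and output file names, and the use of `$PBS_O_WORKDIR` as the destination are illustrative assumptions, not part of the Salomon documentation.

```bash
#!/usr/bin/env bash
# Hypothetical jobscript sketch; submit with:
#   qsub -q qprod -l select=4,global_ramdisk=true ./jobscript
# Application name and file names are placeholders.

RAMDISK=/mnt/global_ramdisk

# Stage input into the job-private RAM disk shared by all allocated nodes
cp "$PBS_O_WORKDIR"/input.dat "$RAMDISK"/

# Run the application against the RAM disk copy; every node sees the same files
cd "$RAMDISK"
mpirun "$PBS_O_WORKDIR"/my_app input.dat > output.dat

# The RAM disk is deleted at job end -- copy results back before exiting
cp output.dat "$PBS_O_WORKDIR"/
```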
The files on the Global RAM disk will be striped equally across all the nodes, using a 512k stripe size.

Check the Global RAM disk status:
```console
$ beegfs-df -p /mnt/global_ramdisk
$ beegfs-ctl --mount=/mnt/global_ramdisk --getentryinfo /mnt/global_ramdisk
```
Use the Global RAM disk when you need a very large RAM disk space. The Global RAM disk allows for high-performance sharing of data among the compute nodes within a job.

!!! warning
    Be very careful: use of the Global RAM disk file system comes at the expense of operational memory.
| Global RAM disk | |
| --------------- | ------------------------------------------ |
| Mountpoint      | /mnt/global_ramdisk                         |
| Accesspoint     | /mnt/global_ramdisk                         |
| Capacity        | (N * 110) GB                                |
| Throughput      | 3 * (N+1) GB/s, 2 GB/s single POSIX thread  |
| User quota      | none                                        |
N = number of compute nodes in the job.
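For illustration, the 4-node job from the example above gets a 4 * 110 GB = 440 GB RAM disk, matching the capacity quoted earlier, and by the throughput formula an aggregate bandwidth of 3 * (4+1) = 15 GB/s.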
For more information, see the [Job Features][12] section.
## Summary
@@ -382,6 +337,7 @@ N = number of compute nodes in the job.
[9]: #shared-workspaces
[10]: ../general/obtaining-login-credentials/obtaining-login-credentials.md
[11]: ../storage/standard-file-acl.md
[12]: ../job-features.md#global-ram-disk
[c]: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s05.html
[d]: https://support.it4i.cz/rt
Jan Siwiec (@siw019), Author · Maintainer · 3 years ago:

relates to #96