Proj4 is an object storage service, accessible via the S3 protocol, that is well-suited for a wide range of applications and use cases.
The Proj4 object storage is accessible from all IT4Innovations clusters
as well as from the outside.
Additionally, it allows sharing data across clusters.

You have to be a member of a project which is allowed to use the S3 storage.
After that, you will obtain a role and credentials for using the S3 storage.
## How to Configure S3 Client
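The client is configured interactively with `s3cmd --configure` (a minimal sketch below; the actual Proj4 endpoint address and the access/secret keys are part of the credentials you obtain with your project role and are not shown here):

```console
~ s3cmd --configure
```

A successful run ends with the configuration being saved: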
```console
Configuration saved to '/home/dvo0012/.s3cfg'
```
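To verify that the client is configured correctly, you can list the buckets you have access to (`s3cmd ls` is the standard listing subcommand; at this point, the list may still be empty):

```console
~ s3cmd ls
```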
Now you have to create a bucket for your data with **your_policy** (for example ABC, if ABC is your project).
If you create a bucket without a policy, we will not be able to manage the data expiration for your project, so please use the policy.
```console
~ s3cmd --add-header=X-Storage-Policy:ABC mb s3://test-bucket
~ s3cmd put test.sh s3://test-bucket
upload: 'test.sh' -> 's3://test-bucket/test.sh' [1 of 1]
~ s3cmd ls s3://test-bucket
2023-10-17 13:09 1239 s3://test-bucket/test.sh
```
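To check how much capacity your data consumes (which is what the expiration policy and project quota are concerned with), you can use the standard `s3cmd du` subcommand (a sketch, assuming the bucket created above):

```console
~ s3cmd du s3://test-bucket
```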
There is no way to set permissions for all members of a project at once,
so you have to set permissions for each user in the project.
Permissions can be set only by the owner of the bucket.
```console
~ s3cmd setacl s3://test-bucket/test1.log --acl-grant=full_control:user1
s3://test-bucket/test1.log: ACL updated
~ s3cmd setacl s3://test-bucket/test1.log --acl-grant=full_control:user2
s3://test-bucket/test1.log: ACL updated
~ s3cmd setacl s3://test-bucket/test1.log --acl-grant=read:user3
s3://test-bucket/test1.log: ACL updated
~ s3cmd setacl s3://test-bucket/test1.log --acl-grant=write:user4
s3://test-bucket/test1.log: ACL updated
~ s3cmd info s3://test-bucket/test1.log
s3://test-bucket/test1.log (object):
File size: 1024000000
```
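Granted permissions can later be withdrawn in the same way: `--acl-revoke` is the standard s3cmd counterpart of `--acl-grant` (a sketch, assuming the users from the example above):

```console
~ s3cmd setacl s3://test-bucket/test1.log --acl-revoke=write:user4
```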
## Bugs & Features
By default, the S3cmd client uses the so-called "multipart upload",
which means that it splits the uploaded file into "chunks" with a default size of 15 MB.
However, this upload method has major implications for the data capacity of the filesystem/fileset when overwriting existing files.
When overwriting an existing file in "multipart" mode, the capacity is duplicated:
the file is not overwritten but rewritten, the original file remains, and the capacity is allocated by both files.
This is a known Swift bug for which there is no fix yet,
but there is a workaround: disable "multipart upload" on the s3cmd client side.
```console
~ s3cmd --disable-multipart put /install/test1.log s3://test-bucket1
upload: '/install/test1.log' -> 's3://test-bucket1/test1.log' [1 of 1]
1024000000 of 1024000000 100% in 9s 99.90 MB/s done
```
This method is not recommended for large files because it is not as fast and reliable as multipart upload, but it is the only way to overwrite files without duplicating capacity.
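If you overwrite files regularly, you can disable multipart upload permanently on the client side instead of passing the flag every time. A minimal sketch, assuming the default configuration file location; `enable_multipart` and `multipart_chunk_size_mb` are standard s3cmd configuration keys:

```console
~ grep multipart ~/.s3cfg
enable_multipart = False
multipart_chunk_size_mb = 15
```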