Commit a96f6865 authored by Jan Siwiec's avatar Jan Siwiec

Merge branch 'storage' into 'master'

new info about s3 storage

See merge request !444
parents 12687d4e 56ee7810
# Proj4 Object Storage
OpenStack Swift is a highly scalable, distributed object storage system designed to store and retrieve large amounts of unstructured data. It is an open-source project that provides simple, durable storage for applications and services, and it is built to be highly available and fault-tolerant.
Swift is highly modular and exposes a RESTful API that can be accessed from a variety of programming languages, making it easy to integrate into existing applications.
One of Swift's key features is horizontal scaling, which allows it to handle large volumes of data and high levels of traffic. It is also designed for durability: data is replicated across multiple nodes so that it remains available even if individual nodes fail.
Overall, OpenStack Swift is a powerful and flexible object storage system well suited to a wide range of applications and use cases.
## Accessing Proj4
The Proj4 object storage is accessible from all IT4Innovations clusters as well as from the outside world, which also allows you to share data across clusters.
To use it, you must be a member of a project that is allowed to use the S3 storage. You will then be given a role and credentials for accessing the S3 storage.
## How to Configure S3 Client
```console
$ sudo yum install s3cmd -y ## on Debian-based systems use apt-get
$ s3cmd --configure
.
.
.
.
Access Key: ***your_access***
Secret Key: ***your_secret_key***
Default Region: US
S3 Endpoint: 195.113.250.1:8080
DNS-style bucket+hostname:port template for accessing a bucket: 195.113.250.1:8080
Encryption password: random
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
.
.
.
Configuration saved to '/home/dvo0012/.s3cfg'
.
.
```
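The configuration dialog writes its answers to `~/.s3cfg`. Below is a minimal sketch of the resulting file showing only the entries relevant here, with the values from the dialog above; substitute your own credentials:

```ini
[default]
access_key = your_access
secret_key = your_secret_key
host_base = 195.113.250.1:8080
host_bucket = 195.113.250.1:8080
use_https = False
```

If you work with several storages, you can keep multiple such files and select one with `s3cmd -c <file>`.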
Now create a bucket for your data with **your_policy** (for example `ABC`, if `ABC` is your project).
If you create a bucket without a policy, we will not be able to manage the data expiration of your project, so please always use a policy.
```console
~ $ s3cmd --add-header=X-Storage-Policy:ABC mb s3://test-bucket
~ $ s3cmd put test.sh s3://test-bucket/
upload: 'test.sh' -> 's3://test-bucket/test.sh' [1 of 1]
1239 of 1239 100% in 0s 19.59 kB/s done
~ $ s3cmd ls
2023-10-17 13:00 s3://test-bucket
~ $ s3cmd ls s3://test-bucket
2023-10-17 13:09 1239 s3://test-bucket/test.sh
```
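For a file uploaded in a single part, the checksum that `s3cmd info` reports is the plain MD5 of the file, so you can verify an upload by comparing it with a locally computed digest. A quick sketch using `md5sum` (the file name `test.sh` is just the example file from above):

```shell
# Local MD5 digest of the file you uploaded:
md5sum test.sh | awk '{print $1}'
# Compare it with the "MD5 sum" line printed by:
#   s3cmd info s3://test-bucket/test.sh
```

Note that for multipart uploads the reported checksum is not a plain MD5 of the whole file, so this check only applies to single-part uploads.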
It is not possible to set permissions for all members of a project at once, so you have to set permissions for each user in the project individually. Only the owner of a bucket can set permissions.
```console
~ $ s3cmd setacl s3://test-bucket/test1.log --acl-grant=full_control:user1
s3://test-bucket/test1.log: ACL updated
~ $ s3cmd setacl s3://test-bucket/test1.log --acl-grant=full_control:user2
s3://test-bucket/test1.log: ACL updated
~ $ s3cmd setacl s3://test-bucket/test1.log --acl-grant=read:user3
s3://test-bucket/test1.log: ACL updated
~ $ s3cmd setacl s3://test-bucket/test1.log --acl-grant=write:user4
s3://test-bucket/test1.log: ACL updated
~ $ s3cmd info s3://test-bucket/test1.log
s3://test-bucket/test1.log (object):
   File size: 1024000000
   Last mod:  Mon, 09 Oct 2023 08:06:12 GMT
   MIME type: application/xml
   Storage:   STANDARD
   MD5 sum:   b5c667a723a10a3485a33263c4c2b978
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       OBJtest:user2: FULL_CONTROL
   ACL:       *anon*: READ
   ACL:       user1: FULL_CONTROL
   ACL:       user2: FULL_CONTROL
   ACL:       user3: READ
   ACL:       user4: WRITE
   URL:       http://195.113.250.1:8080/test-bucket/test1.log
   x-amz-meta-s3cmd-attrs: atime:1696588450/ctime:1696588452/gid:1001/gname:******/md5:******/mode:33204/mtime:1696588452/uid:******/uname:******
```
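Since grants can only be issued per user, giving access to a whole team means repeating the `setacl` call. A small loop saves typing; this is only a sketch with a made-up user list (`user1 user2 user3`), and the `echo` keeps it a dry run. Remove the `echo` to actually apply the grants:

```shell
bucket_object="s3://test-bucket/test1.log"
# Hypothetical list of project members; replace with your own.
for user in user1 user2 user3; do
    # Remove 'echo' to actually run s3cmd.
    echo s3cmd setacl "$bucket_object" --acl-grant=full_control:"$user"
done
```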
## Bugs & Features
By default, the s3cmd client uses so-called "multipart upload", which splits the uploaded file into chunks with a default size of 15 MB. However, this upload method has major implications for the data capacity of the filesystem/fileset when overwriting existing files. When an existing file is overwritten in multipart mode, the capacity is duplicated: the file is not overwritten in place but rewritten, and the original file remains, so capacity is allocated by both files. This is a known Swift bug for which there is no fix yet. There is, however, a workaround: disable multipart upload on the s3cmd client side.
```console
~ $ s3cmd --disable-multipart put /install/test1.log s3://test-bucket1
upload: '/install/test1.log' -> 's3://test-bucket1/test1.log' [1 of 1]
1024000000 of 1024000000 100% in 9s 99.90 MB/s done
```
This method is not recommended for large files, because it is not as fast and reliable as multipart upload, but it is currently the only way to overwrite files without duplicating the allocated capacity.
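To get a feel for the numbers: with the default 15 MB chunk size, the 1024000000-byte test file from the example above is split into 66 parts, and an overwrite in multipart mode temporarily leaves roughly twice the file size allocated until the old object's capacity is reclaimed. A back-of-the-envelope shell arithmetic sketch:

```shell
chunk=$((15 * 1024 * 1024))              # s3cmd default multipart chunk size
size=1024000000                          # the test file from the example above
parts=$(( (size + chunk - 1) / chunk ))  # ceiling division
echo "parts: $parts"
# While the old object is still allocated after a multipart overwrite,
# the bucket accounts for roughly twice the file size:
echo "allocated after overwrite: $((2 * size)) bytes"
```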
```diff
@@ -125,7 +125,9 @@ nav:
   - LUMI: lumi.md
   - Support: general/support.md
   - Storage:
-    - PROJECT: storage/project-storage.md
+    - PROJECT:
+      - Project storage: storage/project-storage.md
+      - Proj4 storage - S3: storage/proj4-storage.md
     - CESNET: storage/cesnet-storage.md
   - Access Control List:
     - Standard File ACL: storage/standard-file-acl.md
```