xingli

Free Expansion of 75G Data Disk for VPS#

Preface#

I wonder if any of you have run into one of these situations: when buying a VPS, the data disk you chose was too small because of budget constraints; you need extra, more reliable storage on the VPS for file backups; or you simply want to play with object storage after watching my video. If so, this article is for you!

Specific Approach#

The approach itself is a common one: mounting an S3 bucket locally. This time, we will use the tool s3fs (https://github.com/s3fs-fuse/s3fs-fuse), which mounts an S3 bucket on Linux, macOS, and other systems through FUSE!

So I thought: find a free S3 service and mount it on our VPS. It can serve both as extra storage for the VPS and as a secure backup space, killing two birds with one stone.

In theory, any bucket that speaks the S3 protocol can be mounted on a VPS or a local machine this way!

Tools and Materials#

For this tutorial, I used a VPS running Debian 10, plus the free 75G of object storage provided by Scaleway (https://console.scaleway.com/register).

Scaleway.png

Scaleway's parent company is Online (Online SAS), founded in early 2002, with three data centers in Europe (Paris, Amsterdam, Warsaw). Its signature offering is the ability to spin up remote hosts with Apple M1 chips for €0.11/hour. Scaleway claims its object-storage servers sit in a fallout shelter 25 meters underground, which strikes me as very secure. That is why I chose Scaleway for this tutorial!

Scaleway requires a credit card at registration and bills in euros. If you don't have a credit card, you can refer to my articles on virtual and physical cards. Failing that, consider Oracle's free 20G S3-compatible storage, AWS's free 5G S3, and so on.

Operation Steps#

  1. Create a bucket in Scaleway;

创建对象存储.png

Note: Record the bucket name, do not choose Paris as the region, and set visibility to "Public"!

  2. Create an API key in Scaleway;

apikey.png

access.png

Note: When creating the API key, the owner is the IAM user, the expiration is "Never expires", and for whether it will be used for Object Storage, choose "Yes". After clicking "Generate Key", be sure to copy and save both the Access Key and the Secret Key; they are displayed only once!

  3. Operations inside the VPS:
apt update && apt install -y s3fs

echo "user_allow_other" >> /etc/fuse.conf

mkdir -p /oss

echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs

chmod 600 ~/.passwd-s3fs

Replace ACCESS_KEY and SECRET_KEY with your own!

s3fs BUCKET_ID /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=BUCKET_REGION -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.BUCKET_REGION.scw.cloud

Replace BUCKET_ID with your bucket name, and replace BUCKET_REGION (it appears twice) with your own region! You can find the region in the Bucket Endpoint: Amsterdam is nl-ams and Warsaw is pl-waw.
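As a quick sanity check, the endpoint URL is derived mechanically from the region code (the region value below is just an example):

```shell
# Scaleway region codes: nl-ams (Amsterdam), pl-waw (Warsaw).
# The object-storage endpoint is always s3.<region>.scw.cloud.
region="nl-ams"
url="https://s3.${region}.scw.cloud"
echo "$url"   # → https://s3.nl-ams.scw.cloud
```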

Next, check if the VPS has mounted the bucket:

df -h

Here it will show 256T of space, but don't get too excited: only 75G of it is free, and anything beyond that is billed monthly at €0.01 per GB!
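A back-of-the-envelope sketch of the overage cost, using the €0.01 per GB per month rate quoted above (used_gb is a made-up example, not a real measurement):

```shell
used_gb=100     # hypothetical total data stored in the bucket, in GB
free_gb=75      # Scaleway's free tier
rate=0.01       # EUR per GB per month beyond the free tier
over=$(( used_gb > free_gb ? used_gb - free_gb : 0 ))
cost=$(awk -v o="$over" -v r="$rate" 'BEGIN { printf "%.2f", o * r }')
echo "${cost} EUR/month"   # → 0.25 EUR/month
```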

  4. At this point, the mount is complete and we can benchmark the disk! Note that in dd tests, the mounted bucket shows clear speed bottlenecks depending on block size, ranging from about 13 MB/s to 39 MB/s. It cannot read and write like a local device and is limited by network speed and latency.

So if you put website files in the bucket, they will load a bit slower; with a VPS in the United States and object storage in Europe, for example, every request travels halfway around the world. As a backup disk, however, the bucket is a great fit: you can schedule backups, and since the bucket is separate from the VPS, even if the VPS is destroyed it doesn't matter; the files in the bucket can still be downloaded remotely. To me, that is the highlight of this video.

  5. We can also set the mount up to run automatically at startup if needed!
apt install -y supervisor

systemctl enable supervisor

vi /etc/supervisor/conf.d/s3fs.conf

Then add the following code:

[program:s3fs]
command=/bin/bash -c "s3fs vps-mount-amsterdam /oss -o allow_other -o passwd_file=~/.passwd-s3fs -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.nl-ams.scw.cloud"
directory=/
autorestart=true
stderr_logfile=/supervisor-err.log
stdout_logfile=/supervisor-out.log
user=root
stopsignal=INT

Note: The content inside the double quotes is identical to the manual mount command!! Then you can reboot to see the effect!
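An alternative worth mentioning: s3fs also supports mounting at boot via /etc/fstab with the fuse.s3fs type, so supervisor is not strictly required. A sketch (the bucket name, region, and paths must be adjusted to your own setup):

```
vps-mount-amsterdam /oss fuse.s3fs _netdev,allow_other,passwd_file=/root/.passwd-s3fs,use_path_request_style,endpoint=nl-ams,url=https://s3.nl-ams.scw.cloud 0 0
```

The _netdev option delays the mount until the network is up, which matters for a remote bucket.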

Where are the files?#

After mounting S3 on the VPS, we can upload and download files through FTP and SFTP. The security of object storage comes at the cost of some speed, so this method is not recommended for streaming; playing video files straight out of the bucket will stutter. Finally, the files we backed up can also be viewed and downloaded in Scaleway's object-storage console.

Advanced Play#

What I can think of now is to automatically back up website files and databases. The specific implementation methods are as follows:

  1. Write backup.sh as follows (the archive is a gzipped tar, so it is named .tar.gz):
#!/bin/bash
user=database_username
key=database_password
dbname=database_name
date=$(date +%Y%m%d)
bak=${dbname}_${date}
mysqldump -u"$user" --password="$key" "$dbname" > "/root/${bak}.sql"
tar czvf "/root/${bak}.tar.gz" /www/wwwroot/your_website_path
mv /root/*.sql /root/*.tar.gz /oss
  2. Run chmod u+x /root/backup.sh

  3. Add a cron job:

Run crontab -e and add:

30 10 * * * /root/backup.sh

Save, then check the task with crontab -l:

If all is well, the server will automatically package the website files and database at 10:30 every day, then move the backup files to /oss, which is the bucket we mounted. Now they are safe.
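The date-stamped naming scheme from backup.sh can be checked on its own (dbname here is just a placeholder):

```shell
dbname="myblog"            # placeholder database name
date=$(date +%Y%m%d)       # today's date, e.g. 20240131
bak="${dbname}_${date}"    # yields one distinct backup name per day
echo "$bak"
```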

References#

https://blog.51sec.org/

Video Tutorial#

Bilibili: https://www.bilibili.com/video/BV1LY411X78j/?spm_id_from=333.337.search-card.all.click&vd_source=2cbd72b9c63fa5b17f4fdd314add7688

Article source: https://iweec.com/700.html
