I often need to send backup files to an AWS S3 bucket, and there are many ways to do that; most commonly, system administrators use AWS CLI commands.
I will happily admit that I am a lazy person, and I don't like using the AWS CLI for this.
So I found a way to fix this permanently.
This might not be the best solution, but it has worked for me without any glitches or performance issues.
All you need is s3fs.
s3fs allows Linux, macOS, and FreeBSD to mount an S3 bucket via FUSE (Filesystem in Userspace).
- s3fs lets you operate on files and directories in an S3 bucket as if they were on a local file system.
- s3fs preserves the native object format for files, allowing use of other tools like AWS CLI.
Steps
1. Install s3fs
- Ubuntu/Debian
sudo apt update
sudo apt install s3fs
- CentOS/RHEL
sudo yum install s3fs-fuse
- macOS 10.12 and newer via Homebrew
brew install --cask macfuse
brew install gromgit/fuse/s3fs-mac
- FreeBSD
pkg install fusefs-s3fs
- Windows: Windows has its own install steps; see this link.
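Once installed, you can quickly confirm that the binary is available by checking its version; the exact output varies by platform and package version:
s3fs --version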
2. AWS Credentials
There are three ways:
- Option 1: use a text file (passwd-s3fs) to store the access key and secret key.
- Option 2: use an AWS configuration profile.
- Option 3: use role-based access if the server is an EC2 instance (most secure).
💡 With options 1 and 2, the AWS secret keys are stored on the server in a plain-text file, so access to that file must be restricted. Use a dedicated user for this purpose.
Using the password file
Create a ~/.passwd-s3fs file with your AWS credentials. The s3fs password file can be created:
- using a .passwd-s3fs file in the user's home directory (i.e. ${HOME}/.passwd-s3fs)
- using the system-wide /etc/passwd-s3fs file
echo "AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
# Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual credentials
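The same can be done for the system-wide file mentioned above; a minimal sketch, run with root privileges:
echo "AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY" | sudo tee /etc/passwd-s3fs > /dev/null
sudo chmod 600 /etc/passwd-s3fs
# Replace the placeholders with actual credentials; s3fs expects restrictive permissions on this file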
AWS Configuration profile
echo "
[s3bucket]
aws_access_key_id=AWS_ACCESS_KEY_ID
aws_secret_access_key=AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
# Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual credentials
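As with the password file, this credentials file holds plain-text secrets, so it is worth tightening its permissions as well:
chmod 600 ~/.aws/credentials
# Only the owning (dedicated) user should be able to read the credentials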
AWS EC2 role
Refer to the official AWS documentation on IAM roles for EC2 for this.
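If you want to double-check which role is attached to the instance before mounting, you can query the instance metadata service; this is a sketch using IMDSv2 and assumes you run it on the EC2 instance itself:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Prints the name of the IAM role attached to this instance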
3. Mount
- Create a Mount Point
mkdir /mnt/s3bucket
# Replace this directory path as per your requirement
- Mount the S3 bucket:
If using the password file
s3fs BUCKET_NAME /mnt/s3bucket
# Replace BUCKET_NAME with your actual S3 bucket name
s3fs will use the password file automatically.
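If the password file lives somewhere other than the default locations, you can point s3fs at it explicitly with the passwd_file option; the path below is just a hypothetical example:
s3fs BUCKET_NAME /mnt/s3bucket -o passwd_file=/etc/s3fs/backup-bucket.passwd
# Replace BUCKET_NAME and the path with your actual bucket name and password file location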
If using the AWS credentials file
s3fs BUCKET_NAME /mnt/s3bucket -o profile=<profile name>
# Replace <profile name> with the actual profile name saved in the ~/.aws/credentials file
If using the AWS IAM role
s3fs BUCKET_NAME /mnt/s3bucket
For this, you don’t need to specify credentials.
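s3fs also accepts an explicit iam_role option; a hedged example where "auto" lets s3fs discover the instance role on its own:
s3fs BUCKET_NAME /mnt/s3bucket -o iam_role=auto
# Alternatively, pass the role name directly: -o iam_role=ROLE_NAME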
4. Verify Mount
Execute the following command to see the bucket contents
ls /mnt/s3bucket
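Since the original goal was shipping backups, a quick end-to-end check is to copy a file in and list it back; /var/backups/db.tar.gz below is just a hypothetical path, so use any file you have at hand:
cp /var/backups/db.tar.gz /mnt/s3bucket/
ls -lh /mnt/s3bucket/db.tar.gz
# The file should also show up as an object in the S3 bucket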
5. Optional — Persistent Mount
Add this to /etc/fstab for auto-mounting on reboot:
s3fs#BUCKET_NAME /mnt/s3bucket fuse _netdev,passwd_file=/home/USER/.passwd-s3fs 0 0
Adjust the parameters as per your setup. Note that ~ is not expanded in /etc/fstab, so give an absolute path to the password file (or use the system-wide /etc/passwd-s3fs).
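You can test the fstab entry without rebooting by unmounting the bucket (if it is currently mounted) and remounting everything defined in /etc/fstab:
sudo umount /mnt/s3bucket
sudo mount -a
ls /mnt/s3bucket
# If mount -a reports no errors and the listing works, the entry is correct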
Other Important Parameters (to be used with the -o option)
- nonempty: allows mounting over a directory that is not empty.
- allow_other: allows other users to access this mount.
- use_cache=/tmp: sets the base directory where cached s3fs files are stored. This directory can grow up to the size of your bucket. Caching is disabled by default; enabling it can consume your server's disk space.
- dbglevel=info: sets the debug level, useful for troubleshooting the mount.
- storage_class: stores objects with the specified storage class. Possible values: standard, standard_ia, onezone_ia, reduced_redundancy, intelligent_tiering, glacier, and deep_archive. Default is the "standard" class.
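These options can be combined in a single mount command; a sketch with assumed values, so adjust the cache path and storage class to your needs:
s3fs BUCKET_NAME /mnt/s3bucket -o allow_other,use_cache=/tmp,storage_class=standard_ia,dbglevel=info
# Note: allow_other may also require user_allow_other to be enabled in /etc/fuse.conf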
Limitations
Generally, S3 cannot offer the same performance or semantics as a local file system. More specifically:
- random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
- metadata operations such as listing directories have poor performance due to network latency
- non-AWS providers may have eventual consistency so reads can temporarily yield stale data (AWS offers read-after-write consistency since Dec 2020)
- no atomic renames of files or directories
- no coordination between multiple clients mounting the same bucket
- no hard links
That’s a wrap!!
If you enjoyed this article, please like it and follow me. Also, don't forget to share it with your friends.
I am available on Instagram as @techbyteswithsuyash.