SeaweedFS for S3 workloads

· 3 min read
Stephan Hochdörfer
Head of IT Business Operations

With MinIO now in maintenance mode, you might be looking for alternative solutions to handle your S3 workloads.

We've been successfully using SeaweedFS in our Nomad cluster since 2023, with no issues to date. Our setup utilizes the SeaweedFS CSI plugin, although it's worth noting that SeaweedFS also offers a native S3 API integration as an alternative option.

How to run SeaweedFS for S3 workloads?

  1. A simplified approach suitable for local development environments only

The easiest way for local development is to run SeaweedFS directly in S3 mode on your local machine with the following command:

docker run -p 8333:8333 chrislusf/seaweedfs:latest server -s3

  2. The recommended approach for standard deployments

A typical SeaweedFS deployment consists of three core components:

  • a Master node, responsible for cluster management
  • a Volume node, which handles the actual data storage
  • a Filer node, which stores metadata for files and directories

To enable S3 API access, an additional S3 service runs on top of the Filer.

For a concrete example, refer to this Docker Compose file, which demonstrates a fully functional setup:

services:
  master:
    image: chrislusf/seaweedfs:latest
    ports:
      - 9333:9333
    command: 'master -ip=master -ip.bind=0.0.0.0'

  volume:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8080:8080
    command: 'volume -ip=volume -mserver="master:9333" -ip.bind=0.0.0.0 -port=8080'
    depends_on:
      - master

  filer:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8888:8888
    command: 'filer -ip=filer -master="master:9333" -ip.bind=0.0.0.0'
    depends_on:
      - master
      - volume

  s3:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8333:8333
    command: 's3 -filer="filer:8888" -ip.bind=0.0.0.0'
    depends_on:
      - master
      - volume
      - filer
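With the Compose file in place, the stack can be started and smoke-tested from the command line. This is a sketch assuming the AWS CLI is installed locally; since no credentials are configured on the server yet, dummy values are exported only to satisfy the CLI's request-signing requirement:

```shell
# start the SeaweedFS stack defined in the Compose file above
docker compose up -d

# the AWS CLI insists on credentials even for anonymous endpoints,
# so export dummy values (SeaweedFS ignores them without an auth config)
export AWS_ACCESS_KEY_ID=any
export AWS_SECRET_ACCESS_KEY=any

# create a bucket and list all buckets against the local S3 gateway
aws --endpoint-url http://localhost:8333 s3 mb s3://mybucket
aws --endpoint-url http://localhost:8333 s3 ls
```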

How to access SeaweedFS via S3?

Interacting with SeaweedFS via the S3 API is seamless and similar to working with any other S3-compatible service. For instance, you can leverage libraries like Flysystem, which provides a convenient S3 adapter for easy integration.

Download the dependencies with Composer first:

composer require league/flysystem-aws-s3-v3 aws/aws-sdk-php

Configure the S3 client and point it to the SeaweedFS S3 endpoint you have set up before:

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use League\Flysystem\Filesystem;
use League\Flysystem\AwsS3V3\AwsS3V3Adapter;

$bucket = 'mybucket';

$s3Client = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1',
    'endpoint' => 'http://localhost:8333', // SeaweedFS S3 endpoint
    'use_path_style_endpoint' => true,
    'credentials' => false,
]);

Since we haven't configured any credentials on the S3 server, we explicitly set 'credentials' => false so the SDK sends anonymous, unsigned requests instead of looking for credentials in the environment.

Next, we check if the bucket we want to write to exists and create it if it doesn't:

if (!$s3Client->doesBucketExist($bucket)) {
    $s3Client->createBucket(['Bucket' => $bucket]);
}

Finally, we wire everything together and create a new filesystem instance:

$adapter = new AwsS3V3Adapter($s3Client, $bucket);
$filesystem = new Filesystem($adapter);

Now we can write a file to the configured S3 bucket like this:

$filesystem->write('hello.txt', "Hello from Flysystem + SeaweedFS!");

Content can be read back like this:

$content = $filesystem->read('hello.txt');

As you can see, using SeaweedFS as an S3 backend is straightforward.

But there's more. SeaweedFS also supports S3 Authentication, Server Side Encryption, and S3 Object Versioning. For a complete list of supported features, please refer to the SeaweedFS documentation.
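As an illustration, S3 authentication can be enabled by passing a JSON config file to the S3 service at startup. The sketch below shows the general shape of such a config as described in the SeaweedFS documentation; the identity name and keys are placeholder values you would replace with your own:

```json
{
  "identities": [
    {
      "name": "admin",
      "credentials": [
        {
          "accessKey": "replace_with_access_key",
          "secretKey": "replace_with_secret_key"
        }
      ],
      "actions": ["Admin", "Read", "Write", "List", "Tagging"]
    }
  ]
}
```

Once authentication is active, the Flysystem client above would need real credentials instead of 'credentials' => false.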