Security Configuration
For an overview of SeaweedFS security architecture and communication layers, see Security-Overview. For FIPS 140-3 compliance information, see Cryptography-and-FIPS-Compliance.
The first step is generating a security.toml file via `weed scaffold -config=security`:
```
$ weed scaffold -config=security

# Put this file in one of these locations, with descending priority
# ./security.toml
# $HOME/.seaweedfs/security.toml
# /etc/seaweedfs/security.toml
# this file is read by master, volume server, and filer
# comma separated origins allowed to make requests to the filer and s3 gateway.
# enter in this format: https://domain.com, or http://localhost:port
[cors.allowed_origins]
values = "*"
# this jwt signing key is read by master and volume server, and it is used for write operations:
# - the Master server generates the JWT, which can be used to write a certain file on a volume server
# - the Volume server validates the JWT on writing
# the jwt defaults to expire after 10 seconds.
[jwt.signing]
key = ""
expires_after_seconds = 10 # seconds
# by default, if the signing key above is set, the Volume UI over HTTP is disabled.
# by setting access.ui to true, you can re-enable the Volume UI. Despite
# some information leakage (as the UI is not authenticated), this should not
# pose a security risk.
[access]
ui = false
# by default the filer UI is enabled. This can be a security risk if the filer is exposed to the public
# and the JWT for reads is not set. If you don't want the public to have access to the objects in your
# storage and you haven't set the JWT for reads, it is wise to disable access to directory metadata.
# Setting enabled = false disables access to the Filer UI and stops returning directory metadata in GET requests.
[filer.expose_directory_metadata]
enabled = true
# this jwt signing key is read by master and volume server, and it is used for read operations:
# - the Master server generates the JWT, which can be used to read a certain file on a volume server
# - the Volume server validates the JWT on reading
# NOTE: jwt for read is only supported with master+volume setup. Filer does not support this mode.
[jwt.signing.read]
key = ""
expires_after_seconds = 10 # seconds
# If this JWT key is configured, Filer only accepts writes over HTTP if they are signed with this JWT:
# - e.g., the S3 API Shim generates the JWT
# - the Filer server validates the JWT on writing
# the jwt defaults to expire after 10 seconds.
[jwt.filer_signing]
key = ""
expires_after_seconds = 10 # seconds
# If this JWT key is configured, Filer only accepts reads over HTTP if they are signed with this JWT:
# - e.g., the S3 API Shim generates the JWT
# - the Filer server validates the JWT on reading
# the jwt defaults to expire after 10 seconds.
[jwt.filer_signing.read]
key = ""
expires_after_seconds = 10 # seconds
# gRPC mTLS configuration
# All gRPC TLS authentications are mutual (mTLS)
# The values for ca, cert, and key are paths to the certificate/key files
# The host name is not checked, so the certificate files can be shared
[grpc]
ca = ""
# Set a wildcard domain to enable TLS authentication by common name
allowed_wildcard_domain = "" # .mycompany.com
# Volume server gRPC options (server-side)
# Enables mTLS for incoming gRPC connections to volume server
[grpc.volume]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
# Master server gRPC options (server-side)
# Enables mTLS for incoming gRPC connections to master server
[grpc.master]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
# Filer server gRPC options (server-side)
# Enables mTLS for incoming gRPC connections to filer server
[grpc.filer]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
# S3 server gRPC options (server-side)
# Enables mTLS for incoming gRPC connections to S3 server
[grpc.s3]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
[grpc.msg_broker]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
[grpc.msg_agent]
cert = ""
key = ""
allowed_commonNames = "" # comma-separated SSL certificate common names
# gRPC client configuration for outgoing gRPC connections
# Used by clients (S3, mount, backup, benchmark, filer.copy, filer.replicate, upload, etc.)
# when connecting to any gRPC server (master, volume, filer)
[grpc.client]
cert = ""
key = ""
# HTTPS client configuration for outgoing HTTP connections
# Used by S3, mount, filer.copy, backup, and other clients when communicating with master/volume/filer
# Set enabled=true to use HTTPS instead of HTTP for data operations (separate from gRPC)
# If [https.filer] or [https.volume] are enabled on servers, clients must have [https.client] enabled=true
[https.client]
enabled = false # Set to true to enable HTTPS for all outgoing HTTP client connections
cert = "" # Client certificate for mTLS (optional if server doesn't require client cert)
key = "" # Client key for mTLS (optional if server doesn't require client cert)
ca = "" # CA certificate to verify server certificates (required when enabled=true)
# Volume server HTTPS options (server-side)
# Enables HTTPS for incoming HTTP connections to volume server
[https.volume]
cert = ""
key = ""
ca = ""
# Master server HTTPS options (server-side)
# Enables HTTPS for incoming HTTP connections to master server (web UI, HTTP API)
[https.master]
cert = ""
key = ""
ca = ""
# Filer server HTTPS options (server-side)
# Enables HTTPS for incoming HTTP connections to filer server (web UI, HTTP API)
[https.filer]
cert = ""
key = ""
ca = ""
# disable_tls_verify_client_cert = true|false (default: false)
# IP whitelist: requests are checked against the client IP address.
[guard]
white_list = ""
```
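To make the write flow concrete, here is a minimal Go sketch of the pattern that `[jwt.signing]` enables, using the `github.com/golang-jwt/jwt/v5` library and an illustrative `fid` claim (SeaweedFS's internal claim structure and JWT library may differ): the master issues a short-lived HS256 token when it assigns a file id, and the volume server rejects any write whose token is missing, expired, or issued for a different file.

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

var signingKey = []byte("blahblahblahblah") // the [jwt.signing] key

// issueWriteToken mimics what the master does when it assigns a file id:
// it returns a short-lived token that authorizes writing exactly that fid.
func issueWriteToken(fid string) (string, error) {
	claims := jwt.MapClaims{
		"fid": fid,                                     // illustrative claim name, not SeaweedFS's exact struct
		"exp": time.Now().Add(10 * time.Second).Unix(), // expires_after_seconds = 10
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
}

// validateWriteToken mimics the volume server's check before accepting a write.
func validateWriteToken(tokenString, fid string) error {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
		}
		return signingKey, nil
	})
	if err != nil {
		return err // covers both a bad signature and an expired token
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok || claims["fid"] != fid {
		return fmt.Errorf("token is not valid for fid %s", fid)
	}
	return nil
}

func main() {
	token, err := issueWriteToken("3,01637037d6")
	if err != nil {
		panic(err)
	}
	fmt.Println("write allowed:", validateWriteToken(token, "3,01637037d6") == nil)
}
```

The same pattern, with different issuers and validators, applies to `[jwt.signing.read]` and the two `[jwt.filer_signing]` sections.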
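Similarly, `[https.client]` corresponds to standard Go TLS client setup. As a hedged sketch, with placeholder certificate paths and the filer's default port 8888, a client talking to an HTTPS-enabled filer would look like:

```go
// Sketch of what [https.client] enables: an HTTP client that verifies the
// server against the cluster CA and presents a client certificate for mTLS.
// Certificate paths are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// [https.client] cert/key: optional unless the server requires a client cert.
	clientCert, err := tls.LoadX509KeyPair("client01.crt", "client01.key")
	if err != nil {
		panic(err)
	}
	// [https.client] ca: required when enabled = true, used to verify the server.
	caPEM, err := os.ReadFile("SeaweedFS_CA.crt")
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		panic("cannot parse CA certificate")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      caPool,
				Certificates: []tls.Certificate{clientCert},
			},
		},
	}

	resp, err := client.Get("https://localhost:8888/") // filer, now over HTTPS
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```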
The following commands are what I used to generate the private key and certificate files, using https://github.com/square/certstrap. To install this tool, run `go install github.com/square/certstrap@latest`, or alternatively `brew install certstrap` if you are on macOS and use Homebrew.
```
certstrap init --common-name "SeaweedFS CA"
certstrap request-cert --common-name volume01
certstrap request-cert --common-name master01
certstrap request-cert --common-name filer01
certstrap request-cert --common-name client01
certstrap sign --expires "2 years" --CA "SeaweedFS CA" volume01
certstrap sign --expires "2 years" --CA "SeaweedFS CA" master01
certstrap sign --expires "2 years" --CA "SeaweedFS CA" filer01
certstrap sign --expires "2 years" --CA "SeaweedFS CA" client01
```
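Before wiring these files into security.toml, it can be worth sanity-checking that each issued certificate actually chains to the new CA. A minimal Go sketch, assuming certstrap's default `./out` output directory:

```go
// Sanity check: does each certstrap-issued certificate chain back to the CA?
// File names follow certstrap's default output directory ./out.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func loadCert(path string) *x509.Certificate {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic(path + ": no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	roots := x509.NewCertPool()
	roots.AddCert(loadCert("out/SeaweedFS_CA.crt"))

	for _, name := range []string{"volume01", "master01", "filer01", "client01"} {
		cert := loadCert("out/" + name + ".crt")
		// ExtKeyUsageAny: only verify the chain here; extended key usage
		// is checked separately (see the note on EKUs further below).
		_, err := cert.Verify(x509.VerifyOptions{
			Roots:     roots,
			KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
		})
		fmt.Printf("%s chains to CA: %v\n", name, err == nil)
	}
}
```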
Here is my security.toml file content:
```toml
# Put this file in one of these locations, with descending priority
# ./security.toml
# $HOME/.seaweedfs/security.toml
# /etc/seaweedfs/security.toml
[jwt.signing]
key = "blahblahblahblah"
[jwt.filer_signing]
key = "blahblahblahblah"
# all grpc tls authentications are mutual
[grpc]
ca = "/Users/chris/.seaweedfs/out/SeaweedFS_CA.crt"
[grpc.volume]
cert = "/Users/chris/.seaweedfs/out/volume01.crt"
key = "/Users/chris/.seaweedfs/out/volume01.key"
[grpc.master]
cert = "/Users/chris/.seaweedfs/out/master01.crt"
key = "/Users/chris/.seaweedfs/out/master01.key"
[grpc.filer]
cert = "/Users/chris/.seaweedfs/out/filer01.crt"
key = "/Users/chris/.seaweedfs/out/filer01.key"
[grpc.client]
cert = "/Users/chris/.seaweedfs/out/client01.crt"
key = "/Users/chris/.seaweedfs/out/client01.key"
Java gRPC uses Netty's SslContext. From https://netty.io/wiki/sslcontextbuilder-and-private-key.html:
The SslContextBuilder and so Netty's SslContext implementations only support PKCS8 keys.
If you have a key with another format you need to convert it to PKCS8 first to be able to use it. This can be done easily by using openssl.
For example, to convert a non-encrypted PKCS1 key to PKCS8 you would use:

```
openssl pkcs8 -topk8 -nocrypt -in pkcs1_key_file -out pkcs8_key.pem
```
If you are using existing certificates, make sure they all have both Extended Key Usages, 'TLS Web Server Authentication' AND 'TLS Web Client Authentication', set, as gRPC uses each certificate for both roles. Otherwise you will see errors like `error reading server preface: remote error: tls: bad certificate`.
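A quick way to check an existing certificate for both usages is to parse it and inspect its `ExtKeyUsage` list; a small Go sketch (the path is an assumption following the certstrap example above):

```go
// Check that a certificate carries both Extended Key Usages gRPC needs,
// to catch the "remote error: tls: bad certificate" failure described above.
// The path is an assumption based on the certstrap example earlier.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("out/volume01.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	hasServer, hasClient := false, false
	for _, eku := range cert.ExtKeyUsage {
		switch eku {
		case x509.ExtKeyUsageServerAuth:
			hasServer = true
		case x509.ExtKeyUsageClientAuth:
			hasClient = true
		}
	}
	fmt.Printf("serverAuth=%v clientAuth=%v\n", hasServer, hasClient)
	if !hasServer || !hasClient {
		fmt.Println("re-issue this certificate with both usages set")
	}
}
```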
Choose your CA carefully when using existing certificates in an enterprise environment. Since Seaweed only checks the certificate against the CA and optionally validates the CN of the certificate against a whitelist, do not use the root CA of the company or any other CA you don't control, as this means someone else can generate a client certificate your cluster will accept as legitimate. Instead, generate an intermediate CA for each Seaweed cluster and use it to generate server and client certificates. This intermediate CA is the one you should use in grpc.ca and https.*.ca properties.