Atrium is a self-hosted web app for browsing and managing S3-compatible object storage.
**Note**

Hello! Human here. This project was vibe-coded with various AI models (you will see Claude Sonnet 4.5, OpenAI GPT-5.3 Codex, and some others) during a long weekend. I wanted to find out what it's like to really one-shot software that I actually need in real life, not just another janky project full of AI slop.

Turns out it actually works, and by the time you read this, I probably have this application running at my company.

My specific use case: I'm moving away from MinIO to SeaweedFS because of the drama and the end of support, obviously. SeaweedFS does not provide a web UI the way MinIO did, and my company has a really weird process that needs one, so I really need a UI. Other solutions such as S3 Browser and Cyberduck are not viable for me because they are freemium and not fully open source.
- Login with Access Key ID + Secret Access Key (no user account system)
- Redis-backed secure session tokens with TTL and httpOnly cookie auth
- Provider-agnostic S3 support (AWS S3, Cloudflare R2, MinIO, ...you name it)
- Bucket and folder navigation with breadcrumbs
- File upload (drag/drop or picker) with progress indication
- File download with proper content-type and filename
- Delete file or folder prefix with confirmation dialogs
- File preview for:
  - Images: `jpg`, `jpeg`, `png`, `gif`, `webp`, `svg`
  - Text: `txt`, `md`, `json`, `xml`, `csv`, `log`
- Client-side filtering of current folder entries by name
This implementation uses a Vite + React frontend with a Fastify backend in one repository.
- Frontend: React + TypeScript (dashboard in `src/app`, reusable UI in `src/components`)
- Backend: Node.js + Fastify + TypeScript (`src/server`)
- S3 operations: AWS SDK v3 (`@aws-sdk/client-s3`)
- Session store: Redis (token -> credentials, TTL)
- Deployment: Docker Compose (`atrium-app`, `redis`, `minio`)
The `edge` tag refers to the default branch. The following example is intended for a local development setup where Redis and MinIO run as sibling containers; service hostnames are used rather than `localhost`, which would point to the container itself.
```yaml
services:
  atrium:
    image: "ghcr.io/aldy505/atrium:edge"
    ports:
      - "3000:3000"
    environment:
      REDIS_URL: "redis://redis:6379"
      S3_ENDPOINT: "http://minio:9000"
      S3_REGION: "us-east-1"
      S3_FORCE_PATH_STYLE: true
  redis:
    image: redis:7-alpine
  minio:
    image: minio/minio:latest
    # --console-address pins the MinIO Console to 9001; without it the
    # console binds to a random port and the 9001 mapping below won't work.
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    ports:
      - "9000:9000"
      - "9001:9001"
```

If you prefer to run it directly using Node, you can download the pre-built artifacts from the latest GitHub Actions run.
- Download the artifacts that correspond to your environment (Linux amd64, Linux arm64, or Windows).
- Extract the archive (`unzip atrium-{platform}-{sha}.zip`)
- Run it using Node.js (`node --import ./dist/server/sentry.server.js ./dist/server/index.js`)
- Copy env template: `cp .env.example .env`
- Start all services: `docker compose up --build`
- Open Atrium:
  - App: http://localhost:3000
  - MinIO API: http://localhost:9000
  - MinIO Console: http://localhost:9001
- Log in to Atrium using MinIO test credentials:
  - Access Key ID: `minioadmin`
  - Secret Access Key: `minioadmin`
- Create a bucket in MinIO Console first, then browse/upload/download/delete through Atrium.
| Variable | Required | Default | Description |
|---|---|---|---|
| `NODE_ENV` | no | `development` | Runtime environment |
| `PORT` | no | `3000` | API/web app port |
| `REDIS_URL` | yes | - | Redis connection URL |
| `SESSION_TTL_SECONDS` | no | `86400` | Session TTL in seconds |
| `COOKIE_NAME` | no | `atrium_session` | Session cookie name |
| `BUCKET_SIZE_CALC_INTERVAL_HOURS` | no | `1` | Background bucket-size job interval (hours) |
| `BUCKET_SIZE_MAX_DURATION_MS` | no | `300000` | Max runtime per bucket size calculation |
| `BUCKET_SIZE_MAX_OBJECTS` | no | `1000000` | Max objects scanned per calculation |
| `ENABLE_S3_URI_COPY` | no | `false` | Show "Copy S3 URI" action in object sidebar |
| `S3_ENDPOINT` | yes | - | S3-compatible endpoint URL |
| `S3_REGION` | yes | - | S3 region string |
| `S3_FORCE_PATH_STYLE` | no | `true` | Use path-style S3 URLs (needed by MinIO) |
| `MAX_UPLOAD_SIZE_MB` | no | `100` | Per-file upload size limit |
| `AUDIT_LOG_SINK` | no | `filesystem` | Audit log sink (`filesystem`, `loki`, `none`) |
| `AUDIT_LOG_DIR` | no | `audit-logs` | Filesystem audit log directory |
| `AUDIT_LOG_RETENTION_DAYS` | no | `30` | Filesystem audit log retention (days) |
| `AUDIT_LOG_LOKI_URL` | no | - | Loki push endpoint (required for `loki`) |
| `SENTRY_DSN` | no | - | Backend Sentry DSN |
| `SENTRY_ENVIRONMENT` | no | `development` | Backend Sentry environment |
| `SENTRY_RELEASE` | no | `[email protected]` | Backend release identifier |
| `SENTRY_TRACES_SAMPLE_RATE` | no | `0.1` | Backend tracing sample rate |
| `SENTRY_ENABLE_LOGS` | no | `true` | Enable backend Sentry logs |
| `SENTRY_ENABLE_METRICS` | no | `true` | Enable backend Sentry metrics |
| `FRONTEND_SENTRY_DSN` | no | - | Frontend Sentry DSN (runtime via API) |
| `FRONTEND_SENTRY_ENVIRONMENT` | no | `NODE_ENV` | Frontend Sentry environment (runtime) |
| `FRONTEND_SENTRY_RELEASE` | no | - | Frontend release identifier (runtime) |
| `FRONTEND_SENTRY_TRACES_SAMPLE_RATE` | no | `0.1` | Frontend tracing sample rate (runtime) |
| `FRONTEND_SENTRY_ENABLE_LOGS` | no | `true` | Enable frontend Sentry logs (runtime) |
| `FRONTEND_SENTRY_ENABLE_METRICS` | no | `true` | Enable frontend Sentry metrics (runtime) |
| `FRONTEND_SENTRY_REPLAYS_SESSION_SAMPLE_RATE` | no | `0.1` | Frontend replay session sample (runtime) |
| `FRONTEND_SENTRY_REPLAYS_ON_ERROR_SAMPLE_RATE` | no | `1.0` | Frontend replay-on-error sample (runtime) |
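For the local Docker Compose setup above, a minimal `.env` might look like this; it sets only the required variables and lets everything else fall back to the documented defaults:

```shell
# Minimal example .env for the local MinIO setup.
# Only the required variables are set; all others use documented defaults.
REDIS_URL=redis://redis:6379
S3_ENDPOINT=http://minio:9000
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=true
```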
Only backend env changes are needed. The UI always asks only for access key + secret key.
AWS S3:

```env
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=false
```

Cloudflare R2:

```env
S3_ENDPOINT=https://<ACCOUNT_ID>.r2.cloudflarestorage.com
S3_REGION=auto
S3_FORCE_PATH_STYLE=true
```

Backblaze B2:

```env
S3_ENDPOINT=https://s3.<REGION>.backblazeb2.com
S3_REGION=<REGION>
S3_FORCE_PATH_STYLE=true
```

DigitalOcean Spaces:

```env
S3_ENDPOINT=https://<REGION>.digitaloceanspaces.com
S3_REGION=<REGION>
S3_FORCE_PATH_STYLE=false
```

For local development:

```shell
pnpm install
pnpm dev
```

- Vite dev server runs on `5173`
- API server runs on `3000`
- Vite proxies `/api` to the backend
- MinIO test credentials are `minioadmin` for access key and `minioadmin` for secret key.
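The dev-server proxy described above can be sketched as a plain config object; the repository's actual `vite.config.ts` may differ, so treat this as illustrative only:

```typescript
// Minimal sketch of the Vite dev-server proxy setup described above.
// Illustrative only; the repository's actual vite.config.ts may differ.
const viteServerConfig = {
  server: {
    port: 5173, // Vite dev server
    proxy: {
      // Forward /api calls to the Fastify backend on port 3000 so the
      // browser sees a single origin during development.
      "/api": {
        target: "http://localhost:3000",
        changeOrigin: true,
      },
    },
  },
};
```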
- `ENABLE_S3_URI_COPY=true` enables a Copy S3 URI button in the object detail sidebar.
- Copies `s3://<bucket>/<key>` to clipboard for files and folders.
- Uses Clipboard API with a fallback for older browsers.
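The copied string is a plain `s3://` path; a helper along these lines could build it (the function name is hypothetical, not necessarily the repository's implementation):

```typescript
// Hypothetical helper (not necessarily the repository's implementation):
// builds the s3://<bucket>/<key> string that the Copy S3 URI button copies.
function buildS3Uri(bucket: string, key: string): string {
  // Folder prefixes already end with "/", so no extra separator handling
  // is needed; an empty key yields the bucket root.
  return key ? `s3://${bucket}/${key}` : `s3://${bucket}`;
}

// buildS3Uri("photos", "2024/cat.jpg") -> "s3://photos/2024/cat.jpg"
```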
```shell
pnpm build
pnpm start
```

- Backend Sentry initializes via Node ESM preload (`--import ./dist/server/sentry.server.js`) before Fastify boot.
- Frontend Sentry runtime settings are fetched from `/api/runtime-config` at app startup. Configure values via `FRONTEND_SENTRY_*` environment variables; the server does not rely on build-time values.
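Mapping the runtime config into Sentry init options might look roughly like this; the response shape and field names here are assumptions, not the repository's actual contract:

```typescript
// Hypothetical sketch: map an /api/runtime-config response into Sentry
// browser-init options. Field names are assumed, not the real contract.
interface RuntimeConfig {
  dsn?: string;
  environment?: string;
  tracesSampleRate?: number;
}

function toSentryOptions(config: RuntimeConfig) {
  return {
    dsn: config.dsn,
    environment: config.environment ?? "development",
    tracesSampleRate: config.tracesSampleRate ?? 0.1,
    // Sentry SDKs stay disabled when dsn is undefined, so leaving
    // FRONTEND_SENTRY_DSN unset turns frontend reporting off.
  };
}
```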
- Object listing is cursor-paginated server-side via S3 `ListObjectsV2` (`maxKeys` + continuation token).
- Server-side folder/page listing cache is stored in Redis and keyed by session + bucket + prefix + continuation token + page size.
- Cached list responses use TTL-based expiration (default `300`s) and can be toggled with `S3_LIST_CACHE_ENABLED`.
- Cache invalidation runs after upload/delete operations with default `targeted` mode (prefix + parent prefixes), or `bucket` mode via `S3_LIST_CACHE_INVALIDATION_MODE`.
- Optional diagnostics header `X-Atrium-S3-List-Cache` reports `HIT`, `MISS`, or `BYPASS` when `S3_LIST_CACHE_INCLUDE_HEADERS=true`.
- Optional background bucket-size calculation can be enabled with OpenFeature flag `enable-background-bucket-size-calculation`.
- Bucket-size API routes (feature-gated): `GET /api/s3/buckets/:bucketName/size` and `POST /api/s3/buckets/:bucketName/size/calculate`.
- Frontend requests objects in pages of `200` and merges pages in memory.
- The object table supports:
  - Manual pagination with Load more
  - Optional Auto-load on scroll (IntersectionObserver)
- For stress testing, this repository has been validated with a generated dataset of `5000` objects in MinIO.
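The cache key described above (session + bucket + prefix + continuation token + page size) can be sketched as follows; the key format and helper name are illustrative, not the repository's exact code:

```typescript
// Illustrative sketch of the Redis list-cache key described above.
// The real key format in the repository may differ.
function listCacheKey(
  sessionId: string,
  bucket: string,
  prefix: string,
  continuationToken: string | null,
  pageSize: number,
): string {
  // encodeURIComponent keeps "/" and other prefix characters from
  // colliding with the ":" separators used between key parts.
  const parts = [sessionId, bucket, prefix, continuationToken ?? "", String(pageSize)];
  return "s3list:" + parts.map(encodeURIComponent).join(":");
}
```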
- Bucket-size calculation uses paged `ListObjectsV2` calls (up to `1000` objects per request).
- AWS S3 pricing is roughly `$0.005` per `1,000` LIST requests.
- A bucket with `1,000,000` objects needs around `1,000` list calls (~`$0.005`) for one full calculation.
- Running this hourly across many large buckets can add noticeable cost; tune interval and limits accordingly.
- `http.requests.total` (count)
- `http.server.duration` (distribution, milliseconds)
- `http.requests.errors` (count)
- Latency per operation (distribution, milliseconds):
  - `s3.list_buckets.latency`
  - `s3.list_objects_v2.latency`
  - `s3.put_object.latency`
  - `s3.get_object.latency`
  - `s3.delete_object.latency`
- Transfer activity gauges:
  - `s3.upload.files_in_flight`
  - `s3.download.files_in_flight`
- `auth.success`
- `auth.failure`

Notes:
- Auth gauges represent process-local totals and reset on server restart.
- Frontend emits `frontend.app.boot` at initialization when frontend Sentry is enabled.
- Credentials are never persisted in browser storage.
- Session cookie is `httpOnly`; `secure` is enabled automatically in production mode.
- Credentials are stored server-side in Redis under random high-entropy tokens with TTL.
- Sessions are isolated; each token maps to one credential pair.
- Endpoint and region are controlled by backend environment variables only.
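The token scheme above can be sketched like this; it is an illustration of the described design, not necessarily the repository's exact implementation:

```typescript
import { randomBytes } from "node:crypto";

// Illustrative sketch of the session-token scheme described above (not
// necessarily the repository's exact code): a random high-entropy token
// is the Redis key; the credential pair is the value, stored with a TTL.
function newSessionToken(): string {
  // 32 random bytes -> 256 bits of entropy, hex-encoded to 64 characters.
  return randomBytes(32).toString("hex");
}

// Storage would then look roughly like:
//   SET session:<token> '{"accessKeyId":"...","secretAccessKey":"..."}' EX 86400
```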
This repository is designed for self-hosting and extension. Future versions can add sharing links, batch operations, richer previews, and ACL tooling.
Copyright 2026 Reinaldy Rafli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
See LICENSE.