remote_fs is a remote filesystem based on a client-server architecture, designed to be mounted anywhere on your operating system while providing transparent filesystem functionality over the network.
The project is split into two main components:
- **Client (Rust)**
  - Mounts the remote filesystem as a local drive (Windows) or mountpoint (Linux).
  - Communicates with the server via HTTP/REST APIs.
  - Handles local caching, permissions, and system call translation.
  - Supports FUSE on Linux (stable) and WinFsp on Windows (best-effort).
  - Runs as a daemon on Linux for seamless background operation.
  - Debug mode: runs in the foreground when compiled in debug mode for easier development.
- **Server (Node.js/TypeScript)**
  - Exposes REST APIs for all file and directory operations (read, write, create, delete, permissions, etc.).
  - Manages metadata persistence and disk space.
  - Can be run in a Docker container for easy deployment.
- Transparent mounting of a remote filesystem.
- Background daemon handles requests seamlessly (Linux).
- Support for permissions, ownership, and metadata.
- Full file and directory operations: read, write, create, delete, rename, mkdir, rmdir.
- Disk space management and quota enforcement.
- Cross-platform compatibility (Linux: stable, Windows: best-effort).
- RESTful API for all server-side operations.
- Docker support for the server.
- HTTPS support with SSL/TLS encryption.
```
remote_fs/
├── client/          # Rust client code (FUSE/WinFsp)
│   ├── src/
│   └── ...
├── server/          # Node.js/TypeScript server code
│   ├── src/
│   ├── docker/
│   └── ...
└── README.md
```
- The server exposes REST APIs for file and directory management.
- The client mounts the remote filesystem and translates OS-level calls into HTTP requests to the server (see the sketch below).
- All operations (read, write, permissions, etc.) are handled transparently.
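As a rough illustration of that translation, the sketch below shows how an OS-level read could map onto the server's `GET /api/files/:path(*)` endpoint listed in the API section. It is a minimal TypeScript sketch, not the actual Rust client code; the base URL and the assumption that the endpoint returns the raw file content in the response body are illustrative only.

```typescript
// Minimal sketch: a filesystem "read" on the client side becomes an HTTP
// request against the documented REST API.
// Assumptions: server reachable at baseUrl, file content returned in the body,
// path escaping and the X-signature authentication header omitted for brevity.
const baseUrl = "https://localhost"; // assumed server URL

async function readFile(path: string): Promise<Buffer> {
  const res = await fetch(`${baseUrl}/api/files/${path}`);
  if (!res.ok) {
    throw new Error(`read failed: ${res.status}`);
  }
  return Buffer.from(await res.arrayBuffer());
}
```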
The server supports multiple storage backends that can be configured via environment variables:
```
# Set storage strategy (default: localfs)
STORAGE_STRATEGY=localfs   # Use local filesystem storage
STORAGE_STRATEGY=gcs       # Use Google Cloud Storage
STORAGE_STRATEGY=s3        # Use Amazon S3 storage
```
**Important note on cloud storage:** GCS and S3 support is provided as a proof of concept only. These cloud storage backends have significant limitations:
- No native random write support: object storage services like GCS and S3 don't support random/positional writes within files.
- Performance impact: each write operation requires reading the entire file, modifying it, and re-uploading the complete file.
- Cost implications: excessive read/write operations can lead to high cloud storage costs.
- Not production ready: these implementations are not recommended for production deployments.

**Recommendation:** Use `STORAGE_STRATEGY=localfs` (the default) for production deployments. Cloud storage support is intended for experimentation and development purposes only.
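To make the performance impact concrete, here is a minimal TypeScript sketch of the read-modify-write cycle that a positional write forces on an object store. It uses the official `@google-cloud/storage` client, but the `writeAt` helper and its arguments are hypothetical illustrations, not part of the actual server code.

```typescript
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

// Hypothetical helper: emulate a positional write on object storage.
// There is no partial-write API, so the whole object is downloaded,
// patched in memory, and re-uploaded on every single write.
async function writeAt(
  bucket: string,
  objectPath: string,
  offset: number,
  data: Buffer
): Promise<void> {
  const file = storage.bucket(bucket).file(objectPath);
  const [current] = await file.download();              // read the entire object
  const size = Math.max(current.length, offset + data.length);
  const patched = Buffer.alloc(size);
  current.copy(patched);                                // keep existing content
  data.copy(patched, offset);                           // apply the write at the offset
  await file.save(patched);                             // re-upload the complete object
}
```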
Uses the local filesystem for file storage:
```
# No additional configuration needed for local storage
STORAGE_STRATEGY=localfs
```

Configure GCS as the storage backend:
```
# Required: Set storage strategy to GCS
STORAGE_STRATEGY=gcs

# Required: GCS bucket name
GCS_BUCKET_NAME=your-bucket-name

# Required: GCS project ID
GCS_PROJECT_ID=your-project-id

# Authentication Options (choose one):

# Option 1: API Key authentication (recommended for development)
GCS_API_KEY=AIzaSyD-9tSrke72PouQMnMX-a7UUUWDT-r6eBE

# Option 2: Service Account file (recommended for production)
GCS_KEY_FILE=/path/to/service-account-key.json

# Option 3: Application Default Credentials (ADC)
# No additional config needed - uses default GCP credentials
```

Configure Amazon S3 as the storage backend:
```
# Required: Set storage strategy to S3
STORAGE_STRATEGY=s3

# Required: AWS region
AWS_REGION=us-east-1

# Required: S3 bucket name
AWS_S3_BUCKET_NAME=your-s3-bucket-name

# Required: AWS credentials
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
```
- **Access Keys** (most common):
  - Go to the AWS IAM Console
  - Navigate to "Users" > select or create a user
  - Go to "Security credentials" > "Create access key"
  - Ensure the user has S3 permissions (e.g., the `AmazonS3FullAccess` policy)
  - Set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- **Alternative methods:**
  - Instance Profile (when running on EC2)
  - AWS CLI configured credentials (`aws configure`)
  - Environment variables (`AWS_PROFILE`, etc.)
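For reference, here is a minimal TypeScript sketch of how an S3 client could be built from the variables above using the AWS SDK v3. It is illustrative, not necessarily how the server wires it up; when the explicit keys are missing, the SDK's default credential chain covers the alternative methods listed above.

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// Sketch: build the S3 client from the environment variables listed above.
// If explicit keys are not set, the AWS SDK falls back to its default
// credential chain (AWS_PROFILE, ~/.aws/credentials, EC2 instance profile, ...).
const region = process.env.AWS_REGION ?? "us-east-1";
const accessKeyId = process.env.AWS_ACCESS_KEY_ID;
const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;

const s3 = new S3Client(
  accessKeyId && secretAccessKey
    ? { region, credentials: { accessKeyId, secretAccessKey } }
    : { region }
);
```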
Each server instance creates a unique folder in cloud storage to avoid conflicts:
- Instance ID: automatically generated and persisted in `data/.instance-id`
- GCS path structure: `gs://bucket/instances/{instance-id}/your-files`
- S3 path structure: `s3://bucket/instances/{instance-id}/your-files`
- Persistence: the instance ID survives server restarts
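A minimal TypeScript sketch of this mechanism is shown below; the UUID format and helper names are assumptions for illustration, while the `data/.instance-id` location and the `instances/{instance-id}/...` prefix come from the description above.

```typescript
import { randomUUID } from "node:crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";

// Sketch: load the instance ID from data/.instance-id, creating and
// persisting a new one on first start so it survives restarts.
// The UUID format is an assumption for illustration.
function loadInstanceId(file = "data/.instance-id"): string {
  if (existsSync(file)) {
    return readFileSync(file, "utf8").trim();
  }
  const id = randomUUID();
  mkdirSync("data", { recursive: true });
  writeFileSync(file, id, "utf8");
  return id;
}

// Objects are then namespaced per instance, e.g.
//   gs://bucket/instances/{instance-id}/your-files
const objectKey = (instanceId: string, path: string) =>
  `instances/${instanceId}/${path}`;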
- **API Key** (easiest setup):
  - Go to the Google Cloud Console
  - Navigate to "APIs & Services" > "Credentials"
  - Create an "API key" and restrict it to the "Cloud Storage JSON API"
  - Set the `GCS_API_KEY` environment variable
- **Service Account** (recommended for production):
  - Create a service account in the Google Cloud Console
  - Download the JSON key file
  - Set `GCS_KEY_FILE` to the file path
  - Ensure the service account has Storage Admin permissions
- **Application Default Credentials** (server deployment):
  - Uses default GCP credentials from the environment
  - Works automatically on GCE, GKE, Cloud Run, etc.
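As a hedged illustration of the last two options, the TypeScript sketch below constructs the GCS client from the environment variables described above; the API-key path is omitted and this is not a copy of the server's actual setup code.

```typescript
import { Storage } from "@google-cloud/storage";

// Sketch: construct the GCS client from the environment variables above.
// With GCS_KEY_FILE set, the service-account key file is used; otherwise the
// client falls back to Application Default Credentials (ADC).
// (The GCS_API_KEY option is omitted here for brevity.)
const storage = new Storage({
  projectId: process.env.GCS_PROJECT_ID,
  ...(process.env.GCS_KEY_FILE ? { keyFilename: process.env.GCS_KEY_FILE } : {}),
});

const bucket = storage.bucket(process.env.GCS_BUCKET_NAME ?? "");
```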
Create a `.env` file in the server directory:

```
# Storage configuration
STORAGE_STRATEGY=gcs
GCS_BUCKET_NAME=my-remote-fs-storage
GCS_PROJECT_ID=my-project-123
GCS_API_KEY=AIzaSyD-9tSrke72PouQMnMX-a7UUUWDT-r6eBE

# Optional: Authentication secret key (if not provided, the default will be used)
SECRET_KEY=your-secret-key-here

# Optional: Instance size in GB (for local metadata database)
INSTANCE_SIZE=4
```

When using Docker with cloud storage backends, mount your environment file:
```
# Using docker-compose with an environment file
cd server/docker
echo "STORAGE_STRATEGY=gcs" >> ../.env.docker
echo "GCS_BUCKET_NAME=your-bucket" >> ../.env.docker
echo "GCS_PROJECT_ID=your-project" >> ../.env.docker
echo "GCS_API_KEY=your-api-key" >> ../.env.docker
docker compose up --build

# Or set environment variables directly
STORAGE_STRATEGY=gcs \
GCS_BUCKET_NAME=your-bucket \
GCS_PROJECT_ID=your-project \
GCS_API_KEY=your-api-key \
docker compose up --build
```
```
# Using docker-compose with an environment file
cd server/docker
echo "STORAGE_STRATEGY=s3" >> ../.env.docker
echo "AWS_S3_BUCKET_NAME=your-s3-bucket" >> ../.env.docker
echo "AWS_REGION=us-east-1" >> ../.env.docker
echo "AWS_ACCESS_KEY_ID=your-access-key" >> ../.env.docker
echo "AWS_SECRET_ACCESS_KEY=your-secret-key" >> ../.env.docker
docker compose up --build

# Or set environment variables directly
STORAGE_STRATEGY=s3 \
AWS_S3_BUCKET_NAME=your-s3-bucket \
AWS_REGION=us-east-1 \
AWS_ACCESS_KEY_ID=your-access-key \
AWS_SECRET_ACCESS_KEY=your-secret-key \
docker compose up --build
```
```
# No additional configuration needed for local storage
cd server/docker
docker compose up --build
# Uses STORAGE_STRATEGY=localfs by default
```

Note: with cloud storage (GCS/S3), file metadata is still stored locally in the database for performance, while the actual file content is stored in the respective cloud storage buckets.
- **Client:**
  - Rust (stable)
  - FUSE (Linux, stable support)
    - Linux: `user_allow_other` must be enabled in `/etc/fuse.conf` for proper operation
  - WinFsp (Windows, best-effort support)
    - Windows: the WinFsp installation directory must be added to the PATH environment variable
- **Server:**
  - Node.js >= 20
  - Docker (optional, for containerized deployment; suggested option)
```
cd server
npm install
npm run build
npm start

# or with Docker (suggested)
cd server/docker
docker compose up --build
```
```
cd client

# Basic usage with defaults (daemon mode)
cargo run

# Custom mount point
cargo run -- --mount-point /my/custom/mount

# Custom server configuration
cargo run -- --server-url https://myserver.com --server-port 8443

# Custom authentication
cargo run -- --secret-key "your-secret-key-here"

# Cache configuration options
cargo run -- --cache-strategy both --cache-capacity 2000 --cache-ttl 120

# Full custom configuration
cargo run -- --mount-point /mnt/remote --server-url https://myserver.com --server-port 8443 --secret-key "your-key" --cache-strategy lru --cache-capacity 500 --cache-ttl 30

# Run in foreground mode (useful for debugging)
cargo run -- --mode foreground

# Release mode (daemon in background on Linux by default)
cargo build --release
./target/release/remote_fs --mount-point /mnt/remote_fs

# Release mode in foreground
./target/release/remote_fs --mount-point /mnt/remote_fs --mode foreground

# Help and available options
cargo run -- --help
```

The client includes a configurable cache system to improve performance:
- **Cache strategy** (`--cache-strategy`):
  - `both` (default): use both TTL and LRU caching strategies
  - `ttl`: time-based cache expiration only
  - `lru`: Least Recently Used cache eviction only
  - `disabled`: disable caching completely
- **Cache capacity** (`--cache-capacity`):
  - Default: `1000` entries
  - Maximum number of items to store in the cache
  - Used for LRU eviction when capacity is reached
- **Cache TTL** (`--cache-ttl`):
  - Default: `60` seconds
  - Time-to-live for cached entries
  - Used when the TTL strategy is enabled
```
# Examples of cache configurations

# High-performance setup (both TTL and LRU)
cargo run -- --cache-strategy both --cache-capacity 5000 --cache-ttl 300

# Memory-efficient setup (LRU only)
cargo run -- --cache-strategy lru --cache-capacity 500

# Time-based caching only
cargo run -- --cache-strategy ttl --cache-ttl 30

# Development/debugging (no cache)
cargo run -- --cache-strategy disabled --mode foreground

# Production balanced setup
cargo run -- --cache-strategy both --cache-capacity 2000 --cache-ttl 120
```

Performance tips:
- Use the `both` strategy for optimal performance, with TTL expiration and memory management
- Use the `lru` strategy when you want caching but don't need time-based expiration
- Use the `ttl` strategy for guaranteed fresh data after a specific time
- Use `disabled` during development to always get fresh data from the server
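The caching described above is implemented in the Rust client; the TypeScript sketch below only illustrates the idea behind the `both` strategy, combining a TTL with LRU eviction, using the documented defaults of 1000 entries and 60 seconds. It is a conceptual sketch, not the client's actual cache.

```typescript
// Conceptual sketch of the "both" strategy: entries expire after a TTL
// and the least recently used entry is evicted when capacity is reached.
class TtlLruCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private capacity = 1000, private ttlMs = 60_000) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {        // TTL expired
      this.entries.delete(key);
      return undefined;
    }
    this.entries.delete(key);                  // refresh LRU order
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.entries.size >= this.capacity && !this.entries.has(key)) {
      const oldest = this.entries.keys().next().value; // least recently used
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```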
- **Release mode** (`cargo build --release`):
  - Runs as a daemon in the background on Linux by default
  - Best for production and normal usage
  - Use the `--mode foreground` flag to run in the foreground if needed
- **Debug mode** (`cargo run` or `cargo build`):
  - Runs as a daemon in the background by default
  - Use `--mode foreground` for development and debugging
```
# Development with local Docker server (daemon mode)
cargo run

# Development in foreground mode for debugging
cargo run -- --mode foreground

# Production with custom server (daemon mode)
cargo build --release
./target/release/remote_fs -u https://myserver.com -p 443 -m /mnt/remote

# Production in foreground mode (useful for debugging or containers)
./target/release/remote_fs -u https://myserver.com -p 443 -m /mnt/remote --mode foreground

# Windows (will mount as X: drive)
cargo run -- --server-url https://myserver.com
```

The server supports HTTPS with SSL/TLS encryption for secure communication:
- **Secret Key Authentication:** client and server must share the same secret key
- **HMAC Integrity:** every request is signed using `HMAC-SHA256(secret_key + timestamp)`
- **Timestamp Protection:** prevents replay attacks (requests expire after 15 seconds)
- **End-to-End Security:** combines HTTPS encryption with HMAC message authentication
- Client generates a timestamp and creates an HMAC signature: `HMAC-SHA256(secret_key + timestamp)`
- Client sends the request with an `X-signature` header containing `{signature, timestamp}`
- Server receives the request and validates it:
  - Timestamp is within the 15-second window
  - Recreates the HMAC using the same secret key
  - Compares signatures for authentication
- Access is granted only if the signatures match

**Important:** The secret key must be identical on both client and server for authentication to work. Use the `--secret-key` parameter on the client or set it via environment variables in the server configuration.
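For illustration, here is a short TypeScript sketch of both sides of this scheme using Node's built-in `crypto` module. The exact message layout and header encoding (timestamp as the HMAC message, JSON header value) are assumptions based on the description above, not a copy of the actual implementation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET_KEY = process.env.SECRET_KEY ?? "your-secret-key-here";
const MAX_SKEW_MS = 15_000; // requests expire after 15 seconds

// Client side: sign the current timestamp with the shared secret.
function sign(): { signature: string; timestamp: number } {
  const timestamp = Date.now();
  const signature = createHmac("sha256", SECRET_KEY)
    .update(String(timestamp))
    .digest("hex");
  return { signature, timestamp };
}

// Server side: recreate the HMAC and compare within the time window.
function verify(header: { signature: string; timestamp: number }): boolean {
  if (Math.abs(Date.now() - header.timestamp) > MAX_SKEW_MS) return false;
  const expected = createHmac("sha256", SECRET_KEY)
    .update(String(header.timestamp))
    .digest("hex");
  if (expected.length !== header.signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(header.signature));
}

// The client would send the result of sign() in the X-signature header,
// e.g. headers: { "X-signature": JSON.stringify(sign()) }
```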
- **HTTPS enabled:** when using Docker Compose, Nginx handles SSL/TLS termination
- **Production ready:** supports both self-signed and valid certificates
- **Local development:**
  - Self-signed certificates are provided in the repository
  - Only works locally (browsers will show security warnings)
  - Located in `server/docker/nginx/certs/` (cert.crt, key.pem)
- **Network/production deployment:**
  - Replace the provided self-signed certificates with your valid SSL certificates in `server/docker/nginx/certs/`:
    - `cert.crt` - your SSL certificate
    - `key.pem` - your private key
  - Docker will automatically use your certificates instead of the self-signed ones
  - Works across networks with trusted certificates
- **Custom configuration:**
  - Modify `server/docker/docker-compose.yml` to suit your deployment needs
  - Change port mappings, environment variables, or volume mounts
  - Add custom Nginx configuration in `server/docker/nginx/nginx.conf`
  - Customize container settings for production environments

Request flow through the Docker setup:
- Client → `https://localhost/api/health`
- Nginx (port 443) receives the HTTPS request
- Nginx decrypts TLS and proxies to `http://api_server:3000/api/health`
- The Node.js app (container `api_server`) responds on port 3000
- Nginx receives the response and encrypts it back to the client
- **HTTP only:** running the server manually (without Docker) serves over HTTP on port 3000
- **No HTTPS:** SSL/TLS encryption is not available in manual mode
- **Local development only:** recommended only for local testing
```
cd server
npm install
npm run build
npm start
# Server runs on http://localhost:3000 (no HTTPS)
```

**Security recommendation:** For any network deployment or production use, always use the Docker setup with HTTPS enabled for encrypted communication.
By default the server listens on port 3000; this can be changed in the Docker configuration or when running it manually.
The server exposes a RESTful API to perform all filesystem operations.
Below is a summary of the main endpoints:
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/path/:ino` | Get file path from inode |
| GET | `/api/lookup/:path(*)` | Get file or directory metadata |
| GET | `/api/list/:path(*)` | List directory contents |
| POST | `/api/files` | Create a new file |
| PUT | `/api/files/:path(*)` | Write to a file |
| GET | `/api/files/:path(*)` | Read file content |
| DELETE | `/api/files/:path(*)` | Delete a file |
| POST | `/api/mkdir/:path(*)` | Create a new directory |
| DELETE | `/api/rmdir/:path(*)` | Remove a directory |
| PUT | `/api/rename/files` | Rename a file or directory |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/attributes/:ino` | Get attributes by inode |
| PUT | `/api/attributes/:ino` | Set attributes by inode |
| GET | `/api/volume/statfs` | Get volume statistics |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/health` | Health check |
Note: all endpoints expect and return JSON unless otherwise specified.
For details on request/response formats, see the dedicated 'RemoteFS API Documentation'.
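As a quick illustration, the TypeScript sketch below calls two of the endpoints listed above. The base URL matches the Docker/Nginx setup, the `X-signature` layout follows the authentication section, and the response shapes and the root-path form of `/api/list/` are assumptions; see the API documentation for the exact formats. With the self-signed development certificate you may need to run Node with `NODE_TLS_REJECT_UNAUTHORIZED=0`.

```typescript
import { createHmac } from "node:crypto";

const base = "https://localhost";
const secret = process.env.SECRET_KEY ?? "your-secret-key-here";

// Build the X-signature header described in the authentication section.
function signatureHeader(): string {
  const timestamp = Date.now();
  const signature = createHmac("sha256", secret).update(String(timestamp)).digest("hex");
  return JSON.stringify({ signature, timestamp });
}

async function main(): Promise<void> {
  // Health check
  const health = await fetch(`${base}/api/health`, {
    headers: { "X-signature": signatureHeader() },
  });
  console.log("health:", health.status);

  // List the contents of the root directory
  const list = await fetch(`${base}/api/list/`, {
    headers: { "X-signature": signatureHeader() },
  });
  console.log("root listing:", await list.json());
}

main().catch(console.error);
```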
- The project is designed to be easily extensible and adaptable to different storage backends (see the sketch below).
- Permission and metadata management is inspired by POSIX filesystems.
- All communication is secured via HTTPS with proper SSL/TLS encryption.
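As an illustration of that extensibility, here is a minimal TypeScript sketch of how a storage backend could be abstracted and selected via `STORAGE_STRATEGY`; the interface and class names are hypothetical, not the server's actual types.

```typescript
// Hypothetical storage abstraction: each backend implements the same
// operations, and STORAGE_STRATEGY selects which one the server uses.
interface StorageStrategy {
  read(path: string): Promise<Buffer>;
  write(path: string, data: Buffer): Promise<void>;
  remove(path: string): Promise<void>;
}

// Minimal stub backends; real implementations would talk to the local
// filesystem, Google Cloud Storage, or Amazon S3 respectively.
class LocalFsStorage implements StorageStrategy {
  async read(path: string): Promise<Buffer> { return Buffer.alloc(0); }
  async write(path: string, data: Buffer): Promise<void> {}
  async remove(path: string): Promise<void> {}
}
class GcsStorage extends LocalFsStorage {}
class S3Storage extends LocalFsStorage {}

// Pick the backend from the STORAGE_STRATEGY environment variable.
function createStorage(): StorageStrategy {
  switch (process.env.STORAGE_STRATEGY ?? "localfs") {
    case "gcs": return new GcsStorage();
    case "s3":  return new S3Storage();
    default:    return new LocalFsStorage();
  }
}
```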
Linux support is stable and recommended for production use.
Windows support is best-effort and may not work as intended.