FedeCarollo/remote_fs

🌐 remote_fs

remote_fs is a remote filesystem based on a client-server architecture, designed to be mounted anywhere on your operating system while providing transparent filesystem functionality over the network.

πŸ—οΈ Architecture

The project is split into two main components:

  • πŸ¦€ Client (Rust)

    • πŸ’Ύ Mounts the remote filesystem as a local drive (Windows) or mountpoint (Linux).
    • πŸ“‘ Communicates with the server via HTTP/REST APIs.
    • ⚑ Handles local caching, permissions, and system call translation.
    • 🐧 Supports FUSE on Linux (stable) and πŸͺŸ WinFsp on Windows (best-effort).
    • πŸ‘» Runs as a daemon on Linux for seamless background operation.
    • πŸ”§ Debug mode: use --mode foreground to run in the foreground for easier development.
  • 🟒 Server (Node.js/TypeScript)

    • πŸ”Œ Exposes REST APIs for all file and directory operations (read, write, create, delete, permissions, etc.).
    • πŸ“Š Manages metadata persistence and disk space.
    • 🐳 Can be run in a Docker container for easy deployment.

✨ Features

  • πŸ”„ Transparent mounting of a remote filesystem.
    • πŸ”§ A background daemon handles requests seamlessly (Linux)
  • πŸ” Support for permissions, ownership, and metadata.
  • πŸ“‚ Full file and directory operations: read, write, create, delete, rename, mkdir, rmdir.
  • πŸ’½ Disk space management and quota enforcement.
  • 🌍 Cross-platform compatibility (Linux: βœ… stable, Windows: ⚠️ best-effort).
  • 🌐 RESTful API for all server-side operations.
  • 🐳 Docker support for the server.
  • πŸ”’ HTTPS support with SSL/TLS encryption.

πŸ“ Project Structure

remote_fs/
├── client/      # Rust client code (FUSE/WinFsp)
│   ├── src/
│   └── ...
├── server/      # Node.js/TypeScript server code
│   ├── src/
│   ├── docker/
│   └── ...
└── README.md

βš™οΈ How it works

  1. 🟒 The server exposes REST APIs for file and directory management.
  2. πŸ¦€ The client mounts the remote filesystem and translates OS-level calls into HTTP requests to the server.
  3. πŸ”„ All operations (read, write, permissions, etc.) are handled transparently.
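As a concrete illustration of step 2, here is a sketch (in TypeScript, the server's language) of how a client-side read could map onto the GET /api/files/:path(*) endpoint listed in the API section below. The base URL and helper names are assumptions for illustration, not the actual Rust client code.

```typescript
// Illustrative sketch: mapping an OS-level read to the server's REST API.
// The endpoint shape (/api/files/:path) comes from the API table in this
// README; baseUrl and the helper names are assumptions.
function readRequestUrl(baseUrl: string, filePath: string): string {
  const clean = filePath.replace(/^\/+/, ""); // :path(*) takes no leading slash
  return `${baseUrl}/api/files/${clean}`;
}

// A read on the mounted filesystem would then become roughly:
async function readRemoteFile(baseUrl: string, filePath: string): Promise<string> {
  const res = await fetch(readRequestUrl(baseUrl, filePath));
  if (!res.ok) throw new Error(`remote read failed: HTTP ${res.status}`);
  return res.text();
}
```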

βš™οΈ Configuration

πŸ—‚οΈ Storage Strategy

The server supports multiple storage backends that can be configured via environment variables:

# Set storage strategy (default: localfs)
STORAGE_STRATEGY=localfs    # Use local filesystem storage
STORAGE_STRATEGY=gcs        # Use Google Cloud Storage
STORAGE_STRATEGY=s3         # Use Amazon S3 storage
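On the server side, resolving a backend from this variable might look like the following sketch. The strategy names are from this section; the function and types are illustrative, not the actual server code.

```typescript
// Sketch: resolving the storage backend from STORAGE_STRATEGY.
// Strategy names (localfs, gcs, s3) are from this README; the function
// itself is an illustration.
type StorageStrategy = "localfs" | "gcs" | "s3";

function resolveStrategy(env: Record<string, string | undefined>): StorageStrategy {
  const raw = (env.STORAGE_STRATEGY ?? "localfs").toLowerCase();
  if (raw === "localfs" || raw === "gcs" || raw === "s3") return raw;
  throw new Error(`unknown STORAGE_STRATEGY: ${raw}`);
}
```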

⚠️ Important Note on Cloud Storage:

GCS and S3 support is provided as a proof of concept only. These cloud storage backends have significant limitations:

  • 🚫 No Native Random Write Support: Object storage services like GCS and S3 don't support random/positional writes within files
  • πŸ“ˆ Performance Impact: Each write operation requires reading the entire file, modifying it, and re-uploading the complete file
  • πŸ’° Cost Implications: Excessive read/write operations can lead to high cloud storage costs
  • ⚠️ Not Production Ready: These implementations are not recommended for production deployments

πŸ“‹ Recommendation: Use STORAGE_STRATEGY=localfs (default) for production deployments. Cloud storage support is intended for experimentation and development purposes only.

🏠 Local Storage (Default)

Uses the local filesystem for file storage:

# No additional configuration needed for local storage
STORAGE_STRATEGY=localfs

☁️ Google Cloud Storage (GCS)

Configure GCS as the storage backend:

# Required: Set storage strategy to GCS
STORAGE_STRATEGY=gcs

# Required: GCS bucket name
GCS_BUCKET_NAME=your-bucket-name

# Required: GCS project ID
GCS_PROJECT_ID=your-project-id

# Authentication Options (choose one):

# Option 1: API Key authentication (recommended for development)
GCS_API_KEY=your-api-key

# Option 2: Service Account file (recommended for production)
GCS_KEY_FILE=/path/to/service-account-key.json

# Option 3: Application Default Credentials (ADC)
# No additional config needed - uses default GCP credentials

πŸͺ£ Amazon S3 Storage

Configure Amazon S3 as the storage backend:

# Required: Set storage strategy to S3
STORAGE_STRATEGY=s3

# Required: AWS region
AWS_REGION=us-east-1

# Required: S3 bucket name
AWS_S3_BUCKET_NAME=your-s3-bucket-name

# Required: AWS credentials
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key

πŸ” AWS S3 Authentication
  1. πŸ”‘ Access Keys (Most common):

    • Go to AWS IAM Console
    • Navigate to "Users" > Select/Create user
    • Go to "Security credentials" > "Create access key"
    • Ensure user has S3 permissions (e.g., AmazonS3FullAccess policy)
    • Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  2. πŸ—οΈ Alternative Methods:

    • Instance Profile (when running on EC2)
    • AWS CLI configured credentials (aws configure)
    • Environment variables (AWS_PROFILE, etc.)

πŸ—οΈ Instance Isolation (GCS/S3)

Each server instance creates a unique folder in cloud storage to avoid conflicts:

  • Instance ID: Automatically generated and persisted in data/.instance-id
  • GCS Path Structure: gs://bucket/instances/{instance-id}/your-files
  • S3 Path Structure: s3://bucket/instances/{instance-id}/your-files
  • Persistence: Instance ID survives server restarts
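Under this layout, the object key for a given file could be derived like so. The instances/{instance-id}/ prefix is from the path structures above; the function name is illustrative:

```typescript
// Sketch: per-instance object key, following the path structure above.
// The helper name is hypothetical, for illustration only.
function instanceObjectKey(instanceId: string, filePath: string): string {
  const clean = filePath.replace(/^\/+/, ""); // object keys have no leading slash
  return `instances/${instanceId}/${clean}`;
}
```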

πŸ” GCS Authentication Methods

  1. πŸ”‘ API Key (Easiest setup):

    • Go to Google Cloud Console
    • Navigate to "APIs & Services" > "Credentials"
    • Create "API key" and restrict to "Cloud Storage JSON API"
    • Set GCS_API_KEY environment variable
  2. πŸ“„ Service Account (Production recommended):

    • Create service account in Google Cloud Console
    • Download JSON key file
    • Set GCS_KEY_FILE to the file path
    • Ensure service account has Storage Admin permissions
  3. πŸ”§ Application Default Credentials (Server deployment):

    • Uses default GCP credentials from the environment
    • Works automatically on GCE, GKE, Cloud Run, etc.

πŸ—ƒοΈ Environment File Example

Create a .env file in the server directory:

# Storage configuration
STORAGE_STRATEGY=gcs
GCS_BUCKET_NAME=my-remote-fs-storage
GCS_PROJECT_ID=my-project-123
GCS_API_KEY=your-api-key

# Optional: Authentication secret key (if not provided the default will be used)
SECRET_KEY=your-secret-key-here

# Optional: Instance size in GB (for local metadata database)
INSTANCE_SIZE=4

🐳 Docker with Cloud Storage

When using Docker with cloud storage backends, mount your environment file:

πŸ—‚οΈ GCS (Google Cloud Storage)
# Using docker-compose with environment file
cd server/docker
echo "STORAGE_STRATEGY=gcs" >> ../.env.docker
echo "GCS_BUCKET_NAME=your-bucket" >> ../.env.docker
echo "GCS_PROJECT_ID=your-project" >> ../.env.docker
echo "GCS_API_KEY=your-api-key" >> ../.env.docker
docker compose up --build

# Or set the variables in the shell before starting
# (note: `docker compose up` has no -e flag; `docker compose run -e` does)
STORAGE_STRATEGY=gcs \
GCS_BUCKET_NAME=your-bucket \
GCS_PROJECT_ID=your-project \
GCS_API_KEY=your-api-key \
docker compose up --build

πŸͺ£ S3 (Amazon S3)
# Using docker-compose with environment file
cd server/docker
echo "STORAGE_STRATEGY=s3" >> ../.env.docker
echo "AWS_S3_BUCKET_NAME=your-s3-bucket" >> ../.env.docker
echo "AWS_REGION=us-east-1" >> ../.env.docker
echo "AWS_ACCESS_KEY_ID=your-access-key" >> ../.env.docker
echo "AWS_SECRET_ACCESS_KEY=your-secret-key" >> ../.env.docker
docker compose up --build

# Or set the variables in the shell before starting
# (note: `docker compose up` has no -e flag; `docker compose run -e` does)
STORAGE_STRATEGY=s3 \
AWS_S3_BUCKET_NAME=your-s3-bucket \
AWS_REGION=us-east-1 \
AWS_ACCESS_KEY_ID=your-access-key \
AWS_SECRET_ACCESS_KEY=your-secret-key \
docker compose up --build

🏠 Local Storage (Default)
# No additional configuration needed for local storage
cd server/docker
docker compose up --build
# Uses STORAGE_STRATEGY=localfs by default

πŸ’‘ Note: With cloud storage (GCS/S3), file metadata is still stored locally in the database for performance, while actual file content is stored in the respective cloud storage buckets.

πŸ“‹ Requirements

  • πŸ¦€ Client:

    • Rust (stable)
    • 🐧 FUSE (Linux, βœ… stable support)
      • ⚠️ Linux: user_allow_other must be enabled in /etc/fuse.conf for proper operation
    • πŸͺŸ WinFsp (Windows, ⚠️ best-effort support)
      • ⚠️ Windows: the WinFsp installation directory must be added to the PATH environment variable
  • 🟒 Server:

    • Node.js >= 20
    • 🐳 Docker (optional, for containerized deployment)
      • 🎯 Suggested option

πŸš€ Quick Start

🟒 Server

cd server
npm install
npm run build
npm start
# or with Docker (🎯 suggested)
cd server/docker
docker compose up --build

πŸ¦€ Client

cd client

# Basic usage with defaults (daemon mode)
cargo run

# Custom mount point
cargo run -- --mount-point /my/custom/mount

# Custom server configuration
cargo run -- --server-url https://myserver.com --server-port 8443

# Custom authentication
cargo run -- --secret-key "your-secret-key-here"

# Cache configuration options
cargo run -- --cache-strategy both --cache-capacity 2000 --cache-ttl 120

# Full custom configuration
cargo run -- --mount-point /mnt/remote --server-url https://myserver.com --server-port 8443 --secret-key "your-key" --cache-strategy lru --cache-capacity 500 --cache-ttl 30

# Run in foreground mode (useful for debugging)
cargo run -- --mode foreground

# Release mode (daemon in background on Linux by default)
cargo build --release
./target/release/remote_fs --mount-point /mnt/remote_fs

# Release mode in foreground
./target/release/remote_fs --mount-point /mnt/remote_fs --mode foreground

# Help and available options
cargo run -- --help

πŸ—„οΈ Cache Configuration

The client includes a configurable cache system to improve performance:

  • 🎯 Cache Strategy (--cache-strategy):

    • both (default): Use both TTL and LRU caching strategies
    • ttl: Time-based cache expiration only
    • lru: Least Recently Used cache eviction only
    • disabled: Disable caching completely
  • πŸ“Š Cache Capacity (--cache-capacity):

    • Default: 1000 entries
    • Maximum number of items to store in cache
    • Used for LRU eviction when capacity is reached
  • ⏰ Cache TTL (--cache-ttl):

    • Default: 60 seconds
    • Time-to-live for cached entries
    • Used when TTL strategy is enabled

# Examples of cache configurations

# High-performance setup (both TTL and LRU)
cargo run -- --cache-strategy both --cache-capacity 5000 --cache-ttl 300

# Memory-efficient setup (LRU only)
cargo run -- --cache-strategy lru --cache-capacity 500

# Time-based caching only
cargo run -- --cache-strategy ttl --cache-ttl 30

# Development/debugging (no cache)
cargo run -- --cache-strategy disabled --mode foreground

# Production balanced setup
cargo run -- --cache-strategy both --cache-capacity 2000 --cache-ttl 120

πŸ’‘ Performance Tips:

  • Use both strategy for optimal performance with TTL expiration and memory management
  • Use lru strategy when you want cache but don't need time-based expiration
  • Use ttl strategy for guaranteed fresh data after a specific time
  • Use disabled for development to always get fresh data from server
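To make the both strategy concrete, here is a toy sketch of how TTL expiry and LRU eviction interact. This is an illustration only, not the client's actual Rust implementation:

```typescript
// Toy sketch of the "both" strategy: entries expire after a TTL, and the
// least recently used entry is evicted when capacity is reached.
class TtlLruCache<V> {
  private map = new Map<string, { value: V; expiresAt: number }>();
  constructor(private capacity: number, private ttlMs: number) {}

  get(key: string, nowMs: number): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (nowMs >= entry.expiresAt) {   // TTL expiry
      this.map.delete(key);
      return undefined;
    }
    this.map.delete(key);             // re-insert to refresh LRU order
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V, nowMs: number): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Map iterates in insertion order, so the first key is the LRU one
      const lru = this.map.keys().next().value as string;
      this.map.delete(lru);
    }
    this.map.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```

With --cache-strategy ttl only the expiry check would apply; with --cache-strategy lru only the capacity eviction would.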

πŸ”§ Build Modes

  • πŸš€ Release mode (cargo build --release):
    • Runs as daemon in background on Linux by default
    • Best for production and normal usage
    • Use --mode foreground flag to run in foreground if needed
  • πŸ”§ Debug mode (cargo run or cargo build):
    • Runs as daemon in background by default
    • Use --mode foreground for development and debugging

πŸ’‘ Examples

# Development with local Docker server (daemon mode)
cargo run

# Development in foreground mode for debugging
cargo run -- --mode foreground

# Production with custom server (daemon mode)
cargo build --release
./target/release/remote_fs -u https://myserver.com -p 443 -m /mnt/remote

# Production in foreground mode (useful for debugging or containers)
./target/release/remote_fs -u https://myserver.com -p 443 -m /mnt/remote --mode foreground

# Windows (will mount as X: drive)
cargo run -- --server-url https://myserver.com

πŸ”’ HTTPS & Security

The server supports HTTPS with SSL/TLS encryption for secure communication:

πŸ” Authentication & Message Integrity

  • πŸ”‘ Secret Key Authentication: Client and server must share the same secret key
  • πŸ›‘οΈ HMAC Integrity: Every request is signed using HMAC-SHA256(secret_key + timestamp)
  • ⏰ Timestamp Protection: Prevents replay attacks (requests expire after 15 seconds)
  • πŸ”’ End-to-End Security: Combines HTTPS encryption with HMAC message authentication

πŸ”§ How Authentication Works:

  1. πŸ¦€ Client generates timestamp and creates HMAC signature: HMAC-SHA256(secret_key + timestamp)
  2. πŸ“€ Client sends request with X-signature header containing {signature, timestamp}
  3. 🟒 Server receives the request and validates it:
    • The timestamp is within the 15-second window
    • It recreates the HMAC using the same secret key
    • It compares the signatures
  4. βœ… Access is granted only if the signatures match
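A sketch of this scheme using Node's crypto module. The 15-second window is from this section; the exact byte layout of HMAC-SHA256(secret_key + timestamp) is an assumption, so check the client/server sources for the real encoding:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch: sign and verify as described above. The message encoding
// (secret key concatenated with the timestamp) is an assumption.
function sign(secretKey: string, timestamp: number): string {
  return createHmac("sha256", secretKey)
    .update(secretKey + String(timestamp))
    .digest("hex");
}

function verify(
  secretKey: string,
  timestamp: number,
  signature: string,
  nowSecs: number,
  windowSecs = 15,
): boolean {
  if (Math.abs(nowSecs - timestamp) > windowSecs) return false; // replay protection
  const expected = Buffer.from(sign(secretKey, timestamp), "hex");
  const received = Buffer.from(signature, "hex");
  // constant-time comparison to avoid timing side channels
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```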

⚠️ Important: The secret key must be identical on both client and server for authentication to work. Use the --secret-key parameter on the client or set it via environment variables in server configuration.

🐳 Docker Deployment (HTTPS via Nginx)

  • πŸ”’ HTTPS enabled: When using Docker Compose, Nginx handles SSL/TLS termination
  • 🌐 Production ready: Supports both self-signed and valid certificates

πŸ›‘οΈ Certificate Management

  • 🏠 Local Development:

    • Self-signed certificates are provided in the repository
    • ⚠️ Only works locally (browser will show security warnings)
    • πŸ“ Located in server/docker/nginx/certs/ (cert.crt, key.pem)
  • 🌐 Network/Production Deployment:

    • Replace the provided self-signed certificates with your valid SSL certificates in: server/docker/nginx/certs/
      • cert.crt - Your SSL certificate
      • key.pem - Your private key
    • πŸ”„ Docker will automatically use your certificates instead of the self-signed ones
    • βœ… Works across networks with trusted certificates
  • πŸ”§ Custom Configuration:

    • Modify server/docker/docker-compose.yml to suit your deployment needs
    • πŸ”Œ Change port mappings, environment variables, or volume mounts
    • 🌐 Add custom Nginx configuration in server/docker/nginx/nginx.conf
    • πŸ“¦ Customize container settings for production environments

πŸ”§ HTTPS Traffic Flow:

  1. πŸ‘€ Client β†’ https://localhost/api/health
  2. 🌐 Nginx (port 443) receives HTTPS request
  3. πŸ”“ Nginx decrypts TLS and proxies to: http://api_server:3000/api/health
  4. 🟒 Node.js App (container api_server) responds on port 3000
  5. πŸ”’ Nginx receives response and encrypts it back to client

πŸ–₯️ Manual Server Deployment (HTTP Only)

  • ⚠️ HTTP only: Running the server manually (without Docker) serves over HTTP on port 3000
  • 🚫 No HTTPS: SSL/TLS encryption is not available in manual mode
  • 🏠 Local development only: Recommended only for local testing

cd server
npm install
npm run build
npm start
# Server runs on http://localhost:3000 (no HTTPS)

πŸ” Security Recommendation: For any network deployment or production use, always use the Docker setup with HTTPS enabled for encrypted communication.

🌐 API Endpoints

By default, the server listens on port 3000; this can be changed in the Docker configuration or when running the server manually.

The server exposes a RESTful API to perform all filesystem operations.
Below is a summary of the main endpoints:

πŸ“‚ File and Directory Operations

Method Endpoint Description
GET /api/path/:ino πŸ”„ Get file path from inode
GET /api/lookup/:path(*) πŸ” Get file or directory metadata
GET /api/list/:path(*) πŸ“‹ List directory contents
POST /api/files βž• Create a new file
PUT /api/files/:path(*) ✏️ Write to a file
GET /api/files/:path(*) πŸ“– Read file content
DELETE /api/files/:path(*) πŸ—‘οΈ Delete a file
POST /api/mkdir/:path(*) πŸ“ Create a new directory
DELETE /api/rmdir/:path(*) πŸ—‚οΈ Remove a directory
PUT /api/rename/files πŸ”„ Rename a file or directory

βš™οΈ Attributes and Volume Info

Method Endpoint Description
GET /api/attributes/:ino πŸ“Š Get attributes by inode
PUT /api/attributes/:ino πŸ”§ Set attributes by inode
GET /api/volume/statfs πŸ’½ Get volume statistics

🩺 Health Check

Method Endpoint Description
GET /api/health ❀️ Health check

πŸ“ Note: All endpoints expect and return JSON unless otherwise specified.
For details on request/response formats, see the dedicated 'RemoteFS API Documentation'.


πŸ“ Notes

  • πŸ”§ The project is designed to be easily extensible and adaptable to different storage backends.
  • πŸ” Permission and metadata management is inspired by POSIX filesystems.
  • πŸ”’ All communication is secured via HTTPS with proper SSL/TLS encryption.

🐧 Linux support is stable and recommended for production use.

πŸͺŸ Windows support is best-effort and may not work as intended.