A DigitalOcean Function that backs up the complete contents of a DigitalOcean Spaces bucket by archiving all files into a ZIP and uploading it to a destination bucket.
- Downloads all objects from a source Spaces bucket
- Creates a compressed ZIP archive
- Uploads the archive to a destination bucket
- Uses streaming for memory-efficient processing
- Handles pagination for large buckets
- Configurable via environment variables
- Supports cross-region backups
- DigitalOcean Account
- DigitalOcean CLI (`doctl`)
- Two DigitalOcean Spaces buckets (source and destination)
- Spaces API credentials (Access Key and Secret)
```bash
# macOS
brew install doctl

# Authenticate
doctl auth init
doctl serverless connect
```

Copy the example environment file and fill in your values:

```bash
cp .env.example .env
```

Edit `.env` with your configuration.
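For reference, a filled-in `.env` might look something like the following; the variable names come from the configuration table further below, and every value here is a placeholder:

```bash
# Example values only -- substitute your own buckets and credentials
SOURCE_BUCKET=my-source-bucket
SOURCE_REGION=nyc3
DEST_BUCKET=my-backup-bucket
DEST_REGION=nyc3
SPACES_KEY=DO00EXAMPLEACCESSKEY
SPACES_SECRET=exampleSecretKeyValue
ARCHIVE_PREFIX=backups
```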
Install the project dependencies:

```bash
npm install
```

Deploy the function to DigitalOcean:

```bash
doctl serverless deploy .
```

The function will be available at a URL like:

```
https://faas-nyc1-x.doserverless.co/api/v1/web/fn-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/backup/backup
```
You can invoke the function via an HTTP request or with the CLI:

```bash
# Using doctl
doctl serverless functions invoke backup/backup

# Using curl (replace with your function URL)
curl -X POST https://your-function-url/backup/backup
```

To run backups automatically, you can:
- Use DigitalOcean Functions Triggers (when available)
- Use a cron job with `doctl`:

  ```bash
  # Add to crontab to run daily at 2 AM
  0 2 * * * /usr/local/bin/doctl serverless functions invoke backup/backup
  ```
- Use a third-party service like cron-job.org to hit the function URL
Successful backup:

```json
{
  "statusCode": 200,
  "body": {
    "message": "Backup completed successfully",
    "sourceBucket": "my-source-bucket",
    "destinationBucket": "my-backup-bucket",
    "archiveName": "backups/backup-my-source-bucket-2025-12-19T10-30-00-000Z.zip",
    "filesBackedUp": 150,
    "archiveSize": 104857600,
    "durationSeconds": 45.32
  }
}
```

Error response:
```json
{
  "statusCode": 500,
  "body": {
    "error": "Backup failed",
    "message": "Error description",
    "stack": "..."
  }
}
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `SOURCE_BUCKET` | Yes | - | Name of the source Spaces bucket |
| `SOURCE_REGION` | No | `nyc3` | Region of the source bucket |
| `SOURCE_ENDPOINT` | No | Auto-generated | Custom S3 endpoint for source |
| `DEST_BUCKET` | Yes | - | Name of the destination bucket |
| `DEST_REGION` | No | `nyc3` | Region of the destination bucket |
| `DEST_ENDPOINT` | No | Auto-generated | Custom S3 endpoint for destination |
| `SPACES_KEY` | Yes | - | Spaces access key ID |
| `SPACES_SECRET` | Yes | - | Spaces secret access key |
| `ARCHIVE_PREFIX` | No | `backups` | Prefix/folder for backup archives |
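The "Auto-generated" default presumably means the endpoint is derived from the region. As an illustration only (the helper and its defaulting logic are assumptions, not necessarily how the function builds its clients), a Spaces-compatible client can be configured with the AWS SDK like this:

```javascript
import { S3Client } from "@aws-sdk/client-s3";

// Hypothetical helper: build a Spaces client from the variables above,
// falling back to the standard regional Spaces endpoint when no custom
// endpoint is provided.
function spacesClient(region = "nyc3", endpoint) {
  return new S3Client({
    region,
    endpoint: endpoint ?? `https://${region}.digitaloceanspaces.com`,
    credentials: {
      accessKeyId: process.env.SPACES_KEY,
      secretAccessKey: process.env.SPACES_SECRET,
    },
  });
}

const sourceClient = spacesClient(process.env.SOURCE_REGION, process.env.SOURCE_ENDPOINT);
const destClient = spacesClient(process.env.DEST_REGION, process.env.DEST_ENDPOINT);
```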
Configured in `project.yml`:
- Timeout: 15 minutes (900,000 ms)
- Memory: 1GB RAM
Adjust these if needed for larger buckets.
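The actual `project.yml` isn't reproduced here, but in the standard `doctl serverless` project format these limits would sit roughly as follows; the runtime version, `web` flag, and `${...}` templating from `.env` are assumptions:

```yaml
packages:
  - name: backup
    functions:
      - name: backup
        runtime: nodejs:18
        web: true
        limits:
          timeout: 900000   # milliseconds (15 minutes)
          memory: 1024      # MB (1 GB)
        environment:
          SOURCE_BUCKET: "${SOURCE_BUCKET}"
          DEST_BUCKET: "${DEST_BUCKET}"
          SPACES_KEY: "${SPACES_KEY}"
          SPACES_SECRET: "${SPACES_SECRET}"
```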
- List Objects: The function retrieves a complete list of all objects in the source bucket using `@aws-sdk/client-s3`, handling pagination automatically
- Create Archive: Uses the `archiver` library (v7) to create a streaming ZIP archive
- Stream Download: Each object is streamed from the source bucket using the AWS SDK v3 `GetObjectCommand`
- Stream Upload: The archive is simultaneously streamed to the destination bucket using `@aws-sdk/lib-storage` for efficient multipart uploads
- Complete: Returns a summary with file count, size, and duration
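The handler itself isn't reproduced in this README, but the flow above can be sketched roughly as follows. This is a minimal illustration, not the function's actual code: the helper name, parameter names, and the absence of error handling are all assumptions.

```javascript
import { PassThrough } from "node:stream";
import { ListObjectsV2Command, GetObjectCommand } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import archiver from "archiver";

// Hypothetical sketch of the list -> zip -> upload pipeline described above.
async function backupBucket({ sourceClient, destClient, sourceBucket, destBucket, archiveKey }) {
  // 1. List every object key in the source bucket, following pagination.
  const keys = [];
  let ContinuationToken;
  do {
    const page = await sourceClient.send(
      new ListObjectsV2Command({ Bucket: sourceBucket, ContinuationToken })
    );
    for (const obj of page.Contents ?? []) keys.push(obj.Key);
    ContinuationToken = page.NextContinuationToken;
  } while (ContinuationToken);

  // 2. Create a streaming ZIP archive and pipe it into a PassThrough stream
  //    that serves as the body of the destination upload.
  const archive = archiver("zip", { zlib: { level: 9 } });
  const body = new PassThrough();
  archive.pipe(body);

  // 3. A managed multipart upload consumes the archive stream while it is
  //    still being written.
  const upload = new Upload({
    client: destClient,
    params: { Bucket: destBucket, Key: archiveKey, Body: body },
  });

  // 4. Stream each source object into the archive, one entry at a time.
  const fillArchive = async () => {
    for (const key of keys) {
      const obj = await sourceClient.send(
        new GetObjectCommand({ Bucket: sourceBucket, Key: key })
      );
      archive.append(obj.Body, { name: key });
      // Wait until archiver has consumed this entry before opening the next
      // object stream, so only one download is in flight at a time.
      await new Promise((resolve) => archive.once("entry", resolve));
    }
    await archive.finalize();
  };

  // 5. Both sides must finish: the archive writer and the uploader.
  await Promise.all([upload.done(), fillArchive()]);
  return { filesBackedUp: keys.length };
}
```

In the deployed function, the two clients would be built from the environment variables in the table above, and the archive key would follow the timestamped `backups/backup-<bucket>-<timestamp>.zip` pattern shown in the example response.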
- Size: Limited by function memory (1 GB) and timeout (15 minutes)
- Large Buckets: For very large buckets (100 GB+), consider:
  - Increasing the function memory and timeout in `project.yml`
  - Splitting the backup into multiple archives by prefix (see the sketch after this list)
  - Using a DigitalOcean Droplet instead
- Bandwidth: Subject to DigitalOcean bandwidth limits
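One way to split the work is to archive one key prefix per invocation. A hedged sketch, using the SDK's built-in paginator for brevity (the helper is hypothetical and not part of this function):

```javascript
import { paginateListObjectsV2 } from "@aws-sdk/client-s3";

// Hypothetical: collect only the keys under one prefix (e.g. "images/") so
// each invocation archives just a slice of the bucket.
async function keysUnderPrefix(client, bucket, prefix) {
  const keys = [];
  const pages = paginateListObjectsV2({ client }, { Bucket: bucket, Prefix: prefix });
  for await (const page of pages) {
    for (const obj of page.Contents ?? []) keys.push(obj.Key);
  }
  return keys;
}
```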
The function uses the following key libraries:
- @aws-sdk/client-s3 (v3) - S3-compatible operations for DigitalOcean Spaces
- @aws-sdk/lib-storage (v3) - Managed multipart uploads with streaming support
- archiver (v7) - Streaming ZIP archive creation
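In `package.json` terms, these correspond to dependency entries along these lines; the major versions match the list above, while the exact ranges are assumptions:

```json
{
  "dependencies": {
    "@aws-sdk/client-s3": "^3.0.0",
    "@aws-sdk/lib-storage": "^3.0.0",
    "archiver": "^7.0.0"
  }
}
```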
Check recent activations and their logs with:

```bash
doctl serverless activations list
doctl serverless activations get <activation-id> --logs
```

MIT