Project Version: 1.x
db_backups is a lightweight, modular collection of Bash scripts designed to automate periodic database dumps for MySQL, MariaDB, and PostgreSQL.
Backups are organized by frequency (minutely, hourly, daily, weekly, monthly) and can be pushed to AWS S3 or Cloudflare R2. Old archives are cleaned up automatically based on configurable TTL (time-to-live).
- Flexible Frequency: Minutely, hourly, daily, weekly, and monthly snapshots.
- Multi-DB Support: MySQL, MariaDB, PostgreSQL.
- Compression: Dumps are zipped to save space.
- Cloud Sync: Optional upload to AWS S3 or Cloudflare R2 (via aws-cli or s3cmd).
- Local Storage: Can be configured to save backups locally.
- TTL-Based Cleanup: Automatically prune old backups according to TTL settings.
- Cron-Friendly: Frequency scripts for easy cron integration.
- Install the tool with the online installer.
- Set the bucket configurations (if you want cloud backups).
- Run a backup manually to test it.
- Set up the cronjob for the given frequency.
- Enjoy set-and-forget automation.
Install via Online Installer (as root):
curl -sSL https://raw.githubusercontent.com/capuromeyer/db_backups/master/online_install.sh | sudo bash
Configure Credentials (S3):
- For aws-cli: `aws configure` (enter IAM user key, secret, default region).
  sudo aws configure
- For s3cmd: `s3cmd --configure` (enter the same credentials).
  sudo s3cmd --configure
- For R2, both `aws configure` and `s3cmd --configure` are needed, but you need to create a special profile. (See more in the Cloudflare R2 installation section.)
Deploy Project Config:
- Duplicate the project configuration file:
  sudo cp /etc/db_backups/conf.d.sample/project.conf.sample /etc/db_backups/conf.d/project.conf
  Edit it:
  sudo nano /etc/db_backups/conf.d/project.conf
- Duplicate the global master manifest:
  sudo cp /etc/db_backups/db_backups.conf.sample /etc/db_backups/db_backups.conf
  Edit it to include (enable) the project in the manifest list:
  sudo nano /etc/db_backups/db_backups.conf
Run a Test Backup:
sudo /usr/local/sbin/db_backups/dbb-tester.sh
The tester script uses the hourly frequency.
- Create your cronjobs:
  sudo crontab -e
  1 * * * * /usr/local/sbin/db_backups/dbb-hourly.sh

--- Quick Start end. ---
This will install all dependencies and the necessary directory structure on your server:
curl -sSL https://raw.githubusercontent.com/capuromeyer/db_backups/main/online_install.sh | sudo bash
If you prefer to install it yourself, you need to install the dependencies and create the directory structure.
Install General Dependencies
The general dependencies are Git (to pull the repository), Zip (to compress the database backup files), and Snap (to install the AWS CLI, required for working with cloud buckets).
sudo apt update
sudo apt install git zip snapd
Install Specific Dependencies
AWS CLI
To install the AWS CLI, run the following. Note that this uses the snap installer, so snap must already be installed on your system.
AWS CLI is used to perform operations on the cloud bucket.
sudo snap install aws-cli --classic
S3CMD
To install s3cmd, you need to run:
sudo apt install s3cmd
S3cmd is used for its sync functionality, which uploads backups incrementally (only files not already in the bucket).
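For illustration, a one-off sync of the default local backup root to a bucket could look like this (the bucket name and prefix are placeholders; the db_backups scripts run the equivalent for you):

```bash
# Placeholder bucket and prefix -- shown only to illustrate incremental sync.
s3cmd sync /var/backups/db_backups/ s3://my-backup-bucket/db_backups_server01/
```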
sudo git clone https://github.com/capuromeyer/db_backups.git /opt/db_backups
sudo mkdir -p /usr/local/sbin/db_backups
sudo cp -r /opt/db_backups/staging_root/usr/local/sbin/db_backups/* \
/usr/local/sbin/db_backups/
sudo mkdir -p /etc/db_backups/conf.d /etc/db_backups/conf.d.sample /etc/db_backups/autogen_conf.d
sudo cp /opt/db_backups/staging_root/etc/db_backups/db_backups.conf /etc/db_backups/
sudo cp /opt/db_backups/staging_root/etc/db_backups/conf.d.sample/* \
/etc/db_backups/conf.d.sample/
sudo chmod +x /usr/local/sbin/db_backups/*.sh \
/usr/local/sbin/db_backups/lib/*.sh
For db_backups to run, we need to set up three configuration file sets:
- Master Manifest Config: `/etc/db_backups/db_backups.conf` enables or disables database projects.
- Per-Project Configs: `/etc/db_backups/conf.d/your_project.conf` sets the configuration needed for one project.
- Credentials (third-party): managed externally using either `aws configure` for AWS CLI or `s3cmd --configure` for S3CMD, which establish the necessary cloud credentials for S3/R2.
Apart from the credentials managed by AWS CLI and S3CMD, the db_backups configuration under /etc/db_backups is split across two distinct files, both of which are required for the system to work: the Master Manifest Config and the Project Config File.
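Concretely, the layout under /etc/db_backups looks like this (the project file name is just an example):

```
/etc/db_backups/
├── db_backups.conf       # Master Manifest: include lines enable projects
├── conf.d/
│   └── my_project.conf   # Per-project config (name chosen by you)
├── conf.d.sample/        # Sample configs shipped with the installer
└── autogen_conf.d/       # Created by the installer
```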
- Path: `/etc/db_backups/conf.d/<project>.conf`
- Definition: A project is a simple way to group one or more databases of the same type (like all your MySQL databases) under shared backup settings. It's a label you create to organize your backups, not necessarily tied to a specific client, website, or application. Choose a name that makes sense for how you want to manage your backups.
- Key Principle: One project config = one database engine type (MySQL, MariaDB, or PostgreSQL).
- Run:
  sudo cp /etc/db_backups/conf.d.sample/project.conf.sample /etc/db_backups/conf.d/project.conf
  sudo nano /etc/db_backups/conf.d/project.conf
- Security: Restrict access since this file contains DB passwords:
  sudo chmod 600 /etc/db_backups/conf.d/project.conf
  sudo chown root:root /etc/db_backups/conf.d/project.conf
- Example Fields:
# Unique project name
PROJECT_NAME="my_project"
# List of Databases for this project (space-separated strings in a bash array)
DBS_TO_BACKUP=("db1" "db2")
# Database Type
# Supported: "mysql", "mariadb", "postgres"
DB_TYPE="mysql"
# Database Credentials
DB_USER="root"
DB_PASSWORD="mypassword"
# --- Backup Target & Storage ---
# BACKUP_TYPE: Specifies where to store backups for this project.
# Supported: "local", "cloud", "both"."
BACKUP_TYPE="both"
# Cloud Storage Settings (Required if BACKUP_TYPE is "cloud" or "both")
# Supported: "s3", "r2"
CLOUD_STORAGE_PROVIDER="s3"
# Bucket Name
S3_BUCKET_NAME="db-backups"
# S3 Path
S3_PATH="db_backups_server01"
# --- Frequency Enablement ---
# For each backup frequency, set to "on" to enable backups for THIS project.
BACKUP_FREQUENCY_MINUTELY="on"
BACKUP_FREQUENCY_HOURLY="on"
BACKUP_FREQUENCY_DAILY="on"
BACKUP_FREQUENCY_WEEKLY="on"
BACKUP_FREQUENCY_MONTHLY="on"
# --- Time To Live (TTL) / Retention Settings ---
# Define how long to keep backups for each frequency enabled above.
# Format: <number><unit> (m=minutes, h=hours, d=days, w=weeks, M=months, y=years)
# Example: "60m", "24h", "7d", "4w", "6M", "1y".
# Keep minutely backups for 2 hours
TTL_MINUTELY_BACKUP="120m"
# Keep hourly backups for 24 hours (1 day)
TTL_HOURLY_BACKUP="24h"
# Keep daily backups for 14 days (2 weeks)
TTL_DAILY_BACKUP="14d"
# Keep weekly backups for 5 weeks
TTL_WEEKLY_BACKUP="5w"
# Keep monthly backups for 12 months (1 year)
TTL_MONTHLY_BACKUP="12M"
# ----- Advanced Settings (Optional) -----
#
# -- Settings for Cloudflare R2 --
# aws cli
# Define the profile name for AWS CLI that connects to Cloudflare R2
# R2_AWS_PROFILE_NAME="r2"
# s3cmd
# Define the path to config file needed by S3cmd to connect to Cloudflare R2
# R2_S3CMD_CONFIG_PATH="/root/.s3cfg_r2"
# -- Override Default Local Backup Location --
# By default, LOCAL_BACKUP_ROOT is /var/backups/db_backups/
# Uncomment and set this if you need a custom absolute path for this project's local backups.
# LOCAL_BACKUP_ROOT="/opt/custom_backup_location/my_project_backups"
#
# Override Default Temporary Directory:
# By default, TEMP_DIR is /var/cache/db_backups/PROJECT_NAME_temp/
# Uncomment and set this if you need a custom absolute path for this project's temporary files.
# TEMP_DIR="/opt/custom_temp_location/my_project_temp"
# ==============================================================================
# End of Sample Project Configuration
# ==============================================================================
Building on the concept of Project Config Files, here are the rules for how you should group your databases within these projects:
- Each project configuration file is designed to manage backups for databases of a single type (e.g., all MySQL databases, or all PostgreSQL databases). This is because different database systems require specific tools and commands for dumping their data.
- If you have a group of related databases (like those for a specific client or application) that use different database engines, you'll need a separate project config file for each engine type. For example:
  - WordPress on MariaDB → `wordpress_mariadb.conf`
  - n8n on PostgreSQL → `n8n_postgresql.conf`
  - Mautic on MySQL → `mautic_mysql.conf`
- Conversely, if you have multiple databases that share the same engine, you can group them under a single project config file. For instance, if both your WordPress and Mautic installations use MariaDB, they could both be included in a `websites_mariadb.conf` file.
- Important Note on Backup Retention (TTL): If you configure multiple project files to save backups to the exact same storage location (e.g., the same S3 bucket path), be aware that their automatic cleanup routines (based on TTL) might interfere with each other. This could lead to backups being deleted sooner than intended by one project's settings if another project's settings have a shorter retention period.
Limitation: If projects share the same backup location, their automatic cleanup routines (based on TTL) might interfere with each other. To prevent this, it's crucial to use identical TTL values across all project configurations that share the same backup directory. The sketch below illustrates the problem.
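A minimal illustration of TTL-style pruning with `find`; this is not the project's actual cleanup code, and the shared path is hypothetical:

```bash
# Hypothetical illustration -- not db_backups' actual cleanup code.
# Two projects prune the same shared directory with different TTLs:

# Project A: TTL_HOURLY_BACKUP="24h" -> delete archives older than 1440 minutes
find /var/backups/shared/hourly -name '*.zip' -mmin +1440 -delete

# Project B: TTL_HOURLY_BACKUP="2h" -> delete archives older than 120 minutes
find /var/backups/shared/hourly -name '*.zip' -mmin +120 -delete

# Project B's shorter TTL also removes Project A's archives after 2 hours,
# so A's 24-hour retention is never honored.
```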
After we have at least one project configuration file, we can now enable it. To do this, we need to set the Master Manifest Config.
- Path: `/etc/db_backups/db_backups.conf`
- Purpose: Enumerates which projects to process.
- Contents: Only `include` statements, one per project.
- Run:
  sudo cp /etc/db_backups/db_backups.conf.sample /etc/db_backups/db_backups.conf
  sudo nano /etc/db_backups/db_backups.conf
- Enable/Disable: Remove/add # before an include line to enable or disable a project's backups.
- Isolation: Each included project runs in its own subshell; global variables here do not leak.
# /etc/db_backups/db_backups.conf
include /etc/db_backups/conf.d/my_project.conf;
# include /etc/db_backups/conf.d/another_project.conf;
- Root Execution: All scripts must run as root (sudo) to access databases, write dumps, and enforce permissions.
- File Ownership: Backup archives and config files are owned by rootโonly root may read or modify them.
Understanding the workflow will help you configure and use db_backups effectively:
The Master Manifest (/etc/db_backups/db_backups.conf) acts like a master switch that controls which projects are active. Think of it as your project playlist:
- Enabled projects (uncommented `include` lines) will be processed when backups run
- Disabled projects (commented with `#`) will be completely ignored
- You can easily enable/disable entire projects by adding/removing the `#` comment
Each project has its own configuration file (/etc/db_backups/conf.d/project.conf) that defines:
- Which databases to backup
- Database connection details
- Storage preferences (local, cloud, or both)
- Frequency preferences for that specific project
The system is designed to work with cronjobs that trigger wrapper scripts at different frequencies:
- `dbb-minutely.sh` - triggered every few minutes
- `dbb-hourly.sh` - triggered every hour
- `dbb-daily.sh` - triggered once per day
- `dbb-weekly.sh` - triggered once per week
- `dbb-monthly.sh` - triggered once per month
Here's where it gets clever: All projects share the same code, but each project can choose which frequencies to participate in.
Example Workflow:
- Your cronjob triggers
dbb-hourly.sh - The script reads the Master Manifest and finds all enabled projects
- For each enabled project, it checks that project's individual settings
- If `BACKUP_FREQUENCY_HOURLY="off"` in a project's config, that project gets skipped for this hourly run
- If `BACKUP_FREQUENCY_HOURLY="on"`, the project's databases get backed up
This means:
- Project A might backup every hour and daily (hourly=on, daily=on, weekly=off)
- Project B might only backup weekly and monthly (hourly=off, daily=off, weekly=on, monthly=on)
- Project C might be completely disabled (commented out in Master Manifest)
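A minimal sketch of this gating logic in Bash. The manifest parsing and echo messages are assumptions for illustration; only `PROJECT_NAME` and the `BACKUP_FREQUENCY_*` variables come from the documented project config, and the real scripts differ in detail:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of frequency gating -- not the project's actual code.
FREQUENCY="hourly"   # a wrapper like dbb-hourly.sh would set this

# Read enabled projects from the Master Manifest (uncommented include lines).
grep -E '^include ' /etc/db_backups/db_backups.conf | awk '{print $2}' | tr -d ';' |
while read -r conf; do
    (   # subshell: one project's variables never leak into the next
        source "$conf"
        var="BACKUP_FREQUENCY_${FREQUENCY^^}"   # e.g. BACKUP_FREQUENCY_HOURLY
        if [ "${!var}" = "on" ]; then
            echo "Backing up project '$PROJECT_NAME' ($FREQUENCY run)"
            # ... dump, zip, upload, TTL prune ...
        else
            echo "Skipping '$PROJECT_NAME': $FREQUENCY disabled"
        fi
    )
done
```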
- Flexible scheduling: Different projects can have different backup frequencies
- Easy management: Enable/disable entire projects without touching individual settings
- Resource control: Skip heavy backups during peak hours by adjusting frequency settings
- Maintenance mode: Quickly disable all backups by commenting out projects in the Master Manifest
| Package | Purpose |
|---|---|
| bash | shell for running the scripts |
| git | cloning this repository |
| zip | compressing dumps |
| bc | arithmetic for TTL calculations (legacy) |
| aws-cli | uploading to cloud storage (S3/R2) |
| s3cmd | syncing cloud buckets (S3/R2) |
| snapd | required to install aws-cli via snap |
Git and Zip are prerequisites for the project to run, needed to pull the repository and to compress the database backup files.
sudo apt install git zip
If you want to save the backups on the cloud (highly recommended), you will need a working account. The project currently supports AWS S3 and Cloudflare R2 buckets.
AWS S3 -- Create an Account and Bucket
Step 1: Sign Up for AWS
- Go to https://aws.amazon.com
- Click "Create an AWS Account"
- Fill in your email, password, and account name
- Enter payment info (required for usage, even with Free Tier)
- Complete identity verification and select a support plan
Step 2: Set Up an S3 Bucket
- Log in to the AWS Console: https://console.aws.amazon.com/
- Search for S3 in the services search bar
- Click "Create bucket"
- Fill in:
- Bucket name (must be globally unique)
- Region
- Configure options as needed (default is fine for most)
- Click "Create bucket"
Step 3: Get Access Credentials
- Go to IAM > Users
- Create a new user with programmatic access
- Attach a policy like AmazonS3FullAccess
- Save your Access Key ID and Secret Access Key
Cloudflare R2 -- Create an Account and Bucket
Step 1: Create a Cloudflare Account
- Go to https://dash.cloudflare.com
- Click "Sign Up"
- Enter your email and password, verify your email
- Enter payment info (required for R2, even with Free Tier)
Step 2: Set Up R2
- Go to R2 Object Storage (you can search in the dashboard)
- Click "Create bucket"
- Enter a Bucket name
- (Optional) Set public/private permissions
Step 3: Generate R2 API Token
- Go to R2 Object Storage Overview > API
- Click "API"
- Choose "Use R2 with APIs"
- Save the given Account ID and URL for future reference, then pick "S3 Compatible API" and click "Create an API token"
- This takes you to Manage API Token for R2; click "Create Account API Token"
- Set the bucket name and permissions (note that db_backups requires write permission)
- Click on "Create Account API Token"
- Securely save all the info for future reference (Token value, Access Key ID, Secret Access Key, URL endpoint)
sudo snap install aws-cli --classic
aws configure
Provide your access key, secret key, and default region when prompted.
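A typical session looks like this (the key values below are AWS's documentation placeholders, not real credentials):

```
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFSEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
```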
sudo apt install s3cmd
s3cmd --configure
Enter your credentials to create ~/.s3cfg.
If you prefer to use Cloudflare R2 instead of AWS S3, you'll need to configure both AWS CLI and S3cmd with special R2-specific settings. Unlike AWS S3, R2 requires custom endpoints and profiles.
Step 1: Create an R2-Specific AWS CLI Profile
You can access Cloudflare R2 with the AWS CLI by creating a dedicated profile pointing to your R2 endpoint. Once set up, standard AWS CLI S3 commands work seamlessly against R2.
Run the interactive setup to capture your R2 credentials:
sudo aws configure --profile r2
Where 'r2' is the profile name (you can choose any name you prefer).
When prompted, enter:
- AWS Access Key ID: `your-r2-access-key-id` (32-character hex string, e.g., `e1506511492856e02e36aa47e66b3b68`)
- AWS Secret Access Key: `your-r2-secret-access-key` (64-character hex string, e.g., `9606eb769f01b1ba468d6a32cce7e0118ea0c72730e831cc44cd62c3e20f18ba`)
- Default region name: `auto`
- Default output format: `json` (or press Enter for default)
Step 2: Configure the R2 Endpoint
To avoid errors, you need to add your R2 endpoint under the r2 profile configuration:
sudo su
cd ~/.aws
sudo nano config
Add the endpoint configuration to the file:
[default]
region = us-east-1
[profile r2]
region = auto
endpoint_url = https://your-account-id.r2.cloudflarestorage.com
Replace your-account-id with your actual Cloudflare account ID (32-character hex string, e.g., 957ba64b82843e3ee1a7a08aaf0fd92c) from your R2 dashboard.
Step 3: Test Your R2 Configuration
Test by listing objects in your R2 bucket:
sudo aws s3 ls s3://your-r2-bucket-name --profile r2
Make sure you have some files in the bucket for testing.
Step 1: Create R2 Profile for S3cmd
Create a separate configuration file for R2:
sudo su
cd
sudo s3cmd --configure -c ~/.s3cfg_r2
Where ~/.s3cfg_r2 is the R2-specific configuration file.
When prompted, enter:
- Access Key: `your-r2-access-key-id` (32-character hex string, e.g., `e1506511492856e02e36aa47e66b3b68`)
- Secret Key: `your-r2-secret-access-key` (64-character hex string, e.g., `9606eb769f01b1ba468d6a32cce7e0118ea0c72730e831cc44cd62c3e20f18ba`)
- Default Region: `auto`
- S3 Endpoint: `your-account-id.r2.cloudflarestorage.com` (without https://, e.g., `957ba64b82843e3ee1a7a08aaf0fd92c.r2.cloudflarestorage.com`)
- DNS-style bucket+hostname:port template: `%(bucket)s.your-account-id.r2.cloudflarestorage.com` (e.g., `%(bucket)s.957ba64b82843e3ee1a7a08aaf0fd92c.r2.cloudflarestorage.com`)
Note: %(bucket)s. is a required placeholder. For the rest of the prompted parameters, you can use defaults by pressing Enter.
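For reference, the resulting ~/.s3cfg_r2 should end up with entries along these lines (an illustrative excerpt; the values are placeholders and exact keys and defaults vary by s3cmd version):

```ini
# Illustrative excerpt of ~/.s3cfg_r2 -- values are placeholders.
[default]
access_key = your-r2-access-key-id
secret_key = your-r2-secret-access-key
host_base = your-account-id.r2.cloudflarestorage.com
host_bucket = %(bucket)s.your-account-id.r2.cloudflarestorage.com
use_https = True
```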
Step 2: Handle Expected Configuration Test Error
During configuration, you may encounter:
ERROR: Test failed: 501
This is expected behavior. Cloudflare R2 intentionally rejects S3 ListBuckets calls that include unsupported query parameters with an HTTP 501 NotImplemented response. The s3cmd configuration test includes these parameters, so it fails predictably.
This failure does not indicate a DNS or credential problem.
Step 3: Verify R2 S3cmd Configuration
Ignore the 501 error from the configuration test. Instead, verify your setup by targeting a specific bucket:
sudo s3cmd -c ~/.s3cfg_r2 ls s3://your-r2-bucket-name
When using R2, update your project configuration file (/etc/db_backups/conf.d/project.conf) to include these R2-specific settings:
# Cloud Storage Settings
CLOUD_STORAGE_PROVIDER="r2"
# R2-specific configurations (uncomment and configure)
R2_AWS_PROFILE_NAME="r2"
R2_S3CMD_CONFIG_PATH="/root/.s3cfg_r2"
You can run a manual backup to verify everything works:
sudo /usr/local/sbin/db_backups/dbb-tester.sh
The tester script uses the hourly frequency.
After running a test, you can proceed to create cronjobs:
Example:
# Hourly backup
1 * * * * /usr/local/sbin/db_backups/dbb-hourly.sh >> /var/log/db_backups/hourly.log
sudo crontab -e
Set up cron entries for each required frequency:
# Every 30 minutes
*/30 * * * * /usr/local/sbin/db_backups/dbb-minutely.sh >> /var/log/db_backups/01.minutely.log
# Every hour at 01 minute
1 * * * * /usr/local/sbin/db_backups/dbb-hourly.sh >> /var/log/db_backups/02.hourly.log
# Every day at 03:03
3 3 * * * /usr/local/sbin/db_backups/dbb-daily.sh >> /var/log/db_backups/03.daily.log
# Every week on Monday at 00:02
2 0 * * MON /usr/local/sbin/db_backups/dbb-weekly.sh >> /var/log/db_backups/04.weekly.log
# Every month 1st day at 01:07
7 1 1 * * /usr/local/sbin/db_backups/dbb-monthly.sh >> /var/log/db_backups/05.monthly.log
Wrapper scripts exist for all frequencies (minutely, daily, etc.).
To remove the tool while keeping existing backups and logs:
sudo /usr/local/sbin/db_backups/uninstall.sh
- /usr/local/sbin/db_backups/ โ installed scripts and libraries
- /etc/db_backups/ โ main manifest and project configs
- /var/backups/db_backups/ โ default local backup location
- /var/log/db_backups/ โ cron and tool logs
This project is licensed under the GPL v3 License - see the LICENSE file for details.
Copyright ยฉ 2025 Alejandro Capuro Meyer
All rights reserved. This software is provided under the terms of the GPL v3 license.
Author: Alejandro Capuro Meyer
Project Repository: https://github.com/capuromeyer/db_backups
This project was developed with the assistance of various AI tools including Gemini, ChatGPT, Claude, Jules, Firebase, and others. The project lead, architecture, logic design, debugging, testing, refactoring decisions, and overall direction were provided by the author. AI tools were used as coding assistants to implement the specified requirements and logic under human guidance and supervision.