This tool lets you deploy a RonDB cluster on AWS, monitor the cluster and run benchmarks.
A minimal guide to get started (see the example session after the list):
- The easiest way to get all necessary dependencies is to install nix and run `nix-shell`. You'll get a bash session with access to the dependencies, without modifying your system directories. Otherwise:
  - Install terraform.
  - Install AWS CLI, python3, rsync and tmux.
    - On Debian/Ubuntu: `apt-get update && apt-get install -y awscli python3 rsync tmux`
    - On macOS: `brew install awscli python3 rsync tmux`
- You need an AWS API key to create cloud resources. Run `aws configure` and enter it.
- Run `./cluster_ctl deploy` to initialize and deploy the cluster. Go to the printed web address to monitor the cluster using grafana dashboards.
- Run `./cluster_ctl bench_locust` to create test data and start locust. Go to the printed web address to access locust.
- In the locust GUI, set the number of users to the same as the number of workers and press `START`. During the benchmark run, you can use both locust and grafana to gather statistics, from the client's and server's point of view, respectively.
- Run `./cluster_ctl cleanup` to destroy the AWS resources and local files created.
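
Putting the steps together, a typical session might look like the sketch below. The grafana and locust web addresses are printed by the commands, so none are hard-coded here.

```sh
# Optional: enter a shell with all dependencies available (requires nix).
nix-shell

# Enter your AWS API key when prompted.
aws configure

# Create the AWS resources, then install and start all services.
./cluster_ctl deploy

# Create test data and start locust; open the printed web address,
# set the number of users equal to the number of workers, and press START.
./cluster_ctl bench_locust

# When done, destroy the AWS resources and local files.
./cluster_ctl cleanup
```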
A cluster can be (re)configured by changing the values in `config.py`.
Before running `./cluster_ctl deploy` (or `./cluster_ctl terraform`) and after running `./cluster_ctl cleanup`, terraform is not initialized and you may edit `config.py` freely.
Otherwise, follow these steps:
- If you intend to change region, run `./cluster_ctl cleanup` first. This is because `region` cannot be updated for an existing cluster.
- Edit `config.py`.
- Apply the changes. The easiest way to do this is to run `./cluster_ctl deploy`. If the cluster already existed, it might be possible to apply the changes faster, but it's more complex (see the sketch after this list):
  - If you have changed variables that affect AWS resources, you must run `./cluster_ctl terraform`. These variables include `num_azs`, `cpu_platform`, `*_instance_type`, `*_disk_size` and `*_count`.
  - Run `./cluster_ctl install` at least for the affected nodes. This may include more nodes than you might expect. For example, changing `ndbmtd_count` will affect the sequence of node IDs assigned to `mysqld`, `rdrs` and `bench` nodes, and will also affect the `ndb_mgmd` config.
  - Run `./cluster_ctl start` at least for the affected nodes.
- If all data nodes have been reinstalled or restarted, the test data table is lost. You'll have to repopulate using `./cluster_ctl bench_locust` or `./cluster_ctl populate`.
- If you have run `./cluster_ctl deploy` or `./cluster_ctl start` on `bench` nodes, locust will be stopped but not started. You'll have to use `./cluster_ctl bench_locust` or `./cluster_ctl start_locust` for that.
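
As an illustration of the faster path, here is a sketch for the case where an AWS-affecting variable (say, `ndbmtd_count`) was changed in `config.py`. Running `install` and `start` without a `NODES` argument covers all nodes, which is always safe, if slower than targeting only the affected ones.

```sh
# Update the AWS resources to match the edited config.py.
./cluster_ctl terraform

# Reinstall software and configuration, then restart services,
# on all nodes (no NODES argument given).
./cluster_ctl install
./cluster_ctl start

# Data nodes were restarted, so the test data table is lost;
# repopulate it and restart locust in one step.
./cluster_ctl bench_locust
```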
Some sub-commands take a `NODES` argument.
It is a comma-separated list of node names and node types.
If given, the command will operate only on matching nodes, otherwise on all nodes.
Available node types are: `ndb_mgmd`, `ndbmtd`, `mysqld`, `rdrs`, `prometheus`, `grafana` and `bench`.
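
For example (the node name in the second command is hypothetical; run `./cluster_ctl list` to see the actual names in your cluster):

```sh
# Restart services on all data nodes and all MySQL servers.
./cluster_ctl start ndbmtd,mysqld

# Stop services on a single node, referred to by name.
./cluster_ctl stop ndbmtd_1
```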
- `./cluster_ctl deploy [NODES]`
  A convenience command equivalent to `terraform`, `install` and `start`.
- `./cluster_ctl terraform`
  Configure, initialize and apply terraform as needed. In detail,
  - create or update `terraform.tfvars`,
  - call `terraform init` unless it is already initialized,
  - call `terraform apply`.
- `./cluster_ctl install [NODES]`
  Install necessary software and configuration files. Depending on the node type this may include RonDB, prometheus, grafana and locust.
- `./cluster_ctl start [NODES]`
  Start or restart services. (This will not start locust; see `bench_locust`.)
- `./cluster_ctl stop [NODES]`
  Stop services.
- `./cluster_ctl open_tmux [NODES]`
  Create a tmux session with ssh connections opened to all applicable nodes. When attached to the tmux session, you can press `C-b n` to go to the next window, and `C-b d` to detach. If a session already exists, the `NODES` argument is ignored and the existing session is reattached. To connect to a different set of nodes, use `kill_tmux` first. See also the `./cluster_ctl ssh` command.
- `./cluster_ctl kill_tmux`
  Kill the tmux session created by `open_tmux`.
- `./cluster_ctl ssh NODE_NAME [COMMAND ...]`
  Connect to a node using ssh. Run `./cluster_ctl list` to see the options for `NODE_NAME`. See also the `./cluster_ctl open_tmux` command.
- `./cluster_ctl bench_locust [--cols COLUMNS] [--rows ROWS] [--types COLUMN_TYPES] [--workers WORKERS]`
  A convenience command equivalent to `populate` followed by `start_locust`.
- `./cluster_ctl populate [--cols COLUMNS] [--rows ROWS] [--types COLUMN_TYPES]`
  Create and populate a table with test data. If a table already exists, recreate it if any parameter has changed, otherwise do nothing. Parameters:
  - `COLUMNS`: Number of columns in the table. Default 10.
  - `ROWS`: Number of rows in the table. Default 100000.
  - `COLUMN_TYPES`: The column types in the table; one of `INT`, `VARCHAR`, `INT_AND_VARCHAR`. Default `INT`.
- `./cluster_ctl start_locust [--rows ROWS] [--workers WORKERS]`
  Start or restart locust services. You need to run `populate` first. Parameters:
  - `ROWS`: Number of rows in the table. Default 100000. Warning: this must be set to the same value as in the last call to `populate`. Prefer using `bench_locust`, which will do the right thing automatically.
  - `WORKERS`: Number of locust workers. Defaults to the maximum possible (one per available CPU).
- `./cluster_ctl stop_locust`
  Stop locust.
- `./cluster_ctl list`
  Print node information, including node name, node type, public and private IP, NDB node IDs, and running processes.
- `./cluster_ctl cleanup`
  This will
  - destroy the terraform cluster created by `./cluster_ctl terraform`,
  - destroy the AWS SSH key created by `./cluster_ctl terraform`,
  - remove the tmux session created by `./cluster_ctl open_tmux`,
  - delete temporary files.
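
Finally, a sketch of a benchmark run that overrides the defaults of the locust-related commands (the parameter values are arbitrary examples):

```sh
# Create and populate a test table with 20 VARCHAR columns and 1000000 rows.
./cluster_ctl populate --cols 20 --rows 1000000 --types VARCHAR

# Start locust with 8 workers; --rows must match the last populate call.
./cluster_ctl start_locust --rows 1000000 --workers 8

# The two commands above are equivalent to:
#   ./cluster_ctl bench_locust --cols 20 --rows 1000000 --types VARCHAR --workers 8

# Stop locust when the benchmark is finished.
./cluster_ctl stop_locust
```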