Restate stores cluster configuration data in a dedicated metadata store. This page explains the available metadata backend options, and how to choose and configure the right one for your deployment.
To understand the terminology used on this page, it might be helpful to read through the architecture reference.

What is Metadata?

Metadata in Restate includes critical cluster coordination information:
  • Cluster membership: Which nodes are part of the cluster and their roles
  • Partition assignments: How partitions are distributed across worker nodes
  • Service deployments: Registered service endpoints and their versions
  • Log configuration: How the replicated log is configured and which nodes participate
The metadata store is essential for cluster operation. While metadata is updated relatively infrequently, its availability is crucial for cluster health.
Choose your metadata provider during initial deployment based on your infrastructure and requirements. Migration from the replicated store to an external store is possible, but it requires a maintenance window (downtime) and careful execution.

Metadata Storage Providers

Restate supports four metadata storage providers, each suited to different deployment scenarios:
  1. Replicated (Default) - Built-in Raft-based consensus metadata server
  2. Etcd - External etcd cluster integration
  3. DynamoDB - Amazon DynamoDB
  4. Amazon S3 - Object store metadata in Amazon S3 (we strongly recommend against using other S3-compatible stores for metadata)

When to Use Each Provider

Replicated (Default)

The replicated metadata server uses Raft consensus to provide a highly-available metadata store built directly into Restate nodes.
Use cases:
  • Standard production deployments
  • On-premises or multi-cloud environments
  • Deployments where you want minimal external dependencies
  • Environments where you already run stateful services
Requirements:
  • Nodes must run the metadata-server role
  • Persistent storage for metadata server nodes
  • Odd number of metadata-server nodes for quorum (typically 3 or 5)
Trade-offs:
  • ✅ No external dependencies - everything runs within Restate
  • ✅ Battle-tested Raft consensus for strong consistency
  • ✅ Works in any environment (cloud, on-prem, hybrid)
  • ⚠️ Needs careful attention to node placement for fault tolerance
  • ⚠️ Requires manual provisioning once a majority of metadata-server nodes are up to avoid split brain (the other backends safely support auto-provisioning in a multi-node cluster); see the provisioning sketch below
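A minimal provisioning sketch, assuming restatectl is run from (or can reach) one of the cluster nodes; check restatectl provision --help for the exact options in your version:
# Run once, after a majority of metadata-server nodes are up
restatectl provision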

Etcd

Use an external etcd cluster as the metadata store. This is ideal if you already operate etcd infrastructure or prefer to centralize metadata management.
Use cases:
  • Organizations with existing etcd infrastructure
  • Environments with dedicated etcd operations expertise
  • Multi-tenant deployments with centralized control
Requirements:
  • External etcd cluster (v3.x) must be available
  • Network connectivity between Restate nodes and etcd
  • etcd cluster properly configured for production use
Trade-offs:
  • ✅ Leverage existing etcd infrastructure
  • ✅ Separate metadata concerns from Restate node management
  • ✅ Allows sharing metadata across clusters
  • ⚠️ Adds external dependency that must be separately managed
  • ⚠️ Network latency to etcd can affect cluster operations
  • ⚠️ Requires etcd operations expertise
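Before pointing Restate at etcd, it can be worth verifying connectivity from a Restate node (endpoints are placeholders matching the configuration example later on this page):
etcdctl --endpoints=etcd-1.example.com:2379,etcd-2.example.com:2379,etcd-3.example.com:2379 endpoint health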

DynamoDB

Use Amazon DynamoDB as a fully-managed metadata store. This simplifies operations by eliminating the need to run the stateful metadata role on your nodes.
Use cases:
  • AWS-native deployments
  • Serverless or fully-managed infrastructure preferences
  • Deployments prioritizing operational simplicity
  • Multi-region AWS deployments
Requirements:
  • AWS account with DynamoDB access
  • DynamoDB table created with correct schema
  • IAM permissions: dynamodb:GetItem, dynamodb:PutItem, dynamodb:DeleteItem
  • Low-latency network access to DynamoDB
Trade-offs:
  • ✅ Fully managed - no metadata servers to operate
  • ✅ Automatic scaling and high availability from DynamoDB
  • ✅ Multi-region DynamoDB tables can provide consistent global metadata that is resilient to regional outages
  • ✅ Native AWS integration
  • ✅ Supports multi-cluster setups with key prefixes
  • ⚠️ AWS-specific - not portable to other clouds
  • ⚠️ Additional AWS service costs
  • ⚠️ Operations are sensitive to network latency to DynamoDB
  • ⚠️ Requires proper IAM configuration

Object Store (Amazon S3)

Use Amazon S3 as a metadata store. This provides a lightweight alternative to running metadata servers while keeping all data in object storage.
Use cases:
  • AWS deployments with existing S3 infrastructure
  • Minimizing stateful components
  • Environments optimizing for S3 as primary storage
Requirements:
  • Amazon S3 bucket (not S3-compatible alternatives)
  • IAM permissions for S3 operations
  • Very low-latency access to S3 (same region recommended)
Trade-offs:
  • ✅ No metadata servers to manage
  • ✅ Integrates with existing S3 buckets
  • ✅ Simple backup and recovery
  • ⚠️ Only Amazon S3 is tested and supported (notably, MinIO is not supported for metadata)
  • ⚠️ Tied to a single region
  • ⚠️ Network latency to S3 directly affects cluster operations; very low-latency access (typically same region) is required
For object store metadata: Only Amazon S3 is currently tested and supported. Using S3-compatible alternatives is not supported and may lead to correctness issues. MinIO in particular is known to be incompatible as a metadata backend.
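To confirm the bucket lives in the region you expect (the bucket name here is a placeholder):
aws s3api get-bucket-location --bucket restate-metadata-bucket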

Configuration Examples

Replicated (Default)

This is the default configuration for multi-node clusters. Each node needs to know about all metadata server addresses.
restate.toml
# Node configuration - each node needs these settings
node-name = "node-1"
cluster-name = "my-cluster"

# Enable the metadata-server role on this node
roles = [
    "metadata-server",
    "worker",
    "log-server",
    "admin",
    "http-ingress"
]

# List all metadata server nodes (all nodes need this)
[metadata-client]
type = "replicated"
addresses = [
    "http://node-1:5122/",
    "http://node-2:5122/",
    "http://node-3:5122/"
]
For fault tolerance, run metadata-server on an odd number of nodes (typically 3 or 5). To tolerate n node failures, you need at least 2n + 1 metadata server nodes.
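Once a majority of metadata-server nodes are running and the cluster has been provisioned, you can confirm that it is healthy with the same commands used later in this guide:
restatectl status
restatectl nodes list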

Etcd

Configure Restate to use an external etcd cluster:
restate.toml
# No metadata-server role needed
roles = [
    "worker",
    "log-server",
    "admin",
    "http-ingress"
]

[metadata-client]
type = "etcd"
# Etcd cluster addresses (host:port format)
addresses = [
    "etcd-1.example.com:2379",
    "etcd-2.example.com:2379",
    "etcd-3.example.com:2379"
]

# Optional: Adjust timeouts if needed
connect-timeout = "5s"
keep-alive-interval = "10s"

DynamoDB

Configure Restate to use Amazon DynamoDB as the metadata store.
1. Create the DynamoDB table:
aws dynamodb create-table \
    --table-name restate-metadata \
    --attribute-definitions AttributeName=pk,AttributeType=S \
    --key-schema AttributeName=pk,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --region us-east-1
The table must have:
  • A string partition key named pk (this is the only required attribute)
  • A billing mode appropriate for your usage (PAY_PER_REQUEST or PROVISIONED)
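To confirm the table is active before pointing Restate at it (table name and region taken from the example above):
aws dynamodb describe-table \
    --table-name restate-metadata \
    --region us-east-1 \
    --query 'Table.TableStatus'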
2. Required IAM permissions: Your Restate nodes need these DynamoDB permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/restate-metadata"
    }
  ]
}
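One way to grant these permissions is to attach the policy to the instance or task role used by your Restate nodes. The role and policy names below are placeholders, and the JSON above is assumed to be saved as restate-metadata-policy.json:
aws iam put-role-policy \
    --role-name restate-node-role \
    --policy-name restate-metadata-access \
    --policy-document file://restate-metadata-policy.json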
3. Configure Restate:
restate.toml
# No metadata-server role needed
roles = [
    "worker",
    "log-server",
    "admin",
    "http-ingress"
]

[metadata-client]
type = "dynamo-db"
table = "restate-metadata"

# Authentication: use EITHER aws-profile OR explicit credentials
aws-profile = "my-profile"
# aws-access-key-id = "..."
# aws-secret-access-key = "..."
# aws-session-token = "..."        # For STS temporary credentials

# Required unless inferred from profile/environment
aws-region = "us-east-1"

# Optional: Namespace for multi-cluster setups
# key-prefix = "prod-cluster/"

# Optional: For local testing with DynamoDB Local
# aws-endpoint-url = "http://localhost:8000"
Multi-cluster support: Use the key-prefix option to share a single DynamoDB table across multiple Restate clusters. Each cluster will use its own namespace within the table.
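For example, two clusters could share one table by using different prefixes (a sketch; the prefix values are arbitrary):
# restate.toml on cluster A nodes
[metadata-client]
type = "dynamo-db"
table = "restate-metadata"
key-prefix = "cluster-a/"

# restate.toml on cluster B nodes would be identical except for
# key-prefix = "cluster-b/"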

Object Store (Amazon S3)

Configure Restate to use Amazon S3 for metadata storage:
restate.toml
# No metadata-server role needed
roles = [
    "worker",
    "log-server",
    "admin",
    "http-ingress"
]

[metadata-client]
type = "object-store"
# S3 bucket and optional prefix
path = "s3://restate-metadata-bucket/cluster-prod"

# Authentication options
aws-profile = "my-profile"
# Or use explicit credentials:
# aws-access-key-id = "..."
# aws-secret-access-key = "..."
# aws-session-token = "..."

aws-region = "us-east-1"

# Optional: For testing with local S3-compatible storage
# aws-endpoint-url = "http://localhost:9000"
# aws-allow-http = true
Object store metadata requires low-latency access to S3. Deploy your Restate cluster in the same AWS region as your S3 bucket to minimize latency impact on cluster operations.
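As a quick sanity check, you can verify that the node's credentials can reach the bucket before starting the cluster (assumes the AWS CLI is installed on the node; bucket name as in the example above):
aws s3api head-bucket --bucket restate-metadata-bucket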

Migrating Between Providers

Restate supports migrating metadata from the replicated provider to external providers (etcd, DynamoDB, or object-store) using the restatectl metadata migrate command. This allows you to change metadata providers without losing cluster state.
Important limitations:
  • Migration only supports moving FROM replicated TO external providers (not the reverse)
  • Migration requires all cluster nodes to be restarted in migration mode
  • Plan for a maintenance window as the cluster will not process invocations during migration

Migration Process

Prerequisites

Before starting the migration:
  1. Target metadata store must be accessible and ready:
    • DynamoDB: Table created with correct schema
    • Etcd: Cluster running and accessible
    • Object Store: S3 bucket created and accessible
  2. Prepare target configuration:
    • Have your target [metadata-client] configuration ready in TOML format
    • Test connectivity from Restate nodes to the target store
  3. Plan for downtime:
    • All nodes will need to be restarted in migration mode
    • Invocation processing will be paused during migration

Step-by-Step Migration

Step 1: Prepare target metadata store

For DynamoDB:
aws dynamodb create-table \
    --table-name restate-metadata \
    --attribute-definitions AttributeName=pk,AttributeType=S \
    --key-schema AttributeName=pk,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --region us-east-1
For Etcd:
# Ensure your etcd cluster is running and accessible
etcdctl endpoint health
For Object Store:
# Ensure S3 bucket exists and is accessible
aws s3 ls s3://restate-metadata-bucket/

Step 2: Restart all nodes in migration mode

Stop all cluster nodes and restart them with the --metadata-migration-mode flag:
restate --metadata-migration-mode --config-file restate.toml
In migration mode, only the Admin and MetadataServer roles are active. The cluster will not process invocations or serve requests until migration is complete and nodes are restarted normally.

Step 3: Create target configuration file

Create a TOML file (e.g., target-metadata.toml) with your target metadata client configuration.
For DynamoDB:
target-metadata.toml
[metadata-client]
type = "dynamo-db"
table = "restate-metadata"
aws-profile = "my-profile"
aws-region = "us-east-1"
For Etcd:
target-metadata.toml
[metadata-client]
type = "etcd"
addresses = [
    "etcd-1.example.com:2379",
    "etcd-2.example.com:2379",
    "etcd-3.example.com:2379"
]
For Object Store:
target-metadata.toml
[metadata-client]
type = "object-store"
path = "s3://restate-metadata-bucket/cluster-prod"
aws-profile = "my-profile"
aws-region = "us-east-1"

Step 4: Run migration command

Execute the migration using restatectl:
restatectl metadata migrate --toml target-metadata.toml
By default, the migration will fail if any keys already exist in the target store. To override existing keys:
restatectl metadata migrate --toml target-metadata.toml --force
Use --force with caution. It will overwrite any existing metadata in the target store, which could cause issues if the target is shared with other clusters.
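Before reaching for --force, you can check whether the target already contains anything, for example with DynamoDB (table name and region as used earlier):
aws dynamodb scan --table-name restate-metadata --region us-east-1 --select COUNT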
The migration command will:
  1. Read all metadata from the replicated store
  2. Write it to the target store
  3. Verify the migration succeeded
  4. Display next steps
Expected output:
✅ Metadata migration completed

1. Please make sure to update all nodes config with
   [metadata-client]
   type = "dynamo-db"
   table = "restate-metadata"
   aws-profile = "my-profile"
   aws-region = "us-east-1"

2. Restart all nodes without the --metadata-migration-mode

Step 5: Update node configurations

Update the [metadata-client] section in all node configuration files with the target configuration:
restate.toml
# Remove or comment out metadata-server role if present
roles = [
    "worker",
    "log-server",
    "admin",
    "http-ingress"
]

# Replace [metadata-client] section with target configuration
[metadata-client]
type = "dynamo-db"
table = "restate-metadata"
aws-profile = "my-profile"
aws-region = "us-east-1"
Also remove or comment out the [metadata-server] section if present, as it’s no longer needed with external metadata providers.

Step 6: Restart all nodes normally

Stop all nodes running in migration mode and restart them with their updated configurations:
restate --config-file restate.toml
Verify the cluster is healthy:
restatectl status
restatectl nodes list

Troubleshooting Migration

Migration fails with “target keys exist”:
  • The target store already contains metadata keys
  • Either use --force to override, or clean the target store first
  • Verify you’re not accidentally sharing a metadata store with another cluster
Migration command times out:
  • Check that nodes are actually running in migration mode
  • Verify network connectivity to both source and target stores
  • Check Restate server logs for errors
Cluster won’t start after migration:
  • Verify all nodes have the updated [metadata-client] configuration
  • Check IAM permissions for DynamoDB or S3
  • Verify etcd cluster is accessible if using etcd
  • Check server logs for connection errors
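If you suspect credential or permission problems with DynamoDB or S3, checking which identity the node's credentials resolve to can help narrow things down (assumes the AWS CLI is available on the node):
aws sts get-caller-identity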