The programmatic order framework requires a watch-tower to monitor the blockchain for new orders and post them to the CoW Protocol OrderBook API. The watch-tower is a standalone application that can be run locally as a script for development, or deployed as a Docker container to a server or DAppNode.
If running your own watch-tower instance, you will need the following:
- An RPC node connected to the Ethereum mainnet, Arbitrum One, Gnosis Chain, Base, or Sepolia.
- Internet access to the CoW Protocol OrderBook API.
CAUTION: Conditional order types may consume considerable RPC calls.
NOTE: deployment-block refers to the block number at which the ComposableCoW contract was deployed to the respective chain. This is used to optimise the watch-tower by only fetching events from the blockchain after this block number. Refer to Deployed Contracts for the respective chains.
NOTE: The `pageSize` option specifies the number of blocks to fetch from the blockchain when querying historical events (`eth_getLogs`). The default is 5000, which is the maximum number of blocks that can be fetched in a single request from Infura. If you are running the watch-tower against your own RPC node, you may want to set this to 0 to fetch all blocks in a single request instead of paging.
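To illustrate what this paging means in practice, the loop below splits a historical range into page-sized queries. It is a simplification for illustration only, not the watch-tower's actual code, and the block numbers are made up:

```shell
# Split a historical block range into eth_getLogs queries of at most PAGE blocks.
FROM=0
TO=12000
PAGE=5000
START=$FROM
while [ "$START" -le "$TO" ]; do
  END=$((START + PAGE - 1))
  if [ "$END" -gt "$TO" ]; then END=$TO; fi
  echo "eth_getLogs fromBlock=$START toBlock=$END"
  START=$((END + 1))
done
# With pageSize=5000 this prints three ranges: 0-4999, 5000-9999, 10000-12000.
# With pageSize=0 the watch-tower would instead issue one request for the whole range.
```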
The preferred method of deployment is using Docker. The watch-tower is available as a Docker image on GitHub. The available tags are:

- `latest` - the latest released version of the watch-tower.
- `vX.Y.Z` - a specific version of the watch-tower.
- `main` - the latest build from the `main` branch.
- `pr-<PR_NUMBER>` - the latest build for the given pull request.
As an example, to run the latest version of the watch-tower via docker:
```shell
docker run --rm -it \
  -v "$(pwd)/config.json.example:/config.json" \
  ghcr.io/cowprotocol/watch-tower:latest \
  run \
  --config-path /config.json
```

NOTE: See the example `config.json.example` for an example configuration file.
For DAppNode, the watch-tower is available as a package. This package is held in a separate repository.
- `node` (>= v16.18.0)
- `yarn`
- Copy the environment example file:

  ```shell
  cp .env.example .env
  ```

- Edit `.env` with your configuration:

  ```shell
  # Networks to monitor (comma-separated list, e.g., "base,mainnet,arbitrum")
  # Defaults to "base" if not specified
  NETWORKS=base

  # Base Network Configuration
  BASE_RPC_URL=https://rpc.base.org
  BASE_DEPLOYMENT_BLOCK=31084679
  BASE_HANDLER_ADDRESS=0xYourHandlerAddressHere
  # Optional: process every N blocks when using block-by-block mode (default: 1)
  BASE_BLOCK_POLLING_RATE=60
  # Optional: time-based polling every N seconds (e.g., 300 = 5 minutes).
  # If set, uses time-based polling instead of block-by-block.
  BASE_POLLING_INTERVAL_SECONDS=300
  # Optional: keep expired orders in registry (default: true)
  BASE_KEEP_EXPIRED_ORDERS=true

  # To add more networks, set NETWORKS=base,mainnet and configure:
  # MAINNET_RPC_URL=https://eth.llamarpc.com
  # MAINNET_DEPLOYMENT_BLOCK=12345678
  # MAINNET_HANDLER_ADDRESS=0xYourHandlerAddressHere
  # MAINNET_BLOCK_POLLING_RATE=1
  # MAINNET_POLLING_INTERVAL_SECONDS=300

  # Storage Configuration (optional)
  # Options: "leveldb" or "redis" (default: "leveldb")
  STORAGE_TYPE=redis
  REDIS_HOST=localhost
  REDIS_PORT=6379
  REDIS_PASSWORD=
  ```
- Generate `config.json` from the environment variables:

  ```shell
  yarn generate-config
  ```

  This will create a `config.json` file with your network configuration(s), filtering only orders from your specified handler address(es).

Environment variable pattern:

- `NETWORKS`: Comma-separated list of network names (e.g., `"base,mainnet"`)
- For each network `{NETWORK}`:
  - `{NETWORK}_RPC_URL` (required): RPC endpoint URL
  - `{NETWORK}_DEPLOYMENT_BLOCK` (required): Block number where ComposableCoW was deployed
  - `{NETWORK}_HANDLER_ADDRESS` (required): Handler address to filter orders
  - `{NETWORK}_BLOCK_POLLING_RATE` (optional): Process every N blocks when using block-by-block mode (default: 1)
  - `{NETWORK}_POLLING_INTERVAL_SECONDS` (optional): Time-based polling interval in seconds. If set, the watch-tower polls for new blocks at this interval instead of listening to every block. Useful for handlers that don't require immediate updates (e.g., Dutch auctions updating every 10 minutes). If not set, block-by-block polling is used.
  - `{NETWORK}_KEEP_EXPIRED_ORDERS` (optional): Keep expired orders (default: true)
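For reference, the generated `config.json` takes roughly the following shape. The field names below are a sketch inferred from the options above, not an authoritative schema; consult the repository's `config.json.example` for the exact format:

```json
{
  "networks": [
    {
      "name": "base",
      "rpc": "https://rpc.base.org",
      "deploymentBlock": 31084679,
      "filterPolicy": {
        "defaultAction": "DROP",
        "handlers": {
          "0xYourHandlerAddressHere": "ACCEPT"
        }
      }
    }
  ]
}
```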
```shell
# Install dependencies
yarn

# Option 1: Use the start command (generates config and runs automatically)
yarn start

# Option 2: Manual steps
# Generate config.json from .env
yarn generate-config

# Run watch-tower
yarn cli run --config-path ./config.json
```

NOTE: The `yarn start` command will automatically generate `config.json` from your `.env` file before starting the watch-tower. If Redis configuration is not set in your `.env`, it will default to `localhost:6379`.
The watch-tower monitors the following events:
- `ConditionalOrderCreated` - emitted when a single new conditional order is created.
- `MerkleRootSet` - emitted when a new merkle root (i.e. *n* conditional orders) is set for a safe.
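If you want to inspect these events yourself, a tool such as Foundry's `cast` can query them directly. The sketch below is illustrative only: the contract address and block number are placeholders (see Deployed Contracts), and the event signature is our reading of the ComposableCoW contract, so verify it against the ABI before relying on it:

```shell
# Fetch ConditionalOrderCreated events (placeholders in angle brackets):
cast logs \
  --rpc-url "$RPC_URL" \
  --from-block <deployment-block> --to-block latest \
  --address <ComposableCoW-address> \
  "ConditionalOrderCreated(address,(address,bytes32,bytes))"
```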
When a new event is discovered, the watch-tower will:
- Fetch the conditional order(s) from the blockchain.
- Post the discrete order(s) to the CoW Protocol OrderBook API.
The watch-tower stores the following state:
- All owners (i.e. safes) that have created at least one conditional order.
- All conditional orders by safe that have not expired or been cancelled.
As orders expire, or are cancelled, they are removed from the registry to conserve storage space.
The chosen architecture for the storage is a NoSQL (key-value) store. The watch-tower supports two storage backends:
LevelDB (default):

- Uses the `level` package
- Default location: `$PWD/database`
- Provides ACID guarantees
- Simple key-value store, ideal for local deployments
- All writes are batched; if a write fails, the watch-tower will throw an error and exit
Redis (optional):

- Uses the `ioredis` package
- Configured via `STORAGE_TYPE=redis` in `.env`
- Requires Redis host, port, and optionally a password
- All keys are prefixed with `cryptoswap:watch-tower:` for namespacing
- Ideal for shared deployments or when integrating with existing Redis infrastructure
- Defaults to `localhost:6379` if not configured
On restarting, the watch-tower will attempt to re-process from the last block that was successfully indexed, resulting in the database becoming eventually consistent with the blockchain.
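This resume behaviour can be sketched as follows. The snippet is a simplification for illustration, not the watch-tower's actual code, and the block numbers are made up:

```shell
# On startup, resume from the block after the last one successfully processed;
# fall back to the deployment block on a fresh database.
DEPLOYMENT_BLOCK=31084679
LAST_PROCESSED_BLOCK=31090000   # as read back from storage (illustrative)
RESUME_FROM=$(( LAST_PROCESSED_BLOCK >= DEPLOYMENT_BLOCK ? LAST_PROCESSED_BLOCK + 1 : DEPLOYMENT_BLOCK ))
echo "resuming from block $RESUME_FROM"
```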
The following keys are used:
- `LAST_PROCESSED_BLOCK` - the last block (number, timestamp, and hash) that was processed by the watch-tower.
- `CONDITIONAL_ORDER_REGISTRY` - the registry of conditional orders by safe.
- `CONDITIONAL_ORDER_REGISTRY_VERSION` - the version of the registry, used to migrate the registry when the schema changes.
- `LAST_NOTIFIED_ERROR` - the last time an error was notified via Slack, used to prevent spamming the Slack channel.
To control the logging level, set the `LOG_LEVEL` environment variable to one of the following values: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`:
```shell
LOG_LEVEL=WARN
```

Additionally, you can enable module-specific logging by specifying the log level for the module name:
```shell
# Enable logging for a specific module (chainContext in this case)
LOG_LEVEL=chainContext=INFO

# Of course, you can provide the root log level and the override at the same time:
# - All loggers will have WARN level
# - Except "chainContext", which will have INFO level
LOG_LEVEL=WARN,chainContext=INFO
```

You can specify more than one override:

```shell
LOG_LEVEL=chainContext=INFO,_placeOrder=TRACE
```

The module definition is actually a regex pattern, so you can write more complex definitions:
```shell
# Match a logger using a pattern
# Matches: chainContext:processBlock:100:30212964
# Matches: chainContext:processBlock:1:30212964
# Matches: chainContext:processBlock:5:30212964
LOG_LEVEL=chainContext:processBlock:(\d{1,3}):(\d*)$=DEBUG
```
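Before relying on a pattern, you can sanity-check it against sample logger names. The snippet below is an illustration, not part of the watch-tower; it uses `grep -E` as an approximation of the JS regex engine, so `\d` is written as `[0-9]` in POSIX ERE:

```shell
PATTERN='chainContext:processBlock:([0-9]{1,3}):([0-9]*)$'
echo 'chainContext:processBlock:100:30212964' | grep -qE "$PATTERN" && echo "matches"
echo 'chainContext:other:1:2' | grep -qE "$PATTERN" || echo "does not match"
```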
```shell
# Another example
# Matches: chainContext:processBlock:100:30212964
# Matches: chainContext:processBlock:1:30212964
# But not: chainContext:processBlock:5:30212964
LOG_LEVEL=chainContext:processBlock:(100|1):(\d*)$=DEBUG
```

Combine all of the above to control the log level of any module:
```shell
LOG_LEVEL="WARN,commands=DEBUG,^checkForAndPlaceOrder=WARN,^chainContext=INFO,_checkForAndPlaceOrder:1:=INFO" yarn cli
```

Commands that run the watch-tower in watching mode will also start an API server. By default the API server starts on port 8080. You can change the port using the `--api-port <apiPort>` CLI option.
The server automatically exposes:

- An API, with:
  - Version info: http://localhost:8080/api/version
  - Config: http://localhost:8080/api/config
  - Dump database: `http://localhost:8080/api/dump/:chainId`, e.g. http://localhost:8080/api/dump/1
- Prometheus metrics: http://localhost:8080/metrics
You can prevent the API server from starting by passing the `--disable-api` flag to the `run` command.
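With the watch-tower running locally on the default port, the endpoints above can be queried with `curl` (port 8080 assumed; adjust if you changed `--api-port`):

```shell
curl http://localhost:8080/api/version   # version info from package.json
curl http://localhost:8080/api/config    # the active configuration
curl http://localhost:8080/api/dump/1    # dump state for chainId 1 (mainnet)
curl http://localhost:8080/metrics       # Prometheus metrics
```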
The `/api/version` endpoint exposes the information in `package.json`. This can be helpful for identifying the version of the watch-tower. Additionally, for environments using Docker, the environment variable `DOCKER_IMAGE_TAG` can be used to specify the Docker image tag in use.
- `node` (>= v16.18.0)
- `yarn`
- `npm`
It is recommended to test against a testnet such as Sepolia. To run the watch-tower:
```shell
# Install dependencies
yarn

# Run watch-tower
yarn cli run --config-path ./config.json
```

To run the tests:

```shell
yarn test
```

To lint the code:

```shell
yarn lint
```

To fix linting errors:

```shell
yarn lint:fix
```

To format the code:

```shell
yarn fmt
```

To build the Docker image:

```shell
docker build -t watch-tower .
```