Pooler is the first adapter that'll make way for future curation on PowerLoom. We are snapshotting data from Uniswap like AMMs on EVM chains.


Table of Contents

Overview

Pooler workflow

Pooler is a component of a fully functional, distributed system that works alongside Audit Protocol; together they are responsible for

  • generating a time series database of changes occurring over smart contract state data and event logs, that live on decentralized storage protocols
  • higher order aggregated information calculated over decentralized indexes maintained atop the database mentioned above

Pooler by itself performs the following functions:

  1. Tracks the blockchain on which the data source smart contract lives
  2. In equally spaced 'epochs'
    • it snapshots raw smart contract state variables, event logs, etc
    • transforms the same
    • and submits these snapshots to audit-protocol

This specific implementation is called Pooler since it tracks Uniswap v2 'pools'.

Together with an Audit Protocol instance, they form a recently released PoC whose objectives were

  • to present a fully functional, distributed system comprised of lightweight services that can be deployed over multiple instances on a network or even on a single instance
  • to be able to serve the most frequently sought data points on Uniswap v2
    • Total Value Locked (TVL)
    • Trade Volume, Liquidity reserves, Fees earned
      • grouped by
        • Pair contracts
        • Individual tokens participating in the pair contract
      • aggregated over time periods
        • 24 hours
        • 7 days
    • Transactions containing Swap, Mint, and Burn events

You can read more about Audit Protocol and the Uniswap v2 PoC in the Powerloom Protocol Overview document

Setup

Pooler is part of a distributed system with multiple moving parts. The easiest way to get started is by using the docker-based setup from the deploy repository.

If you're planning to participate as a snapshotter, refer to these instructions to start snapshotting.

If you're a developer, you can follow this for a more hands-on approach.

Development Instructions

These instructions are needed if you're planning to run the system using build-dev.sh from deploy.

Generate Config

Pooler needs the following config files to be present

  1. settings.json in pooler/auth/settings. This doesn't need much change, you can just copy settings.example.json present in the pooler/auth/settings directory.
  2. cached_pair_addresses.json in pooler/static, copy over static/cached_pair_addresses.example.json to pooler/static/cached_pair_addresses.json. These are the pair contracts for uniswapv2 usecase that will be tracked.
  3. settings.json in pooler/settings. This is the main configuration file. We've provided a settings template in pooler/settings/settings.example.json to help you get started. Copy settings.example.json over to pooler/settings/settings.json.

Configuring pooler/settings/settings.json

There's a lot of configuration in settings.json but to get started, you just need to focus on the following.

  • instance_id: currently generated on an invite-only basis (refer to the deploy repo for more details)
  • namespace: the unique key used to identify your project namespace
  • RPC URL and rate limit config in rpc.full_nodes; this depends on which RPC node you're using
  • consensus URL in consensus.url; this is the offchain-consensus service URL where snapshots will be submitted
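Putting those together, pooler/settings/settings.json might look roughly like the sketch below. The four keys called out above come from this document; every placeholder value, and the exact shape of the rate-limit field, are illustrative — settings.example.json is the authoritative template.

```json
{
  "instance_id": "<invite-generated-id>",
  "namespace": "<your-project-namespace>",
  "rpc": {
    "full_nodes": [
      {"url": "https://<your-rpc-endpoint>", "rate_limit": "<per-node rate limit>"}
    ]
  },
  "consensus": {
    "url": "https://<offchain-consensus-url>"
  }
}
```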

Monitoring and Debugging

Log in to the pooler docker container and use the following commands for monitoring and debugging:

  • To monitor the status of running processes, you simply need to run pm2 status.
  • To see all logs you can run pm2 logs
  • To see logs for a specific process you can run pm2 logs <Process Identifier>
  • To see only error logs you can run pm2 logs --err

For Contributors

We use pre-commit hooks to ensure our code quality is maintained over time. For this, contributors need to do a one-time setup by running the following commands:

  • Install the required dependencies using pip install -r dev-requirements.txt; this will set up everything needed for the pre-commit checks.
  • Run pre-commit install

Now, whenever you commit anything, it'll automatically check the files you've changed/edited for code quality issues and suggest improvements.

Epoch Generation

An epoch denotes a range of block heights on the data source blockchain, Ethereum mainnet in the case of Uniswap v2. This makes it easier to collect state transitions and snapshots of data on equally spaced block height intervals, as well as to support future work on other lightweight anchor proof mechanisms like Merkle proofs, succinct proofs, etc.

The size of an epoch is configurable. Let that be referred to as size(E)

  • A service keeps track of the head of the chain as it moves ahead, and keeps a marker h₀ against the max block height from the last released epoch. This marks the beginning of the next epoch as h₁ = h₀ + 1

  • Once the head of the chain has moved sufficiently ahead so that an epoch can be published, an epoch finalization service takes into account the following factors

    • chain reorganization reports where the reorganized limits are a subset of the epoch qualified to be published
    • a configurable ‘offset’ from the bleeding edge of the chain

and then publishes an epoch (h₁, h₂) so that h₂ - h₁ + 1 == size(E). The next epoch, therefore, is tracked from h₂ + 1.
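The release logic above can be sketched as follows; the function and variable names are illustrative, not the actual service code:

```python
def next_epoch(head, h0, size, offset):
    """Return the next epoch (h1, h2) once the chain head has moved far
    enough past h0 (the max block height of the last released epoch),
    else None. `offset` is the configurable distance kept from the
    bleeding edge of the chain."""
    h1 = h0 + 1                # the next epoch begins right after the last one
    h2 = h1 + size - 1         # so that h2 - h1 + 1 == size(E)
    if head - offset >= h2:
        return (h1, h2)
    return None

epoch = next_epoch(head=115, h0=100, size=10, offset=3)      # (101, 110)
too_early = next_epoch(head=110, h0=100, size=10, offset=3)  # None
```

Note how the offset delays release: with size(E) = 10 and offset = 3, the epoch (101, 110) is only published once the head reaches block 113 or later, which gives some protection against shallow chain reorganizations.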

Snapshot Building

Overview of broadcasted epoch processing, snapshot building, and submission to audit-protocol (whitepaper-ref):

  1. Published/broadcasted epochs are received by PairTotalReservesProcessorDistributor, and get distributed to callback workers by publishing messages on respective queues (code-ref). Distributor code-module
queue.enqueue_msg_delivery(
    exchange='pair_processor_exchange',
    routing_key='callback_routing_key',
    msg_body={epoch_begin, epoch_end, pair_contract, broadcast_id}
)
  2. The Distributor's messages are received by the PairTotalReservesProcessor on_message handler. Multiple workers run in parallel, consuming incoming messages (code-ref). Processor code-module

  3. Each message goes through capturing smart-contract data and transforming it into a standardized JSON schema. All these data-point operations are detailed in the next section.

  4. Generated snapshots get submitted to audit-protocol with the appropriate status updated against the message broadcast_id (code-ref).

await AuditProtocolCommandsHelper.commit_payload(
    pair_contract_address=epoch_snapshot.contract, stream='pair_total_reserves',
    report_payload=payload, ...
)
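The distribute-then-consume pattern in steps 1 and 2 can be sketched in-process like this. Names are illustrative, and a thread-safe in-process queue stands in for the real message broker:

```python
import queue
import threading

task_q = queue.Queue()
results = []

def distributor(epochs, pair_contracts):
    # one message per (epoch, pair contract), mirroring the distributor's
    # enqueue_msg_delivery calls
    for begin, end in epochs:
        for pair in pair_contracts:
            task_q.put({'epoch_begin': begin, 'epoch_end': end, 'pair_contract': pair})

def worker():
    # a real processor worker would snapshot the pair's state for the
    # epoch here and submit it to audit-protocol
    while True:
        try:
            msg = task_q.get_nowait()
        except queue.Empty:
            return
        results.append(msg['pair_contract'])

distributor([(101, 110)], ['0xPAIR_A', '0xPAIR_B'])
workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```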

Below is an implementation breakdown of all the snapshot data-point operations that transform smart-contract data and generate snapshots for each epoch. For more explanation, check out the whitepaper section:

Token Price in USD

Token price in USD (against stablecoins); more details in the whitepaper.

Steps to calculate the token price:

  1. Calculate Eth USD price (code-ref)
eth_price_dict = await get_eth_price_usd(from_block, to_block, web3_provider, ...)

The get_eth_price_usd() function calculates the average ETH price using stablecoin pools (USDC, USDT, and DAI) (code-ref):

[whitepaper-ref]

Eth_Price_Usd = daiWeight * dai_price + usdcWeight * usdc_price + usdtWeight * usdt_price
  2. Find a pair of the target token with a whitelisted token (code-ref):
for white_token in settings.UNISWAP_V2_WHITELIST:
    pair_address = await get_pair(white_token, token_address, ...)
    if pair_address != "0x0000000000000000000000000000000000000000":
        # process...
        break

get_pair() function returns a pair address given token addresses, more detail in Uniswap doc.

  3. Calculate the derived ETH of the target token using the whitelist pair (code-ref):
white_token_derived_eth_dict = await get_token_derived_eth(
    from_block, to_block, white_token_metadata, web3_provider, ...
)

The get_token_derived_eth() function returns the derived ETH amount of the given token (code-ref):

token_derived_eth_list = batch_eth_call_on_block_range(
    'getAmountsOut', UNISWAP_V2_ROUTER, from_block, to_block=to_block,
    params=[10, [whitelist_token_address, weth_address]], ...
)

getAmountsOut() is a uniswap-router2 smart contract function; more details in the Uniswap doc.

  4. Check if the ETH reserve of the whitelisted token is more than the threshold; else repeat steps 2 and 3 (code-ref):
...
if white_token_reserves < threshold:
    continue
else:
    for block in range(from_block, to_block + 1):
        token_price_dict[block] = token_eth_price[block] * eth_usd_price[block]

Important formulas to calculate token prices

whitepaper-ref

  • EthPriceUSD = StableCoinReserves / EthReserves
  • TokenPriceUSD = EthPriceUSD * tokenDerivedEth
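Putting the two formulas and the stablecoin-pool averaging together with made-up numbers (the liquidity-share weighting below is an illustrative scheme, not necessarily the exact weights used in the code):

```python
def eth_price_usd(pools):
    """Liquidity-weighted average ETH/USD price across stablecoin pools.
    `pools` maps pool name -> (EthPriceUSD, pool liquidity), where each
    pool's price is StableCoinReserves / EthReserves."""
    total_liquidity = sum(liq for _, liq in pools.values())
    return sum(price * liq / total_liquidity for price, liq in pools.values())

eth_usd = eth_price_usd({
    'DAI':  (1850.0, 40_000_000),
    'USDC': (1852.0, 100_000_000),
    'USDT': (1849.0, 60_000_000),
})                                    # weighted average: 1850.7

# TokenPriceUSD = EthPriceUSD * tokenDerivedEth
token_derived_eth = 0.05              # target token's worth in ETH (made up)
token_price_usd = eth_usd * token_derived_eth
```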

Pair Metadata

Fetch the symbol, name, and decimal of a given pair from RPC and store them in the cache.

  1. Check if the cache exists; the metadata cache is stored as a Redis hashmap (code-ref):
pair_token_addresses_cache, pair_tokens_data_cache = await asyncio.gather(
    redis_conn.hgetall(uniswap_pair_contract_tokens_addresses.format(pair_address)),
    redis_conn.hgetall(uniswap_pair_contract_tokens_data.format(pair_address))
)
  2. Fetch metadata from the pair smart contract (code-ref):
web3_provider.batch_call([
    token0-> name, symbol, decimals,
    token1-> name, symbol, decimals,
])
  3. Store the prepared metadata in the cache and return it (code-ref).
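The three steps amount to a cache-aside lookup. A minimal sketch, where a plain dict stands in for the Redis hashmaps and the fetch callback for the batched contract calls:

```python
def get_pair_metadata(pair_address, cache, fetch_from_chain):
    """Return pair token metadata from the cache if present; otherwise
    fetch it from the contract and populate the cache. (Illustrative;
    the real module uses Redis hashmaps and batched RPC calls.)"""
    if pair_address in cache:
        return cache[pair_address]
    metadata = fetch_from_chain(pair_address)
    cache[pair_address] = metadata
    return metadata

cache = {}
meta = get_pair_metadata(
    '0xPAIR', cache,
    # stand-in for batched name()/symbol()/decimals() calls on both tokens
    lambda addr: {'token0': {'symbol': 'USDC', 'decimals': 6},
                  'token1': {'symbol': 'WETH', 'decimals': 18}},
)
```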

Liquidity / Pair Reserves

Reserves of each token in the pair contract (code-ref; more details in the whitepaper):

Steps to calculate liquidity:

  1. Fetch block details from RPC and return {block->details} dictionary (code-ref):
block_details_dict = await get_block_details_in_block_range(
    from_block, to_block, web3_provider, ...
)

get_block_details_in_block_range() is a batch RPC call to fetch data of each block for a given range (code-ref).

  2. Fetch pair metadata of the pair smart contract, e.g. symbol, decimals, etc. (code-ref): get_pair_metadata() invokes the symbol(), decimals() and name() functions on the pair's smart contract; more details in the metadata section.
pair_per_token_metadata = await get_pair_metadata(
    pair_address, ...
)
  3. Calculate the pair's token price (code-ref):
token0_price_map = await get_token_price_in_block_range(token0, from_block, to_block, ...)
token1_price_map = await get_token_price_in_block_range(token1, from_block, to_block, ...)

get_token_price_in_block_range() returns {block->price} dictionary for a given token, more details in the price section.

  4. Fetch pair reserves for each token (code-ref):
reserves_array = batch_eth_call_on_block_range(
    web3_provider, abi_dict, 'getReserves',
    pair_address, from_block, to_block, ...
)

reserves_array is an array of arrays, each element containing [token0Reserves, token1Reserves, timestamp]. It invokes the getReserves() function on the pair contract; more details in the Uniswap docs.

  5. Prepare the final liquidity snapshot by iterating over each block and taking the product of tokenReserve and tokenPrice (code-ref).

  6. get_pair_reserve() return type data-model.
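Step 5 amounts to the following per-block computation. A sketch under the input shapes described above; names and sample numbers are illustrative:

```python
def liquidity_usd(reserves_array, token0_prices, token1_prices, from_block):
    """Per-block liquidity in USD: reserve * price summed over both tokens.
    `reserves_array[i]` is [token0Reserves, token1Reserves, timestamp] for
    block from_block + i; the price maps are {block: priceUSD} dictionaries."""
    snapshot = {}
    for i, (r0, r1, _ts) in enumerate(reserves_array):
        block = from_block + i
        snapshot[block] = r0 * token0_prices[block] + r1 * token1_prices[block]
    return snapshot

snap = liquidity_usd(
    reserves_array=[[1000.0, 0.5, 1700000000], [1100.0, 0.55, 1700000012]],
    token0_prices={100: 1.0, 101: 1.0},        # e.g. a stablecoin
    token1_prices={100: 1850.0, 101: 1852.0},  # e.g. WETH
    from_block=100,
)
```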

Fetch event logs

Fetch event logs to calculate trade volume using eth_getLogs (code-ref); more details in the whitepaper.

# fetch logs for Swap, Mint & Burn events
event_sig, event_abi = get_event_sig_and_abi(UNISWAP_TRADE_EVENT_SIGS, UNISWAP_EVENTS_ABI)
get_events_logs(**{
    'contract_address': pair_address, 'topics': [event_sig], 'event_abi': event_abi, ...
})

The get_events_logs() function is defined in the rpc_helpers.py module. It uses the eth.get_logs RPC function to fetch event logs of the given topics in a block range (code-ref):

event_log = web3Provider.eth.get_logs({
    'address': contract_address, 'toBlock': toBlock,
    'fromBlock': fromBlock, 'topics': topics
})
for log in event_log:
    evt = get_event_data(ABICodec, abi, log)

ABICodec is used to decode the raw event logs into structured event data via the get_event_data function; check out more details in the library.

Pair trade volume

Calculate the trade volume of a pair using event logs; more details in the whitepaper.

Trade Volume = SwapValueUSD = token0Amount * token0PriceUSD = token1Amount * token1PriceUSD

Steps to calculate trade volume:

  1. Fetch block details from RPC and return {block->details} dictionary (code-ref):
block_details_dict = await get_block_details_in_block_range(
    from_block, to_block, web3_provider, ...
)

get_block_details_in_block_range() is a batch RPC call to fetch data of each block for a given range (code-ref).

  2. Fetch pair metadata of the pair smart contract, e.g. symbol, decimals, etc. (code-ref): get_pair_metadata() invokes the symbol(), decimals() and name() functions on the pair's smart contract; more details in the metadata section.
pair_per_token_metadata = await get_pair_metadata(
    pair_address, ...
)
  3. Calculate the pair's token price (code-ref):
token0_price_map = await get_token_price_in_block_range(token0, from_block, to_block, ...)
token1_price_map = await get_token_price_in_block_range(token1, from_block, to_block, ...)

get_token_price_in_block_range() returns {block->price} dictionary for a given token, more details in the price section.

  4. Fetch event logs in the given block range, following the event log section.

  5. Group logs by transaction hash (code-ref):

for log in event_logs:
    transaction_dict[log.transactionHash].append(log)

  6. Iterate over the grouped logs, and group again by event type (Swap, Mint, and Burn) (code-ref).

  7. Add swap log amounts as effective trade volume (code-ref).

  8. get_pair_trade_volume() return type data-model.
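Steps 5 through 7 can be sketched as follows. The logs here are simplified dicts and the valuation uses only token0 (per the trade volume formula above); the real code works on decoded eth_getLogs events:

```python
from collections import defaultdict

def pair_trade_volume(event_logs, token0_prices):
    """Group logs by transaction hash, then by event type, and count only
    Swap amounts toward effective trade volume (USD)."""
    by_tx = defaultdict(list)
    for log in event_logs:
        by_tx[log['tx_hash']].append(log)

    volume_usd = 0.0
    for tx_hash, logs in by_tx.items():
        by_event = defaultdict(list)
        for log in logs:
            by_event[log['event']].append(log)
        # only Swap events contribute to trade volume; Mint and Burn are
        # liquidity added/removed, not trades
        for swap in by_event['Swap']:
            volume_usd += swap['token0_amount'] * token0_prices[swap['block']]
    return volume_usd

vol = pair_trade_volume(
    [
        {'tx_hash': '0xa', 'event': 'Swap', 'token0_amount': 100.0, 'block': 100},
        {'tx_hash': '0xa', 'event': 'Mint', 'token0_amount': 50.0, 'block': 100},
        {'tx_hash': '0xb', 'event': 'Swap', 'token0_amount': 10.0, 'block': 101},
    ],
    token0_prices={100: 1.0, 101: 1.0},
)  # 110.0: the Mint amount is excluded
```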

Rate Limiter

All RPC nodes specified in settings.json have a rate limit attached to them, and every RPC call honors this limit (code-ref; more details).

rate_limiter:
    1. init_rate_limiter  # get limits from settings configuration, load Lua script on Redis, etc.
    2. verify whether we have quota for another request
    if can_request:
        3. enjoy_rpc!
    else:
        4. panic! rate limit exhausted error
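The same flow as an in-memory sliding-window sketch. This is illustrative only: the actual implementation keeps its counters in Redis via a Lua script so that all workers share the quota.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow at most max_requests per window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # drop request timestamps that have fallen out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True   # enjoy_rpc!
        return False      # rate limit exhausted

limiter = RateLimiter(max_requests=2, window_seconds=1.0)
```

A caller that gets False back would either wait and retry or raise a rate-limit-exhausted error, as in step 4 above.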

Batching RPC calls

Batch RPC calls by sending multiple queries in a single request, details in Geth docs.

[
    { id: unique_id, method: eth_function, params: params, jsonrpc: '2.0' },
    ...
]
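A sketch of building such a batch body in Python. eth_getBlockByNumber is a standard JSON-RPC method; the helper function name is illustrative:

```python
def build_batch_payload(method, params_per_call):
    """Build a JSON-RPC 2.0 batch body: one entry per call, each with a
    unique id so responses (which may arrive out of order) can be matched
    back to their requests."""
    return [
        {'id': i, 'method': method, 'params': params, 'jsonrpc': '2.0'}
        for i, params in enumerate(params_per_call)
    ]

# e.g. fetching block details over a small block range in one request
payload = build_batch_payload(
    'eth_getBlockByNumber',
    [[hex(n), False] for n in range(100, 103)],
)
```

The resulting list would be POSTed to the RPC endpoint as a single JSON body, which is how the helpers below cover a whole block range in one round trip.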

The rpc_helper.py module (code-ref) contains several helpers which use batching.

Architecture Details

Details about the working of the various components are present in Details.md if you're interested in knowing more about Pooler.
