| C | O | S | M |
|---|---|---|---|
| W | A | S | M |
| D | A | T | A |
| B | A | S | E |
A simple indexer that queries CosmWasm contracts, focusing on CW721, CW404, and other NFT contracts. It gathers contract information, token details, ownership, and related data, and stores it in a local SQLite database.
- Fetches code IDs and associated contract addresses.
- Identifies contract types for filtering and categorization.
- Fetches tokens from NFT contracts (CW721, CW404, etc.) with paginated querying (see the sketch after this list).
- Tracks progress and resumes indexing from the last checkpoint if interrupted.
- Supports batch insertion for database efficiency.
- Provides optional real-time updates using WebSocket connections. [WIP]
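To illustrate the paginated querying mentioned above, here is a minimal sketch of paging through a CW721 `all_tokens` query over the wasmd REST smart-query route. The `fetchAllTokens` helper and the use of Node's global `fetch` are illustrative assumptions, not the indexer's actual code:

```js
// Sketch: paginated CW721 token fetching over the wasmd REST smart-query route.
// Assumes Node 18+ (global fetch) and the settings from config.js.
import { config } from './config.js';

// Base64-encode a smart-query payload, as the wasmd REST route expects.
const encodeQuery = (query) =>
  encodeURIComponent(Buffer.from(JSON.stringify(query)).toString('base64'));

// Fetch every token ID from a CW721 contract, one page at a time.
export async function fetchAllTokens(contractAddress) {
  const tokens = [];
  let startAfter; // undefined on the first page

  for (;;) {
    const query = {
      all_tokens: {
        limit: config.paginationLimit,
        ...(startAfter !== undefined && { start_after: startAfter }),
      },
    };
    const url = `${config.restAddress}/cosmwasm/wasm/v1/contract/${contractAddress}/smart/${encodeQuery(query)}`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`Query failed with HTTP ${res.status}`);

    const { data } = await res.json();
    if (!data.tokens?.length) break; // empty page: no more tokens

    tokens.push(...data.tokens);
    startAfter = data.tokens[data.tokens.length - 1]; // resume after the last ID
  }
  return tokens;
}
```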
Requirements:
- Node.js
- SQLite3
- Compatible (preferably non-rate-limited) endpoints
The main configuration is in `config.js`:

```js
// config.js
export const config = {
  blockHeight: "null",
  paginationLimit: 100,
  concurrencyLimit: 5,
  numWorkers: 4,
  rpcAddress: "http://localhost:26657",
  restAddress: "http://localhost:1317",
  grpcAddress: "localhost:9090",
  wsAddress: "ws://localhost:26657/websocket",
  timeout: 5000,
  logLevel: 'DEBUG',
  logToFile: true,
  retryConfig: {
    retries: 3,
    delay: 1000,
    backoffFactor: 2
  }
};
```
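As a usage illustration, the endpoint and timeout settings can bound any network call; the `rpcStatus` helper below is hypothetical, assuming Node 18+ for `fetch` and `AbortSignal.timeout`:

```js
// Sketch: a bounded RPC call driven by config.rpcAddress and config.timeout.
// rpcStatus is a hypothetical helper, not part of the indexer.
import { config } from './config.js';

// Query the Tendermint /status endpoint, aborting after config.timeout ms.
export async function rpcStatus() {
  const res = await fetch(`${config.rpcAddress}/status`, {
    signal: AbortSignal.timeout(config.timeout),
  });
  if (!res.ok) throw new Error(`RPC status failed with HTTP ${res.status}`);
  return res.json();
}
```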
The following tables are used in the SQLite database:

- `indexer_progress`: Tracks progress for each indexing step.
- `code_ids`: Stores code ID metadata.
- `contracts`: Stores contract addresses and types.
- `contract_tokens`: Stores token data for each contract.
- `contract_history`: Stores history details for each contract.
- `nft_owners`: Stores ownership details for each token.
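The schema is created by the indexer itself; as a sketch only, `contract_tokens` and the batch insertion mentioned in the feature list might look like the following with the `sqlite3` package (the column names here are assumptions):

```js
// Sketch: a possible contract_tokens table plus a transaction-batched insert.
// Column names are assumptions, not the indexer's actual schema.
import sqlite3 from 'sqlite3';

const db = new sqlite3.Database('./indexer.db');

db.serialize(() => {
  db.run(`CREATE TABLE IF NOT EXISTS contract_tokens (
    contract_address TEXT NOT NULL,
    token_id         TEXT NOT NULL,
    PRIMARY KEY (contract_address, token_id)
  )`);
});

// Insert a batch of token IDs inside one transaction, so a whole page of
// query results costs a single commit.
export function insertTokenBatch(contractAddress, tokenIds) {
  return new Promise((resolve, reject) => {
    db.serialize(() => {
      db.run('BEGIN TRANSACTION');
      const stmt = db.prepare(
        'INSERT OR IGNORE INTO contract_tokens (contract_address, token_id) VALUES (?, ?)'
      );
      for (const tokenId of tokenIds) stmt.run(contractAddress, tokenId);
      stmt.finalize();
      db.run('COMMIT', (err) => (err ? reject(err) : resolve()));
    });
  });
}
```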
Operation is simple:

- Complete `config.js`
- Install dependencies
- Run: `yarn install && yarn start`
A simple unit test is provided to verify the indexing functionality with a limited dataset. To run the test:
`yarn test-indexer`

This will run through the entire indexing process using only the specified code_id ("100") for quicker testing and debugging.
The `indexer_progress` table tracks the last processed contract and token during each stage, allowing the indexer to resume after an interruption without losing progress.
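As an illustration, checkpointing against that table might look like the sketch below; the `step` and `last_processed` columns are assumptions based on the description above (with `step` assumed unique):

```js
// Sketch: reading and writing checkpoints in indexer_progress.
// Column names (step, last_processed) are assumptions, not the actual schema,
// and step is assumed to carry a UNIQUE constraint.
import sqlite3 from 'sqlite3';

const db = new sqlite3.Database('./indexer.db');

// Record the most recent item handled for a given indexing step.
export function saveCheckpoint(step, lastProcessed) {
  return new Promise((resolve, reject) => {
    db.run(
      `INSERT INTO indexer_progress (step, last_processed) VALUES (?, ?)
       ON CONFLICT(step) DO UPDATE SET last_processed = excluded.last_processed`,
      [step, lastProcessed],
      (err) => (err ? reject(err) : resolve())
    );
  });
}

// Look up where a step left off; undefined means start from scratch.
export function loadCheckpoint(step) {
  return new Promise((resolve, reject) => {
    db.get(
      'SELECT last_processed FROM indexer_progress WHERE step = ?',
      [step],
      (err, row) => (err ? reject(err) : resolve(row?.last_processed))
    );
  });
}
```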
Error handling:

- Failed operations are retried up to 3 times with exponential backoff (default delay of 1000ms, doubling on each attempt); a sketch of this pattern follows the list.
- Permanent errors such as HTTP 400 are not retried; other errors trigger retries.
- Errors during batch processing are logged, and the indexing process continues.
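A minimal sketch of that retry pattern, driven by `retryConfig` from `config.js`; the `withRetry` helper (and the assumption that errors expose an HTTP `status` field) is illustrative, not the indexer's actual implementation:

```js
// Sketch: exponential-backoff retry driven by config.retryConfig.
// withRetry is illustrative; err.status carrying the HTTP code is an assumption.
import { config } from './config.js';

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

export async function withRetry(operation) {
  const { retries, delay, backoffFactor } = config.retryConfig;
  let wait = delay;

  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      // HTTP 400 is permanent: give up immediately, as on the final attempt.
      if (err.status === 400 || attempt >= retries) throw err;
      await sleep(wait);
      wait *= backoffFactor; // 1000ms, 2000ms, 4000ms with the defaults
    }
  }
}
```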
Please submit an issue or pull request for any bug fixes or enhancements.