A focused data scraping project that collects and structures market expectation indicators published by Brazil’s Central Bank. It helps analysts and developers turn scattered economic outlook data into clean, usable datasets for analysis and automation.
Created by Bitbash, built to showcase our approach to scraping and automation.
If you are looking for bcb-expectativa-de-mercado, you've just found your team. Let's chat.
This project extracts market expectation data related to Brazilian economic indicators and organizes it into a structured format. It solves the problem of manually collecting and standardizing expectation metrics that are often spread across multiple pages or releases. It’s designed for data analysts, economists, developers, and researchers who need reliable expectation data for modeling or reporting.
- Targets structured economic expectation indicators released periodically
- Normalizes values for easy comparison over time
- Designed to integrate smoothly into data pipelines
- Suitable for both exploratory analysis and automated workflows
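As an illustration of the collection step, the sketch below builds an OData query against BCB's public Olinda "Expectativas" service. The resource name (`ExpectativaMercadoMensais`) and filter field (`Indicador`) are assumptions based on the public API, not identifiers taken from this repository.

```python
from urllib.parse import quote

# Public Olinda OData base for BCB market-expectation data (assumed endpoint).
BASE_URL = "https://olinda.bcb.gov.br/olinda/servico/Expectativas/versao/v1/odata"

def build_query_url(resource: str, indicator: str, top: int = 100) -> str:
    """Build an OData query URL that filters by indicator name server-side."""
    # Percent-encode the OData filter expression (spaces and quotes).
    filter_expr = quote(f"Indicador eq '{indicator}'")
    return f"{BASE_URL}/{resource}?$filter={filter_expr}&$top={top}&$format=json"

url = build_query_url("ExpectativaMercadoMensais", "IPCA")
```

Filtering with `$filter` on the server side keeps responses small, which matters for scheduled runs.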
| Feature | Description |
|---|---|
| Automated data collection | Consistently gathers expectation data without manual effort. |
| Structured outputs | Delivers clean, predictable fields ready for analysis. |
| Scalable architecture | Handles growing datasets and repeated runs efficiently. |
| Configurable inputs | Allows adjustment of sources or parameters with minimal changes. |
| Field Name | Field Description |
|---|---|
| indicator | Name of the economic indicator (e.g., inflation, GDP). |
| reference_date | Date or period the expectation refers to. |
| expected_value | Market’s expected value for the indicator. |
| unit | Measurement unit used for the indicator. |
| source | Origin or publication reference of the data. |
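The fields above can be modeled as a small record type. This is an illustrative sketch, not a class defined in the repository:

```python
from dataclasses import dataclass

@dataclass
class ExpectationRecord:
    indicator: str        # e.g. "Inflation Rate", "GDP"
    reference_date: str   # period the expectation refers to, e.g. "2025-12"
    expected_value: float # market's expected value for the indicator
    unit: str             # e.g. "percentage"
    source: str           # origin or publication reference

rec = ExpectationRecord("Inflation Rate", "2025-12", 3.75, "percentage", "Market Survey")
```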
```json
[
  {
    "indicator": "Inflation Rate",
    "reference_date": "2025-12",
    "expected_value": 3.75,
    "unit": "percentage",
    "source": "Market Survey"
  }
]
```
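A record in this shape could be validated and normalized with a helper along these lines (hypothetical; `normalize` is not a function in this repository). It also coerces Brazilian-style decimal commas, which BCB releases commonly use:

```python
REQUIRED_FIELDS = {"indicator", "reference_date", "expected_value", "unit", "source"}

def normalize(record: dict) -> dict:
    """Check required fields and coerce expected_value to a float."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    out = dict(record)
    # Accept both "3.75" and the Brazilian decimal-comma form "3,75".
    out["expected_value"] = float(str(record["expected_value"]).replace(",", "."))
    return out

clean = normalize({
    "indicator": "IPCA",
    "reference_date": "2025-12",
    "expected_value": "3,75",
    "unit": "percentage",
    "source": "Market Survey",
})
```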
```
BCB - Expectativa de Mercado/
├── src/
│   ├── runner.py
│   ├── spiders/
│   │   └── expectativa_spider.py
│   ├── pipelines/
│   │   └── data_pipeline.py
│   └── config/
│       └── settings.json
├── data/
│   ├── samples/
│   │   └── sample_output.json
│   └── inputs.json
├── requirements.txt
└── README.md
```
- Economists track expectation trends to forecast economic scenarios more accurately.
- Data analysts feed dashboards to visualize market sentiment over time.
- Developers automate data ingestion to cut manual collection work.
- Researchers build datasets to support empirical economic studies.
**How do I configure which indicators are collected?** Adjust the input configuration file to specify which indicators or periods should be included before running the scraper.
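For illustration, a hypothetical input configuration might look like the fragment below; the keys shown are assumptions and may not match this repository's actual `data/inputs.json` schema:

```json
{
  "indicators": ["IPCA", "PIB Total", "Selic"],
  "start_period": "2024-01",
  "end_period": "2025-12"
}
```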
**Is this scraper suitable for long-term historical data?** Yes. It is designed for repeated runs and can be extended to backfill historical expectation data where available.
**Can the output be integrated into other systems?** Yes. The structured JSON output plugs easily into databases, analytics tools, or downstream APIs.
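As a sketch of one integration path, the stdlib-only example below loads records in the output format into an in-memory SQLite table (table and column names are illustrative, not part of this repository):

```python
import json
import sqlite3

# Sample payload in the scraper's output format.
records = json.loads("""[
  {"indicator": "Inflation Rate", "reference_date": "2025-12",
   "expected_value": 3.75, "unit": "percentage", "source": "Market Survey"}
]""")

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE expectations (
    indicator TEXT, reference_date TEXT,
    expected_value REAL, unit TEXT, source TEXT)""")
# Named placeholders map directly onto the JSON field names.
conn.executemany(
    "INSERT INTO expectations VALUES "
    "(:indicator, :reference_date, :expected_value, :unit, :source)",
    records,
)
row = conn.execute(
    "SELECT indicator, expected_value FROM expectations").fetchone()
```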
- **Throughput:** processes dozens of expectation records per run with consistent field accuracy.
- **Reliability:** maintains a high success rate across repeated executions with minimal data loss.
- **Efficiency:** low memory footprint and fast execution, suitable for scheduled jobs.
- **Quality:** delivers complete, normalized records ready for direct analytical use.