# v0.2.0

## [0.2.0] - 2024-06-11

### Added - CU Benchmarking Framework
- Systematic CU analysis framework with instruction and transaction paradigms
- **Dual benchmarking modes**: pure instruction measurement vs. complete transaction workflows
- Statistical analysis engine with percentile-based estimates (min, conservative, balanced, safe, very_high, unsafe_max); see the sketch after this list
- Rich execution context discovery through simulation (SVM state, program details, CPI analysis)
- Professional tooling integration with `env_logger` and clean JSON output
- Comprehensive unit tests (324 lines) covering edge cases and percentile calculations
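As a rough illustration of the percentile-based estimates, here is a minimal sketch in Rust. The `Estimate` enum, the `percentile` helper, and the specific percentile cut-offs are all hypothetical; the release only names the estimate levels, not how they are computed.

```rust
// Hypothetical sketch: mapping CU samples to percentile-based estimate levels.
// The cut-offs below are illustrative assumptions, not the crate's actual values.

/// Estimate levels named in the release notes; the enum itself is a stand-in.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum Estimate {
    Min,
    Conservative,
    Balanced,
    Safe,
    VeryHigh,
    UnsafeMax,
}

/// Nearest-rank percentile over sorted CU samples, `pct` in 0.0..=1.0.
fn percentile(sorted: &[u64], pct: f64) -> u64 {
    assert!(!sorted.is_empty());
    let idx = ((sorted.len() - 1) as f64 * pct).round() as usize;
    sorted[idx]
}

fn estimate(samples: &mut Vec<u64>, level: Estimate) -> u64 {
    samples.sort_unstable();
    match level {
        Estimate::Min => samples[0],
        Estimate::Conservative => percentile(samples, 0.25), // assumed cut-off
        Estimate::Balanced => percentile(samples, 0.50),     // assumed cut-off
        Estimate::Safe => percentile(samples, 0.90),         // assumed cut-off
        Estimate::VeryHigh => percentile(samples, 0.99),     // assumed cut-off
        Estimate::UnsafeMax => *samples.last().unwrap(),
    }
}

fn main() {
    let mut cu_samples = vec![150, 150, 152, 149, 150, 180, 151];
    println!("balanced = {} CU", estimate(&mut cu_samples, Estimate::Balanced));
    println!("safe     = {} CU", estimate(&mut cu_samples, Estimate::Safe));
}
```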
#### Framework Features

- `InstructionBenchmark` trait for pure CU measurement without framework overhead (see the sketch after this list)
- `TransactionBenchmark` trait for multi-program workflow analysis
- Two-phase measurement: simulation for context + execution for statistics
- Address book system for human-readable program names
- Type-safe domain modeling with a `StatType` enum
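To make the trait-based design concrete, here is a hedged sketch of what an `InstructionBenchmark` implementation might look like. Only the trait name comes from these notes; the method names (`name`, `instruction`), the `SolTransferBench` type, and the use of `solana_sdk` types are assumptions for illustration.

```rust
// Hypothetical shape of the instruction-level benchmark trait. The real
// method names and signatures are not shown in the release notes.

use solana_sdk::{instruction::Instruction, pubkey::Pubkey, system_instruction};

/// Assumed trait shape: a benchmark yields one instruction per iteration,
/// and the framework measures its CU cost in isolation.
trait InstructionBenchmark {
    /// Name shown in the report output.
    fn name(&self) -> &str;
    /// The instruction whose compute units are measured.
    fn instruction(&self) -> Instruction;
}

/// Example: benchmark a plain SOL transfer (reported at ~150 CU in the notes).
struct SolTransferBench {
    from: Pubkey,
    to: Pubkey,
}

impl InstructionBenchmark for SolTransferBench {
    fn name(&self) -> &str {
        "sol_transfer"
    }

    fn instruction(&self) -> Instruction {
        system_instruction::transfer(&self.from, &self.to, 1_000)
    }
}

fn main() {
    let bench = SolTransferBench {
        from: Pubkey::new_unique(),
        to: Pubkey::new_unique(),
    };
    println!("{} -> {:?}", bench.name(), bench.instruction().program_id);
}
```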
#### Examples & Documentation

- **Working benchmarks**: SOL transfer (150 CU), SPL token transfer, token setup workflow (28K-38K CU)
- **Comprehensive guide**: `BENCHMARKING.md` with a living-documentation approach
- **Enhanced README**: repositioned as a testing + benchmarking platform
#### Technical Improvements

- Fixed percentile calculation bugs that showed incorrect variance
- Removed automatic `ComputeBudgetInstruction` for measurement transparency
- SVM state accumulation for realistic vs. isolated measurements
- Professional logging (quiet by default, optional progress via `RUST_LOG`); see the logging sketch below
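The quiet-by-default logging behaviour can be pictured with a small `env_logger` sketch (requires the `env_logger` and `log` crates). Whether the crate initializes its logger exactly this way is an assumption; the builder calls themselves are standard `env_logger` API.

```rust
// Minimal sketch of "quiet by default, opt-in progress via RUST_LOG".

use env_logger::Env;

fn init_logging() {
    // Default to errors only; setting RUST_LOG=info (or debug) opts into progress output.
    env_logger::Builder::from_env(Env::default().default_filter_or("error")).init();
}

fn main() {
    init_logging();
    log::info!("benchmark progress is visible only when RUST_LOG is set");
}
```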
### Breaking Changes

- None; all existing testing functionality is preserved
- New benchmarking features require the `--features cu_bench` opt-in (see the sketch below)
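For context, a minimal sketch of how an opt-in Cargo feature gate typically looks in code. The `cu_benchmarks` module name is hypothetical; the only detail taken from these notes is the `cu_bench` feature name.

```rust
// Sketch of consumer-side feature gating, assuming the benchmarking code
// is compiled only when the `cu_bench` feature is enabled.

#[cfg(feature = "cu_bench")]
mod cu_benchmarks {
    // Hypothetical module: benchmark definitions would live here.
    pub fn run() {
        println!("running CU benchmarks");
    }
}

fn main() {
    #[cfg(feature = "cu_bench")]
    cu_benchmarks::run();

    #[cfg(not(feature = "cu_bench"))]
    println!("built without the cu_bench feature; benchmarks are skipped");
}
```

Enabling the feature is then a matter of passing the flag, e.g. `cargo test --features cu_bench`, optionally with `RUST_LOG=info` for progress output.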