feat: add priority queue for protocol messages (with Docker build) #20
base: v1.2.0-base
Conversation
Transaction costs
Sizes and execution budgets for Hydra protocol transactions. Note that unlisted parameters are currently using …
Script summary
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 5834 | 10.38 | 3.29 | 0.51 |
| 2 | 6037 | 12.25 | 3.87 | 0.54 |
| 3 | 6239 | 14.67 | 4.64 | 0.58 |
| 5 | 6638 | 18.72 | 5.91 | 0.64 |
| 10 | 7646 | 28.71 | 9.03 | 0.78 |
| 43 | 14279 | 98.58 | 30.79 | 1.80 |
Commit transaction costs
This uses ada-only outputs for better comparability.
| UTxO | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 559 | 2.44 | 1.16 | 0.20 |
| 2 | 742 | 3.38 | 1.73 | 0.22 |
| 3 | 919 | 4.36 | 2.33 | 0.24 |
| 5 | 1277 | 6.41 | 3.60 | 0.28 |
| 10 | 2180 | 12.13 | 7.25 | 0.40 |
| 54 | 10045 | 98.61 | 68.52 | 1.88 |
CollectCom transaction costs
| Parties | UTxO (bytes) | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|---|
| 1 | 56 | 524 | 25.20 | 7.30 | 0.43 |
| 2 | 114 | 640 | 34.23 | 9.85 | 0.53 |
| 3 | 171 | 747 | 42.46 | 12.22 | 0.61 |
| 4 | 228 | 858 | 51.15 | 14.72 | 0.71 |
| 5 | 283 | 969 | 56.31 | 16.35 | 0.77 |
| 6 | 340 | 1085 | 74.62 | 21.10 | 0.95 |
| 7 | 396 | 1192 | 84.82 | 24.02 | 1.06 |
| 8 | 449 | 1303 | 94.23 | 26.73 | 1.16 |
Increment transaction costs
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 1823 | 24.29 | 7.69 | 0.48 |
| 2 | 1934 | 25.51 | 8.70 | 0.50 |
| 3 | 2083 | 27.35 | 9.87 | 0.53 |
| 5 | 2436 | 32.24 | 12.58 | 0.61 |
| 10 | 3216 | 41.39 | 18.49 | 0.76 |
| 40 | 7704 | 97.95 | 54.20 | 1.67 |
Decrement transaction costs
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 623 | 22.54 | 7.30 | 0.41 |
| 2 | 763 | 24.28 | 8.45 | 0.44 |
| 3 | 939 | 27.00 | 9.87 | 0.48 |
| 5 | 1147 | 28.07 | 11.49 | 0.51 |
| 10 | 2102 | 42.27 | 18.78 | 0.72 |
| 41 | 6639 | 97.77 | 54.93 | 1.63 |
Close transaction costs
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 650 | 29.17 | 8.91 | 0.48 |
| 2 | 732 | 30.27 | 9.86 | 0.50 |
| 3 | 1008 | 31.61 | 10.96 | 0.53 |
| 5 | 1221 | 37.10 | 13.80 | 0.60 |
| 10 | 1895 | 46.08 | 19.62 | 0.75 |
| 36 | 6092 | 97.46 | 51.51 | 1.58 |
Contest transaction costs
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 674 | 33.87 | 10.16 | 0.53 |
| 2 | 803 | 35.88 | 11.39 | 0.56 |
| 3 | 1054 | 39.22 | 13.02 | 0.61 |
| 5 | 1157 | 41.22 | 14.85 | 0.64 |
| 10 | 2037 | 54.13 | 21.82 | 0.83 |
| 28 | 4746 | 95.73 | 45.40 | 1.46 |
Abort transaction costs
There is some variation due to the random mixture of initial and already committed outputs.
| Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|
| 1 | 5812 | 26.92 | 9.04 | 0.69 |
| 2 | 5930 | 36.00 | 12.09 | 0.79 |
| 3 | 6062 | 42.52 | 14.27 | 0.86 |
| 4 | 6192 | 54.26 | 18.23 | 0.99 |
| 5 | 6436 | 65.67 | 22.13 | 1.12 |
| 6 | 6499 | 70.65 | 23.77 | 1.18 |
| 7 | 6810 | 85.70 | 29.00 | 1.35 |
| 8 | 6914 | 91.11 | 30.82 | 1.41 |
FanOut transaction costs
Involves spending the head output and burning head tokens. Uses ada-only UTxO entries for better comparability.
| Parties | UTxO | UTxO (bytes) | Tx size | % max Mem | % max CPU | Min fee ₳ |
|---|---|---|---|---|---|---|
| 10 | 1 | 57 | 5868 | 22.04 | 7.50 | 0.64 |
| 10 | 5 | 285 | 6005 | 28.46 | 10.13 | 0.72 |
| 10 | 10 | 570 | 6174 | 39.51 | 14.45 | 0.85 |
| 10 | 39 | 2220 | 7159 | 99.12 | 37.95 | 1.54 |
End-to-end benchmark results
This page is intended to collect the latest end-to-end benchmark results produced by Hydra's continuous integration (CI) system from the latest master code.
Please note that these results are approximate as they are currently produced from limited cloud VMs and not controlled hardware. Rather than focusing on the absolute results, the emphasis should be on relative results, such as how the timings for a scenario evolve as the code changes.
Generated at 2026-01-08 07:43:07.121581411 UTC
Baseline Scenario
| Number of nodes | 1 |
|---|---|
| Number of txs | 300 |
| Avg. Confirmation Time (ms) | 99.92 |
| P99 (ms) | 106.02 |
| P95 (ms) | 101.44 |
| P50 (ms) | 99.95 |
| Number of Invalid txs | 0 |
Memory data
| Time | Used | Free |
|---|---|---|
| 2026-01-08 07:40:59.933113591 UTC | 1511M | 7018M |
| 2026-01-08 07:41:00.933098157 UTC | 1514M | 6982M |
| 2026-01-08 07:41:01.936238579 UTC | 1542M | 6954M |
| 2026-01-08 07:41:02.933136268 UTC | 1558M | 6937M |
| 2026-01-08 07:41:03.932991454 UTC | 1572M | 6874M |
| 2026-01-08 07:41:04.933020759 UTC | 1597M | 6822M |
| 2026-01-08 07:41:05.933021408 UTC | 1614M | 6804M |
| 2026-01-08 07:41:06.933065208 UTC | 1619M | 6798M |
| 2026-01-08 07:41:07.933032684 UTC | 1621M | 6797M |
| 2026-01-08 07:41:08.933065172 UTC | 1621M | 6795M |
| 2026-01-08 07:41:09.933072058 UTC | 1619M | 6798M |
| 2026-01-08 07:41:10.933058529 UTC | 1617M | 6799M |
| 2026-01-08 07:41:11.933062071 UTC | 1617M | 6799M |
| 2026-01-08 07:41:12.933033894 UTC | 1616M | 6800M |
| 2026-01-08 07:41:13.933034591 UTC | 1617M | 6799M |
| 2026-01-08 07:41:14.933060404 UTC | 1617M | 6798M |
| 2026-01-08 07:41:15.933048495 UTC | 1621M | 6793M |
| 2026-01-08 07:41:16.933061146 UTC | 1622M | 6792M |
| 2026-01-08 07:41:17.933008872 UTC | 1623M | 6791M |
| 2026-01-08 07:41:18.933052559 UTC | 1623M | 6790M |
| 2026-01-08 07:41:19.933026969 UTC | 1624M | 6789M |
| 2026-01-08 07:41:20.932995311 UTC | 1625M | 6788M |
| 2026-01-08 07:41:21.933009246 UTC | 1626M | 6787M |
| 2026-01-08 07:41:22.933000998 UTC | 1626M | 6786M |
| 2026-01-08 07:41:23.933029496 UTC | 1626M | 6786M |
| 2026-01-08 07:41:24.933000546 UTC | 1633M | 6779M |
| 2026-01-08 07:41:25.93299299 UTC | 1632M | 6779M |
| 2026-01-08 07:41:26.933017543 UTC | 1633M | 6778M |
| 2026-01-08 07:41:27.93302154 UTC | 1633M | 6777M |
| 2026-01-08 07:41:28.933005859 UTC | 1634M | 6777M |
| 2026-01-08 07:41:29.932996877 UTC | 1634M | 6776M |
| 2026-01-08 07:41:30.932983321 UTC | 1635M | 6775M |
| 2026-01-08 07:41:31.932991102 UTC | 1636M | 6773M |
| 2026-01-08 07:41:32.933050432 UTC | 1638M | 6771M |
| 2026-01-08 07:41:33.933061438 UTC | 1638M | 6771M |
| 2026-01-08 07:41:34.936453755 UTC | 1638M | 6770M |
| 2026-01-08 07:41:35.933105993 UTC | 1638M | 6770M |
| 2026-01-08 07:41:36.933074669 UTC | 1638M | 6770M |
| 2026-01-08 07:41:37.933059303 UTC | 1638M | 6769M |
| 2026-01-08 07:41:38.933081672 UTC | 1638M | 6769M |
| 2026-01-08 07:41:39.933023316 UTC | 1639M | 6769M |
| 2026-01-08 07:41:40.93302543 UTC | 1639M | 6768M |
| 2026-01-08 07:41:41.933029033 UTC | 1640M | 6768M |
| 2026-01-08 07:41:42.933086251 UTC | 1639M | 6768M |
| 2026-01-08 07:41:43.933065291 UTC | 1640M | 6767M |
| 2026-01-08 07:41:44.933012145 UTC | 1641M | 6766M |
| 2026-01-08 07:41:45.933118777 UTC | 1641M | 6766M |
| 2026-01-08 07:41:46.933086713 UTC | 1641M | 6766M |
| 2026-01-08 07:41:47.933077721 UTC | 1641M | 6766M |
| 2026-01-08 07:41:48.933026199 UTC | 1641M | 6765M |
| 2026-01-08 07:41:49.933025516 UTC | 1642M | 6765M |
| 2026-01-08 07:41:50.93305086 UTC | 1642M | 6765M |
| 2026-01-08 07:41:51.933111258 UTC | 1642M | 6765M |
| 2026-01-08 07:41:52.933026465 UTC | 1642M | 6764M |
| 2026-01-08 07:41:53.933024666 UTC | 1638M | 6769M |
Three local nodes
| Number of nodes | 3 |
|---|---|
| Number of txs | 900 |
| Avg. Confirmation Time (ms) | 98.26 |
| P99 (ms) | 109.86 |
| P95 (ms) | 104.29 |
| P50 (ms) | 99.76 |
| Number of Invalid txs | 0 |
Memory data
| Time | Used | Free |
|---|---|---|
| 2026-01-08 07:42:05.813251982 UTC | 1541M | 6905M |
| 2026-01-08 07:42:06.813234594 UTC | 1542M | 6904M |
| 2026-01-08 07:42:07.813224984 UTC | 1545M | 6901M |
| 2026-01-08 07:42:08.813249323 UTC | 1547M | 6899M |
| 2026-01-08 07:42:09.813165942 UTC | 1547M | 6899M |
| 2026-01-08 07:42:10.81312453 UTC | 1550M | 6895M |
| 2026-01-08 07:42:11.813139293 UTC | 1566M | 6879M |
| 2026-01-08 07:42:12.813153732 UTC | 1596M | 6821M |
| 2026-01-08 07:42:13.813145615 UTC | 1643M | 6746M |
| 2026-01-08 07:42:14.813629455 UTC | 1710M | 6651M |
| 2026-01-08 07:42:15.813134957 UTC | 1752M | 6609M |
| 2026-01-08 07:42:16.813738978 UTC | 1752M | 6609M |
| 2026-01-08 07:42:17.813536154 UTC | 1759M | 6599M |
| 2026-01-08 07:42:18.814582383 UTC | 1768M | 6588M |
| 2026-01-08 07:42:19.814006366 UTC | 1776M | 6577M |
| 2026-01-08 07:42:20.813612581 UTC | 1790M | 6561M |
| 2026-01-08 07:42:21.813559851 UTC | 1796M | 6552M |
| 2026-01-08 07:42:22.813350196 UTC | 1803M | 6543M |
| 2026-01-08 07:42:23.813657835 UTC | 1810M | 6531M |
| 2026-01-08 07:42:24.813334003 UTC | 1813M | 6526M |
| 2026-01-08 07:42:25.813440842 UTC | 1836M | 6500M |
| 2026-01-08 07:42:26.816392307 UTC | 1838M | 6495M |
| 2026-01-08 07:42:27.813451367 UTC | 1839M | 6491M |
| 2026-01-08 07:42:28.8142619 UTC | 1839M | 6489M |
| 2026-01-08 07:42:29.814196573 UTC | 1839M | 6486M |
| 2026-01-08 07:42:30.813703431 UTC | 1854M | 6469M |
| 2026-01-08 07:42:31.813908704 UTC | 1853M | 6467M |
| 2026-01-08 07:42:32.81363017 UTC | 1854M | 6464M |
| 2026-01-08 07:42:33.814226234 UTC | 1858M | 6457M |
| 2026-01-08 07:42:34.813334701 UTC | 1860M | 6453M |
| 2026-01-08 07:42:35.814526092 UTC | 1865M | 6445M |
| 2026-01-08 07:42:36.813301997 UTC | 1865M | 6442M |
| 2026-01-08 07:42:37.813724072 UTC | 1876M | 6429M |
| 2026-01-08 07:42:38.81324682 UTC | 1877M | 6425M |
| 2026-01-08 07:42:39.813632079 UTC | 1882M | 6418M |
| 2026-01-08 07:42:40.814628452 UTC | 1882M | 6415M |
| 2026-01-08 07:42:41.813646908 UTC | 1886M | 6408M |
| 2026-01-08 07:42:42.813380301 UTC | 1887M | 6405M |
| 2026-01-08 07:42:43.813530892 UTC | 1888M | 6401M |
| 2026-01-08 07:42:44.813326473 UTC | 1889M | 6398M |
| 2026-01-08 07:42:45.813135999 UTC | 1890M | 6394M |
| 2026-01-08 07:42:46.813140096 UTC | 1890M | 6392M |
| 2026-01-08 07:42:47.813132438 UTC | 1890M | 6392M |
| 2026-01-08 07:42:48.813133066 UTC | 1890M | 6391M |
| 2026-01-08 07:42:49.813194515 UTC | 1890M | 6391M |
| 2026-01-08 07:42:50.813132479 UTC | 1893M | 6389M |
| 2026-01-08 07:42:51.813129538 UTC | 1893M | 6388M |
| 2026-01-08 07:42:52.813135377 UTC | 1893M | 6388M |
| 2026-01-08 07:42:53.813135983 UTC | 1894M | 6387M |
| 2026-01-08 07:42:54.813115982 UTC | 1894M | 6387M |
| 2026-01-08 07:42:55.813138604 UTC | 1894M | 6386M |
| 2026-01-08 07:42:56.813141137 UTC | 1896M | 6385M |
| 2026-01-08 07:42:57.813133861 UTC | 1895M | 6385M |
| 2026-01-08 07:42:58.813140251 UTC | 1895M | 6385M |
| 2026-01-08 07:42:59.813129511 UTC | 1895M | 6384M |
| 2026-01-08 07:43:00.813137782 UTC | 1895M | 6384M |
| 2026-01-08 07:43:01.813138541 UTC | 1895M | 6384M |
| 2026-01-08 07:43:02.8131393 UTC | 1897M | 6382M |
| 2026-01-08 07:43:03.813140158 UTC | 1899M | 6381M |
| 2026-01-08 07:43:04.813131728 UTC | 1898M | 6381M |
| 2026-01-08 07:43:05.813133127 UTC | 1898M | 6381M |
| 2026-01-08 07:43:06.81314747 UTC | 1899M | 6380M |
Transaction cost differences
No cost or size differences found.
…oss under high load

Under high transaction load, snapshot signatures were being lost because:
- All messages shared a single FIFO queue
- ReqSn/AckSn protocol messages got buried behind ReqTx transactions
- When an AckSn arrived before the local ReqSn was processed, it was re-enqueued at the back of the queue, causing signature collection to fail

Solution: a dual-queue system that processes protocol messages before transactions:
- HighPriority: ReqSn, AckSn, ChainInput, ClientInput, ConnectivityEvent
- LowPriority: ReqTx, ReqDec

This ensures protocol state machine messages are never starved by transaction load.
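The dual-queue discipline described above can be sketched with STM. This is a minimal illustration, not the actual hydra-node code: the `Inbox` type and the `enqueue`/`dequeue` names are hypothetical.

```haskell
-- Minimal sketch of a dual-priority inbox, as described in the commit
-- message above. NOT the actual hydra-node implementation; Inbox,
-- enqueue and dequeue are hypothetical names used for illustration.
import Control.Concurrent.STM

data Priority = High | Low

data Inbox a = Inbox { highQ :: TQueue a, lowQ :: TQueue a }

newInbox :: IO (Inbox a)
newInbox = Inbox <$> newTQueueIO <*> newTQueueIO

enqueue :: Inbox a -> Priority -> a -> IO ()
enqueue inbox prio x = atomically $ case prio of
  High -> writeTQueue (highQ inbox) x
  Low  -> writeTQueue (lowQ inbox) x

-- 'orElse' tries the high-priority queue first and only falls back to
-- the low-priority queue when it is empty, so ReqSn/AckSn can never be
-- starved by a backlog of ReqTx messages.
dequeue :: Inbox a -> IO a
dequeue inbox = atomically $
  readTQueue (highQ inbox) `orElse` readTQueue (lowQ inbox)

main :: IO ()
main = do
  inbox <- newInbox
  enqueue inbox Low "ReqTx 1"
  enqueue inbox Low "ReqTx 2"
  enqueue inbox High "AckSn"
  msg <- dequeue inbox
  putStrLn msg -- prints "AckSn": the protocol message jumps the backlog
```

The key property is that `dequeue` still blocks when both queues are empty, but whenever both hold messages the high-priority one always wins.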
GitHub Actions runners have limited disk space (~14GB available). When building uncached Nix derivations (like our modified hydra-node), the build can exhaust disk space during compilation. This adds a cleanup step that removes unused tools before the build:
- .NET SDK (~1.8GB)
- Android SDK (~9GB)
- GHC (~5GB)
- CodeQL (~2.5GB)
- Unused Docker images

This frees up ~20GB of disk space, ensuring builds complete successfully.
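A cleanup step of this kind might look like the following workflow fragment. This is an illustrative sketch, not the step from this PR; the paths are the usual locations on `ubuntu-latest` runners and should be verified before relying on them.

```yaml
# Illustrative GitHub Actions step (config sketch; verify paths first).
- name: Free disk space
  run: |
    sudo rm -rf /usr/share/dotnet            # .NET SDK (~1.8GB)
    sudo rm -rf /usr/local/lib/android       # Android SDK (~9GB)
    sudo rm -rf /opt/ghc /usr/local/.ghcup   # GHC (~5GB)
    sudo rm -rf /opt/hostedtoolcache/CodeQL  # CodeQL (~2.5GB)
    docker image prune --all --force         # unused Docker images
    df -h /                                  # report remaining space
```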
- Add a pull_request trigger for PRs targeting the master branch
- Tag PR builds as pr-<number> for easy identification
- Use the PR head SHA as the version for traceability
Force-pushed from f4c2dad to 9787bce
The primeWith function uses NodeStateHandler to query the current chain slot for the Tick event, but the import was missing.
The primeWith function now injects a Tick event before processing inputs, which increases the total event count in tests. Updated test expectations:
- 'rotates while running': 5 + 1 tick = 6 inputs, RotateAfter 5
- 'consistent state after restarting': 5 + 1 + 2 ticks = 8 inputs
- 'rotated and non-rotated node': 6 + 1 tick = 7 inputs
- 'restarted and non-restarted node': the restarted node has 8 events (2 ticks) and the non-restarted node has 7 events (1 tick), so eventId differs by 1
With primeWith adding a Tick, we have 6 events in total. To end up with a checkpoint plus one leftover event (2 events), rotation must trigger after the 5th event. RotateAfter 4 means rotate when the event count exceeds 4, so 5 > 4 triggers the rotation.
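The off-by-one reasoning above can be pinned down with a tiny predicate. The name `shouldRotate` is hypothetical; it only models the `count > n` check described, not the actual rotation code.

```haskell
-- Hypothetical model of the RotateAfter check described above:
-- with RotateAfter n, rotation fires once the event count exceeds n.
shouldRotate :: Int -> Int -> Bool
shouldRotate n count = count > n

main :: IO ()
main = do
  print (shouldRotate 4 5) -- True: the 5th event triggers rotation
  print (shouldRotate 4 4) -- False: four events do not
```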
Force-pushed from 5ee5a1f to 0ae2ff9
Add configurable batch-size and time-interval based snapshot throttling:
- Add a --snapshot-batch-size CLI option (default: 10) to trigger snapshots when the pending transaction count reaches the threshold
- Add a --snapshot-interval CLI option (default: 0.1s) to trigger snapshots after a time interval when transactions are pending
- Track lastSnapshotTime in CoordinatedHeadState for time-based throttling
- Implement hybrid logic: snapshots trigger on EITHER batch size OR time interval
- Add a requestedAt timestamp to SnapshotRequestDecided for observability

Backward compatible: set snapshotBatchSize = 1 for the legacy per-transaction behavior.

Files changed:
- Environment.hs: add snapshotBatchSize/snapshotInterval fields
- Options.hs: add CLI parsers and defaults
- HeadLogic.hs: implement batch-size and time-based throttling logic
- State.hs: add lastSnapshotTime tracking
- Outcome.hs: add requestedAt to SnapshotRequestDecided
- api.yaml: update the JSON schema with new fields
- Test fixtures and golden files updated accordingly
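The hybrid "batch size OR interval" rule described above can be modeled by a small pure predicate. This is a sketch under stated assumptions: `shouldSnapshot` and its parameters are hypothetical names, and the real logic lives in HeadLogic.hs.

```haskell
import Data.Time (NominalDiffTime, UTCTime (..), diffUTCTime, fromGregorian)

-- Hypothetical sketch of the hybrid snapshot trigger: fire on EITHER
-- the batch-size threshold OR the elapsed interval, but only when at
-- least one transaction is pending.
shouldSnapshot
  :: Int              -- snapshotBatchSize (default 10)
  -> NominalDiffTime  -- snapshotInterval (default 0.1s)
  -> UTCTime          -- lastSnapshotTime
  -> UTCTime          -- current time
  -> Int              -- pending transaction count
  -> Bool
shouldSnapshot batchSize interval lastSnap now pending =
  pending > 0
    && (pending >= batchSize || now `diffUTCTime` lastSnap >= interval)

main :: IO ()
main = do
  let t0 = UTCTime (fromGregorian 2026 1 8) 0 -- last snapshot
      t1 = UTCTime (fromGregorian 2026 1 8) 1 -- one second later
  print (shouldSnapshot 10 0.1 t0 t0 10) -- True: batch threshold reached
  print (shouldSnapshot 10 0.1 t0 t1 3)  -- True: interval elapsed
  print (shouldSnapshot 10 0.1 t0 t0 3)  -- False: neither condition holds
```

With batchSize = 1 every pending transaction satisfies `pending >= batchSize`, which reproduces the legacy per-transaction behavior mentioned above.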
Force-pushed from f54093d to c4055c0
Summary
Changes
Testing
This branch is for independent testing of the priority queue fix with Docker images.
Related
- priority-queue branch (PR #19: feat: add priority queue for protocol messages to prevent signature loss under high load)
- feature/datum-cache-memory-optimization branch