This note explains how the new private token workflow operates inside Solo, what changes were required in both the Solo tooling and the consensus node, and how those changes allow the private transfer path to function end‑to‑end. It is intended for engineers already familiar with Solo’s architecture and deployment model.
- `scripts/private-token-e2e.sh` provisions a Kind cluster, deploys a Solo one-shot (Falcon) network, and captures the resulting deployment metadata under `.solo-private-token`.
- After the network is ACTIVE, the script patches the live consensus pod so that `blockStream.streamMode=RECORDS` and `blockStream.writerMode=FILE`. This effectively disables the block stream writer (`worker == null`), which is a prerequisite for the current private token code path.
- The script stages a locally built `HederaNode.jar` (from `../hiero-consensus-node/hedera-node/data/apps`) so the cluster runs the fork that contains the block stream fixes and the `ServiceScopeLookup` update described below.
- The spec runner (`examples/tests/privateToken.spec.ts`) uses the recorded metadata to:
  - Spin up a JS SDK client pointed at the forwarded node gRPC port (default `127.0.0.1:50211`); a sketch of this setup phase follows the list.
  - Create a recipient account and a fungible private token.
  - Associate the recipient with the token.
  - Pull the treasury's initial private commitment from `swirlds.log`, craft new Pedersen commitments for the recipient/change, and submit a `PRIVATE_TOKEN_TRANSFER`.
  - Verify the transaction receipt and assert that visible balances for both treasury and recipient remain zero. (The spec now coerces SDK `Long` values to JS numbers before comparing.)
  - Attempt a public `TokenTransfer` to prove the token is non-transferable via the public path (`NOT_SUPPORTED`).
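For orientation, here is a minimal sketch of the setup phase against the stock `@hashgraph/sdk` surface. The operator account/key are hypothetical stand-ins for the captured metadata, and everything private-token-specific (the `FUNGIBLE_PRIVATE` type selection, commitment crafting, and the `PRIVATE_TOKEN_TRANSFER` itself) is fork-specific and deliberately omitted:

```ts
// Minimal sketch of the spec's setup steps with stock @hashgraph/sdk;
// operator credentials below are hypothetical stand-ins.
import {
  AccountCreateTransaction,
  Client,
  PrivateKey,
  TokenAssociateTransaction,
  TokenCreateTransaction,
} from '@hashgraph/sdk';

const client = Client.forNetwork({ '127.0.0.1:50211': '0.0.3' }).setOperator(
  '0.0.2',
  PrivateKey.fromStringED25519(process.env.OPERATOR_KEY!), // assumed env var
);

// 1. Create the recipient account.
const recipientKey = PrivateKey.generateED25519();
const recipientId = (
  await (
    await new AccountCreateTransaction().setKey(recipientKey.publicKey).execute(client)
  ).getReceipt(client)
).accountId!;

// 2. Create the token. The FUNGIBLE_PRIVATE type is fork-specific, so the
// builder call that selects it is omitted here.
const tokenId = (
  await (
    await new TokenCreateTransaction()
      .setTokenName('Private Token')
      .setTokenSymbol('PRIV')
      .setTreasuryAccountId('0.0.2')
      .execute(client)
  ).getReceipt(client)
).tokenId!;

// 3. Associate the recipient before any commitments reference it.
await (
  await (
    await new TokenAssociateTransaction()
      .setAccountId(recipientId)
      .setTokenIds([tokenId])
      .freezeWith(client)
      .sign(recipientKey)
  ).execute(client)
).getReceipt(client);
```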
The transfer logic itself lives in `hedera-token-service-impl`'s `PrivateTokenTransferHandler`. It:

- Loads the `WritableTokenStore`/`WritableTokenRelationStore` via the `StoreFactory` (see §3), validates the token is `FUNGIBLE_PRIVATE`, confirms payer ownership of the input commitments, and checks the recipient/change outputs are properly associated.
- Uses `PedersenCommitments.sumsMatch` to verify input/output amounts match without revealing them.
- Removes consumed commitments from `PrivateCommitmentRegistry`, inserts the new commitments, and records the transaction via `TokenBaseStreamBuilder`.
No cleartext amount or recipient is written to public state; only commitments and associations exist, satisfying the “truly private transfer” requirement for the visible ledger.
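The check in `PedersenCommitments.sumsMatch` rests on the additive homomorphism of Pedersen commitments, C(v, r) = v·H + r·G: inputs and outputs commit to the same total exactly when their curve-point sums are equal, with the sender folding the blinding delta into the change commitment. Below is a minimal TypeScript sketch of that check using `@noble/curves`; secp256k1 is an assumption inferred from the 33-byte `02`/`03` commitment encodings in the logs, and the real implementation is Java inside the consensus node.

```ts
// Minimal sketch of a Pedersen "sums match" check over secp256k1 with
// @noble/curves; an assumed analogue of the Java PedersenCommitments.sumsMatch.
import { secp256k1 } from '@noble/curves/secp256k1';

const Point = secp256k1.ProjectivePoint;

// Sum a list of compressed commitment points (33-byte 02/03 encodings).
function sumPoints(commitments: Uint8Array[]) {
  return commitments
    .map((bytes) => Point.fromHex(bytes))
    .reduce((acc, p) => acc.add(p), Point.ZERO);
}

// With C(v, r) = v*H + r*G, equal sums imply the committed amounts balance,
// provided the sender chose blinding factors that also cancel.
export function sumsMatch(inputs: Uint8Array[], outputs: Uint8Array[]): boolean {
  return sumPoints(inputs).equals(sumPoints(outputs));
}
```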
Key additions under lines 150‑210:
- Metadata capture: after `solo one-shot falcon deploy` finishes, the script copies the generated `one-shot-solo-deployment-<id>` directory into a spec-specific temp directory so repeated spec invocations reuse the same deployment.
- In-pod config patch (`patch_block_stream_config`); a condensed sketch follows this list:
  - Locates the consensus pod (`solo.hedera.com/type=network-node`).
  - `kubectl exec`s into the container and edits `/opt/hgcapp/services-hedera/HapiApp2.0/data/config/application.properties` in place, forcing `blockStream.streamMode=RECORDS` and `blockStream.writerMode=FILE`.
  - Restarts `network-node.service` and waits for the pod to report Ready. This ensures the running process (not just the mounted ConfigMap) reflects the block stream settings before tests run.
- Local build staging: the script copies the local services build (`../hiero-consensus-node/hedera-node/data`) into a temp directory (`.solo-private-token/falcon-build.*`) and passes it via `--local-build-path` in the values file. This guarantees the deployment uses the patched consensus jar you built locally.
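For illustration, the patch step condenses to roughly the following, transliterated to TypeScript (the real logic is shell inside `scripts/private-token-e2e.sh`; the namespace default, `sed` patterns, and `systemctl` restart mechanism are assumptions built from the prose above):

```ts
// Illustrative TypeScript transliteration of patch_block_stream_config;
// the real logic is shell in scripts/private-token-e2e.sh.
import { execSync } from 'node:child_process';

const NS = process.env.SOLO_NAMESPACE ?? 'solo-353e91d4'; // hypothetical default
const PROPS =
  '/opt/hgcapp/services-hedera/HapiApp2.0/data/config/application.properties';

// Run a command inside the consensus container.
function podExec(pod: string, cmd: string): string {
  return execSync(
    `kubectl -n ${NS} exec ${pod} -c root-container -- bash -c ${JSON.stringify(cmd)}`,
    { encoding: 'utf8' },
  );
}

// Locate the consensus pod via its Solo label.
const pod = execSync(
  `kubectl -n ${NS} get pods -l solo.hedera.com/type=network-node ` +
    `-o jsonpath='{.items[0].metadata.name}'`,
  { encoding: 'utf8' },
).trim();

// Force RECORDS/FILE in place (assumes both keys already exist in the file).
podExec(pod, `sed -i 's/^blockStream.streamMode=.*/blockStream.streamMode=RECORDS/' ${PROPS}`);
podExec(pod, `sed -i 's/^blockStream.writerMode=.*/blockStream.writerMode=FILE/' ${PROPS}`);

// Restart the node service so the running process, not just the mounted
// ConfigMap, reflects the new settings, then wait for Ready.
podExec(pod, 'systemctl restart network-node.service');
execSync(`kubectl -n ${NS} wait --for=condition=Ready pod/${pod} --timeout=300s`, {
  stdio: 'inherit',
});
```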
`examples/tests/privateToken.spec.ts` now converts token balances returned by the SDK to primitive numbers before comparison:

```ts
const raw = balance.tokens?.get(tokenId) ?? 0;
const bal =
  typeof raw === 'number'
    ? raw
    : raw && typeof raw.toNumber === 'function'
      ? raw.toNumber()
      : Number(raw ?? 0);
```

Without this, `Long` objects coming back from `AccountBalanceQuery` never compared equal to zero, falsely failing the visibility check even when the transfer succeeded.
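If the SDK's `Long` values come from the `long` package (true via protobufjs for `@hashgraph/sdk`, though worth confirming against the pinned version), the coercion collapses to a single guard:

```ts
import Long from 'long';

// Terser equivalent of the coercion above, assuming balances are either
// `long` Longs, plain numbers, or absent.
function toBalanceNumber(raw: unknown): number {
  return Long.isLong(raw) ? raw.toNumber() : Number(raw ?? 0);
}
```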
All consensus changes live in `../hiero-consensus-node` and are baked into `HederaNode.jar` before deployment.
In `writeItem()`/`prngSeed()` we now short-circuit when `worker == null`. When the block stream is configured for RECORDS/FILE, Solo doesn't instantiate a `BlockStreamManagerTask`, so any attempt to emit block items previously threw an NPE (in `BlockStreamManagerImpl.writeItem`) during private token handling. The new guard makes block stream writes a no-op in this configuration, preventing the crash while keeping the rest of the workflow intact.
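In pattern form, the guard is just an early return; the sketch below is a TypeScript transliteration with stand-in types (the real change is Java in `BlockStreamManagerImpl`):

```ts
// Stand-in types; the real code is Java in BlockStreamManagerImpl.
interface BlockStreamWorker {
  addItem(item: unknown): void;
}

// worker is never instantiated when blockStream.streamMode=RECORDS.
let worker: BlockStreamWorker | null = null;

function writeItem(item: unknown): void {
  if (worker == null) return; // no-op instead of the NPE the old code threw
  worker.addItem(item);
}
```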
`PRIVATE_TOKEN_TRANSFER` is now routed to `TokenService.NAME`. Previously it fell through to the default case, resulting in `WritableTokenStore` not being registered in the `WritableStoreFactory` for that transaction. The handler then threw `IllegalArgumentException: No store of the given class is available com.hedera.node.app.service.token.impl.WritableTokenStore`. Mapping the functionality to `TokenService` aligns it with the rest of the HTS operations and unlocks the necessary stores.
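Conceptually the routing fix is a one-case addition to the functionality-to-service switch; the sketch below is a TypeScript transliteration (the real code is a Java switch in `ServiceScopeLookup`, and the neighboring cases are illustrative, not exhaustive):

```ts
// TypeScript transliteration of the scope-lookup shape; the real code is a
// Java switch over HederaFunctionality in ServiceScopeLookup.
const TOKEN_SERVICE = 'TokenService';

function serviceScopeOf(functionality: string): string {
  switch (functionality) {
    case 'TOKEN_TRANSFER':
    case 'TOKEN_MINT':
    case 'PRIVATE_TOKEN_TRANSFER': // new: scope to HTS so WritableTokenStore registers
      return TOKEN_SERVICE;
    default:
      return ''; // the old fall-through, which left token stores unregistered
  }
}
```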
With the above pieces in place:
- Private supply and transfer state lives entirely in `PrivateCommitmentRegistry`. Only Pedersen commitments (`Bytes`) plus owner/token IDs are persisted. Amounts and destinations never appear in `ReadableTokenStore`, `TokenRelation`, or record stream outputs.
- Ledger state still enforces association, KYC, and treasury constraints because `WritableTokenRelationStore` lookups occur before commitments are consumed/created. That keeps token semantics consistent with public HTS flows.
- Because we force block stream mode to `RECORDS` and disable gRPC/file streaming, the transfer does not emit block items. Combined with the existing record stream behavior in Solo (which still shows only standard transaction metadata), no observable artifact reveals the private transfer's amount or recipients.
- Build consensus locally (`../hiero-consensus-node`):
  - Install JDK 21 (`brew install openjdk@21`).
  - `cd ../hiero-consensus-node && ./gradlew :app:assemble`.
  - Ensure `hedera-node/data/apps/HederaNode.jar` is updated.
- Run the provisioning script from `solo`:

  ```bash
  KEEP_ENVIRONMENT=1 ./scripts/private-token-e2e.sh
  ```

- (Optional) Inspect the running pod to confirm block stream config and local jar:

  ```bash
  kubectl -n <namespace> exec network-node1-0 -c root-container -- \
    grep '^blockStream' /opt/hgcapp/services-hedera/HapiApp2.0/data/config/application.properties
  ```

- Execute the spec:

  ```bash
  npm run private-token-spec
  ```

- Tear down when finished:

  ```bash
  kind delete cluster --name solo-private-token
  ```
These steps ensure every engineer runs the same patched node binary, with the same block stream settings, producing deterministic private transfer results.
- Automated detection of stale deployments: At present the spec will fail with a kubectl error if the cached namespace is gone. A follow-up could verify namespace existence and instruct the user to rerun the provisioning script.
- Re-enabling block streaming: Once block streaming supports private transfers, we can revert the script to allow `BOTH`/`FILE_AND_GRPC` and drop the `writeItem()` guard in favor of a proper writer implementation.
- Hardening commitment persistence: Today `PrivateCommitmentRegistry` is local JVM state. Persistence/backups (MinIO, etc.) aren't wired in. Productionizing private transfers would require durable commitment storage and recovery logic; a purely hypothetical interface sketch follows this list.
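To make the persistence gap concrete, here is a hypothetical shape for durable commitment storage; every name in it is invented for illustration, and nothing like this interface exists in the fork today:

```ts
// Hypothetical durable replacement for the in-memory registry.
interface DurablePrivateCommitmentStore {
  /** Persist a new unspent commitment for (ownerId, tokenId). */
  put(ownerId: string, tokenId: string, commitment: Uint8Array): Promise<void>;

  /** Atomically consume inputs and insert outputs, or fail as a unit. */
  swap(
    tokenId: string,
    inputs: Uint8Array[],
    outputs: Array<{ ownerId: string; commitment: Uint8Array }>,
  ): Promise<void>;

  /** Rehydrate unspent commitments after a node restart. */
  loadUnspent(tokenId: string): Promise<Uint8Array[]>;
}
```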
With these changes, Solo can mint and transfer private fungible tokens entirely via the canonical HTS/JS SDK workflows while keeping amounts and destinations off the public ledger. The work demonstrates how a privacy layer can be integrated into Solo’s deployment tooling and services stack with minimal surface area changes.
- Consensus throughput: Forcing `blockStream.streamMode=RECORDS` and `writerMode=FILE` eliminates the block streaming worker and gRPC back-pressure logic. On a single-node Kind cluster this is inconsequential, but on a multi-node network block nodes would stop receiving block data. Until the block streaming pipeline supports private transfers, deploying this configuration network-wide would effectively disable block ingest/export.
- State growth: Private commitments are stored in-memory (`PrivateCommitmentRegistry`). In a production deployment we would need to persist them (e.g., in state or external storage) to avoid data loss on restart. Otherwise any crash/restart would orphan unspent commitments, causing token balances to become unrecoverable.
- Scheduling/handling cost: The handler itself performs additional cryptographic checks (Pedersen sums, registry lookups). On a large network with high HTS throughput, profiling is necessary to ensure these checks do not materially increase handle latency. If private transfers become common, consider batching or moving proofs off the critical path.
- Observability trade-offs: Disabling the block stream and suppressing public record data limits operators' ability to audit private tokens. Before enabling this mode on a shared network, provide alternative telemetry (internal metrics, commitment state dumps, etc.) so SREs can still diagnose issues.
```
solo creation ...
✅ Created accounts saved to file in JSON format: /Users/.../.../GitHub/hedera/solo-private-tokens/workspace/solo/.solo-private-token/one-shot-solo-deployment-353e91d4/accounts.json
For more information on public and private keys see: https://docs.hedera.com/hedera/core-concepts/keys-and-signatures
✅ Deployment name (solo-deployment-353e91d4) saved to file: /Users/patches/Documents/GitHub/hedera/solo-private-tokens/workspace/solo/.solo-private-token/cache/last-one-shot-deployment.txt
Stopping port-forwarder for port [30212]
[2025-11-20T16:17:37-0700] Captured Solo deployment metadata: deployment=solo-deployment-353e91d4, namespace=solo-353e91d4, clusterRef=solo-353e91d4
[2025-11-20T16:17:37-0700] Patching blockStream configuration inside pod network-node1-0...
[2025-11-20T16:17:37-0700] Waiting for network node to return to Ready...
[2025-11-20T16:17:37-0700] Running private token end-to-end spec...

@hashgraph/[email protected] private-token-spec
tsx --no-deprecation --no-warnings examples/tests/privateToken.spec.ts

--- Private Token end-to-end spec ---
Using deployment solo-deployment-353e91d4 in namespace solo-353e91d4 via 127.0.0.1:50211
Created recipient account 0.0.1032
Created private token 0.0.1033
Associated 0.0.1032 with 0.0.1033
Treasury commitment: 02ac0ceffdeb476a4753162283a8b09bfc7517d0236d41744bacba52b64272e71d
Crafted output commitments: recipient 022b4e0a5572db89183fe92381dfda19e09fa31000e5f2958eab914d450abc7905, change 03b26090ceec4d49293f0b52e9d89c29f2b4686ed9db02086c6443cc76400b85bb
PrivateTokenTransfer submitted as [email protected]
Private token transfer receipt returned SUCCESS
Visible balance for 0.0.2 and 0.0.1033 is 0
Visible balance for 0.0.1032 and 0.0.1033 is 0
Expected failure of public transfer: {"name":"StatusError","status":"NOT_SUPPORTED","transactionId":"[email protected]","message":"receipt for transaction [email protected] contained error status NOT_SUPPORTED"}
All checks passed.
```
📝 Solo has a new one-shot command! check it out: Solo User Guide, Solo CLI Commands
An opinionated CLI tool to deploy and manage standalone test networks.
Solo releases are supported for one month after their release date, after which they are no longer maintained. It is recommended to upgrade to the latest version to benefit from new features and improvements. Every quarter a version will be designated as LTS (Long-Term Support) and will be supported for three months.
| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Release Date | End of Support |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.49.0 | >= 22.0.0 (lts/jod) | >= v0.26.0 | v0.57.0 | v0.66.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-11-06 | 2025-12-06 |
| 0.48.0 (LTS) | >= 22.0.0 (lts/jod) | >= v0.26.0 | v0.56.0 | v0.66.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-10-24 | 2026-01-24 |
To see a list of legacy releases, please check the legacy versions documentation page.
To run a one-node network, you will need to set up Docker Desktop with at least 12GB of memory and 4 CPUs.
```bash
# install specific nodejs version
# nvm install <version>

# install nodejs version 22.0.0
nvm install v22.0.0

# lists available node versions already installed
nvm ls

# switch to selected node version
# nvm use <version>
nvm use v22.0.0
```
- Run `npx @hashgraph/solo`
Contributions are welcome. Please see the contributing guide to see how you can get involved.
This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.