tokf.net — reduce LLM context consumption from CLI commands by 60–90%.
Commands like git push, cargo test, and docker build produce verbose output packed with progress bars, compile noise, and boilerplate. tokf intercepts that output, applies a TOML filter, and emits only what matters — so your AI agent sees a clean signal instead of hundreds of wasted tokens.
cargo test — 61 lines → 1 line:
git push — 8 lines → 1 line:
```sh
# Homebrew
brew install mpecan/tokf/tokf

# Cargo
cargo install tokf

# From source
git clone https://github.com/mpecan/tokf
cd tokf
cargo build --release
# binary at target/release/tokf
```

```sh
tokf run git push origin main
```
tokf looks up a filter for git push, runs the command, and applies the filter. The filter logic lives in plain TOML files — no recompilation required. Anyone can author, share, or override a filter.
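A filter can be a handful of TOML lines. As a hypothetical sketch (not the shipped `git/push` filter; the patterns are illustrative):

```toml
# Hypothetical sketch — the real git/push filter may differ.
command = "git push"

# Drop git's progress chatter (illustrative patterns).
skip = ["^Enumerating", "^Counting", "^Compressing", "^Writing"]

[on_success]
output = "✓ pushed"   # replace the whole output with one line on exit 0

[on_failure]
tail = 10             # keep only the last 10 lines when the push fails
```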
If you use an AI coding tool, install the hook so every command is filtered automatically — no tokf run prefix needed:
```sh
# Claude Code (recommended: --global so it works in every project)
tokf hook install --global

# OpenCode
tokf hook install --tool opencode --global

# OpenAI Codex CLI
tokf hook install --tool codex --global
```

Drop `--global` to install for the current project only. See Claude Code hook for details on each tool, the `--path` flag, and optional extras like the filter-authoring skill.
```sh
tokf run git push origin main
tokf run cargo test
tokf run docker build .
```

```sh
tokf test filters/git/push.toml tests/fixtures/git_push_success.txt --exit-code 0
```

```sh
tokf verify                       # run all test suites
tokf verify git/push              # run a specific suite
tokf verify --list                # list available suites and case counts
tokf verify --json                # output results as JSON
tokf verify --require-all         # fail if any filter has no test suite
tokf verify --list --require-all  # show coverage per filter
tokf verify --scope project       # only project-local filters (.tokf/filters/)
tokf verify --scope global        # only user-level filters (~/.config/tokf/filters/)
tokf verify --scope stdlib        # only built-in stdlib (filters/ in CWD)
```

tokf automatically wraps `make` and `just` so that each recipe line is individually filtered:

```sh
make check  # each recipe line (cargo test, cargo clippy, ...) is filtered
just test   # same — each recipe runs through tokf
```

See Rewrite configuration for details and customization.
```sh
tokf ls                  # list all filters
tokf which "cargo test"  # which filter would match
tokf show git/push       # print the TOML source
```

```sh
tokf eject cargo/build           # copy to .tokf/filters/ (project-local)
tokf eject cargo/build --global  # copy to ~/.config/tokf/filters/ (user-level)
```

This copies the filter TOML and its test suite to your config directory, where it shadows the built-in. Edit the ejected copy freely — tokf's priority system ensures your version is used instead of the original.
| Flag | Description |
|---|---|
| `--timing` | Print how long filtering took |
| `--verbose` | Show which filter was matched (also explains skipped rewrites) |
| `--no-filter` | Pass output through without filtering |
| `--no-cache` | Bypass the filter discovery cache |
| `--no-mask-exit-code` | Disable exit-code masking. By default tokf exits 0 and prepends `Error: Exit code N` on failure |
| `--preserve-color` | Preserve ANSI color codes in filtered output (env: `TOKF_PRESERVE_COLOR=1`). See Color passthrough below |
| `--baseline-pipe` | Pipe command for fair baseline accounting (injected by rewrite) |
| `--prefer-less` | Compare filtered vs piped output and use whichever is smaller (requires `--baseline-pipe`) |
By default, filters with strip_ansi = true permanently remove ANSI escape codes. The --preserve-color flag changes this: tokf strips ANSI internally for pattern matching (skip, keep, dedup) but restores the original colored lines in the final output. When --preserve-color is active it overrides strip_ansi = true in the filter config.
tokf does not force commands to emit color — you must ensure the child command outputs ANSI codes (e.g. via FORCE_COLOR=1 or --color=always):
```sh
# Node.js / Vitest / Jest
FORCE_COLOR=1 tokf run --preserve-color npm test

# Cargo
tokf run --preserve-color cargo test -- --color=always

# Or set the env var once for all invocations
export TOKF_PRESERVE_COLOR=1
FORCE_COLOR=1 tokf run npm test
```

Limitations: color passthrough applies to the skip/keep/dedup pipeline (stages 2–2.5). The `match_output`, `parse`, and `lua_script` stages operate on clean text and are unaffected by this flag. `[[replace]]` rules run on the raw text before the color split, so when `--preserve-color` is enabled their patterns may need to account for ANSI escape codes, similar to branch-level `skip` patterns, which also match against the restored colored text.
| Filter | Command |
|---|---|
| `git/add` | `git add` |
| `git/commit` | `git commit` |
| `git/diff` | `git diff` |
| `git/log` | `git log` |
| `git/push` | `git push` |
| `git/show` | `git show` |
| `git/status` | `git status` — runs `git status --porcelain -b`; shows branch name + one porcelain-format line per changed file (e.g. `M src/main.rs`, `?? scratch.rs`) |
| `cargo/build` | `cargo build` |
| `cargo/check` | `cargo check` |
| `cargo/clippy` | `cargo clippy` |
| `cargo/fmt` | `cargo fmt` |
| `cargo/install` | `cargo install *` |
| `cargo/test` | `cargo test` |
| `docker/*` | `docker build`, `docker compose`, `docker images`, `docker ps` |
| `npm/run` | `npm run *` |
| `npm/test` | `npm test`, `pnpm test`, `yarn test` (with vitest/jest variants) |
| `pnpm/*` | `pnpm add`, `pnpm install` |
| `go/*` | `go build`, `go vet` |
| `gradle/*` | `gradle build`, `gradle test`, `gradle dependencies` |
| `gh/*` | `gh pr list`, `gh pr view`, `gh pr checks`, `gh issue list`, `gh issue view` |
| `kubectl/*` | `kubectl get pods` |
| `next/*` | `next build` |
| `prisma/*` | `prisma generate` |
| `pytest` | Python test runner |
| `tsc` | TypeScript compiler |
| `ls` | `ls` |
Filters are TOML files placed in .tokf/filters/ (project-local) or ~/.config/tokf/filters/ (user-level). Project-local filters take priority over user-level, which take priority over the built-in library.
```toml
command = "my-tool"

[on_success]
output = "ok ✓"

[on_failure]
tail = 10
```

tokf matches commands against filter patterns using two built-in behaviours:
Basename matching — the first word of a pattern is compared by basename, so a filter with command = "git push" will also match /usr/bin/git push or ./git push. This works automatically; no special pattern syntax is required.
Transparent global flags — flag-like tokens between the command name and a subcommand keyword are skipped during matching. A filter for git log will match all of:
```
git log
git -C /path log
git --no-pager -C /path log --oneline
/usr/bin/git --no-pager -C /path log
```
The skipped flags are preserved in the command that actually runs — they are only bypassed during the pattern match.
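Putting both behaviours together, a single pattern covers path-qualified and flag-laden invocations alike. A sketch (the comments restate the matching rules above):

```toml
command = "git log"
# Basename matching:     also matches "/usr/bin/git log" and "./git log"
# Transparent flags:     also matches "git --no-pager -C /path log --oneline"
# Execution unchanged:   the executed command keeps --no-pager -C /path
```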
Note on `run` override and transparent flags: if a filter sets a `run` field, transparent global flags are not included in `{args}`. Only the arguments that appear after the matched pattern words are available as `{args}`.
```toml
command = "git push"     # command pattern to match (supports wildcards and arrays)
run = "git push {args}"  # override command to actually execute

skip = ["^Enumerating", "^Counting"]  # drop lines matching these regexes
keep = ["^error"]                     # keep only lines matching (inverse of skip)

# Per-line regex replacement — applied before skip/keep, in order.
# Capture groups use {1}, {2}, … . Invalid patterns are silently skipped.
[[replace]]
pattern = '^(\S+)\s+\S+\s+(\S+)\s+(\S+)'
output = "{1}: {2} → {3}"

dedup = true                 # collapse consecutive identical lines
dedup_window = 10            # optional: compare within a N-line sliding window
strip_ansi = true            # strip ANSI escape sequences before processing
trim_lines = true            # trim leading/trailing whitespace from each line
strip_empty_lines = true     # remove all blank lines from the final output
collapse_empty_lines = true  # collapse consecutive blank lines into one
show_history_hint = true     # append a hint line pointing to the full output in history

# Lua escape hatch — for logic TOML can't express (see Lua Escape Hatch section)
[lua_script]
lang = "luau"
source = 'return output:upper()'  # inline script
# file = "transform.luau"         # or reference a local file (auto-inlined on publish)

match_output = [  # whole-output substring checks, short-circuit the pipeline
  { contains = "rejected", output = "push rejected" },
]

[on_success]         # branch for exit code 0
output = "ok ✓ {2}"  # template; {output} = pre-filtered output

[on_failure]         # branch for non-zero exit
tail = 10            # keep the last N lines
```

Output templates support pipe chains: `{var | pipe | pipe: "arg"}`.
| Pipe | Input → Output | Description |
|---|---|---|
| `join: "sep"` | Collection → Str | Join items with separator |
| `each: "tmpl"` | Collection → Collection | Map each item through a sub-template |
| `truncate: N` | Str → Str | Truncate to N characters, appending … |
| `lines` | Str → Collection | Split on newlines |
| `keep: "re"` | Collection → Collection | Retain items matching the regex |
| `where: "re"` | Collection → Collection | Alias for `keep:` |
Example — filter a multi-line output variable to only error lines:

```toml
[on_failure]
output = "{output | lines | keep: \"^error\" | join: \"\\n\"}"
```

Example — for each collected block, show only `>` (pointer) and `E` (assertion) lines:

```toml
[on_failure]
output = "{failure_lines | each: \"{value | lines | keep: \\\"^[>E] \\\"}\" | join: \"\\n\"}"
```

Sections collect lines into named buckets using a state-machine model. They are processed on the raw output (before skip/keep filtering) so structural markers like blank lines are available.
```toml
[[section]]
name = "failures"
enter = "^failures:$"          # regex that starts collecting
exit = "^failures:$"           # regex that stops collecting (second occurrence)
split_on = "^\\s*$"            # split collected lines into blocks at blank lines
collect_as = "failure_blocks"  # name used in templates: {failure_blocks}

[[section]]
name = "summary"
match = "^test result:"        # stateless: collect any matching line
collect_as = "summary_lines"
```

Stateful sections (with enter/exit) toggle on/off as the state machine hits the enter/exit patterns. Stateless sections (with match only) collect every matching line regardless of state.
Section data is available in templates:
- `{failure_blocks}` — the collected items
- `{failure_blocks.count}` — number of items (blocks if `split_on` is set, otherwise lines)
- `{failure_blocks | each: "..." | join: "\\n"}` — iterate over items
Aggregates extract numeric values from section items and produce named variables for templates.
Single aggregate (backwards compatible):
```toml
[on_success]
output = "{passed} passed ({suites} suites)"

[on_success.aggregate]
from = "summary_lines"
pattern = 'ok\. (\d+) passed'
sum = "passed"
count_as = "suites"
```

Multiple aggregates — use `[[on_success.aggregates]]` (plural) to define several rules:
```toml
[on_success]
output = "✓ {passed} passed, {failed} failed, {ignored} ignored ({suites} suites)"

[[on_success.aggregates]]
from = "summary_lines"
pattern = 'ok\. (\d+) passed'
sum = "passed"
count_as = "suites"

[[on_success.aggregates]]
from = "summary_lines"
pattern = '(\d+) failed'
sum = "failed"

[[on_success.aggregates]]
from = "summary_lines"
pattern = '(\d+) ignored'
sum = "ignored"
```

Each rule scans the named section's items. `sum` accumulates the first capture group as a number. `count_as` counts the number of matching lines. Both singular `aggregate` and plural `aggregates` can be used together — they are merged at runtime.
Chunks split raw output into repeating structural blocks, extract structured data per-block, and produce named collections for template rendering. Use chunks when you need per-block breakdown (e.g., per-crate test results in a Cargo workspace).
Note: Like sections, chunks operate on the raw (unfiltered) command output. Skip/keep patterns do not affect chunk processing. This ensures structural markers are available for splitting.
```toml
[[chunk]]
split_on = "^\\s*Running "    # regex that marks the start of each chunk
include_split_line = true     # include the splitting line in the chunk (default: true)
collect_as = "suites_detail"  # name for the structured collection
group_by = "crate_name"       # merge chunks sharing this field value

[chunk.extract]
pattern = 'deps/([\w_-]+)-'   # extract a field from the split (header) line
as = "crate_name"

[[chunk.aggregate]]
pattern = '(\d+) passed'      # aggregates run within each chunk's own lines
sum = "passed"

[[chunk.aggregate]]
pattern = '(\d+) failed'
sum = "failed"

[[chunk.aggregate]]
pattern = '^test result:'
count_as = "suite_count"
```

Fields:
| Field | Description |
|---|---|
| `split_on` | Regex marking the start of each chunk |
| `include_split_line` | Whether the splitting line is part of the chunk (default: true) |
| `collect_as` | Name for the resulting structured collection |
| `extract` | Extract a named field from the header line (`pattern` + `as`) |
| `body_extract` | Extract fields from body lines (`pattern` + `as`, first match wins) |
| `aggregate` | Per-chunk aggregation rules (run within each chunk's own lines) |
| `group_by` | Merge chunks sharing the same field value, summing numeric fields |
| `children_as` | When set with `group_by`, preserve original items as a nested collection under this name |
| `carry_forward` | On `extract` or `body_extract`: inherit value from the previous chunk when the pattern doesn't match |
The resulting structured collection is available in templates as {suites_detail} and supports field access in each pipes.
When a chunk produces a structured collection, each item has named fields. Use each to iterate with field access:
```toml
[on_success]
output = """✓ cargo test: {passed} passed ({suites} suites)
{suites_detail | each: "  {crate_name}: {passed} passed ({suite_count} suites)" | join: "\\n"}"""
```

Inside the `each` template, all named fields from the chunk item are available as variables (`{crate_name}`, `{passed}`, `{suite_count}`), plus `{index}` (1-based) and `{value}` (debug representation).
{suites_detail.count} returns the number of items in the collection.
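As a sketch, the count can feed a template directly (variable names are hypothetical, borrowed from the chunk example above):

```toml
# Hypothetical template using the collection's item count.
[on_success]
output = "✓ {passed} passed across {suites_detail.count} crates"
```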
When a chunk's extract or body_extract rule has carry_forward = true, chunks that don't match the pattern inherit the value from the most recent chunk that did. This is useful when boundary markers (like Running unittests) identify a group, and subsequent chunks (like integration test suites) should inherit that identity.
```toml
[chunk.extract]
pattern = 'unittests.+deps/([\w_-]+)-'
as = "crate_name"
carry_forward = true
```

When `children_as` is set alongside `group_by`, the grouped collection preserves each group's original items as a nested collection. Inside an `each` template, the children are accessible by the `children_as` name and support their own `each`/`join` pipes:
```toml
[[chunk]]
split_on = "^\\s*Running "
collect_as = "suites_detail"
group_by = "crate_name"
children_as = "children"

[on_success]
output = """✓ {passed} passed ({suites} suites)
{suites_detail | each: "  {crate_name}: {passed} passed\n{children | each: \"    {suite_name}: {passed}\" | join: \"\\n\"}" | join: "\\n"}"""
```

This produces tree output like:
```
✓ 565 passed (2 suites)
  tokf: 565 passed
    unittests src/lib.rs: 550
    tests/cli_basic.rs: 15
```
Some commands are wrappers around different underlying tools (e.g. npm test may run Jest, Vitest, or Mocha). A parent filter can declare [[variant]] entries that delegate to specialized child filters based on project context:
```toml
command = ["npm test", "pnpm test", "yarn test"]
strip_ansi = true
skip = ["^> ", "^\\s*npm (warn|notice|WARN|verbose|info|timing|error|ERR)"]

[on_success]
output = "{output}"

[on_failure]
tail = 20

[[variant]]
name = "vitest"
detect.files = ["vitest.config.ts", "vitest.config.js", "vitest.config.mts"]
filter = "npm/test-vitest"

[[variant]]
name = "jest"
detect.files = ["jest.config.js", "jest.config.ts", "jest.config.json"]
filter = "npm/test-jest"
```

Detection is two-phase:
- File detection (before execution) — checks if config files exist in the current directory. First match wins.
- Output pattern (after execution) — regex-matches command output. Used as a fallback when no file was detected.
When no variant matches, the parent filter's own fields (skip, on_success, etc.) apply as the fallback.
The filter field references another filter by its discovery name (relative path without .toml). Use tokf which "npm test" -v to see variant resolution.
TOML ordering: `[[variant]]` entries must appear after all top-level fields (`skip`, `[on_success]`, etc.) because TOML array-of-tables sections capture subsequent keys.
1. `.tokf/filters/` in the current directory (repo-local overrides)
2. `~/.config/tokf/filters/` (user-level overrides)
3. Built-in library (embedded in the binary)
First match wins. Use tokf which "git push" to see which filter would activate.
Filter tests live in a <stem>_test/ directory adjacent to the filter TOML:
```
filters/
  git/
    push.toml     <- filter config
    push_test/    <- test suite
      success.toml
      rejected.toml
```
Each test case is a TOML file specifying a fixture (inline or file path), expected exit code, and one or more [[expect]] assertions:
```toml
name = "rejected push shows pull hint"
fixture = "tests/fixtures/git_push_rejected.txt"
exit_code = 1

[[expect]]
equals = "✗ push rejected (try pulling first)"
```

For quick inline fixtures without a file:
```toml
name = "clean tree shows nothing to commit"
inline = "## main...origin/main\n"
exit_code = 0

[[expect]]
contains = "clean"
```

Assertion types:
| Field | Description |
|---|---|
| `equals` | Output exactly equals this string |
| `contains` | Output contains this substring |
| `not_contains` | Output does not contain this substring |
| `starts_with` | Output starts with this string |
| `ends_with` | Output ends with this string |
| `line_count` | Output has exactly N non-empty lines |
| `matches` | Output matches this regex |
| `not_matches` | Output does not match this regex |
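Several assertions can be combined in one case file. A hypothetical example exercising the regex and line-count assertions (the fixture and expected strings are illustrative):

```toml
# Hypothetical test case — names and strings are illustrative.
name = "summary is one line and starts with a checkmark"
inline = "✓ pushed to main\n"
exit_code = 0

[[expect]]
line_count = 1          # exactly one non-empty line

[[expect]]
matches = "^✓ pushed"   # regex assertion

[[expect]]
not_contains = "error"
```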
Exit codes from tokf verify: 0 = all pass, 1 = assertion failure, 2 = config/IO error or uncovered filters (--require-all).
For logic that TOML can't express — numeric math, multi-line lookahead, conditional branching — embed a Luau script:
```toml
command = "my-tool"

[lua_script]
lang = "luau"
source = '''
if exit_code == 0 then
  return "passed"
else
  -- Parenthesized: without the parens, `..` binds before `or` and a nil
  -- match would raise a concatenation error instead of falling back.
  return "FAILED: " .. (output:match("Error: (.+)") or output)
end
'''
```

Available globals: `output` (string), `exit_code` (integer — the underlying command's real exit code, unaffected by `--no-mask-exit-code`), `args` (table).
Return a string to replace output, or nil to fall through to the rest of the TOML pipeline.
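The `args` table and the nil fallthrough combine naturally. A hypothetical sketch (the `--quiet` check is illustrative, not a shipped filter):

```toml
[lua_script]
lang = "luau"
source = '''
-- Hypothetical: collapse output only when the command was run with --quiet.
for _, a in ipairs(args) do
  if a == "--quiet" and exit_code == 0 then
    return "ok"
  end
end
return nil  -- fall through to the rest of the TOML pipeline
'''
```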
All Lua execution is sandboxed — both in the CLI and on the server:
- Blocked libraries: `io`, `os`, `package` — no filesystem or network access.
- Instruction limit: 1 million VM instructions (prevents infinite loops).
- Memory limit: 16 MB (prevents memory exhaustion).
Scripts that exceed these limits are terminated and treated as a passthrough (the TOML pipeline continues as if no Lua script was configured).
For local development you can keep the script in a separate .luau file:
```toml
[lua_script]
lang = "luau"
file = "transform.luau"
```

Only one of `file` or `source` may be set — not both. When you run `tokf publish`, `file` references are automatically inlined (the file content is embedded as `source`) so the published filter is self-contained. The script file must reside within the filter's directory — path traversal (e.g. `../secret.txt`) is rejected.
tokf records input/output byte counts per run in a local SQLite database:
```sh
tokf gain              # summary: total bytes saved and reduction %
tokf gain --daily      # day-by-day breakdown
tokf gain --by-filter  # breakdown by filter
tokf gain --json       # machine-readable output
```

View aggregate savings across all your registered machines via the tokf server:
```sh
tokf gain --remote              # summary across all machines
tokf gain --remote --by-filter  # breakdown by filter
tokf gain --remote --json       # machine-readable output
```

Remote gain requires authentication (`tokf auth login`). The `--daily` flag is not available remotely. See Remote Sharing for the full setup workflow.
tokf records raw and filtered outputs in a local SQLite database, useful for debugging filters or reviewing what an AI agent saw:
```sh
tokf history list            # recent entries (current project)
tokf history list -l 20      # show 20 entries
tokf history list --all      # entries from all projects
tokf history show 42         # full details for entry #42
tokf history show --raw 42   # print only the raw captured output
tokf history search "error"  # search by command or output content
tokf history clear           # clear current project history
tokf history clear --all     # clear all history (destructive)
```

When an LLM receives filtered output it may not realise the full output exists. Two mechanisms can automatically append a hint line pointing to the history entry:
1. Filter opt-in — set show_history_hint = true in a filter TOML to always append the hint for that command:
```toml
command = "git status"
show_history_hint = true

[on_success]
output = "{branch} — {counts}"
```

2. Automatic repetition detection — tokf detects when the same command is run twice in a row for the same project. This is a signal the caller didn't act on the previous filtered output and may need the full content:
```
✓ cargo test: 42 passed (2.31s)
[tokf] output filtered — to see what was omitted: `tokf history show --raw 99`
```
The hint is appended to stdout so it is visible to both humans and LLMs in the tool output. The history entry itself always stores the clean filtered output, without the hint line.
```sh
tokf auth login    # authenticate via GitHub device flow
tokf remote setup  # register this machine
tokf sync          # upload pending usage events
tokf gain --remote # view aggregate savings across all machines
```

tokf uses the GitHub device flow so no secrets are handled locally. Tokens are stored in your OS keyring (Keychain on macOS, Secret Service on Linux, Credential Manager on Windows).
```sh
tokf auth login   # start device flow — prints a one-time code, opens browser
tokf auth status  # show current login state and server URL
tokf auth logout  # remove stored credentials
```

Each machine gets a UUID that links usage events to a physical device. Registration is idempotent — running it again re-syncs the existing record.
```sh
tokf remote setup   # register this machine with the server
tokf remote status  # show local machine ID and hostname (no network call)
```

Machine config is stored in `~/.config/tokf/machine.toml` (or `$TOKF_HOME/machine.toml`).
tokf sync uploads pending local usage events to the remote server. Events are deduplicated by cursor — re-syncing the same events is safe.
```sh
tokf sync           # upload pending events
tokf sync --status  # show last sync time and pending event count (no network call)
```

A file lock prevents concurrent syncs. Both `tokf auth login` and `tokf remote setup` must be completed before syncing.
View aggregate token savings across all your registered machines:
```sh
tokf gain --remote              # summary: total runs, tokens saved, reduction %
tokf gain --remote --by-filter  # breakdown by filter
tokf gain --remote --json       # machine-readable output
```

Note: `--daily` is not available with `--remote`. Use local `tokf gain --daily` for day-by-day breakdowns.
Usage events recorded before hash-based tracking was added may be missing filter hashes. Backfill resolves them from currently installed filters:
```sh
tokf remote backfill             # update events with missing hashes
tokf remote backfill --no-cache  # skip binary config cache during discovery
```

Backfill runs locally — no network call required.
For discovering and installing community filters, see Community Filters. To publish your own, see Publishing Filters. For the full server API reference, see docs/reference/api.md.
tokf integrates with Claude Code as a PreToolUse hook that automatically filters every Bash tool call — no changes to your workflow required.
```sh
tokf hook install           # project-local (.tokf/)
tokf hook install --global  # user-level (~/.config/tokf/)
```

Once installed, every command Claude runs through the Bash tool is filtered transparently. Track cumulative savings with `tokf gain`.
By default the generated hook script calls bare tokf, relying on PATH at runtime. If tokf isn't on PATH in the hook's execution environment (common with Linuxbrew or cargo install when PATH is only set in interactive shell profiles), pass --path to embed a specific binary location:
```sh
tokf hook install --global --path ~/.cargo/bin/tokf
tokf hook install --tool opencode --path /home/linuxbrew/.linuxbrew/bin/tokf
```

tokf also ships a filter-authoring skill that teaches Claude the complete filter schema:
```sh
tokf skill install           # project-local (.claude/skills/)
tokf skill install --global  # user-level (~/.claude/skills/)
```

tokf integrates with OpenCode via a plugin that applies filters in real-time before command execution.
Requirements: OpenCode with Bun runtime installed.
Install (project-local):
```sh
tokf hook install --tool opencode
```

Install (global):
```sh
tokf hook install --tool opencode --global
```

This writes `.opencode/plugins/tokf.ts` (or `~/.config/opencode/plugins/tokf.ts` for `--global`), which OpenCode auto-loads. The plugin uses OpenCode's `tool.execute.before` hook to intercept bash tool calls and rewrites the command in-place when a matching filter exists. Restart OpenCode after installation for the plugin to take effect.
If tokf rewrite fails or no filter matches, the command passes through unmodified (fail-safe).
tokf integrates with OpenAI Codex CLI via a skill that instructs the agent to prefix supported commands with tokf run.
Install (project-local):
```sh
tokf hook install --tool codex
```

Install (global):
```sh
tokf hook install --tool codex --global
```

This writes `.agents/skills/tokf-run/SKILL.md` (or `~/.agents/skills/tokf-run/SKILL.md` for `--global`), which Codex auto-discovers. Unlike the Claude Code hook (which intercepts commands at the tool level), the Codex integration is skill-based: it teaches the agent to use `tokf run` as a command prefix. If tokf is not installed, the agent falls back to running commands without the prefix (fail-safe).
tokf ships a Claude Code skill that teaches Claude the complete filter schema, processing order, step types, template pipes, and naming conventions.
Invoke automatically: Claude will activate the skill whenever you ask to create or modify a filter — just describe what you want in natural language:
> "Create a filter for `npm install` output that keeps only warnings and errors"
>
> "Write a tokf filter for `pytest` that shows a summary on success and failure details on fail"
Invoke explicitly with the /tokf-filter slash command:
```
/tokf-filter create a filter for docker build output
```
The skill is in .claude/skills/tokf-filter/SKILL.md. Reference material (exhaustive step docs and an annotated example TOML) lives in .claude/skills/tokf-filter/references/.
tokf also integrates with task runners like make and just by injecting itself as the task runner's shell. Each recipe line is individually filtered while exit codes propagate correctly. See Rewrite configuration for details.
tokf looks for a rewrites.toml file in two locations (first found wins):
- Project-local: `.tokf/rewrites.toml` — scoped to the current repository
- User-level: `~/.config/tokf/rewrites.toml` — applies to all projects
This file controls custom rewrite rules, skip patterns, and pipe handling. All [pipe], [skip], and [[rewrite]] sections documented below go in this file.
Task runners like make and just execute recipe lines via a shell ($SHELL -c 'recipe_line'). By default, only the outer make/just command is visible to tokf — child commands (cargo test, uv run mypy, etc.) pass through unfiltered.
tokf solves this with built-in wrapper rules that inject tokf as the task runner's shell. Each recipe line is then individually matched against installed filters:
```sh
# What you type:
make check

# What tokf rewrites it to:
make SHELL=tokf check

# What make then does for each recipe line:
tokf -c 'cargo test'   → filter matches → filtered output
tokf -c 'cargo clippy' → filter matches → filtered output
tokf -c 'echo done'    → no filter → delegates to sh
```

For `just`, the `--shell` flag is used instead:
```
just test → just --shell tokf --shell-arg -cu test
```

Shell mode (`tokf -c '...'`) always propagates the real exit code — no masking, no "Error: Exit code N" prefix. This means make sees the actual exit code from each recipe line and stops on failure as expected.
When invoked as tokf -c 'command' (or with combined flags like -cu, -ec), tokf enters shell mode. It tries to match the command against installed filters. If a match is found, the command runs through tokf's filter pipeline and filtered output is printed. If no match is found, the command is delegated to sh with the same flags — so unfiltered recipes run normally.
This mode is not typically invoked directly; it is called by task runners (make, just) after the rewrite injects tokf as their shell.
Recipe lines with shell metacharacters — operators (&&, ||, ;), pipes (|), redirections (>, <), quotes, globs, or subshells — are delegated to the real shell (sh) so that their semantics are preserved. (Operators inside quoted strings may also trigger delegation — this is a safe false positive since sh handles them correctly.) Only simple command arg arg recipe lines are matched against filters.
Use tokf rewrite --verbose "make check" to confirm the wrapper rewrite is active and see which rule fired.
Shell mode also respects environment variables for diagnostics (since it has no access to CLI flags like --verbose):
```sh
TOKF_VERBOSE=1 make check    # print filter resolution details for each recipe line
TOKF_NO_FILTER=1 make check  # bypass filtering entirely, delegate all recipe lines to sh
```

The built-in wrappers for make and just can be overridden or disabled via `[[rewrite]]` or `[skip]` entries in `.tokf/rewrites.toml`:
```toml
# Override the make wrapper with a custom one:
# "make check" → "make SHELL=tokf .SHELLFLAGS=-ec check"
# Note: use (?:[^\\s]*/)? prefix to also match full paths like /usr/bin/make
[[rewrite]]
match = "^(?:[^\\s]*/)?make(\\s.*)?$"
replace = "make SHELL=tokf .SHELLFLAGS=-ec{1}"

# Or disable it entirely:
[skip]
patterns = ["^make"]
```

You can add wrappers for other task runners via `[[rewrite]]`. The exact mechanism depends on how the task runner invokes recipe lines — check its documentation for shell override options:
```toml
# Example: if your task runner respects $SHELL for recipe execution
[[rewrite]]
match = "^(?:[^\\s]*/)?mise run(\\s.*)?$"
replace = "SHELL=tokf mise run{1}"
```

When a command is piped to a simple output-shaping tool (`grep`, `tail`, or `head`), tokf strips the pipe automatically and uses its own structured filter output instead. The original pipe suffix is passed to `--baseline-pipe` so token savings are still calculated accurately.
```sh
# These ARE rewritten — pipe is stripped, tokf applies its filter:
cargo test | grep FAILED
cargo test | tail -20
git diff HEAD | head -5
```

Multi-pipe chains, pipes to other commands, or pipe targets with unsupported flags are left unchanged:
```sh
# These are NOT rewritten — tokf leaves them alone:
kubectl get pods | grep Running | wc -l  # multi-pipe chain
cargo test | wc -l                       # wc not supported
cargo test | tail -f                     # -f (follow) not supported
```

If you want tokf to wrap a piped command that wouldn't normally be rewritten, add an explicit rule to `.tokf/rewrites.toml`:
```toml
[[rewrite]]
match = "^cargo test \\| tee"
replace = "tokf run {0}"
```

Use `tokf rewrite --verbose "cargo test | grep FAILED"` to see how a command is being rewritten.
If you prefer tokf to never strip pipes (leaving piped commands unchanged), add a [pipe] section to .tokf/rewrites.toml:
```toml
[pipe]
strip = false  # default: true
```

When `strip = false`, commands like `cargo test | tail -5` pass through the shell unchanged. Non-piped commands are still rewritten normally.
Sometimes the piped output (e.g. tail -5) is actually smaller than the filtered output. The prefer_less option tells tokf to compare both at runtime and use whichever is smaller:
```toml
[pipe]
prefer_less = true  # default: false
```

When a pipe is stripped, tokf injects `--prefer-less` alongside `--baseline-pipe`. At runtime:
- The filter runs normally
- The original pipe command also runs on the raw output
- tokf prints whichever result is smaller
When the pipe output wins, the event is recorded with pipe_override = 1 in the tracking DB. The tokf gain command shows how many times this happened:
```
tokf gain summary
  total runs:     42
  input tokens:   12,500 est.
  output tokens:  3,200 est.
  tokens saved:   9,300 est. (74.4%)
  pipe preferred: 5 runs (pipe output was smaller than filter)
```
Note: strip = false takes priority — if pipe stripping is disabled, prefer_less has no effect.
Leading `KEY=VALUE` assignments are automatically stripped before matching, so env-prefixed commands are rewritten correctly:
```
# These ARE rewritten — env vars are preserved, the command is wrapped:
DEBUG=1 git status            →  DEBUG=1 tokf run git status
RUST_LOG=debug cargo test     →  RUST_LOG=debug tokf run cargo test
A=1 B=2 cargo test | tail -5  →  A=1 B=2 tokf run --baseline-pipe 'tail -5' cargo test
```

The env vars are passed through verbatim to the underlying command; tokf only rewrites the executable portion.
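A rough sketch of that stripping step in plain shell (illustrative only; tokf does this internally with its own parser):

```shell
cmd='RUST_LOG=debug cargo test'
rest=$cmd
# drop leading KEY=VALUE tokens until the executable is reached
while printf '%s' "$rest" | grep -qE '^[A-Za-z_][A-Za-z0-9_]*=[^ ]* '; do
  rest=${rest#* }
done
echo "$rest"   # → cargo test
```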
User-defined skip patterns in `.tokf/rewrites.toml` match against the full shell segment, including any leading env vars. A pattern `^cargo` will not skip `RUST_LOG=debug cargo test` because the segment doesn't start with `cargo`:
```toml
[skip]
patterns = ["^cargo"]   # skips "cargo test" but NOT "RUST_LOG=debug cargo test"
```

To skip a command regardless of any env prefix, use a pattern that accounts for it:
```toml
[skip]
patterns = ["(?:^|\\s)cargo\\s"]   # matches "cargo" at the start or after whitespace
```

`tokf info` prints a summary of all paths, database locations, and filter counts. Useful for debugging when filters aren't being found or to verify your setup:
```
tokf info          # human-readable output
tokf info --json   # machine-readable JSON
```

Example output:
```
tokf 0.2.8
TOKF_HOME: (not set)
filter search directories:
  [local]    /home/user/project/.tokf/filters  (not found)
  [user]     /home/user/.config/tokf/filters   (not found)
  [built-in] <embedded>                        (always available)
tracking database:
  TOKF_DB_PATH: (not set)
  path: /home/user/.local/share/tokf/tracking.db  (will be created)
filter cache:
  path: /home/user/.cache/tokf/manifest.bin  (will be created)
filters:
  local:    0
  user:     0
  built-in: 38
  total:    38
```
| Variable | Description | Default |
|---|---|---|
| `TOKF_HOME` | Redirect all user-level tokf paths (filters, cache, DB, hooks, auth) to a single directory | Platform config dir (e.g. `~/.config/tokf` on Linux) |
| `TOKF_DB_PATH` | Override the tracking database path only (takes precedence over `TOKF_HOME`) | Platform data dir (e.g. `~/.local/share/tokf/tracking.db`); or `$TOKF_HOME/tracking.db` when `TOKF_HOME` is set |
| `TOKF_NO_FILTER` | Skip filtering in shell mode (set to `1`, `true`, or `yes`) | unset |
| `TOKF_VERBOSE` | Print filter resolution details in shell mode | unset |
`TOKF_HOME` works like `CARGO_HOME` or `RUSTUP_HOME` — set it once to relocate everything:
```
# Put all tokf data under /opt/tokf (useful on read-only home dirs or shared systems)
TOKF_HOME=/opt/tokf tokf info

# Override only the tracking database, leave everything else in the default location
TOKF_DB_PATH=/tmp/my-tracking.db tokf info
```

The `tokf info` output always shows the active `TOKF_HOME` value (or `(not set)`) at the top, so you can quickly verify which paths are in effect.
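To make the relocation permanent instead of per-invocation, set it once in your shell profile (the path below is only an example):

```shell
# e.g. in ~/.bashrc or ~/.zshrc
export TOKF_HOME=/opt/tokf
```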
Use `tokf rewrite --verbose` to see how a command would be rewritten, including which rule fired:
```
tokf rewrite --verbose "make check"          # shows wrapper rule
tokf rewrite --verbose "cargo test"          # shows filter rule
tokf rewrite --verbose "cargo test | tail"   # shows pipe stripping
```

For shell mode (task runner recipe lines), set `TOKF_VERBOSE=1` to see filter resolution for each recipe line:
```
TOKF_VERBOSE=1 make check   # verbose output on stderr for each recipe
```

tokf caches the filter discovery index for faster startup. The cache rebuilds automatically when filters change, but you can manage it manually:
```
tokf cache info    # show cache location, size, and validity
tokf cache clear   # delete the cache, forcing a rebuild on next run
```

Generate tab-completion scripts for your shell:
```
tokf completions bash
tokf completions zsh
tokf completions fish
tokf completions powershell
tokf completions elvish
tokf completions nushell
```

Bash — add to `~/.bashrc`:
```
eval "$(tokf completions bash)"
```

Zsh — add to `~/.zshrc`:
```
eval "$(tokf completions zsh)"
```

Fish — save to completions directory:
```
tokf completions fish > ~/.config/fish/completions/tokf.fish
```

PowerShell — add to your profile:
```
tokf completions powershell | Out-String | Invoke-Expression
```

Elvish — add to `~/.elvish/rc.elv`:
```
eval (tokf completions elvish | slurp)
```

Nushell — save and source in your config:
```
tokf completions nushell | save -f ~/.config/nushell/tokf.nu
source ~/.config/nushell/tokf.nu
```

The tokf community registry lets you discover filters published by other users, install them locally, and share your own filters with the community.
Authentication is required for all registry operations. Run `tokf auth login` first.
```
tokf search <query>
```

Returns filters whose command pattern matches `<query>` as a substring, ranked by token savings and install count.
```
COMMAND            AUTHOR   SAVINGS%   RUNS
git push           alice    42.3%      1,234
git push --force   bob      38.1%      891
```
| Flag | Description |
|---|---|
| `-n, --limit <N>` | Maximum results to return (default: 20, max: 100) |
| `--json` | Output raw JSON array |
```
tokf search git           # find all git filters
tokf search "cargo test"  # find cargo test filters
tokf search "" -n 50      # list 50 most popular filters
tokf search git --json    # machine-readable output
```

```
tokf install <filter>
```

`<filter>` can be:
- A command pattern substring — tokf searches the registry and installs the top match.
- A content hash (64 hex characters) — installs a specific, pinned version.
On install, tokf:
- Downloads the filter TOML and any bundled test files.
- Verifies the content hash to detect tampering.
- Writes the filter under `~/.config/tokf/filters/` (global) or `.tokf/filters/` (local).
- Runs the bundled test suite (if any). Rolls back on failure.
| Flag | Description |
|---|---|
| `--local` | Install to project-local `.tokf/filters/` instead of global config |
| `--force` | Overwrite an existing filter at the same path |
| `--dry-run` | Preview what would be installed without writing any files |
```
tokf install git push                # install top result for "git push"
tokf install git push --local        # install into current project only
tokf install git push --dry-run      # preview the install
tokf install <64-hex-hash> --force   # install a pinned version, overwriting existing
```

Installed filters include an attribution header at the top of the TOML:
```
# Published by @alice · hash: <hash> · https://tokf.net/filters/<hash>
```

This header is stripped automatically when the filter is loaded.
Warning: Community filters are third-party code. Review a filter at `https://tokf.net/filters/<hash>` before installing it in production environments.
tokf verifies the content hash of every downloaded filter to detect server-side tampering. Test filenames are validated to prevent path traversal attacks.
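The filename check can be pictured as a small predicate that rejects absolute paths and `..` traversal (a hypothetical sketch; tokf's actual validation lives in the install code):

```shell
# Hypothetical sketch of the test-filename validation
is_safe() {
  case "$1" in
    /*|*..*) return 1 ;;   # reject absolute paths and any ".." component
    *)       return 0 ;;
  esac
}

is_safe "git_push_success.txt" && echo "accepted"
is_safe "../../etc/passwd"     || echo "rejected"
```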
After publishing a filter, the filter TOML itself is immutable (same content = same hash), but you can replace the bundled test suite at any time:
```
tokf publish --update-tests <filter-name>
```

This replaces the entire test suite in the registry with the current local `_test/` directory contents. Only the original author can update tests.
| Flag | Description |
|---|---|
| `--dry-run` | Preview which test files would be uploaded without making changes |
```
tokf publish --update-tests git/push            # replace test suite for git/push
tokf publish --update-tests git/push --dry-run  # preview only
```

- The filter's identity (content hash) does not change.
- The old test suite is deleted and fully replaced by the new one.
- You must be the original author of the filter.
```
tokf publish <filter-name>
```

Publishes a local filter to the community registry under the MIT license. Authentication is required — run `tokf auth login` first.
- The filter must be a user-level or project-local filter (not a built-in). Use `tokf eject` first if needed.
- At least one test file must exist in the adjacent `_test/` directory. The server runs these tests against your filter before accepting the upload.
- You must accept the MIT license (prompted on first publish, remembered afterwards).
- The filter TOML is read and validated.
- If the filter uses `lua_script.file`, the referenced script is automatically inlined — its content is embedded as `lua_script.source` so the published filter is self-contained. The script file must reside within the filter's directory (path traversal is rejected).
- The filter and test files are uploaded. The server verifies tests pass before accepting.
- On success, the registry URL is printed.
| Flag | Description |
|---|---|
| `--dry-run` | Preview what would be published without uploading |
| `--update-tests` | Replace the test suite for an already-published filter |
```
tokf publish git/push                 # publish a filter
tokf publish git/push --dry-run       # preview only
tokf publish --update-tests git/push  # replace test suite
```

- Filter TOML: 64 KB max
- Total upload (filter + tests): 1 MB max
Published filters must use inline source for Lua scripts — `lua_script.file` is not supported on the server. The `tokf publish` command handles this automatically by reading the file and embedding its content. You don't need to change your filter.
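As an illustration, publishing turns a file reference into embedded source. The `lua_script.file` and `lua_script.source` keys come from the text above; treat the fragment as a sketch rather than a complete filter:

```toml
# Before publishing — local filter references a script file on disk:
lua_script.file = "transform.lua"
```

```toml
# After publishing — the same script, inlined by `tokf publish`:
lua_script.source = '''
-- contents of transform.lua
'''
```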
All Lua scripts in published filters are executed in a sandbox with resource limits (1 million instructions, 16 MB memory) during server-side test verification.
For the full API reference (all endpoints, request/response shapes, rate limits, and environment variables), see docs/reference/api.md. For deployment instructions, see DEPLOY.md.
tokf was heavily inspired by rtk (rtk-ai.app) — a CLI proxy that compresses command output before it reaches an AI agent's context window. rtk pioneered the idea and demonstrated that 60–90% context reduction is achievable across common dev tools. tokf takes a different approach (TOML-driven filters, user-overridable library, Claude Code hook integration) but the core insight is theirs.
MIT — see LICENSE.