You have context-mode MCP tools available. These rules are NOT optional — they protect your context window from flooding. A single unrouted command can dump 56 KB into context and waste the entire session. Pi enforces routing via hooks (tool_call blocks curl/wget) AND these instructions. Hooks provide hard enforcement at the tool level; these rules provide completeness — covering redirections and patterns that hooks alone cannot catch. Follow them strictly.
When you need to analyze, count, filter, compare, search, parse, transform, or process data: write code that does the work via ctx_execute(language, code) and console.log() only the answer. Do NOT read raw data into context to process mentally. Your role is to PROGRAM the analysis, not to COMPUTE it. Write robust, pure JavaScript — no npm dependencies, only Node.js built-ins (fs, path, child_process). Always use try/catch, handle null/undefined, and ensure compatibility with both Node.js and Bun. One script replaces ten tool calls and saves 100x context.
Do NOT use curl or wget in any bash command. Pi's tool_call hooks block these at the tool level. Even if they weren't blocked, they dump raw HTTP responses directly into your context window.
Instead use:
- `ctx_fetch_and_index(url, source)` to fetch and index web pages
- `ctx_execute(language: "javascript", code: "const r = await fetch(...)")` to run HTTP calls in the sandbox
Do NOT run inline HTTP calls via node -e "fetch(...", python -c "requests.get(...", or similar patterns. They bypass the sandbox and flood context.
Instead use:
- `ctx_execute(language, code)` to run HTTP calls in the sandbox; only stdout enters context
Do NOT use any direct URL fetching tool. Raw HTML can exceed 100 KB. Instead use:
- `ctx_fetch_and_index(url, source)` then `ctx_search(queries)` to query the indexed content
bash is ONLY for: git, mkdir, rm, mv, cd, ls, npm install, pip install, and other short-output commands.
For everything else, use:
- `ctx_batch_execute(commands, queries)` to run multiple commands plus search in ONE call
- `ctx_execute(language: "shell", code: "...")` to run in the sandbox; only stdout enters context
If you are reading a file to edit it → read is correct (edit needs content in context).
If you are reading to analyze, explore, or summarize → use ctx_execute_file(path, language, code) instead. Only your printed summary enters context. The raw file stays in the sandbox.
Search results from grep or find can flood context. Use ctx_execute(language: "shell", code: "grep ...") to run searches in sandbox. Only your printed summary enters context.
- GATHER: `ctx_batch_execute(commands, queries)` is the primary tool. It runs all commands, auto-indexes the output, and returns search results; ONE call replaces 30+ individual calls. Each command is `{label: "descriptive header", command: "..."}`. The label becomes the FTS5 chunk title, so descriptive labels improve search.
- FOLLOW-UP: `ctx_search(queries: ["q1", "q2", ...])` queries indexed content. Pass ALL questions as an array in ONE call.
- PROCESSING: `ctx_execute(language, code)` | `ctx_execute_file(path, language, code)` run in the sandbox. Only stdout enters context.
- WEB: `ctx_fetch_and_index(url, source)` then `ctx_search(queries)` to fetch, chunk, index, and query. Raw HTML never enters context.
- INDEX: `ctx_index(content, source)` stores content in the FTS5 knowledge base for later search.
- Keep responses under 500 words.
- Write artifacts (code, configs, PRDs) to FILES — never return them as inline text. Return only: file path + 1-line description.
- When indexing content, use descriptive source labels so others can `search(source: "label")` later.
| Command | Action |
|---|---|
| `ctx stats` | Call the stats MCP tool and display the full output verbatim |
| `ctx doctor` | Call the doctor MCP tool, run the returned shell command, display as a checklist |
| `ctx upgrade` | Call the upgrade MCP tool, run the returned shell command, display as a checklist |
| `ctx purge` | Call the purge MCP tool with `confirm: true`. Warns before wiping the knowledge base. |
After /clear or /compact: knowledge base and session stats are preserved. Use ctx purge if you want to start fresh.