Meet pktai — a modern, Textual-powered TUI that blends Wireshark-like workflows with an AI copilot. Open a pcap, browse packets, and chat with an on-device LLM (via Ollama) to explain what’s going on. Apply Wireshark-style display filters or just ask in natural language — pktai routes the request to the right tool, instantly.
Project URL: https://github.com/kspviswa/pktai
- Packet-first UI: Left pane shows packets and expandable per-layer details.
- Built-in Chat Copilot: Right pane is a chat that understands your capture context.
- Wireshark-like Filters: Apply display filters inline or via slash commands.
- NL → Filter: Ask “get me all NGAP packets” — pktai applies the `ngap` filter automatically.
- Instant Stop: Cancel in-flight LLM responses with a Stop button.
- Zero mouse, pure keyboard: Fast and ergonomic terminal UX powered by Textual.
Requires Python 3.10+.
- Using pip: `pip install pktai`
- Using uv: `uv add pktai`

This installs the `pktai` command.
- Optional: run a local LLM with Ollama (default model `qwen3:latest`): `ollama run qwen3:latest`
- Launch pktai: `pktai`
- Open a capture file: press `o` and pick a `.pcap`/`.pcapng`.
- Browse packets: Navigate the left pane; expand layers to inspect fields.
- Hex/ASCII viewer: Click a field to view its raw bytes in the right pane.
- Diverse Models: Switch between different LLM models via the Settings modal.
- Local Models: Run a local LLM with Ollama.
- Chat analysis: Ask questions in the right chat pane (e.g., “summarize traffic patterns”).
- Stop generation: While the model is responding, click `Stop` to cancel.
- Display filter (slash command): Type `/df ngap && sctp.dstport == 38412` or `/df ip.src == 10.0.0.1 && tcp`. This applies the filter immediately without calling the LLM.
- Natural language filter: Ask “show only NGAP packets with dst port 38412” — pktai converts NL → display filter and applies it.
- Settings: Press `s` to open a compact Settings modal; choose a model and tune generation parameters.
- Agentic Orchestrator: Routes your input between Filter, Packet, and Chat agents.
- Filtering Engine: Tokenizer + parser + evaluator for a practical Wireshark-like subset (see the sketch after this list):
  - Protocol tokens (e.g., `tcp`, `ngap`), field presence (e.g., `ip.src`), equality/inequality on common fields (e.g., `ip.src == 1.2.3.4`, `sctp.dstport != 38412`), boolean `&&`/`||` with parentheses.
  - Unsupported operators like `contains`/`matches` raise a clear error.
- LLM Abstraction: `LLMService` (OpenAI-compatible) talks to Ollama; switch models easily (see the client sketch after this list).
- Markdown Chat: Renders assistant replies nicely; optional expandable “Thought process”.
- Responsive UX: Soft-wrapping chat log, tight spacing, and a cancelable generation flow.
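
To make the filtering subset above concrete, here is a small, self-contained sketch of how such a tokenizer, recursive-descent parser, and evaluator can fit together. It is not pktai's actual implementation; the packet representation (a flat dict of fields plus a `layers` set) and the function names are assumptions made for illustration.

```python
# Illustrative sketch of a Wireshark-like display-filter subset:
# protocol tokens, field presence, ==/!= comparisons, &&/|| and parentheses.
# Names and the packet representation are assumptions, not pktai's real API.
import re

# One token per match: parens, boolean ops, comparison ops, quoted strings,
# or bare words/fields like "tcp", "ip.src", "38412".
TOKEN = re.compile(r'\s*(\(|\)|&&|\|\||==|!=|"[^"]*"|[\w.]+)')

def tokenize(text):
    text = text.strip()
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if m is None:
            raise ValueError(f"unexpected input at: {text[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse_filter(text):
    tokens, idx = tokenize(text), 0

    def peek():
        return tokens[idx] if idx < len(tokens) else None

    def take():
        nonlocal idx
        idx += 1
        return tokens[idx - 1]

    def parse_or():                      # expr := and ("||" and)*
        node = parse_and()
        while peek() == "||":
            take()
            node = ("or", node, parse_and())
        return node

    def parse_and():                     # and := term ("&&" term)*
        node = parse_term()
        while peek() == "&&":
            take()
            node = ("and", node, parse_term())
        return node

    def parse_term():                    # term := "(" expr ")" | name (op value)?
        if peek() == "(":
            take()
            node = parse_or()
            if peek() != ")":
                raise ValueError("missing closing parenthesis")
            take()
            return node
        if peek() is None:
            raise ValueError("unexpected end of filter")
        name = take()
        if peek() in ("==", "!="):
            op = take()
            return ("cmp", op, name, take().strip('"'))
        if peek() in ("contains", "matches"):
            raise ValueError(f"unsupported operator: {peek()}")
        return ("has", name)             # bare protocol (tcp) or field presence (ip.src)

    node = parse_or()
    if idx != len(tokens):
        raise ValueError(f"unexpected trailing tokens: {tokens[idx:]}")
    return node

def match(node, packet):
    """Evaluate a parsed filter against one packet, represented here as a flat
    mapping of field name -> value plus a "layers" set of protocol names."""
    kind = node[0]
    if kind == "or":
        return match(node[1], packet) or match(node[2], packet)
    if kind == "and":
        return match(node[1], packet) and match(node[2], packet)
    if kind == "has":
        return node[1] in packet["layers"] or node[1] in packet
    _, op, name, value = node            # ("cmp", op, field, value)
    actual = packet.get(name)
    if actual is None:
        return False
    return (str(actual) == value) if op == "==" else (str(actual) != value)

pkt = {"layers": {"ip", "sctp", "ngap"},
       "ip.src": "10.0.0.1", "sctp.dstport": "38412"}
print(match(parse_filter("ngap && sctp.dstport == 38412"), pkt))  # True
print(match(parse_filter("tcp || ip.src == 10.0.0.1"), pkt))      # True
```

As in Wireshark, `&&` binds tighter than `||`, and parentheses override that precedence.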
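Similarly, the `LLMService` abstraction can be pictured as a thin wrapper over an OpenAI-compatible client pointed at Ollama's local endpoint. The sketch below assumes Ollama's default address (`http://localhost:11434/v1`) and the `openai` Python package; the method names and the cancel flow are illustrative, not pktai's actual interface.

```python
# Sketch of an OpenAI-compatible client pointed at a local Ollama server.
# The class name comes from the description above; its methods and the
# streaming/cancel flow shown here are illustrative assumptions.
from openai import OpenAI

class LLMService:
    def __init__(self, model="qwen3:latest", base_url="http://localhost:11434/v1"):
        # Ollama's OpenAI-compatible endpoint; the API key is unused by Ollama
        # but required by the client, so any placeholder string works.
        self.client = OpenAI(base_url=base_url, api_key="ollama")
        self.model = model

    def chat(self, messages, stop_event=None):
        """Stream a reply; stop early if stop_event (e.g. a threading.Event
        wired to the Stop button) is set."""
        stream = self.client.chat.completions.create(
            model=self.model, messages=messages, stream=True
        )
        reply = []
        for chunk in stream:
            if stop_event is not None and stop_event.is_set():
                break  # cancel the in-flight generation
            reply.append(chunk.choices[0].delta.content or "")
        return "".join(reply)

svc = LLMService()
print(svc.chat([{"role": "user", "content": "summarize traffic patterns"}]))
```

Passing an event object as `stop_event` is one plausible way to connect the Stop button to the streaming loop.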
- If the chat doesn’t work, ensure Ollama is running and the model is available: `ollama run qwen3:latest` (a quick check is sketched below).
- To start without chat, simply use filtering and packet browsing; chat can be configured later.
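
To verify quickly, the snippet below lists the models your local Ollama server has pulled (assuming the default address `http://localhost:11434`); if it fails to connect, Ollama isn’t running.

```python
# Quick check that Ollama is reachable and which models are pulled
# (a sketch assuming Ollama's default port 11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = [m["name"] for m in json.load(resp).get("models", [])]
print("Ollama is up; local models:", models)
```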
- Repository: https://github.com/kspviswa/pktai
MIT — see LICENSE.