Problem
When creating a cron job via the schedule tool, providing both `command` and `prompt` is rejected with "Provide either 'command' or 'prompt', not both". But a common use case is: run a shell command, feed its stdout into a prompt for summarization/parsing, and deliver the result.
Currently the only workaround is an agent job where the LLM decides on its own to run the command — resulting in two model round-trips instead of one (the agent calls the model to decide to use the shell tool, runs the command, then calls the model again to format the response).
Proposed behavior
When both command and prompt are provided:
- Run the shell command
- Template its stdout into the prompt (e.g. `{output}` or a similar placeholder)
- Pass the expanded prompt to the agent for a single LLM call
- Deliver the result
This gives: one subprocess + one model call, vs the current two model calls.
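The steps above could be sketched roughly like this (a minimal illustration in Python, not nullclaw's actual code; the `{output}` placeholder and the `agent.complete` call are assumptions for the sake of the example):

```python
import subprocess

def expand_prompt(prompt_template: str, command: str) -> str:
    """Run the shell command once and substitute its stdout into the prompt."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    # Template the command's stdout into the prompt via the placeholder.
    return prompt_template.replace("{output}", result.stdout.strip())

# One subprocess + one model call, e.g. for the weather use case:
# expanded = expand_prompt("Summarize this forecast: {output}",
#                          "curl -s wttr.in")
# reply = agent.complete(expanded)   # single LLM round-trip, then deliver
```

The key point is that the model never has to decide to run the command; the scheduler does that part deterministically and only the formatting step costs a model call.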
Backward compatibility concerns
The `command` field currently serves double duty: it is the shell command for shell jobs, but for agent jobs the code copies the prompt text into `command` to use as a display label (`cron.zig` L614, L730, L1133). Allowing both fields to hold independent values would break this assumption.
Possible approaches:
- Add a `title` or `name` field for display purposes, freeing `command` to be the actual shell command in all job types
- Add a new `job_type` (e.g. `pipeline` or `command_agent`) to distinguish from pure shell/agent jobs
- Use different display modes depending on `job_type` when listing crons or delivering results to channels
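Under the new-`job_type` approach, a `cron.json` entry might look something like the following. This is only an illustration of the shape, not a committed schema; field names other than `command` and `prompt` (e.g. `name`, `job_type`, `schedule`, `channel`) are assumptions:

```json
{
  "name": "weather-report",
  "job_type": "pipeline",
  "schedule": "0 7 * * *",
  "command": "curl -s 'wttr.in/?format=3'",
  "prompt": "Summarize this weather report in one friendly sentence: {output}",
  "channel": "telegram"
}
```

With an explicit `job_type` and a separate display name, a reader inspecting the raw file can tell a pipeline job apart from a pure shell or agent job at a glance.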
The current prompt-into-`command` sync is also confusing when inspecting `cron.json` directly: the raw data makes it look as if the user set both fields when they did not.
Environment
- nullclaw on ARMv5TE (Pogoplug), Telegram channel
- Discovered when asking the bot to schedule a weather report that fetches from wttr.in and summarizes it