Tag @gork.botsky.social on Bluesky to interact!
Amazing prompt written by Elliot!
- Responds to all mentions and replies on Bluesky (via notifications)
- Uses OpenRouter for text + vision replies with a configurable model (defaults to `x-ai/grok-4-fast:free`)
- Content moderation layer uses a lightweight model to check replies for inappropriate references to sensitive topics
- Understands image attachments when the latest post includes photos (vision ready when your model supports it)
- Reads image alt text so described media affects the reply context
- Thread-aware: Replies with context of the entire conversation
- Persistent cache: Prevents duplicate replies (even if restarted)
- Guards against unsafe phrases with an editable `blacklist.csv`
- Outputs Bluesky-native mentions thanks to automatic mention facets
- Ships with a local CLI simulator to test replies before posting
- Clone this repository
- Create a `.env` file with your credentials:

  ```
  BLUESKY_HANDLE=your-bluesky-handle
  BLUESKY_PASSWORD=your-bluesky-password
  OPENROUTER_KEY=your-openrouter-api-key
  OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free
  ```

  Tip: `BLUESKY_HANDLE` can be entered as `name.bsky.social`, with a leading `@`, or as a `https://bsky.app/profile/...` URL. The bot normalizes it automatically before login.
- Install dependencies (requires Python 3.8+):

  ```
  pip install -r requirements.txt
  ```

  You will need the `websockets` and `requests` packages, as well as `atproto` and `python-dotenv`.
- Run the bot:

  ```
  python bot.py
  ```
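The handle normalization mentioned in the `.env` tip above can be sketched roughly like this (the function name and exact rules are illustrative assumptions, not the bot's actual code):

```python
from urllib.parse import urlparse

def normalize_handle(raw: str) -> str:
    """Reduce the accepted BLUESKY_HANDLE forms to a bare handle.

    Accepts 'name.bsky.social', '@name.bsky.social', or a
    'https://bsky.app/profile/name.bsky.social' URL (assumed forms).
    """
    raw = raw.strip()
    if raw.startswith(("http://", "https://")):
        # Take the last path segment of a bsky.app profile URL
        raw = urlparse(raw).path.rstrip("/").split("/")[-1]
    # Drop any leading @ so the value matches what the login API expects
    return raw.lstrip("@")
```

All three documented input forms reduce to the same bare handle before login.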
Set the OPENROUTER_MODEL environment variable (or add it to `.env`) to point at any model that OpenRouter exposes. The default is `x-ai/grok-4-fast:free` because it is fast and supports vision, but it can lose coherence on very long threads; switch to a model with a longer context window if you need it.
Examples:

```
# fast vision default
OPENROUTER_MODEL=x-ai/grok-4-fast:free

# alternate text-only option
# OPENROUTER_MODEL=anthropic/claude-3-haiku
```
Make sure the model you choose supports vision (image inputs) if you want the bot to reason about attached photos.
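OpenRouter's chat completions endpoint is OpenAI-compatible, so image inputs ride along as `image_url` content parts. A minimal sketch of such a request is below; `build_vision_messages` and `ask_model` are illustrative names, not functions from `bot.py`:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_vision_messages(prompt, image_urls):
    """Pack the prompt text plus any image URLs into OpenAI-style content parts."""
    content = [{"type": "text", "text": prompt}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return [{"role": "user", "content": content}]

def ask_model(prompt, image_urls=()):
    """Send one chat completion request to OpenRouter and return the reply text."""
    payload = {
        "model": os.environ.get("OPENROUTER_MODEL", "x-ai/grok-4-fast:free"),
        "messages": build_vision_messages(prompt, list(image_urls)),
    }
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_KEY', '')}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If the configured model is text-only, the `image_url` parts are where the request will fail or be ignored, which is why a vision-capable model matters here.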
The bot includes a two-stage reply generation process to prevent inappropriate references to sensitive topics:
- Primary reply generation: Uses your chosen `OPENROUTER_MODEL` to generate a witty, chaotic response
- Moderation check: Uses a lightweight `OPENROUTER_MODERATION_MODEL` to analyze the generated reply against the thread context
The moderation layer checks if the reply inappropriately mentions self-harm, suicide, or extreme violence terms when those topics appear in the thread context. Context-aware examples:
- "rope" is flagged if the thread discusses suicide, but allowed in discussions about climbing
- "hanging" is flagged for self-harm contexts, but allowed in phrases like "hanging out"
If a reply is flagged, the bot automatically regenerates a new response with additional instructions to avoid sensitive topics. If regeneration fails after multiple attempts, a safe fallback reply is used instead.
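The generate → check → regenerate loop described above can be sketched as follows. The helper signatures, the retry count, and the fallback text are all assumptions for illustration, not the bot's real implementation:

```python
# Placeholder fallback; the bot's actual safe reply text may differ.
SAFE_FALLBACK = "i'm gonna sit this one out"

def moderated_reply(generate, is_flagged, context, max_attempts=3):
    """Two-stage loop: generate a reply, check it, regenerate with stricter
    instructions if flagged, and fall back to a safe stock reply if every
    attempt fails moderation.

    generate(context, extra_instructions) -> str  (hypothetical helper)
    is_flagged(reply, context) -> bool            (hypothetical helper)
    """
    extra = ""
    for _ in range(max_attempts):
        reply = generate(context, extra)
        if not is_flagged(reply, context):
            return reply
        # Tighten the instructions for the next attempt
        extra = "Avoid any reference to self-harm, suicide, or violence."
    return SAFE_FALLBACK
```

The key design point is that moderation sees both the reply and the thread context, which is what makes the "rope"/"hanging" examples above context-dependent rather than a flat word ban.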
Configure the moderation model in your `.env`:

```
OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free
```
The default moderation model is fast and inexpensive while providing effective content safety checks.
The bot checks its reply against `blacklist.csv` before posting. Each row is a comma-separated phrase and optional reason, e.g.:

```
kill yourself,explicit self-harm directive
```
If any phrase is found (case insensitive, matching consecutive words), the response is replaced with a safe fallback and the incident is logged. Edit the file to add your own phrases, then save—it will hot-reload on the next loop.
To test a snippet against the blacklist without running the bot, use:

```
python bot.py --check-blacklist "sample text here"
```
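The case-insensitive, consecutive-word matching described above could be implemented along these lines (a sketch; function names are assumptions, not the bot's actual code):

```python
import csv
import re

def load_blacklist(path="blacklist.csv"):
    """Each row is 'phrase[,reason]'. Returns a list of (phrase, reason) pairs."""
    with open(path, newline="") as f:
        return [(row[0], row[1] if len(row) > 1 else "")
                for row in csv.reader(f) if row]

def contains_phrase(text, phrase):
    """True if the phrase's words appear consecutively in text, ignoring case
    and punctuation. Whole-word matching means 'kill yourself' does not
    trigger on 'killing yourself'."""
    words = re.findall(r"\w+", text.lower())
    target = re.findall(r"\w+", phrase.lower())
    if not target:
        return False
    return any(words[i:i + len(target)] == target
               for i in range(len(words) - len(target) + 1))
```

Because the file is re-read each loop, saving an edited `blacklist.csv` takes effect without restarting the bot.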
Quickly test how the bot would respond without touching Bluesky:
- Interactive prompts: `python bot.py --simulate`
- From a JSON thread file: `python bot.py --simulate-file thread.json`
The JSON format is a list of posts (oldest → newest):

```json
[
  {"handle": "alice.bsky.social", "text": "hey @gork what's good"},
  {"handle": "bob.bsky.social", "text": "chiming in with chaos"}
]
```
Optional fields per post: `images` (list or count) and `alt_texts` (list).
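Loading such a thread file into prompt context might look like this sketch (the function name and transcript format are assumptions, not the simulator's actual output):

```python
import json

def thread_to_transcript(path):
    """Flatten a simulator thread file (oldest → newest) into a plain-text
    transcript, appending any alt text so described media reaches the model."""
    with open(path) as f:
        posts = json.load(f)
    lines = []
    for post in posts:
        line = f"@{post['handle']}: {post['text']}"
        alts = post.get("alt_texts") or []
        if alts:
            line += " [images: " + "; ".join(alts) + "]"
        lines.append(line)
    return "\n".join(lines)
```

This mirrors how the live bot folds image alt text into the reply context, so simulator runs exercise the same path.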
- Listens for direct mentions and replies to your bot on Bluesky using the notifications API.
- Fetches the context of the whole thread for the most relevant and chaotic AI response.
- All replies (text + vision) are generated by your chosen OpenRouter model (default: `x-ai/grok-4-fast:free`).
- A lightweight moderation model checks each reply for inappropriate references to sensitive topics before posting.
- If a reply is flagged, it's automatically regenerated with additional safety instructions.
- Prevents duplicate replies with a persistent log file (`processed_uris.txt`).
- This bot is platform-agnostic: you can run it on your own server, Raspberry Pi, or deploy it on any service that supports persistent Python processes (like Railway, Fly.io, etc).
- Make sure your process can run continuously and has a persistent filesystem for the cache file.
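Since the update steps below manage a `gork-bot` systemd service, a unit file along these lines would satisfy both requirements (continuous process, persistent working directory). The paths, user, and unit name here are assumptions; adjust them for your server:

```ini
# /etc/systemd/system/gork-bot.service — sketch, not shipped with the repo
[Unit]
Description=Gork Bluesky bot
After=network-online.target
Wants=network-online.target

[Service]
User=gork
# WorkingDirectory must be writable so processed_uris.txt persists
WorkingDirectory=/home/gork/gork-bot
ExecStart=/usr/bin/python3 bot.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now gork-bot` so the bot survives reboots.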
To safely update the bot on your production server without losing reply history:
```
./update.sh
```

This script will:

- 🔒 Backup important state files (`processed_uris.txt`, `thread_replies.json`, `.env`, `blacklist.csv`)
- 🛑 Stop the bot service
- 📥 Pull latest code from GitHub
- ♻️ Restore your state files
- 🚀 Restart the bot
Never use `git reset --hard` on production: it will delete your state files and cause the bot to reply to old posts again!
```
# Backup state files
cp processed_uris.txt processed_uris.txt.backup
cp thread_replies.json thread_replies.json.backup

# Stop bot
sudo systemctl stop gork-bot

# Update code
git stash
git pull origin main

# Restore backups
cp processed_uris.txt.backup processed_uris.txt
cp thread_replies.json.backup thread_replies.json

# Restart bot
sudo systemctl start gork-bot
```

- This version does not do any image generation (analysis works for attached photos).
- Grok 4 Fast is the default for quick vision replies, but consider a longer-context model if your threads are lengthy.
- If you want to match both “@grok” and common typo “@gork”, edit the script and change the firehose substring match accordingly.
- Restarting the bot will not duplicate responses to old posts, as long as the `processed_uris.txt` file is kept.
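A persistent cache of this shape is easy to sketch: load the set of seen URIs at startup, and append each new one as soon as it is answered. The function names below are illustrative, not the bot's actual code:

```python
import os

CACHE_FILE = "processed_uris.txt"

def load_processed(path=CACHE_FILE):
    """Load the set of already-answered post URIs (empty set on first run)."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def mark_processed(uri, seen, path=CACHE_FILE):
    """Record a URI in memory and append it to disk, so a restart
    picks up exactly where the bot left off."""
    seen.add(uri)
    with open(path, "a") as f:
        f.write(uri + "\n")
```

Appending one line per reply keeps the file crash-safe: even if the process dies mid-loop, every reply already sent is already on disk.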
```
BLUESKY_HANDLE=your-handle.bsky.social
BLUESKY_PASSWORD=your-bluesky-app-password
OPENROUTER_KEY=sk-...your openrouter api key...
OPENROUTER_MODEL=x-ai/grok-4-fast:free
OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free
```