
Bluesky Grok Bot

Tag @gork.botsky.social on Bluesky to interact!

Amazing prompt written by Elliot!

Features

  • Responds to all mentions and replies on Bluesky (via notifications)
  • Uses OpenRouter for text + vision replies with a configurable model (defaults to x-ai/grok-4-fast:free)
  • A content moderation layer uses a lightweight model to check replies for inappropriate references to sensitive topics
  • Understands image attachments when the latest post includes photos (vision ready when your model supports it)
  • Reads image alt text so described media affects the reply context
  • Thread-aware: Replies with context of the entire conversation
  • Persistent cache: Prevents duplicate replies (even if restarted)
  • Guards against unsafe phrases with an editable blacklist.csv
  • Outputs Bluesky-native mentions thanks to automatic mention facets
  • Ships with a local CLI simulator to test replies before posting
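The mention-facet feature above can be sketched roughly as follows. This is an illustrative example, not the bot's actual code: the function name is made up, and resolving the handle to a DID is assumed to happen elsewhere. Facet indices are byte offsets into the UTF-8 encoded post text.

```python
def mention_facet(text: str, handle: str, did: str):
    """Build a Bluesky-native mention facet for `@handle` inside `text`.

    Returns None if the mention is not found. The `did` must already be
    resolved from the handle (resolution not shown here).
    """
    marker = ("@" + handle).encode("utf-8")
    start = text.encode("utf-8").find(marker)
    if start == -1:
        return None
    return {
        "index": {"byteStart": start, "byteEnd": start + len(marker)},
        "features": [
            {"$type": "app.bsky.richtext.facet#mention", "did": did}
        ],
    }
```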

Setup

  1. Clone this repository

  2. Create a .env file with your credentials:

    BLUESKY_HANDLE=your-bluesky-handle
    BLUESKY_PASSWORD=your-bluesky-password
    OPENROUTER_KEY=your-openrouter-api-key
    OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free
    

    Tip: BLUESKY_HANDLE can be entered as name.bsky.social, with a leading @, or as a https://bsky.app/profile/... URL. The bot normalizes it automatically before login.

  3. Install dependencies (requires Python 3.8+):

    pip install -r requirements.txt
    

    This installs the required dependencies, including websockets, requests, atproto, and python-dotenv.

  4. Run the bot:

    python bot.py
    
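The handle normalization mentioned in the tip above can be sketched like this. The function name and exact rules are assumptions for illustration, not the bot's actual implementation:

```python
import re

def normalize_handle(raw: str) -> str:
    """Normalize a user-supplied Bluesky handle to bare `name.domain` form.

    Accepts `name.bsky.social`, `@name.bsky.social`, or a
    `https://bsky.app/profile/name.bsky.social` URL.
    """
    handle = raw.strip()
    # Strip a profile URL prefix if present
    match = re.match(r"https?://bsky\.app/profile/([^/?#]+)", handle)
    if match:
        handle = match.group(1)
    # Drop a leading @
    return handle.lstrip("@")
```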

Choosing an OpenRouter model

Set the OPENROUTER_MODEL environment variable (or add it to .env) to point at any model that OpenRouter exposes. The default is x-ai/grok-4-fast:free because it is fast and supports vision, but it can lose coherence on very long threads, so feel free to switch to a model with a longer context window if you need it.

Examples:

# fast vision default
OPENROUTER_MODEL=x-ai/grok-4-fast:free

# alternate text-only option
# OPENROUTER_MODEL=anthropic/claude-3-haiku

Make sure the model you choose supports vision (image inputs) if you want the bot to reason about attached photos.

Content Moderation

The bot includes a two-stage reply generation process to prevent inappropriate references to sensitive topics:

  1. Primary reply generation: Uses your chosen OPENROUTER_MODEL to generate a witty, chaotic response
  2. Moderation check: Uses a lightweight OPENROUTER_MODERATION_MODEL to analyze the generated reply against the thread context

The moderation layer checks if the reply inappropriately mentions self-harm, suicide, or extreme violence terms when those topics appear in the thread context. Context-aware examples:

  • "rope" is flagged if the thread discusses suicide, but allowed in discussions about climbing
  • "hanging" is flagged for self-harm contexts, but allowed in phrases like "hanging out"

If a reply is flagged, the bot automatically regenerates a new response with additional instructions to avoid sensitive topics. If regeneration fails after multiple attempts, a safe fallback reply is used instead.
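The generate-then-moderate loop described above can be sketched as below. The function names and the `generate`/`moderate` callables are stand-ins for the actual OpenRouter calls, and the fallback text is invented for illustration:

```python
def generate_safe_reply(thread_context, generate, moderate,
                        max_attempts=3,
                        fallback="i'm gonna sit this one out"):
    """Two-stage reply generation: generate a reply, then moderation-check it.

    `generate(context, extra_instructions)` returns a candidate reply;
    `moderate(context, reply)` returns True when the reply is safe to post.
    Flagged replies are regenerated with extra safety instructions, and a
    safe fallback is used if every attempt is flagged.
    """
    extra = ""
    for _ in range(max_attempts):
        reply = generate(thread_context, extra)
        if moderate(thread_context, reply):
            return reply
        # Flagged: retry with additional safety instructions
        extra = "Avoid any reference to sensitive topics in the thread."
    return fallback
</antml```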

Configure the moderation model in your .env:

OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free

The default moderation model is fast and inexpensive while providing effective content safety checks.

Guardrails with blacklist.csv

The bot checks its reply against blacklist.csv before posting. Each row is a comma-separated phrase and optional reason, e.g.:

kill yourself,explicit self-harm directive

If any phrase is found (case-insensitive, matching consecutive words), the response is replaced with a safe fallback and the incident is logged. Edit the file to add your own phrases and save; the updated list is reloaded on the next loop.

To test a snippet against the blacklist without running the bot, use:

python bot.py --check-blacklist "sample text here"
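The consecutive-word matching described above can be sketched as follows. This is a minimal illustration of the matching rule, not the bot's actual code; the tokenization regex is an assumption:

```python
import csv
import io
import re

def load_blacklist(csv_text):
    """Parse blacklist rows of the form `phrase,optional reason`."""
    rows = csv.reader(io.StringIO(csv_text))
    return [row[0].strip().lower() for row in rows if row and row[0].strip()]

def find_blacklisted(reply, phrases):
    """Return the first blacklisted phrase whose words appear as a
    consecutive, case-insensitive run in the reply, or None."""
    # Tokenize on word characters so punctuation doesn't block a match
    words = re.findall(r"[a-z0-9']+", reply.lower())
    for phrase in phrases:
        target = phrase.split()
        n = len(target)
        for i in range(len(words) - n + 1):
            if words[i:i + n] == target:
                return phrase
    return None
```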

Local simulation CLI

Quickly test how the bot would respond without touching Bluesky:

  • Interactive prompts: python bot.py --simulate
  • From a JSON thread file: python bot.py --simulate-file thread.json

The JSON format is a list of posts (oldest → newest):

[
  {"handle": "alice.bsky.social", "text": "hey @gork what's good"},
  {"handle": "bob.bsky.social", "text": "chiming in with chaos"}
]

Optional fields per post: images (list or count) and alt_texts (list).
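A thread in this JSON format might be rendered into plain-text model context roughly like this. The rendering below is illustrative, not the bot's exact prompt format; only the field names come from the README:

```python
import json

def render_thread(posts):
    """Render a simulated thread (oldest → newest) as a plain-text
    context block, including any image alt text."""
    lines = []
    for post in posts:
        line = f"@{post['handle']}: {post['text']}"
        for alt in post.get("alt_texts") or []:
            line += f" [image: {alt}]"
        lines.append(line)
    return "\n".join(lines)

def render_thread_file(path):
    """Load a thread.json file (same format as --simulate-file) and render it."""
    with open(path) as f:
        return render_thread(json.load(f))
```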

How It Works

  • Listens for direct mentions and replies to your bot on Bluesky using the notifications API.
  • Fetches the context of the whole thread for the most relevant and chaotic AI response.
  • All replies (text + vision) are generated by your chosen OpenRouter model (default: x-ai/grok-4-fast:free).
  • A lightweight moderation model checks each reply for inappropriate references to sensitive topics before posting.
  • If a reply is flagged, it's automatically regenerated with additional safety instructions.
  • Prevents duplicate replies with a persistent log file (processed_uris.txt).
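The duplicate-reply guard based on processed_uris.txt can be sketched as below. The function names are invented for illustration; only the file name and append-only behavior come from the README:

```python
import os

def load_processed(path="processed_uris.txt"):
    """Load previously handled post URIs so restarts don't re-reply."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def mark_processed(uri, seen, path="processed_uris.txt"):
    """Record a URI both in memory and in the append-only log.

    Returns False if the URI was already processed, True otherwise.
    """
    if uri in seen:
        return False
    seen.add(uri)
    with open(path, "a") as f:
        f.write(uri + "\n")
    return True
```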

Deployment

  • This bot is platform-agnostic: you can run it on your own server, a Raspberry Pi, or any service that supports persistent Python processes (such as Railway or Fly.io).
  • Make sure your process can run continuously and has a persistent filesystem for the cache file.

Safe Updates on Production (Raspberry Pi/Server)

To safely update the bot on your production server without losing reply history:

./update.sh

This script will:

  1. 🔒 Backup important state files (processed_uris.txt, thread_replies.json, .env, blacklist.csv)
  2. 🛑 Stop the bot service
  3. 📥 Pull latest code from GitHub
  4. ♻️ Restore your state files
  5. 🚀 Restart the bot

Never use git reset --hard on production: it will delete your state files and cause the bot to reply to old posts again!

Manual Update (if update.sh doesn't work)

# Backup state files
cp processed_uris.txt processed_uris.txt.backup
cp thread_replies.json thread_replies.json.backup

# Stop bot
sudo systemctl stop gork-bot

# Update code
git stash
git pull origin main

# Restore backups
cp processed_uris.txt.backup processed_uris.txt
cp thread_replies.json.backup thread_replies.json

# Restart bot
sudo systemctl start gork-bot

Notes

  • This version does not generate images; it only analyzes photos attached to posts.
  • Grok 4 Fast is the default for quick vision replies, but consider a longer-context model if your threads are lengthy.
  • If you want to match both “@grok” and the common typo “@gork”, edit the script and adjust the mention substring match accordingly.
  • Restarting the bot will not duplicate responses to old posts, as long as the processed_uris.txt file is kept.

Example .env

BLUESKY_HANDLE=your-handle.bsky.social
BLUESKY_PASSWORD=your-bluesky-app-password
OPENROUTER_KEY=sk-...your openrouter api key...
OPENROUTER_MODEL=x-ai/grok-4-fast:free
OPENROUTER_MODERATION_MODEL=meta-llama/llama-3.2-3b-instruct:free

About

gork bot on bluesky with openrouter's api!
