# AutoTweet

AutoTweet is a Python application that leverages the Llama 3 language model to generate technical tweets on various topics. It integrates with ntfy.sh for a push-notification-based confirmation workflow before posting to X (formerly Twitter).
## Contents

- [Features](#features)
- [How it Works](#how-it-works)
- [Setup](#setup)
- [Usage](#usage)
- [Configuration](#configuration)
- [Dependencies](#dependencies)
- [Running as a Systemd Service (Linux)](#running-as-a-systemd-service-linux)
- [Contributing](#contributing)
## Features

- Automated Tweet Generation: Uses the `meta-llama/Meta-Llama-3-8B-Instruct` model to create concise technical tweets.
- X Integration: Posts approved tweets directly to your X account.
- Notification-based Confirmation:
  - Sends a push notification via a self-hosted or public `ntfy.sh` server to your device.
  - Allows you to "Approve ✅", "Discard ❌", or "Re-generate 🔁" the tweet directly from the notification.
- Direct Posting Option: A command-line argument (`--force-post`) allows bypassing the confirmation step for direct posting.
- Configurable Topics: Easily customize the list of topics for tweet generation.
- Adjustable Tweet Frequency: Control how often the script attempts to generate and post a tweet.
- CUDA Support: Utilizes GPU for faster model inference if a CUDA-enabled GPU is available.
## How it Works

- Model Loading: The Llama 3 model and tokenizer are loaded from Hugging Face.
- Topic Selection: A random topic is chosen from a predefined list.
- Tweet Generation: A prompt is constructed, and the Llama 3 model generates a short technical tweet (under 280 characters) about the selected topic.
- Confirmation Request: Unless `--force-post` is used, a notification with action buttons is sent to your configured `ntfy.sh` topic. The script then listens on the response topic for your decision. If the `ntfy.sh` environment variables are not correctly set, the script will raise an error.
- Action Based on Confirmation:
  - Approve: The tweet is posted to X.
  - Discard: The tweet is not posted.
  - Re-generate: The script attempts to generate a new tweet for the same topic.
- Scheduling: After an action (or inaction), the script waits for a configurable interval (`TWEET_TIMEGAP_SECS`) before starting the cycle again.
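To make the generation step concrete, here is a minimal sketch using the `transformers` text-generation pipeline. It is illustrative only: the `generate_tweet` name, prompt wording, and sampling parameters are assumptions, not the project's actual code.

```python
import random

import torch
from transformers import pipeline

# Load once; `device_map="auto"` places the model on the GPU when CUDA is available.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def generate_tweet(topic: str) -> str:
    """Ask the model for a short technical tweet about `topic` (illustrative prompt)."""
    messages = [
        {"role": "system", "content": "You write concise technical tweets under 280 characters."},
        {"role": "user", "content": f"Write a technical tweet about {topic}."},
    ]
    out = generator(messages, max_new_tokens=80, do_sample=True, temperature=0.8)
    # The chat pipeline returns the full conversation; the last message is the reply.
    tweet = out[0]["generated_text"][-1]["content"].strip()
    return tweet[:280]  # hard cap as a safety net

print(generate_tweet(random.choice(["kubernetes", "docker", "golang"])))
```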
## Setup

- Clone the Repository:

  ```bash
  git clone https://github.com/akshayxml/autotweet.git
  cd autotweet
  ```
- Install Dependencies: Make sure you have Python 3.x installed. It's highly recommended to create and activate a virtual environment first:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

  Then, install the required packages:

  ```bash
  pip install -r requirements.txt
  # For CUDA support with PyTorch, ensure your PyTorch installation matches your CUDA version.
  # See: https://pytorch.org/get-started/locally/
  ```
- Hugging Face Login/Token: To download and use the Llama 3 model, you need to:
  - Accept the Llama 3 license on its Hugging Face model card.
  - Log in via the Hugging Face CLI:

    ```bash
    huggingface-cli login
    ```

  - Alternatively, you can set the `HF_TOKEN` environment variable with your Hugging Face access token.
- Environment Variables: Create a `.env` file in the project root by copying the provided `.env.example` file (`cp .env.example .env`) and then fill in your actual values. Alternatively, you can set these environment variables directly in your system. (A sketch of how the script can load and validate these variables follows this list.)

  - X API Credentials (Required):

    ```bash
    X_CONSUMER_KEY="your_consumer_key"
    X_CONSUMER_SECRET="your_consumer_secret"
    X_ACCESS_TOKEN="your_access_token"
    X_ACCESS_TOKEN_SECRET="your_access_token_secret"
    X_BEARER_TOKEN="your_bearer_token"
    ```

  - ntfy.sh Configuration (Required, unless using `--force-post`):

    ```bash
    NTFY_SERVER="https://ntfy.sh"  # Optional, defaults to the public ntfy.sh. Use your own if self-hosting.
    NTFY_CONFIRM_TOPIC="your_unique_confirm_topic_name"    # e.g., autotweet_confirm_myuser
    NTFY_RESPONSE_TOPIC="your_unique_response_topic_name"  # e.g., autotweet_response_myuser
    ```

    Note: Ensure `NTFY_CONFIRM_TOPIC` and `NTFY_RESPONSE_TOPIC` are unique and private to you if using the public `ntfy.sh` server.

  - Hugging Face Token (Optional, if not using `huggingface-cli login`):

    ```bash
    HF_TOKEN="your_hugging_face_read_token"
    ```
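As noted above, here is a rough illustration of how these variables can be loaded and checked with `python-dotenv`. The exact handling in `main.py` may differ; `validate_env` is a hypothetical helper.

```python
import os
import sys

from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory

REQUIRED_X_VARS = [
    "X_CONSUMER_KEY", "X_CONSUMER_SECRET",
    "X_ACCESS_TOKEN", "X_ACCESS_TOKEN_SECRET", "X_BEARER_TOKEN",
]
NTFY_VARS = ["NTFY_CONFIRM_TOPIC", "NTFY_RESPONSE_TOPIC"]

def validate_env(force_post: bool) -> None:
    """Fail fast with a clear message instead of erroring mid-run."""
    missing = [v for v in REQUIRED_X_VARS if not os.getenv(v)]
    # The ntfy topics are only needed when the confirmation workflow is active.
    if not force_post:
        missing += [v for v in NTFY_VARS if not os.getenv(v)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

validate_env(force_post="--force-post" in sys.argv)
```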
## Usage

- Ensure all prerequisites and environment variables are set up.
- Run the main script:
  - With ntfy.sh confirmation (default), making sure your `ntfy.sh` environment variables are set:

    ```bash
    python main.py
    ```

  - To skip confirmation and post directly:

    ```bash
    python main.py --force-post
    ```

- The script will start generating tweets.
- If not using `--force-post`, subscribe to your `NTFY_CONFIRM_TOPIC` on your phone/device using the ntfy app or web client. You will receive notifications to approve, discard, or regenerate tweets (see the sketch below).
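To make that approve/discard/regenerate round trip concrete, here is a hedged sketch of the ntfy.sh interaction. The action labels, button wiring, and the `ask_for_confirmation` helper are illustrative assumptions; the real script's message format may differ.

```python
import json
import os

import requests

NTFY_SERVER = os.getenv("NTFY_SERVER", "https://ntfy.sh")
CONFIRM_TOPIC = os.environ["NTFY_CONFIRM_TOPIC"]
RESPONSE_TOPIC = os.environ["NTFY_RESPONSE_TOPIC"]

def ask_for_confirmation(tweet: str) -> str:
    """Publish the draft tweet with action buttons, then wait for the reply."""
    # Each button POSTs its label to the response topic when tapped
    # (ntfy's short "Actions" header format: action, label, url, body=...).
    actions = "; ".join(
        f"http, {label}, {NTFY_SERVER}/{RESPONSE_TOPIC}, body={label}"
        for label in ("approve", "discard", "regenerate")
    )
    requests.post(
        f"{NTFY_SERVER}/{CONFIRM_TOPIC}",
        data=tweet.encode("utf-8"),
        headers={"Title": "Approve this tweet?", "Actions": actions},
    )
    # Stream the response topic until a message (the pressed label) arrives.
    with requests.get(f"{NTFY_SERVER}/{RESPONSE_TOPIC}/json", stream=True) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            event = json.loads(line)
            if event.get("event") == "message":
                return event["message"]  # "approve", "discard", or "regenerate"
```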
## Configuration

You can modify the following in `main.py`:

- `LLAMA3_MODEL_NAME`: Change the Llama 3 model variant if needed (ensure compatibility).
- `TWEET_TIMEGAP_SECS`: Adjust the time interval (in seconds) between tweet generation attempts. The default is 12 hours:

  ```python
  TWEET_TIMEGAP_SECS = 60 * 60 * 12  # 12 hours
  ```

- `topics`: Add or remove topics for tweet generation:

  ```python
  topics = [
      "kubernetes", "docker", "c++", "golang", "java",
      "nodejs", "redis", "python", "system design",
      "data structures and algorithms", "operating system",
      "computer networking", "databases", "kafka", "javascript",
  ]
  ```
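Putting these settings together, the overall cycle from How it Works plausibly reduces to a loop like the one below. It is simplified: `generate_tweet`, `ask_for_confirmation`, and `post_tweet` are the illustrative helpers from the other sketches in this README, not guaranteed to match `main.py`.

```python
import random
import time

while True:
    topic = random.choice(topics)  # `topics` as configured above
    decision = "regenerate"
    while decision == "regenerate":
        tweet = generate_tweet(topic)           # see the sketch under "How it Works"
        decision = ask_for_confirmation(tweet)  # see the sketch under "Usage"
    if decision == "approve":
        post_tweet(tweet)                       # see the sketch under "Dependencies"
    time.sleep(TWEET_TIMEGAP_SECS)              # wait before the next attempt
```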
## Dependencies

All Python dependencies are listed in `requirements.txt`. The main dependencies include:

- `transformers`: For interacting with Hugging Face models.
- `torch`: The deep learning framework used by the model.
- `tweepy`: For interacting with the X API (see the posting sketch below).
- `requests`: For making HTTP requests (e.g., to ntfy.sh).
- `python-dotenv`: For managing environment variables from a `.env` file.
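For reference, posting an approved tweet with tweepy's v2 `Client` might look like the following sketch (error handling omitted; `post_tweet` is an illustrative name):

```python
import os

import tweepy

# Credentials come from the environment variables described in Setup.
client = tweepy.Client(
    bearer_token=os.environ["X_BEARER_TOKEN"],
    consumer_key=os.environ["X_CONSUMER_KEY"],
    consumer_secret=os.environ["X_CONSUMER_SECRET"],
    access_token=os.environ["X_ACCESS_TOKEN"],
    access_token_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)

def post_tweet(text: str) -> None:
    """Post the approved tweet via the X API v2 endpoint."""
    response = client.create_tweet(text=text)
    print("Posted tweet id:", response.data["id"])
```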
## Running as a Systemd Service (Linux)

To run AutoTweet automatically on system boot, you can set it up as a systemd service.
- Create the service file: Create a file named `autotweet.service` in `/etc/systemd/system/` with the following content. Make sure to adjust `User`, `WorkingDirectory`, and `ExecStart` paths to match your setup.

  ```ini
  [Unit]
  Description=AutoTweet Service
  After=network.target

  [Service]
  User=your_username
  WorkingDirectory=/path/to/your/autotweet_project
  ExecStart=/path/to/your/autotweet_project/venv/bin/python3 /path/to/your/autotweet_project/main.py
  Restart=always
  RestartSec=10

  [Install]
  WantedBy=multi-user.target
  ```

  Note:
  - Replace `your_username` with the appropriate user for running the script.
  - Replace `/path/to/your/autotweet_project` with the absolute path to your project directory.
  - Ensure the Python interpreter path in `ExecStart` is correct for your virtual environment.
- Reload systemd, enable, and start the service:

  ```bash
  sudo systemctl daemon-reload
  sudo systemctl enable autotweet.service
  sudo systemctl start autotweet.service
  ```
- Check the status: You can check the status of the service using:

  ```bash
  sudo systemctl status autotweet.service
  ```

  And view logs with:

  ```bash
  journalctl -u autotweet.service -f
  ```
## Contributing

Contributions are welcome! Please feel free to submit a pull request or open an issue.