
Echolet 🗣️ Offline Voice Assistant

Echolet is a lightweight, offline voice assistant built for privacy and performance. It uses Whisper.cpp for speech recognition, Piper for ultra-fast text-to-speech, and a local LLM (via Ollama) for generating responses, all running fully on your CPU without internet access.

🎯 Ideal for Linux users who want a private, fast, and customizable voice interaction system without depending on the cloud.

📈 Project Status

Phase 1 Complete: Echolet currently supports full offline voice interaction, including:

  • Voice recording (mic input)
  • Transcription using whisper.cpp (see the sketch after this list)
  • LLM response via local models with Ollama
  • Text-to-speech output using Piper
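
For a sense of how the whisper.cpp wrapper (voice/transcribe.py) can work, here is a minimal sketch that shells out to the whisper.cpp CLI; the binary and model paths are placeholders, and the function name is illustrative:

import subprocess
from pathlib import Path

# Placeholder paths; point these at your whisper.cpp build and model
WHISPER_BIN = "/path/to/whisper.cpp/main"
WHISPER_MODEL = "/path/to/ggml-base.en.bin"

def transcribe(wav_path: str) -> str:
    # whisper.cpp expects 16 kHz mono WAV input;
    # -otxt writes "<input>.txt" next to the input, -nt drops timestamps
    subprocess.run(
        [WHISPER_BIN, "-m", WHISPER_MODEL, "-f", wav_path, "-otxt", "-nt"],
        check=True,
        capture_output=True,
    )
    return Path(wav_path + ".txt").read_text().strip()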

⚙️ Upcoming (Phase 2 and beyond):

  • Voice-controlled commands (open apps, tell time)
  • Notes and reminders (voice-to-memory)
  • Wake word support
  • File and document search
  • Streamlit GUI

The project is fully modular and open for community contributions. Stay tuned for updates as Echolet evolves into a more complete offline personal assistant.

📦 Requirements

  • Python 3.9+
  • piper installed and in your PATH
  • Ollama (for local LLM)
  • Whisper.cpp (for speech-to-text)
  • aplay (for audio playback)

Install Python dependencies:

pip install -r requirements.txt
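
Before the first run, you can verify that the external tools above are actually reachable; a quick sketch (tool names assumed to match the binaries listed):

import shutil

# External binaries Echolet shells out to (names assumed from the list above)
for tool in ("piper", "ollama", "aplay"):
    print(f"{tool}: {'ok' if shutil.which(tool) else 'MISSING'}")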

🛠 Setup

  1. Download a Piper Voice Model
# Download model and config JSON
wget -O en_US-amy-medium.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/amy/medium/en_US-amy-medium.onnx?download=true"
wget -O en_US-amy-medium.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/amy/medium/en_US-amy-medium.onnx.json?download=true"
  • Place them in:
/core/tts/piper/
  2. Set PIPER_MODEL_PATH in config/config.py
PIPER_MODEL_PATH = "/absolute/path/to/en_US-amy-medium.onnx"
  3. Install and Run Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama run phi

You can use ollama list to see installed models.
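
Behind ollama run, Ollama also serves a local REST API that llm/query_model.py can call; a minimal standard-library sketch, assuming the default port 11434 and the phi model pulled above:

import json
import urllib.request

def query_ollama(prompt: str, model: str = "phi") -> str:
    # Ollama listens on localhost:11434 by default;
    # stream=False returns the whole answer in one JSON object
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(query_ollama("Say hello in one sentence."))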

🚀 Running Echolet

Start the current assistant:

python -m scripts.voice_assistant

Speak your prompt. Echolet will:

  1. Listen using Whisper
  2. Respond using a local LLM
  3. Speak back using Piper
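
Internally this loop amounts to chaining the modules listed in the folder structure below; a hedged sketch in which the module paths come from that structure but the function names are assumptions:

# Illustrative only: module paths match the folder structure below,
# but the function names are assumptions, not the exact API
from voice.record import record_audio
from voice.transcribe import transcribe
from llm.query_model import query_ollama
from tts.speak import speak

def main_loop():
    wav = record_audio()        # capture mic input to a WAV file
    text = transcribe(wav)      # speech-to-text via whisper.cpp
    reply = query_ollama(text)  # local LLM response via Ollama
    speak(reply)                # read the reply aloud with Piper

if __name__ == "__main__":
    main_loop()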

🧩 Folder Structure

echolet/
├── main.py                 # Entry point
├── voice/
│   ├── record.py           # Audio recording logic
│   └── transcribe.py       # Whisper.cpp wrapper
├── llm/
│   └── query_model.py      # Ask questions using Ollama
├── tts/
│   └── speak.py            # TTS using Piper or pyttsx3
├── config/
│   └── config.py           # Paths and settings
└── requirements.txt        # Python dependencies

📌 Notes

  • If Piper fails, it falls back to pyttsx3 (no model needed); see the sketch after these notes.
  • All modules are designed to run on low-end hardware without a GPU.
  • Ollama models like phi are ideal for machines with 4 GB+ of RAM.
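
The Piper-to-pyttsx3 fallback mentioned above can be sketched roughly like this, assuming Piper's --model and --output_file flags and aplay for playback (the speak function itself is illustrative):

import subprocess

PIPER_MODEL = "/absolute/path/to/en_US-amy-medium.onnx"  # same value as PIPER_MODEL_PATH

def speak(text: str) -> None:
    try:
        # Piper reads text on stdin and writes a WAV file
        subprocess.run(
            ["piper", "--model", PIPER_MODEL, "--output_file", "/tmp/echolet.wav"],
            input=text.encode(),
            check=True,
        )
        subprocess.run(["aplay", "/tmp/echolet.wav"], check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Piper binary or model unavailable: fall back to pyttsx3
        import pyttsx3
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()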

❤️ Credits

📜 License

MIT License

Created By

Muhammad Saad
