faberpines/CEDAR
CEDAR — Local, All-Inclusive AI Assistant

CEDAR turns a Dell Latitude 5430 Rugged into a fully local, privacy-first AI workstation. The app ships with a neon-styled PySide6/QML interface, orchestrates Ollama-hosted models, keeps long-term memory on-device, automates Windows tasks with guard rails, and bridges IoT environments — all without mandatory cloud calls.

Feature Highlights

  • Local LLM Hub – REST/CLI Ollama client, dropdown model switcher, and raw ollama run "model" command entry. Missing models trigger automatic pulls with progress feedback.
  • Rich Tool Belt – DuckDuckGo search + trafilatura extraction, summarized via the LLM; Windows automation for apps/input/files/system info routed through confirmation policies; IoT control through MQTT + Home Assistant adapter with registry cache; camera preview, OCR, optional ONNX detection.
  • Voice-First Interaction – “Cedar” wake word (openwakeword), VAD (webrtcvad), faster-whisper ASR, Piper TTS with optional Coqui XTTSv2 voices, live RMS pulse driving animations.
  • Responsive GUI – PySide6 + QML neon theme, Lottie/GIF animations for idle/listening/thinking/speaking/alert states, tool cards, logs, permissions editor, configuration wizard, and voice pack rights acknowledgement.
  • Memory + Safety – SQLite conversational memory, Ollama embeddings + Chroma vector store with fallback to sentence-transformers, JSON audit logs via loguru, policy-driven confirmations with two-step approvals for risky actions, sandboxed file access.
  • Packaging Pipeline – PowerShell bootstrap installer (scripts/setup.ps1), launcher (scripts/start.ps1), PyInstaller bundler (scripts/package.ps1), and clear requirements pinned to Windows-friendly wheels.
  • CI-Friendly – Headless mode stubs the mic and camera; tests cover the Ollama client, tools, and an integration path from wake-word trigger to assistant reply.
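
The Local LLM Hub talks to a locally running Ollama server over its REST API. A minimal sketch of such a client is shown below; the endpoint and port are Ollama's defaults, but this is an illustration, not CEDAR's actual implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model and return the full reply text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes to localhost, no request ever leaves the machine, which is what makes the "no mandatory cloud calls" guarantee possible.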

Hardware & OS Assumptions

  • Dell Latitude 5430 Rugged
  • Windows 11 Pro (64-bit), 64 GB RAM, 1 TB SSD
  • Local administrator rights for first-time setup (winget installs)
  • Working microphone/speakers/camera (or headless mode for testing)

Quick Start

  1. Open PowerShell as Administrator and allow local scripts if needed:
    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
  2. Clone or copy this repository to C:\Users\<you>\cedar.
  3. Bootstrap dependencies and environment:
    .\scripts\setup.ps1
  4. Launch CEDAR (starts Ollama serve if required, then the GUI):
    .\scripts\start.ps1

The first launch presents the setup wizard to confirm mic/camera, model selection/pulls, voice engine preferences, MQTT/Home Assistant configuration, and permission policies.

Daily Workflow

  • Chat Panel – Type or use the wake word “Cedar.” Transcripts appear in real time; tool cards summarize searches, automations, or IoT actions.
  • Model Panel – Dropdown lists ollama list. Enter raw commands (e.g., ollama run "llama3.1:8b-instruct") to pull/switch models; selections persist to config.json.
  • Voice Panel – Choose input/output devices, toggle wake word and VAD thresholds, manage voice packs (rights checkbox required).
  • IoT Panel – Configure MQTT broker and Home Assistant token, inspect device tree, and toggle entities with policy enforcement.
  • Permissions Panel – View/edit policy.yaml via GUI editor; two-step confirmations apply for high-impact tasks.
  • Logs Panel – Live JSON logs rendered in a readable format with download link for diagnostics.
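
The policy-driven confirmations behind the Permissions panel can be sketched as a lookup from action name to confirmation count. The action names and level strings below are illustrative assumptions, not CEDAR's actual policy.yaml schema:

```python
# Illustrative policy table; real levels are loaded from policy.yaml.
POLICY = {
    "file.delete": "two_step",   # high-impact: requires two-step approval
    "app.launch": "confirm",     # moderate: one confirmation
    "web.search": "allow",       # low-impact: runs immediately
}

def confirmations_required(action: str) -> int:
    """Map an action's policy level to the number of user confirmations.

    Unknown actions default to deny (-1), so new tools are blocked
    until a policy entry is written for them.
    """
    level = POLICY.get(action, "deny")
    return {"allow": 0, "confirm": 1, "two_step": 2}.get(level, -1)
```

Defaulting unknown actions to deny keeps the system fail-closed, which matches the two-step approval model described above.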

Memory, Data & Logs

  • SQLite database at data/memory.db stores turns, tool calls, and running summaries.
  • Chroma vector store in data/vector_store, using Ollama embeddings or sentence-transformers fallback.
  • Policy configuration in policy.yaml; runtime preferences in config.json.
  • Logs written to logs/cedar.jsonl (JSON-serialized, secrets redacted).
  • Sandbox roots configured in policy.yaml plus config.json overrides.
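
Conversational memory in data/memory.db can be sketched with the standard sqlite3 module. The table and column names here are illustrative, not CEDAR's actual schema:

```python
import sqlite3

def open_memory(path: str = ":memory:") -> sqlite3.Connection:
    """Open the conversation store and ensure the schema exists."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS turns (
               id INTEGER PRIMARY KEY,
               role TEXT NOT NULL,          -- 'user' | 'assistant' | 'tool'
               content TEXT NOT NULL,
               ts TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return con

def remember(con: sqlite3.Connection, role: str, content: str) -> None:
    con.execute("INSERT INTO turns (role, content) VALUES (?, ?)", (role, content))
    con.commit()

def recent(con: sqlite3.Connection, n: int = 10):
    """Return the last n turns, oldest first, for prompt assembly."""
    rows = con.execute(
        "SELECT role, content FROM turns ORDER BY id DESC LIMIT ?", (n,)
    ).fetchall()
    return rows[::-1]
```

Storing turns in SQLite keeps memory fully on-device and queryable for the running summaries mentioned above.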

Vision & Voice Notes

  • Camera snapshots saved to data/captures; cleanup handled by sandbox constraints.
  • OCR via pytesseract (requires Tesseract, which the setup script installs).
  • Optional ONNX YOLO detection requires placing a .onnx model in assets/models (documented in GUI).
  • Default Piper voice en_US-lessac-medium.onnx downloaded at setup; additional voice packs go into voice_packs/ and require rights confirmation.
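
The sandbox constraint that keeps captures (and other file access) inside approved directories amounts to a path-containment check. A minimal sketch, assuming a single illustrative root rather than the real policy.yaml configuration:

```python
from pathlib import Path

# Illustrative sandbox root; the real roots come from policy.yaml
# plus config.json overrides.
SANDBOX_ROOTS = [Path("data/captures")]

def in_sandbox(path: str) -> bool:
    """True if path resolves inside one of the configured sandbox roots.

    Resolving first means '..' segments cannot be used to escape a root.
    """
    p = Path(path).resolve()
    for root in SANDBOX_ROOTS:
        try:
            p.relative_to(root.resolve())
            return True
        except ValueError:
            continue
    return False
```

Resolving before comparing is the important detail: a naive string-prefix check would let data/captures/../../secret.txt slip through.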

Packaging

Generate a distributable one-folder build:

.\scripts\package.ps1

Artifacts land in dist\CEDAR\ with assets and QML bundled.

Testing & Headless Mode

  • Activate headless mode via config.json ("headless": true) or environment variable:
    $env:CEDAR_HEADLESS = "1"
    .\scripts\start.ps1
  • Run automated tests:
    venv\Scripts\Activate.ps1
    pytest

Headless mode queues mock wake word events, reads audio/text fixtures, and disables camera preview for CI pipelines.
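
The headless toggle can be sketched as an environment-variable check with a config.json fallback. The "headless" key and CEDAR_HEADLESS variable come from this README; the precedence order (env var wins) is an assumption:

```python
import json
import os

def headless_enabled(config_path: str = "config.json") -> bool:
    """Return True when headless mode is requested.

    Assumed precedence: the CEDAR_HEADLESS env var overrides config.json,
    so CI pipelines can force headless without editing files.
    """
    if os.environ.get("CEDAR_HEADLESS") == "1":
        return True
    try:
        with open(config_path) as f:
            return bool(json.load(f).get("headless", False))
    except FileNotFoundError:
        return False  # no config yet: default to full (non-headless) mode
```

Letting the env var win is convenient for CI, where setting $env:CEDAR_HEADLESS = "1" is easier than patching config.json in the checkout.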

Troubleshooting

  • Ollama pull error – Ensure Ollama is installed (winget install Ollama.Ollama), then retry ollama pull llama3.1:8b-instruct.
  • Piper voice download fails – Re-run scripts\setup.ps1 or manually download the voice to voice_packs\.
  • Wake word not triggering – Verify the microphone selection in the setup wizard; test with the Trigger Wake Word button.
  • Tool denied – Check policy.yaml and the Permissions panel for required confirmations, or adjust the policy levels.
  • Packaging fails – Ensure scripts\package.ps1 runs from the repo root and that PyInstaller is installed (the setup script handles this).

License & Etiquette

  • Default code is distributed under the MIT license.
  • Imported voice packs must respect rights; enabling them requires user confirmation.
  • External services (Home Assistant, MQTT brokers) should be configured responsibly.

Enjoy operating CEDAR completely on-device! If extending the project, consult agents.md for architectural guidance.
