DioxideAi

DioxideAi is a desktop chat interface for running local large language models through Ollama with live web context pulled directly from DuckDuckGo search results. A first-launch guide walks through the Ollama setup steps and can be reopened later from settings.

Highlights

  • Multi-chat workspace with a persistent history sidebar and one-click New Chat button.
  • Streaming responses from the Ollama /api/chat endpoint, rendered token-by-token.
  • “Thoughts” panel that exposes the assistant’s current step (searching, context retrieved, etc.).
  • Real web search integration powered by DuckDuckGo HTML scraping (no API key required).
  • Smart search planning that expands broad prompts (e.g., “latest news”) into focused queries for richer context.
  • Quick header controls (New Chat, hide chats) plus a configurable theme picker.
  • Streaming controls with a Stop button and rich Markdown rendering (code blocks, hyperlinks).
  • Drop links directly into prompts; they’re added to the context alongside web results.
  • Built-in macOS packaging with auto-updates served from GitHub Releases.
  • WinRAR-style support panel: totally free, but tips are welcome.
  • Local-first storage of every conversation in app.getPath('userData').
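The streaming highlight above can be sketched with Node 18's built-in `fetch`. Ollama's `/api/chat` endpoint replies with newline-delimited JSON chunks of the form `{"message":{"content":"…"},"done":false}`; the endpoint and chunk shape follow Ollama's documented HTTP API, while the helper names below are illustrative rather than the app's actual code:

```javascript
// Extract the token from one streamed NDJSON line of Ollama's /api/chat
// response, or return null for blank lines and the final "done" chunk.
function parseChatChunk(line) {
  const trimmed = line.trim();
  if (!trimmed) return null;
  const chunk = JSON.parse(trimmed);
  if (chunk.done) return null;
  return chunk.message?.content ?? null;
}

// Hypothetical driver: stream a reply token-by-token to a callback.
async function streamChat(model, messages, onToken) {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const decoder = new TextDecoder();
  let buffer = '';
  for await (const part of res.body) {
    buffer += decoder.decode(part, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial trailing line for next chunk
    for (const line of lines) {
      const token = parseChatChunk(line);
      if (token !== null) onToken(token);
    }
  }
}
```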

Requirements

  • macOS (tested on Apple Silicon)
  • Node.js 18+
  • Ollama installed and running (ollama serve or the background service)
  • At least one Ollama model pulled locally, e.g. ollama pull mistral or ollama pull llama3

Setup

git clone <this repo>
cd dioxideai
npm install

Ensure the Ollama service is active (ps aux | grep ollama or run ollama serve) and responds at http://localhost:11434.

Development

npm run dev

This launches the Electron app with auto-reload enabled. Edit the files under dioxideai/ and the window will refresh automatically.

Usage

  1. Open the app (npm run dev or the packaged build).
  2. Click + New Chat or pick an existing conversation from the sidebar (use Hide Chats to collapse/expand the history panel).
  3. Select one of the locally installed Ollama models (models are discovered via GET /api/tags).
  4. Type a prompt and press Send or hit Enter (use Shift+Enter for a newline).
  5. The main process decides whether to augment the prompt with live context. If it does, it plans a set of DuckDuckGo queries (broad requests like “latest news” are expanded automatically), scrapes the top snippets with cheerio, and streams a reply from Ollama’s /api/chat.
  6. Responses arrive in real time; include any http(s) links in your prompt and they’ll be added to the context automatically. Use the Stop button beside a live response to cancel long generations, and open the “Thoughts” disclosure to inspect retrieved context.
  7. Use the header pills to toggle Web Search (defaults to on) and Hide Chats. Open the gear icon to adjust additional preferences.
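Model discovery in step 3 amounts to a single GET request. The `/api/tags` endpoint and its `{ models: [{ name: … }] }` response shape come from Ollama's HTTP API; `listModelNames` is an illustrative helper, not the app's actual code:

```javascript
// Pull the model names out of Ollama's GET /api/tags response, which
// looks like: { "models": [ { "name": "mistral:latest", ... } ] }
function listModelNames(tagsResponse) {
  return (tagsResponse.models ?? []).map((m) => m.name);
}

// Hypothetical usage (requires a running Ollama at the default port):
async function discoverModels() {
  const res = await fetch('http://localhost:11434/api/tags');
  return listModelNames(await res.json());
}
```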

Chats are saved automatically and reloaded when you reopen the application.
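The snippet scraping in step 5 can be sketched without dependencies. DioxideAi uses cheerio; here a regex stands in so the idea fits in a few lines. The `result__snippet` class name is an assumption about DuckDuckGo's HTML-only endpoint and may change without notice:

```javascript
// Dependency-free sketch of snippet extraction from DuckDuckGo's
// HTML search results (the real app parses with cheerio instead).
function extractSnippets(html, limit = 6) {
  const re = /class="result__snippet"[^>]*>([\s\S]*?)<\/a>/g;
  const snippets = [];
  let match;
  while ((match = re.exec(html)) !== null && snippets.length < limit) {
    // Strip nested tags and collapse whitespace to plain text.
    const text = match[1].replace(/<[^>]+>/g, '').replace(/\s+/g, ' ').trim();
    if (text) snippets.push(text);
  }
  return snippets;
}

// Hypothetical usage against the HTML (non-JS) search endpoint:
async function searchDuckDuckGo(query, limit) {
  const res = await fetch(
    'https://html.duckduckgo.com/html/?q=' + encodeURIComponent(query)
  );
  return extractSnippets(await res.text(), limit);
}
```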

Settings Panel

Click the gear icon in the chat header to open the modal preferences panel:

  • Automatic web search – disable to keep the assistant strictly offline.
  • Open thoughts by default – auto-expand the reasoning/context drawer for each reply.
  • Max search results – slider (1–12) controlling how many snippets are collected from DuckDuckGo per prompt.
  • Theme – choose Light, Dark, Cream, or follow the system theme (updates instantly).
  • Hide chat history sidebar – collapse the previous chats panel by default.
  • Delete all chats – wipe every saved conversation (creates a fresh empty chat immediately).
  • Export conversation – save the current chat as Markdown or PDF.

Preferences persist in ~/Library/Application Support/DioxideAi/dioxideai-settings.json.

Packaging & Updates (macOS)

This project ships with an electron-builder setup that produces a drag-to-install DMG and wires in auto-updates via electron-updater.

  1. Set the build.publish block in package.json to point at your GitHub repo (or another update server). When publishing to GitHub Releases, export a GH_TOKEN with repo scope before running the build.

  2. Ensure you have a signing identity installed (Developer ID Application: …). Electron Builder will auto-discover it, or you can set CSC_IDENTITY_AUTO_DISCOVERY=false and provide CSC_NAME.

  3. Build the bundle:

    npm run dist

    Outputs land inside the dist/ directory: signed .dmg and .zip artifacts plus the .app.

  4. Notarize the DioxideAi.app and DMG (e.g., via xcrun notarytool submit --keychain-profile …). After notarization, staple the ticket: xcrun stapler staple dist/DioxideAi.dmg.

  5. Publish the generated latest-mac.yml and artifacts (DMG/ZIP) to your release endpoint. Auto-updates run only in packaged builds and will download/apply new releases automatically, prompting the user to restart once ready.

If you need a custom DMG background or icon, place assets under build/ and update the build section of package.json.
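For step 1, a minimal `build.publish` block might look like the following. The `provider: "github"` entry is electron-builder's standard GitHub Releases provider; the `appId`, `owner`, and `repo` values here are placeholders to replace with your own:

```json
{
  "build": {
    "appId": "com.example.dioxideai",
    "publish": [
      {
        "provider": "github",
        "owner": "your-github-user",
        "repo": "dioxideai"
      }
    ]
  }
}
```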

Configuration

  • Web search toggle: Controlled from the UI or programmatically via createSearchPlan helpers in main.js.
  • Context window: The scraper keeps up to searchResultLimit snippets per query (UI slider). Tune logic in performWebSearch.
  • Streaming: Responses already stream token-by-token. If you prefer batched replies, set stream: false in ask-ollama and remove the renderer’s stream listeners.
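The batched mode in the last bullet is just a flag in the request body: `stream: false` is part of Ollama's `/api/chat` contract and makes the server return one JSON object instead of newline-delimited chunks. `buildChatRequest` and `askOnce` are illustrative helpers; the real change would live in the app's `ask-ollama` handler:

```javascript
// Build an Ollama /api/chat request body. With stream: false the
// server buffers the whole completion into a single JSON response.
function buildChatRequest(model, messages, stream = false) {
  return { model, messages, stream };
}

// Hypothetical one-shot (non-streaming) call:
async function askOnce(model, messages) {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  const data = await res.json(); // single object, not a stream
  return data.message?.content ?? '';
}
```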

Data Storage

Chat logs are serialized to JSON at:

  • macOS: ~/Library/Application Support/DioxideAi/dioxideai-chats.json

Preferences live alongside chats in ~/Library/Application Support/DioxideAi/dioxideai-settings.json.

Support

The app follows the WinRAR monetisation strategy—use it as much as you like, and chip in only if it earns a spot in your workflow.

  • Website: relecker.com
  • Bitcoin: bc1qzs9m9ugzrzqsfdmhghlqvax0mdxqledn77v068

To clear the history and reset all settings, delete the JSON files listed under Data Storage.

Troubleshooting

  • No models listed: Confirm Ollama is running and models are installed (ollama list).
  • Slow responses: Larger models take longer to load; the first query may be slow while the model warms up.
  • Blocked network: DuckDuckGo HTML endpoint must be reachable; firewalls or VPNs can interfere with scraping.
  • Packaging issues: npm run dist uses electron-builder; ensure the signing identity is available and that publish is configured before shipping auto-updates.

Future Enhancements

  • Local vector store to ground models on personal notes or documents.
  • Compact density modes and typography options.
  • Per-model defaults, temperature controls, and advanced Ollama parameters.

Project Structure

dioxideai/
├── main.js        # Electron main process, Ollama + search integration
├── preload.js     # Secure IPC bridge for renderer
├── renderer.js    # Chat UI logic
├── index.html     # Renderer markup
├── styles.css     # Renderer styles
├── package.json   # Scripts and dependencies
└── README.md

License

MIT © Your Name
