DioxideAi is a desktop chat interface for running local large language models through Ollama with live web context pulled directly from DuckDuckGo search results. A first-launch guide walks through the Ollama setup steps and can be reopened later from settings.
- Multi-chat workspace with a persistent history sidebar and one-click New Chat button.
- Streaming responses from the Ollama `/api/chat` endpoint, rendered token-by-token (sketched just after this list).
- “Thoughts” panel that exposes the assistant’s current step (searching, context retrieved, etc.).
- Real web search integration powered by DuckDuckGo HTML scraping (no API key required).
- Smart search planning that expands broad prompts (e.g., “latest news”) into focused queries for richer context.
- Quick header controls (New Chat, hide chats) plus a configurable theme picker.
- Streaming controls with a Stop button and rich Markdown rendering (code blocks, hyperlinks).
- Drop links directly into prompts; they’re added to the context alongside web results.
- Built-in macOS packaging with auto-updates served from GitHub Releases.
- WinRAR-style support panel: totally free, but tips are welcome.
- Local-first storage of every conversation in `app.getPath('userData')`.
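For orientation, streaming from Ollama's `/api/chat` endpoint boils down to reading newline-delimited JSON chunks. The sketch below is a minimal standalone example (Node 18+ global fetch; the model name is just an illustration), not the app's actual `main.js`:

```js
// Minimal sketch of token-by-token streaming from Ollama's /api/chat
// (Node 18+, global fetch). The model name is only an example.
async function streamChat(messages, model = 'mistral') {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: true }),
  });

  // Ollama streams newline-delimited JSON objects; each carries a token in
  // message.content until a final object with done: true arrives.
  const decoder = new TextDecoder();
  let buffered = '';
  for await (const chunk of res.body) {
    buffered += decoder.decode(chunk, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      const part = JSON.parse(line);
      if (part.message?.content) process.stdout.write(part.message.content);
      if (part.done) return;
    }
  }
}

streamChat([{ role: 'user', content: 'Hello!' }]);
```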
- macOS (tested on Apple Silicon)
- Node.js 18+
- Ollama installed and running (`ollama serve` or the background service)
- At least one Ollama model pulled locally, e.g. `ollama pull mistral` or `ollama pull llama3`
```bash
git clone <this repo>
cd dioxideai
npm install
```

Ensure the Ollama service is active (`ps aux | grep ollama` or run `ollama serve`) and responds at http://localhost:11434.

```bash
npm run dev
```

This launches the Electron app with auto-reload enabled. Edit the files under `dioxideai/` and the window will refresh automatically.
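If you want to confirm Ollama is reachable before launching the app, you can hit the same `GET /api/tags` endpoint the app uses for model discovery. This is a standalone sketch, not part of the repo:

```js
// Quick sanity check: is Ollama answering on the default port, and which
// models are installed? (Node 18+, global fetch; not part of the app.)
async function checkOllama() {
  try {
    const res = await fetch('http://localhost:11434/api/tags');
    const { models } = await res.json();
    console.log(
      'Ollama is up. Installed models:',
      models.map((m) => m.name).join(', ') || '(none; run `ollama pull <model>` first)'
    );
  } catch (err) {
    console.error('Ollama is not reachable at http://localhost:11434:', err.message);
  }
}

checkOllama();
```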
- Open the app (`npm run dev` or the packaged build).
- Click + New Chat or pick an existing conversation from the sidebar (use Hide Chats to collapse/expand the history panel).
- Select one of the locally installed Ollama models (models are discovered via `GET /api/tags`).
- Type a prompt and press Send or hit Enter (use Shift+Enter for a newline).
- The main process decides whether to augment the prompt with live context. If it does, it plans a set of DuckDuckGo queries (broad requests like “latest news” are expanded automatically), scrapes the top snippets with `cheerio`, and streams a reply from Ollama’s `/api/chat` (see the scraping sketch below).
- Responses arrive in real time; include any http(s) links in your prompt and they’ll be added to the context automatically. Use the Stop button beside a live response to cancel long generations, and open the “Thoughts” disclosure to inspect retrieved context.
- Use the header pills to toggle Web Search (on by default) and Hide Chats, and click the gear icon to adjust additional preferences.
Chats are saved automatically and reloaded when you reopen the application.
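For reference, the scraping half of that flow reduces to fetching DuckDuckGo's HTML endpoint and pulling snippet text with cheerio. The endpoint URL and CSS selectors below are assumptions, and the app's own `performWebSearch` may differ:

```js
// Sketch of snippet scraping from DuckDuckGo's HTML endpoint with cheerio.
// The endpoint and the .result__snippet selector are assumptions; no API key
// is involved either way.
const cheerio = require('cheerio');

async function searchSnippets(query, limit = 6) {
  const url = `https://html.duckduckgo.com/html/?q=${encodeURIComponent(query)}`;
  const res = await fetch(url, {
    headers: { 'User-Agent': 'Mozilla/5.0' }, // bare requests are sometimes rejected
  });
  const $ = cheerio.load(await res.text());

  const snippets = [];
  $('.result__snippet').each((_i, el) => {
    if (snippets.length < limit) snippets.push($(el).text().trim());
  });
  return snippets;
}

searchSnippets('latest electron release').then(console.log);
```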
Click the gear icon in the chat header to open the modal preferences panel:
- Automatic web search – disable to keep the assistant strictly offline.
- Open thoughts by default – auto-expand the reasoning/context drawer for each reply.
- Max search results – slider (1–12) controlling how many snippets are collected from DuckDuckGo per prompt.
- Theme – choose Light, Dark, Cream, or follow the system theme (updates instantly).
- Hide chat history sidebar – collapse the previous chats panel by default.
- Delete all chats – wipe every saved conversation (creates a fresh empty chat immediately).
- Export conversation – save the current chat as Markdown or PDF.
Preferences persist in ~/Library/Application Support/DioxideAi/dioxideai-settings.json.
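Under the hood this is plain JSON in Electron's user-data directory. A minimal read/write sketch is shown below; the file name matches the path above, but the keys and defaults are illustrative rather than the app's exact schema:

```js
// Sketch of how settings could be persisted in userData as JSON.
// The keys and defaults here are illustrative, not the app's real schema.
const { app } = require('electron');
const fs = require('fs');
const path = require('path');

const SETTINGS_FILE = path.join(app.getPath('userData'), 'dioxideai-settings.json');
const DEFAULTS = { webSearch: true, openThoughts: false, searchResultLimit: 6, theme: 'system' };

function loadSettings() {
  try {
    return { ...DEFAULTS, ...JSON.parse(fs.readFileSync(SETTINGS_FILE, 'utf8')) };
  } catch {
    return { ...DEFAULTS }; // first run or unreadable file: fall back to defaults
  }
}

function saveSettings(settings) {
  fs.writeFileSync(SETTINGS_FILE, JSON.stringify(settings, null, 2));
}
```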
This project ships with an electron-builder setup that produces a drag-to-install DMG and wires in auto-updates via electron-updater.
- Set the `build.publish` block in `package.json` to point at your GitHub repo (or another update server). When publishing to GitHub Releases, export a `GH_TOKEN` with repo scope before running the build.
- Ensure you have a signing identity installed (`Developer ID Application: …`). Electron Builder will auto-discover it, or you can set `CSC_IDENTITY_AUTO_DISCOVERY=false` and provide `CSC_NAME`.
- Build the bundle: `npm run dist`
- Outputs land inside the `dist/` directory: signed `.dmg` and `.zip` artifacts plus the `.app`.
- Notarize the `DioxideAi.app` and DMG (e.g., via `xcrun notarytool submit --keychain-profile …`). After notarization, staple the ticket: `xcrun stapler staple dist/DioxideAi.dmg`.
- Publish the generated `latest-mac.yml` and artifacts (DMG/ZIP) to your release endpoint. Auto-updates run only in packaged builds and will download/apply new releases automatically, prompting the user to restart once ready (see the wiring sketch below).
If you need a custom DMG background or icon, place assets under build/ and update the build section of package.json.
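For orientation, the auto-update side usually amounts to a few lines in the main process. The sketch below is a generic electron-updater example that assumes `build.publish` already points at your GitHub repo; the app's actual `main.js` may differ:

```js
// Minimal auto-update wiring in the Electron main process (a generic sketch,
// not necessarily what main.js does). Requires electron-updater and a
// populated build.publish config in package.json.
const { app } = require('electron');
const { autoUpdater } = require('electron-updater');

app.whenReady().then(() => {
  if (app.isPackaged) {
    // Checks the publish endpoint (e.g. GitHub Releases), downloads in the
    // background, and notifies the user when an update is ready to install.
    autoUpdater.checkForUpdatesAndNotify();
  }
});
```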
- Web search toggle: Controlled from the UI or programmatically via the `createSearchPlan` helpers in `main.js`.
- Context window: The scraper keeps up to `searchResultLimit` snippets per query (UI slider). Tune the logic in `performWebSearch`.
- Streaming: Responses already stream token-by-token. If you prefer batched replies, set `stream: false` in `ask-ollama` and remove the renderer’s stream listeners (see the sketch below).
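To illustrate the last point, a batched variant of the `ask-ollama` handler could look like the sketch below. This is hypothetical; the real handler in `main.js` streams and folds in the web-search context:

```js
// Hypothetical batched variant of the ask-ollama handler. With stream: false,
// Ollama returns one JSON object instead of NDJSON chunks, so the renderer
// needs no stream listeners. (Assumes global fetch in the main process.)
const { ipcMain } = require('electron');

ipcMain.handle('ask-ollama', async (_event, { model, messages }) => {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const data = await res.json();
  return data.message.content; // full reply in one shot
});
```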
Chat logs are serialized to JSON at:
- macOS: `~/Library/Application Support/DioxideAi/dioxideai-chats.json`

Preferences live alongside chats in ~/Library/Application Support/DioxideAi/dioxideai-settings.json. Delete those files to clear the history and reset settings.
The app follows the WinRAR monetisation strategy—use it as much as you like, and chip in only if it earns a spot in your workflow.
- Website: relecker.com
- Bitcoin: `bc1qzs9m9ugzrzqsfdmhghlqvax0mdxqledn77v068`
- No models listed: Confirm Ollama is running and models are installed (`ollama list`).
- Slow responses: Larger models take longer to load; the first query may be slow while the model warms up.
- Blocked network: The DuckDuckGo HTML endpoint must be reachable; firewalls or VPNs can interfere with scraping.
- Packaging issues: `npm run dist` uses electron-builder; ensure the signing identity is available and that `publish` is configured before shipping auto-updates.
- Local vector store to ground models on personal notes or documents.
- Compact density modes and typography options.
- Per-model defaults, temperature controls, and advanced Ollama parameters.
```
dioxideai/
├── main.js       # Electron main process, Ollama + search integration
├── preload.js    # Secure IPC bridge for renderer
├── renderer.js   # Chat UI logic
├── index.html    # Renderer markup
├── styles.css    # Renderer styles
├── package.json  # Scripts and dependencies
└── README.md
```
MIT © Your Name