An AI-powered desktop assistant that can run system commands and respond to your stupid questions.


# OBLIVION

```
 ________ __________.____    ._______   ____.___________    _______   
\_____  \\______   \    |   |   \   \ /   /|   \_____  \   \      \  
 /   |   \|    |  _/    |   |   |\   Y   / |   |/   |   \  /   |   \ 
/    |    \    |   \    |___|   | \     /  |   /    |    \/    |    \
\_______  /______  /_______ \___|  \___/   |___\_______  /\____|__  /
        \/       \/        \/                          \/         \/ 
```


A lightweight, high-performance AI assistant powered by Google's Gemini API

Oblivion is a modern C++ application that brings the power of large language models directly to your desktop without the computational overhead of running models locally. Designed with performance and efficiency in mind, it provides seamless AI assistance while maintaining minimal resource usage.


## 🌟 Key Features

### 🔥 New in v3.0 branch

  • Eliminated unnecessary API calls
  • Text-to-speech (TTS) integrated (a minimal playback sketch follows)
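
Synthesized replies are saved under `audio_output/` (see the project structure below). The exact TTS pipeline isn't documented here, but playing a saved clip on Windows can be as simple as the sketch below, assuming the file is a WAV; `playTts` and `reply.wav` are illustrative names, not part of Oblivion's API.

```cpp
// Minimal sketch (assumption: TTS output is saved as WAV under audio_output/).
// Link with winmm, e.g. add -lwinmm to the MinGW link flags.
#include <windows.h>
#include <mmsystem.h>
#include <string>

void playTts(const std::string& wavPath) {
    // SND_FILENAME: first argument is a file path; SND_SYNC: block until playback ends.
    PlaySoundA(wavPath.c_str(), nullptr, SND_FILENAME | SND_SYNC);
}

int main() {
    playTts("audio_output/reply.wav");  // hypothetical file name
}
```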

## 🛠️ Technology Stack

| Component | Technology | Purpose |
| --- | --- | --- |
| Core Language | C++17 | High-performance system programming |
| AI Provider | Google Gemini API | Advanced language model capabilities |
| HTTP Client | libcurl | Secure API communication |
| JSON Processing | nlohmann/json | Efficient data parsing |
| Build System | Make | Cross-platform compilation |
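
To see how these pieces fit together, here is a minimal, self-contained sketch of a Gemini `generateContent` call using libcurl and nlohmann/json. It is illustrative only, not the project's actual `Engine` code; `askGemini` and `writeBody` are made-up names and error handling is kept to a minimum.

```cpp
#include <curl/curl.h>
#include <nlohmann/json.hpp>   // the project bundles this header as include/json.hpp
#include <string>

// libcurl write callback: append each chunk of the HTTP response to a std::string.
static size_t writeBody(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

std::string askGemini(const std::string& apiKey, const std::string& prompt) {
    // Request body: {"contents":[{"parts":[{"text": prompt}]}]}
    nlohmann::json body;
    body["contents"][0]["parts"][0]["text"] = prompt;
    const std::string payload = body.dump();

    const std::string url =
        "https://generativelanguage.googleapis.com/v1beta/models/"
        "gemini-2.0-flash:generateContent?key=" + apiKey;

    std::string response;
    CURL* curl = curl_easy_init();
    if (!curl) return "";

    curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeBody);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    const CURLcode rc = curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK) return "";

    // The generated text lives at candidates[0].content.parts[0].text.
    auto reply = nlohmann::json::parse(response, nullptr, /*allow_exceptions=*/false);
    if (reply.is_discarded()) return "";
    return reply["candidates"][0]["content"]["parts"][0]["text"].get<std::string>();
}
```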

## 📦 Installation

### Prerequisites

  • Operating System: Windows 10/11 (Linux support in development)
  • Compiler: GCC with C++17 support
  • Internet Connection: Required for API communication
  • Gemini API Key: Get your free key here

### Step-by-Step Setup

  1. Clone the Repository

    git clone https://github.com/Iftikha/Oblivion.git
    cd Oblivion
  2. Install libcurl

    • Download it from curl.se/windows
    • Extract it to C:/libcurl/
    • Copy the DLL files into the project directory
  3. Configure the API Key: create a .env file in the project root and paste the following (a minimal loader sketch appears after this list):

    API_KEY=PASTE YOUR API KEY HERE
  4. Create Required Directories

    mkdir history
    touch history/history.txt
  5. Build and Run

    make build
    make run
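
For reference, reading the key from `.env` only takes a few lines. The sketch below assumes a plain `KEY=VALUE` format with no quoting; `loadApiKey` is an illustrative name rather than Oblivion's actual function.

```cpp
// Minimal sketch: read API_KEY from a simple KEY=VALUE .env file.
#include <fstream>
#include <string>

std::string loadApiKey(const std::string& path = ".env") {
    std::ifstream env(path);
    std::string line;
    while (std::getline(env, line)) {
        const std::string key = "API_KEY=";
        if (line.rfind(key, 0) == 0) {       // line starts with "API_KEY="
            return line.substr(key.size());  // everything after '='
        }
    }
    return "";  // key not found
}
```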

## 🚀 Quick Start

```cpp
#include <iostream>
#include <string>
#include "include/Engine.hpp"   // adjust to match the Makefile's include paths
using namespace std;

int main() {
    // Initialize Oblivion with your Gemini API key
    Engine engine;
    engine.init("your-api-key-here");

    // Send a prompt and print the reply
    cout << "You: Hello Oblivion" << endl;
    string response = engine.sendRequest("Hello Oblivion");
    cout << "Oblivion: " << response << endl;
}
```

## 🚀 Quick Start for v3.0

```sh
# Switch to the v3.0 branch
git checkout v3.0

# Build
make build

# Run
make run
```

### Sample Interaction

```
=== OBLIVION - AI Assistant ===
Model: gemini-pro
Welcome, User!

You: > run chrome
Oblivion: Executing system command...
[ Chrome launched ]

You: Write a Python script to print numbers 1-10
Oblivion: Here's a simple script:

for i in range(1, 11):
    print(i)

You: /bye
```

---

## 📁 Project Structure

```
Oblivion/
├── main.cpp                # Application entry point
├── audio_output/           # Saves the TTS audio files
├── include/
│   ├── json.hpp            # JSON library (included)
│   ├── Conversation.hpp    # History class header
│   ├── Conversation.cpp    # History management
│   ├── CommandParser.hpp   # Command parser class header
│   ├── CommandParser.cpp   # Parses and executes commands
│   ├── ResponseHandler.hpp # Response handler class header
│   ├── ResponseHanlder.cpp # Builds and manages prompts
│   ├── Engine.hpp          # Engine class header
│   └── Engine.cpp          # API communication
├── history/
│   └── history.txt         # Conversation storage
├── commands/
│   └── commands.json       # Command definitions
├── Makefile                # Build configuration
└── README.md               # This file
```
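
`CommandParser` is the piece that turns input like `run chrome` into a system call. The sketch below shows one way such a dispatcher could work, assuming `commands/commands.json` maps a command name to the shell line to execute (e.g. `{"chrome": "start chrome"}`); it is an illustration, not the project's actual parser, and `runCommand` is a made-up name.

```cpp
#include <nlohmann/json.hpp>
#include <cstdlib>
#include <fstream>
#include <string>

// Look up a command name in commands/commands.json and hand the mapped
// shell line to the OS. Returns false if the command is unknown.
bool runCommand(const std::string& name) {
    std::ifstream file("commands/commands.json");
    nlohmann::json commands = nlohmann::json::parse(file, nullptr, /*allow_exceptions=*/false);
    if (commands.is_discarded() || !commands.contains(name)) {
        return false;
    }
    // On Windows this goes through cmd.exe, so "start chrome" opens the browser.
    std::system(commands[name].get<std::string>().c_str());
    return true;
}
```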


---

## 🔧 Configuration

### Build Options
```makefile
# Build exe file
make build

# Run the exe file
make run

# Clean build files
make clean
```

### API Configuration

  • Model Selection: supports gemini-pro and gemini-2.0-flash
  • Response Length: configurable token limits (see the sketch below)
  • History Size: adjustable conversation memory
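
In the Gemini REST API, the model is selected by the endpoint path and the response length is capped through `generationConfig.maxOutputTokens` in the request body. Here is a minimal sketch of building such a body; it is illustrative, not Oblivion's configuration code, and `buildRequest` is a made-up name.

```cpp
#include <nlohmann/json.hpp>
#include <string>

// Build a generateContent request body with a configurable response-length cap.
// The model itself is chosen by the URL, e.g.
//   .../v1beta/models/gemini-2.0-flash:generateContent
nlohmann::json buildRequest(const std::string& prompt, int maxOutputTokens) {
    nlohmann::json body;
    body["contents"][0]["parts"][0]["text"] = prompt;
    body["generationConfig"]["maxOutputTokens"] = maxOutputTokens;
    return body;
}
```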

🀝 Contributing

We welcome contributions!

Development Guidelines

  • Follow C++17 standards
  • Include unit tests for new features
  • Update documentation for API changes
  • Ensure cross-platform compatibility

## 📊 Performance Metrics

| Metric | Value |
| --- | --- |
| Startup Time | < 100 ms |
| Memory Usage | ~15 MB RAM |
| Response Time | 1-3 seconds (API dependent) |
| Storage | < 1 MB per 1000 conversations |

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


## 👨‍💻 Developer

Iftikhar Ahmed
Software Engineer & AI Enthusiast


## 🙏 Acknowledgments

  • Google LLC - For the powerful Gemini API
  • nlohmann - For the excellent JSON library
  • curl team - For the reliable HTTP client library
  • C++ Community - For continuous language improvements

## 📈 Version History

### v1.0.0 (Old)

  • ✅ Core chat functionality
  • ✅ Conversation history
  • ✅ Gemini API integration
  • ✅ Windows support

### v2.0.0 (Old)

  • ✅ System commands execution

### v3.0.0 (Current)

  • 🔄 TTS integrated

### v4.0.0 (Future)

  • 🔄 Cross-platform support
  • 🔄 Plugin architecture
  • 🔄 Advanced automation

⭐ Star this repository if you find it helpful!

Made with ❤️ and C++
