A Retrieval-Augmented Generation (RAG) chatbot application built with the LangChain, LangGraph, and LangSmith frameworks.
- Retrieval-Augmented Generation (RAG): Combines retrieval techniques with generative AI to provide accurate and context-aware responses.
- Chatbot Functionality: Interactive terminal chatbot powered by LangChain and LangGraph.
- Modular Frameworks: Utilizes LangSmith for advanced debugging and monitoring of AI workflows.
- Streamed Responses: Supports streaming responses for real-time interaction.
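The streamed-response feature follows a simple pattern: iterate over chunks as the model produces them and flush each one to the terminal immediately. A minimal, runnable sketch of that pattern — a plain generator stands in for a LangChain model's `.stream()` call, so no LLM is required:

```python
# Minimal sketch of the streamed-response pattern used for real-time
# interaction. A real chatbot iterates over llm.stream(prompt); here a
# generator stands in for the model so the example runs on its own.
from typing import Iterator

def fake_stream(answer: str) -> Iterator[str]:
    """Yield the answer one token (word) at a time, like a model stream."""
    for token in answer.split():
        yield token + " "

def render_stream(chunks: Iterator[str]) -> str:
    """Print chunks as they arrive and return the full response."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # appears incrementally in a terminal
        parts.append(chunk)
    print()
    return "".join(parts)

full = render_stream(fake_stream("Hello from the RAG chatbot"))
```

Swapping `fake_stream(...)` for the chat model's stream call gives the real behavior; the rendering loop stays the same.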
- LangChain: Framework for building applications with LLMs (Large Language Models).
- LangGraph: Library for building stateful, multi-step agent workflows on top of LLMs.
- LangSmith: Platform for tracing, debugging, and monitoring LLM applications.
- Python: Core programming language for the application.
- Clone the repository:
git clone https://github.com/your-username/RAG-app.git
cd RAG-app
- Create a virtual environment:
python3 -m venv venv
source venv/bin/activate
- Install dependencies:
pip install -r requirements.txt
- Ensure the Ollama server is running and the required models (e.g., deepseek-coder) are available.
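Before launching, you can verify that the server is actually up. A stdlib-only sketch, assuming Ollama's default endpoint `http://localhost:11434`; `ollama_running` is an illustrative helper, not part of the app:

```python
# Hedged sketch: check whether a local Ollama server is reachable before
# starting the chatbot. Assumes the default endpoint http://localhost:11434;
# adjust base_url if your server listens elsewhere.
import urllib.request
import urllib.error

def ollama_running(base_url: str = "http://localhost:11434",
                   timeout: float = 2.0) -> bool:
    """Return True if the Ollama HTTP endpoint answers, False otherwise."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if not ollama_running():
    print("Ollama server not reachable -- start it with `ollama serve` "
          "and pull a model, e.g. `ollama pull deepseek-coder`.")
```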
- Start the chatbot application:
python chatbot.py
- Interact with the chatbot by typing your queries in the terminal.
- To exit the chatbot, type quit.
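The interaction loop above can be sketched as follows. `is_quit`, `answer`, and `repl` are illustrative names, not the actual functions in chatbot.py, and `answer` is a placeholder for the RAG chain call:

```python
# Minimal sketch of the terminal loop described above. The real chatbot.py
# wires each query into the RAG chain; answer() below is a placeholder
# for something like chain.invoke(query).
def is_quit(text: str) -> bool:
    """Treat 'quit' (any casing, surrounding whitespace) as the exit command."""
    return text.strip().lower() == "quit"

def answer(query: str) -> str:
    return f"(response to: {query})"  # placeholder for the RAG chain call

def repl(readline=input, write=print) -> None:
    """Read queries until the user types quit."""
    while True:
        query = readline("You: ")
        if is_quit(query):
            write("Goodbye!")
            break
        write(answer(query))

# Scripted demo so the sketch runs without a live terminal:
prompts = iter(["hello", "quit"])
log = []
repl(readline=lambda _: next(prompts), write=log.append)
```

Injecting `readline`/`write` keeps the loop testable; the real app simply calls `repl()` with the defaults.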
- Create a .env file in the project root and add these key-value pairs:
OPENAI_API_KEY=sk-your-openai-key-here
GOOGLE_API_KEY=your-gemini-api-key
LANGSMITH_API_KEY=your-langsmith-key
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_TRACING=true
OLLAMA_API_KEY="your_ollama_api_key"
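A sketch of how these values typically reach the application: many projects call python-dotenv's `load_dotenv()`, which the stdlib-only parser below approximates. `load_env` is illustrative, not the app's actual code:

```python
# Hedged sketch of loading .env key-value pairs into the process
# environment, approximating what python-dotenv's load_dotenv() does.
import os

def load_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks/comments and stripping quotes."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"').strip("'")
    return values

env = load_env(
    'LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"\n'
    "LANGSMITH_TRACING=true"
)
os.environ.update(env)  # frameworks like LangSmith read these at startup
```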