This repository contains a Retrieval-Augmented Generation (RAG) full-stack chatbot application built with Python, FastAPI, Redis, React and OpenAI's GPT-4o. The chatbot specializes in answering questions about new technology trends, powered by the latest reports from leading institutions such as the World Bank, World Economic Forum, McKinsey, Deloitte, and OECD.
The application is designed to be easily customizable, allowing you to integrate your own data sources and adapt it to different use cases.
For a detailed walkthrough of the code and the technologies used, check out this blog post: Building an AI Chatbot Powered by Your Data.
The repository is organized into two main folders:
- `backend/`: Contains the Python FastAPI backend code and a local Python version for testing.
- `frontend/`: Contains the React frontend code. It uses Vite.js as the build tool and bundler.
- Python 3.11+.
- Node.js 18+.
- Poetry (Python package manager).
- Redis Stack Server. Follow the installation and running instructions. The application requires the RedisJSON and RediSearch modules, which Redis Stack Server includes. Alternatively, you can install Redis and add the required modules yourself.
1. Navigate to the backend folder and install the Python dependencies using Poetry:

   ```
   cd backend
   poetry install
   ```

2. Create a `.env` file in the backend folder by copying the `.env.example` file provided, and set the required environment variable:

   - `OPENAI_API_KEY`: Your OpenAI API key.

3. The application uses Pydantic Settings for configuration management. You can adjust the configuration defaults in `backend/app/config.py`, or set the configuration variables directly using environment variables.

4. Navigate to the frontend folder and install the JavaScript dependencies:

   ```
   cd frontend
   npm install
   ```

5. Create a `.env.development` file in the frontend folder by copying the `.env.example` file provided; it must include the required environment variable:

   - `VITE_API_URL`: The URL to connect to the backend API.
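To illustrate the configuration precedence described above (environment variables override the defaults in `backend/app/config.py`), here is a dependency-free sketch of the same idea. The real project uses Pydantic Settings; the field names below (`openai_api_key`, `redis_url`) are assumptions for illustration, not the repo's actual settings.

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    # Hypothetical fields -- the real names live in backend/app/config.py.
    # Each field falls back to a default when the environment variable is unset.
    openai_api_key: str = field(
        default_factory=lambda: os.getenv("OPENAI_API_KEY", "")
    )
    redis_url: str = field(
        default_factory=lambda: os.getenv("REDIS_URL", "redis://localhost:6379")
    )


settings = Settings()
```

Pydantic Settings adds validation and `.env` file loading on top of this pattern, which is why the `.env` file created in step 2 is picked up automatically.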
Before running the full-stack application, you need to load the source documents and build the knowledge base. Make sure Redis Stack Server is running before executing the loading script:
```
cd backend
poetry run load
```

This script processes the documents in the `backend/data/docs` directory, creates vector embeddings, and stores them in the Redis database.
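A typical step in this kind of loading pipeline is splitting each document into overlapping chunks before embedding them. The sketch below shows the general idea; `chunk_text` and its parameter values are assumptions for illustration, not the repo's actual loader code.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, a common step before embedding.

    chunk_size and overlap are illustrative values, not the project's
    actual settings. Overlap preserves context across chunk boundaries,
    which tends to improve retrieval quality.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk would then be embedded and stored in Redis alongside its source metadata.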
You can customize this chatbot with your own data sources:
- Replace the existing PDF files in `backend/data/docs` with your own data sources.
- If needed, adjust the `process_docs` function in `backend/app/loader.py` to handle different file formats.
- Adjust the assistant prompts in `backend/app/assistants/prompts.py` for your specific use case.
- Run the `poetry run load` script as shown above.
To run the full-stack chatbot application:
1. Ensure Redis Stack Server is running.

2. Activate the virtual environment for the backend and start the backend server:

   ```
   cd backend
   poetry shell
   fastapi dev app/main.py
   ```

3. In a separate terminal, start the frontend server:

   ```
   cd frontend
   npm run dev
   ```

4. Open your web browser and visit `http://localhost:3000` to access the application.
You can run the local Python application for testing in your console using the provided Poetry script:
```
cd backend
poetry run local
```

To export all conversation chats to a JSON file in the `backend/data` directory:

```
cd backend
poetry run export
```
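Conceptually, the export command amounts to serializing the stored conversations to a JSON file, along the lines of the sketch below. The `export_chats` helper and the record fields are assumptions for illustration, not the repo's actual code.

```python
import json
from pathlib import Path


def export_chats(chats: list[dict], out_path: Path) -> Path:
    """Write conversation records to a JSON file (hypothetical helper)."""
    out_path.write_text(json.dumps(chats, indent=2), encoding="utf-8")
    return out_path
```

In the real application the records would come from Redis rather than an in-memory list.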