Disabled students face barriers to learning because inclusive resources are scarce. Deaf students struggle with audio and text-heavy content, while blind students lack access to visual material. Our platform tackles these barriers with AI-powered tools such as text-to-sign-language teaching, scene descriptions, braille conversion, and personalized learning features to make education accessible to all.
Elev8AI is an inclusive education platform designed to empower both disabled and general users. For deaf users, it offers text-to-sign language teaching by processing uploaded PDFs and audio files. Blind users can benefit from features like scene description with audio output and YouTube-to-braille conversion for accessible content. Additionally, the platform includes tools for all users, such as Generative AI-based career guidance, a "Talk to PDF" feature for interactive document exploration, and an MCQ generator that creates quizzes and evaluates responses using a Large Language Model (LLM). This platform aims to make education universally accessible and engaging for everyone.
Cloning the repository

```bash
git clone https://github.com/Hackademia-2k25/Elev8AI.git
```
API keys needed to run this project (store these in a `.env` file)

```env
GROQ_API_KEY = "<your api key>"
PHIDATA_API_KEY = "<your api key>"
```
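If the backend loads these keys in Python, a typical pattern is python-dotenv; a minimal sketch (the repo may load them differently):

```python
# pip install python-dotenv
import os

from dotenv import load_dotenv

load_dotenv()  # reads the KEY = "value" pairs from .env into the environment

GROQ_API_KEY = os.getenv("GROQ_API_KEY")
PHIDATA_API_KEY = os.getenv("PHIDATA_API_KEY")
```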
Run the Elev8AI frontend with npm

```bash
npm install
npm run build
npm run dev
```
To create and activate a virtual environment using conda

```bash
conda create -p venv python==3.12 -y
conda activate venv/
```
Run the backend (Flask files)

```bash
pip install -r requirements.txt
python <filename.py>
```
Converts uploaded PDFs and audio files into sign language to aid learning for deaf students.
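As a rough sketch of how such a pipeline can be wired up (not the project's actual implementation; the `signs/` clip directory and per-word `.mp4` files are hypothetical), text is extracted from the PDF and each word is mapped to a pre-recorded sign clip:

```python
# pip install pypdf
from pathlib import Path

from pypdf import PdfReader

SIGN_CLIPS = Path("signs")  # hypothetical folder of sign videos, e.g. signs/hello.mp4

def pdf_to_sign_clips(pdf_path: str) -> list[Path]:
    """Extract text from a PDF and return the sign-language clips to play in order."""
    reader = PdfReader(pdf_path)
    text = " ".join(page.extract_text() or "" for page in reader.pages)
    clips = []
    for word in text.lower().split():
        clip = SIGN_CLIPS / f"{word.strip('.,!?')}.mp4"
        if clip.exists():  # words without a clip could be fingerspelled instead
            clips.append(clip)
    return clips
```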
Provides audio descriptions of uploaded images to help blind users understand visual content.
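A minimal sketch of this flow, assuming an image-captioning model from Hugging Face `transformers` and `gTTS` for speech output (the model choice is an assumption, not necessarily what the repo uses):

```python
# pip install transformers pillow gtts
from gtts import gTTS
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe_image(image_path: str, out_audio: str = "description.mp3") -> str:
    """Caption an image and save a spoken version of the caption."""
    caption = captioner(image_path)[0]["generated_text"]
    gTTS(caption).save(out_audio)  # gTTS needs internet access to synthesise speech
    return caption
```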
Converts YouTube video subtitles into braille for tactile reading by blind users.
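A sketch of the idea using `youtube-transcript-api` and a Grade-1 (uncontracted) braille letter map; real braille transcription also handles digits, punctuation, and contractions, which are omitted here:

```python
# pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

# Grade-1 (uncontracted) braille cells for the letters a-z
BRAILLE = dict(zip("abcdefghijklmnopqrstuvwxyz",
                   "⠁⠃⠉⠙⠑⠋⠛⠓⠊⠚⠅⠇⠍⠝⠕⠏⠟⠗⠎⠞⠥⠧⠺⠭⠽⠵"))

def youtube_to_braille(video_id: str) -> str:
    """Fetch a video's subtitles and transliterate them to Unicode braille."""
    transcript = YouTubeTranscriptApi.get_transcript(video_id)  # classic static API
    text = " ".join(chunk["text"] for chunk in transcript).lower()
    return "".join(BRAILLE.get(ch, ch) for ch in text)
```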
Uses Groq's open-source LLM models to offer personalized career suggestions and roadmaps.
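Since the project is built on Phidata (see below), a career-guidance agent can be sketched roughly as follows; the Groq model id and the prompt wording are assumptions:

```python
# pip install phidata groq  (requires GROQ_API_KEY in the environment)
from phi.agent import Agent
from phi.model.groq import Groq

career_agent = Agent(
    model=Groq(id="llama-3.3-70b-versatile"),  # model id is an assumption
    description="You are a career counsellor who produces step-by-step learning roadmaps.",
)

career_agent.print_response(
    "I enjoy maths and assistive tech. Suggest three career paths with roadmaps."
)
```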
Allows users to interact with uploaded PDFs using conversational AI for better comprehension.
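A bare-bones sketch of such a feature, assuming `pypdf` for text extraction and the `groq` client for answering (a real implementation would chunk the document and retrieve relevant passages rather than truncating it):

```python
# pip install pypdf groq
import os

from groq import Groq
from pypdf import PdfReader

client = Groq(api_key=os.getenv("GROQ_API_KEY"))

def ask_pdf(pdf_path: str, question: str) -> str:
    """Answer a question using the PDF's text as context."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    reply = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # model id is an assumption
        messages=[
            {"role": "system", "content": "Answer strictly from the supplied document."},
            {"role": "user", "content": f"Document:\n{text[:12000]}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```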
Automatically generates multiple-choice questions from content and evaluates responses using Groq models. The platform is built on the Phidata framework, which ties these AI-powered tools together for inclusive education.
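A minimal sketch of generating and grading MCQs with the `groq` client (the model id, prompt format, and JSON schema are assumptions; production code should validate the model's output before parsing):

```python
# pip install groq
import json
import os

from groq import Groq

client = Groq(api_key=os.getenv("GROQ_API_KEY"))

def generate_mcqs(content: str, n: int = 3) -> list[dict]:
    """Ask the LLM for MCQs as a JSON list of {question, options, answer} objects."""
    prompt = (
        f"Create {n} multiple-choice questions from the text below. "
        "Reply with a JSON array only, where each item has the keys "
        '"question", "options" (four strings), and "answer".\n\n' + content
    )
    reply = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # model id is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(reply.choices[0].message.content)

def grade(mcqs: list[dict], responses: list[str]) -> int:
    """Count how many responses match the stored answers."""
    return sum(r == q["answer"] for q, r in zip(mcqs, responses))
```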
Employs OpenCV to detect hand gestures and sign language in real-time via camera input, enabling users to practice sign language interactively.
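A sketch of the camera loop: the repo cites OpenCV, and this example adds MediaPipe for hand-landmark detection, which is an assumption (the actual gesture classifier is left as a comment):

```python
# pip install opencv-python mediapipe
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)
drawer = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # MediaPipe expects RGB
    if results.multi_hand_landmarks:
        for landmarks in results.multi_hand_landmarks:
            drawer.draw_landmarks(frame, landmarks, mp.solutions.hands.HAND_CONNECTIONS)
            # a trained classifier would map the 21 landmarks to a sign label here
    cv2.imshow("Sign practice (press q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```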
- Groq for LLMs
- Phidata for the agentic AI framework
- Flask
- Next.js
- TypeScript
- shadcn/ui
We extend our heartfelt gratitude to everyone who contributed to the development of this project. A special thanks to NIT Andhra for organizing events and providing unwavering support throughout the process.