AI-powered lecture analysis platform for educators
EduVisor helps teachers improve their lecturing skills by analyzing lecture videos using AI. Get insights on speech patterns, emotional engagement, and personalized feedback.
- Video Upload - Upload lecture videos for analysis
- Emotion Recognition - Detect speech emotions using wav2vec2
- Transcription - Automatic speech-to-text via Google Cloud
- Analytics Dashboard:
- Engagement score (% of engaging segments)
- Tone modulation (vocal variety)
- Speaking pace (words per minute)
- Question count
- Visual Timeline - Interactive engagement chart
- AI Feedback - Personalized improvement suggestions from GPT-3.5
- History Tracking - Track progress across lectures
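The dashboard metrics above could be aggregated roughly as follows. This is an illustrative sketch, not the project's actual implementation: the segment fields, the set of "engaging" emotion labels, and the `compute_metrics` function name are all assumptions.

```python
# Sketch of dashboard-metric aggregation from hypothetical analysis output.
# Segment shape and emotion labels are illustrative assumptions.

def compute_metrics(segments, transcript, duration_minutes):
    """Reduce per-segment emotion labels and the transcript
    into the four dashboard metrics."""
    engaging = {"happy", "surprised", "excited"}  # assumed label set
    engagement_score = 100 * sum(
        1 for s in segments if s["emotion"] in engaging
    ) / len(segments)
    tone_modulation = len({s["emotion"] for s in segments})  # distinct tones as a vocal-variety proxy
    pace_wpm = len(transcript.split()) / duration_minutes
    question_count = transcript.count("?")
    return {
        "engagement_score": round(engagement_score, 1),
        "tone_modulation": tone_modulation,
        "pace_wpm": round(pace_wpm, 1),
        "question_count": question_count,
    }

segments = [
    {"emotion": "happy"}, {"emotion": "neutral"},
    {"emotion": "excited"}, {"emotion": "neutral"},
]
metrics = compute_metrics(segments, "Welcome back. Any questions? Great.", 0.5)
print(metrics)
# → {'engagement_score': 50.0, 'tone_modulation': 3, 'pace_wpm': 10.0, 'question_count': 1}
```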
```
EduVisor/
├── src/                          # Source code
│   ├── config/                   # Django configuration
│   │   ├── settings.py           # Project settings
│   │   ├── urls.py               # Root URL routing
│   │   ├── wsgi.py               # WSGI entry point
│   │   └── asgi.py               # ASGI entry point
│   │
│   ├── apps/                     # Django applications
│   │   ├── uploads/              # Video upload handling
│   │   │   ├── models.py         # Video model
│   │   │   ├── views.py          # Upload/preview views
│   │   │   ├── forms.py          # Upload form
│   │   │   └── urls.py           # Upload routes
│   │   │
│   │   ├── analysis/             # Lecture analysis
│   │   │   ├── views.py          # Analysis views
│   │   │   └── urls.py           # Analysis routes
│   │   │
│   │   └── lectures/             # Lecture history
│   │       ├── models.py         # Lecture model
│   │       ├── views.py          # History views
│   │       └── urls.py           # History routes
│   │
│   ├── core/                     # Core business logic
│   │   └── services/             # Service modules
│   │       ├── analyzer.py       # Main orchestrator
│   │       ├── audio.py          # Audio extraction
│   │       ├── speech.py         # Speech-to-text
│   │       ├── emotion.py        # Emotion recognition
│   │       ├── metrics.py        # Metrics calculation
│   │       ├── visualization.py  # Chart generation
│   │       ├── ai_feedback.py    # OpenAI integration
│   │       └── config.py         # Configuration
│   │
│   ├── templates/                # HTML templates
│   │   ├── base.html
│   │   ├── uploads/
│   │   ├── analysis/
│   │   └── lectures/
│   │
│   ├── static/                   # Static assets
│   │   └── css/main.css
│   │
│   └── manage.py                 # Django CLI
│
├── data/                         # Runtime data (gitignored)
│   ├── audio/                    # Temporary audio files
│   ├── media/                    # Uploaded videos
│   └── config.json               # API keys (gitignored)
│
├── requirements.txt              # Python dependencies
├── .gitignore
└── README.md
```
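The services layout above suggests a pipeline driven by `analyzer.py`: extract audio, transcribe it, classify emotions, then assemble the report. A minimal sketch of that flow, with the service calls stubbed out (the function names, signatures, and return shapes are assumptions, not the project's actual API):

```python
# Hypothetical sketch of the analyzer.py orchestration flow.
# Each stub stands in for the corresponding service module.

def extract_audio(video_path):       # audio.py: pull the audio track from the video
    return video_path.replace(".mp4", ".wav")

def transcribe(audio_path):          # speech.py: speech-to-text
    return "Welcome to today's lecture. Any questions?"

def detect_emotions(audio_path):     # emotion.py: per-segment emotion labels
    return [{"start": 0, "end": 30, "emotion": "happy"}]

def analyze_lecture(video_path):
    """Run the full pipeline: video -> audio -> transcript + emotions -> report."""
    audio_path = extract_audio(video_path)
    transcript = transcribe(audio_path)
    segments = detect_emotions(audio_path)
    return {
        "transcript": transcript,
        "segments": segments,
        "question_count": transcript.count("?"),
    }

report = analyze_lecture("data/media/lecture01.mp4")
print(report["question_count"])  # → 1
```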
- Python 3.10+
- FFmpeg (for audio processing)
```bash
git clone https://github.com/yourusername/EduVisor.git
cd EduVisor

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Create `data/config.json`:

```json
{
  "google_cloud_key": "/path/to/your/google-cloud-key.json",
  "openai_key": "sk-your-openai-api-key",
  "hugging_face_key": "hf_your-huggingface-key"
}
```

Or use environment variables:

```bash
export GOOGLE_CLOUD_KEY_PATH=/path/to/google-cloud-key.json
export OPENAI_API_KEY=sk-your-openai-api-key
export HUGGINGFACE_API_KEY=hf_your-huggingface-key
```

Apply migrations and start the server:

```bash
cd src
python manage.py migrate
python manage.py runserver
```

Navigate to: http://localhost:8000
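A loader that supports both configuration sources described above (the JSON file, with environment variables as a fallback) might look like this sketch. The key names follow `data/config.json`; the `load_api_keys` function name and fallback logic are illustrative assumptions about what `config.py` does:

```python
import json
import os
from pathlib import Path

# Map config.json keys to their environment-variable equivalents
ENV_FALLBACKS = {
    "google_cloud_key": "GOOGLE_CLOUD_KEY_PATH",
    "openai_key": "OPENAI_API_KEY",
    "hugging_face_key": "HUGGINGFACE_API_KEY",
}

def load_api_keys(config_path="data/config.json"):
    """Read keys from data/config.json, falling back to environment
    variables for any key the file does not provide."""
    keys = {}
    path = Path(config_path)
    if path.exists():
        keys = json.loads(path.read_text())
    for json_key, env_var in ENV_FALLBACKS.items():
        keys.setdefault(json_key, os.environ.get(env_var))
    return keys

keys = load_api_keys()
print(sorted(keys))  # → ['google_cloud_key', 'hugging_face_key', 'openai_key']
```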
- Upload - Go to the home page and upload your lecture video
- Preview - Review the uploaded video
- Analyze - Click "Analyze Lecture" to start AI processing
- View Results - See metrics, timeline, and AI feedback
- Track History - Access previous analyses from "Previous Lectures"
| Service | Purpose | Get Key |
|---|---|---|
| Google Cloud | Speech-to-Text | Google Cloud Console |
| OpenAI | GPT-3.5 Feedback | OpenAI Platform |
| Hugging Face | Model Downloads | Hugging Face |
- Backend: Django 4.2
- ML/AI: Transformers (wav2vec2), Google Cloud Speech, OpenAI GPT-3.5
- Audio: MoviePy, PyDub
- Visualization: Plotly
- Frontend: HTML, CSS, JavaScript
```bash
cd src
python manage.py runserver        # Start the development server
python manage.py makemigrations   # Create new migrations
python manage.py migrate          # Apply migrations
python manage.py createsuperuser  # Create an admin account
```

MIT License
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request