Fix education AI transcript, summary, and quiz generation #5344
base: main
Conversation
👋 Hi @aaditya8979! This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:
Once a valid peer review is submitted, this check will pass automatically. Thank you!
**Walkthrough**

This PR adds an AI-powered educational video feature enabling users to submit YouTube videos with automatic transcript extraction, AI-generated summaries, educational verification, and automated quiz question generation. Videos are stored with metadata, displayed in a grid, and support interactive quizzes with score tracking and history.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant User
    participant Django as Django View
    participant YouTube as YouTube API
    participant OpenAI as OpenAI API
    participant DB as Database
    participant Template as Template/UI

    User->>Django: Submit video (URL, title)
    activate Django
    Django->>Django: Extract youtube_id from URL
    Django->>YouTube: Fetch transcript
    activate YouTube
    YouTube-->>Django: Return transcript text
    deactivate YouTube
    Django->>OpenAI: Generate summary & verify educational
    activate OpenAI
    OpenAI-->>Django: Return summary + is_verified flag
    deactivate OpenAI
    Django->>OpenAI: Generate quiz questions from transcript
    activate OpenAI
    OpenAI-->>Django: Return 4-option quiz questions (up to 10)
    deactivate OpenAI
    Django->>DB: Create EducationalVideo record
    Django->>DB: Create VideoQuizQuestion records
    activate DB
    DB-->>Django: Confirm records saved
    deactivate DB
    deactivate Django
    Django-->>User: Success message + redirect
    User->>Template: View video detail page
    Template->>DB: Fetch video + quiz questions + user history
    activate DB
    DB-->>Template: Return video data + quiz questions
    deactivate DB
    Template-->>User: Render video, AI summary, quiz form
    User->>Template: Submit quiz answers (AJAX)
    activate Template
    Template->>Django: POST quiz answers
    activate Django
    Django->>Django: Calculate score & percentage
    Django->>DB: Create QuizAttempt record
    activate DB
    DB-->>Django: Confirm record saved
    deactivate DB
    Django-->>Template: Return score + percentage (JSON)
    deactivate Django
    deactivate Template
    Template->>User: Animate & display results modal
```
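As a rough illustration of the first step in the diagram, extracting `youtube_id` from a submitted URL can be done with a small regex helper. This is a sketch under assumptions (the function name and the set of accepted URL shapes are not taken from the PR):

```python
import re
from typing import Optional

def extract_youtube_id(url: str) -> Optional[str]:
    """Pull the 11-character video ID out of common YouTube URL shapes.

    Handles watch URLs (?v=...), short youtu.be links, and /embed/ URLs;
    returns None when no ID can be found.
    """
    match = re.search(r"(?:v=|/embed/|youtu\.be/)([A-Za-z0-9_-]{11})", url)
    return match.group(1) if match else None

print(extract_youtube_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
print(extract_youtube_id("https://example.com/not-a-video"))  # None
```

Note the review below suggests the model's `save()` already does this extraction, so in practice a helper like this would live in one place only.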
**Estimated code review effort**: 🎯 4 (Complex) | ⏱️ ~45 minutes
Suggested labels
**Pre-merge checks and finishing touches**

- ❌ Failed checks (4 warnings)
- ✅ Passed checks (1 passed)
- ✨ Finishing touches
- 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
**📊 Monthly Leaderboard**

Hi @aaditya8979! Here's how you rank for December 2025:
Leaderboard based on contributions in December 2025. Keep up the great work! 🚀
**❌ Pre-commit checks failed**

The pre-commit hooks found issues that need to be fixed. Please run the following commands locally to fix them:

```shell
# Install pre-commit if you haven't already
pip install pre-commit

# Run pre-commit on all files
pre-commit run --all-files

# Or run pre-commit on staged files only
pre-commit run
```

After running these commands, the pre-commit hooks will automatically fix most issues.

💡 Tip: You can set up pre-commit to run automatically on every commit by running `pre-commit install`.

For more information, see the pre-commit documentation.
```django
</p>

<div class="space-y-3 ml-11">
    {% for option, label in options %}
```
Bug: The video_detail.html template attempts to iterate over an options variable that is not supplied by the VideoDetailView context, causing a rendering error.
Severity: CRITICAL | Confidence: High
🔍 Detailed Analysis
The video_detail.html template at line 66 contains a loop {% for option, label in options %}. However, the corresponding VideoDetailView.get_context_data method does not add the options variable to the template context. When a video detail page with associated quiz questions is rendered, the template will attempt to access this undefined variable, resulting in a Django UndefinedError and preventing the page from loading.
💡 Suggested Fix
In the VideoDetailView.get_context_data method, you need to prepare the options for the template. One approach is to modify the template to construct the options from the question object directly, like {% with options=... %}. Alternatively, process the quiz_questions in the view to attach an options list to each question object before passing it to the template.
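The second approach described above (processing `quiz_questions` in the view) might look roughly like this. The `option_a`..`option_d` field names follow the model fields referenced elsewhere in this review; the helper itself is illustrative, not the PR's actual code:

```python
from types import SimpleNamespace

def attach_options(questions):
    """Attach a (label, text) options list to each quiz question object."""
    for question in questions:
        question.options = [
            ("A", question.option_a),
            ("B", question.option_b),
            ("C", question.option_c),
            ("D", question.option_d),
        ]
    return questions

# Stand-in for a VideoQuizQuestion instance; real code would pass the queryset.
demo = SimpleNamespace(option_a="HTTP", option_b="FTP", option_c="SSH", option_d="SMTP")
attach_options([demo])
print(demo.options[0])  # ('A', 'HTTP')
```

Each question object then carries an `.options` list the template can iterate with `{% for label, option in question.options %}`.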
🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: website/templates/education/video_detail.html#L66
Potential issue: The `video_detail.html` template at line 66 contains a loop `{% for
option, label in options %}`. However, the corresponding
`VideoDetailView.get_context_data` method does not add the `options` variable to the
template context. When a video detail page with associated quiz questions is rendered,
the template will attempt to access this undefined variable, resulting in a Django
`UndefinedError` and preventing the page from loading.
Did we get this right? 👍 / 👎 to inform future reviews.
Reference ID: 7720901
**❌ Tests failed**

The Django tests found issues that need to be fixed. Please review the test output below and fix the failing tests.

How to run tests locally:

```shell
# Install dependencies
poetry install --with dev

# Run all tests
poetry run python manage.py test

# Run tests with verbose output
poetry run python manage.py test -v 3

# Run a specific test
poetry run python manage.py test app.tests.TestClass.test_method
```

For more information, see the Django testing documentation.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
website/views/education.py (1)
373-439: Harden `submit_quiz` error handling and remove unreachable, undefined-code tail

Two issues in `submit_quiz`:

1. Information exposure via raw exception text

```python
except Exception as e:
    print(f"DEBUG: quiz submission error: {e}")
    return JsonResponse({"error": str(e)}, status=500)
```

Returning `str(e)` to clients can leak internal details (stack traces, DB errors, etc.), which CodeQL correctly flags as an information-exposure risk.

2. Unreachable tail code with undefined variables

The block after the `try/except`:

```python
featured_lectures = Lecture.objects.filter(section__isnull=True)
courses = Course.objects.all()
context = {"is_instructor": is_instructor, "featured_lectures": featured_lectures, "courses": courses}
return render(request, template, context)
```

is unreachable (every path above returns) and uses `is_instructor` and `template`, which are not defined in this scope.

A tighter implementation would:

- Log the exception server-side, but return a generic error message, and
- Delete the unreachable render block.

Suggested change to the exception handler and cleanup:

```diff
-    except EducationalVideo.DoesNotExist:
-        return JsonResponse(
-            {"error": "Video not found"},
-            status=404
-        )
-    except Exception as e:
-        print(f"DEBUG: quiz submission error: {e}")
-        return JsonResponse(
-            {"error": str(e)},
-            status=500
-        )
-
-    featured_lectures = Lecture.objects.filter(section__isnull=True)
-    courses = Course.objects.all()
-    context = {"is_instructor": is_instructor, "featured_lectures": featured_lectures, "courses": courses}
-    return render(request, template, context)
+    except EducationalVideo.DoesNotExist:
+        return JsonResponse({"error": "Video not found"}, status=404)
+    except Exception:
+        logger.exception("Error during quiz submission", extra={"video_id": video_id})
+        return JsonResponse(
+            {"error": "An unexpected error occurred while submitting the quiz."},
+            status=500,
+        )
```
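For context, the scoring step the sequence diagram labels "Calculate score & percentage" can be sketched as follows; the function shape and the answer/correct-option mappings are assumptions for illustration, not the PR's implementation:

```python
def score_quiz(answers: dict, correct: dict) -> tuple:
    """Compare submitted answers against correct options.

    Returns (score, total, percentage); percentage is rounded to one
    decimal place, matching the floatformat:1 display in the template.
    """
    total = len(correct)
    score = sum(1 for qid, option in correct.items() if answers.get(qid) == option)
    percentage = round(100.0 * score / total, 1) if total else 0.0
    return score, total, percentage

print(score_quiz({"1": "A", "2": "C", "3": "B"}, {"1": "A", "2": "B", "3": "B"}))  # (2, 3, 66.7)
```

Guarding the `total == 0` case avoids a division-by-zero when a video has no generated questions yet.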
🧹 Nitpick comments (5)

website/models.py (1)

3648-3703: EducationalVideo/quiz models look consistent; only minor DRY/cleanup opportunities

The model definitions and relationships align with the migrations and the new views/templates and should behave correctly. If you touch this again later, consider:

- Letting `save()` be the single source of truth for `youtube_id` (drop the manual extraction in the view), and
- Removing the local `import re` and duplicate `from django.db import models` in this section to keep imports centralized.

website/templates/education/education.html (1)

203-212: Minor dark-mode class tweak in flash message styling

The `class` attribute for messages builds `dark:` as a literal prefix and only applies it to the background class, e.g. resulting in `dark:bg-green-900 text-green-100`. If you want both background and text to respond to dark mode, consider emitting `dark:bg-green-900 dark:text-green-100` (and similarly for the error branch). This is purely cosmetic; behavior is otherwise fine.
website/views/education.py (3)

44-57: Collapse duplicate import block to keep this module readable

Lines 44-57 effectively re-import modules (`os`, `re`, `json`, Django shortcuts, messages, auth decorators, http decorators, `JsonResponse`, OpenAI, `YouTubeTranscriptApi`, models) that are already imported above. This doesn't break anything, but it does make the file harder to scan and increases the chance of subtle drift if one block is edited and the other isn't. It'd be cleaner to:

- Keep a single, consolidated import section at the top, and
- Remove this second block entirely.

976-989: VideoDetailView context is good; consider ordering quiz data if needed

`VideoDetailView` correctly provides:

- `quiz_questions` filtered by `video`, and
- `quiz_history` for the authenticated user and this video.

If you care about presentation order, you may want to make the ordering explicit:

```diff
-        context["quiz_questions"] = VideoQuizQuestion.objects.filter(video=video)
+        context["quiz_questions"] = VideoQuizQuestion.objects.filter(video=video).order_by("created_at")
 ...
-        context["quiz_history"] = QuizAttempt.objects.filter(
-            user=self.request.user, video=video
-        )
+        context["quiz_history"] = QuizAttempt.objects.filter(
+            user=self.request.user, video=video
+        ).order_by("-completed_at")
```

Right now, you rely on model Meta for attempts, but questions have no explicit ordering.

249-370: Consider moving AI pipeline to background jobs for production use

The `education_home` POST branch executes network I/O synchronously on the main request thread:

- `get_youtube_transcript()` calls YouTubeTranscriptApi
- `generate_ai_summary_and_verify()` calls the OpenAI API
- `generate_quiz_from_transcript()` calls the OpenAI API

For longer videos or API slowness, requests can block for several seconds. While acceptable for MVP, plan for:

- Moving transcript + AI work to a background job (Celery/RQ) with a pending `EducationalVideo` record
- Or adding request timeouts and user-facing error messaging for external API failures
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (8)

- Dockerfile (1 hunks)
- blt/urls.py (3 hunks)
- website/migrations/0264_educationalvideo.py (1 hunks)
- website/migrations/0265_educationalvideo_ai_summary_and_more.py (1 hunks)
- website/models.py (1 hunks)
- website/templates/education/education.html (1 hunks)
- website/templates/education/video_detail.html (1 hunks)
- website/views/education.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
website/views/education.py (1)
- website/models.py (3): `EducationalVideo` (3651-3673), `VideoQuizQuestion` (3676-3688), `QuizAttempt` (3691-3703)

blt/urls.py (2)
- website/models.py (3): `EducationalVideo` (3651-3673), `VideoQuizQuestion` (3676-3688), `QuizAttempt` (3691-3703)
- website/views/education.py (2): `submit_quiz` (375-439), `VideoDetailView` (976-989)

website/migrations/0265_educationalvideo_ai_summary_and_more.py (1)
- website/migrations/0264_educationalvideo.py (1): `Migration` (6-28)

website/migrations/0264_educationalvideo.py (1)
- website/migrations/0265_educationalvideo_ai_summary_and_more.py (1): `Migration` (8-76)
🪛 GitHub Check: CodeQL
website/views/education.py
[warning] 432-432: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (33)
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: remove_last_active_label
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: check_peer_review
- GitHub Check: remove_last_active_label
- GitHub Check: add_changes_requested_label
- GitHub Check: remove_last_active_label
- GitHub Check: auto-assign
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: check_peer_review
- GitHub Check: remove_last_active_label
- GitHub Check: add_changes_requested_label
- GitHub Check: remove_last_active_label
- GitHub Check: auto-assign
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: add_changes_requested_label
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: add_changes_requested_label
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: Run Tests
- GitHub Check: docker-test
🔇 Additional comments (3)
website/migrations/0264_educationalvideo.py (1)

1-28: EducationalVideo migration matches model and usage

The migration cleanly introduces `EducationalVideo` with fields and ordering consistent with the model and downstream views/templates. No changes needed.

blt/urls.py (1)

160-162: Video detail and quiz endpoints are wired correctly

The new `video_detail` and `submit_quiz` routes match the view signatures and the template `{% url %}` usage (`video.id`), and fit cleanly into the existing education URL space. No issues from a routing perspective. Also applies to: 731-743

website/migrations/0265_educationalvideo_ai_summary_and_more.py (1)

15-75: AI fields and quiz models migration are consistent with the models

The added `ai_summary`/`is_verified` fields and the new `QuizAttempt`/`VideoQuizQuestion` models line up with `website.models` and the new views/templates. Schema and FK choices look good.
```dockerfile
CMD ["poetry", "run", "python", "manage.py", "runserver", "0.0.0.0:8000"]

RUN pip install -r requirements.txt
RUN pip install youtube-transcript-api openai
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Find the Dockerfile
find . -name "Dockerfile" -type f 2>/dev/null | head -5

# Find dependency files
find . \( -name "pyproject.toml" -o -name "requirements.txt" \) -type f 2>/dev/null

# List repository structure
ls -la | head -20
```

Repository: OWASP-BLT/BLT
Length of output: 1303
🏁 Script executed:

```shell
#!/bin/bash
# Check if files are present and read them
if [ -f "Dockerfile" ]; then
  echo "=== DOCKERFILE ==="
  cat -n Dockerfile
fi
echo ""
if [ -f "pyproject.toml" ]; then
  echo "=== PYPROJECT.TOML ==="
  cat -n pyproject.toml
fi
echo ""
if [ -f "requirements.txt" ]; then
  echo "=== REQUIREMENTS.TXT ==="
  cat -n requirements.txt
fi
```

Repository: OWASP-BLT/BLT
Length of output: 7171
🏁 Script executed:
#!/bin/bash
# The review comment mentioned lines 70-73, let's understand the full context
# by looking at what comes before - particularly for COPY commands that would
# indicate where dependencies come from
# Look for COPY statements that might indicate site-packages transfer
echo "Searching for dependency-related patterns in Dockerfiles..."
fd -a 'Dockerfile' -x rg -n 'COPY.*site-packages|FROM.*AS|RUN.*poetry|RUN.*pip install' {} + 2>/dev/null | head -40Repository: OWASP-BLT/BLT
Length of output: 830
🏁 Script executed:
#!/bin/bash
# Check if requirements.txt exists and show youtube-transcript-api and openai declarations
echo "=== Checking requirements.txt ==="
if [ -f "requirements.txt" ]; then
grep -n "youtube-transcript-api\|openai" requirements.txt
else
echo "requirements.txt not found"
fi
echo ""
echo "=== Checking pyproject.toml for youtube-transcript-api ==="
grep -n "youtube-transcript-api" pyproject.toml || echo "NOT FOUND in pyproject.toml"
echo ""
echo "=== Checking pyproject.toml for openai ==="
grep -n "openai" pyproject.tomlRepository: OWASP-BLT/BLT
Length of output: 271
**Fix build-blocking issues and consolidate all dependencies in pyproject.toml**

Lines 72-73 have critical problems:

- Line 72 tries to install from a non-existent `requirements.txt` file, causing the build to fail
- Line 73 redundantly installs `openai` (already in `pyproject.toml` line 65) and adds `youtube-transcript-api` only at pip level instead of declaring it as a project dependency

Required fixes:

- Add `youtube-transcript-api` to `pyproject.toml` dependencies
- Remove both `RUN pip install` lines (72-73) entirely; all dependencies should be installed by Poetry in the builder stage
- Move the ENTRYPOINT before CMD for correct Dockerfile structure
- If `requirements.txt` is needed elsewhere, generate it from Poetry using `poetry export`
🤖 Prompt for AI Agents
In Dockerfile around lines 70 to 73, the build is broken because it runs pip
against a non-existent requirements.txt and installs packages outside Poetry;
add youtube-transcript-api to pyproject.toml under [tool.poetry.dependencies],
remove the two RUN pip install lines at 72–73 so all deps are installed by
Poetry during the builder stage, place the ENTRYPOINT instruction before the CMD
instruction to follow Dockerfile ordering conventions, and if a requirements.txt
is truly required elsewhere generate it from Poetry (poetry export -f
requirements.txt --output requirements.txt --without-hashes) rather than keeping
ad-hoc pip installs.
```django
{% for question in quiz_questions %}
<div class="quiz-question bg-gray-50 dark:bg-gray-700 p-6 rounded-lg border-l-4 border-[#e74c3c] transition-all" data-question-id="{{ question.id }}">
    <p class="font-semibold text-lg mb-4 text-gray-800 dark:text-gray-100">
        <span class="inline-block bg-[#e74c3c] text-white w-8 h-8 rounded-full text-center leading-8 mr-3">{{ forloop.counter }}</span>
        {{ question.question }}
    </p>

    <div class="space-y-3 ml-11">
        {% for option, label in options %}
        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
            <input type="radio" name="question_{{ question.id }}" value="{{ label }}" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">{{ label }}. {{ option }}</span>
        </label>
        {% endfor %}
    </div>
</div>
```
Quiz options loop uses undefined options, so no answers can be selected
Inside each question block you loop over {% for option, label in options %}, but options is never defined in the template or context. That results in zero radio inputs per question, so users can’t actually choose any answers and every submission will score 0.
Simplest fix: render the four options explicitly using the question fields:
Suggested template change for the options block:

```diff
-                    <div class="space-y-3 ml-11">
-                        {% for option, label in options %}
-                        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
-                            <input type="radio" name="question_{{ question.id }}" value="{{ label }}" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
-                            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">{{ label }}. {{ option }}</span>
-                        </label>
-                        {% endfor %}
-                    </div>
+                    <div class="space-y-3 ml-11">
+                        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                            <input type="radio" name="question_{{ question.id }}" value="A" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">A. {{ question.option_a }}</span>
+                        </label>
+                        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                            <input type="radio" name="question_{{ question.id }}" value="B" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">B. {{ question.option_b }}</span>
+                        </label>
+                        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                            <input type="radio" name="question_{{ question.id }}" value="C" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">C. {{ question.option_c }}</span>
+                        </label>
+                        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                            <input type="radio" name="question_{{ question.id }}" value="D" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">D. {{ question.option_d }}</span>
+                        </label>
+                    </div>
```
+ </div>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```django
{% for question in quiz_questions %}
<div class="quiz-question bg-gray-50 dark:bg-gray-700 p-6 rounded-lg border-l-4 border-[#e74c3c] transition-all" data-question-id="{{ question.id }}">
    <p class="font-semibold text-lg mb-4 text-gray-800 dark:text-gray-100">
        <span class="inline-block bg-[#e74c3c] text-white w-8 h-8 rounded-full text-center leading-8 mr-3">{{ forloop.counter }}</span>
        {{ question.question }}
    </p>
    <div class="space-y-3 ml-11">
        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
            <input type="radio" name="question_{{ question.id }}" value="A" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">A. {{ question.option_a }}</span>
        </label>
        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
            <input type="radio" name="question_{{ question.id }}" value="B" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">B. {{ question.option_b }}</span>
        </label>
        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
            <input type="radio" name="question_{{ question.id }}" value="C" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">C. {{ question.option_c }}</span>
        </label>
        <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
            <input type="radio" name="question_{{ question.id }}" value="D" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
            <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">D. {{ question.option_d }}</span>
        </label>
    </div>
</div>
```
🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 58–73, the quiz
options loop uses an undefined variable `options`, so no radio inputs are
rendered; replace that loop by rendering the four option fields from the
`question` object (e.g., question.option_a, question.option_b,
question.option_c, question.option_d) and use fixed labels A–D (or a small list
created per-question) for the radio values and visible labels so each question
shows four selectable radios with name="question_{{ question.id }}".
```django
{% for attempt in quiz_history %}
<div class="flex items-center justify-between bg-gray-50 dark:bg-gray-700 p-4 rounded-lg border-l-4 {% if attempt.percentage >= 70 %}border-green-500{% else %}border-yellow-500{% endif %}">
    <div>
        <p class="font-semibold text-gray-800 dark:text-gray-100">{{ attempt.get_date_display }}</p>
        <p class="text-sm text-gray-600 dark:text-gray-400">Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}</p>
    </div>
    <div class="text-right">
        <p class="text-2xl font-bold {% if attempt.percentage >= 70 %}text-green-600{% else %}text-yellow-600{% endif %}">{{ attempt.percentage|floatformat:1 }}%</p>
        <p class="text-sm text-gray-600 dark:text-gray-400">{{ attempt.score }}/{{ attempt.total_questions }}</p>
    </div>
```
**Use `completed_at` instead of undefined `get_date_display` in quiz history**

`QuizAttempt` doesn't define a `get_date_display` attribute/method, so `{{ attempt.get_date_display }}` won't show anything useful. You already have `completed_at` and format it on the next line. Consider simplifying to something like:
```diff
- <p class="font-semibold text-gray-800 dark:text-gray-100">{{ attempt.get_date_display }}</p>
- <p class="text-sm text-gray-600 dark:text-gray-400">Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}</p>
+ <p class="font-semibold text-gray-800 dark:text-gray-100">
+     {{ attempt.completed_at|date:"M d, Y" }}
+ </p>
+ <p class="text-sm text-gray-600 dark:text-gray-400">
+     Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}
+ </p>
```

🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 135 to 144, the
template references a non-existent attribute attempt.get_date_display; replace
that with the existing attempt.completed_at and format it with the same date
format used elsewhere (e.g. M d, Y H:i) so the quiz history displays a proper
timestamp; update the template to render attempt.completed_at with a date filter
and remove the undefined get_date_display reference.
```javascript
document.getElementById('quizForm').addEventListener('submit', async function(e) {
    e.preventDefault();

    const formData = new FormData(this);

    try {
        const response = await fetch('{% url "submit_quiz" video.id %}', {
            method: 'POST',
            body: formData
        });

        const data = await response.json();

        if (data.success) {
            showResults(data.score, data.total, data.percentage);
        }
    } catch (error) {
        console.error('Error:', error);
        alert('An error occurred while submitting the quiz.');
    }
});

function showResults(score, total, percentage) {
    const modal = document.getElementById('resultsModal');
    const resultTitle = document.getElementById('resultTitle');
    const resultMessage = document.getElementById('resultMessage');
    const scoreText = document.getElementById('scoreText');
    const percentageText = document.getElementById('percentageText');
    const progressCircle = document.getElementById('progressCircle');

    // Determine result message
    let title, message;
    if (percentage >= 90) {
        title = '🎉 Excellent!';
        message = 'Outstanding performance! You have mastered this topic.';
    } else if (percentage >= 70) {
        title = '👍 Great Job!';
        message = 'Good understanding of the content. Well done!';
    } else if (percentage >= 50) {
        title = '📚 Keep Learning!';
        message = 'You got the basics. Review the material and try again.';
    } else {
        title = '💪 Try Again!';
        message = 'Watch the video again and retake the quiz.';
    }

    resultTitle.textContent = title;
    resultMessage.textContent = message;
    scoreText.textContent = `You scored ${score} out of ${total} questions`;

    // Animate percentage and progress circle
    let currentPercentage = 0;
    const interval = setInterval(() => {
        if (currentPercentage <= percentage) {
            percentageText.textContent = Math.round(currentPercentage) + '%';
            const offset = 377 - (currentPercentage / 100) * 377;
            progressCircle.style.strokeDashoffset = offset;
            currentPercentage += percentage / 50;
        } else {
            clearInterval(interval);
        }
    }, 20);

    modal.classList.remove('hidden');
}
</script>
```
**Guard quizForm access in JS to avoid runtime errors when no quiz exists**

The script unconditionally does:

```javascript
document.getElementById('quizForm').addEventListener('submit', async function(e) { ... });
```

But `quizForm` only exists when `{% if quiz_questions %}` is true. When there are no questions (the "quiz being generated" state), this will raise a JS error on page load.
Wrap the binding in a null check and handle error responses more explicitly:
Suggested JS update
```diff
-document.getElementById('quizForm').addEventListener('submit', async function(e) {
-    e.preventDefault();
-
-    const formData = new FormData(this);
-
-    try {
-        const response = await fetch('{% url "submit_quiz" video.id %}', {
-            method: 'POST',
-            body: formData
-        });
-
-        const data = await response.json();
-
-        if (data.success) {
-            showResults(data.score, data.total, data.percentage);
-        }
-    } catch (error) {
-        console.error('Error:', error);
-        alert('An error occurred while submitting the quiz.');
-    }
-});
+const quizForm = document.getElementById('quizForm');
+if (quizForm) {
+    quizForm.addEventListener('submit', async function (e) {
+        e.preventDefault();
+
+        const formData = new FormData(this);
+
+        try {
+            const response = await fetch('{% url "submit_quiz" video.id %}', {
+                method: 'POST',
+                body: formData,
+            });
+
+            const data = await response.json();
+
+            if (data.success) {
+                showResults(data.score, data.total, data.percentage);
+            } else if (data.error) {
+                alert(data.error);
+            } else {
+                alert('Unexpected response from server while submitting the quiz.');
+            }
+        } catch (error) {
+            console.error('Error:', error);
+            alert('An error occurred while submitting the quiz.');
+        }
+    });
+}
```

🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 193 to 258, the
script unconditionally binds to document.getElementById('quizForm') which can be
null when no quiz_questions exist; update the code to first const quizForm =
document.getElementById('quizForm') and if (!quizForm) return (or skip binding)
to avoid runtime errors, then attach the submit listener to quizForm;
additionally, inside the listener check response.ok before calling
response.json() and handle non-OK responses (show an alert or display an inline
error) and wrap response.json() in try/catch to handle parse errors so failures
give a clear user-facing message instead of silent exceptions.
Please resolve the comments by the bots as well, and make sure pre-commit and the tests pass. Drop a comment for a re-review once that's done. Thanks for the PR!
What does this PR do?
This PR adds a Project Leaderboard section to the existing Global Leaderboard page, allowing users to see project‑specific rankings when a project is selected. The goal is to make it easier for users to compare contributors within a single project without leaving the global overview.
Changes introduced
• Added a Project Leaderboard panel to the Global Leaderboard template, rendered only when a project is selected in the filter.
• Ensured the Project Leaderboard is hidden when no project is selected, so the page remains focused on global rankings by default.
• Reused existing leaderboard styles/components where possible to keep the UI consistent with the rest of the site.
• Wired up the necessary view/context changes so that project‑scoped leaderboard data is fetched and passed to the template only when needed.
• Added tests to verify that:
• The Project Leaderboard appears when a project filter is active.
• The Project Leaderboard is not rendered when no project is selected.
How to test
1. Start the development server.
2. Navigate to the Global Leaderboard page.
3. Confirm that, with no project selected, only the global leaderboard is visible.
4. Select a project from the project filter.
5. Verify that:
• A Project Leaderboard section appears beneath (or alongside) the global leaderboard.
• Entries and ordering match the expected project‑scoped rankings.
6. Run the test suite (or the relevant app tests) to confirm the new tests pass locally:

```shell
poetry run python manage.py test
# or
poetry run python manage.py test website.tests.test_<your_new_tests_module>
```
Screenshots:

Known issues / environment notes
• Locally, all tests pass except website.tests.test_main.MySeleniumTests, which fail because Chrome is not installed at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome.
• This appears to be a local environment issue with Selenium setup rather than a regression from this change.
Linked issue
Closes #3314