
Conversation

@aaditya8979 commented Dec 18, 2025

What does this PR do?
This PR adds a Project Leaderboard section to the existing Global Leaderboard page, allowing users to see project‑specific rankings when a project is selected. The goal is to make it easier for users to compare contributors within a single project without leaving the global overview.
Changes introduced
• Added a Project Leaderboard panel to the Global Leaderboard template, rendered only when a project is selected in the filter.
• Ensured the Project Leaderboard is hidden when no project is selected, so the page remains focused on global rankings by default.
• Reused existing leaderboard styles/components where possible to keep the UI consistent with the rest of the site.
• Wired up the necessary view/context changes so that project‑scoped leaderboard data is fetched and passed to the template only when needed.
• Added tests to verify that:
  • The Project Leaderboard appears when a project filter is active.
  • The Project Leaderboard is not rendered when no project is selected.
How to test
1. Start the development server.
2. Navigate to the Global Leaderboard page.
3. Confirm that, with no project selected, only the global leaderboard is visible.
4. Select a project from the project filter.
5. Verify that:
   • A Project Leaderboard section appears beneath (or alongside) the global leaderboard.
   • Entries and ordering match the expected project‑scoped rankings.
6. Run the test suite (or the relevant app tests) to confirm the new tests pass locally:
poetry run python manage.py test
or
poetry run python manage.py test website.tests.test_<your_new_tests_module>

Screenshots:
(screenshot attached in the PR)

Known issues / environment notes
• Locally, all tests pass except website.tests.test_main.MySeleniumTests, which fails because Chrome is not installed at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome.
• This appears to be a local environment issue with Selenium setup rather than a regression from this change.
Linked issue
Closes #3314

Summary by CodeRabbit

  • New Features
    • Educational video management: submit and explore YouTube videos with community contributions
    • AI features: auto-generated summaries and educational content verification badges
    • Interactive quizzes: knowledge assessments with questions derived from video content
    • Quiz analytics: track performance and view attempt history


@github-actions bot added the files-changed: 8 (PR changes 8 files) and migrations (PR contains database migration files) labels on Dec 18, 2025
@github-actions
Contributor

👋 Hi @aaditya8979!

This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:

  • The PR author
  • DonnieBLT
  • coderabbitai
  • copilot

Once a valid peer review is submitted, this check will pass automatically. Thank you!

@coderabbitai
Contributor

coderabbitai bot commented Dec 18, 2025

Walkthrough

This PR adds an AI-powered educational video feature enabling users to submit YouTube videos with automatic transcript extraction, AI-generated summaries, educational verification, and automated quiz question generation. Videos are stored with metadata, displayed in a grid, and support interactive quizzes with score tracking and history.

Changes

Cohort / File(s) Change Summary
Infrastructure & Dependencies
Dockerfile
Added pip install commands for requirements.txt, youtube-transcript-api, and openai packages during image build.
URL Routing
blt/urls.py
Added imports for DetailView, three models (EducationalVideo, VideoQuizQuestion, QuizAttempt), and new public views (VideoDetailView, submit_quiz). Introduced two URL patterns: /education/video/<pk>/ for video detail and /education/video/<video_id>/quiz/submit/ for quiz submission.
Database Migrations
website/migrations/0264_educationalvideo.py
website/migrations/0265_educationalvideo_ai_summary_and_more.py
Migration 0264 creates EducationalVideo model with title, youtube_url, youtube_id, description, and timestamps. Migration 0265 extends EducationalVideo with ai_summary and is_verified fields, and introduces QuizAttempt and VideoQuizQuestion models with relationships to EducationalVideo and User.
Models
website/models.py
Added EducationalVideo with youtube_id extraction logic in save(); VideoQuizQuestion with multiple-choice options (A–D) and explanation; QuizAttempt linked to User and EducationalVideo with score and percentage tracking.
Templates
website/templates/education/education.html
Added video submission form (title, YouTube URL, optional description) and responsive grid layout for displaying submitted videos with iframe embeds, metadata, and "Take Quiz" action.
Video Detail Template
website/templates/education/video_detail.html
New template rendering embedded YouTube video, optional AI summary with verification badge, optional description, interactive quiz form with radio options, results modal with score display and animations, and quiz history section for authenticated users. Includes AJAX form submission and progress animation logic.
Views & Business Logic
website/views/education.py
Added VideoDetailView (DetailView subclass) for video detail page context; introduced helper functions get_youtube_transcript(), generate_ai_summary_and_verify(), and generate_quiz_from_transcript() for OpenAI/YouTube API integration; expanded education_home to handle AI-driven video submission, transcript fetching, summary generation, and quiz creation; added submit_quiz endpoint for quiz answer evaluation and score persistence.
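The youtube_id extraction logic in EducationalVideo.save() is not shown in this summary. As a rough illustration of the kind of parsing involved, a minimal sketch (the regex, function name, and supported URL shapes are assumptions, not the PR's actual code):

```python
import re

# Hypothetical sketch: pull the 11-character YouTube video ID out of
# common URL shapes (watch?v=..., youtu.be/..., embed/..., shorts/...).
# Illustrates what EducationalVideo.save() would need to handle; the
# PR's real implementation may differ.
_YOUTUBE_ID_RE = re.compile(
    r"(?:youtube\.com/(?:watch\?v=|embed/|shorts/)|youtu\.be/)"
    r"([A-Za-z0-9_-]{11})"
)

def extract_youtube_id(url):
    """Return the video ID if the URL matches a known pattern, else None."""
    match = _YOUTUBE_ID_RE.search(url)
    return match.group(1) if match else None
```

A helper like this is also easy to unit-test against the URL-format edge cases called out later in this review.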

Sequence Diagram

sequenceDiagram
    participant User
    participant Django as Django View
    participant YouTube as YouTube API
    participant OpenAI as OpenAI API
    participant DB as Database
    participant Template as Template/UI

    User->>Django: Submit video (URL, title)
    activate Django
    Django->>Django: Extract youtube_id from URL
    Django->>YouTube: Fetch transcript
    activate YouTube
    YouTube-->>Django: Return transcript text
    deactivate YouTube
    
    Django->>OpenAI: Generate summary & verify educational
    activate OpenAI
    OpenAI-->>Django: Return summary + is_verified flag
    deactivate OpenAI
    
    Django->>OpenAI: Generate quiz questions from transcript
    activate OpenAI
    OpenAI-->>Django: Return 4-option quiz questions (up to 10)
    deactivate OpenAI
    
    Django->>DB: Create EducationalVideo record
    Django->>DB: Create VideoQuizQuestion records
    activate DB
    DB-->>Django: Confirm records saved
    deactivate DB
    deactivate Django
    
    Django-->>User: Success message + redirect
    User->>Template: View video detail page
    Template->>DB: Fetch video + quiz questions + user history
    activate DB
    DB-->>Template: Return video data + quiz questions
    deactivate DB
    Template-->>User: Render video, AI summary, quiz form
    
    User->>Template: Submit quiz answers (AJAX)
    activate Template
    Template->>Django: POST quiz answers
    activate Django
    Django->>Django: Calculate score & percentage
    Django->>DB: Create QuizAttempt record
    activate DB
    DB-->>Django: Confirm record saved
    deactivate DB
    Django-->>Template: Return score + percentage (JSON)
    deactivate Django
    deactivate Template
    Template->>User: Animate & display results modal

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • API Integration Security: OpenAI and YouTube API calls require careful review of error handling, rate limiting, and API key exposure prevention (ensure OPENAI_API_KEY is only read from secure environment).
  • YouTube ID Extraction Logic: Regex-based extraction in EducationalVideo.save() needs validation against edge cases and various YouTube URL formats.
  • AI Prompt Engineering: Review prompts in generate_ai_summary_and_verify() and generate_quiz_from_transcript() for correctness, injection risks, and output consistency.
  • Quiz Scoring & Validation: Verify score calculation logic and QuizAttempt persistence in the submit_quiz endpoint; check boundary cases (zero questions, invalid answers).
  • AJAX & CSRF Protection: Confirm CSRF tokens are properly handled in form submissions and fetch requests in the video_detail template.
  • Cascade Delete Behavior: Validate that deleting a video or user properly cascades to related QuizAttempt and VideoQuizQuestion records.
  • Template Logic: Review AJAX fetch logic, JSON parsing, and result modal animation in video_detail.html for error handling and XSS prevention.
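For the scoring and boundary-case concern above, a defensive score calculation might look like the following sketch (the function shape and data types are assumptions; the PR's submit_quiz endpoint may be structured differently):

```python
def score_quiz(questions, answers):
    """Compare submitted answers against correct options.

    `questions` maps question id -> correct option letter;
    `answers` maps question id -> submitted letter (possibly missing
    or oddly cased). Returns (score, percentage), guarding the
    zero-question case so the percentage never divides by zero.
    Hypothetical sketch, not the PR's actual implementation.
    """
    total = len(questions)
    if total == 0:
        return 0, 0.0
    score = sum(
        1 for qid, correct in questions.items()
        if answers.get(qid, "").strip().upper() == correct
    )
    return score, round(score / total * 100, 2)
```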

Suggested labels

enhancement, ai-feature, educational-content, quality: high

Pre-merge checks and finishing touches

❌ Failed checks (4 warnings)
• Title check (⚠️ Warning): The PR title 'Fix education AI transcript, summary, and quiz generation' does not match the actual changes, which implement educational video features and quiz functionality rather than fixes. Resolution: update the title to reflect the scope, e.g., 'Add educational video features with AI transcript, summary, and quiz generation'.
• Linked Issues check (⚠️ Warning): The PR implements educational video AI features (transcripts, summaries, quizzes), but the linked issue #3314 requests a project leaderboard with GitHub metrics and rankings, an unrelated feature. Resolution: either link the correct issue for the educational video features or clarify the relationship to issue #3314; the current PR does not address any leaderboard requirements.
• Out of Scope Changes check (⚠️ Warning): All changes are educational video feature implementations (models, views, templates, migrations), entirely out of scope for linked issue #3314. Resolution: link an appropriate educational video feature issue or create a new one, and remove the incorrect link to issue #3314.
• Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
• Description Check (✅ Passed): Check skipped because CodeRabbit’s high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@github-actions bot added the needs-peer-review (PR needs peer review) label on Dec 18, 2025
@github-actions
Contributor

📊 Monthly Leaderboard

Hi @aaditya8979! Here's how you rank for December 2025:

Rank User PRs Reviews Comments Total
#13 @c0d3h01 3 3 0 42
#14 @aaditya8979 2 2 0 28
#15 @mdkaifansari04 0 5 4 28

Leaderboard based on contributions in December 2025. Keep up the great work! 🚀

@github-actions
Contributor

❌ Pre-commit checks failed

The pre-commit hooks found issues that need to be fixed. Please run the following commands locally to fix them:

# Install pre-commit if you haven't already
pip install pre-commit

# Run pre-commit on all files
pre-commit run --all-files

# Or run pre-commit on staged files only
pre-commit run

After running these commands, the pre-commit hooks will automatically fix most issues.
Please review the changes, commit them, and push to your branch.

💡 Tip: You can set up pre-commit to run automatically on every commit by running:

pre-commit install
Pre-commit output:
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/pycqa/isort.
[WARNING] repo `https://github.com/pycqa/isort` uses deprecated stage names (commit, merge-commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pycqa/isort` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Initializing environment for https://github.com/djlint/djLint.
[INFO] Initializing environment for local.
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pycqa/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/djlint/djLint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
check python ast.........................................................Passed
check builtin type constructor use.......................................Passed
check yaml...............................................................Passed
fix python encoding pragma...............................................Passed
mixed line ending........................................................Passed
isort....................................................................Failed
- hook id: isort
- files were modified by this hook

Fixing /home/runner/work/BLT/BLT/blt/urls.py
Fixing /home/runner/work/BLT/BLT/website/models.py
Fixing /home/runner/work/BLT/BLT/website/views/education.py


For more information, see the pre-commit documentation.

@github-actions bot added the pre-commit: failed (Pre-commit checks failed) label on Dec 18, 2025
</p>

<div class="space-y-3 ml-11">
{% for option, label in options %}

Bug: The video_detail.html template attempts to iterate over an options variable that is not supplied by the VideoDetailView context, causing a rendering error.
Severity: CRITICAL | Confidence: High

🔍 Detailed Analysis

The video_detail.html template at line 66 contains a loop {% for option, label in options %}. However, the corresponding VideoDetailView.get_context_data method does not add the options variable to the template context. When a video detail page with associated quiz questions is rendered, the template will attempt to access this undefined variable, resulting in a Django UndefinedError and preventing the page from loading.

💡 Suggested Fix

In the VideoDetailView.get_context_data method, you need to prepare the options for the template. One approach is to modify the template to construct the options from the question object directly, like {% with options=... %}. Alternatively, process the quiz_questions in the view to attach an options list to each question object before passing it to the template.
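The second approach can be sketched without Django specifics. The helper name and the option_a..option_d attribute names are assumptions inferred from the model description, not confirmed code:

```python
def attach_options(questions):
    """Give each question object an `options` list of (key, label)
    pairs built from its option_a..option_d attributes, so a template
    can loop over `question.options` instead of an undefined context
    variable. Illustrative sketch; a real fix would do this inside
    VideoDetailView.get_context_data() before returning the context.
    """
    for question in questions:
        question.options = [
            (key, getattr(question, f"option_{key.lower()}"))
            for key in ("A", "B", "C", "D")
        ]
    return questions
```

Attaching the pairs in the view keeps the template free of per-question assembly logic.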

🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: website/templates/education/video_detail.html#L66

Potential issue: The `video_detail.html` template at line 66 contains a loop `{% for
option, label in options %}`. However, the corresponding
`VideoDetailView.get_context_data` method does not add the `options` variable to the
template context. When a video detail page with associated quiz questions is rendered,
the template will attempt to access this undefined variable, resulting in a Django
`UndefinedError` and preventing the page from loading.

Reference ID: 7720901

@github-actions
Contributor

❌ Tests failed

The Django tests found issues that need to be fixed. Please review the test output below and fix the failing tests.

How to run tests locally

# Install dependencies
poetry install --with dev

# Run all tests
poetry run python manage.py test

# Run tests with verbose output
poetry run python manage.py test -v 3

# Run a specific test
poetry run python manage.py test app.tests.TestClass.test_method
Test output (last 100 lines)
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Adding permission 'Permission object (436)'
Adding permission 'Permission object (437)'
Adding permission 'Permission object (438)'
Adding permission 'Permission object (439)'
Adding permission 'Permission object (440)'
Running post-migrate handlers for application tz_detect
Running post-migrate handlers for application star_ratings
Adding content type 'star_ratings | rating'
Adding content type 'star_ratings | userrating'
Adding permission 'Permission object (441)'
Adding permission 'Permission object (442)'
Adding permission 'Permission object (443)'
Adding permission 'Permission object (444)'
Adding permission 'Permission object (445)'
Adding permission 'Permission object (446)'
Adding permission 'Permission object (447)'
Adding permission 'Permission object (448)'
Running post-migrate handlers for application captcha
Adding content type 'captcha | captchastore'
Adding permission 'Permission object (449)'
Adding permission 'Permission object (450)'
Adding permission 'Permission object (451)'
Adding permission 'Permission object (452)'
Running post-migrate handlers for application dj_rest_auth
Traceback (most recent call last):
  File "/home/runner/work/BLT/BLT/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
    super().run_from_argv(argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 416, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/test.py", line 63, in handle
    failures = test_runner.run_tests(test_labels)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/runner.py", line 1098, in run_tests
    self.run_checks(databases)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/runner.py", line 1020, in run_checks
    call_command("check", verbosity=self.verbosity, databases=databases)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 194, in call_command
    return command.execute(*args, **defaults)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/check.py", line 81, in handle
    self.check(
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 492, in check
    all_issues = checks.run_checks(
                 ^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/registry.py", line 89, in run_checks
    new_errors = check(app_configs=app_configs, databases=databases)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/urls.py", line 16, in check_url_config
    return check_resolver(resolver)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/urls.py", line 26, in check_resolver
    return check_method()
           ^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 531, in check
    for pattern in self.url_patterns:
                   ^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
                                         ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 718, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
                       ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
                                         ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 711, in urlconf_module
    return import_module(self.urlconf_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/runner/work/BLT/BLT/blt/urls.py", line 147, in <module>
    from website.views.education import (
  File "/home/runner/work/BLT/BLT/website/views/education.py", line 19, in <module>
    from youtube_transcript_api import YouTubeTranscriptApi
ModuleNotFoundError: No module named 'youtube_transcript_api'

For more information, see the Django testing documentation.

@github-actions bot added the tests: failed (Django tests failed) label on Dec 18, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
website/views/education.py (1)

373-439: Harden submit_quiz error handling and remove unreachable, undefined-code tail

Two issues in submit_quiz:

  1. Information exposure via raw exception text
except Exception as e:
    print(f"DEBUG: quiz submission error: {e}")
    return JsonResponse(
        {"error": str(e)},
        status=500
    )

Returning str(e) to clients can leak internal details (stack traces, DB errors, etc.), which CodeQL correctly flags as an information-exposure risk.

  1. Unreachable tail code with undefined variables

The block after the try/except:

featured_lectures = Lecture.objects.filter(section__isnull=True)
courses = Course.objects.all()
context = {"is_instructor": is_instructor, "featured_lectures": featured_lectures, "courses": courses}
return render(request, template, context)

is unreachable (every path above returns) and uses is_instructor and template which are not defined in this scope.

A tighter implementation would:

  • Log the exception server-side, but return a generic error message, and
  • Delete the unreachable render block.
Suggested change to the exception handler and cleanup
-    except EducationalVideo.DoesNotExist:
-        return JsonResponse(
-            {"error": "Video not found"},
-            status=404
-        )
-    except Exception as e:
-        print(f"DEBUG: quiz submission error: {e}")
-        return JsonResponse(
-            {"error": str(e)},
-            status=500
-        )
-
-    featured_lectures = Lecture.objects.filter(section__isnull=True)
-    courses = Course.objects.all()
-    context = {"is_instructor": is_instructor, "featured_lectures": featured_lectures, "courses": courses}
-    return render(request, template, context)
+    except EducationalVideo.DoesNotExist:
+        return JsonResponse({"error": "Video not found"}, status=404)
+    except Exception:
+        logger.exception("Error during quiz submission", extra={"video_id": video_id})
+        return JsonResponse(
+            {"error": "An unexpected error occurred while submitting the quiz."},
+            status=500,
+        )
🧹 Nitpick comments (5)
website/models.py (1)

3648-3703: EducationalVideo/quiz models look consistent; only minor DRY/cleanup opportunities

The model definitions and relationships align with the migrations and the new views/templates and should behave correctly.

If you touch this again later, consider:

  • Letting save() be the single source of truth for youtube_id (drop the manual extraction in the view), and
  • Removing the local import re and duplicate from django.db import models in this section to keep imports centralized.
website/templates/education/education.html (1)

203-212: Minor dark-mode class tweak in flash message styling

The class attribute for messages builds dark: as a literal prefix and only applies it to the background class, e.g. resulting in:

  • dark:bg-green-900 text-green-100

If you want both background and text to respond to dark mode, consider emitting:

  • dark:bg-green-900 dark:text-green-100 (and similarly for the error branch).

This is purely cosmetic; behavior is otherwise fine.

website/views/education.py (3)

44-57: Collapse duplicate import block to keep this module readable

Lines 44–57 effectively re-import modules (os, re, json, Django shortcuts, messages, auth decorators, http decorators, JsonResponse, OpenAI, YouTubeTranscriptApi, models) that are already imported above.

This doesn’t break anything, but it does make the file harder to scan and increases the chance of subtle drift if one block is edited and the other isn’t. It’d be cleaner to:

  • Keep a single, consolidated import section at the top, and
  • Remove this second block entirely.

976-989: VideoDetailView context is good; consider ordering quiz data if needed

VideoDetailView correctly provides:

  • quiz_questions filtered by video, and
  • quiz_history for the authenticated user and this video.

If you care about presentation order, you may want to make the ordering explicit:

-        context["quiz_questions"] = VideoQuizQuestion.objects.filter(video=video)
+        context["quiz_questions"] = VideoQuizQuestion.objects.filter(video=video).order_by("created_at")
...
-            context["quiz_history"] = QuizAttempt.objects.filter(
-                user=self.request.user, video=video
-            )
+            context["quiz_history"] = QuizAttempt.objects.filter(
+                user=self.request.user, video=video
+            ).order_by("-completed_at")

Right now, you rely on model Meta for attempts, but questions have no explicit ordering.


249-370: Consider moving AI pipeline to background jobs for production use

The education_home POST branch executes network I/O synchronously on the main request thread:

  • get_youtube_transcript() calls YouTubeTranscriptApi
  • generate_ai_summary_and_verify() calls OpenAI API
  • generate_quiz_from_transcript() calls OpenAI API

For longer videos or API slowness, requests can block for several seconds. While acceptable for MVP, plan for:

  • Moving transcript + AI work to a background job (Celery/RQ) with a pending EducationalVideo record
  • Or adding request timeouts and user-facing error messaging for external API failures
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between cd7312d and 6b758bd.

📒 Files selected for processing (8)
  • Dockerfile (1 hunks)
  • blt/urls.py (3 hunks)
  • website/migrations/0264_educationalvideo.py (1 hunks)
  • website/migrations/0265_educationalvideo_ai_summary_and_more.py (1 hunks)
  • website/models.py (1 hunks)
  • website/templates/education/education.html (1 hunks)
  • website/templates/education/video_detail.html (1 hunks)
  • website/views/education.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
website/views/education.py (1)
website/models.py (3)
  • EducationalVideo (3651-3673)
  • VideoQuizQuestion (3676-3688)
  • QuizAttempt (3691-3703)
blt/urls.py (2)
website/models.py (3)
  • EducationalVideo (3651-3673)
  • VideoQuizQuestion (3676-3688)
  • QuizAttempt (3691-3703)
website/views/education.py (2)
  • submit_quiz (375-439)
  • VideoDetailView (976-989)
website/migrations/0265_educationalvideo_ai_summary_and_more.py (1)
website/migrations/0264_educationalvideo.py (1)
  • Migration (6-28)
website/migrations/0264_educationalvideo.py (1)
website/migrations/0265_educationalvideo_ai_summary_and_more.py (1)
  • Migration (8-76)
🪛 GitHub Check: CodeQL
website/views/education.py

[warning] 432-432: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (33)
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: remove_last_active_label
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: check_peer_review
  • GitHub Check: remove_last_active_label
  • GitHub Check: add_changes_requested_label
  • GitHub Check: remove_last_active_label
  • GitHub Check: auto-assign
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: check_peer_review
  • GitHub Check: remove_last_active_label
  • GitHub Check: add_changes_requested_label
  • GitHub Check: remove_last_active_label
  • GitHub Check: auto-assign
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: add_changes_requested_label
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: add_changes_requested_label
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
🔇 Additional comments (3)
website/migrations/0264_educationalvideo.py (1)

1-28: EducationalVideo migration matches model and usage

The migration cleanly introduces EducationalVideo with fields and ordering consistent with the model and downstream views/templates. No changes needed.

blt/urls.py (1)

160-162: Video detail and quiz endpoints are wired correctly

The new video_detail and submit_quiz routes match the view signatures and the template {% url %} usage (video.id), and fit cleanly into the existing education URL space. No issues from a routing perspective.

Also applies to: 731-743

website/migrations/0265_educationalvideo_ai_summary_and_more.py (1)

15-75: AI fields and quiz models migration are consistent with the models

The added ai_summary/is_verified fields and the new QuizAttempt / VideoQuizQuestion models line up with website.models and the new views/templates. Schema and FK choices look good.

Comment on lines +70 to +73
CMD ["poetry", "run", "python", "manage.py", "runserver", "0.0.0.0:8000"]

RUN pip install -r requirements.txt
RUN pip install youtube-transcript-api openai
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the Dockerfile
find . -name "Dockerfile" -type f 2>/dev/null | head -5
# Find dependency files
find . \( -name "pyproject.toml" -o -name "requirements.txt" \) -type f 2>/dev/null
# List repository structure
ls -la | head -20

Repository: OWASP-BLT/BLT

Length of output: 1303


🏁 Script executed:

#!/bin/bash
# Check if files are present and read them
if [ -f "Dockerfile" ]; then
  echo "=== DOCKERFILE ===" 
  cat -n Dockerfile
fi

echo ""
if [ -f "pyproject.toml" ]; then
  echo "=== PYPROJECT.TOML ===" 
  cat -n pyproject.toml
fi

echo ""
if [ -f "requirements.txt" ]; then
  echo "=== REQUIREMENTS.TXT ===" 
  cat -n requirements.txt
fi

Repository: OWASP-BLT/BLT

Length of output: 7171


🏁 Script executed:

#!/bin/bash
# The review comment mentioned lines 70-73, let's understand the full context
# by looking at what comes before - particularly for COPY commands that would
# indicate where dependencies come from

# Look for COPY statements that might indicate site-packages transfer
echo "Searching for dependency-related patterns in Dockerfiles..."
fd -a 'Dockerfile' -x rg -n 'COPY.*site-packages|FROM.*AS|RUN.*poetry|RUN.*pip install' {} + 2>/dev/null | head -40

Repository: OWASP-BLT/BLT

Length of output: 830


🏁 Script executed:

#!/bin/bash
# Check if requirements.txt exists and show youtube-transcript-api and openai declarations
echo "=== Checking requirements.txt ==="
if [ -f "requirements.txt" ]; then
  grep -n "youtube-transcript-api\|openai" requirements.txt
else
  echo "requirements.txt not found"
fi

echo ""
echo "=== Checking pyproject.toml for youtube-transcript-api ==="
grep -n "youtube-transcript-api" pyproject.toml || echo "NOT FOUND in pyproject.toml"

echo ""
echo "=== Checking pyproject.toml for openai ==="
grep -n "openai" pyproject.toml

Repository: OWASP-BLT/BLT

Length of output: 271


Fix build-blocking issues and consolidate all dependencies in pyproject.toml

Lines 72-73 have critical problems:

  • Line 72 tries to install from a non-existent requirements.txt file, causing the build to fail
  • Line 73 redundantly installs openai (already in pyproject.toml line 65) and adds youtube-transcript-api only at pip level instead of declaring it as a project dependency

Required fixes:

  1. Add youtube-transcript-api to pyproject.toml dependencies
  2. Remove both RUN pip install lines (72-73) entirely—all dependencies should be installed by Poetry in the builder stage
  3. Place the ENTRYPOINT before CMD to follow Dockerfile ordering conventions (Docker uses the last occurrence of each instruction, so this is stylistic)
  4. If requirements.txt is needed elsewhere, generate it from Poetry using poetry export
🤖 Prompt for AI Agents
In Dockerfile around lines 70 to 73, the build is broken because it runs pip
against a non-existent requirements.txt and installs packages outside Poetry;
add youtube-transcript-api to pyproject.toml under [tool.poetry.dependencies],
remove the two RUN pip install lines at 72–73 so all deps are installed by
Poetry during the builder stage, place the ENTRYPOINT instruction before the CMD
instruction to follow Dockerfile ordering conventions, and if a requirements.txt
is truly required elsewhere generate it from Poetry (poetry export -f
requirements.txt --output requirements.txt --without-hashes) rather than keeping
ad-hoc pip installs.
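With those fixes applied, the tail of the Dockerfile would reduce to something like the following (illustrative only; the entrypoint script name is hypothetical and the exact lines depend on the existing stages):

```dockerfile
# All Python dependencies are installed by Poetry in the builder stage,
# so no ad-hoc `RUN pip install` lines are needed here.
ENTRYPOINT ["/app/docker-entrypoint.sh"]   # hypothetical script name
CMD ["poetry", "run", "python", "manage.py", "runserver", "0.0.0.0:8000"]
```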

Comment on lines +58 to +73
{% for question in quiz_questions %}
<div class="quiz-question bg-gray-50 dark:bg-gray-700 p-6 rounded-lg border-l-4 border-[#e74c3c] transition-all" data-question-id="{{ question.id }}">
<p class="font-semibold text-lg mb-4 text-gray-800 dark:text-gray-100">
<span class="inline-block bg-[#e74c3c] text-white w-8 h-8 rounded-full text-center leading-8 mr-3">{{ forloop.counter }}</span>
{{ question.question }}
</p>

<div class="space-y-3 ml-11">
{% for option, label in options %}
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="{{ label }}" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">{{ label }}. {{ option }}</span>
</label>
{% endfor %}
</div>
</div>
Contributor

⚠️ Potential issue | 🔴 Critical

Quiz options loop uses undefined options, so no answers can be selected

Inside each question block you loop over {% for option, label in options %}, but options is never defined in the template or context. That results in zero radio inputs per question, so users can’t actually choose any answers and every submission will score 0.

Simplest fix: render the four options explicitly using the question fields:

Suggested template change for the options block
-                <div class="space-y-3 ml-11">
-                    {% for option, label in options %}
-                    <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
-                        <input type="radio" name="question_{{ question.id }}" value="{{ label }}" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
-                        <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">{{ label }}. {{ option }}</span>
-                    </label>
-                    {% endfor %}
-                </div>
+                <div class="space-y-3 ml-11">
+                    <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                        <input type="radio" name="question_{{ question.id }}" value="A" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                        <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">A. {{ question.option_a }}</span>
+                    </label>
+                    <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                        <input type="radio" name="question_{{ question.id }}" value="B" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                        <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">B. {{ question.option_b }}</span>
+                    </label>
+                    <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                        <input type="radio" name="question_{{ question.id }}" value="C" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                        <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">C. {{ question.option_c }}</span>
+                    </label>
+                    <label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
+                        <input type="radio" name="question_{{ question.id }}" value="D" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
+                        <span class="ml-3 font-medium text-gray-700 dark:text-gray-300">D. {{ question.option_d }}</span>
+                    </label>
+                </div>
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
{% for question in quiz_questions %}
<div class="quiz-question bg-gray-50 dark:bg-gray-700 p-6 rounded-lg border-l-4 border-[#e74c3c] transition-all" data-question-id="{{ question.id }}">
<p class="font-semibold text-lg mb-4 text-gray-800 dark:text-gray-100">
<span class="inline-block bg-[#e74c3c] text-white w-8 h-8 rounded-full text-center leading-8 mr-3">{{ forloop.counter }}</span>
{{ question.question }}
</p>
<div class="space-y-3 ml-11">
{% for option, label in options %}
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="{{ label }}" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">{{ label }}. {{ option }}</span>
</label>
{% endfor %}
</div>
</div>
{% for question in quiz_questions %}
<div class="quiz-question bg-gray-50 dark:bg-gray-700 p-6 rounded-lg border-l-4 border-[#e74c3c] transition-all" data-question-id="{{ question.id }}">
<p class="font-semibold text-lg mb-4 text-gray-800 dark:text-gray-100">
<span class="inline-block bg-[#e74c3c] text-white w-8 h-8 rounded-full text-center leading-8 mr-3">{{ forloop.counter }}</span>
{{ question.question }}
</p>
<div class="space-y-3 ml-11">
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="A" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">A. {{ question.option_a }}</span>
</label>
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="B" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">B. {{ question.option_b }}</span>
</label>
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="C" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">C. {{ question.option_c }}</span>
</label>
<label class="flex items-center p-4 bg-white dark:bg-gray-800 rounded-lg cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 transition border border-gray-200 dark:border-gray-600">
<input type="radio" name="question_{{ question.id }}" value="D" class="w-5 h-5 text-[#e74c3c] cursor-pointer">
<span class="ml-3 font-medium text-gray-700 dark:text-gray-300">D. {{ question.option_d }}</span>
</label>
</div>
</div>
🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 58–73, the quiz
options loop uses an undefined variable `options`, so no radio inputs are
rendered; replace that loop by rendering the four option fields from the
`question` object (e.g., question.option_a, question.option_b,
question.option_c, question.option_d) and use fixed labels A–D (or a small list
created per-question) for the radio values and visible labels so each question
shows four selectable radios with name="question_{{ question.id }}".

Comment on lines +135 to +144
{% for attempt in quiz_history %}
<div class="flex items-center justify-between bg-gray-50 dark:bg-gray-700 p-4 rounded-lg border-l-4 {% if attempt.percentage >= 70 %}border-green-500{% else %}border-yellow-500{% endif %}">
<div>
<p class="font-semibold text-gray-800 dark:text-gray-100">{{ attempt.get_date_display }}</p>
<p class="text-sm text-gray-600 dark:text-gray-400">Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}</p>
</div>
<div class="text-right">
<p class="text-2xl font-bold {% if attempt.percentage >= 70 %}text-green-600{% else %}text-yellow-600{% endif %}">{{ attempt.percentage|floatformat:1 }}%</p>
<p class="text-sm text-gray-600 dark:text-gray-400">{{ attempt.score }}/{{ attempt.total_questions }}</p>
</div>
Contributor

⚠️ Potential issue | 🟡 Minor

Use completed_at instead of undefined get_date_display in quiz history

QuizAttempt doesn’t define a get_date_display attribute/method, so:

{{ attempt.get_date_display }}

won’t show anything useful. You already have completed_at and format it on the next line. Consider simplifying to something like:

-                    <p class="font-semibold text-gray-800 dark:text-gray-100">{{ attempt.get_date_display }}</p>
-                    <p class="text-sm text-gray-600 dark:text-gray-400">Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}</p>
+                    <p class="font-semibold text-gray-800 dark:text-gray-100">
+                        {{ attempt.completed_at|date:"M d, Y" }}
+                    </p>
+                    <p class="text-sm text-gray-600 dark:text-gray-400">
+                        Attempted {{ attempt.completed_at|date:"M d, Y H:i" }}
+                    </p>
🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 135 to 144, the
template references a non-existent attribute attempt.get_date_display; replace
that with the existing attempt.completed_at and format it with the same date
format used elsewhere (e.g. M d, Y H:i) so the quiz history displays a proper
timestamp; update the template to render attempt.completed_at with a date filter
and remove the undefined get_date_display reference.

Comment on lines +193 to +258
document.getElementById('quizForm').addEventListener('submit', async function(e) {
e.preventDefault();

const formData = new FormData(this);

try {
const response = await fetch('{% url "submit_quiz" video.id %}', {
method: 'POST',
body: formData
});

const data = await response.json();

if (data.success) {
showResults(data.score, data.total, data.percentage);
}
} catch (error) {
console.error('Error:', error);
alert('An error occurred while submitting the quiz.');
}
});

function showResults(score, total, percentage) {
const modal = document.getElementById('resultsModal');
const resultTitle = document.getElementById('resultTitle');
const resultMessage = document.getElementById('resultMessage');
const scoreText = document.getElementById('scoreText');
const percentageText = document.getElementById('percentageText');
const progressCircle = document.getElementById('progressCircle');

// Determine result message
let title, message;
if (percentage >= 90) {
title = '🎉 Excellent!';
message = 'Outstanding performance! You have mastered this topic.';
} else if (percentage >= 70) {
title = '👍 Great Job!';
message = 'Good understanding of the content. Well done!';
} else if (percentage >= 50) {
title = '📚 Keep Learning!';
message = 'You got the basics. Review the material and try again.';
} else {
title = '💪 Try Again!';
message = 'Watch the video again and retake the quiz.';
}

resultTitle.textContent = title;
resultMessage.textContent = message;
scoreText.textContent = `You scored ${score} out of ${total} questions`;

// Animate percentage and progress circle
let currentPercentage = 0;
const interval = setInterval(() => {
if (currentPercentage <= percentage) {
percentageText.textContent = Math.round(currentPercentage) + '%';
const offset = 377 - (currentPercentage / 100) * 377;
progressCircle.style.strokeDashoffset = offset;
currentPercentage += percentage / 50;
} else {
clearInterval(interval);
}
}, 20);

modal.classList.remove('hidden');
}
</script>
Contributor

⚠️ Potential issue | 🟠 Major

Guard quizForm access in JS to avoid runtime errors when no quiz exists

The script unconditionally does:

document.getElementById('quizForm').addEventListener('submit', async function(e) { ... });

But quizForm only exists when {% if quiz_questions %} is true. When there are no questions (the “quiz being generated” state), this will raise a JS error on page load.

Wrap the binding in a null check and handle error responses more explicitly:

Suggested JS update
-document.getElementById('quizForm').addEventListener('submit', async function(e) {
-    e.preventDefault();
-    
-    const formData = new FormData(this);
-    
-    try {
-        const response = await fetch('{% url "submit_quiz" video.id %}', {
-            method: 'POST',
-            body: formData
-        });
-        
-        const data = await response.json();
-        
-        if (data.success) {
-            showResults(data.score, data.total, data.percentage);
-        }
-    } catch (error) {
-        console.error('Error:', error);
-        alert('An error occurred while submitting the quiz.');
-    }
-});
+const quizForm = document.getElementById('quizForm');
+if (quizForm) {
+    quizForm.addEventListener('submit', async function (e) {
+        e.preventDefault();
+
+        const formData = new FormData(this);
+
+        try {
+            const response = await fetch('{% url "submit_quiz" video.id %}', {
+                method: 'POST',
+                body: formData,
+            });
+
+            const data = await response.json();
+
+            if (data.success) {
+                showResults(data.score, data.total, data.percentage);
+            } else if (data.error) {
+                alert(data.error);
+            } else {
+                alert('Unexpected response from server while submitting the quiz.');
+            }
+        } catch (error) {
+            console.error('Error:', error);
+            alert('An error occurred while submitting the quiz.');
+        }
+    });
+}
🤖 Prompt for AI Agents
In website/templates/education/video_detail.html around lines 193 to 258, the
script unconditionally binds to document.getElementById('quizForm') which can be
null when no quiz_questions exist; update the code to first const quizForm =
document.getElementById('quizForm') and if (!quizForm) return (or skip binding)
to avoid runtime errors, then attach the submit listener to quizForm;
additionally, inside the listener check response.ok before calling
response.json() and handle non-OK responses (show an alert or display an inline
error) and wrap response.json() in try/catch to handle parse errors so failures
give a clear user-facing message instead of silent exceptions.

@github-project-automation github-project-automation bot moved this from Backlog to Ready in 📌 OWASP BLT Project Board Dec 18, 2025
@sidd190
Contributor

sidd190 commented Dec 18, 2025

Please resolve the bot comments as well, and make sure pre-commit and the tests pass. Drop a comment for a re-review once that's done. Thanks for the PR!

@github-actions github-actions bot added the last-active: 0d (PR last updated 0 days ago) label Dec 19, 2025

Labels

  • files-changed: 8 (PR changes 8 files)
  • last-active: 0d (PR last updated 0 days ago)
  • migrations (PR contains database migration files)
  • needs-peer-review (PR needs peer review)
  • pre-commit: failed (Pre-commit checks failed)
  • quality: high
  • tests: failed (Django tests failed)

Projects

Status: Ready

Development

Successfully merging this pull request may close these issues.

2 participants