Now Profanity check is available for Bug_reporting #4060
Conversation
Walkthrough

The changes introduce a new dependency (`better-profanity`) and add a profanity check to the bug report submission flow: the description fields are combined and checked in `IssueCreate.form_valid`, and a `Blocked` record is created when profanity is detected.

Changes

- `pyproject.toml`: adds the `better-profanity` dependency
- `website/views/issue.py`: profanity check in `IssueCreate.form_valid`, with `Blocked` record creation on detection
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant IssueCreate as IssueCreate View
    participant Profanity as Profanity Checker
    participant Blocked as Blocked Model
    User->>IssueCreate: Submit bug report with description
    IssueCreate->>IssueCreate: Combine text fields for checking
    IssueCreate->>Profanity: Check for profanity in combined text
    alt Profanity Detected
        Profanity-->>IssueCreate: Returns true
        IssueCreate->>Blocked: Create new Blocked record (IP, agent, reason)
        IssueCreate-->>User: Redirect with error message
    else No Profanity
        Profanity-->>IssueCreate: Returns false
        IssueCreate-->>User: Process form submission normally
    end
```
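For reference, here is a minimal sketch of the check this diagram describes, reconstructed from the diffs quoted later in this conversation; the surrounding view class and the `reporter_ip` lookup are assumed context, not verified source:

```python
from better_profanity import profanity
from django.contrib import messages
from django.http import HttpResponseRedirect

from website.models import Blocked


# Method body sketched outside its class for brevity; in the PR this
# lives on IssueCreate, so `super()` and `self.request` resolve there.
def form_valid(self, form):
    # Combine text fields for checking, as in the PR diff
    description = form.cleaned_data.get("description", "")
    markdown_description = form.cleaned_data.get("markdown_description", "")
    text_to_check = f"{description} {markdown_description}"

    if profanity.contains_profanity(text_to_check):
        # Create an enforcement record; fields match the review diffs
        Blocked.objects.create(
            address=reporter_ip,  # assumed: resolved earlier in the view
            reason_for_block="Inappropriate language in bug report",
            user_agent_string=self.request.META.get("HTTP_USER_AGENT", ""),
            count=1,
        )
        # Prevent form submission and redirect with an error message
        messages.error(self.request, "Have a nice day.")
        return HttpResponseRedirect("/")

    # No profanity: process the form submission normally
    return super().form_valid(form)
```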
Actionable comments posted: 0
🧹 Nitpick comments (2)
website/views/issue.py (2)
910-928: Well-implemented profanity check for bug reports.

The implementation effectively:
- Extracts text from both description fields
- Combines them for a comprehensive check
- Creates an enforcement record when profanity is detected
- Prevents form submission with an appropriate error message
This is a good security measure to prevent inappropriate content in bug reports.
Consider adding a small refactor to sanitize or normalize the text before checking for profanity:
```diff
 # Combine fields to check
-text_to_check = f"{description} {markdown_description}"
+text_to_check = f"{description} {markdown_description}".strip()
```

This would handle edge cases where whitespace might affect the profanity check.
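For context, a tiny hedged demo of the underlying better-profanity calls, using hypothetical strings:

```python
from better_profanity import profanity

profanity.load_censor_words()  # load the default wordlist

text = "   some description    with trailing spaces   "
normalized = " ".join(text.split())  # trim and collapse whitespace
# The detector tokenizes on whitespace, so normalization mainly guards
# edge cases (e.g. an empty markdown field) rather than changing results.
print(profanity.contains_profanity(normalized))  # False for this clean text
print(profanity.censor("what the shit"))         # e.g. "what the ****"
```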
929-929: Incomplete code comment.

There appears to be an incomplete or unnecessary comment starting with "# Pr" that doesn't provide any useful information.
Consider removing this incomplete comment or completing it with meaningful information:
```diff
-# Pr
+# Proceed with normal form validation
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `pyproject.toml` (2 hunks)
- `website/views/issue.py` (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Run Tests
🔇 Additional comments (3)
pyproject.toml (1)
70-70: ✅ Verification successful
New dependency added for profanity checking.

Adding the `better-profanity` package will enable profanity filtering capabilities for bug reports as implemented in the `IssueCreate.form_valid` method. Let's verify if this version is the latest:
🌐 Web query:
What is the latest version of the better-profanity Python package?

💡 Result:

The latest version of the `better-profanity` Python package is 0.7.0, released on November 2, 2020 [1][6]. This version remains current as of March 2025 based on available documentation.

Key details:
- Supports Unicode character handling and leetspeak detection[1][3]
- Compatible with Python 3.4+ and PyPy3[1][6]
- Features include customizable censoring characters, profanity presence checks, and wordlist modifications[1][3][9]
The package continues to be maintained on GitHub with open issues and contributions still being accepted[9].
Citations:
- 1: https://pypi.org/project/better-profanity/
- 2: https://github.com/vzhou842/profanity-check
- 3: https://dev.to/frankdev20/how-to-use-the-python-betterprofanity-filter-api-with-graphql-43if
- 4: https://socket.dev/pypi/package/dictionary-profanity-filter
- 5: https://github.com/snguyenthanh/better_profanity/blob/master/setup.py
- 6: https://repo-portal.ito.gov.ir/repo/library/Python/Others/better-profanity-0.7.0.html
- 7: https://pypi.org/project/profanity-check/
- 8: https://profanity-im.github.io/plugins.html
- 9: https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/better_profanity.py
Verified Dependency Version: `better-profanity` 0.7.0 is current

- The dependency addition in `pyproject.toml` (line 70) with `better-profanity = "^0.7.0"` is correct, since version 0.7.0 is confirmed as the latest release.
- This package is now correctly set up to provide profanity filtering in the `IssueCreate.form_valid` method.
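As a quick runtime cross-check, the resolved version in the active environment can be printed with the standard-library `importlib.metadata` (Python 3.8+):

```python
from importlib.metadata import version

# Should print 0.7.0 if the environment matches the pin verified above.
print(version("better-profanity"))
```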
website/views/issue.py (2)
16-16: Successfully imported the profanity module.

The profanity module is properly imported from the better-profanity package for content moderation.
63-63: Successfully imported the Blocked model.

The `Blocked` model is correctly imported to track instances of profanity detection.
@CodeRabbit full review it
Overall Assessment

This is a well-implemented feature that adds important content moderation capabilities to the bug reporting system.

Positive Aspects

- Checks both the description and markdown_description fields
- Logs detections through the existing `Blocked` model
- Stops the submission before the report is saved
Suggestions for Improvement

- Add unit tests covering the profanity check
- Use a more informative error message than "Have a nice day."
- Consider configurable sensitivity and allowlists to handle false positives
Technical Implementation

The implementation correctly:

- combines the description and markdown_description fields into one string,
- uses better-profanity's `contains_profanity` check,
- records the reporter's IP, user agent, and block reason in a `Blocked` entry, and
- blocks the submission with an error message and redirect.
Overall, this is a valuable addition to the project that will help maintain content quality. The changes look good to merge, with the test coverage being the main area that could be enhanced in the future.
/giphy amazing |
DonnieBLT left a comment
please change the version as suggested by the developers
Actionable comments posted: 1
🧹 Nitpick comments (2)
website/views/issue.py (2)
926-928: Consider improving the error message for better user experience.

The current error message "Have a nice day." doesn't inform users about why their submission was rejected. A more informative message would improve user experience and reduce confusion.
```diff
-messages.error(self.request, "Have a nice day.")
+messages.error(self.request, "Your submission contains inappropriate language. Please revise your content and try again.")
```
910-928: Consider enhancing the profanity filter implementation for better usability.

The current implementation blocks submissions entirely when profanity is detected. Consider these enhancements to improve user experience:
- Highlight problematic terms instead of rejecting outright
- Add configuration options for sensitivity levels
- Create an allowlist for false positives
- Track repeat offenders with escalating responses
A more robust implementation could use a configuration file to control sensitivity and maintain allowlists:
```python
# Example configuration-based approach
from better_profanity import profanity
from django.conf import settings

# Get configuration from settings
PROFANITY_SETTINGS = getattr(settings, 'PROFANITY_SETTINGS', {
    'censored_words': [],     # Additional words to censor
    'allowed_words': [],      # Words to exclude from censoring
    'action_level': 'block',  # Options: 'warn', 'highlight', 'block'
    'max_warnings': 3,        # Number of warnings before blocking
})

# Configure profanity filter (whitelist_words is the keyword accepted
# by better-profanity's load_censor_words)
profanity.load_censor_words(whitelist_words=PROFANITY_SETTINGS['allowed_words'])
profanity.add_censor_words(PROFANITY_SETTINGS['censored_words'])

# Later in the code
if profanity.contains_profanity(text_to_check):
    if PROFANITY_SETTINGS['action_level'] == 'block':
        pass  # Current implementation: record a Blocked entry and redirect
    elif PROFANITY_SETTINGS['action_level'] == 'highlight':
        # Return with highlighted text: show the user the censored text
        # and allow editing
        censored_text = profanity.censor(text_to_check)
    elif PROFANITY_SETTINGS['action_level'] == 'warn':
        pass  # Track warnings for this user; block once max_warnings exceeded
```
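For example (hedged: `PROFANITY_SETTINGS` and its keys are hypothetical names defined only by the sketch above, not an existing API), a deployment could override the defaults in `settings.py`:

```python
# settings.py -- hypothetical keys from the sketch above
PROFANITY_SETTINGS = {
    "censored_words": ["somebadword"],  # project-specific additions
    "allowed_words": ["scunthorpe"],    # known false positives to permit
    "action_level": "warn",             # warn first rather than block outright
    "max_warnings": 3,                  # escalate to blocking after 3 warnings
}
```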
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (1)
- `website/views/issue.py` (3 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
website/views/issue.py (1)
website/models.py (1)
`Blocked` (1200-1210)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Run Tests
🔇 Additional comments (4)
website/views/issue.py (4)
16-16: Good addition of the profanity filter dependency.

The `better-profanity` package is a solid choice for implementing content moderation, providing good performance and flexibility for profanity detection.
63-63: Properly imported the Blocked model.

The `Blocked` model imported here is correctly used later in the code to log moderation actions, ensuring consistent use of the existing infrastructure.
910-915: Well-designed text collection for profanity checking.

Combining both the description and markdown_description fields ensures thorough profanity checking across all user-provided content in the bug report.
917-924: Good implementation of the profanity check and blocking.

The implementation correctly uses the profanity check API and creates a block record with appropriate details when profanity is detected. It records the reporter's IP, reason for block, and user agent string, which is helpful for monitoring abuse patterns.
Actionable comments posted: 1
♻️ Duplicate comments (1)
website/views/issue.py (1)
910-928: 🛠️ Refactor suggestion: Add unit tests for the profanity checking functionality.
This important content moderation feature lacks test coverage. Consider adding tests to verify profanity detection works as expected with various inputs including edge cases.
Here's a suggested structure for tests:
```python
# In tests.py or a dedicated test file
from unittest import mock

from django.contrib.auth.models import User
from django.contrib.messages.storage.fallback import FallbackStorage
from django.test import RequestFactory, TestCase

from website.models import Blocked
from website.views.issue import IssueCreate


class ProfanityCheckTest(TestCase):
    def setUp(self):
        self.factory = RequestFactory()
        self.user = User.objects.create_user(
            username='testuser', email='[email protected]', password='testpass'
        )

    def test_profanity_detection_blocks_submission(self):
        # Create a request with profanity in the description
        request = self.factory.post('/report/', {
            'description': 'This contains a bad word like sh*t',
            'markdown_description': 'More details',
            # Add other required form fields
        })
        request.user = self.user
        request.META['HTTP_USER_AGENT'] = 'Test User Agent'

        # Set up messages framework
        setattr(request, 'session', 'session')
        messages = FallbackStorage(request)
        setattr(request, '_messages', messages)

        # Create the view and call form_valid
        view = IssueCreate()
        view.request = request

        # Mock the form with cleaned_data
        form = mock.MagicMock()
        form.cleaned_data = {
            'description': 'This contains a bad word like sh*t',
            'markdown_description': 'More details',
        }

        # Call form_valid and check the result
        response = view.form_valid(form)

        # Verify that a Blocked entry was created
        self.assertEqual(Blocked.objects.count(), 1)
        blocked = Blocked.objects.first()
        self.assertEqual(blocked.reason_for_block, 'Inappropriate language in bug report')

        # Verify the user was redirected
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response.url, '/')
```
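If wordlist behavior ever proves flaky in CI, a hedged companion pattern is to pin the detector's verdict with `unittest.mock.patch`; the patch target below assumes `profanity` is imported at module level in `website/views/issue.py`, as the diffs in this review indicate:

```python
from unittest import mock

# Force the detector's verdict so the assertion does not depend on the
# bundled wordlist; reuses the view and form set up in the test above.
with mock.patch(
    "website.views.issue.profanity.contains_profanity", return_value=True
) as forced_check:
    response = view.form_valid(form)
forced_check.assert_called_once()
```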
🧹 Nitpick comments (3)
website/views/issue.py (3)
926-927: Consider providing a more informative error message.

The generic message "Have a nice day" doesn't inform users why their submission was blocked. A more descriptive message would help users understand the issue and modify their submission accordingly.
```diff
-messages.error(self.request, "Have a nice day.")
+messages.error(self.request, "Your submission contains inappropriate language. Please revise your content and try again.")
```
910-928: Consider enhancing user experience with warning instead of blocking.

Currently, the implementation blocks submissions completely when profanity is detected. Consider enhancing the user experience by highlighting the inappropriate content and allowing users to edit their submission instead of forcing them to start over.
Example implementation:
```diff
 # Check for profanity
 if profanity.contains_profanity(text_to_check):
+    # Log the attempt without blocking
     Blocked.objects.create(
         address=reporter_ip,
         reason_for_block="Inappropriate language in bug report",
         user_agent_string=self.request.META.get("HTTP_USER_AGENT", ""),
         count=1,
     )
-    # Prevent form submission
-    messages.error(self.request, "Have a nice day.")
-    return HttpResponseRedirect("/")
+    # Instead of blocking, warn and return to form
+    messages.warning(self.request, "Your submission contains inappropriate language. Please revise and submit again.")
+    return render(self.request, "report.html", {
+        "form": form,
+        "captcha_form": CaptchaForm(self.request.POST),
+        "has_profanity": True,
+        "submitted_data": form.cleaned_data,
+    })
```
910-928: Consider adding configurability for profanity checking.

The current implementation uses default settings for profanity detection. Consider adding configuration options to:
- Control the sensitivity level
- Create an allowlist for false positives
- Allow admin toggling of the feature
This could be implemented using Django settings or a dedicated configuration model:
```python
# In settings.py
PROFANITY_CHECK_ENABLED = True
PROFANITY_SENSITIVITY = 0.8  # 0-1 scale
PROFANITY_ALLOWLIST = ['spam', 'analysis']  # Words that might trigger false positives

# Then in the view
if settings.PROFANITY_CHECK_ENABLED:
    # Configure profanity filter; whitelist_words is the keyword
    # better-profanity's load_censor_words accepts
    profanity.load_censor_words(whitelist_words=settings.PROFANITY_ALLOWLIST)

    # Then check as normal
    if profanity.contains_profanity(text_to_check):
        pass  # Handle profanity...
```
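For illustration, a small hedged check of the allowlist path (the sample words are arbitrary examples, not known entries in the default wordlist):

```python
from better_profanity import profanity

# Reload the default wordlist while guaranteeing the allowlisted words
# are never flagged, whatever the default list contains.
profanity.load_censor_words(whitelist_words=["spam", "analysis"])
print(profanity.contains_profanity("spam analysis of the login flow"))  # False
```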
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `website/views/issue.py` (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Run Tests
- GitHub Check: docker-test
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (4)
website/views/issue.py (4)
16-16: Appropriate library imported for profanity detection.

The `better-profanity` library is properly imported and will be used for checking inappropriate language in bug reports.
63-63: Required model imported for blocking functionality.

The `Blocked` model is correctly added to the imports list, which will be used to record instances of blocked submissions due to profanity.
910-915: Effective extraction and combination of text fields for profanity checking.

The implementation properly retrieves both the description and markdown_description fields and combines them into a single string for comprehensive profanity checking. This ensures all user-submitted content is properly moderated.
917-924: Good implementation of profanity detection and blocking.

The code correctly uses the profanity detection library and creates a proper blocking record with meaningful information (reporter IP, reason, user agent, count).
@CodeRabbit Now check and tell me is it correct?
The implementation is straightforward yet effective. It follows the application's existing patterns and integrates naturally with the form validation flow. If you wanted to enhance this feature in the future, you could consider:

- a more informative rejection message than "Have a nice day.",
- configurable sensitivity and allowlists for false positives, and
- dedicated unit tests for the profanity check.
But for the current PR, your implementation is correct and ready to merge.
/giphy thumbs up please resolve conversations |
* done
* done
* Update website/views/issue.py
* Update website/views/issue.py
* done
* done

---------

Co-authored-by: DonnieBLT <[email protected]>
Fixes: #4061
Summary by CodeRabbit
New Features

- Bug report submissions are now checked for inappropriate language; reports containing profanity are blocked and the reporter is redirected with an error message.
Chores
- Added the `better-profanity` dependency to enhance functionality and applied minor formatting improvements to maintain standard configuration practices.