The generated code was polluted with malicious keywords #193404
Replies: 1 comment
Your Product Feedback Has Been Submitted

Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward:

Where to look to see what's shipping
What you can do in the meantime

As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform.
🏷️ Discussion Type
Bug
💬 Feature/Topic Area
Code quality
Discussion Details
Describe the bug
I am developing a C# .NET application (specifically an MCP server for data processing). While I was using GitHub Copilot to generate bootstrap code for dependency injection and mocking, the model's output contained severe hallucinations and unrelated toxic content.
Instead of generating valid C# code, Copilot printed spam text related to online gambling and adult content (e.g., "彩神争霸", "大发快三", "日本一本道"). Fortunately, the spam was not directly incorporated into the code's comments or logical structure. This occurred while using the Modernize mode of Visual Studio.
Impact
Context
Evidence

The output included lines like:

//\n public string DicomRootPath { get; set; } = "彩神争霸大发快三json";// 玩彩神争霸 日本一本道

(The Chinese strings are the names of gambling/lottery sites and a Japanese adult site.)

This content is completely irrelevant to the domain context and represents a failure in the model's content filtering mechanism.
Expected Behavior
Copilot should strictly adhere to the technical context and filter out any unrelated, toxic, or spam content.
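For contrast, here is a minimal sketch of the kind of clean, context-appropriate bootstrap output one would expect for such an options type. It assumes the common Microsoft.Extensions.DependencyInjection container; the class name DicomOptions and the default path are illustrative assumptions, as only the DicomRootPath property appears in the report.

```csharp
// A minimal sketch (not the reporter's actual code) of clean DI bootstrap
// output. "DicomOptions" and the default path are assumptions; only
// DicomRootPath appears in the report. Assumes the
// Microsoft.Extensions.DependencyInjection NuGet package.
using System;
using Microsoft.Extensions.DependencyInjection;

public class DicomOptions
{
    // A plain, domain-relevant default value, free of injected spam text.
    public string DicomRootPath { get; set; } = "data/dicom";
}

public static class Program
{
    public static void Main()
    {
        // Register the options type with the DI container and resolve it.
        var services = new ServiceCollection();
        services.AddSingleton<DicomOptions>();

        using var provider = services.BuildServiceProvider();
        var options = provider.GetRequiredService<DicomOptions>();
        Console.WriteLine(options.DicomRootPath);
    }
}
```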