# hallucination-evaluation

Here are 17 public repositories matching this topic...

TrustScoreEval: Trust scores for AI/LLM responses. Detect hallucinations, flag misinformation, and validate outputs to build trustworthy AI (an illustrative scoring sketch follows this entry).

  • Updated Oct 13, 2025
  • Python
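The listing doesn't say how the scores are computed, so the following is only a hypothetical sketch of one common approach to trust scoring, self-consistency: resample the model on the same prompt and score how strongly the candidate answer agrees with the resamples. The function name `trust_score` and the similarity metric are illustrative assumptions, not TrustScoreEval's actual API.

```python
# Hypothetical self-consistency trust score: a candidate answer that
# disagrees with independently resampled answers is more likely hallucinated.
from difflib import SequenceMatcher

def trust_score(candidate: str, samples: list[str]) -> float:
    """Average pairwise similarity between the candidate answer and
    resampled answers to the same prompt (1.0 = perfectly consistent)."""
    if not samples:
        return 0.0
    sims = [SequenceMatcher(None, candidate.lower(), s.lower()).ratio()
            for s in samples]
    return sum(sims) / len(sims)

answer = "The Eiffel Tower is in Paris."
resamples = ["The Eiffel Tower is located in Paris.",
             "It is in Paris, France.",
             "The Eiffel Tower stands in Paris."]
print(f"trust score: {trust_score(answer, resamples):.2f}")  # high = consistent
```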

An interactive Python chatbot demonstrating real-time contextual hallucination detection in Large Language Models using the "Lookback Lens" method. The project extracts attention-based lookback ratio features and applies a trained classifier to identify when an LLM deviates from the provided context during generation (a minimal sketch of the idea follows this entry).

  • Updated May 16, 2025
  • Python
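A minimal sketch of the Lookback Lens idea as described above: compute per-head lookback ratios (attention mass on the provided context versus on newly generated tokens), average them over the generated span, and train a simple classifier on the resulting features. Array shapes, function names, and the synthetic data are assumptions for illustration, not this repository's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lookback_ratio_features(attn, n_context):
    """attn: [layers, heads, gen_steps, seq_len] attention weights for each
    generated token over the whole sequence; n_context: number of context
    (prompt) tokens at the start of the sequence."""
    on_context = attn[..., :n_context].sum(axis=-1)    # [layers, heads, steps]
    on_generated = attn[..., n_context:].sum(axis=-1)
    ratio = on_context / (on_context + on_generated + 1e-9)
    return ratio.mean(axis=-1).ravel()                 # one feature per head

# Synthetic demo: "faithful" spans attend more to the context tokens.
rng = np.random.default_rng(0)
def fake_attn(context_bias):
    a = rng.random((4, 8, 6, 20)) + context_bias * np.pad(
        np.ones((4, 8, 6, 12)), ((0, 0), (0, 0), (0, 0), (0, 8)))
    return a / a.sum(axis=-1, keepdims=True)           # normalize per step

X = np.stack([lookback_ratio_features(fake_attn(b), n_context=12)
              for b in [1.0] * 20 + [0.0] * 20])
y = np.array([1] * 20 + [0] * 20)                      # 1 = faithful, 0 = hallucinated
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(faithful):", clf.predict_proba(X[:1])[0, 1])
```

In the Lookback Lens paper the classifier is a simple linear model over per-head features, so spans whose attention heads look back at the context too little get flagged as likely hallucinations.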
hlft-legality-engine
