Evaluation code for various unsupervised automated metrics for Natural Language Generation.
A well-tested, multi-language evaluation framework for text summarization.
A neural network that generates image captions using a CNN and an RNN with beam search.
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
A Python 3 library for evaluating caption BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption
MAchine Translation Evaluation Online (MATEO)
Machine Translation (MT) Evaluation Scripts
An implementation of the paper "BLEU: a Method for Automatic Evaluation of Machine Translation".
Automatic text metrics (BLEU, ROUGE, METEOR, and more).
Image captioning is a task that combines computer vision and natural language processing to recognize the context of an image and describe it in a natural language such as English.
Corpus-level and sentence-level BLEU calculation for machine translation (see the sketch after this list).
A generator of training data for LLMs, targeting the specific use case of creating SQL queries from natural language. Developed as a practical project at TUM.
A local evaluation suite for Luxembourgish machine translation.
[Work in progress] Implements the sequence-to-sequence LSTM architecture from Sutskever et al. (2014) for English-to-French translation, achieving a BLEU score comparable to the original paper's; includes custom data preprocessing and a deep LSTM encoder-decoder.
Neural dialogue generation benchmarks implemented in TensorFlow 2.0.
BleuMacaw: GPT-2 and SentenceTransformers for paraphrase generation.
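Several of the repositories above compute BLEU at both granularities. A minimal sketch of the distinction follows, using NLTK's reference implementation as an illustrative assumption (the listed projects may use their own scorers); the key point is that corpus-level BLEU pools n-gram counts across all segments rather than averaging per-sentence scores.

```python
from nltk.translate.bleu_score import sentence_bleu, corpus_bleu, SmoothingFunction

# Smoothing avoids zero scores when a short hypothesis has no 4-gram match.
smooth = SmoothingFunction().method1

# Each hypothesis is a token list; each entry in `references` is a list of
# one or more reference token lists for the corresponding hypothesis.
hypotheses = [
    "the cat sat on the mat".split(),
    "there is a dog in the garden".split(),
]
references = [
    ["the cat is on the mat".split()],
    ["a dog is in the garden".split(), "there is a dog in the yard".split()],
]

# Sentence-level: one score per segment (noisy on short segments).
for hyp, refs in zip(hypotheses, references):
    print(sentence_bleu(refs, hyp, smoothing_function=smooth))

# Corpus-level: a single score over pooled n-gram statistics, which is how
# BLEU is defined in the original Papineni et al. (2002) paper.
print(corpus_bleu(references, hypotheses, smoothing_function=smooth))
```

Note that the corpus-level score is generally not equal to the mean of the sentence-level scores, which is why corpus-level BLEU is preferred for reporting machine translation results.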