
FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models

This repository is the official implementation of the paper "FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models", accepted to Findings of NAACL 2025.

Installation

To install FLEX and its bundled lm-eval package from the GitHub repository, run:

git clone https://github.com/dhaabb55/FLEX/
cd FLEX
pip install -e .
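
If the editable install succeeds, the harness should be importable and its CLI available. A minimal sanity check (assuming the harness's usual lm_eval package name and console script; older harness versions are invoked via python main.py instead):

python -c "import lm_eval"
lm_eval --help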

Running the Full Evaluation

bash run_FLEX.sh
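
run_FLEX.sh sweeps the whole benchmark. To evaluate a single model through the underlying harness directly, an invocation along the following lines should work; the task name flex and the model checkpoint here are placeholders, so check run_FLEX.sh for the exact task names and arguments the benchmark registers:

# Evaluate one HuggingFace model on one task via the lm-eval CLI
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks flex \
    --batch_size 8 \
    --output_path results/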

Our code is based on the Language Model Evaluation Harness.

Our data is based on the paper "On Second Thought, Let's Not Think Step by Step: Bias and Toxicity in Zero-Shot Reasoning".

Cite as

@misc{jung2025flexbenchmarkevaluatingrobustness,
      title={FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models}, 
      author={Dahyun Jung and Seungyoon Lee and Hyeonseok Moon and Chanjun Park and Heuiseok Lim},
      year={2025},
      eprint={2503.19540},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.19540}, 
}
