
[EMNLP 2025 Findings] MEXA: Towards General Multimodal Reasoning with Dynamic Multi-Expert Aggregation

arXiv: https://arxiv.org/abs/2506.17113

University of North Carolina at Chapel Hill


(Teaser figure)

🔥 News

  • Our paper is accepted to EMNLP 2025 Findings.

Setup

  • We will release the multi-expert skills/captions code and data later.

Install Dependencies

conda create -n mexa python=3.10
conda activate mexa
pip install -r requirements.txt

Inference

We provide an example MEXA inference script, which can be run as follows:

sh run_mexa.sh
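
For context, below is a minimal conceptual sketch of the dynamic multi-expert aggregation idea named in the paper title. All names in it (run_experts, aggregate, and the expert/reasoner callables) are hypothetical illustrations and are not the interfaces used by run_mexa.sh or this repository.

# Conceptual sketch only: hypothetical names, not the MEXA implementation.
def run_experts(question, image, experts):
    # Query each selected expert model and collect its textual output.
    return {name: expert(question, image) for name, expert in experts.items()}

def aggregate(question, expert_outputs, reasoner):
    # Let a reasoning model fuse the expert outputs into a final answer.
    context = "\n".join(f"{name}: {out}" for name, out in expert_outputs.items())
    return reasoner(f"Question: {question}\nExpert outputs:\n{context}\nAnswer:")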

Reference

Please cite our paper if you use our models in your work:

@article{yu2025mexa,
  title={MEXA: Towards General Multimodal Reasoning with Dynamic Multi-Expert Aggregation},
  author={Yu, Shoubin and Zhang, Yue and Wang, Ziyang and Yoon, Jaehong and Bansal, Mohit},
  journal={arXiv preprint arXiv:2506.17113},
  year={2025}
}

