Is Extending Modality The Right Path Towards Omni-Modality?

Code and data for the paper "Is Extending Modality The Right Path Towards Omni-Modality?".

[Website] • [Paper]

Environment Setup

To install the inference environment, run the following command:

conda env create -f environment.yml
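
After the environment is created, activate it before running any scripts. The environment name below is an assumption; use the name defined in environment.yml:

conda activate lm-extend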

Inference

To generate answers from LLMs, run the script scripts/infer.sh. The script performs three steps:

  1. Download the model weights from Hugging Face.
  2. Extract the LLM component from the multimodal model.
  3. Generate answers.

If some steps are not needed for a specific model, you can remove them from the script; an example invocation is shown below.
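
For example, the whole pipeline can be launched with a single command (assuming a bash shell and that the script is run from the repository root):

bash scripts/infer.sh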

To generate answers from multimodal models, run the script scripts/infer_multimodal.sh.

For merged models, first run python src/utils/save_merged_vlm.py to load the merged LLM into the multimodal model. You can change the target model name in the Python file.
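
For example, a merged model can be prepared and then evaluated as follows (running the multimodal inference script afterwards is inferred from the description above and is an assumption):

python src/utils/save_merged_vlm.py
bash scripts/infer_multimodal.sh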

Fine-tuning

To train the merged model, run the script src/training/Qwen2.5-VL/qwen-vl-finetune/train.sh, which is adapted from the Qwen2.5-VL training code.
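
For example (assuming a bash shell; adjust the paths and hyperparameters inside the script to your setup):

bash src/training/Qwen2.5-VL/qwen-vl-finetune/train.sh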

Citation

If you find this repo useful, please cite the following paper:

@article{zhu2025extending,
  title={Is Extending Modality The Right Path Towards Omni-Modality?},
  author={Zhu, Tinghui and Zhang, Kai and Chen, Muhao and Su, Yu},
  journal={arXiv preprint arXiv:2506.01872},
  year={2025}
}
