Less Is More, but Where?
Dynamic Token Compression via LLM-Guided Keyframe Prior

This repository is the official PyTorch implementation of DyToK.

📚 TABLE OF CONTENTS

  1. Motivation
  2. Method
  3. News
  4. TODO
  5. Installation
  6. Quick Start
  7. Reproducing Results
  8. Development
  9. Acknowledgements
  10. Citation

🎯 Motivation

Unveiling the keyframe prior in VLLMs. We visualize the attention from the final text token to each frame's visual tokens, averaged across all layers. The top-8 frames by attention score are shown chronologically, with ground-truth (GT) keyframes highlighted in red. We observe that even when the model answers incorrectly, its attention still pinpoints the relevant frames, revealing a strong task-dependent keyframe prior.
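
The probe behind this figure fits in a few lines. Below is a minimal sketch, assuming a HuggingFace-style model that returns per-layer attention maps; the tensor indexing and the frame/token bookkeeping are illustrative, not the repository's exact code:

import torch

def frame_attention_scores(attentions, text_pos, frame_spans):
    """Average attention from the final text token to each frame's visual tokens.

    attentions:  per-layer tensors of shape [batch, heads, seq, seq]
    text_pos:    index of the final text token in the sequence
    frame_spans: (start, end) index ranges of each frame's visual tokens
    """
    # Take the final text token's attention row, averaged over layers and heads.
    attn = torch.stack([a[0, :, text_pos, :] for a in attentions]).mean(dim=(0, 1))
    # One score per frame: mean attention over that frame's visual tokens.
    return torch.stack([attn[s:e].mean() for s, e in frame_spans])

# Top-8 frames by score, shown chronologically as in the figure:
# scores = frame_attention_scores(outputs.attentions, text_pos, frame_spans)
# top8 = scores.topk(8).indices.sort().values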

🌈 Method

Illustration of DyToK. We adaptively compress video tokens through two synergistic components:

  1. Temporal Importance Estimation leverages cross-modal attention from a lightweight assistant model to identify keyframes;
  2. Dynamic Frame-Level Compression proportionally allocates token budgets across frames to preserve salient content (see the sketch after this list).
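
To make the second component concrete, here is a minimal sketch of proportional budget allocation, assuming per-frame importance scores from the assistant model; the function name and the rounding policy are illustrative, not DyToK's exact implementation:

import torch

def allocate_token_budget(frame_scores, total_budget):
    """Split a global visual-token budget across frames in proportion to importance."""
    weights = frame_scores / frame_scores.sum()
    budgets = (weights * total_budget).floor().long()
    # floor() leaves a small remainder; hand it to the highest-scoring frames.
    leftover = int(total_budget - budgets.sum())
    for i in frame_scores.argsort(descending=True)[:leftover].tolist():
        budgets[i] += 1
    return budgets  # budgets[i] = number of tokens to keep for frame i

# e.g. 4 frames, 196 tokens total:
# allocate_token_budget(torch.tensor([0.9, 0.1, 0.5, 0.3]), 196)  # tensor([99, 10, 55, 32])

Each frame then keeps its most salient tokens up to its budget, so important frames retain more content than unimportant ones.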

🎉 News

  • [2025.12.06] Released code for integrating DyToK with encoder feature-based pruning methods.
  • [2025.09.18] Our paper has been accepted at NeurIPS 2025.

🔥 TODO

  • Initialize Project.
  • Release code for integrating DyToK with LLM attention-based pruning methods.
  • Add support for Qwen3-VL.

📦 Installation

DyToK's code is extremely concise and works out of the box. Just install and go!

1. Quick Install

Install the latest stable version directly from PyPI:

pip install dytok

2. Development Install

Clone the repository and install in editable mode:

git clone https://github.com/yu-lin-li/DyToK.git
cd DyToK
pip install -e .

🚀 Quick Start

Integrating DyToK takes just two lines of code:

from dytok import visionzip
visionzip(model, dytok=True, use_tiny=True, tiny_model=tiny_model)
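
For context, here is a sketch of how the call might slot into a full loading flow. The loader below comes from the LLaVA-NeXT codebase, and the checkpoint names and choice of assistant model are assumptions for illustration; the actual demo is in playground/llavaov_infer.py:

from llava.model.builder import load_pretrained_model
from dytok import visionzip

# Base VLLM to compress (illustrative checkpoint).
tokenizer, model, image_processor, _ = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-7b-ov", None, "llava_qwen"
)
# Lightweight assistant model that supplies the keyframe prior (illustrative).
_, tiny_model, _, _ = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov", None, "llava_qwen"
)

# Patch the base model in place; inference then proceeds as usual.
visionzip(model, dytok=True, use_tiny=True, tiny_model=tiny_model)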

Try it out with our demo script using LLaVA-OneVision:

python playground/llavaov_infer.py

📊 Reproducing Results

All experiments in the paper are based on LMMs-Eval. Follow these steps to reproduce our results.

1. Setup Environment

# Create virtual environment
conda create -n dytok python=3.10
conda activate dytok

# Install base models (e.g., LLaVA-OneVision)
pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git

# Install DyToK
git clone https://github.com/yu-lin-li/DyToK.git
cd DyToK
pip install -e .

# Install LMMs-Eval streamlined for DyToK
cd eval
pip install -e .
pip install flash-attn==2.6.3  # optional

💡 Note: Our eval/ directory contains a minimal, DyToK-focused version of LMMs-Eval. For full functionality, install the official LMMs-Eval separately and integrate DyToK as described in Development.

2. Evaluation

Reproduce DyToK-enhanced VisionZip results on LLaVA-OneVision:

bash eval/scripts/dytok_visionzip_tiny_32_ov.sh

🛠️ Development

1. Repository Structure

.
├── assets/
├── dytok/                    # Core DyToK logic
│   └── visionzip/            # DyToK-enhanced VisionZip
├── eval/
│   ├── lmms_eval/            # Evaluation toolkit
│   │   └── models/           # DyToK-integrated models
│   └── scripts/              # Evaluation scripts
├── playground/               # Demo inference scripts
│   └── llavaov_infer.py
├── pyproject.toml
└── README.md

2. Adapt DyToK to Your Own Method

DyToK is designed as a plug-and-play module. To integrate it into your token compression method:

  • Look for code blocks explicitly annotated to isolate DyToK-specific logic from the base method, as shown below:
# ! ———— DyToK Begin ————
...
# ! ———— DyToK End ————
  • Migrate the enclosed logic into your method, as in the hypothetical sketch below.
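
For instance, the migrated logic might sit inside your own pruning routine like this; everything outside the markers is hypothetical host code, and allocate_token_budget stands in for the logic copied from dytok/:

def prune_visual_tokens(visual_tokens, frame_scores, total_budget):
    # ! ———— DyToK Begin ————
    # Per-frame budgets from the keyframe prior (logic migrated from dytok/).
    budgets = allocate_token_budget(frame_scores, total_budget)
    # ! ———— DyToK End ————
    # 'your_token_selector' is a placeholder for your method's own
    # per-frame token scoring and selection, now budget-aware.
    return [your_token_selector(tokens, k) for tokens, k in zip(visual_tokens, budgets)]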

✅ Pro Tip: Use the Better Comments extension in VSCode to highlight DyToK annotations in red!

3. Integrate with Your Own LMMs-Eval

To add DyToK support to your local LMMs-Eval:

cp eval/lmms_eval/models/*.py <YOUR_LMMS_EVAL_PATH>/models/

Then register the model in <YOUR_LMMS_EVAL_PATH>/models/__init__.py:

# Add the DyToK model entry to AVAILABLE_MODELS
AVAILABLE_MODELS = {
    # existing models ...
    "llava_onevision_dytok": "Llava_OneVision_DyToK"
}

❤️ Acknowledgements

Our work builds upon the codebase of VisionZip, DyCoke, FastV, LLaVA-NeXT, Qwen2.5-VL, and LMMs-Eval. We sincerely thank the authors for their remarkable contributions.

📜 Citation

If you find DyToK useful in your research, please cite our paper:

@article{li2025less,
  title={Less Is More, but Where? Dynamic Token Compression via LLM-Guided Keyframe Prior},
  author={Li, Yulin and Gui, Haokun and Fan, Ziyang and Wang, Junjie and Kang, Bin and Chen, Bin and Tian, Zhuotao},
  journal={arXiv preprint arXiv:2025},
  year={2025}
}
