This repository is the official PyTorch implementation of DyToK.
- Motivation
- Method
- News
- TODO
- Installation
- Quick Start
- Reproducing Results
- Development
- Acknowledgements
- Citation
## Motivation
Unveiling the keyframe prior in VLLMs. We visualize the averaged attention from the final text token to visual tokens across all layers for each frame. The top-8 frames by attention score are shown chronologically, with ground-truth (GT) keyframes highlighted in red. We observe that even when the model answers incorrectly, its attention still pinpoints relevant frames, revealing a strong task-dependent keyframe prior.
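The sketch below shows one way to reproduce this kind of probe yourself: average the final text token's attention over layers and heads, pool it per frame, and rank frames. The tensor layout, the `visual_slices` frame-to-token mapping, and the HuggingFace-style `output_attentions=True` forward pass are assumptions for illustration, not code from this repository.

```python
# A minimal sketch of the attention probe described above, assuming a
# HuggingFace-style forward pass with output_attentions=True.
# Variable names and tensor shapes are illustrative assumptions.
import torch

def frame_attention_scores(attentions, visual_slices):
    """attentions: tuple of [batch, heads, seq, seq] tensors, one per layer.
    visual_slices: list of (start, end) token index ranges, one per frame."""
    # Attention from the final text token to every key position,
    # averaged over all layers and heads.
    last_token_attn = torch.stack([a[0, :, -1, :] for a in attentions]).mean(dim=(0, 1))
    # Average attention mass received by each frame's visual tokens.
    return torch.tensor([last_token_attn[s:e].mean().item() for s, e in visual_slices])

# Example: rank frames and keep the top-8 in chronological order.
# scores = frame_attention_scores(outputs.attentions, visual_slices)
# top8 = torch.topk(scores, k=8).indices.sort().values
```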
## Method
Illustration of DyToK. We adaptively compress video tokens through two synergistic components:
- Temporal Importance Estimation leverages cross-modal attention from a lightweight assistant model to identify keyframes;
- Dynamic Frame-Level Compression proportionally allocates per-frame token budgets to preserve salient content (a minimal sketch of this allocation is given below).
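To make the second component concrete, here is a hedged sketch of proportional budget allocation. The interface (importance scores in, per-frame budgets out), the flooring, and the per-frame cap are assumptions for illustration; DyToK's actual allocation rule may differ in its normalization and rounding.

```python
# A minimal sketch of proportional frame-level budget allocation
# (assumed interface; not the repository's actual implementation).
import torch

def allocate_budgets(frame_scores: torch.Tensor, total_budget: int,
                     tokens_per_frame: int) -> torch.Tensor:
    """Split a total visual-token budget across frames in proportion to
    their importance scores, capping each frame at its available tokens."""
    weights = frame_scores / frame_scores.sum().clamp(min=1e-6)
    budgets = torch.floor(weights * total_budget).long().clamp(max=tokens_per_frame)
    # Hand leftover tokens (lost to flooring/capping) to the highest-scoring
    # frames that still have room.
    remaining = total_budget - int(budgets.sum())
    for idx in torch.argsort(frame_scores, descending=True):
        if remaining <= 0:
            break
        extra = min(tokens_per_frame - int(budgets[idx]), remaining)
        budgets[idx] += extra
        remaining -= extra
    return budgets

# Example: 8 frames, keep 256 visual tokens in total.
# budgets = allocate_budgets(torch.rand(8), total_budget=256, tokens_per_frame=196)
```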
## News
- [2025.12.06] Released code for integrating DyToK with encoder feature-based pruning methods.
- [2025.09.18] Our paper has been accepted at NeurIPS 2025.
## TODO
- Initialize project.
- Release code for integrating DyToK with LLM attention-based pruning methods.
- Add support for Qwen3-VL.
## Installation
DyToK's code is extremely concise and works out of the box. Just install and go!
Install the latest stable version directly from PyPI:
```bash
pip install dytok
```

Alternatively, clone the repository and install in editable mode:

```bash
git clone https://github.com/yu-lin-li/DyToK.git
cd DyToK
pip install -e .
```

## Quick Start
Integrating DyToK takes just two lines of code:

```python
from dytok import visionzip
visionzip(model, dytok=True, use_tiny=True, tiny_model=tiny_model)
```
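If you prefer to wire things up yourself rather than run the demo, the sketch below shows where those two lines might sit in a full setup. It assumes LLaVA-NeXT's `load_pretrained_model` helper and publicly released LLaVA-OneVision checkpoints; the checkpoint names and the choice of the 0.5B model as the tiny assistant are illustrative, not prescribed by DyToK.

```python
# A sketch of a full setup (not the official demo): load a base LLaVA-OneVision
# model and a smaller assistant model, then patch the base model with DyToK.
# Checkpoint names and the assistant choice are assumptions for illustration.
from llava.model.builder import load_pretrained_model
from dytok import visionzip

tokenizer, model, image_processor, _ = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-7b-ov", None, "llava_qwen"
)
_, tiny_model, _, _ = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov", None, "llava_qwen"
)

# Enable DyToK-enhanced VisionZip; the tiny model supplies the keyframe prior.
visionzip(model, dytok=True, use_tiny=True, tiny_model=tiny_model)
```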
Try it out with our demo script using LLaVA-OneVision:

```bash
python playground/llavaov_infer.py
```

## Reproducing Results
All experiments in the paper are based on LMMs-Eval. Follow these steps to reproduce our results.

```bash
# Create virtual environment
conda create -n dytok python=3.10
conda activate dytok
# Install base models (e.g., LLaVA-OneVision)
pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
# Install DyToK
git clone https://github.com/yu-lin-li/DyToK.git
cd DyToK
pip install -e .
# Install LMMs-Eval streamlined for DyToK
cd eval
pip install -e .
pip install flash-attn==2.6.3  # optional
```

💡 Note: Our `eval/` directory contains a minimal, DyToK-focused version of LMMs-Eval. For full functionality, install the official LMMs-Eval separately and integrate DyToK as described in Development.
Reproduce DyToK-enhanced VisionZip results on LLaVA-OneVision:
```bash
bash eval/scripts/dytok_visionzip_tiny_32_ov.sh
```

## Development
Project structure:

```
.
├── assets/
├── dytok/                  # Core DyToK logic
│   └── visionzip/          # DyToK-enhanced VisionZip
├── eval/
│   ├── lmms_eval/          # Evaluation toolkit
│   │   └── models/         # DyToK-integrated models
│   └── scripts/            # Evaluation scripts
├── playground/             # Demo inference scripts
│   └── llavaov_infer.py
├── pyproject.toml
└── README.md
```

DyToK is designed as a plug-and-play module. To integrate it into your token compression method:
- Look for code blocks explicitly annotated to isolate DyToK-specific logic from the base method, as shown below:
```python
# ! ———— DyToK Begin ————
...
# ! ———— DyToK End ————
```

- Migrate the enclosed logic into your method.
✅ Pro Tip: Use the Better Comments extension in VSCode to highlight DyToK annotations in red!
To add DyToK support to your local LMMs-Eval:
```bash
cp eval/lmms_eval/models/*.py <YOUR_LMMS_EVAL_PATH>/models/
```

Then register the model in `<YOUR_LMMS_EVAL_PATH>/models/__init__.py`:

```python
# Add the DyToK model entry to AVAILABLE_MODELS
AVAILABLE_MODELS = {
    # existing models ...
    "llava_onevision_dytok": "Llava_OneVision_DyToK",
}
```

## Acknowledgements
Our work builds upon the codebases of VisionZip, DyCoke, FastV, LLaVA-NeXT, Qwen2.5-VL, and LMMs-Eval. We sincerely thank the authors for their remarkable contributions.
## Citation
If you find DyToK useful in your research, please cite our paper:

```bibtex
@article{li2025less,
  title={Less Is More, but Where? Dynamic Token Compression via LLM-Guided Keyframe Prior},
  author={Li, Yulin and Gui, Haokun and Fan, Ziyang and Wang, Junjie and Kang, Bin and Chen, Bin and Tian, Zhuotao},
  journal={arXiv preprint arXiv:2025},
  year={2025}
}
```