
arxiv:2510.14979

From Pixels to Words -- Towards Native Vision-Language Primitives at Scale

Published on Oct 16 · Submitted by Haiwen Diao on Oct 17

Abstract

AI-generated summary

NEO, a novel family of native Vision-Language Models, addresses fundamental constraints and integrates vision and language within a unified framework, achieving competitive performance with limited data.

The edifice of native Vision-Language Models (VLMs) has emerged as a rising contender to typical modular VLMs, shaped by evolving model architectures and training paradigms. Yet, two lingering clouds cast shadows over its widespread exploration and promotion: (1) What fundamental constraints set native VLMs apart from modular ones, and to what extent can these barriers be overcome? (2) How can research in native VLMs be made more accessible and democratized, thereby accelerating progress in the field? In this paper, we clarify these challenges and outline guiding principles for constructing native VLMs. Specifically, one native VLM primitive should: (i) effectively align pixel and word representations within a shared semantic space; (ii) seamlessly integrate the strengths of formerly separate vision and language modules; (iii) inherently embody various cross-modal properties that support unified vision-language encoding, aligning, and reasoning. Hence, we launch NEO, a novel family of native VLMs built from first principles, capable of rivaling top-tier modular counterparts across diverse real-world scenarios. With only 390M image-text examples, NEO efficiently develops visual perception from scratch while mitigating vision-language conflicts inside a dense and monolithic model crafted from our elaborate primitives. We position NEO as a cornerstone for scalable and powerful native VLMs, paired with a rich set of reusable components that foster a cost-effective and extensible ecosystem. Our code and models are publicly available at: https://github.com/EvolvingLMMs-Lab/NEO.
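
To make the idea of a native vision-language primitive more concrete, here is a minimal sketch, in PyTorch, of how a unified primitive of this kind could look: image patches and text tokens are projected into a shared embedding space and processed by a single dense Transformer stack. This is an illustrative assumption based on the description above, not NEO's released architecture; all module names, hyperparameters, and the masking choice are placeholders.

```python
# Illustrative sketch only: a "native" vision-language primitive in the spirit
# described above, NOT NEO's actual implementation. All names and sizes are assumed.
import torch
import torch.nn as nn

class NativeVLPrimitive(nn.Module):
    def __init__(self, vocab_size=32000, dim=1024, patch=16, depth=12, heads=16):
        super().__init__()
        # (i) Shared semantic space: both modalities are embedded to the same width.
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # (ii)/(iii) One dense, monolithic stack encodes, aligns, and reasons over
        # both modalities; a real model would use causal masking for generation.
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)

    def forward(self, images, input_ids):
        vis = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N_patch, dim)
        txt = self.token_embed(input_ids)                          # (B, N_text, dim)
        x = self.backbone(torch.cat([vis, txt], dim=1))            # pixels and words together
        return self.lm_head(x[:, vis.size(1):])                    # predict text tokens
```

The contrast with a modular VLM is that the patch embedding here is not a separately pre-trained vision encoder adapted into a frozen LLM; both modalities share one backbone trained end to end.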

Community

Paper submitter

🌟NEO: Native Vision-Language Primitives🌟 constructs native VLMs from first principles and shows an alternative multimodal pathway: End-to-end training, unified native primitives, and intrinsically multimodal design.

🔥 Unified Native Architecture 🔥: Introduces native VLM primitives that perform pixel–word encoding, alignment, and reasoning within a single dense model across different scales.
🔥 Extreme Training Efficiency 🔥: With only 390M image-text examples, NEO develops strong visual perception from scratch, achieving performance on par with top modular VLMs such as Qwen2.5-VL across multiple benchmarks.
🔥 Building a Native Ecosystem 🔥: Provides a rich set of reusable components that lower development costs, support research on high-performance native large models, and accelerate the native VLM ecosystem.

🔗 Paper link: https://arxiv.org/abs/2510.14979
🔗 Code link: https://github.com/EvolvingLMMs-Lab/NEO
🔗 Model link: https://huggingface.co/collections/Paranioar/neo1-0-68f0db9cbac952be3eca7089
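
For readers who want to try the released checkpoints, a minimal loading sketch is below. The repository id is a placeholder, and the assumption that the checkpoints load through the standard transformers remote-code path is unverified; the GitHub repository and the model cards in the collection above are the authoritative reference.

```python
# Hypothetical usage sketch: the repo id below is a placeholder, and whether NEO
# checkpoints load via the standard transformers remote-code path is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Paranioar/NEO-placeholder"  # see the collection link above for real names
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
print(model.config)  # inspect the architecture reported by the checkpoint
```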

Models citing this paper 6

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 3