A comprehensive framework for integration of single-cell omics data with probabilistic contrastive learning

multiVIB: A Unified Probabilistic Contrastive Learning Framework for Atlas-Scale Integration of Single-Cell Multi-Omics Data

multiVIB is a unified framework for integrating single-cell multi-omics datasets across different scenarios. Its model backbone consists of three parts: (1) a modality-specific linear translator, (2) a shared encoder, and (3) a shared projector.
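The multiVIB source defines the actual layers; purely to illustrate how such a three-part backbone composes, here is a minimal NumPy sketch. All dimensions, weight shapes, and variable names are hypothetical and not taken from the package:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine map: x @ w + b."""
    return x @ w + b

# Hypothetical dimensions: 2000 genes (RNA), 5000 peaks (ATAC),
# a 256-d shared latent space, and a 64-d projection space.
d_rna, d_atac, d_latent, d_proj = 2000, 5000, 256, 64

# (1) Modality-specific linear translators map each modality
#     into a common input space.
w_rna  = rng.normal(0, 0.01, (d_rna,  d_latent)); b_rna  = np.zeros(d_latent)
w_atac = rng.normal(0, 0.01, (d_atac, d_latent)); b_atac = np.zeros(d_latent)

# (2) Shared encoder (a single ReLU layer here, for illustration only).
w_enc = rng.normal(0, 0.01, (d_latent, d_latent)); b_enc = np.zeros(d_latent)

# (3) Shared projector maps encodings into the space where a
#     contrastive objective would be computed.
w_prj = rng.normal(0, 0.01, (d_latent, d_proj)); b_prj = np.zeros(d_proj)

def embed(x, w_tr, b_tr):
    h = linear(x, w_tr, b_tr)                    # modality-specific translator
    h = np.maximum(linear(h, w_enc, b_enc), 0.0) # shared encoder
    return linear(h, w_prj, b_prj)               # shared projector

# Toy batch of 8 cells per modality.
rna_cells  = rng.poisson(1.0, (8, d_rna)).astype(float)
atac_cells = rng.poisson(0.5, (8, d_atac)).astype(float)

z_rna  = embed(rna_cells,  w_rna,  b_rna)
z_atac = embed(atac_cells, w_atac, b_atac)
```

Because only the translator is modality-specific, both modalities land in the same projection space, where alignment objectives can compare them directly.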


Introduction

Comprehensive brain cell atlases are essential for understanding neural functions and enabling translational insights. As single-cell technologies proliferate across experimental platforms, species, and modalities, these atlases must scale accordingly, calling for integration frameworks capable of aligning heterogeneous datasets without erasing biologically meaningful variations.

Existing tools typically focus on narrow integration scenarios, forcing researchers to assemble ad hoc workflows that often introduce artifacts. multiVIB addresses this limitation by providing a unified probabilistic contrastive learning framework that supports diverse single-cell integration tasks.


With the model backbone fixed, multiVIB adapts to different integration scenarios by altering only the training strategy, not the architecture. For horizontal integration, in which no jointly profiled cells are available, datasets are anchored through shared features: multiVIB aligns cells by enforcing consistency across shared genomic signals while ensuring that technical covariates do not drive the alignment. For vertical integration, jointly profiled multi-omics data provides individual cells with multiple modality views; these cells serve as direct biological anchors, allowing multiVIB to learn cross-modality correspondence without relying on engineered feature mappings. Finally, mosaic integration combines horizontal and vertical steps, tailored to the pattern of modality overlap.
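multiVIB's actual objective is probabilistic and is defined in the package itself; purely for intuition about how jointly profiled cells can act as anchors, here is a minimal NumPy sketch of a symmetric InfoNCE-style contrastive loss of the kind commonly used to align paired views. The function names and the temperature value are illustrative, not multiVIB's API:

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along an axis (keeps dims)."""
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss: row i of z_a and row i of z_b embed the
    same cell (a positive pair); all other rows in the batch act as
    negatives."""
    # L2-normalise so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (n, n) similarity matrix
    # Cross-entropy with the diagonal as the target, in both directions.
    log_prob_ab = logits - logsumexp(logits, axis=1)
    log_prob_ba = logits.T - logsumexp(logits.T, axis=1)
    return -(np.diag(log_prob_ab).mean() + np.diag(log_prob_ba).mean()) / 2

# Toy check: matched pairs (positives on the diagonal) score a much
# lower loss than deliberately mismatched pairs.
rng = np.random.default_rng(1)
z = rng.normal(size=(16, 8))
loss_matched    = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
loss_mismatched = info_nce(z, np.roll(z, 1, axis=0))
```

Minimising such a loss pulls the two modality views of each anchored cell together in the projection space while pushing apart views of different cells, which is the basic mechanism behind contrastive cross-modality alignment.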

Navigating this Repository

The multiVIB repository is organized as follows:

```
<repo_root>/
├─ multiVIB/              # multiVIB python package
└─ doc/                   # Package documentation
    └─ tutorial/
        └─ notebooks/     # Example jupyter notebooks
```

Installation

We suggest creating a new conda environment to run multiVIB:

```shell
conda create -n multiVIB python=3.10
conda activate multiVIB

git clone https://github.com/broadinstitute/multiVIB.git
cd multiVIB

pip install .
```

Tutorial

We provide end-to-end Jupyter notebooks demonstrating how to use multiVIB across common integration tasks; see `doc/tutorial/notebooks/`.

Preprint and Citation

If you use multiVIB in your research, please cite our preprint:

Yang Xu, Stephen Jordan Fleming, Brice Wang, Erin G Schoenbeck, Mehrtash Babadi, Bing-Xing Huo. multiVIB: A Unified Probabilistic Contrastive Learning Framework for Atlas-Scale Integration of Single-Cell Multi-Omics Data. bioRxiv, 2025.
