CELTIC

CELTIC (CEll in silico Labeling using Tabular Input Context) is a context-dependent model for in silico labeling of organelle fluorescence from label-free microscopy images. By incorporating biological cell contexts, CELTIC enhances the prediction of out-of-distribution data such as cells undergoing mitosis. The explicit inclusion of context has the potential to harmonize multiple datasets, paving the way for generalized in silico labeling foundation models.

This repository contains the code, models, and data preprocessing tools for the CELTIC pipeline, as described in our paper: Elmalam, N. & Zaritsky, A., "Cell-context dependent in silico organelle localization in label-free microscopy images", bioRxiv (2024), https://doi.org/10.1101/2024.11.10.622841 (see Citation below).

Overview

This repository provides the complete implementation for training, inference, and context vector generation. The structure is designed to help users easily reproduce the workflow and understand how biological context enhances organelle prediction. Below are the steps for running each part of the pipeline, along with links to the relevant notebooks.

Data

The datasets used for training and evaluation are available through the BioImage Archive at https://doi.org/10.6019/S-BIAD2156. The data cover six organelles, each with 3D single-cell images of hiPSC-derived cells comprising brightfield imaging, the EGFP-labeled organelle, segmentation masks, and metadata (cell cycle, edge flag, neighbors, and shape information).

The datasets can be downloaded via FTP at: ftp://ftp.ebi.ac.uk/pub/databases/biostudies/S-BIAD/156/S-BIAD2156/Files
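
The data can also be fetched programmatically. Below is a minimal Python sketch using the standard-library ftplib module to list and download files from the FTP location above; it is only a sketch, and the commented RETR path is a hypothetical example — take the actual folder and file names from the listing.

```python
from ftplib import FTP
from pathlib import Path

# Anonymous FTP access to the BioImage Archive; the path comes from the URL above.
HOST = "ftp.ebi.ac.uk"
REMOTE_DIR = "/pub/databases/biostudies/S-BIAD/156/S-BIAD2156/Files"

ftp = FTP(HOST)
ftp.login()          # anonymous login
ftp.cwd(REMOTE_DIR)
print(ftp.nlst())    # list the organelle folders

# Hypothetical example of downloading one file; replace the path with one from the listing.
# out = Path("metadata.csv")
# with out.open("wb") as fh:
#     ftp.retrbinary("RETR <organelle_name>/metadata/metadata.csv", fh.write)

ftp.quit()
```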

Each organelle has its own folder, structured as follows:

organelle_name/
├── cell_images/
│   ├── <FOVId_CellId>_signal.tiff
│   ├── <FOVId_CellId>_target.tiff
│   └── <FOVId_CellId>_mask.tiff
│
└── metadata/
    ├── metadata.csv
    ├── context.csv
    ├── cell_cube_coordinates_in_fov.csv
    └── neighbours.csv

cell_images: Contains 2,052–2,993 3D single-cell images per organelle, cropped from 180 Fields of View (FOVs) originating from the Allen Institute WTC-11 dataset. Each cell is represented by three aligned 3D images:

  • <FOVId_CellId>_signal.tiff - Brightfield
  • <FOVId_CellId>_target.tiff - EGFP-tagged organelle
  • <FOVId_CellId>_mask.tiff - Segmentation mask

metadata.csv: FOV and cell IDs, paths to cell images, columns from the WTC-11 dataset (e.g., cell index in FOV mask, cell_stage).

context.csv: Precomputed CELTIC context for each cell (same row order as metadata.csv).

cell_cube_coordinates_in_fov.csv: Computed cell shape descriptors.

neighbours.csv: Computed neighborhood features.
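
As a quick orientation to the layout above, here is a minimal sketch of loading one cell and its context row. It assumes the tifffile and pandas packages are available and uses a hypothetical <FOVId_CellId>; adapt the paths to your download location.

```python
from pathlib import Path
import pandas as pd
import tifffile

root = Path("organelle_name")   # folder of one organelle, as in the layout above
cell_id = "123_4"               # hypothetical <FOVId_CellId>

signal = tifffile.imread(root / "cell_images" / f"{cell_id}_signal.tiff")  # brightfield (3D)
target = tifffile.imread(root / "cell_images" / f"{cell_id}_target.tiff")  # EGFP-tagged organelle (3D)
mask = tifffile.imread(root / "cell_images" / f"{cell_id}_mask.tiff")      # segmentation mask (3D)

metadata = pd.read_csv(root / "metadata" / "metadata.csv")
context = pd.read_csv(root / "metadata" / "context.csv")  # same row order as metadata.csv

print(signal.shape, target.shape, mask.shape)
print(context.shape)  # one row of precomputed context per cell, aligned with metadata.csv
```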

Installation and Setup

  1. Create a conda environment:
    conda create -n celtic_env python=3.9
    conda activate celtic_env
  2. Clone the repository:
    git clone https://github.com/zaritskylab/CELTIC
    cd CELTIC
  3. Install the required dependencies:
    pip install .
    • If you need to register the environment as a JupyterLab kernel, run:
    pip install notebook jupyterlab
    python -m ipykernel install --user --name <your env> --display-name "<your env>"

How-To Notebooks

We have created example notebooks located in the examples folder. Each notebook supports running a minimal demo as well as processing the full dataset.

  • Training the CELTIC Model:

    This notebook demonstrates how to train the CELTIC model using single-cell images and context data (a schematic sketch of how a context vector can condition the network appears after this list).

    Open In Colab Open In Jupyter

  • Prediction with the CELTIC Model:

    This notebook shows how to run predictions using the trained single-cell model, both with and without context, allowing you to compare the results and see how context improves prediction accuracy.

    Open In Colab Open In Jupyter

  • Context Creation:

    This notebook provides a detailed walkthrough of how to create the cell context features used in the CELTIC model. Note that the BioImage Archive dataset (S-BIAD2156) already includes precomputed context features for all single-cell images. This notebook is useful if you want to start from scratch — for example, to take a field of view (FOV) from the Allen Institute WTC-11 dataset, crop individual cells, and generate the corresponding context features yourself.

    Open In Colab Open In Jupyter
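
For readers who want a feel for how tabular context can enter an image-to-image model before opening the notebooks, the snippet below is a schematic, self-contained PyTorch sketch of FiLM-style conditioning of a 3D network on a context vector. It is not the CELTIC architecture or API — the layer sizes, names, and conditioning scheme are illustrative assumptions; the training and prediction notebooks contain the actual implementation.

```python
import torch
import torch.nn as nn

class ContextConditionedNet(nn.Module):
    """Toy image-to-image network whose features are modulated by a tabular context vector."""

    def __init__(self, context_dim: int, channels: int = 16):
        super().__init__()
        self.encode = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        # Map the context vector to per-channel scale and shift (FiLM-style conditioning).
        self.film = nn.Linear(context_dim, 2 * channels)
        self.decode = nn.Conv3d(channels, 1, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.encode(image))                  # (B, C, Z, Y, X)
        scale, shift = self.film(context).chunk(2, dim=1)   # each (B, C)
        h = h * (1 + scale[..., None, None, None]) + shift[..., None, None, None]
        return self.decode(h)                               # predicted fluorescence volume

# Toy usage: one brightfield volume plus a 10-dimensional context vector.
net = ContextConditionedNet(context_dim=10)
prediction = net(torch.randn(1, 1, 16, 64, 64), torch.randn(1, 10))
print(prediction.shape)  # torch.Size([1, 1, 16, 64, 64])
```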

Contacts

Author: Nitsan Elmalam

Corresponding Author: Assaf Zaritsky

Citation

If you use this implementation in your research, please cite:

Elmalam, N. & Zaritsky, A.
Cell-context dependent in silico organelle localization in label-free microscopy images
bioRxiv (2024). https://doi.org/10.1101/2024.11.10.622841

@article {Elmalam2024.11.10.622841,
	author = {Elmalam, Nitsan and Zaritsky, Assaf},
	title = {Cell-context dependent in silico organelle localization in label-free microscopy images},
	elocation-id = {2024.11.10.622841},
	year = {2024},
	doi = {10.1101/2024.11.10.622841},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2024/11/10/2024.11.10.622841},
	eprint = {https://www.biorxiv.org/content/early/2024/11/10/2024.11.10.622841.full.pdf},
	journal = {bioRxiv}
} 

License

This repository is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).