Official Implementation of CAVE: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

CAVE: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

Nhi Pham¹, Artur Jesslen², Bernt Schiele¹, Adam Kortylewski*¹,², Jonas Fischer*¹

*Equal senior advisorship

¹ Max Planck Institute for Informatics, Saarland Informatics Campus, Germany

² University of Freiburg, Germany

arXiv | Project Website

📣 News

  • [25-09-10] Code will be available soon; stay tuned!
  • [25-09-04] 👀 Release of arXiv paper and project website.


📓 Abstract

With the rise of neural networks, especially in high-stakes applications, these networks need two properties to ensure their safety: (i) robustness and (ii) interpretability. Recent advances in classifiers with 3D volumetric object representations have demonstrated greatly enhanced robustness on out-of-distribution data. However, these 3D-aware classifiers have not been studied from the perspective of interpretability. We introduce CAVE (Concept Aware Volumes for Explanations), a new direction that unifies interpretability and robustness in image classification. We design an inherently interpretable and robust classifier by extending existing 3D-aware classifiers with concepts extracted from their volumetric representations for classification. Across an array of quantitative interpretability metrics, we compare against concept-based approaches from the explainable-AI literature and show that CAVE discovers well-grounded concepts that are used consistently across images, while achieving superior robustness.
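Since the official code is not yet released, the following is only a rough illustrative sketch of the idea described above: classifying an image via concept activations derived from volumetric features. All names (`volume_features`, `concept_prototypes`, `class_weights`) and shapes are hypothetical and do not reflect the CAVE API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-object volumetric features: V vertices x D feature dims
volume_features = rng.normal(size=(64, 16))

# Hypothetical concept prototypes: K concepts x D feature dims
concept_prototypes = rng.normal(size=(8, 16))

# Concept activation = best match of any vertex against each prototype
sims = volume_features @ concept_prototypes.T   # (V, K) similarity scores
concept_activations = sims.max(axis=0)          # (K,) one activation per concept

# A linear head over concept activations (weights hypothetical, 3 toy classes)
class_weights = rng.normal(size=(8, 3))
logits = concept_activations @ class_weights
pred = int(np.argmax(logits))
print(pred)
```

The interpretability angle is that each logit decomposes into per-concept contributions, so a prediction can be explained in terms of which concepts fired and where.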

🛠️ Installation

To get started, create a virtual environment using Python 3.12+:

```shell
python3.12 -m venv cave
source cave/bin/activate
pip install -r requirements.txt
```
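To sanity-check that the activated environment picks up a suitable interpreter, you can print its version. This check is a generic sketch, not part of the repository:

```shell
# Inside the activated `cave` venv, this should report 3.12 or newer
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```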

💾 Datasets & Checkpoints

Datasets

Checkpoints

📣 Usage

CAVE : A 3D-Aware Inherently Interpretable Classifier

LRP with Conservation for CAVE

Evaluation

📘 Citation

If you use this code in your project, please consider citing our work as follows:

```bibtex
@inproceedings{pham25cave,
    title     = {{CAVE}: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes},
    author    = {Pham, Nhi and Jesslen, Artur and Schiele, Bernt and Kortylewski, Adam and Fischer, Jonas},
    booktitle = {arXiv},
    year      = {2025},
}
```

Acknowledgements

Nhi Pham was funded by the International Max Planck Research School on Trustworthy Computing (IMPRS-TRUST) program. We thank Christopher Wewer for his support with the NOVUM codebase, insightful discussions on 3D consistency evaluation, and careful proofreading of our paper.
