- Stanford University
- SF Bay Area, CA
- https://akshayparuchuri.com/
- https://orcid.org/0000-0003-4664-3186
- @yahskapar
- in/akshayparuchuri
Stars
Latest Advances on Agentic AI & AI Agents for Healthcare
Code base for Universal Sparse Autoencoders (USAEs)
[NeurIPS 2025] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
HypotheSAEs: hypothesizing interpretable relationships in text datasets using sparse autoencoders. https://arxiv.org/abs/2502.04382
Earth system foundation model data, training, and eval
Delphi was the home of a temple to Phoebus Apollo, which famously bore the inscription "Know Thyself." This library lets language models know themselves through automated interpretability.
Official code for the NeurIPS 2025 paper "egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-World Tasks."
Code and data for UniEgoMotion (ICCV 2025)
Open-source framework for the research and development of foundation models.
A framework that allows you to apply Sparse AutoEncoder on any models
[ICML 2025 Poster] SAE-V: Interpreting Multimodal Models for Enhanced Alignment
Training Sparse Autoencoders on Language Models
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
[ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis.
Code for paper: Reinforced Vision Perception with Tools
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o-level performance.
Graph Neural Network Library for PyTorch
This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.org/abs/2404.12390 [ECCV 2024]
Code that accompanies the public release of the paper Lost in Conversation (https://arxiv.org/abs/2505.06120)
Explain Before You Answer: A Survey on Compositional Visual Reasoning
Reference PyTorch implementation and models for DINOv3
Official repository for Beyond Binary Rewards: Training LMs to Reason about Their Uncertainty