Repositories list
39 repositories
- BiMediX2 (Public): Bio-Medical EXpert LMM with English and Arabic Language Capabilities
- Awesome Reasoning LLM Tutorial/Survey/Guide
- ViMUL (Public)
- FannOrFlop (Public)
- Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models
- groundingLMM (Public): [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
- LLaVA-pp (Public): 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
- VideoGPT-plus (Public): Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding
- Video-ChatGPT (Public): [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
- VideoMolmo (Public)
- VideoMathQA (Public)
- TerraFM (Public)
- GeoPixel (Public): GeoPixel: A Pixel Grounding Large Multimodal Model for Remote Sensing is specifically developed for high-resolution remote sensing image analysis, offering advanced multi-target pixel grounding capabilities.
- [CVPR 2025 🔥] ALM-Bench is a multilingual multi-modal diverse cultural benchmark for 100 languages across 19 categories. It assesses the next generation of LMMs on cultural inclusivity.
- CoVR-VidLLM-CVPRW25 (Public)
- ARB (Public)
- KITAB-Bench (Public): [ACL 2025 🔥] A Comprehensive Multi-Domain Benchmark for Arabic OCR and Document Understanding
- NestEO (Public)
- LlamaV-o1 (Public)
- LLMVoX (Public): LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM
- MobiLlama (Public): [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices
- UniMed-CLIP (Public)
- [NAACL 2025 🔥] CAMEL-Bench is an Arabic benchmark for evaluating multimodal models across eight domains with 29,000 questions.
- VideoGLaMM (Public): [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos