Stars
Implementations, Pre-training Code and Datasets of Large Time-Series Models
MOMENT: A Family of Open Time-series Foundation Models, ICML'24
[NeurIPS 2025] MM-Agent: LLMs as Agents for Real-world Mathematical Modeling Problems
A library of advanced deep time series models for general time series analysis.
Official PyTorch implementation of the CIKM '25 paper: BALM-TSF
PyTorch implementation of "ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data" (AAAI 2025 [oral])
Code for our paper "VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters".
[VLDB '25] ChatTS: Understanding, Chat, Reasoning about Time Series with TS-MLLM
verl: Volcano Engine Reinforcement Learning for LLMs
An open-source implementation for fine-tuning the Qwen-VL series by Alibaba Cloud.
This repository provides the official implementation of ITFormer, a novel framework for temporal-textual multimodal question answering (QA), as presented in our ICML 2025 paper.
Official code for paper "TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning"
Source code for the AAAI 2025 paper "TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents."
Implementation of the paper "In-context Time Series Predictor" (ICLR 2025)
Time-R1 is a two-stage reinforcement fine-tuning framework that trains large language models to perform slow-thinking, step-by-step reasoning for accurate and explainable time series forecasting.
[NeurIPS'24] Identifying Spatio-Temporal Drivers of Extreme Events
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
This repository documents code for "Validating Deep-Learning Weather Forecast Models on Recent High-Impact Extreme Events" (https://doi.org/10.1175/AIES-D-24-0033.1)
Source code for ClimateLearn