Liangsi Lu1β , Jingchao Wang2β , Zhaorong Dai3, Hanqian Liu4, Yang Shi1β
1 Guangdong University of Technology 2 Peking University 3 South China Agricultural University 4 Sun Yat-Sen University
RLSTG is a Riemannian-geometry-based liquid spatio-temporal graph neural network for prediction on spatio-temporal graph data, supporting traffic flow forecasting and link prediction.
```bash
# Create conda environment
conda env create -f environment.yaml

# Activate environment
conda activate rlstg
```

- Python 3.12
- PyTorch 2.5.1 (with CUDA 12.4 support)
- PyTorch Geometric
- torch-spatiotemporal
- Ray (for hyperparameter search)
- See `environment.yaml` for the remaining dependencies
- METR-LA: Los Angeles area traffic sensor data
- PEMS03: California traffic data (PEMS system, region 3)
- PEMS04: California traffic data (PEMS system, region 4)
- Enron: Enron email network dataset
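The traffic datasets are time series over sensor graphs, and the defaults split them chronologically into 70% train, 15% validation, and 15% test (see `--vl_size` / `--ts_size` below). A minimal sketch of such a chronological split; the function name `temporal_split` and the contiguous-window strategy are assumptions for illustration, not the repo's actual preprocessing code:

```python
def temporal_split(num_steps, vl_size=0.15, ts_size=0.15):
    """Split time-step indices chronologically into train/val/test.

    Hypothetical helper mirroring the default --vl_size / --ts_size
    ratios; the repo's real preprocessing may differ.
    """
    n_test = int(num_steps * ts_size)
    n_val = int(num_steps * vl_size)
    n_train = num_steps - n_val - n_test
    train = list(range(0, n_train))
    val = list(range(n_train, n_train + n_val))
    test = list(range(n_train + n_val, num_steps))
    return train, val, test
```

For 100 time steps this yields 70/15/15 contiguous windows, keeping the test period strictly after the training period.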
```bash
# Run RLSTG model on METR-LA dataset (default configuration)
python main.py --dataset metrla --model RLSTG

# Run on PEMS03 dataset
python main.py --dataset pems03 --model RLSTG --epochs 300

# Link prediction on Enron dataset
python main.py --dataset enron --model RLSTG --epochs 200
```

```bash
python main.py --dataset metrla --mode search
```

Automatically searches for the optimal hyperparameter configuration.

```bash
python main.py --dataset metrla --mode eval
```

Evaluates using the best configuration found.

```bash
python main.py --dataset metrla --mode all
```

First performs the hyperparameter search, then evaluates using the best configuration.
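The relationship between the three modes can be summarized in a small dispatch sketch; the function name `run` and the callback structure are hypothetical, not taken from `main.py`:

```python
def run(mode, search_fn, eval_fn):
    """Dispatch the three run modes: 'search' tunes hyperparameters,
    'eval' evaluates the best saved configuration, 'all' chains both.

    Hypothetical sketch of the mode semantics described above.
    """
    if mode == "search":
        return search_fn()
    if mode == "eval":
        return eval_fn()
    if mode == "all":
        search_fn()       # tune first...
        return eval_fn()  # ...then evaluate with the best configuration
    raise ValueError(f"unknown mode: {mode!r}")
```

Under this reading, `--mode all` is simply `search` followed by `eval` in one invocation.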
- `--dataset`: dataset to use; options: `metrla`, `pems03`, `pems04`, `enron`
- `--vl_size`: validation-set ratio, default 0.15 (15%)
- `--ts_size`: test-set ratio, default 0.15 (15%)
- `--batch_size`: batch size, default 200
- `--min_distance`: minimum interval between selected time steps in the traffic datasets, default 3
- `--remove_ratio`: ratio of edges to remove from the graph, default 0.0
- `--epochs`: number of training epochs, default 500
- `--patience`: early-stopping patience, default 10
- `--metric`: evaluation metric to optimize, default `MAE`
- `--seed`: random seed, default 42
- `--load_weight_only`: load only the model weights (not the training history)
- `--ntrials`: number of trials for the best configuration, default 3
- `--max_ray`: maximum number of parallel Ray search processes, default 8
- `--savedir`: results save directory, default `./results`
- `--debug`: debug mode
- `--num_nodes`: number of nodes for visualization, default 3
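The flags above can be reconstructed as an `argparse` parser. This is a hypothetical sketch mirroring the documented defaults, not the actual parser in `main.py`:

```python
import argparse


def build_parser():
    # Hypothetical reconstruction of the CLI described above; defaults
    # follow the README, but main.py's real parser may differ.
    p = argparse.ArgumentParser(description="RLSTG runner (sketch)")
    p.add_argument("--dataset", choices=["metrla", "pems03", "pems04", "enron"])
    p.add_argument("--model", default="RLSTG")
    p.add_argument("--mode", choices=["search", "eval", "all"], default="all")
    p.add_argument("--vl_size", type=float, default=0.15)
    p.add_argument("--ts_size", type=float, default=0.15)
    p.add_argument("--batch_size", type=int, default=200)
    p.add_argument("--min_distance", type=int, default=3)
    p.add_argument("--remove_ratio", type=float, default=0.0)
    p.add_argument("--epochs", type=int, default=500)
    p.add_argument("--patience", type=int, default=10)
    p.add_argument("--metric", default="MAE")
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--ntrials", type=int, default=3)
    p.add_argument("--max_ray", type=int, default=8)
    p.add_argument("--savedir", default="./results")
    p.add_argument("--debug", action="store_true")
    p.add_argument("--num_nodes", type=int, default=3)
    return p
```

For example, `build_parser().parse_args(["--dataset", "metrla"])` yields a namespace carrying all the defaults listed above.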
Model configurations are defined in `config.py`; the main parameters include:
- `hyperbolic_dim`: hyperbolic-space dimension
- `spherical_dim`: spherical-space dimension
- `euclidean_dim`: Euclidean-space dimension
- `hyperboloid_curvature`: hyperboloid curvature, default 1e-6
- `sphere_curvature`: sphere curvature, default 1.0
- `backbone_layers`: number of backbone network layers, default 1
- `backbone_activation`: activation function, default `tanh`
- `dropout`: dropout ratio, default 0
- `ode_unfold`: number of ODE unfolding steps, default 4
- `ode_solver`: ODE solver, default `gd`
- `epsilon`: numerical-stability parameter, default 1
- `lr`: learning rate, search range [1e-6, 1e-2]
- `weight_decay`: weight decay, default 1e-6
- `optimizer`: optimizer type, default `Adam`
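Since the model embeds nodes in a product of hyperbolic, spherical, and Euclidean spaces, the total embedding width is the sum of the three component dimensions. A hedged sketch assembling such a configuration; the helper name `make_geometry_conf` and the dict layout are assumptions, not the actual structure of `config.py`:

```python
def make_geometry_conf(hyperbolic_dim=2, spherical_dim=0, euclidean_dim=14,
                       hyperboloid_curvature=1e-6, sphere_curvature=1.0):
    """Assemble a product-space geometry configuration.

    Hypothetical helper illustrating the parameters listed above.
    """
    conf = {
        "hyperbolic_dim": hyperbolic_dim,
        "spherical_dim": spherical_dim,
        "euclidean_dim": euclidean_dim,
        "hyperboloid_curvature": hyperboloid_curvature,
        "sphere_curvature": sphere_curvature,
    }
    # Total embedding width = sum of the three component dimensions.
    conf["total_dim"] = hyperbolic_dim + spherical_dim + euclidean_dim
    return conf
```

For instance, the combination `(h, s, e) = (2, 0, 14)` from the search space below gives a 16-dimensional embedding, matching the all-Euclidean `(0, 0, 16)` alternative in width.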
To modify the model configuration, edit the `get_conf_rlstg` function in `config.py`:
```python
def get_conf_rlstg(input_dim, output_dim, time_dim, edge_dim, data_name):
    for lr in [1e-3, 1e-2, 1e-4]:  # Modify learning-rate search space
        for wd in [1e-6, 1e-5]:  # Modify weight-decay search space
            for h, s, e in [(0, 0, 16), (2, 0, 14)]:  # Modify geometric-space dimension combinations
                ...  # other configurations
```

```bash
python main.py \
  --dataset metrla \
  --model RLSTG \
  --epochs 500 \
  --batch_size 200 \
  --patience 10 \
  --mode all \
  --seed 42
```

```bash
python main.py \
  --dataset pems03 \
  --epochs 100 \
  --patience 5 \
  --debug \
  --mode search
```

```bash
python main.py \
  --dataset enron \
  --model RLSTG \
  --epochs 200 \
  --metric MAE \
  --mode all
```

- Experimental results are saved in the `./results/` directory.
- Each dataset has its own subdirectory.
- Outputs include model weights, training history, hyperparameter configurations, etc.
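The per-dataset results layout described above could be produced along these lines; the function name `save_run` and the file names are hypothetical, not the repo's actual saving code:

```python
import json
from pathlib import Path


def save_run(savedir, dataset, best_conf, history):
    """Write per-dataset results: one subdirectory per dataset holding
    the best hyperparameter configuration and the training history.

    Hypothetical helper illustrating the results layout above.
    """
    out = Path(savedir) / dataset
    out.mkdir(parents=True, exist_ok=True)
    (out / "best_conf.json").write_text(json.dumps(best_conf, indent=2))
    (out / "history.json").write_text(json.dumps(history, indent=2))
    return out
```

With the default `--savedir`, a METR-LA run would then land its artifacts under `./results/metrla/`.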
During execution, the program displays:

- Dataset loading information (number of nodes, edges, feature dimensions, etc.)
- Training progress and performance metrics
- The best hyperparameter configuration
- Final evaluation results
- GPU Support: The code automatically detects CUDA availability; GPU acceleration is recommended for training
- Memory Requirements: Large datasets may require substantial memory; adjust `batch_size` as needed
- Parallel Search: Hyperparameter search uses Ray for parallelization; adjust the degree of parallelism with `--max_ray`
- Data Preprocessing: Time steps in the traffic data are sampled according to the `min_distance` parameter
- Randomness: Use the `--seed` parameter to ensure reproducible experiments
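Reproducibility via `--seed` typically means seeding every random-number generator in play. A dependency-free sketch of what such seeding looks like; the name `seed_everything` is an assumption, and a full version would also seed NumPy and PyTorch (omitted here to keep the example runnable without those packages):

```python
import os
import random


def seed_everything(seed=42):
    """Seed Python's RNG and hash seed for reproducibility.

    Sketch of what --seed likely controls; the repo would additionally
    call numpy.random.seed(seed) and torch.manual_seed(seed).
    """
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
```

Calling it twice with the same seed makes subsequent random draws identical, which is the property the `--seed` flag relies on.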
- CUDA out of memory: Reduce `batch_size` or use CPU mode
- Dependency version conflicts: Make sure you are using the provided `environment.yaml` file
- Data loading failure: Check that the datasets are correctly downloaded and extracted

Run `python main.py --help` to view all available parameters and their descriptions.
If you find our work helpful, please star this repo and cite our paper. Thanks for your support!
```bibtex
@misc{lu2026rlstg,
      title={Riemannian Liquid Spatio-Temporal Graph Network},
      author={Liangsi Lu and Jingchao Wang and Zhaorong Dai and Hanqian Liu and Yang Shi},
      year={2026},
      eprint={2601.14115},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.14115},
}
```