
HARP-NeXt: High-Speed and Accurate Range-Point Fusion Network for 3D LiDAR Semantic Segmentation

Samir Abou Haidar¹·², Alexandre Chariot¹, Mehdi Darouich¹, Cyril Joly², Jean-Emmanuel Deschaud²
¹Paris-Saclay University, CEA List   ²Mines Paris PSL University, Centre for Robotics (CAOR)

About

This is the official repo of HARP-NeXt, a High-Speed and Accurate Range-Point Fusion Network for 3D LiDAR Semantic Segmentation. We first propose a novel pre-processing methodology that significantly reduces computational overhead. Then, we design the Conv-SE-NeXt feature extraction block to efficiently capture representations without deep layer stacking per network stage. We also employ a multi-scale range-point fusion backbone that leverages information at multiple abstraction levels to preserve essential geometric details, thereby enhancing accuracy.
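For intuition, the core of such pre-processing is a spherical (range-view) projection of the point cloud. Below is a minimal, illustrative NumPy sketch, not the repo's implementation: the range_projection function is ours, the 32x480 resolution merely echoes the nuScenes checkpoint name, and the vertical field of view is an assumed value.

import numpy as np

def range_projection(points, H=32, W=480, fov_up_deg=10.0, fov_down_deg=-30.0):
    # points: (N, 4) array of x, y, z, intensity
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])              # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / np.clip(depth, 1e-6, None))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # normalize angles to [0, 1], then scale to pixel coordinates
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * W), 0, W - 1).astype(int)
    v = np.clip(np.floor((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H), 0, H - 1).astype(int)
    # write far-to-near so the closest point wins each pixel
    order = np.argsort(depth)[::-1]
    image = np.zeros((5, H, W), dtype=np.float32)             # depth, x, y, z, intensity
    image[0, v[order], u[order]] = depth[order]
    image[1:4, v[order], u[order]] = points[order, :3].T
    image[4, v[order], u[order]] = points[order, 3]
    return image, (v, u)   # (v, u) maps each point back to its pixel for range-point fusion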


Figure: Fast and accurate 3D scene semantic segmentation on nuScenes.

Updates

  • [2025.10] - We provide trained network weights for the nuScenes and SemanticKITTI benchmarks. The checkpoints are available here.
  • [2025.10] - Our paper is available on arXiv, and the code is publicly available.
  • [2025.06] - Our paper was accepted by IROS 2025.

Results

HARP-NeXt achieves high accuracy in real time, running at 100 FPS (roughly 10 ms per scan) on the RTX 4090 GPU, while delivering near real-time performance on the Jetson AGX Orin embedded system, breaking the usual speed-accuracy trade-off.

Figures: mIoU vs. runtime on the NVIDIA RTX 4090 GPU, and mIoU vs. runtime on the NVIDIA Jetson AGX Orin.

Performance Comparison with State-of-the-Art Methods

| Method | Params | nuScenes mIoU (Val) | nuScenes FPS (RTX 4090) | nuScenes FPS (AGX Orin) | SemanticKITTI mIoU (Val) | SemanticKITTI FPS (RTX 4090) | SemanticKITTI FPS (AGX Orin) |
|---|---|---|---|---|---|---|---|
| WaffleIron | 6.8 M | 76.1 | 9.0 | 1.9 | 65.8 | 2.7 | 0.5 |
| PTv3 | 15.3 M | 78.4 | 4.1 | 1.1 | - | - | - |
| SalsaNext | 6.7 M | 68.2 | 76.9 | 19.6 | 55.9 | 27.7 | 9.1 |
| CENet | 6.8 M | 73.3 | 62.5 | 10.3 | 62.6 | 17.2 | 6.0 |
| FRNet | 10.0 M | 75.1 | 12.2 | 2.6 | 66.0 | 11.6 | 2.5 |
| MinkowskiNet | 21.7 M | 73.5 | 21.2 | 6.8 | 64.3 | 14.1 | 4.7 |
| Cylinder3D | 55.9 M | 76.1 | 5.6 | - | 63.2 | 3.2 | - |
| SPVCNN | 21.8 M | 72.6 | 17.5 | 5.9 | 65.3 | 11.7 | 3.9 |
| HARP-NeXt | 5.4 M | 77.1 | 100 | 14.1 | 65.1 | 76.9 | 8.3 |

Installation

A. Desktop / Workstation (conda)

We recommend using conda with the conda-forge channel.

1. Create and activate the environment

We recommend Python 3.10–3.12 (tested with 3.12):

conda create -n harpnext -c conda-forge python=3.12
conda activate harpnext

2. Install PyTorch

We use PyTorch v2.5.0 with CUDA 11.8:

pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu118

3. Install additional essentials

conda install -c conda-forge numpy tqdm tensorboard
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.5.0+cu118.html
pip install pyyaml seaborn opencv-python
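A quick sanity check that the CUDA build of PyTorch and torch-scatter installed correctly (version strings will vary with your setup):

python -c "import torch, torch_scatter; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"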

B. Jetson Platforms (venv)

We recommend using a venv virtual environment and installing PyTorch (for JetPack) following NVIDIA's documentation. The following installation was tested on an NVIDIA Jetson AGX Orin Developer Kit running JetPack 6.0 (L4T R36.2 / R36.3).

1. Create and activate the environment

We recommend creating a Python 3 venv:

python3 -m venv harpnext
source harpnext/bin/activate

2. Install PyTorch

We use PyTorch v2.3.0 with CUDA 12.4:

wget https://nvidia.box.com/shared/static/zvultzsmd4iuheykxy17s4l2n91ylpl8.whl -O torch-2.3.0-cp310-cp310-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
pip3 install 'Cython<3'
pip3 install numpy torch-2.3.0-cp310-cp310-linux_aarch64.whl

3. Install additional essentials

pip3 install tqdm tensorboard
pip3 install torch-scatter
pip3 install pyyaml seaborn opencv-python
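As on the desktop, a quick check that the JetPack wheel sees the GPU and that torch-scatter built correctly:

python3 -c "import torch, torch_scatter; print(torch.__version__, torch.cuda.is_available())"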

Data Preparation

nuScenes:

You can download the nuScenes dataset from the official website and arrange it in the following structure:

nuscenes
├── lidarseg
│   ├── v1.0-mini
│   ├── v1.0-test
│   ├── v1.0-trainval
├── panoptic
├── samples
├── v1.0-mini
├── v1.0-test
├── v1.0-trainval
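For reference, the raw nuScenes files decode as follows; the file names below are placeholders, not real tokens:

import numpy as np

# LiDAR sweeps are float32 binaries with 5 values per point:
# x, y, z, intensity, ring index
points = np.fromfile("samples/LIDAR_TOP/<token>.pcd.bin", dtype=np.float32).reshape(-1, 5)
# lidarseg labels are uint8 binaries, one class id per point
labels = np.fromfile("lidarseg/v1.0-trainval/<token>_lidarseg.bin", dtype=np.uint8)
assert points.shape[0] == labels.shape[0]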

SemanticKITTI:

You can download the SemanticKITTI dataset from the official website and arrange it in the following structure:

semantickitti
├── sequences
│   ├── 00
│   │   ├── labels
│   │   ├── velodyne
│   ├── 01
│   ├── ..
│   ├── 21
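Likewise for SemanticKITTI, where each .label entry packs a semantic class in the lower 16 bits and an instance id in the upper 16:

import numpy as np

# scans are float32 binaries with 4 values per point: x, y, z, remission
points = np.fromfile("sequences/00/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)
raw = np.fromfile("sequences/00/labels/000000.label", dtype=np.uint32)
semantic = raw & 0xFFFF    # semantic class id
instance = raw >> 16       # instance id within the class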

Pretrained Models

You can download the pretrained weights on nuScenes and SemanticKITTI from here and save them in the ./pretrained/ directory.

Testing Pretrained Models

You can evaluate HARP-NeXt's pretrained model on nuScenes as follows:

python main.py \
--net harpnext \
--dataset nuscenes \
--path_dataset /path/to/nuscenes \
--mainconfig ./configs/main/main-config.yaml \
--netconfig ./configs/net/harpnext-nuscenes.yaml \
--log_path ./pretrained/harpnext-nuscenes-32x480 \
--gpu 0 \
--seed 0 \
--fp16 \
--restart \
--eval

You can evaluate HARP-NeXt's pretrained model on SemanticKITTI as follows:

python main.py \
--net harpnext \
--dataset semantic_kitti \
--path_dataset /path/to/SemanticKITTI \
--mainconfig ./configs/main/main-config.yaml \
--netconfig ./configs/net/harpnext-semantickitti.yaml \
--log_path ./pretrained/harpnext-cutmix-semantickitti-64x512 \
--gpu 0 \
--fp16 \
--restart \
--eval

You can choose whether the pre-processing runs on the CPU or the GPU by setting the corresponding option in the netconfig files (harpnext-nuscenes.yaml and harpnext-semantickitti.yaml).
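Purely as a hypothetical illustration of such a toggle (the key name preprocessing_device is invented here; check the shipped config for the actual name):

# hypothetical fragment of harpnext-nuscenes.yaml
preprocessing_device: gpu   # or: cpu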

Training

nuScenes

You can retrain the harpnext-nuscenes-32x480 model on nuScenes as follows:

python main.py \
--net harpnext \
--dataset nuscenes \
--path_dataset /path/to/nuscenes \
--mainconfig ./configs/main/main-config.yaml \
--netconfig ./configs/net/harpnext-nuscenes.yaml \
--log_path ./logs/harpnext-nuscenes-32x480-retrain \
--gpu 0 \
--seed 0 \
--fp16

SemanticKITTI

You can retrain the harpnext-cutmix-semantickitti-64x512 model on SemanticKITTI as follows:

python main.py \
--net harpnext \
--dataset semantic_kitti \
--path_dataset /path/to/SemanticKITTI \
--mainconfig ./configs/main/main-config.yaml \
--netconfig ./configs/net/harpnext-semantickitti.yaml \
--log_path ./logs/harpnext-cutmix-semantickitti-64x512-retrain \
--gpu 0 \
--fp16

You can enable or disable the Instance CutMix and PolarMix augmentations (used only when training on SemanticKITTI) by setting the instance_cutmix flag in the main-config.yaml file (True to enable, False to disable). The extracted instances are saved in /tmp/semantic_kitti_instances/. Always set the flag to False when evaluating models.
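For example, a fragment of main-config.yaml with the flag set for training; only the instance_cutmix name comes from this README, the comment layout is illustrative:

# in main-config.yaml
instance_cutmix: True    # enable Instance CutMix / PolarMix; set to False for evaluation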

License

This work is released under the Apache 2.0 license.

Citation

Please consider citing our paper if you find this work helpful:

@article{abouhaidar2025harpnext,
  title={HARP-NeXt: High-Speed and Accurate Range-Point Fusion Network for 3D LiDAR Semantic Segmentation},
  author={Abou Haidar, Samir and Chariot, Alexandre and Darouich, Mehdi and Joly, Cyril and Deschaud, Jean-Emmanuel},
  journal={arXiv preprint arXiv:2510.06876},
  year={2025}
}

Acknowledgements

We acknowledge and thank the following public resources used in this work:
