
VPRTempo - A Temporally Encoded Spiking Neural Network for Visual Place Recognition


This repository contains code for VPRTempo, a spiking neural network that uses temporal encoding to perform visual place recognition tasks. The network is based on BLiTNet and adapted to the VPRSNN framework.

VPRTempo method diagram

VPRTempo is built on the torch.nn framework and employs custom learning rules based on the temporal codes of spikes to train layer weights.
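The idea of a temporal code can be illustrated with a minimal intensity-to-latency sketch, in which stronger inputs spike earlier within a fixed time window. This is an illustration of the general concept only, not VPRTempo's actual encoder:

```python
# Illustrative intensity-to-latency temporal coding (not VPRTempo's
# implementation): brighter pixels fire earlier within a time window.

def encode_latency(intensities, t_max=100.0):
    """Map normalized intensities in [0, 1] to spike times in [0, t_max].

    Intensity 1.0 spikes at t = 0; intensity 0.0 spikes at t = t_max.
    """
    return [t_max * (1.0 - i) for i in intensities]

spikes = encode_latency([1.0, 0.5, 0.0])
print(spikes)  # [0.0, 50.0, 100.0]
```

Information is then carried by *when* a neuron spikes rather than by how often, which is what the network's learning rules operate on.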

In this repository, we provide two networks:

  • VPRTempo: Our base network architecture to perform visual place recognition (fp32)
  • VPRTempoQuant: A modified base network with Quantization Aware Training (QAT) enabled (int8)
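As a rough illustration of what the int8 representation involves, the sketch below shows a generic symmetric affine quantization scheme. Note this is a post-hoc example for intuition only; VPRTempoQuant uses PyTorch Quantization Aware Training, which learns quantization parameters during training:

```python
# Generic symmetric int8 quantization sketch (for intuition only; not the
# QAT procedure VPRTempoQuant uses).

def quantize_int8(values):
    """Map floats to int8 codes in [-128, 127] with a symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

codes, scale = quantize_int8([-1.0, 0.0, 0.5, 1.0])
print(codes)  # [-127, 0, 64, 127]
```

QAT simulates this rounding during training so the network adapts its weights to the reduced precision, which generally preserves accuracy better than quantizing after training.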

To use VPRTempo, please follow the instructions below for installation and usage.

⭐ Update v1.1.11: What's new?

  • Fixed minor bug in the DataLoader that was causing VPRTempo to hang 🐛

Quick start

For simplicity and reproducibility, VPRTempo uses pixi to install and manage dependencies. If you do not already have pixi installed, run the following in your command terminal:

curl -fsSL https://pixi.sh/install.sh | bash

For more information, please refer to the pixi documentation.

Get the repository

Get the latest VPRTempo code and navigate to the project directory by running the following in your command terminal:

git clone https://github.com/QVPR/VPRTempo.git
cd VPRTempo

Run the demo

To quickly evaluate VPRTempo, we provide a network pre-trained on 500 places from the Nordland dataset. Start the demo by running the following in your command terminal:

pixi run demo

Note: this will start a download of the models and datasets (~600MB); please ensure you have enough disk space before proceeding.

Train and evaluate a new model

Training and evaluating a new model is quick and easy; simply run the following in your command terminal to re-train and evaluate the demo model:

pixi run train
pixi run eval

Note: You will be asked to confirm whether you want to retrain the pre-existing network.

Use the quantized models

For training and evaluation of the 8-bit quantized model, run the following in your command terminal:

pixi run train_quant
pixi run eval_quant

Alternative dependency install

Dependencies for VPRTempo can alternatively be installed in a conda environment. We recommend micromamba; run the following in your command terminal:

micromamba create -n vprtempo -c conda-forge vprtempo
micromamba activate vprtempo

Note: Whilst we do have a PyPI package, we do not recommend using pip to install dependencies for VPRTempo.

Datasets

VPRTempo was designed to make training and testing on a variety of datasets simple. Please see the information below on recreating our results for the Nordland and Oxford RobotCar datasets and on setting up custom datasets.

Nordland

VPRTempo was developed and tested using the Nordland dataset. To download the full dataset, please visit this repository. Once downloaded, place dataset folders into the VPRTempo directory as follows:

|__./vprtempo
    |___dataset
        |__summer
        |__spring
        |__fall
        |__winter

To replicate the results in our paper, run the following in your command terminal:

pixi run nordland_train
pixi run nordland_eval

Alternatively, specify the data directory using the following argument:

pixi run nordland_train --data_dir <YOUR_DIRECTORY>
pixi run nordland_eval --data_dir <YOUR_DIRECTORY>

Oxford RobotCar

To train and test on Oxford RobotCar, you will need to register an account to gain access to the dataset download, and process the images before proceeding. For more information, please refer to the documentation.

Once fully processed, to replicate the results in our paper run the following in your command terminal:

pixi run orc_train
pixi run orc_eval

Custom Datasets

To define your own custom dataset to use with VPRTempo, simply follow the same dataset structure defined above for Nordland. A .csv file of the image names will be required for the dataloader.

We have included a convenient script, ./vprtempo/src/create_data_csv.py, which will generate the necessary file. Simply modify the dataset_name variable to point to the folder containing your images.
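If you prefer not to use the provided script, the required file can be generated along the following lines. This is a sketch only: the function name is hypothetical and the single-column layout is an assumption, so check ./vprtempo/src/create_data_csv.py for the exact format the dataloader expects:

```python
# Hypothetical sketch of building an image-name CSV for a custom dataset.
# The single-column layout is an assumption; see
# ./vprtempo/src/create_data_csv.py for the format the dataloader expects.
import csv
import os

def write_image_csv(image_dir, out_csv):
    """Write a sorted list of image filenames from image_dir to out_csv."""
    names = sorted(
        f for f in os.listdir(image_dir)
        if f.lower().endswith((".png", ".jpg", ".jpeg"))
    )
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        for name in names:
            writer.writerow([name])
```

Sorting the filenames keeps the row order deterministic, which matters when place indices are derived from row position.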

To train and evaluate a new model with a custom dataset, run the following:

pixi run train --dataset <your custom database name> --database_dirs <your custom database name>
pixi run eval --database_dirs <your custom database name> --dataset <your custom query name> --query_dir <your custom query name>

License & Citation

This repository is licensed under the MIT License. If you use our code, please cite our IEEE ICRA paper:

@inproceedings{hines2024vprtempo,
      title={VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition}, 
      author={Adam D. Hines and Peter G. Stratton and Michael Milford and Tobias Fischer},
      year={2024},
      pages={10200-10207},
      booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)}     
}

Tutorials

We provide a series of Jupyter Notebook tutorials that go through the basic operations and logic for VPRTempo and VPRTempoQuant.

Issues, bugs, and feature requests

If you encounter problems whilst running the code or if you have a suggestion for a feature or improvement, please report it as an issue.
