πŸ›°οΈπŸŒ²Terra_Mask is an end-to-end computer vision project for semantic segmentation in land cover classification. It uses deep learning with PyTorch to classify features in high-resolution satellite imagery β€” such as buildings, woodland, water, and roads.


🛣 Land-Cover-Semantic-Segmentation-PyTorch:
An end-to-end Image Segmentation (CV) project



📚 Table of Contents

  • 📌 Overview
  • 💫 Demo
  • 🚀 Getting Started
  • 🛠 Configuration
  • 📝 Citing
  • 🛡️ License
  • 👏 Acknowledgements


📌 Overview

An end-to-end computer vision project focused on image segmentation, specifically semantic segmentation. Although this project was primarily built with the LandCover.ai dataset, the project template can be applied to train a model on any semantic segmentation dataset and to extract inference outputs from the model in a promptable fashion. This is nowhere near actual promptable AI; the term is used here because of one specific piece of functionality integrated into the project.

The model can be trained on any or all of the classes present in the semantic segmentation dataset, and the model architecture, optimizer, learning rate, and many other parameters can be customized directly from the config file, giving the project an AutoML-like aspect. At test time, the user passes a prompt (the config variable 'test_classes') listing the classes that should appear in the masks predicted by the trained model.

For example, suppose the model has been trained on all 30 classes of the Cityscapes dataset, but for a specific use case the user only wants the class 'parking' in the predicted mask. The user can then set test_classes = ['parking'] in the config file and get the desired output.
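The class-filtering idea is easy to sketch. The snippet below is a minimal illustration under stated assumptions, not the repository's actual code: it assumes the model emits an integer class-index mask, with index 0 as 'background', and remaps every class not listed in test_classes to background.

import numpy as np

all_classes = ["background", "building", "woodland", "water", "road"]
test_classes = ["background", "building", "water"]

def filter_classes(pred_mask: np.ndarray) -> np.ndarray:
    """Remap class indices absent from test_classes to background (0)."""
    keep = [all_classes.index(c) for c in test_classes]
    return np.where(np.isin(pred_mask, keep), pred_mask, 0)

# Example: 'woodland' (2) and 'road' (4) pixels collapse to background
pred = np.array([[0, 1, 4],
                 [2, 3, 3]])
print(filter_classes(pred))  # [[0 1 0] [0 3 3]]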


💫 Demo

1. Training the model on LandCover.ai dataset with 'train_classes': ['background', 'building', 'woodland', 'water']...

2. Testing the trained model for all the classes used to train the model, i.e. 'test_classes': ['background', 'building', 'woodland', 'water']...

3. Testing the trained model for selective classes as per user input, i.e. 'test_classes': ['background', 'building', 'water']...


🚀 Getting Started

✅ Prerequisites

  • Dataset prerequisite for training:

Before starting to train a model, make sure to download the dataset from LandCover.ai or from kaggle/LandCover.ai, and copy/move the downloaded 'images' and 'masks' directories into the 'train' directory of the project.
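Based on the directory keys in config/config.yaml (data_dir, train_dir, image_dir, mask_dir), the expected layout is most likely the following; treat the exact nesting as an assumption and adjust it to whatever paths the scripts resolve:

data/
└── train/
    ├── images/   # *.tif satellite image tiles
    └── masks/    # *.tif ground-truth masks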

🧰 System Requirements

  • Python: 3.9 (Docker image uses python:3.9)
  • PyTorch: 2.0.1, TorchVision: 0.15.2 (see requirements.txt)
  • CUDA (optional, recommended): NVIDIA GPU with a supported CUDA toolkit/driver for PyTorch 2.0.1. CPU is supported but significantly slower.
  • OS: Linux/Windows/macOS (Docker recommended for reproducibility)

To install a CUDA-enabled PyTorch that matches your NVIDIA driver, follow the official selector and install command from the PyTorch site. Example (Linux, CUDA 11.8):

pip install --index-url https://download.pytorch.org/whl/cu118 torch==2.0.1 torchvision==0.15.2

If you do not have a compatible GPU/driver, install the CPU wheels instead:

pip install --index-url https://download.pytorch.org/whl/cpu torch==2.0.1 torchvision==0.15.2

Note: This repository pins torch==2.0.1 and torchvision==0.15.2 in requirements.txt.
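After installing, a quick way to confirm which build you ended up with:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"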

⚡ GPU Acceleration (CUDA)

  • The runtime device is controlled via the config at config/config.yaml with the key vars.device. The default is:
vars:
  device: "cuda" # set to "cpu" to force CPU
  • The scripts build their torch.device from this config. If CUDA is available and device is "cuda", training/inference will run on the GPU; otherwise, set device to "cpu".

  • Verify CUDA availability on your machine before running training/testing:

python testcuda.py

Expected output (example):

PyTorch version: 2.0.1
CUDA available: True
Device count: 1
Current device: 0
GPU name: NVIDIA GeForce RTX ...

If CUDA available: False, install the correct CUDA-enabled PyTorch wheel and ensure NVIDIA drivers are installed and compatible.
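The repository's testcuda.py is not reproduced here, but a minimal sketch that would print the output shown above looks like this:

import torch

# Report the installed PyTorch build and any visible CUDA devices
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"Device count: {torch.cuda.device_count()}")
    print(f"Current device: {torch.cuda.current_device()}")
    print(f"GPU name: {torch.cuda.get_device_name(0)}")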

🐳 Setting up and Running the project with Docker

First and foremost, make sure that Docker is installed and working properly on the system.

💡 Check the Dockerfile added in the repository. Following the instructions provided in the file, comment and uncomment the indicated lines to set up the Docker image and container for either training or testing (one mode at a time).

  1. Clone the repository:
git clone https://github.com/XaXtric7/Terra_Mask.git
  2. Change to the project directory (cloning creates a directory named Terra_Mask):
cd Terra_Mask
  3. Build the image from the Dockerfile:
docker build -t segment_project_image .
  4. Run the image in a detached container:
docker run --name segment_container -d segment_project_image
  5. Copy the output files from the container to the local project directory once execution is complete (see the tip after this list):
docker cp segment_container:/segment_project/models ./models
docker cp segment_container:/segment_project/logs ./logs
docker cp segment_container:/segment_project/output ./output
  6. Tidy up:
docker stop segment_container
docker rm segment_container
docker rmi segment_project_image
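To know when execution has finished before copying files out, you can follow the container's logs with the standard Docker CLI:

docker logs -f segment_container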

If Docker is not installed on the system, follow the steps below to set up and run the project without Docker.

💻 Setup (Without 🐳 Docker)

  1. Clone the repository:
git clone https://github.com/XaXtric7/Terra_Mask.git
  2. Change to the project directory:
cd Terra_Mask
  3. Set up the programming environment to run the project:
  • If using the conda package manager (Anaconda or Miniconda), create and activate a conda environment:
conda create --name <environment-name> python=3.9
conda activate <environment-name>
  • If using a plain Python installation, create and activate a virtual environment (the activation command below is for Windows; on Linux/macOS use source <environment-name>/bin/activate):
python -m venv <environment-name>
<environment-name>\Scripts\activate
  4. Install the dependencies:
pip install -r requirements.txt
  5. (Optional) Select the CPU or CUDA device in config/config.yaml:
vars:
  device: "cuda" # change to "cpu" if no GPU

🤖 Running the project (Without 🐳 Docker)

Run the model training and testing/inferencing scripts from the project directory. Training first is not mandatory: a simple pre-trained model is provided, so you can run the test and check the outputs before trying to fine-tune the model.

  1. Run the model training script:
cd src
python train.py
  2. Run the model test (with images and masks) script:
cd src
python test.py
  3. Run the model inference (with images only, masks not required) script:
cd src
python inference.py
  4. Verify CUDA/GPU availability (optional but recommended):
python testcuda.py

If CUDA is working, keep vars.device: "cuda". Otherwise, update to "cpu" in config/config.yaml.


🛠 Configuration

All key hyperparameters and IO paths are controlled via config/config.yaml. Highlights:

dirs:
  data_dir: data
  train_dir: train
  test_dir: test
  image_dir: images
  mask_dir: masks
  model_dir: models
  output_dir: output
  pred_mask_dir: predicted_masks
  pred_plot_dir: prediction_plots
  log_dir: logs
vars:
  file_type: ".tif"
  patch_size: 256
  batch_size: 4
  model_arch: "Unet" # see: https://smp.readthedocs.io/en/latest/models.html
  encoder: "efficientnet-b0" # see: https://smp.readthedocs.io/en/latest/encoders_timm.html
  encoder_weights: "imagenet"
  activation: "softmax2d" # sigmoid for binary, softmax2d for multi-class
  optimizer_choice: "Adam"
  init_lr: 0.0003
  epochs: 20
  device: "cuda" # set to "cpu" if no GPU
  all_classes: ["background", "building", "woodland", "water", "road"]
  train_classes: ["background", "building", "woodland", "water"]
  test_classes: ["background", "building", "water"]
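To make these values concrete, here is a minimal, hypothetical sketch of how they could drive model construction with segmentation_models.pytorch; the repository's actual loading code may differ:

import yaml
import torch
import segmentation_models_pytorch as smp

with open("config/config.yaml") as f:
    cfg = yaml.safe_load(f)["vars"]

# Resolve the architecture (e.g. smp.Unet) and optimizer (e.g. torch.optim.Adam) by name
model_cls = getattr(smp, cfg["model_arch"])
model = model_cls(
    encoder_name=cfg["encoder"],
    encoder_weights=cfg["encoder_weights"],
    classes=len(cfg["train_classes"]),
    activation=cfg["activation"],
)
device = torch.device(cfg["device"] if torch.cuda.is_available() else "cpu")
model = model.to(device)
optimizer = getattr(torch.optim, cfg["optimizer_choice"])(model.parameters(), lr=cfg["init_lr"])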

πŸ“ Citing

@misc{XaXtric_7:2025,
  author       = {Sarthak Dharmik},
  title        = {Terra Mask},
  year         = {2025},
  howpublished = {\url{https://github.com/XaXtric7/Terra_Mask}},
  note         = {GitHub repository},
  publisher    = {GitHub}
}


πŸ›‘οΈ License

Project is distributed under MIT License


πŸ‘ Acknowledgements

@misc{Iakubovskii:2019,
 Author = {Pavel Iakubovskii},
 Title = {Segmentation Models Pytorch},
 Year = {2019},
 Publisher = {GitHub},
 Journal = {GitHub repository},
 Howpublished = {\url{https://github.com/qubvel/segmentation_models.pytorch}}
}

@misc{Souvik:2023,
  Author = {Souvik Majumder},
  Title = {Land Cover Semantic Segmentation PyTorch},
  Year = {2023},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch}}
}

πŸ” Return

