This is an official implementation of Sports Field Registration via Keypoints-aware Label Condition.
Yen-Jui Chu, Jheng-Wei Su, Kai-Wen Hsiao, Chi-Yu Lien, Shu-Ho Fan, Min-Chun Hu, Ruen-Rone Lee, Chih-Yuan Yao, Hung-Kuo Chu
8th International Workshop on Computer Vision in Sports (CVsports) at CVPR 2022
We propose a novel deep learning framework for sports field registration. The typical algorithmic flow for sports field registration involves extracting field-specific features (e.g., corners, lines, etc.) from the field image and estimating the homography matrix between a 2D field template and the field image using the extracted features. Unlike previous methods that strive to extract sparse field features from field images with uniform appearance, we tackle the problem differently. First, we use a grid of uniformly distributed keypoints as our field-specific features to increase the likelihood of having sufficient field features under various camera poses. Then we formulate the keypoint detection problem as an instance segmentation problem with dynamic filter learning. In our model, the convolution filters are generated dynamically, conditioned on the field image and the associated keypoint identity, thus improving the robustness of the prediction results. To extensively evaluate our method, we introduce a new soccer dataset, called TS-WorldCup, with detailed field markings on 3,812 time-sequence images from 43 videos of the 2014 and 2018 Soccer World Cup. The experimental results demonstrate that our method outperforms state-of-the-art methods on the TS-WorldCup dataset in both quantitative and qualitative evaluations.
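To make the overall flow concrete, the sketch below shows the homography-estimation step once keypoint correspondences between the 2D field template and the image are available. This is an illustrative example built on OpenCV's RANSAC-based estimator, not the model code of this repository; the grid size, field dimensions, and the synthetic detections are placeholder assumptions.

```python
import numpy as np
import cv2

# Hypothetical example: a uniform grid of template keypoints (in placeholder
# field-template coordinates). In the actual pipeline, the image-side points
# would come from the keypoint detection network rather than the synthetic
# projection used here.
grid_x, grid_y = np.meshgrid(np.linspace(0, 100, 10), np.linspace(0, 60, 7))
template_pts = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1).astype(np.float32)

# Placeholder "detections": project the grid with a known homography and add noise.
H_true = np.array([[8.0, 0.5, 100.0],
                   [0.2, 9.0,  50.0],
                   [0.0, 0.001,  1.0]])
proj = cv2.perspectiveTransform(template_pts[None], H_true)[0]
image_pts = proj + np.random.normal(scale=1.0, size=proj.shape).astype(np.float32)

# Estimate the template-to-image homography with RANSAC to reject outlier keypoints.
H_est, inlier_mask = cv2.findHomography(template_pts, image_pts, cv2.RANSAC, 5.0)
print("estimated homography:\n", H_est)
```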
- CUDA 11
- Python >= 3.8
- PyTorch == 1.9.0
- torchvision == 0.9.0
- Numpy
- OpenCV-Python == 4.5.1.48
- Matplotlib
- Pillow/scikit-image
- Shapely == 1.7.1
- tqdm
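Once the conda environment from the installation steps below is set up, a quick version check such as the following (a convenience snippet, not part of this repository) can be used to compare the installed packages against the list above:

```python
# Print the versions of the main dependencies listed above.
import torch
import torchvision
import cv2
import numpy as np
import matplotlib
import PIL
import skimage
import shapely
import tqdm

for name, ver in [
    ("torch", torch.__version__),
    ("torchvision", torchvision.__version__),
    ("opencv-python", cv2.__version__),
    ("numpy", np.__version__),
    ("matplotlib", matplotlib.__version__),
    ("Pillow", PIL.__version__),
    ("scikit-image", skimage.__version__),
    ("shapely", shapely.__version__),
    ("tqdm", tqdm.__version__),
]:
    print(f"{name}: {ver}")
print("CUDA available:", torch.cuda.is_available())
```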
- Clone this repo:
git clone https://github.com/ericsujw/KpSFR
cd KpSFR/
- Install miniconda
- Install all the dependencies
conda env create -f environment.yml
- Switch to the conda environment
conda activate kpsfr
- Download the pretrained weights on the WorldCup dataset.
- The pretrained models should now be placed in checkpoints.
- Download the public WorldCup dataset.
- Download the TS-WorldCup dataset.
- The WorldCup dataset should now be placed in dataset/soccer_worldcup_2014 and the TS-WorldCup dataset in dataset/WorldCup_2014_2018.
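Before running inference or evaluation, a small sanity check such as the one below (a hypothetical helper, not part of this repository) can confirm that the weights and datasets sit at the expected paths:

```python
from pathlib import Path

# Paths taken from the setup steps above; adjust if you placed things elsewhere.
expected = [
    Path("checkpoints"),
    Path("dataset/soccer_worldcup_2014"),
    Path("dataset/WorldCup_2014_2018"),
]
for p in expected:
    status = "ok" if p.exists() else "MISSING"
    print(f"{p}: {status}")
```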
Please use the robust model first to get the preprocessed results before running the inference command below.
python inference.py <path/param_text_file>
param_text_file is described as follows:
- inference.txt: download the pretrained weights on the WorldCup dataset or on the TS-WorldCup dataset first and place them in checkpoints. For inference on your own data:
  - target_video: specify the target video(s), separated by spaces, and set --train_stage to 1, --sfp_finetuned to True, and --ckpt_path to the corresponding finetuned model weights.
  - target_image: specify the target image(s), separated by spaces, and set --train_stage to 0, --sfp_finetuned to False, and --ckpt_path to the corresponding model weights.
Note:
- In the current implementation, inference is only supported on the WorldCup or TS-WorldCup test sets.
- Input the index of the corresponding image when specifying target_image.
- For the input format of target_video, refer to the text file.
- If no target image or video is specified, all testing data are processed and output.
Please use the robust model first to get the preprocessed results before running the evaluation command below.
python eval_testset.py <path/param_text_file>
param_text_file is one of the following:
- exp_ours.txt: download the pretrained weights on the WorldCup dataset first and place them in checkpoints. Set --train_stage to 0 for testing on the WorldCup test set, or --train_stage to 1 for the TS-WorldCup test set, and set --sfp_finetuned to False.
- exp_our_finetuned.txt: download the pretrained weights on the TS-WorldCup dataset first and place them in checkpoints. Set --train_stage to 1 and --sfp_finetuned to True for testing the finetuned results on the TS-WorldCup test set.
- exp_our_dice_bce.txt: download the pretrained weights first and place them in checkpoints. Set --train_stage to 1 for the ablation study of the loss function (binary dice loss with binary cross-entropy loss) on the TS-WorldCup test set.
- exp_our_dice_wce.txt: download the pretrained weights first and place them in checkpoints. Set --train_stage to 1 for the ablation study of the loss function (binary dice loss with weighted cross-entropy loss) on the TS-WorldCup test set.
The heatmap results and the corresponding homography matrices will be saved to checkpoints/<experiment name>, where the experiment name is set by --name in the param_text_file.
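As an example of how a saved homography can be used afterwards, the field template can be warped onto the corresponding frame with OpenCV. This is a sketch under the assumption that the matrix is stored as a 3x3 NumPy array mapping template coordinates to image coordinates; the actual file names and format produced by eval_testset.py may differ, so adjust the paths accordingly.

```python
import numpy as np
import cv2

# Hypothetical file names; substitute the actual outputs saved under
# checkpoints/<experiment name> and your own frame/template images.
H = np.load("checkpoints/my_experiment/homography_0001.npy")  # assumed 3x3, template -> image
frame = cv2.imread("frame_0001.jpg")
template = cv2.imread("field_template.png")  # assumed 3-channel, same dtype as frame

# If the stored matrix maps image -> template instead, invert it first:
# H = np.linalg.inv(H)

# Warp the 2D field template into the camera view and blend it over the frame.
warped = cv2.warpPerspective(template, H, (frame.shape[1], frame.shape[0]))
overlay = cv2.addWeighted(frame, 0.7, warped, 0.3, 0.0)
cv2.imwrite("overlay_0001.jpg", overlay)
```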
python train_nn.py <path/param_text_file>
param_text_file is one of the following:
- opt_ours.txt: download the pretrained weights on the WorldCup dataset first and place them in checkpoints. Set --train_stage to 0, --trainset to train_val, and --loss_mode to all for training on the WorldCup train set. For the ablation study, download the pretrained weights on the TS-WorldCup dataset first, place them in checkpoints, and set --loss_mode to dice_bce or dice_wce.
- opt_our_finetuned.txt: download the pretrained weights on the TS-WorldCup dataset first and place them in checkpoints. Set --train_stage to 1, --trainset to train, and --loss_mode to all for finetuning on the TS-WorldCup train set.
The visualization results and model weights will be saved to checkpoints/<experiment name>, where the experiment name is set by --name in the param_text_file.
Note: Please check that the following arguments are set correctly before every training run.
- --gpu_ids
- --name
- --train_stage and --trainset
- --ckpt_path
- --loss_mode
- --train_epochs and --step_size
For details, refer to options.py.
This work is licensed under the MIT License. See LICENSE for details.
If you find our code/models useful, please consider citing our paper:
@InProceedings{Chu_2022_CVPR,
author = {Chu, Yen-Jui and Su, Jheng-Wei and Hsiao, Kai-Wen and Lien, Chi-Yu and Fan, Shu-Ho and Hu, Min-Chun and Lee, Ruen-Rone and Yao, Chih-Yuan and Chu, Hung-Kuo},
title = {Sports Field Registration via Keypoints-Aware Label Condition},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
year = {2022}
}