Exploring Salient Embeddings for Gait Recognition

  • Research Article
  • Published in Machine Intelligence Research

Abstract

Gait recognition aims to identify individuals by distinguishing unique walking patterns based on video-level pedestrian silhouettes. Previous studies have focused on designing powerful feature extractors to model the spatio-temporal dependencies of gait, thereby obtaining gait features that contain rich semantic information. However, they have overlooked the potential of feature maps for constructing discriminative gait embeddings. In this work, we propose a novel model, EmbedGait, which is designed to learn salient gait embeddings for improved recognition. Specifically, our framework starts with frame-level spatial alignment to maintain inter-sequence consistency. Then, a horizontal salient mapping (HSM) module extracts representative embeddings and discards background information via a dedicated pooling operation. The subsequent adaptive embedding weighting (AEW) module adaptively highlights the salient embeddings of different body parts and channels. Extensive experiments on the Gait3D, GREW and SUSTech1K datasets demonstrate that our approach achieves competitive performance on several benchmarks. For example, the proposed EmbedGait achieves rank-1 accuracies of 77.3%, 79.0% and 79.6% on Gait3D, GREW and SUSTech1K, respectively.
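
To make the pipeline described above concrete, the PyTorch sketch below gives one plausible reading of the HSM and AEW modules. It is reconstructed from the abstract alone: the class names, the max-plus-mean strip pooling, and the sigmoid gating branch are illustrative assumptions, not the authors' implementation.

# Minimal sketch of HSM-style strip pooling and AEW-style embedding weighting.
# Reconstructed from the abstract only; pooling and gating choices are assumptions.
import torch
import torch.nn as nn


class HorizontalSalientMapping(nn.Module):
    """Split feature maps into horizontal strips and keep a salient,
    max-dominated statistic per strip, suppressing empty background bins."""

    def __init__(self, num_parts: int = 16):
        super().__init__()
        self.num_parts = num_parts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, C, H, W] silhouette feature maps; H must be divisible by num_parts.
        n, c, h, w = x.shape
        strips = x.view(n, c, self.num_parts, h // self.num_parts, w)
        # Max pooling emphasizes salient foreground responses; mean pooling
        # stabilizes the embedding (an assumed combination, not the paper's).
        return strips.amax(dim=(3, 4)) + strips.mean(dim=(3, 4))  # [N, C, P]


class AdaptiveEmbeddingWeighting(nn.Module):
    """Re-weight part/channel embeddings with a lightweight gating branch,
    highlighting the more discriminative body parts and channels."""

    def __init__(self, channels: int, num_parts: int, reduction: int = 8):
        super().__init__()
        dim = channels * num_parts
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: [N, C, P] part-level embeddings produced by HSM.
        n, c, p = emb.shape
        weights = self.gate(emb.flatten(1)).view(n, c, p)
        return emb * weights  # salient embeddings are emphasized


if __name__ == "__main__":
    feats = torch.randn(2, 256, 16, 11)            # backbone feature maps
    hsm = HorizontalSalientMapping(num_parts=16)
    aew = AdaptiveEmbeddingWeighting(channels=256, num_parts=16)
    out = aew(hsm(feats))
    print(out.shape)                                # torch.Size([2, 256, 16])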

Acknowledgements

This work was supported by the International Science and Technology Cooperation Project of Guangzhou Economic and Technological Development District (No. 2023GH16).

Author information

Corresponding author

Correspondence to Wenxiong Kang.

Ethics declarations

The authors declare that they have no conflict of interest related to this work.

Additional information

Colored figures are available in the online version at https://link.springer.com/journal/11633

Jiacong Hu received the B.Sc. degree in automation science and engineering from the South China University of Technology, China in 2023. He is currently a master's student in automation science and engineering at the South China University of Technology, China.

His research interests include computer vision, biometric identification and gait recognition.

Kun Liu received the M.Sc. degree from the School of Automation, Guangdong University of Technology, China in 2019. He is currently a Ph.D. candidate at the South China University of Technology, China.

His research interests include gait recognition, object detection and human activity analysis.

Yuheng Peng is a master's student in the School of Information and Electronic Engineering, Zhejiang University of Science and Technology, China.

His research interests include point cloud completion, pedestrian detection and gait recognition.

Ming Zeng received the B.Sc. degree from Huazhong University of Science and Technology, China in 2000, and the Ph.D. degree from the South China University of Technology, China in 2008, where he is currently a senior lecturer. He is also the Secretary General of the Guangdong Internet of Things Association, China.

His research interests include the Internet of things, big data analysis and artificial intelligence.

Wenxiong Kang received the Ph.D. degree in control science and engineering from the South China University of Technology, China in 2009. He is currently a professor with the School of Automation Science and Engineering, South China University of Technology. He is a member of IEEE.

His research interests include biometrics identification, image processing, pattern recognition and computer vision.

About this article

Cite this article

Hu, J., Liu, K., Peng, Y. et al. Exploring Salient Embeddings for Gait Recognition. Mach. Intell. Res. 22, 888–899 (2025). https://doi.org/10.1007/s11633-025-1545-5

