Object Localization and Classification in classroom with InceptionV3 model

Demonstration

First, a demo app gives an overview of this project. Here is a quick demonstration of the application in action:

Application Demo

Project Description

This project focuses on object localization and classification in images using TensorFlow and OpenCV. It leverages the powerful deep learning capabilities of TensorFlow to train models on annotated datasets and OpenCV for image processing tasks. The goal is to accurately identify and classify objects within a variety of image contexts.
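Concretely, localization and classification can share one InceptionV3 backbone with two output heads: a softmax over classes and a regression over box coordinates. The sketch below is illustrative, assuming a 224×224 input, normalized `(x1, y1, x2, y2)` boxes, and simple losses; the repository's actual architecture and hyperparameters may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_model(num_classes, input_shape=(224, 224, 3)):
    # InceptionV3 backbone without its top classifier; weights=None keeps
    # the sketch offline-friendly (use weights="imagenet" in practice).
    backbone = InceptionV3(include_top=False, weights=None,
                           input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(backbone.output)
    # Classification head: one softmax score per object class.
    cls_out = layers.Dense(num_classes, activation="softmax", name="class")(x)
    # Localization head: four sigmoid outputs for a normalized box
    # (x1, y1, x2, y2) in [0, 1].
    box_out = layers.Dense(4, activation="sigmoid", name="box")(x)
    model = Model(inputs=backbone.input, outputs=[cls_out, box_out])
    model.compile(
        optimizer="adam",
        loss={"class": "categorical_crossentropy", "box": "mse"},
    )
    return model
```

Sharing the backbone lets the two tasks reuse the same features, which is a common choice for single-object localization.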

Example images after data processing, with ground-truth labels and bounding boxes

Example Image 1 Example Image 2 Example Image 3

Features

  • Use of TensorFlow for model training and inference.
  • Image processing and manipulation with OpenCV.
  • Evaluation of model accuracy and performance metrics.
  • Visualization of localization results.

Model

Model

Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3.8 or higher
  • pip (Python package installer)

Example

You can run the example notebook at /notebooks/example.ipynb to train the model more easily.

Data

The data used in this project were meticulously collected from Classroom Objects and processed by our team. We gathered images through photography and manually annotated each image with labels and bounding boxes to ensure high-quality training data for our model. This dataset is essential for the precision and effectiveness of the object localization and classification tasks.

If you wish to access the dataset, please visit the following Google Drive link: Access Dataset

Please note that the data is provided for academic and non-commercial use only.
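Schematically, each annotated example pairs an image with a class label and a bounding box; normalizing the box to [0, 1] makes it directly regressable. The record layout below is hypothetical, for illustration only — the dataset's actual on-disk format (CSV, XML, etc.) may differ:

```python
# Hypothetical annotation record; field names are illustrative.
annotation = {
    "file": "images/chair_001.jpg",
    "label": "chair",
    "box": (34, 50, 210, 300),   # (x1, y1, x2, y2) in pixels
}

def normalize_box(box, width, height):
    """Scale pixel box coordinates to [0, 1] relative to the image size."""
    x1, y1, x2, y2 = box
    return (x1 / width, y1 / height, x2 / width, y2 / height)
```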

Training Chart

Example Image 1

Results

After training, the model achieved a mean IoU of 0.7363 and a classification accuracy of 0.9362.

Example Image 1
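For reference, mean IoU is the average over images of the intersection-over-union between each predicted box and its ground-truth box. A minimal plain-Python IoU helper (the function name is illustrative):

```python
def box_iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between.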

Installation

Setting Up a Virtual Environment

It is recommended to use a virtual environment to avoid conflicts with existing Python packages; then install the dependencies from requirements.txt.
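A typical setup, assuming `python3` is on your PATH:

```shell
# Create and activate a virtual environment.
python3 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install the project's dependencies into the isolated environment.
pip install -r requirements.txt
```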

Citation

If you utilize the Inception v3 model in your project, please consider citing the original paper. Here is the citation in APA format:

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818-2826.

For BibTeX users:

@inproceedings{szegedy2016rethinking,
  title={Rethinking the Inception Architecture for Computer Vision},
  author={Szegedy, Christian and Vanhoucke, Vincent and Ioffe, Sergey and Shlens, Jonathon and Wojna, Zbigniew},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={2818--2826},
  year={2016}
}
