A
Mini Project Report
On
“Gender and Age Detection using
OpenCV and Deep Learning”
In partial fulfillment of requirements for the degree of
Bachelor of Technology
In
Computer Engineering
Submitted By
SONAVANE JAYESH POPATRAO(51)
PATKAR AVINASH MURLIDHAR(63)
JAIN PRASANNAJEET DEVENDRA(64)
Under the Guidance of
Prof. Ms. Punam R. Patil
The Shirpur Education Society’s
R. C. Patel Institute of Technology, Shirpur - 425405.
Department of Computer Engineering
[2024-25]
This project aims to detect a person's face, predict their gender (Male or
Female), and estimate their age using pre-trained deep learning models with
OpenCV's DNN module. The system processes images or video frames to
perform real-time detection and classification.
Table of Contents
1. Introduction
2. Prerequisites
3. Project Structure
4. Required Models
5. Dataset Used
6. Code Overview
o Importing Libraries
o Pre-trained Models
o Face Detection Function
o Main Detection Logic
7. Running the Program
8. Output
9. Conclusion
1. Introduction
This project implements gender and age detection using OpenCV's Deep
Neural Network (DNN) module. By leveraging pre-trained deep learning
models, the system can identify faces in real-time (from a video feed or
image) and classify them into gender (Male/Female) and an estimated age
group.
The following steps are involved:
• Detect faces in the input image or video feed.
• For each detected face, predict the gender and estimate the age group.
• Display the results in real-time.
2. Prerequisites
Before proceeding with the project, you need to ensure the following
libraries and dependencies are installed:
1. OpenCV: Version 3.4 or later, built with DNN support (a quick version check is shown after this list)
pip install opencv-python
Note: opencv-python-headless omits GUI functions such as cv2.imshow, so the standard opencv-python package is required to display the results.
2. Python: Python 3.6 or later.
3. Pre-trained Models: Download the pre-trained models for face, age,
and gender detection.
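A quick way to confirm the installation is shown below; this is only a minimal check and is not part of the original project files:

import cv2

# Print the OpenCV version; it should be 3.4 or later for DNN support
print("OpenCV version:", cv2.__version__)

# The DNN module is available if this attribute exists
print("DNN module available:", hasattr(cv2, "dnn"))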
3. Project Structure
The project consists of the following files and folders:
/GenderAgeDetection/
│
├── gender_age_detection.py # Main program file
├── opencv_face_detector.pbtxt # Face detection config file
├── opencv_face_detector_uint8.pb # Pre-trained face detection model
├── age_deploy.prototxt # Age detection model config
├── age_net.caffemodel # Pre-trained age model
├── gender_deploy.prototxt # Gender detection model config
├── gender_net.caffemodel # Pre-trained gender model
└── README.md # Documentation
4. Required Models
The project uses three pre-trained models to detect and classify faces,
genders, and ages:
1. Face Detection Model:
o opencv_face_detector.pbtxt: Face detector configuration.
o opencv_face_detector_uint8.pb: Pre-trained face detection
model.
2. Gender Detection Model:
o gender_deploy.prototxt: Gender detector configuration.
o gender_net.caffemodel: Pre-trained gender classification model.
3. Age Detection Model:
o age_deploy.prototxt: Age detector configuration.
o age_net.caffemodel: Pre-trained age estimation model.
The face detector files can be downloaded from OpenCV's GitHub repository, and the age and gender Caffe models from the Caffe Model Zoo or the original authors' project page.
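Before loading the networks, it can help to verify that all six files are present in the project folder. The following is a small optional check (file names as listed above):

import os

requiredFiles = [
    "opencv_face_detector.pbtxt", "opencv_face_detector_uint8.pb",
    "age_deploy.prototxt", "age_net.caffemodel",
    "gender_deploy.prototxt", "gender_net.caffemodel",
]

# Report any model files that are missing from the working directory
missing = [f for f in requiredFiles if not os.path.isfile(f)]
if missing:
    print("Missing model files:", ", ".join(missing))
else:
    print("All model files found.")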
5. Dataset Used
• Adience Dataset: Used for gender and age detection, this dataset contains unfiltered, real-world face images labeled with age range and gender. (Dataset link: kaggle.com/datasets/alfredhhw/adiencegender/)
• WIDER FACE/FDDB: These datasets are commonly used for training
face detection models.
• IMDB-WIKI Dataset: This dataset can also be used for age estimation
models, as it contains over 500,000 images of celebrities with age
labels.
These datasets were likely used to train the pre-trained models in the
project.
6. Code Overview
Importing Libraries
First, we import the required libraries:
import cv2
import argparse
• cv2: OpenCV library for working with images and videos.
• argparse: To handle command-line arguments.
Pre-trained Models
The models are loaded using OpenCV's DNN module:
faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"
# Load models
faceNet = cv2.dnn.readNet(faceModel, faceProto)
ageNet = cv2.dnn.readNet(ageModel, ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)
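By default the networks run on the CPU through OpenCV's own DNN backend. The preferred backend and target can also be set explicitly; the lines below are optional and only illustrate the API (GPU targets would require an OpenCV build with the corresponding support):

# Explicitly select OpenCV's DNN backend and run inference on the CPU
for net in (faceNet, ageNet, genderNet):
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)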
Face Detection Function
This function uses the face detection model to find faces in the image and
returns the coordinates of bounding boxes for each detected face.
def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight, frameWidth = frame.shape[0], frame.shape[1]
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)

    net.setInput(blob)
    detections = net.forward()
    faceBoxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                          int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes
• blobFromImage(): Preprocesses the image for DNN (resizes,
normalizes).
• net.forward(): Runs the forward pass of the network to detect faces.
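As a quick illustration, the function can be run on a single image file; the file name here is only an example:

# Example: detect faces in one image and show the annotated result
img = cv2.imread("sample.jpg")          # hypothetical test image
annotated, boxes = highlightFace(faceNet, img)
print(f"Detected {len(boxes)} face(s):", boxes)
cv2.imshow("Faces", annotated)
cv2.waitKey(0)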
Main Detection Logic
In the main loop, we:
1. Capture video frames or load an image.
2. Use the highlightFace() function to detect faces.
3. For each detected face, classify the gender and estimate the age.
# args, MODEL_MEAN_VALUES, genderList and ageList are defined earlier in the
# full program (see the complete listing at the end of this report)
video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20

while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        cv2.waitKey()
        break

    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")

    for faceBox in faceBoxes:
        # Crop the face with a small padding, clamped to the frame boundaries
        face = frame[max(0, faceBox[1] - padding):min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):min(faceBox[2] + padding, frame.shape[1] - 1)]

        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)

        # Gender prediction
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]

        # Age prediction
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]

        # Display the results
        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)

    cv2.imshow("Detecting age and gender", resultImg)
7. Running the Program
You can run the program from the command line:
python gender_age_detection.py --image /path/to/image.jpg
Or, to use the webcam:
python gender_age_detection.py
8. Output
When running the program, it detects faces and displays the predicted
gender and estimated age above each detected face. For example:
• Male, 15-20 years
• Female, 38-43 years
The video feed (or image) will be shown in a window labeled "Detecting age
and gender."
9. Conclusion
This project demonstrates how to use pre-trained deep learning models for
gender and age detection with OpenCV's DNN module. It detects faces in real time and classifies each face by gender and age group, making it suitable for applications such as surveillance and demographic analysis. The models are fast enough for real-time use and reasonably accurate at the level of coarse age groups, though results still depend on image quality, lighting, and pose.
Code:
# A Gender and Age Detection program using OpenCV and Deep Learning
import cv2
import math
import argparse


def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)

    net.setInput(blob)
    detections = net.forward()
    faceBoxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                          int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes


parser = argparse.ArgumentParser()
parser.add_argument('--image')
args = parser.parse_args()

faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']
faceNet = cv2.dnn.readNet(faceModel, faceProto)
ageNet = cv2.dnn.readNet(ageModel, ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)

video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20
while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        cv2.waitKey()
        break

    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")

    for faceBox in faceBoxes:
        face = frame[max(0, faceBox[1] - padding):min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):min(faceBox[2] + padding, frame.shape[1] - 1)]

        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        print(f'Gender: {gender}')

        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print(f'Age: {age[1:-1]} years')

        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow("Detecting age and gender", resultImg)