Research Article
Real-Time Vehicle Detection Using Cross-Correlation and 2D-DWT for Feature Extraction
Received 31 July 2018; Revised 5 November 2018; Accepted 5 December 2018; Published 9 January 2019
Copyright © 2019 Abdelmoghit Zaarane et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Nowadays, real-time vehicle detection is one of the biggest challenges in driver-assistance systems due to complex environments and the diverse types of vehicles. Vehicle detection can be exploited to accomplish several tasks, such as computing the distances to other vehicles, which can help the driver by warning them to slow down in order to avoid collisions. In this paper, we propose an efficient real-time vehicle detection method that follows two steps: hypothesis generation and hypothesis verification. In the first step, potential vehicle locations are detected by template matching using cross-correlation, which is a fast algorithm. In the second step, the two-dimensional discrete wavelet transform (2D-DWT) is used to extract features from the hypotheses generated in the first step and then to classify them as vehicles or nonvehicles. The choice of the classifier is very important due to the pivotal role it plays in the quality of the final results. Therefore, two classifiers, SVMs and AdaBoost, are used in this paper and their results are compared thereafter. The results of the experiments are compared with some existing systems, showing that the proposed system has good robustness and accuracy and can meet real-time requirements.
vehicles, where the detection method is based on the detection of rear lights by looking for red regions in the image; they used a symmetry measure function to analyse the symmetry of the color distribution and to determine the specific position of the symmetry axis. Afterwards, the pairs of edges are determined to rebuild the integrated edges of a vehicle.

After the hypothesis generation step, the generated hypotheses should be classified as vehicles or not. In this step, two essential operations are needed: feature extraction and classification. Various methods have been proposed for this step. The Haar-like feature extraction method, which is robust and rapid and uses the integral image, is widely used, but its problem resides in the huge number of output features; dimensionality reduction techniques [5, 6] are usually required for such high-dimensional features. The Haar-like method has been a good partner for many classifiers: in [7-9] it was combined with the SVMs classifier, and in [10-12] with the AdaBoost classifier. Other well-known feature extraction methods are also used, such as the histogram of oriented gradients (HOG), Gabor filters, and gradient features. In [1], the AdaBoost and SVMs classifiers are trained on combined HOG features. In [13], a new descriptor is proposed for vehicle verification using the alternative log-Gabor family of functions instead of the existing descriptors based on the Gabor filter; descriptors based on Gabor filters have presented good results and shown good performance in extracting features [14]. A system that detects the rear of vehicles in real time, based on the AdaBoost classification method and gradient features, has been presented for adaptive cruise control (ACC) applications; the gradient features method is good at characterizing object shape and appearance.

In our proposition, at the hypothesis generation step, vehicle candidates are determined using cross-correlation after preprocessing with edge detection to improve the results. Cross-correlation is a common method used to evaluate and compute the degree of similarity between two compared images. In the hypothesis verification step, the candidates generated in the previous step are verified. Two major operations are needed in this step: feature extraction and classification. For feature extraction, the third level of the 2D-DWT is utilized, which is a powerful technique for representing data at different scales and frequencies. For classification, two classifiers are used, the support vector machines (SVMs) classifier and the AdaBoost classifier, and their results are then compared to obtain a reliable result. We have tested these classifiers using real data; however, this requires a large training set. Currently, we concentrate on daytime detection for various vehicle models and types. In our approach, the vehicle candidates are generated from the highly correlated zones. These possible vehicle candidates are then classified with AdaBoost and SVMs to remove nonvehicle objects. Figure 1 shows the overall flow diagram of the method.

The organization of the paper is as follows. Section 2 describes the hypothesis generation. In Section 3, the hypothesis verification method is presented. The experimental results are presented in Section 4, followed by the conclusion in Section 5.

2. Hypothesis Generation

The principal step in the vehicle detection system is the generation of hypotheses: in this step, we should look in the image for the places where vehicles may be found (zones of interest). In our proposition, we first perform a preprocessing method using edge detection, which plays an important role in the performance of our method. After the preprocessing, cross-correlation, an algorithm that calculates the similarity between a template and an image, is used to detect the zones of interest. The use of edge detection improves the result of the cross-correlation and also reduces the processing time.

In this section, the preprocessing and cross-correlation techniques for initial candidate generation are treated.

2.1. Edge Detection. The best features that can be extracted from vehicles in detection systems are corners, color, shadows, and horizontal and vertical edges. Shadows are good features that can be utilized to facilitate vehicle hypotheses; however, they are very dependent on image intensity, which in turn depends on weather conditions. Corner features can be found easily; however, they are easily corrupted by noise.

In this paper, edge detection is used, where horizontal and vertical edges are good features to extract. Looking at the edges reduces the required information because they replace a color image by a binary image in which objects and surface markings are outlined. These image parts are the most informative ones.

The first step is to generate a global contour image from the input gray-scale image using the Canny edge detector [15]. The selection of the threshold values for the Canny edge detector is not so critical as long as it generates enough edges for the symmetry detector. The edge detection is performed on both the image and the template. Figure 2 shows the result of edge detection performed on a typical road scene captured by the forward-looking camera.

This technique improves the quality of the chosen vehicle candidates, and it optimizes the processing time.

2.2. Cross-Correlation. The purpose is to identify areas in the image that are probably vehicles; the problem is thus to detect the pattern position in images. Cross-correlation, a standard method of estimating the degree of similarity between two images, in other words how much they are correlated [16], is utilized to achieve this purpose. Therefore, the vehicle hypotheses in the images are found based on the degree of similarity between template images and test images. Figure 3 shows a template image example.

The cross-correlation function between the image and the template is defined as:
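As a hedged sketch of this hypothesis-generation step, the code below builds a binary edge map with a simple gradient threshold (a stand-in for the Canny detector the paper uses) and then scores each window of the image against the template with normalized cross-correlation (NCC). The function names, the threshold values, and the exhaustive window scan are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of hypothesis generation: edge preprocessing followed by normalized
# cross-correlation (NCC) template matching. The simple gradient threshold
# stands in for the Canny detector used in the paper.
import math

def edge_map(img, thresh=1.0):
    """Binary edge map from horizontal/vertical intensity differences."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            out[y][x] = 1.0 if math.hypot(gx, gy) >= thresh else 0.0
    return out

def ncc(patch, tmpl):
    """Normalized cross-correlation between two equally sized patches."""
    vals_p = [v for row in patch for v in row]
    vals_t = [v for row in tmpl for v in row]
    mp = sum(vals_p) / len(vals_p)
    mt = sum(vals_t) / len(vals_t)
    num = sum((p - mp) * (t - mt) for p, t in zip(vals_p, vals_t))
    den = math.sqrt(sum((p - mp) ** 2 for p in vals_p) *
                    sum((t - mt) ** 2 for t in vals_t))
    return num / den if den else 0.0

def match(img, tmpl, min_score=0.8):
    """Return (row, col, score) of windows whose NCC exceeds min_score."""
    th, tw = len(tmpl), len(tmpl[0])
    hits = []
    for y in range(len(img) - th + 1):
        for x in range(len(img[0]) - tw + 1):
            patch = [row[x:x + tw] for row in img[y:y + th]]
            s = ncc(patch, tmpl)
            if s >= min_score:
                hits.append((y, x, s))
    return hits
```

A window identical to the template scores 1.0, so thresholding the NCC map yields the zones of interest that are passed on to hypothesis verification.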
Journal of Electrical and Computer Engineering
Figure 1: Overall flow diagram of the method: gray input image sequence → preprocessing (edge detection) → cross-correlation → third-level 2D-DWT → support vector machines/AdaBoost → detected vehicles.

(Figure: SVMs training and SVMs classification, separating vehicle and nonvehicle samples.)
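Before classification, the verification stage represents each candidate with a third-level 2D-DWT. As a hedged sketch, one decomposition level splits an image into an approximation (LL) sub-band and three detail sub-bands (LH, HL, HH), and repeating the transform on the LL band three times gives the third level. The Haar wavelet used here is an illustrative assumption; this excerpt of the paper does not state which wavelet family the authors chose:

```python
# Sketch of a third-level 2D discrete wavelet transform using averaged Haar
# filters. The wavelet family is an assumption for illustration; the paper
# only specifies that the third decomposition level is used for features.
def haar_dwt2(img):
    """One 2D-DWT level: returns (LL, LH, HL, HH) half-size sub-bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a = img[2 * i][2 * j]          # top-left of the 2x2 input block
            b = img[2 * i][2 * j + 1]      # top-right
            c = img[2 * i + 1][2 * j]      # bottom-left
            d = img[2 * i + 1][2 * j + 1]  # bottom-right
            LL[i][j] = (a + b + c + d) / 4.0   # approximation
            LH[i][j] = (a - b + c - d) / 4.0   # detail across columns
            HL[i][j] = (a + b - c - d) / 4.0   # detail across rows
            HH[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

def third_level_features(img):
    """Iterate the transform on LL three times; return the level-3 bands."""
    ll = img
    for _ in range(3):
        ll, lh, hl, hh = haar_dwt2(ll)
    return ll, lh, hl, hh
```

On a 158 × 154 candidate (the normalized size used in the experiments), three levels shrink the sub-bands by a factor of eight per axis, which is what keeps the feature vector compact before it reaches the classifier.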
Input: (x1, y1), ..., (xm, ym), a set of labeled examples where xi ∈ X and yi ∈ {−1, +1}.
Initialization: D1(i) = 1/m for i = 1, ..., m.
For t = 1, ..., T:
    Train the weak learner on distribution Dt.
    Obtain a weak hypothesis ht: X → {−1, +1}.
    Select ht with low weighted error:
        εt = Σ_{i: ht(xi) ≠ yi} Dt(i).
    If εt > 1/2, then set T = t − 1 and abort the loop.
    Choose βt = εt/(1 − εt).
    Update Dt:
        Dt+1(i) = (Dt(i)/Zt) × (βt if ht(xi) = yi, 1 otherwise),
    where Zt is a normalization factor (chosen so that Dt+1 is a distribution).
The final hypothesis is:
    H(x) = sign(Σ_{t=1}^{T} ln(1/βt) ht(x)).
ALGORITHM 1
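Algorithm 1 can be sketched directly in code. The decision-stump weak learner below is an illustrative assumption (the paper does not specify which weak learner it trains); the weight update and the final weighted vote follow the βt formulation of Algorithm 1:

```python
# Sketch of Algorithm 1: discrete AdaBoost with the beta_t weight update.
# The threshold "decision stump" weak learner is an illustrative assumption.
import math

def stump(X, y, D):
    """Weak learner: best single-feature threshold under weights D."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for sign in (+1, -1):
                pred = [sign if x[f] >= thr else -sign for x in X]
                err = sum(d for d, p, t in zip(D, pred, y) if p != t)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    err, f, thr, sign = best
    return err, lambda x: sign if x[f] >= thr else -sign

def adaboost(X, y, T=10):
    m = len(X)
    D = [1.0 / m] * m                      # D1(i) = 1/m
    hyps = []                              # list of (alpha_t, h_t)
    for _ in range(T):
        eps, h = stump(X, y, D)            # weighted error eps_t
        if eps > 0.5:                      # abort condition of Algorithm 1
            break
        eps = max(eps, 1e-10)              # guard against a perfect stump
        beta = eps / (1.0 - eps)           # beta_t = eps_t / (1 - eps_t)
        # D_{t+1}(i) = D_t(i)/Z_t * (beta_t if correct, 1 otherwise)
        D = [d * (beta if h(x) == t else 1.0) for d, x, t in zip(D, X, y)]
        Z = sum(D)                         # normalization factor Z_t
        D = [d / Z for d in D]
        hyps.append((math.log(1.0 / beta), h))

    def H(x):                              # H(x) = sign(sum ln(1/beta_t) h_t(x))
        return 1 if sum(a * h(x) for a, h in hyps) >= 0 else -1
    return H
```

In the paper's pipeline, each x would be the third-level 2D-DWT feature vector of a candidate region and y = +1/−1 would mark vehicle versus nonvehicle.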
4. Experiment Results

4.1. Experimental Datasets. The database used in the experiments contains two parts. The first part was built by combining the Caltech car database [21] with some images captured manually in different situations; it was used to train the classifier. The second part consists of videos collected in real traffic scenes, which are utilized to test the hypothesis generation and hypothesis verification steps. Some of the images contain vehicles and others contain background objects. All images are normalized to 158 × 154 pixels. This paper uses MATLAB R2015b as the software development tool to test the proposed method. The device configuration is 4.0 GB of DDR4 memory and a 3.40 GHz Intel(R) Core(TM) i5 CPU.

Figure 9: Some vehicle training sample images.
The Caltech car database includes 1155 vehicle images taken from the rear and 1155 nonvehicle images. The real traffic scenes are captured by a camera mounted on the car windshield and contain much interference, such as traffic lines, trees, and billboards. Figure 9 shows some examples of the database.
4.2. Performance Metrics. To test the proposed system, we collected real traffic videos using a camera mounted on the front of a car. The vehicle detection was tested in various environments, and it showed a good detection rate, especially on highways.

Some results of hypothesis generation using cross-correlation on different image sequences are shown in Figure 10. The trees beside the road and the rear window of a car generate some false hypotheses. However, the purpose of this step is to detect the potential vehicle locations regardless of the number of false candidates generated, since the false candidates are removed in the hypothesis verification step, as shown in Figure 11.

Figure 10: Hypothesis generation result after cross-correlation: (a) very close and very far generated candidates; (b) very close and far generated candidates; (c) close and far generated candidates.

Figure 11: The results of the hypothesis verification step on the generated candidates.

To evaluate the performance of the proposed method, the statistical data and the accuracy of various testing cases were recorded and are listed in Table 1. The accuracy is defined as follows:

    accuracy = Td / (Td + Mv + Fd) × 100%,    (6)

where Td is the number of true detections, Mv is the number of missed vehicles, and Fd is the number of false detections.

In order to get the best results, we have to look for an efficient classifier, since the classification step is the most important step in detection systems. Therefore, we have used and compared two efficient classifiers, SVMs and AdaBoost, to verify and classify the features extracted by the 2D-DWT from the generated hypotheses. Both classifiers gave efficient results; however, the AdaBoost classifier gave a higher classification accuracy and showed more advantages than the SVM classifier, which also showed a respectable accuracy in classifying the generated hypotheses. Most missed vehicles are missed due to overlapping. Nevertheless, the detection of overlapping vehicles succeeds depending on the percentage of the vehicle hidden behind other vehicles: if only a small part of a vehicle is hidden, it will be generated in the hypothesis generation step and detected; otherwise, it will not be detected. This problem is not very important, since the most important problem is to detect the vehicles directly in front of the current vehicle.
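The accuracy measure of equation (6) can be checked directly against the reported rates. The helper function below is a hypothetical convenience; the counts are taken from Table 1 (video sequence 1, cross-correlation + 2D-DWT + SVMs):

```python
# Accuracy as defined in equation (6): Td / (Td + Mv + Fd) * 100%.
def detection_accuracy(td, mv, fd):
    """td: true detections, mv: missed vehicles, fd: false detections."""
    return td / (td + mv + fd) * 100.0

# Counts from Table 1, video sequence 1, cross-correlation + 2D-DWT + SVMs:
acc = detection_accuracy(td=98, mv=2, fd=2)
print(round(acc, 2))  # 96.08, matching the reported accuracy
```

Note that missed vehicles and false detections both sit in the denominator, so the measure penalizes false alarms as well as misses, unlike plain recall.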
Table 1 shows the results of our vehicle detection system.

Table 1: Vehicle detection rates.

                                               Video sequences
Methods                                     1       2       3       4
Cross-correlation +      TD                98     102     115     121
2D-DWT + SVMs            MV                 2       2       2       4
                         FD                 2       2       1       2
                         Accuracy (%)   96.08   96.23   97.46   95.27
Cross-correlation +      TD                99     101     116     123
2D-DWT + AdaBoost        MV                 1       2       1       2
                         FD                 1       2       1       1
                         Accuracy (%)   98.02   96.19   98.31   97.62

4.3. Evaluation Results. To evaluate our proposed work, we use three methods for comparison. Yan et al. [1] rely on the shadow under the vehicle to detect the region of interest and then use histograms of oriented gradients and the AdaBoost classifier for vehicle detection. Tang et al. [7] rely on Haar-like features and the AdaBoost classifier to detect vehicles, which is a very popular method. Ruan et al. [22] focus on wheel detection to detect vehicles; they rely on the HOG extractor and MB-LBP (multiblock local binary pattern) with AdaBoost to detect the vehicles' wheels. Table 2 shows the results of the three methods on different scenes in different conditions compared to our proposed method's results, and this comparison shows that the proposed method has the highest accuracy and confirms
that it is able to detect vehicles in different conditions with a high accuracy and efficiency.

Table 2: Evaluation results of 4 vehicle detection methods.

                              Video sequences
Methods                 1 (%)    2 (%)    3 (%)    4 (%)
Yan et al. [1]          97.03    95.24    97.48    96.82
Tang et al. [7]         96.08    94.29    96.64    96.06
Ruan et al. [22]        94.23    92.45    94.22    93.79
Proposed method         98.02    96.19    98.31    97.62

5. Conclusion

A real-time vehicle detection system using a camera mounted on the front of a car is proposed in this paper. We have proposed a solution based on the cross-correlation method. The proposed system includes two steps: hypothesis generation and hypothesis verification. Firstly, in the hypothesis generation step, the initial candidate selection is done using the cross-correlation technique after applying edge detection to improve the result and reduce the processing time. Then, in the hypothesis verification step, the two-dimensional discrete wavelet transform is applied to both the selected candidates and the dataset to extract features. Two well-known classifiers, SVM and AdaBoost, were trained using these extracted features. Based on a comparison of the two classifiers' results, it was concluded that the AdaBoost classifier performed better in terms of accuracy than SVMs, which also showed an interesting accuracy. The experimental results presented in this paper show that the proposed approach has good accuracy compared to other methods.

Data Availability

The data used to support the findings of this study are included within the article [21].

Additional Points

Our perspectives include improving the hypothesis verification step by updating the AdaBoost classifier in order to reduce the processing time, and measuring the distance between the detected vehicles and the camera.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

[1] G. Yana, Y. Ming, Y. Yang, and L. Fan, "Real-time vehicle detection using histograms of oriented gradients and AdaBoost classification," Optik, vol. 127, no. 19, pp. 7941-7951, 2016.
[2] S. T. Soo and T. Bräunl, "Symmetry-based monocular vehicle detection system," Machine Vision and Applications, vol. 23, no. 5, pp. 831-842, 2012.
[3] A. Jazayeri, H. Cai, Y. Jiang Zheng, and M. Tuceryan, "Vehicle detection and tracking in car video based on motion model," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 583-595, 2011.
[4] L. Gao, C. Li, T. Fang, and X. Zhang, "Vehicle detection based on color and edge information," in ICIAR 2008, vol. 5112, Springer, Berlin, Germany, 2008.
[5] J. Li and D. Tao, "Simple exponential family PCA," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 485-497, 2013.
[6] P. J. Cunningham and Z. Ghahramani, "Linear dimensionality reduction: survey, insights, and generalizations," Journal of Machine Learning Research, vol. 16, pp. 2859-2900, 2015.
[7] Y. Tang, C. Zhang, R. Gu, P. Li, and B. Yang, "Vehicle detection and recognition for intelligent traffic surveillance system," Multimedia Tools and Applications, vol. 76, no. 4, pp. 5817-5832, 2015.
[8] X. Wen, H. Zhao, N. Wang, and H. Yuan, "A rear-vehicle detection system for static images based on monocular vision," in Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision, pp. 2421-2424, Singapore, March 2006.
[9] W. Liu, X. Wen, B. Duan et al., "Rear vehicle detection and tracking for lane change assist," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 252-257, Istanbul, Turkey, June 2007.
[10] M. M. Moghimi, M. Nayeri, M. Pourahmadi, and M. K. Moghimi, "Moving vehicle detection using AdaBoost and Haar-like features in surveillance videos," International Journal of Imaging and Robotics, vol. 18, no. 1, pp. 94-106, 2018.
[11] X. Wen, L. Shao, Y. Xue, and W. Fang, "A rapid learning algorithm for vehicle classification," Information Sciences, vol. 295, pp. 395-406, 2015.
[12] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in Proceedings of the IEEE International Conference on Image Processing, pp. 900-903, Rochester, NY, USA, 2002.
[13] J. Arrospide and L. Salgado, "Log-Gabor filters for image-based vehicle verification," IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2286-2295, 2013.
[14] A. Khammari, F. Nashashibi, Y. Abramson, and C. Laurgeau, "Vehicle detection combining gradient analysis and AdaBoost classification," in Proceedings of the 2005 IEEE Intelligent Transportation Systems Conference, pp. 66-71, Vienna, Austria, September 2005.
[15] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[16] S.-D. Wei and S.-H. Lai, "Fast template matching based on normalized cross correlation with adaptive multilevel winner update," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2227-2235, 2008.
[17] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961-1005, 1990.
[18] I. Slimani, A. Zaarane, and A. Hamdoun, "Convolution algorithm for implementing 2D discrete wavelet transform on the FPGA," in Proceedings of the 2016 IEEE/ACS 13th International Conference on Computer Systems and Applications (AICCSA), pp. 1-3, Agadir, Morocco, November-December 2016.
[19] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of online learning and an application to boosting," in