
Hindawi

Journal of Electrical and Computer Engineering


Volume 2019, Article ID 6375176, 9 pages
https://doi.org/10.1155/2019/6375176

Research Article
Real-Time Vehicle Detection Using Cross-Correlation and
2D-DWT for Feature Extraction

Abdelmoghit Zaarane , Ibtissam Slimani , Abdellatif Hamdoun, and Issam Atouf


LTI Lab, Laboratory of Information Processing, Department of Physics, Faculty of Sciences Ben M’sik,
University Hassan II Casablanca, BP 7955, Casablanca, Morocco

Correspondence should be addressed to Abdelmoghit Zaarane; [email protected]

Received 31 July 2018; Revised 5 November 2018; Accepted 5 December 2018; Published 9 January 2019

Academic Editor: Jar-Ferr Yang

Copyright © 2019 Abdelmoghit Zaarane et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

Nowadays, real-time vehicle detection is one of the biggest challenges in driver-assistance systems due to complex environments and the diverse types of vehicles. Vehicle detection can be exploited to accomplish several tasks, such as computing the distances to other vehicles, which can help the driver by warning him to slow down and avoid collisions. In this paper, we propose an efficient real-time vehicle detection method that follows two steps: hypothesis generation and hypothesis verification. In the first step, potential vehicle locations are detected with a template matching technique based on cross-correlation, which is a fast algorithm. In the second step, the two-dimensional discrete wavelet transform (2D-DWT) is used to extract features from the hypotheses generated in the first step, which are then classified as vehicles or nonvehicles. The choice of the classifier is very important because of the pivotal role it plays in the quality of the final results. Therefore, two classifiers, SVMs and AdaBoost, are used in this paper and their results are compared. The experimental results are compared with some existing systems and show that the proposed system performs well in terms of robustness and accuracy and can meet real-time requirements.

1. Introduction

Automatic vehicle detection has gained importance in research over the last fifteen years, as the development of a successful vehicle detection system is the principal step for driver assistance, which needs to calculate the distances between vehicles in order to warn drivers to slow down and avoid accidents and collisions.

Several methods are used to detect vehicles [1], such as laser-based or radar systems. In this paper, however, we rely on image processing. The majority of the proposed methods follow two steps, namely, hypothesis generation and hypothesis verification. In the hypothesis generation step, the locations of vehicles ("zones of interest") in the image are hypothesized. In the hypothesis verification step, the zones of interest are processed and verified as vehicles or not. Several methods have been proposed to generate the hypotheses. Yan et al. [1] used prior knowledge of the shadows underneath vehicles to detect the zones of the image where a vehicle may be, but this is suitable only in specific weather and at specific times of day. Soo et al. [2] proposed a monocular symmetry-based vehicle detection system, since symmetry is one of the most distinctive visual characteristics of a vehicle. However, computing the symmetry values for every pixel is a time-consuming process. Jazayeri et al. [3] detected and tracked vehicles based on motion information; they relied on the temporal information of features and their motion behaviors for vehicle identification, which helps compensate for the complexity of recognizing the types, colors, and shapes of vehicles. A motion-based method is successful at detecting moving objects. However, it is computationally intensive and requires the analysis of several frames before an object can be detected. It is also sensitive to camera movement and may fail to detect objects with slow relative motion. Gao et al. [4] used color and edge information to detect

vehicles, where the detection method is based on detecting the rear lights by looking for red regions in the image; they used a symmetry measure function to analyse the symmetry of the color distribution and determine the specific position of the axis of symmetry. Afterwards, the pairs of edges are determined to rebuild the integrated edges of a vehicle.

After the hypothesis generation step, the generated hypotheses should be classified as vehicles or not. In this step, two essential operations are needed: feature extraction and classification. Various methods have been proposed for this step. The Haar-like feature extraction method, which is robust and rapid and uses the integral image, is often used, but the problem resides in the huge number of output features; dimensionality reduction techniques [5, 6] are usually required for such high-dimensional features. The Haar-like method has been a good partner for many classifiers: in [7–9] it was combined with the SVMs classifier, and in [10–12] with the AdaBoost classifier. Other well-known feature extraction methods are also used, such as the histogram of oriented gradients (HOG), Gabor filters, and gradient features. In [1], the AdaBoost and SVMs classifiers are trained on combined HOG features. In [13], a new descriptor is proposed for vehicle verification using the alternative log-Gabor family of functions instead of the existing descriptors based on the Gabor filter; descriptors based on Gabor filters have shown good performance in extracting features [14]. In [14], a system is presented that detects the rear of vehicles in real time based on the AdaBoost classification method and gradient features for adaptive cruise control (ACC) applications; the gradient features method is good at characterizing object shape and appearance.

In our proposition, at the hypothesis generation step, vehicle candidates are determined using cross-correlation after preprocessing with edge detection to improve the results. Cross-correlation is a common method used to evaluate and compute the degree of similarity between two images. In the hypothesis verification step, the candidates generated in the previous step are verified. Two major operations are needed in this step: feature extraction and classification. For feature extraction, the third level of the 2D-DWT is used, which is a powerful technique for representing data at different scales and frequencies. For classification, two classifiers are used, the support vector machines (SVMs) classifier and the AdaBoost classifier, and their results are then compared to obtain a reliable result. We have tested these classifiers using real data; however, this requires a large training set. Currently, we concentrate on daytime detection of various vehicle models and types. In our approach, the vehicle candidates are generated from the highly correlated zones. These possible vehicle candidates are then classified with AdaBoost and SVMs to remove nonvehicle objects. Figure 1 shows the overall flow diagram of the method.

Figure 1: Overall flow diagram of our vehicle detection algorithm: gray input image sequence → preprocessing (edge detection) → cross-correlation (hypothesis generation) → third-level 2D-DWT → SVMs/AdaBoost (hypothesis verification) → detected vehicles.

The organization of the paper is as follows. Section 2 describes the hypothesis generation. In Section 3, the hypothesis verification method is presented. The experimental results are presented in Section 4, followed by the conclusion in Section 5.

2. Hypothesis Generation

The principal step in a vehicle detection system is the generation of hypotheses; in this step, we should look in the image for the places where vehicles may be found (zones of interest). In our proposition, we first perform a preprocessing step using edge detection, which plays an important role in the performance of our method. After the preprocessing, cross-correlation, an algorithm that calculates the similarity between a template and an image, is used to detect the zones of interest. The use of edge detection improves the result of the cross-correlation and also reduces the processing time.

In this section, the preprocessing and cross-correlation techniques for initial candidate generation are treated.

2.1. Edge Detection. The best features that can be extracted from vehicles in detection systems are corners, color, shadows, and horizontal and vertical edges. Shadows are good features that can be used to facilitate the hypothesis of vehicles; however, they are very dependent on image intensity, which in turn depends on weather conditions. Corner features can be found easily; however, they are easily corrupted by noise.

In this paper, edge detection is used, since horizontal and vertical edges are good features to extract. Looking at the edges reduces the required information because a color image is replaced by a binary image in which objects and surface markings are outlined. These image parts are the most informative ones.

The first step is to generate a global contour image from the input gray-scale image using the Canny edge detector [15]. The selection of the threshold values for the Canny edge detector is not critical as long as it generates enough edges for the symmetry detector. The edge detection is performed on both the image and the template. Figure 2 shows the result of edge detection performed on a typical road scene captured by the forward-looking camera.

Figure 2: Resulting image using the Canny edge detector.

This technique improves the quality of the vehicle candidates and optimizes the processing time.

2.2. Cross-Correlation. The purpose is to identify areas in the image that are probably vehicles; the problem is therefore to detect the position of a pattern in images. Cross-correlation, a standard method of estimating the degree of similarity between two images, in other words, how correlated they are [16], is used to achieve this purpose. The vehicle hypotheses in the images are thus found based on the degree of similarity between template images and test images. Figure 3 shows an example of a template image.

Figure 3: Example of a template.

The function of cross-correlation between the image and the template is defined as

ρ = Σ_{i,j} (x(i, j) − x̄)(y(i, j) − ȳ) / (σ_x σ_y),  (1)

where x(i, j) is the part of the image covered by the template and x̄ is the mean of x(i, j); y(i, j) is the template and ȳ is the mean of y(i, j); and σ_x and σ_y are the standard deviations of x(i, j) and y(i, j), respectively. The function ρ varies between −1 and +1: a good correlation is found when ρ takes values near +1 (i.e., when the first function increases, the second one increases in proportion); the uncorrelated state is found when ρ takes values near 0 (i.e., there is no relation between variations in the first function and the second); and the anticorrelated state is detected when ρ takes values near −1 (i.e., when the first function decreases, the second increases in proportion). The best match occurs where the template and the test image have maximum ρ. Multiple candidate locations can be found using this technique.

The problem with matching using cross-correlation is that it detects the similarity between the template and a part of the image only if they have almost the same size (or a slightly bigger or smaller one), which means that vehicles can be detected only at a predefined distance; in other words, we can detect only far vehicles or only near vehicles. In our proposition, to overcome this problem, we chose to work with four different template sizes. Two smaller sizes are used to detect far and very far vehicles, and two bigger sizes are used to detect close and very close vehicles. We do not need more sizes because the farthest vehicles are not that important. Thanks to the edge detection, hypotheses for different vehicles are generated using few templates even if the vehicles have different shapes or types compared with the templates; therefore, there is no need to use a template for each vehicle type, shape, or texture. In our case, three templates in four sizes are enough to generate the hypotheses for the three vehicle categories: a template for cars, a template for buses, and a template for trucks. Figure 4 shows an example of a cross-correlation result that generates hypotheses of far vehicles (red bounding boxes) and of nearby vehicles (green bounding boxes).
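The hypothesis generation described above (edge detection followed by template matching with the correlation coefficient of equation (1)) can be sketched as follows. This is a minimal illustration, not the authors' code: it uses a Sobel gradient threshold as a stand-in for the Canny detector, scans a single template size, and normalizes the sum by the window size so that ρ ∈ [−1, 1]; all function names are our own.

```python
import numpy as np

def edge_map(img, thresh=0.25):
    """Binary edge map via Sobel gradient magnitude (stand-in for Canny [15])."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    p = np.pad(img.astype(float), 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)      # horizontal gradient
            gy[i, j] = np.sum(win * kx.T)    # vertical gradient
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(mag)
    return (mag > thresh * mag.max()).astype(float)

def ncc(window, template):
    """Correlation coefficient of equation (1), normalized to [-1, 1]."""
    x = window - window.mean()
    y = template - template.mean()
    denom = window.std() * template.std() * x.size
    return 0.0 if denom == 0 else float(np.sum(x * y) / denom)

def generate_hypotheses(image, template, rho_min=0.6):
    """Slide the edge-detected template over the edge-detected image and
    keep the highly correlated zones as vehicle candidates."""
    E, T = edge_map(image), edge_map(template)
    th, tw = T.shape
    hits = []
    for i in range(E.shape[0] - th + 1):
        for j in range(E.shape[1] - tw + 1):
            rho = ncc(E[i:i + th, j:j + tw], T)
            if rho >= rho_min:
                hits.append((i, j, rho))
    return hits
```

In practice this scan would be repeated for each of the paper's three templates in their four sizes, keeping the locations with maximum ρ as the zones of interest.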
3. Hypothesis Verification

The hypothesis verification step plays an important role in vehicle detection. The results of the previous step are the
positions in the image where vehicles may be found. However, not all positions detected in the image belong to vehicles; therefore, further verification is needed. In the verification step, two major methods are needed: a feature extraction method and a classification method. The classifier is used to decide whether the extracted features correspond to vehicles or not. Seeking solutions that improve the vehicle detection accuracy and reduce the false detection rate while remaining real time, we propose to use the two-dimensional discrete wavelet transform for feature extraction and AdaBoost and SVMs to classify the extracted features. The discrete wavelet transform (DWT) has good localization properties in the frequency and time domains, and it is an efficient method for feature extraction. The AdaBoost and SVM classifiers have been used in several studies and have shown very good results.

Figure 4: The result of the cross-correlation.

In this section, the discrete wavelet transform and the SVMs and AdaBoost classifiers are treated.

3.1. Discrete Wavelet Transform. The wavelet transform is widely used in many applications because it reduces the computation cost and provides sharper time/frequency localization [17], in contrast to the Fourier transform. The discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. The principle of the DWT is to decompose the input signal into two subsignals: the detail and the approximation. The approximation corresponds to the low frequencies of the input signal, which carry most of the signal's energy, and the detail corresponds to the high frequencies of the input signal. This technique can be repeated at multiple levels by taking the approximation as the input signal. The same principle is applied to images: the DWT decomposes the image into four subband images, LL, LH, HL, and HH [18], as shown in Figure 5. The LL subband image contains the low-frequency component of the input image, which corresponds to the approximation, and the HL, LH, and HH subband images contain the high-frequency components of the input image, which are the details.

Figure 5: (a) First level of DWT. (b) Subbands of the first level of DWT (LL, LH, HL, HH).

As shown in Figure 6, the low-pass filter and the high-pass filter are applied first along the lines of the input image and then along the columns. Furthermore, after each filtering operation, downsampling is applied to reduce the overall amount of computation. This technique can be repeated at multiple levels until the desired result is obtained, as shown in Figure 7.

Figure 6: The structure of the forward two-dimensional DWT (low-pass and high-pass filtering with downsampling by 2, producing the LL, LH, HL, and HH subbands).

In this study, we have concentrated on the third level of the 2D-DWT. This technique is applied to each generated candidate and to the dataset images to extract features. We extract the important features that we need, which helps us to improve the result of the classification.
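As a concrete sketch, one 2D-DWT level can be implemented with row/column filtering plus downsampling, repeated three times on the LL band to reach the third level. This illustration uses the Haar filter pair (the paper does not state which wavelet the authors used), and it assumes image dimensions divisible by 8; inputs like the paper's 158 × 154 candidates would first need cropping or padding. Function names are our own.

```python
import numpy as np

def haar_dwt2(img):
    """One 2D-DWT level: low-/high-pass filtering with downsampling by 2,
    first along rows, then along columns. Returns (LL, LH, HL, HH)."""
    a = img.astype(float)
    # filter rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # filter columns of each half-band
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # details
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def third_level_features(img):
    """Apply the DWT three times, recursing on the approximation (LL) band,
    and flatten the third-level subbands into one feature vector."""
    ll = img
    for _ in range(3):
        ll, lh, hl, hh = haar_dwt2(ll)
    return np.concatenate([b.ravel() for b in (ll, lh, hl, hh)])
```

Each level halves both dimensions, so a 160 × 152 input yields 20 × 19 third-level subbands and a feature vector of 4 · 20 · 19 = 1520 entries.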
Figure 7: Third level of 2D-DWT.

3.2. Support Vector Machines (SVMs). The SVM is a popular machine learning algorithm for classification. It is a discriminative classifier that defines a separating hyperplane based on labeled training data (supervised learning). The algorithm generates the best hyperplane to classify new examples; the SVM principle is to find the hyperplane that maximizes the distance between the training example classes, which is called the margin. Therefore, the optimal separating hyperplane maximizes the margin of the training data.

The separating hyperplane is defined as

f(x) = (ω · x) + b,  (2)

where ω is known as the weight and b is called the bias.

The margin is given as

M = 2 / ‖ω‖.  (3)

According to this expression, it is necessary to minimize ‖ω‖ to maximize the margin.

The classification function is given as

Cf = Σ_i ω_i · k(x, x_i) + b,  (4)

where x_i is a support vector selected from the training samples, x is the input vector, k(x, x_i) is the kernel function, and ω_i is the weight of the support vector x_i, which is determined in the training process.

In our paper, the radial basis function (RBF) kernel is used, as it gives good results compared to the other kernels. The RBF kernel function is given as

k(x, x_i) = exp(−‖x − x_i‖² / (2δ²)).  (5)

The SVMs are trained using the positive and negative samples: the positive and negative vectors are trained to be classified with the SVMs. x is considered a member of class one only if Cf ≥ 0; otherwise, x is considered a member of class two. The flowchart that illustrates the SVM classification is shown in Figure 8.

Figure 8: The flowchart of SVM classification (templates → SVM training; test image → SVM classification → vehicle/nonvehicle).

3.3. AdaBoost Classifier. AdaBoost (adaptive boosting) was proposed by Freund and Schapire in 1996 [19]. It is a supervised learning algorithm that classifies between positive and negative examples, and it aims at converting an ensemble of weak classifiers into a strong classifier. A single classifier may classify the objects poorly; however, when multiple classifiers are combined, with selection of the training set at every iteration and assignment of the right amount of weight in the final voting, we can obtain a good accuracy score for the overall classifier. The algorithm's input is a set of labeled training examples (x_i, y_i), i = 1, ..., m, where x_i is an example and y_i is its label, which indicates whether x_i is a positive or a negative example. Every weak classifier is denoted by a function h_t(x) that returns one of the two values {+1, −1}: h_t(x) is +1 if x is classified as a positive example, and h_t(x) is −1 if x is classified as a negative example. The AdaBoost algorithm is shown in Algorithm 1, following [20].

Concerning the training examples, we are given m labeled examples (x_1, y_1), ..., (x_m, y_m), where x_i ∈ X and the labels y_i ∈ {−1, +1}. D_t is a distribution computed over the m training examples for each value of t = 1, ..., T, and a weak learning algorithm is applied to find a weak hypothesis h_t : X → {−1, +1}, the weak learner's purpose being to find a weak hypothesis with low weighted error ε_t relative to D_t. The sign of a weighted combination of the weak hypotheses is computed to determine the final hypothesis H.

3.4. Preparation of Input Data

3.4.1. Training Process. To train the classifier, we first prepare the templates by normalizing them to 158 × 154 grayscale images, then extracting the features using the third level of the 2D-DWT, and finally arranging them in labeled vectors.

3.4.2. Classification Process. To classify the generated candidates (zones of interest), we normalize them to 158 × 154 grayscale images, extract the features using the third level of the 2D-DWT, and finally construct a vector from the extracted features, which is the input of the trained classifier; we then obtain the classification results.

Input: (x_1, y_1), ..., (x_m, y_m), a set of labeled examples where x_i belongs to X and y_i ∈ {−1, +1}.
Initialization: D_1(i) = 1/m for i = 1, ..., m.
For t = 1, ..., T:
  Train the weak learner based on distribution D_t.
  Obtain the weak hypothesis h_t: X → {−1, +1}.
  Select h_t with low weighted error:
    ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i)
  If ε_t > 1/2, then set T = t − 1 and abort the loop.
  Choose β_t = ε_t / (1 − ε_t).
  Update D_t: D_{t+1}(i) = (D_t(i) / Z_t) × { β_t if h_t(x_i) = y_i; 1 otherwise },
  where Z_t is a normalization factor (chosen so that D_{t+1} is a distribution).
The final hypothesis is given as
  H(x) = sign(Σ_{t=1}^{T} ln(1/β_t) h_t(x)).

ALGORITHM 1
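Algorithm 1 can be sketched directly. This is an illustration using one-feature threshold "stumps" as the weak learners (the choice of weak learner is ours; the paper does not specify one):

```python
import numpy as np

def train_adaboost(X, y, T=10):
    """AdaBoost per Algorithm 1 with decision stumps as weak hypotheses.
    X: (m, n) feature matrix; y: labels in {-1, +1}."""
    m, n = X.shape
    D = np.full(m, 1.0 / m)           # D_1(i) = 1/m
    hyps = []                         # (feature, threshold, polarity, alpha)
    for _ in range(T):
        best = None
        # weak learner: pick the stump with the lowest weighted error w.r.t. D_t
        for f in range(n):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(X[:, f] >= thr, pol, -pol)
                    eps = D[pred != y].sum()
                    if best is None or eps < best[0]:
                        best = (eps, f, thr, pol, pred)
        eps, f, thr, pol, pred = best
        if eps > 0.5:                 # abort condition of Algorithm 1
            break
        beta = max(eps, 1e-10) / (1.0 - eps)
        hyps.append((f, thr, pol, np.log(1.0 / beta)))
        # D_{t+1}(i) = D_t(i)/Z_t * (beta if correctly classified, else 1)
        D *= np.where(pred == y, beta, 1.0)
        D /= D.sum()
    return hyps

def adaboost_predict(hyps, x):
    """Final hypothesis: H(x) = sign(sum_t ln(1/beta_t) h_t(x))."""
    s = sum(alpha * (pol if x[f] >= thr else -pol)
            for f, thr, pol, alpha in hyps)
    return 1 if s >= 0 else -1
```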

4. Experimental Results

4.1. Experimental Datasets. The database used in the experiments contains two parts. The first part, used to train the classifier, was built by combining the Caltech car database [21] with some images captured manually in different situations. The second part consists of videos collected in real traffic scenes, which are used to test the hypothesis generation and hypothesis verification steps. Some of the images contain vehicles and others contain background objects. All images are normalized to 158 × 154 pixels. This paper uses MATLAB R2015b as the software development tool to test the proposed method. The device configuration is 4.0 GB of DDR4 memory and a 3.40 GHz Intel(R) Core(TM) i5 CPU.

The Caltech car database includes 1155 vehicle images taken from the rear and 1155 nonvehicle images. The real traffic scenes are captured by a camera mounted on the car windshield; they contain much interference, such as traffic lines, trees, and billboards. Figure 9 shows some examples from the database.

Figure 9: Some vehicle training sample images.

4.2. Performance Metrics. To test the proposed system, we collected real traffic videos using a camera mounted on the front of a car. The vehicle detection was tested in various environments, and it showed a good rate, especially on highways.

Some results of hypothesis generation using cross-correlation on different image sequences are shown in Figure 10. The trees beside the road and the rear window of a car generate some false hypotheses. However, the purpose of this step is to detect the potential vehicle locations regardless of the number of false candidates generated, since the false candidates are removed in the hypothesis verification step, as shown in Figure 11.

To evaluate the performance of the proposed method, the statistical data and the accuracy of various testing cases were recorded and are listed in Table 1. The accuracy is defined as follows:

accuracy = Td / (Td + Mv + Fd) × 100%,  (6)

where Td is the number of true detections, Mv is the number of missed vehicles, and Fd is the number of false detections.
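The accuracy metric of equation (6) is a one-liner:

```python
def detection_accuracy(td, mv, fd):
    """Accuracy of equation (6): true detections over all outcomes, in percent."""
    return 100.0 * td / (td + mv + fd)
```

For instance, 98 true detections with 2 missed vehicles and 2 false detections give 98/102 × 100 ≈ 96.08%, matching the first SVM column of Table 1.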
In order to get the best results, we have to look for an efficient classifier, since the classification step is the most important step in detection systems. Therefore, we have used and compared two efficient classifiers, SVMs and AdaBoost, to verify and classify the features extracted with the 2D-DWT from the generated hypotheses. Both classifiers gave efficient results; however, the AdaBoost classifier gave a higher classification accuracy and showed more advantages than the SVM classifier, which also achieved a respectable accuracy on the generated hypotheses. Most missed vehicles are missed due to overlapping. The detection of overlapping vehicles is handled based on the percentage of vehicle parts hidden behind other vehicles: if only a small part of a vehicle is hidden, the vehicle is still generated in the hypothesis generation step and detected; otherwise, it is not detected. This problem is not very important, as the most important case is detecting the vehicles directly in front of the current vehicle.

Table 1 shows the results of our vehicle detection system.

Figure 10: Hypothesis generation results after cross-correlation: (a) very close and very far generated candidates; (b) very close and far generated candidates; (c) close and far generated candidates.

Figure 11: The results of the hypothesis verification step on the generated candidates.

Table 1: Vehicle detection rates.

                                          Video sequences
Methods                                  1      2      3      4
Cross-correlation +   TD                98    102    115    121
2D-DWT + SVMs         MV                 2      2      2      4
                      FD                 2      2      1      2
                      Accuracy (%)   96.08  96.23  97.46  95.27
Cross-correlation +   TD                99    101    116    123
2D-DWT + AdaBoost     MV                 1      2      1      2
                      FD                 1      2      1      1
                      Accuracy (%)   98.02  96.19  98.31  97.62

4.3. Evaluation Results. To evaluate our proposed work, we compare it with three methods. Yan et al. [1] rely on the shadow under the vehicle to detect the region of interest and then use histograms of oriented gradients and the AdaBoost classifier for vehicle detection. Tang et al. [7] rely on Haar-like features and the AdaBoost classifier, a very popular combination, to detect vehicles. Ruan et al. [22] focus on wheel detection: they rely on the HOG extractor and MB-LBP (multiblock local binary pattern) with AdaBoost to detect vehicle wheels. Table 2 shows the results of the three different methods on different scenes in different conditions compared to our proposed method's results; this comparison shows that the proposed method has the highest accuracy and confirms
that it is able to detect vehicles in different conditions with high accuracy and efficiency.

Table 2: Evaluation results of four vehicle detection methods.

                         Video sequences
Methods              1 (%)   2 (%)   3 (%)   4 (%)
Yan et al. [1]       97.03   95.24   97.48   96.82
Tang et al. [7]      96.08   94.29   96.64   96.06
Ruan et al. [22]     94.23   92.45   94.22   93.79
Proposed method      98.02   96.19   98.31   97.62

5. Conclusion

A real-time vehicle detection system using a camera mounted on the front of a car is proposed in this paper. We have proposed a solution based on the cross-correlation method. The proposed system includes two steps: hypothesis generation and hypothesis verification. First, in the hypothesis generation step, the initial candidate selection is done using the cross-correlation technique after applying edge detection to improve the results and reduce the processing time. Then, in the hypothesis verification step, the two-dimensional discrete wavelet transform is applied to both the selected candidates and the dataset to extract features. Two well-known classifiers, SVM and AdaBoost, have been trained using these extracted features. Based on a comparison of the two classifiers' results, it was concluded that the AdaBoost classifier performed better in terms of accuracy than the SVMs, which also showed an interesting accuracy. The experimental results presented in this paper show that the proposed approach has good accuracy compared to other methods.

Data Availability

The data used to support the findings of this study are included within the article [21].

Additional Points

Our perspectives include improving the hypothesis verification step by updating the AdaBoost classifier in order to reduce the processing time, and measuring the distance between the detected vehicles and the camera.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

[1] G. Yana, Y. Ming, Y. Yang, and L. Fan, "Real-time vehicle detection using histograms of oriented gradients and AdaBoost classification," Optik, vol. 127, no. 19, pp. 7941–7951, 2016.
[2] S. T. Soo and T. Bräunl, "Symmetry-based monocular vehicle detection system," Machine Vision and Applications, vol. 23, no. 5, pp. 831–842, 2012.
[3] A. Jazayeri, H. Cai, Y. Jiang Zheng, and M. Tuceryan, "Vehicle detection and tracking in car video based on motion model," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 583–595, 2011.
[4] L. Gao, C. Li, T. Fang, and X. Zhang, "Vehicle detection based on color and edge information," in ICIAR 2008, vol. 5112, Springer, Berlin, Germany, 2008.
[5] J. Li and D. Tao, "Simple exponential family PCA," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 485–497, 2013.
[6] P. J. Cunningham and Z. Ghahramani, "Linear dimensionality reduction: survey, insights, and generalizations," Journal of Machine Learning Research, vol. 16, pp. 2859–2900, 2015.
[7] Y. Tang, C. Zhang, R. Gu, P. Li, and B. Yang, "Vehicle detection and recognition for intelligent traffic surveillance system," Multimedia Tools and Applications, vol. 76, no. 4, pp. 5817–5832, 2015.
[8] X. Wen, H. Zhao, N. Wang, and H. Yuan, "A rear-vehicle detection system for static images based on monocular vision," in Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision, pp. 2421–2424, Singapore, March 2006.
[9] W. Liu, X. Wen, B. Duan et al., "Rear vehicle detection and tracking for lane change assist," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 252–257, Istanbul, Turkey, June 2007.
[10] M. M. Moghimi, M. Nayeri, M. Pourahmadi, and M. K. Moghimi, "Moving vehicle detection using AdaBoost and Haar-like features in surveillance videos," International Journal of Imaging and Robotics, vol. 18, no. 1, pp. 94–106, 2018.
[11] X. Wen, L. Shao, Y. Xue, and W. Fang, "A rapid learning algorithm for vehicle classification," Information Sciences, vol. 295, pp. 395–406, 2015.
[12] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in Proceedings of the IEEE International Conference on Image Processing, pp. 900–903, Rochester, NY, USA, 2002.
[13] J. Arrospide and L. Salgado, "Log-Gabor filters for image-based vehicle verification," IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2286–2295, 2013.
[14] A. Khammari, F. Nashashibi, Y. Abramson, and C. Laurgeau, "Vehicle detection combining gradient analysis and AdaBoost classification," in Proceedings of the 2005 IEEE Intelligent Transportation Systems Conference, pp. 66–71, Vienna, Austria, September 2005.
[15] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[16] S.-D. Wei and S.-H. Lai, "Fast template matching based on normalized cross correlation with adaptive multilevel winner update," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2227–2235, 2008.
[17] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961–1005, 1990.
[18] I. Slimani, A. Zaarane, and A. Hamdoun, "Convolution algorithm for implementing 2D discrete wavelet transform on the FPGA," in Proceedings of the IEEE/ACS 13th International Conference on Computer Systems and Applications (AICCSA), pp. 1–3, Agadir, Morocco, November–December 2016.
[19] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Computational Learning Theory (EuroCOLT), vol. 904, Springer, Berlin, Germany, 1995.
[20] Y. Freund and R. E. Schapire, "Experiments with a new boosting algorithm," in Proceedings of the Thirteenth International Conference on Machine Learning, pp. 148–156, Bari, Italy, July 1996.
[21] Caltech Cars Dataset: http://www.robots.ox.ac.uk/∼vgg/data3.html.
[22] Y.-S. Ruan, I.-C. Chang, and H.-Y. Yeh, "Vehicle detection based on wheel part detection," in Proceedings of the IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), pp. 187–188, Taipei, Taiwan, June 2017.