REAL TIME RECOGNITION OF VEHICLE NUMBER PLATE
Final Report
BACHELOR OF TECHNOLOGY
in
Electronics and Communication Engineering
PRESIDENCY UNIVERSITY
Bengaluru
Department of Electronics and Communication Engineering
Certificate
This is to certify that the University Project – II work entitled "REAL TIME
RECOGNITION OF VEHICLE NUMBER PLATE" was carried out by Mr. Amith
Prasad (ID No. 2016ECE086), Mr. Hari Govinda Kalkura (ID No. 2016ECE108),
Mr. Korada Sagar (ID No. 2016ECE112) and Mr. D.V.S. Subhash (ID No.
2016ECE150), who are bona fide students of VIII Semester B.Tech. Electronics and
Communication Engineering at Presidency University. This is in partial fulfillment of
the course work in place of Professional Practice – II of the Bachelor of Engineering at
Presidency University, Bengaluru, during the year 2019-2020.
Examiner-1 ______________________
Examiner-2 ______________________
DECLARATION
Mr. Amith Prasad
ID No.: 2016ECE086
Mr. Korada Sagar
ID No.: 2016ECE112
Mr. D.V.S. Subhash
ID No.: 2016ECE150
ACKNOWLEDGEMENT
We would like to express our sincere thanks to our supervisor, Ms. Amrutha V Nair,
Assistant Professor, Department of Electronics and Communication Engineering, for
her morale-boosting support, meticulous guidance, co-operation and supervision
throughout this project work.
We would like to convey our sincere thanks to our project coordinators, Mr. G
Tirumala Vasu, Ms. Aruna M and Mr. Tanjir Alam, and to all other teaching and
non-teaching staff of the Department of Electronics and Communication Engineering
for providing us the required support throughout the project work.
We owe our heartiest gratitude to Dr. Shilpa Mehta, Head of the Department of
Electronics and Communication Engineering, for her encouragement during the
progress of this project work.
We would also like to extend our sincere thanks to Dr. C. Prabhakar Reddy, Dean,
School of Engineering, for sharing his valuable experience in completing the project work.
We would like to convey our sincere thanks to the management of our university for
providing us the required infrastructure within the college campus.
We would also like to thank all of our classmates and friends for their valuable
suggestions to complete our project work on time.
Last but not least, we would like to thank our parents for always standing beside us
and encouraging us all the time.
ABSTRACT
TABLE OF CONTENTS
Certificate .....................................................................................................................ii
DECLARATION.........................................................................................................iii
ACKNOWLEDGEMENT .......................................................................................... iv
ABSTRACT .................................................................................................................. v
TABLE OF CONTENTS ........................................................................................... vi
TABLE OF FIGURES ...............................................................................................vii
CHAPTER 1: INTRODUCTION ............................................................................... 2
1.1 PROJECT INTRODUCTION ............................................................................. 2
1.2 OBJECTIVE ........................................................................................................ 3
1.3 MOTIVATION .................................................................................................... 3
CHAPTER 2: LITERATURE SURVEY ................................................................... 5
2.1 Summary of the literature survey ...................................................................... 11
CHAPTER 3: EXISTING METHOD ...................................................................... 12
3.1 Morphological Operations Based Plate Localization ........................................ 13
3.2 Drawbacks of Morphological methods of number plate recognition ................ 15
CHAPTER 4: PROPOSED METHOD .................................................................... 16
4.1 Methodology ...................................................................................................... 17
4.2 Functions & Implementations ............................................................................ 19
CHAPTER 5: RESULTS ........................................................................................... 20
5.1 Simulation and Results ....................................................................................... 21
CHAPTER 6: CONCLUSION & FUTURE SCOPE .............................................. 24
6.1 Conclusion .......................................................................................................... 25
6.2 Future Scope ....................................................................................................... 25
References .................................................................................................................. 26
TABLE OF FIGURES
Fig. 3 SE Matrix 14
Fig. 4 Highlighting Plate Region Process 15
Fig. 7 Simulations and Results (a) Original image (b) Detecting the number plate
(c) Removing unneeded parts (d) Result after removal (e) Detecting words
(f) Displaying characters in order of detection (g) Number gets saved in a text
document 21
CHAPTER 1
INTRODUCTION
CHAPTER 1: INTRODUCTION
1.1 PROJECT INTRODUCTION
Most number plate localization algorithms merge several procedures, resulting in long
computational (and accordingly considerable execution) time; this may be reduced by
applying fewer and simpler algorithms. The results are highly dependent on the image
quality, since the reliability of the procedures degrades severely for complex, noisy
pictures that contain a lot of detail. Unfortunately, the various procedures barely offer a
remedy for this problem; precise camera adjustment is the only solution. This means that
the car must be photographed in such a way that the environment is excluded as much as
possible and the number plate appears as large as possible. Adjusting the size is especially
difficult in the case of fast vehicles, since the optimum moment of exposure can hardly be
guaranteed.
Number plate localization on the basis of edge finding: these algorithms rely on the
observation that number plates usually appear as high-contrast areas in the image
(black-and-white or black-and-yellow).
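To make this edge-based observation concrete, the following is a minimal MATLAB sketch
added here for illustration; it is not part of the original report. It assumes the Image
Processing Toolbox, and the file name, structuring-element size and aspect-ratio limits are
placeholder assumptions.

```matlab
% Minimal sketch of edge-density based plate localization (illustrative only).
% Assumes the Image Processing Toolbox; 'car.jpg' and all thresholds below
% are placeholder values, not the report's actual parameters.
I    = imread('car.jpg');
gray = rgb2gray(I);

% Plate characters produce a dense cluster of vertical edges, so keep only
% the vertical Sobel edges.
edges = edge(gray, 'sobel', [], 'vertical');

% Smear the edges horizontally so the characters of one plate merge into a
% single connected blob, then drop small noise blobs.
smeared = imclose(edges, strel('rectangle', [3 25]));
smeared = bwareaopen(smeared, 500);

% Keep candidate regions whose bounding box has a plate-like aspect ratio.
stats = regionprops(smeared, 'BoundingBox');
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;                 % [x y width height]
    if bb(3)/bb(4) > 2 && bb(3)/bb(4) < 6      % plates are wide and short
        figure, imshow(imcrop(gray, bb)), title('Candidate plate region');
    end
end
```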
Automatic Number Plate Recognition (ANPR) was invented in 1976 at the Police
Scientific Development Branch in the UK. However, it gained much interest during the
last decade along with the improvement of digital cameras and the increase in
computational capacity. In essence, it consists of a camera or frame grabber that can grab
an image, find the location of the number plate in the image, and then extract the
characters for a character recognition tool to translate the pixels into machine-readable
characters. ANPR can be used in many areas, from speed enforcement and toll collection
to the management of parking lots. As a first step, the original colour car image is
converted to a grayscale image. ANPR can also be used to detect and prevent a wide
range of criminal activities and for security control of highly restricted areas such as
military zones or the areas around top government offices.
The system is computationally inexpensive compared to other ANPR systems. Regarding
robustness, earlier methods use either feature-based approaches such as edge detection or
the Hough transform, which are computationally expensive, or artificial neural networks,
which require large amounts of training data. The presented ANPR system is intended to
be lightweight so that it can run in real time and recognizes Sindh standard number plates
under normal conditions. The ANPR system works in three steps: the first step is the
detection and capture of a vehicle image; the second step is the detection and extraction
of the number plate in the image; the third step uses image segmentation techniques to
obtain the individual characters and optical character recognition (OCR) to recognize
each character with the help of a database stored for every alphanumeric character.
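As a rough illustration of this three-step structure (not the report's actual code), the
skeleton below sketches the pipeline in MATLAB; every helper function name and the
input file are hypothetical placeholders to be filled in by the stages detailed in
Chapters 3-5.

```matlab
% Three-step ANPR pipeline skeleton (illustrative; the helper functions are
% hypothetical placeholders, not functions defined in this report).
frame = imread('vehicle_frame.jpg');          % step 1: captured vehicle image

plate = locate_number_plate(frame);           % step 2: detect and extract the plate region

glyphs   = segment_characters(plate);         % step 3a: split the plate into characters
plateStr = '';
for k = 1:numel(glyphs)
    plateStr(end+1) = recognize_character(glyphs{k});  % step 3b: OCR against a stored
end                                                    %          alphanumeric database
disp(['Recognized plate: ' plateStr]);
```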
1.2 OBJECTIVE
• To remove defects pertaining to camera orientations, noise in images and HSL factors
1.3 MOTIVATION
• Vehicle management is one of the most pressing challenges in urban areas. Unlike other
countries, India, with its population of over one billion people, has a unique set of needs
for ANPR. The main uses of ANPR are in highway monitoring, parking management,
and neighborhood law enforcement security.
• In India there is one road death every four minutes, with most of them occurring due to
over-speeding. ANPR, with the help of radar gun technology, can be used to monitor the
vehicles' average speed and can identify the vehicles that exceed the speed limit. In this
case, a fine ticket can be generated automatically by calculating the average speed over
the distance between two cameras. This helps to maintain law and order which, in turn,
can minimize the number of road casualties.
• ANPR provides an effective solution for parking management. Vehicles with registered
plates can automatically enter parking areas, while non-registered vehicles are charged
based on their check-in and check-out times. The car's number plate can be directly
linked with the owner's mobile phone through the database provided. ANPR can also
support a pre-booking and pre-payment platform for parking.
• In India, 200,000 cars are stolen per year. This number can be reduced if proper steps are
taken and an ANPR system is used to track cars, so that when vehicles are stolen, law
enforcement can identify when and where they were taken and the route followed. This
can help bring justice swiftly to such a vast nation.
CHAPTER 2
LITERATURE SURVEY
CHAPTER 2: LITERATURE SURVEY
In [1], an OCR system is developed to recognize handwritten Kannada letters and numerals.
The Discrete Wavelet Transform is used because it is easy to implement and consumes little
computational time; the Symlet wavelet family is used. The symlets are nearly symmetrical
wavelets proposed by Daubechies as modifications to the db family, and both wavelet
families have the same properties. Artificial neural networks are used in the classification
stage. Accuracies of 91% and 97.60% are obtained for handwritten Kannada characters and
numerals, respectively.
[2] Huei-Yung Lin, Chin-Yu Hsu, "Optical Character Recognition with Fast Training Neural
Network", pp. 1458-1461, IEEE 2016.
In [2], an OCR system with a fast-training neural network is developed. In this work, each
recognition stage is assigned a training period of short duration. The training data are
subdivided into several groups; this division is based on criteria such as symmetry, Euler
number features and so on. A neural network is applied during the learning stage and helps
the system learn at a faster rate. In this proposed method no preprocessing is done. Each
character is passed through the neural network, and the best one or two results are selected
for the next stage. Many OCR techniques have problems classifying characters at the
preprocessing stage, which can be overcome by using the above-mentioned approach. After
the first stage, the second phase is a similarity check; this helps to eliminate similar words.
The similarity check is carried out by comparing the pixels of one object with the pixels of
another. Once the recognition results from all neural networks are available, the system
collects them and compares them with the input character. A weighting factor is then used to
obtain the final result: the output with the largest weighted score is selected as the final
recognition result. In this proposed paper, the recognition rate of this technique is reported to
have higher precision than the conventional neural-network approach.
[3] D. Padhi and D. Senapati, "Zone Centroid Distance and Standard Deviation Based Feature
Matrix for Odia Handwritten Character Recognition", International Conference on Frontiers
of Intelligent Computing: Theory and Applications (FICTA), pp. 649-658, 2005.
D. Padhi et al. in [3] performed a two-way approach for the recognition of printed character
scripts. Their feature matrix contains the empirical values of the standard deviation and the
zone-based average centroid distance of images. They have listed two scenarios of
classification, one for similar characters and the other for distinct characters.
[4] Arun K. Pujari, Chandana Mitra and Sagarika Mishra, "A New Parallel Thinning
Algorithm with Stroke Correction for Odia Characters", Advanced Computing, Networking
and Informatics, Volume 1, Smart Innovation, Systems and Technologies 27, Springer
International Publishing, Switzerland, 2014.
Several classification algorithms were applied to thinned characters by Arun K. Pujari et al.
in [4]. All the calculation was done to skeletonise character images in order to perform stroke
preservation. They used 10 different algorithms to maintain the structural properties of
numerals, such as connectivity and topology.
[5] C. Vasantha Lakshmi, Ritu Jain and C. Patvardhan, "OCR of Printed Telugu Text with
High Recognition Accuracies," Springer.
An application of the Tesseract OCR engine to printed Telugu documents was reported by
C. Vasantha Lakshmi, Ritu Jain and C. Patvardhan [5].
[6] Kalyan S. Dash, N.B. Puhan and Ganapati Panda, "BESAC: Binary External Symmetry
Axis Constellation for unconstrained handwritten character recognition", Pattern Recognition
Letters, June 25, 2016.
[7] Swapnil Desai, Ashima Singh, "Optical character recognition using template matching and
back propagation algorithm", IEEE 2016.
In [7], optical character recognition using template matching and a back-propagation
algorithm is implemented. Template matching is one of the most common methods used in
optical character recognition; it is mainly used as a feature-extraction technique, and its
simplicity of implementation makes it popular. Template matching is also referred to as
correlation. In this method the pixel matrix of each individual character is used, which makes
it suitable for feature extraction. A correlation function R is applied to the test data set and
the result is stored in the database. The character with the highest correlation value is selected
as the best match for that character [7]. The back-propagation algorithm uses a reverse
mechanism to find the error and reduces it by propagating it backwards; it is based on error
correction. A problem arises after grouping: unidentified letters may remain, and these will
appear as characters that lead to erroneous results. Character recognition using this method
gives a high accuracy rate.
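To make the correlation idea concrete, here is a minimal MATLAB sketch of
correlation-based template matching (an illustration of the general technique, not the exact
method of [7]); the template folder, file names, image size and character set are assumptions.

```matlab
% Correlation-based template matching (illustrative sketch).
% Assumes a folder 'templates/' of pre-binarized, single-channel character
% images; the folder name, file names and 42x24 size are placeholders.
chars     = ['0':'9' 'A':'Z'];                            % candidate character classes
candidate = imresize(imread('char.png') > 128, [42 24]);  % isolated character, binarized

bestChar  = ' ';
bestScore = -Inf;
for k = 1:numel(chars)
    tmpl  = imresize(imread(fullfile('templates', [chars(k) '.png'])) > 128, [42 24]);
    score = corr2(double(candidate), double(tmpl));       % correlation coefficient R
    if score > bestScore                                   % keep the best-matching template
        bestScore = score;
        bestChar  = chars(k);
    end
end
fprintf('Recognized character: %c (R = %.2f)\n', bestChar, bestScore);
```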
[8] Abdullah-Al-Mamun, Tanjina Alam, "An approach to empirical Optical Character
Recognition paradigm using Multi-Layer Perceptron Neural Network", 18th International
Conference on Computer and Information Technology, pp. 132-137, IEEE 2015.
In [8], optical character recognition is achieved using a multilayer perceptron neural
network. As usual, the image is acquired first, and then it is preprocessed and segmented.
During segmentation the character lines are separated; enumeration of character lines in a
character image is essential in delimiting the bounds within which detection can proceed
[8]. The next step in segmentation is to separate the characters. Once the characters are
separated, the features are extracted. To implement the feature-extraction process, an
image-to-matrix mapping is used, which converts the images into 2D matrices. The next
step is to train the system. Training gives the system the capability to make decisions and
perform the task efficiently, and it yields better results in an unpredicted environment. The
proposed system uses the multi-layer perceptron learning algorithm. This method uses a
pyramid-like structure for learning and can be utilized not only for learning but also for
classification. Applying the learning algorithm within the multilayer network architecture,
the synaptic weights and thresholds are updated in such a way that the
classification/recognition task can be performed efficiently [8]. These synaptic weights are
important for the iterations; during the iterations the weights get updated. To recognize an
object, its feature data are fed to the network input layer, which produces an output vector.
The error is then calculated from the output and the target output. By analyzing the output,
one can determine the character recognition rate. The proposed system achieves 91.53%
accuracy for isolated characters and 80.65% accuracy for the sentential case.
[9].Kumar R., Singh A., “Algorithm to Detect and Segment Gurmukhi Handwritten Text into
Lines, Words and Characters”, IACSIT International Journal of Engineering and Technology,
Vol.3, No.4, 2011.
Kumar and Singh (2011) [9]: The algorithm was tested on different documents and the
results obtained were encouraging; lines were detected with great accuracy. Lines that had
some characters in the lower zone were interpreted almost correctly. To obtain the
characters, the coordinates of the detected lines and words are used. The character
segmentation process was divided into two parts: (i) obtaining the segmented region R,
and (ii) checking whether R contains a meaningful symbol or not. This acts as a reverse
approach to ensure correct segmentation, i.e. if R does not contain a meaningful symbol
then R is readjusted. After close analysis, the authors found that this is due to the shapes
of the characters: certain characters in the Gurmukhi script are combined in nature. But
overall, the results were good and encouraging.
[10] Hamanaka, M., Yamada, K., Tsukumo, J., "On-line Japanese character recognition
experiments by an off-line method based on normalization-cooperated feature
extraction", Proceedings of the Second International Conference on Document Analysis and
Recognition, 1993, pp. 204-207.
Hamanaka et al. (1993) [10] suggested a method that is effective in the recognition of
Japanese characters. The conventional methods used until then restricted the order and
number of strokes. The offline methodology removes these constraints based on pattern
matching of the orientation of feature patterns. It can be improved by enhancements in
nonlinear pattern matching, nonlinear shape normalization, and the normalization-cooperated
feature extraction method. The recognition rate attained was 95.1%.
Kang et al. (2004) [11] proposed a system in which the strokes of characters and the
relationships between characters are represented stochastically. A character/glyph is
characterized by a multivariate random variable (RV) over its components, and its
probability distribution is learned from a training data set. The character is resolved into
factors and is approximated by a set of lower-order probability distributions. Based on the
method put forward by the authors, a handwritten Hangul character recognition system was
developed which gives better results.
[12] Xingiao Lv, Dongshan Huang, Enming Song, Ping Li, Chunshan Wu, "One radical-based
on-line character recognition (OLCCR) system using support vector machine for recognition
of radicals", 1st International Conference on Bioinformatics and Biomedical Engineering,
2007, pp. 558-561.
Huang et al. (2007) [12] put forward a method of radical-based online recognition of Chinese
handwritten characters using a support vector machine (SVM). The input characters are
pre-processed, segmented and their features extracted. Then, in order to determine the type of
pattern of the glyph, the midpoint of each segment is projected in the vertical and horizontal
directions. The glyph is thus disintegrated into significant sub-structures. Every substructure
is a radical and is split into 8 subareas for all four directions, so that the statistical feature of
the number of pixels in each subarea is suited to recognizing the radical using the SVM. The
recognition of a Chinese character is converted into a series of radical matchings between the
input glyph and reference patterns. The front or rear radical is utilized in a coarse
classification stage to reduce the number of candidate glyphs. The structural and statistical
features of characters are also taken into account in this technique. This method is
independent of the stroke number and stroke order of characters.
[13] Wang Yutao, Qin Tingting, Tian Ruixia, Yang Gang, "Recognition of license plate
character based on wavelet transform and generalized regression neural network", Control
and Decision Conference (CCDC), 2012 24th Chinese, pp. 1881-1885.
Yutao et al. (2012) [13] proposed a hybrid system based on a Generalized Regression Neural
Network (GRNN) which employs the wavelet transform. To extract features from the input
Chinese characters, a wavelet-transform-based block projection is adopted. In order to
minimize the dimension of the feature vector, a clustering algorithm is further introduced, and
to identify similar characters an approach of regional recognition is also introduced in this
system. A GRNN with significant non-linear mapping ability and improved fault-tolerance is
developed as the character classifier. The classification algorithm is more robust than the
algorithms introduced so far.
Patel et al. (2013) [14] present a method of handwritten character recognition. To enhance
the accuracy of recognition at the pixel level, this method utilizes the compression capability
of the discrete wavelet transform, the computational capability of the Euclidean distance
metric and the learning capability of an artificial neural network. The problem of handwritten
character recognition is addressed with a multi-resolution technique using the discrete
wavelet transform and a learning rule through the artificial neural network. Handwritten
characters are categorized into 26 pattern classes based on apt properties. During
pre-processing, each character is captured within a rectangular box and then resized to a
threshold size. The learning rule of the artificial neural network is used for computing the
weight matrix of each class, and recognition scores are then generated by fusing the unknown
input pattern vector with the weight matrices of all the classes. The maximum value of the
score corresponds to the identified input character. A high recognition rate was achieved with
this method.
Dassanyake et al. (2013) [15] proposed a handwritten character recognition system
implemented with the capability of extracting the content of an image. The conversion runs
as a background process without any involvement of the user, who can edit the converted
text in the Panhinda editor once the conversion is complete. The work describes techniques
for enhancing the quality of the image, character segmentation, character recognition and
digital dictionaries. Noise removal and the handling of lighting conditions and angle effects
are done in the preprocessing phase. Character segmentation is performed after obtaining a
binarized image, using the horizontal and vertical projection profile method. For recognizing
the characters, the support vector machine technique was used. Error correction is done
using a combined noisy-channel and natural-language model.
This paper [16] presents a recognition method in which the vehicle plate image is obtained by
digital cameras and the image is processed to get the number plate information. A rear image
of a vehicle is captured and processed using various algorithms. Further, the authors plan to
study the characteristics involved with the automatic number plate system for better
performance.
2.1 Summary of the literature survey
OCR is one of the most popular and challenging topics in pattern recognition and has been a
subject of interest for many years. OCR is defined as the conversion of scanned images of
typed, handwritten or printed text into machine-encoded text. This chapter reviewed various
optical character recognition techniques that are used for character recognition.
CHAPTER 3
EXISTING METHOD
CHAPTER 3: EXISTING METHODOLOGY
The license plate is a pattern with high variations of contrast. This feature is used to locate
the plate and is robust to changes in lighting conditions and viewing orientation.
3.1 Morphological Operations Based Plate Localization
The open and close morphological operations are used to extract the contrast features
within the plate [4]. This is a relatively stable method when subjected to different image
alterations or conditions.
The algorithm consists of three major stages:
• Morphological operations for extracting plate features;
• Selection of candidate regions;
• Validation of the plate region.
The greyscale image is the input to the following block, where the first morphological
opening operation is applied. This operation is an erosion followed by a dilation and is
used to eliminate small and narrow parts of an image.
Fig 3: SE Matrix
where SE is a matrix filled entirely with 1's and of size 4×30. The centre of the matrix is
called the 'origin'. The choice of the size is based purely on the resolution of the original
image and the plate region. According to the rule of opening, this SE can effectively
erase the plate region while keeping the non-plate region of the greyscale image, so the
result obtained after performing the opening operation is essentially the background of
the image. The next stage is to subtract this background image from the original
greyscale image, so that the plate region is highlighted.
Fig 4: Highlighting Plate Region Process
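As a hedged illustration of the opening-and-subtraction step just described (a sketch, not the
report's own MATLAB program), the following assumes the Image Processing Toolbox; the
input file name is a placeholder.

```matlab
% Opening with the 4x30 SE of Fig. 3, then background subtraction
% (illustrative sketch; 'car.jpg' is a placeholder input).
gray = rgb2gray(imread('car.jpg'));

se = strel('rectangle', [4 30]);    % SE filled with 1's, size 4x30, origin at centre

% Opening (erosion followed by dilation) erases structures narrower than the
% SE -- i.e. the plate characters -- leaving an estimate of the background.
background = imopen(gray, se);

% Subtracting the background from the original greyscale image highlights
% the plate region (this is equivalent to a top-hat transform).
highlighted = imsubtract(gray, background);

figure, imshow(highlighted), title('Plate region highlighted');
```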
The drawback of the above solution (the edge-finding methodology) is that, even after
filtering, additional areas of high intensity appear besides the number plate. If the image
contains a lot of details and edges (for example, a complex background), further such
areas arise. As a result, the SFR curve exhibits a smaller increment at the number plate,
and the edges in the surrounding areas may sometimes be more dominant.
CHAPTER 4:
PROPOSED METHOD
CHAPTER 4: PROPOSED METHODOLOGY
The proposed method uses a scan line evaluation and averaging method to localize the
number plate, followed by a border removal mechanism combined with character mending
and approximation of character height to extract the number plate characters. Finally, a
template matching approach is used to recognize the characters. A graphical user interface
has been created, and the algorithm has been tested successfully on a variety of real images,
with both single-line and double-line plates. The sample results obtained from testing with
various images are also detailed.
4.1 Methodology
a. Number plate localization
i. This is done to remove the unwanted background details, thereby focusing on the
essential details in the image.
ii. Applying a top-hat filter to the whole image followed by a multiscale region search has
been described.
iii. Sobel operators are used to detect the vertical edges and extract the license plate
(a sketch of the top-hat and vertical-edge steps follows this list).
iv. A technique using edge detection and Hough transforms to detect the vertical and
horizontal edges, making use of the rectangular shape of the license plate, has been
presented. Sorin developed an approach that analyzes the input image, looking for areas with
high contrast gradients at a given scale of about 15 pixels, followed by histogram stretching.
b. Character extraction
c. Character recognition
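The sketch below illustrates the top-hat filtering and vertical-edge detection mentioned in
items ii and iii (an assumption-laden illustration, not the report's code); the Image Processing
Toolbox, the input file and the structuring-element size are assumptions.

```matlab
% Top-hat filtering and vertical Sobel edges (illustrative sketch).
gray = rgb2gray(imread('car.jpg'));          % placeholder input image

% Top-hat: original minus its opening, emphasising small bright details
% such as plate characters against a darker background.
se     = strel('rectangle', [10 40]);        % assumed size, roughly plate-character scale
tophat = imtophat(gray, se);

% Vertical Sobel edges: plate characters yield a dense cluster of vertical
% edges, which marks candidate plate regions.
vedges = edge(tophat, 'sobel', [], 'vertical');

figure
subplot(1,2,1), imshow(tophat), title('Top-hat filtered image');
subplot(1,2,2), imshow(vedges), title('Vertical Sobel edges');
```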
4.2 FUNCTIONS & IMPLEMENTATION
i. As the image may have complex background details, number plate localization is the
central issue that demands great attention. A scan line evaluation and averaging method is
used here to accomplish this task. The image obtained from the sensor is filtered and
binarized, and the set of connected components is segmented.
ii. To extract the characters from the localized number plate, the image obtained from the
previous step is complemented. The contents of this image will be either trivial noise
components or characters to be identified. A border removal mechanism followed by an
approximation of character height is performed to extract the characters (a sketch of this
step follows).
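A minimal sketch of this character-extraction step (complementing, border removal, and
filtering connected components by approximate character height) is given below; it assumes
the Image Processing Toolbox, that 'plate' is the grayscale plate image produced by the
localization step, and placeholder height limits.

```matlab
% Character extraction by border removal and character-height approximation
% (illustrative sketch; 'plate' and the height limits are assumptions).
bw = imcomplement(imbinarize(plate));   % complement: characters become white blobs
bw = imclearborder(bw);                 % remove the plate border / frame
bw = bwareaopen(bw, 50);                % drop trivial noise components

% Keep only blobs whose height is close to the expected character height.
stats     = regionprops(bw, 'BoundingBox');
plateH    = size(bw, 1);
charBoxes = {};
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;                        % [x y width height]
    if bb(4) > 0.4*plateH && bb(4) < 0.95*plateH      % plausible character height
        charBoxes{end+1} = imcrop(bw, bb);            %#ok<AGROW> one image per character
    end
end
fprintf('Extracted %d candidate characters\n', numel(charBoxes));
```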
ADVANTAGES:
2. For a damaged plate region, morphological operations can be used to eliminate the damage.
3. The redundancy present in the captured images is selectively removed without affecting
the details.
CHAPTER 5:
RESULTS
CHAPTER 5: RESULTS
5.1 Simulation and Results
To realize the above-mentioned ANPR system, a digital simulation was performed in
MATLAB. The given video is converted into images; the existing and proposed simulations
use a fairly involved program to extract the required number from the selected image in
MATLAB, making the process more reliable and efficient.
Segmentation of characters: In this stage, individual characters are segmented from the
region of interest (ROI) extracted from the captured image, i.e. the number plate text area.
Character segmentation first crops and separates the lines from the image rows; the same
process is then repeated on the columns of each row to separate each character. The figure
shows the extracted ROI, in which the centroids of each character are found to determine the
number of characters present in the ROI (a sketch of this segmentation step is given below).
Accuracy increases with a high-resolution camera that can capture clear images of the
vehicle. The OCR method is sensitive to misalignment and to different character sizes, so
different kinds of templates have to be created for different RTO specifications. Statistical
analysis can also be used to estimate the probability of detection and recognition of the
vehicle number plate. At present there are certain limits on parameters such as the speed of
the vehicle, the script on the number plate, and skew in the image, which can be removed by
enhancing the algorithms further.
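The following MATLAB sketch illustrates the row-then-column segmentation and centroid
counting described above (not the report's exact program); it assumes the Image Processing
Toolbox and that 'plateBW' is an already binarized, single-line plate image with white
characters on a black background.

```matlab
% Character segmentation by row and column projections, with centroid
% counting (illustrative sketch; 'plateBW' is an assumed binary plate image).

% Crop the text line using row-wise pixel sums (rows containing characters
% have non-zero sums); a single-line plate is assumed here.
rowSum   = sum(plateBW, 2);
lineRows = find(rowSum > 0);
lineImg  = plateBW(min(lineRows):max(lineRows), :);

% Repeat the idea on the columns to cut the line into individual characters.
colSum = sum(lineImg, 1);
inChar = colSum > 0;
d      = diff([0 inChar 0]);
starts = find(d == 1);
stops  = find(d == -1) - 1;

% Centroids give the number of characters present in the ROI.
stats = regionprops(lineImg, 'Centroid');
fprintf('Characters found in ROI: %d\n', numel(stats));

% Each crop feeds the template-matching stage, after which the recognized
% string can be written to a text document (as in Fig. 7(g)).
for k = 1:numel(starts)
    charImg = lineImg(:, starts(k):stops(k));
    % ... recognize charImg, e.g. by correlation with stored templates ...
end
```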
Fig 7(a): Original image
Fig 7(b): Detecting the number plate
Fig 7(c): Removing unneeded parts
Fig 7(d): Result after removal
Fig 7(e): Detecting words
Fig 7(f): Displaying characters in order of detection
Fig 7(g): Number gets saved in a text document
CHAPTER 6:
Conclusion & Future Scope
CHAPTER 6: CONCLUSION & FUTURE SCOPE
6.1 Conclusion
Today, advances in technology have taken Automatic Number Plate Recognition (ANPR)
systems from hard-to-set-up, limited, expensive, fixed-base applications to simple mobile
ones in which a "point and shoot" method can be used. This is possible because of software
that runs on cheaper, PC-based, non-specialist hardware, with no need to pre-define the
direction, angle, speed and size at which the plate passes through the camera's field of view.
Smaller cameras that can read license plates at high speed, along with smaller, more durable
processors that fit in police vehicles, allow law enforcement officers to patrol daily with the
benefit of license plate recognition in real time.
References
1. Du, S., Ibrahim, M., & Badawy, W. (2013, February). Automatic License Plate
Recognition (ALPR): A State-of-the-Art Review. IEEE Transactions on Circuits and
Systems for Video Technology, 23(2), 311-325. doi:10.1109/TCSVT.2012.2203741
2. Yoh-Han Pao, "Adaptive Pattern Recognition and Neural Networks." Pearson Education
Asia, 2009.
3. F. Martin, M. Garcia and J. L. Alba, "New methods for Automatic Reading of VLP's
(Vehicle License Plates)," in Proc. IASTED Int. Conf. SPPRA, 2002; Optasia Systems
Pte Ltd, "The World Leader in License Plate Recognition Technology". Sourced from:
www.singaporegateway.com/optasia, accessed 22 November 2008.
4. Francesca O., "Experiments on a License Plate Recognition System." DISI, Università
degli Studi di Genova, Technical Report DISI-TR-06-17 (2007).
5. Lubkowski, P., & Laskowski, D. (2017). Assessment of Quality of Identification
of Data in Systems of Automatic License Plate Recognition. In J. Mikulski
(Ed.), Smart Solutions in Today's Transport. TST 2017. Communications in
Computer and Information Science (vol. 715). Cham: Springer.
doi:10.1007/978-3-319-66251-0_39
6. Lee J.W., Kweon I.S., "Automatic number-plate recognition: neural network
approach." IEEE Vehicle Navigation and Information Systems Conference,
vol. 3.1, no. 12, pp. 99-101 (1998).
7. Shridhar, M.; Miller, J. W. V.; Houle, G.; Bijnagte, L., "Recognition of license
plate images: issues and perspectives," Document Analysis and Recognition, 1999.
ICDAR '99. Proceedings of the Fifth International Conference on, pp. 17-20,
20-22 Sep 1999.
8. Eun Ryung Lee; Pyeoung Kee Kim; Hang Joon Kim, "Automatic recognition of
a car license plate using colour image processing," Image Processing, 1994.
9. Chang, S.-L., Chen, L.-S., Chung, Y.-C., & Chen, S.-W. (2004, March). Automatic license
plate recognition. IEEE Transactions on Intelligent Transport Systems, 5(1), 42-53.
doi:10.1109/TITS.2004.825086
10. Du, S., Ibrahim, M., & Badawy, W. (2013, February). Automatic License Plate
Recognition (ALPR): A State-of-the-Art Review. IEEE Transactions on Circuits and
Systems for Video Technology, 23(2), 311-325. doi:10.1109/TCSVT.2012.2203741
11. Lubkowski, P., & Laskowski, D. (2017). Assessment of Quality of Identification of Data
in Systems of Automatic License Plate Recognition. In J. Mikulski (Ed.), Smart Solutions in
Today's Transport. TST 2017. Communications in Computer and Information Science
(vol. 715). Cham: Springer. doi:10.1007/978-3-319-66251-0_39