
How to Cite:

Hira, S., & Lande, S. (2022). Detection of fruit ripeness using image
processing. International Journal of Health Sciences, 6(S6), 3874–3886.
https://doi.org/10.53730/ijhs.v6nS6.10146

Detection of fruit ripeness using image processing

Swati Hira
Assistant Professor, CSE, RCOEM, Nagpur, India, 440013

Simran Lande
Student, CSE, RCOEM, Nagpur, India, 440013

Abstract---Cultivation of fruit crops plays a very important role in the
prosperity of any nation. Productive growth and a high yield of fruit are
necessary for the agricultural industry. For the sake of consumers' health, it
is better for agricultural agents to check the ripeness of fruits naturally and
organically, so that people can have fresh and natural fruit at their
doorsteps. In this paper we study the ripeness of fruit using a new,
high-quality dataset of images containing many types of fruit. We also present
the results of experiments in which a digital image processing model is trained
to detect fruits. We discuss how, during ripening, a well-coordinated series of
changes in the composition of the fruit leads from the unripe to the ripe
condition, giving obvious changes in colour, texture, taste, time to ripen and
aroma that are readily perceived by the senses.

Keywords---digital image processing, image detection, VGG16 model, fruits dataset, image classification.

Introduction

The making of a fruit is a developmental process unique to fruits. It requires
an elaborate network of interacting genes and signalling pathways. In fleshy
fruit it involves several distinct stages, namely fruit set, fruit development,
fruit classification, fruit image detection and fruit ripening. Ripening has
received the most attention from people working in this area, as it is the
process that activates a whole set of biochemical pathways that make the fruit
more attractive, desirable and edible for consumers. In recent years, the
scientific goal has been to reveal the mechanisms by which nutritional and
sensory qualities develop during fruit development and ripening.


The classification of fruits is useful in supermarkets, where the price of the
fruits purchased by a client can be determined automatically. Fruit
classification can also be utilized in computer vision for the automatic
sorting of fruits from a set consisting of different kinds of fruit. Picking
out different kinds of fruit is a routine task in supermarkets, where the
seller must identify not only the species of a particular fruit (i.e. orange,
banana, apple, grapes) but also its variety in order to determine its price. It
is a hard task for the seller to pick fruit that is right on the inside and
tastes good as well. This problem has been addressed by packaging fruits, but
most of the time consumers want to pick the right fruit themselves, which
cannot be packaged and which must be weighed.

In this paper we propose a new dataset of images containing popular fruits. The
dataset is named Fruits-360 and can be downloaded from the address pointed to
by reference [8]. Fruits have always occupied an important place in human
nutrition, owing to the properties they offer, the amount of supply and the
ease of purchasing the product. Currently, the processes for selecting
harvested fruits for sale require expert inspection or complex systems,
procedures that are costly and need controlled environments. In the best cases,
the selected fruits are biologically mature. Classifying the degree of maturity
of fruits requires the use of complex systems which, most of the time, are not
within the reach of consumers who do not have clear knowledge of the
characteristics a fruit must have in order to be categorized by maturity,
shape, fuzziness, or whether the fruit is rotten. In this paper we describe the
use of a convolutional neural network based on the VGG16 model for estimating
the ripeness of the fruits considered.

Related work

In this section we review several previous attempts at detecting fruit ripeness
using digital image processing. In [1] the authors discussed different fruit
ripening agents along with their ripening mechanisms and possible health
hazards. It is important to check chemical criteria, mechanisms, and effects on
fruit quality and nutritional value. They found that ethylene is the major
ripening agent produced naturally within fruits to instigate the ripening
process. Artificial fruit ripening is a complex issue, especially in developing
countries. Therefore, in this work we consider fruits that are ripening or yet
to ripen, so that buyers can have organically ripened fruits without toxic
chemicals harming their bodies. In [2] the authors investigated the recognition
of ripe and unripe fruits. The applied procedures were coded and run in MATLAB.
The descriptors used in that work change depending on the surroundings: the
angle of the camera, the position of the sun, and the time at which the picture
of the fruit is taken can all increase or decrease the recognition percentage.
If the position of the camera is changed and the image is taken from another
perspective, it may be possible to detect each fruit separately.

The authors in [3] evaluated a number of machine vision techniques for
classifying selected citrus fruits based on colour analysis of single-view
fruit images. Algorithms were developed to classify the chosen citrus fruits,
such as orange, sweet lime and lemon, based on single-view fruit images, and
the fruits were categorized into different classes based on external parameters
such as colour and maturity. The single-view fruit images were analyzed to
extract the hue and classified using methods such as colour distance, LDA
(linear discriminant analysis) and PDF (probability distribution function). The
results clearly indicate that either hue alone or hue and saturation together
are sufficient for colour classification, whereas the use of saturation alone
does not give satisfactory results. For the colour distance method, both hue
and saturation were used for colour classification.

Non-destructive maturity detection of tomatoes is the main objective of the
work in [4]. This is done using deep transfer learning, an emerging computer
vision technique. Tomatoes were automatically classified into three maturity
classes (immature, partially mature and mature) using this method. Several
pre-trained CNN transfer learning models, such as VGG16, VGG19, Inception V3,
ResNet101 and ResNet152, were used for the targeted classification task; VGG19
gives 97.37% classification accuracy. It was thus found that transfer learning
is a viable solution to image classification and may be adopted in food and
agriculture for solving classification and recognition problems economically
and accurately. Size grading of agricultural products is another important
processing operation that can be automated using machine learning; since size
grading using machine learning is a complex process, the authors left this task
for future investigation.

In [5] the authors proposed a general approach for estimating the ripeness
level of a fruit without touching it. Two methods are used for this purpose:
colour image segmentation and a fuzzy logic technique. Four images of one fruit
are captured from four directions and the desired region is separated from each
image using colour image segmentation. This approach can operate directly on
the RGB colour space without the need for a colour space transformation.
Moreover, the system can be applied to different applications without any
difficulty by merely changing the values of the parameters a, b and c. The
technique is used to detect the ripeness level of fruits and vegetables based
on colour. In some cases, however, the colour ranges do not give good accuracy:
some RGB values lie in overlapping regions of the red, green and blue mean
values, and hence the method sometimes gives unexpected results.

In [6] an affordable method combining image processing with an artificial
neural network (ANN) is used to predict cherry and strawberry colour
parameters. The authors identified pre-mature, early-mature, mature and
over-mature fruits based on quality; the image processing, implemented in
MATLAB, achieved 63% accuracy for cherries and 60% accuracy for strawberries.
Using a thresholding technique, cherries and strawberries at different stages
of ripeness were segmented successfully. The colour measuring technique
discussed there, which uses MATLAB for image analysis, provides a more adaptive
way to measure the colour of many fruits than traditional, expensive
colour-measuring instruments. The low cost of the system is what makes it
useful, since its accuracy is modest compared with other systems.

The authors of [7] introduced a fusion approach which is validated using a
multi-class fruit-and-vegetable categorization task in a semi-controlled
environment, such as a distribution centre or a supermarket cashier. The
results show that the solution is able to reduce the classification error by up
to fifteen percentage points with respect to the baseline. It treats
multi-class classification as a collection of binary problems in such a way
that one can assemble diverse features and classifier approaches custom-tailored
to parts of the problem. Whether more complex approaches such as CCVs (colour
coherence vectors), BIC (border/interior) descriptors or appearance-based
descriptors give good results for this kind of classification remains an open
problem. It would be unfair to conclude that they do not help in the
classification, since their success depends heavily on their patch
representation; such approaches are computationally demanding and may not be
advisable in some scenarios, and the study covered only a few categories of
fruit. By studying the above research papers we have seen that each has some
limitations, such as the presence of chemicals in the fruit ripening process
which can be harmful to the human body, or the use of slow processing tools and
old methods. In our project we have tried to overcome some of these problems
and to give a better approach for detecting fruit ripeness.

Methodology

CNN

A convolutional neural network (CNN) is a type of artificial neural network
used for image recognition and processing that is specifically designed to
process pixel data. CNNs are powerful image processing and artificial
intelligence (AI) systems that use deep learning to perform both generative and
descriptive tasks, often using machine vision features such as image and video
recognition, together with recommender systems and natural language processing
(NLP). A CNN uses a system much like a multilayer perceptron that has been
designed for reduced processing requirements. The layers of a CNN consist of an
input layer, an output layer and hidden layers that include multiple
convolutional layers, pooling layers, fully connected layers and normalization
layers.
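
As a minimal sketch of how the layer types listed above fit together (an
illustrative example, not the exact network used in this paper; the filter
counts are assumptions, and the 131-class output matches the Fruits-360 dataset
described later), a small CNN can be assembled in Keras as follows:

    # Minimal CNN sketch in Keras (illustrative only; not this paper's exact architecture).
    # Assumes 100x100 RGB inputs and 131 output classes, matching the Fruits-360 dataset.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(100, 100, 3)),          # input layer
        layers.Conv2D(32, 3, activation="relu"),    # convolutional layer
        layers.MaxPooling2D(),                      # pooling layer
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),                # normalization layer
        layers.Flatten(),
        layers.Dense(128, activation="relu"),       # fully connected layer
        layers.Dense(131, activation="softmax"),    # output layer
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()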

Image classification

Image classification using CNNs forms a big part of machine learning
experiments. Thanks to CNNs and the capabilities they provide, image
classification is now widely used in a variety of applications, from Facebook
picture tagging and Amazon product recommendations to healthcare imagery and
self-driving cars. The reason CNNs are so popular is that they require little
pre-processing: they can read 2D images by applying learned filters in a way
that other conventional algorithms cannot. Below we delve deeper into how image
classification using a CNN works.

VGG16 model

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is an annual
computer vision competition. Each year, teams compete on two tasks. The first
is to detect objects within an image coming from 200 classes, which is termed
object localization. The second is to classify images, each labelled with one
of 1000 categories, which is termed image classification. The VGG16 model won
first and second place respectively in these categories in the 2014 ILSVRC
challenge. The VGG16 model achieves 92.7% top-5 test accuracy on this dataset,
which contains 14 million images belonging to 1000 classes.

Figure 1. VGG16 Architecture [9]

The dataset contains images of a fixed size of 224×224 with RGB channels, so we
have a tensor of shape (224, 224, 3) as our input. The CNN model processes the
input image and outputs a vector of 1000 values giving the classification
probability for each class, as in equation (1). Suppose we have a model that
predicts that an image belongs to class 0 with probability 0.1, class 1 with
probability 0.05, class 2 with probability 0.05, class 3 with probability 0.03,
class 780 with probability 0.72, class 999 with probability 0.05 and every
other class with probability 0. The classification vector for this example is
then given by equation (2):

ŷ = [ŷ₀, ŷ₁, ŷ₂, ŷ₃, …, ŷ₉₉₉]   …(1)

ŷ = [ŷ₀ = 0.1, ŷ₁ = 0.05, ŷ₂ = 0.05, ŷ₃ = 0.03, …, ŷ₇₈₀ = 0.72, …, ŷ₉₉₉ = 0.05]   …(2)

To make sure these probabilities add up to 1, we use the softmax function,
which is defined as:

P(Y = j | θ⁽ⁱ⁾) = e^(θⱼ⁽ⁱ⁾) / Σₖ₌₀ᴷ e^(θₖ⁽ⁱ⁾)

where θ = W₀X₀ + W₁X₁ + … + W_K X_K = Σᵢ₌₀ᴷ wᵢxᵢ = Wᵀx
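
As a small illustrative sketch (not code from the original paper), this softmax
can be computed directly in Python with NumPy:

    import numpy as np

    def softmax(theta):
        """Turn a vector of raw scores theta into probabilities that sum to 1."""
        e = np.exp(theta - np.max(theta))  # subtract the max for numerical stability
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])
    probs = softmax(scores)
    print(probs, probs.sum())  # ~[0.659 0.242 0.099], sums to 1.0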

Hue Saturation Value (HSV) Feature

The Hue Saturation Value (HSV) representation describes colour, dominance of
colour and brightness. A colour detection algorithm can therefore search in
terms of colour position and colour purity, and HSV is used here to identify
the relevant pixels. The scale provides a numerical readout of an image that
corresponds to the colour names it contains. The hue of HSV is measured in
degrees from 0 to 360; for example, cyan falls between 181 and 240 degrees, and
magenta falls between 301 and 360 degrees. The value and saturation of a colour
are each analyzed on a scale of 0 to 100%. Most digital colour pickers are
based on the HSV scale, and HSV colour models are particularly useful for
choosing precise colours for art, colour swatches and digital graphics.
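
As a hedged sketch of how an HSV-based colour check might look with OpenCV (the
hue, saturation and value ranges below are illustrative assumptions, not values
taken from this paper, and OpenCV stores hue on a 0-179 scale rather than
0-360):

    import cv2
    import numpy as np

    # Illustrative range for reddish ("ripe") pixels; tune per fruit.
    LOWER = np.array([0, 80, 80])
    UPPER = np.array([10, 255, 255])

    img = cv2.imread("apple.jpg")                 # hypothetical input image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # convert BGR to HSV
    mask = cv2.inRange(hsv, LOWER, UPPER)         # keep pixels inside the colour band
    ripe_fraction = cv2.countNonZero(mask) / mask.size
    print(f"Fraction of pixels in the 'ripe' colour range: {ripe_fraction:.2%}")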

Class detection

Class detection for an image is a computer technology related to computer
vision, image processing and deep learning that deals with detecting instances
of objects in images and videos. OpenCV is used here as a large open-source
library for computer vision, machine learning and image processing; it plays a
major role in real-time operation, which is very important in today's systems.
Using it, we can process images to identify different types of fruit.

Flowchart: Implementation of image classification in Python. The pipeline
proceeds as: loading images → feature extraction → PCA → scree plot → choice of
the optimal number of components → sample set before and after PCA → t-SNE →
train/validation split → test set.

The flowchart above describes the process to be executed sequentially, starting
with loading the images: the fruit archive is unzipped and the images are
stored in training and test sets respectively, with the training-set features x
stored as images. PCA (Principal Components Analysis) reduces the
dimensionality of this large dataset by transforming a large set of variables
into a smaller one that still contains most of the information of the original.
Using PCA, we are able to reduce our dataset to 50 components while retaining
99.68% of the variance; the scree plot shows that a great deal of the variance
in the dataset is retained with very few dimensions.
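
A minimal sketch of this dimensionality-reduction step with scikit-learn
(assuming the images have already been flattened into a feature matrix; the
file name is a hypothetical placeholder):

    import numpy as np
    from sklearn.decomposition import PCA

    # X: flattened image features, shape (n_images, n_features), e.g. 100*100*3 = 30000 per image.
    X = np.load("train_features.npy")             # hypothetical pre-extracted feature matrix

    pca = PCA(n_components=50)                    # keep 50 principal components
    X_reduced = pca.fit_transform(X)

    retained = pca.explained_variance_ratio_.sum()
    print(f"Variance retained with 50 components: {retained:.2%}")  # ~99.68% reported in this paper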

Figure 2. Fifty components is the optimal number of components to use for this
dataset; it retains 99.68% (1 − 0.000037) of the variance with just 50
components.

In order to grasp whether PCA is useful or not, we need to make a scree plot. A
scree plot shows the number of components plotted against the explained
variance; we want to use PCA if a low number of components has a high
cumulative explained variance. The plot below (fig. (3)) shows that using PCA
is helpful, and as a result we are able to reduce our dataset to 50 components
with 99.68% of the variance retained, as shown in fig. (2).

Figure 3. This scree plot shows the variance retained in the given dataset with
very few dimensions

The images in fig. (4) and fig. (5) below are noticeably more blurred and
harder to make out. This is because these images have only 50 components each,
rather than 30,000. Although identifying the fruits in this plot is difficult
for the human eye, our PCA model identifies these fruits nearly the same as if
they were untouched; this is the beauty of PCA.

Figure 4. Before PCA (untouched)

Figure 5. After PCA (grayscale)

Table 1
t-SNE output showing the 50 PCA components reduced to 2 components, x and y

     x           y           label
0    1.038440    -1.045200   Apple Crimson
1    0.936067    -1.099165   Apple Crimson
2    0.936550    -1.099763   Apple Crimson
3    0.937179    -1.100508   Apple Crimson
4    0.937651    -1.101029   Apple Crimson

Table (1) above shows that, after obtaining the PCA sample set, we move on to
t-distributed Stochastic Neighbour Embedding (t-SNE), which reduces
dimensionality in a non-linear way and allows us to represent all 50 PCA
components we obtained in 2 dimensions. t-SNE is a dimensionality reduction
technique similar to PCA, but here we use t-SNE for visualization purposes. The
fruits are still clustered together even after the reduction to 50 components,
as shown in fig. (6) below.
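
A minimal sketch of this visualization step with scikit-learn and matplotlib
(the file names and label array are hypothetical placeholders; the input is the
50-component PCA output):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    X_reduced = np.load("train_pca50.npy")     # hypothetical (n_images, 50) PCA output
    labels = np.load("train_labels.npy")       # hypothetical integer class label per image

    tsne = TSNE(n_components=2, random_state=42)
    embedding = tsne.fit_transform(X_reduced)  # non-linear reduction from 50 to 2 dimensions

    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=2, cmap="tab20")
    plt.title("t-SNE of the 50 PCA components")
    plt.show()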

Figure 6. TSNE plot with fruit dataset images

Result

Dataset review

The dataset has been taken from the Fruits-360 repository pointed to by
reference [8]. It contains various types of fruit: Apple, Avocado, Banana,
Pear, etc. The total number of images is 90,483. The training set contains
67,692 images (the same fruit of different varieties, such as Apple Braeburn
and Apple Crimson, is stored in separate classes), and the test set contains
22,688 images. The number of classes is 131 (fruits and vegetables; here we
used only the fruits), and the image size is 100×100 pixels. The dataset was
captured by the author of Fruits-360 with a Logitech C920 web camera, a 2012
model that is sharp and clear with a wide field of view but has some
shortcomings; there are other cameras and webcams on the market with better
picture and video quality, and the better the quality of the dataset, the
better the estimated accuracy of fruit ripeness detection will be. Here we have
divided the fruit dataset into two parts, a training dataset and a test
dataset.
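
A hedged sketch of loading such a directory-structured dataset with Keras (the
local paths are assumptions based on the Fruits-360 layout, which ships
Training/ and Test/ folders with one sub-folder per class):

    import tensorflow as tf

    IMG_SIZE = (100, 100)   # Fruits-360 images are 100x100 pixels

    # Hypothetical local paths to the unzipped dataset.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fruits-360/Training", image_size=IMG_SIZE, batch_size=32)
    test_ds = tf.keras.utils.image_dataset_from_directory(
        "fruits-360/Test", image_size=IMG_SIZE, batch_size=32)

    print(f"Classes found: {len(train_ds.class_names)}")  # 131 in the full dataset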

Experiments

To obtain higher accuracy than other software, for this experiment we
shortlisted two fruits, Apple and Blueberry, from the dataset. This database
can first be classified with the help of a working MATLAB program. In order to
make our outcomes as precise as possible with code that can classify the images
in the given dataset, in MATLAB we first make a folder containing the different
types of images and the featurestatistical.m, training.m and testImage.m files.
We copy the path of the folder, open it in MATLAB and open the training.m file
in the EDITOR window. Running training.m, we choose the images from the folder
for classification, using the two fruits Apple and Blueberry. Saving the images
of all apples in class 1 and all blueberries in class 2, we classified the two
different types of images into a database for apple and blueberry, as shown in
table (2). As a result, the program detects the class of whichever input image
we select from the generated database, as shown in fig. (7).

Table 2
Database created while processing the sets in MATLAB

Figure 7. Grayscale image and detected class for the selected image

However, this is a very slow and manual process: a large dataset requires more
time and some manpower to select the sets and obtain the required classes.
Therefore, we have used a working Python program, which gives more efficient
and precise output.

Result and Discussion

Here we used the VGG-16 CNN model, which was one of the best performing
architectures in the ILSVRC 2014 challenge. It was the runner-up in the
classification task with a top-5 classification error of 7.32%, and it won the
localization task with a 25.32% localization error. VGG16 is an object
classification and detection algorithm of considerable size (about 528 MB)
which is able to classify images into 1000 different categories. VGG16 also
outperforms baselines on many tasks and datasets outside ImageNet, as it has 16
layers to support the object recognition model. The project's resulting
accuracy is 92.7% for the dataset used. For this detection process the VGG19
model, which has three additional convolutional layers (about 21 MB of extra
size), could be used as well; we have taken the VGG16 convolutional neural
network model because it was sufficient for processing our dataset. Apart from
the dataset, to check whether the model works for other images, we also
evaluated some other fruit images with the VGG16 algorithm (peaches, fig. (8),
and pomegranates, fig. (9)); the ripeness accuracy we obtain for the
training-set fruits is shown in fig. (10) and fig. (11).
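
A hedged sketch of running a pretrained VGG16 on a single fruit image with
Keras (this shows the generic ImageNet-pretrained model, not the network
fine-tuned in this work; the image file name is hypothetical):

    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    model = VGG16(weights="imagenet")                           # 224x224x3 input, 1000-class output

    img = image.load_img("peach.jpg", target_size=(224, 224))  # hypothetical test image
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)                                    # probability vector of length 1000
    for _, name, prob in decode_predictions(preds, top=3)[0]:
        print(f"{name}: {prob:.2%}")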

Figure 8. Training-set image of peaches

Figure 9. Training-set image of pomegranates

Figure 10. Test-set image of peaches

Figure 11. Test-set image of pomegranates

Conclusion

For a long period of time, research on fruit recognition in the research
community has circulated around the Fruits-360 dataset, which contains almost
all types of fruit. Over time, consistent studies on this paradigm have
achieved higher accuracy, and a comparable result has been achieved in this
project. In our study we present a working platform with higher accuracy and a
bigger dataset in relation to other CNN models; as part of this we use the
VGG16 model, whose inference time is quite good. VGG-16 contains a large number
of layers and can therefore have higher test error and generalize less well.
The motive of the project is to reduce human effort and make it easier for
people to find the best fruit for a healthy diet. Fruit ripeness detection will
be able to reduce the current ongoing problems: it reduces confusion about
whether a particular fruit is raw, ripe or yet to ripen. As the discussion
above suggests, there is always room for enhancement in this domain, but as of
the current results our proposed method outperforms the previous studies
considered here.

References

1. A Critical Analysis of Artificial Fruit Ripening: Scientific, Legislative
and Socio-Economic Aspects.
2. B. Kanimozhi and R. Malliga, “Classification of Ripe or Unripe Orange Fruits
Using the Color Coding Technique,” vol. 1, no. 3, pp. 43–47, 2017.
3. S. Iqbal, A. Gopal, P. E. Sankaranarayanan, and A. B. Nair, “Classification of
Selected Citrus Fruits Based on Color Using Machine Vision System,” Int. J.
Food Prop., vol. 19, no. 2, pp. 272–288, 2016.
4. N. El-Bendary, E. El Hariri, A. E. Hassanien, and A. Badr, “Using machine
learning techniques for evaluating tomato ripeness,” Expert Syst. Appl., vol.
42, no. 4, pp. 1892–1905, 2015.
5. Meenu Dadwal and V. K. Banga, “Estimate Ripeness Level of Fruits Using RGB
Color Space and Fuzzy Logic Technique,” IJEAT, ISSN: 2249-8958, vol. 2,
issue 1, October 2012.
6. Kranti Raut and Vibha Bora, “Assessment of Fruit Maturity Using Digital
Image Processing,” IJSTE, ISSN (online): 2349-784X, vol. 3, issue 1, June 2016.
7. Rocha, A., Hauagge, D. C., Wainer, J., and Goldenstein, S. (2010).
“Automatic fruit and vegetable classification from images,” Comput. Electron.,
70, 96–104.

8. https://www.kaggle.com/datasets/moltean/fruits
9. https://www.geeksforgeeks.org/vgg-16-cnn-model/
10. Alok Mishra, Pallavi Asthana, and Pooja Khanna, “The Quality Identification
of Fruits in Image Processing Using Matlab,” NCCOTII 2014, June 2014.
11. Palma, J. M., Corpas, F. J., and del Río, L. A. (2011). Proteomics as an
approach to the understanding of the molecular physiology of fruit
development and ripening. J. Proteomics 74, 1230–1243. doi:
10.1016/j.jprot.2011.04.010
12. Identification of Artificially Ripened Fruits Using Machine Learning 2nd
International Conference on Advances in Science & Technology (ICAST) 2019
on 8th, 9th April 2019 by K J Somaiya Institute of Engineering & Information
Technology, Mumbai, India
13. Woo Chaw Seng, Faculty of Computer Science and Information Technology,
University of Malaya, “A New Method for Fruits Recognition System,” 3 June
2014.
14. Elhariri, E., El-Bendary, N., Fouad, M.M.M., Platos, J., Hassanien, A.E.,
Hussein, A.M.M.: Multi-class SVM based classification approach for tomato
ripeness. In: Abraham, A., Krömer, P., Snášel, V. (eds.) Innovations in Bio-
inspired Computing and Applications. AISC, vol. 237, pp. 175–186. Springer,
Heidelberg (2014)
15. Rocha, A., Hauagge, D.C., Wainer, J., Goldenstein, S.: Automatic produce
classification from images using color, texture and appearance cues. In: XXI
Brazilian Symposium on Computer Graphics and Image Processing,
SIBGRAPI 2008, Campo Grande, pp. 3–10 (2008)
16. M. Tan and Q. V. Le 2019 EfficientNet: Rethinking Model Scaling for
Convolutional Neural Networks arXiv preprint arXiv:1905.11946
17. Suryasa, I.W., Sudipa, I.N., Puspani, I.A.M., Netra, I.M. (2019). Translation
procedure of happy emotion of english into indonesian in kṛṣṇa text. Journal
of Language Teaching and Research, 10(4), 738–746
18. Lopez, M. M. L., Herrera, J. C. E., Figueroa, Y. G. M., & Sanchez, P. K. M.
(2019). Neuroscience role in education. International Journal of Health &
Medical Sciences, 3(1), 21-28. https://doi.org/10.31295/ijhms.v3n1.109
