Article
A Deep Learning-Based Model for Date Fruit Classification
Khalied Albarrak 1, * , Yonis Gulzar 1, * , Yasir Hamid 2 , Abid Mehmood 1 and Arjumand Bano Soomro 1
1 Department of Management Information Systems, College of Business Administration, King Faisal University,
Al-Ahsa 31982, Saudi Arabia; [email protected] (A.M.); [email protected] (A.B.S.)
2 Information Security and Engineering Technology, Abu Dhabi Polytechnic,
Abu Dhabi 111499, United Arab Emirates; [email protected]
* Correspondence: [email protected] (K.A.); [email protected] (Y.G.); Tel.: +966-545719118 (Y.G.)
Abstract: A total of 8.46 million tons of date fruit are produced annually around the world. The date
fruit is considered a high-valued confectionery and fruit crop. The hot arid zones of Southwest Asia,
North Africa, and the Middle East are the major producers of date fruit. The production of dates
in 1961 was 1.8 million tons, which increased to 2.8 million tons in 1985. In 2001, the production of
dates was recorded at 5.4 million tons, whereas recently it has reached 8.46 million tons. A common
problem found in the industry is the absence of an autonomous system for the classification of date
fruit, resulting in reliance on only the manual expertise, often involving hard work, expense, and bias.
Recently, Machine Learning (ML) techniques have been employed in such areas of agriculture and
fruit farming and have brought great convenience to human life. An automated system based on ML
can carry out the fruit classification and sorting tasks that were previously handled by human experts.
In various fields, CNNs (convolutional neural networks) have achieved impressive results in image
classification. Considering the success of CNNs and transfer learning in other image classification
problems, this research also employs a similar approach and proposes an efficient date classification
model. In this research, a dataset of eight different classes of date fruit has been created to train the
proposed model. Different preprocessing techniques have been applied in the proposed model, such as image augmentation, decayed learning rate, model checkpointing, and hybrid weight adjustment to increase the accuracy rate. The results show that the proposed model based on MobileNetV2 architecture has achieved 99% accuracy. The proposed model has also been compared with other existing models such as AlexNet, VGG16, InceptionV3, ResNet, and MobileNetV2. The results prove that the proposed model performs better than all other models in terms of accuracy.
Keywords: date fruit classification; artificial intelligence; convolutional neural networks; transfer learning
support the buying process by identifying the fruit type with its dietary value and sup-
plying related information and advice (see e.g., [8]). However, the problem becomes more
relevant in an industrial context in carrying out automation of activities such as matching
fruit quality varieties with other information, e.g., nutritional details and price. Besides
alleviating the labor, expense, and bias involved in manual quantification, the automated
inspection also works well for multi-criteria classification and quality assurance.
Date fruit has a high nutritional value and serves as a rich source of calcium, potas-
sium, vitamin C, and iron. Even though date palm trees are cultivated worldwide, they are
considered a major type of fruit within the Middle East and the Kingdom of Saudi Arabia (KSA), specifically [9]. Date palm trees cover about 72% of the total cultivated area in KSA [10], and thus the Saudi Arabian Ministry of Environment, Water, and Agriculture pays special attention to initiatives regarding date production and development. Recently, following the recommendation by the ministry, the Food and Agriculture
Organization of the United Nations (FAO) declared 2027 to be the International Year of
the Date Palm [11]. As far as the aforementioned context of automated classification is
concerned, many existing approaches have aimed at classifying dates [12].
In general, there are two key approaches in computer vision, i.e., deep learning (DL)
and traditional techniques. The traditional approaches use techniques such as feature
descriptors along with the essential step of feature extraction involving other cumbersome
steps such as feature selection. Thus, one of the well-known disadvantages of the traditional
approaches lies in their high dependency on human expertise in extracting hand-crafted
features. Nonetheless, there are situations in which traditional techniques with global
features provide satisfactory performance. DL techniques, on the other hand, enable end-
to-end learning in which a model is only provided with a properly annotated dataset to be
used for training. The DL model then automatically extracts the most salient features to
better learn the specific details and patterns from the underlying data. DL-based techniques,
despite their trade-offs regarding computational resources and training time, have been
proven to perform far better than traditional algorithms in computer vision problems [13].
Despite the satisfactory results reported by traditional methods [14], DL-based ap-
proaches are naturally considered the most suitable and effective solution to the problem
of date classification. Using computer vision systems for similar classification tasks has
been a steadily growing research area [15]. As far as vision-based systems are concerned,
convolutional neural networks (CNNs) have been well-recognized in the research com-
munity as a potent mechanism for image classification tasks [16,17]. Recent studies have
also explored CNNs in the specific context of date classification for the automation of
tasks, including their harvesting, sorting, and packaging [18,19]. In essence, CNNs are
deep learning algorithms that take an input image and process it by assigning weights
and biases to its various features. The major strength of CNNs lies in their ability to
recognize (and thus classify) the distinguishing features with minimal pre-processing as
compared with the primitive methods. A typical CNN comprises several layers, such as
convolutional, pooling, and fully connected layers, each with a specific purpose. There
have been many approaches proposed for date fruit classification such as [10,20,21] based
on CNN. In [20], the authors have proposed an approach for classifying three date fruit
types (Aseel, Karbalain, and Kupro) based on color, shape, and size. The approach yielded
an accuracy of 89.2%. Another proposed approach [10] used a Support Vector Machine
(SVM) and classified five types of date fruit based on maturity level, type, and weights
with an accuracy of 99%. However, in [21], the authors have compared the performance of
eight different types of existing approaches while focusing only on one type of date fruit
(Medjool). They claimed that VGG-19 architecture achieved the highest accuracy (99.32%).
This research work proposes a new model for date fruit classification that is based upon
deep learning and CNN. The proposed model is trained and validated based on an in-house
dataset created containing eight different types of date fruits, which are commonly found
in Saudi Arabia. Around 204 to 240 images have been captured for each class and rescaled
to train the model. The proposed model adopts a trained MobileNetV2 architecture [22] to
successfully accomplish the date fruit identification and classification. The existing model
has been modified by replacing the classification layer with five different layers, aiming
at increasing the accuracy and minimizing the error rate of the classification procedure.
The modified model helps in optimizing the classification procedure and assists in the
identification and classification of various date fruit types.
The following points summarize the contribution of this research work.
• A detailed review has been conducted to investigate the most promising work in the
machine learning/deep learning domain for date fruit classification.
• A new dataset containing eight different types of date fruit has been created.
• A new optimized model based on advanced deep learning techniques has been pro-
posed for the classification of date fruit. Furthermore, different preprocessing tech-
niques have been employed to avoid the chance of overfitting.
• An optimization technique has been implemented to monitor any positive change
in terms of accuracy in the model, based on a backup of the optimal model taken at
the end of each iteration to affirm the proposed model’s accuracy with the minimum
validation loss.
The rest of the paper is structured as follows. Section 2 reports and discusses the literature related to this research. Section 3 explains and illustrates the proposed model. Section 4 describes the experimental setup of this research work. The results are illustrated and explained in Section 5. The conclusions are presented in Section 6.
2. Related Work
Several researchers have employed artificial intelligence (AI) techniques aiming to
automate many human-based tasks in the food and agriculture sector [14,23,24]. A review
of the possible tasks involved in the automation has been provided in the literature [8,14,15],
which includes the fruit classification, quality check, sorting, grading, maturity level, and
defect detection. Naik and Patel [25] provided an overall direction for selecting the AI
models suitable for fruit classification based on fruit type, features, accuracy, and clas-
sifier. Furthermore, deep learning has been widely used for fruit image classification and recognition [18]. For example, Sharmila et al. [26] proposed a model to overcome the issues of aspect detachment by applying CNNs, max-pooling layers, a fully connected multi-layer neural network, activation factors, and flattening on 10 fruit classes; the resulting classifier reached 97% accuracy. In [27], a fruit classification approach was proposed that combines a CNN, Recurrent Neural Networks (RNN), and a Long Short-Term Memory Network (LSTM). The CNNs and RNNs were used on 10 different types of apple fruit image samples to produce discriminative characteristics of the apple fruit
and its sequential labels, while LSTM was used to encode learning at each classification
interval. The classification accuracy for this proposed algorithm reached 98%. Classi-
fication and quality check of date fruit, specifically, using machine learning algorithms
has also become a topic of interest to several researchers [25,28]. To support this interest,
Altaheri et al. [29] published a comprehensive dataset that contains 360 videos of the palms
that can be used for multi-scale images, variable illumination, and different bagging states.
Alresheedi et al. [9] compared detection performance and accuracy of several classical ma-
chine learning methods with a CNN on a dataset comprising nine classes of date fruit. They
found that Multi-Layer Perceptron (MLP) has the highest detection performance and CNN
achieved the highest accuracy.
In addition, a framework for date recognition is presented in a study by [20] which is
based on color, shape, and size. Features were extracted from an established dataset of 500 images of three date fruit types named Aseel, Karbalain, and Kupro. This framework was constructed based on deep CNNs with a 5-neuron input layer and 10-neuron hidden layers. This
model achieved 89.2% accuracy. In [30], K-Nearest Neighbor (KNN), Linear Discriminant
Analysis, and Artificial Neural Networks (ANN) methods were also employed to classify
and recognize seven different date fruit classes. The ANN method was found to be the
lowest-performing method, while the most accurate classifier reached an accuracy of 99%. A solution proposed by Faisal et al. [10] consisted of three different estimation functions aiming to classify date fruits based on maturity level, type, and weight. This solution used a Support Vector Machine (SVM) and achieved on average 99% accuracy across all the estimation functions. An additional study [19] focused on sorting date fruit based on maturity level and health condition; a dataset of four date fruit types at different maturity stages, together with defective dates, was used as input for the CNN model. This CNN model, constructed using VGG-16 with max-pooling, batch normalization, dropout, and dense layers, achieved 97% classification accuracy. On the other hand, Perez et al. [21] used only Medjool dates to evaluate and compare the performance of eight different CNN architectures for sorting and detecting maturity stage. The results of this experiment concluded that the VGG-19 architecture performed best and achieved 99.32% accuracy. Other researchers, such as Alavi [31], have also proposed a system called the Mamdani fuzzy inference system (MFIS) for the quality determination of 500 Mozafati date fruits, using fuzzy logic as a method for decision making. This system classifies dates based on measured quality, specifically the date's length and freshness. The predicted quality of the system was compared with a human expert's assessment and resulted in a 91% accuracy rate.
3. Proposed Model
In this section, details of the proposed model are provided. An efficient model has been developed for date fruit identification and classification. The entire process included three stages: (i) dataset preparation, (ii) model training, and (iii) model testing, as shown in Figure 1. These three stages are further elaborated on in the following subsections.
Figure 1. Research Flow Diagram of Proposed Model.
Figure 2. Setup to Capture Images of Date Fruits.
Table 1. Details of Different Types of Date Fruits and Their Image Count.
While capturing the images, a distance of 10 inches was set between the date fruit
sample and the mobile phone camera. All the images were captured during the day to
avoid any change in the texture of date fruits. Furthermore, a white ring light has been
used to avoid any shadow effects. Figure 3 shows a sample of captured images of different
types of date fruits. These images are resized (224 × 224 × 3), labeled, and arranged in
separate folders after being captured.
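As a rough illustration of this preparation step (not the authors' exact pipeline), the class-wise folders can be read and resized with a standard Keras utility; the directory name, the hold-out fraction (mirroring the 25% split described later in Section 3.3), the seed, and the batch size below are assumptions:

# Sketch: read the per-class image folders and resize to the network input size.
# Folder name, split, seed, and batch size are assumptions, not the authors' settings.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dates_dataset",              # assumed root folder: one sub-folder per date fruit class
    validation_split=0.25,        # hold out 25% of the images of each class
    subset="training",
    seed=42,
    image_size=IMG_SIZE,          # images are resized to 224 x 224 x 3 on loading
    batch_size=32,
    label_mode="categorical",     # one-hot labels for the eight classes
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dates_dataset",
    validation_split=0.25,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",
)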
Figure 3. Sample Images of Eight Different Types of Date Fruit.
3.2. Training
In this subsection, we report the training of the proposed model based on the created dataset. The proposed model is based on the MobileNetV2 architecture [22], which was originally proposed for mobile and resource-constrained environments. The main motivation behind the adoption of this architecture includes its strength in terms of reducing the computational and memory expense, as well as the closer compliance of its design with mobile applications. Originally, MobileNetV2 contains 1000 nodes in the classification layer. However, to make it compatible with our problem, the classification layer was removed and replaced with a customized head. The new head contained five different layers: (i) average pooling layer, (ii) flatten layer, (iii) dense layer, (iv) dropout layer, and (v) softmax layer. The pool size of the average pooling layer was set to (7,7). In the flatten layer, the flattened neurons were fed to a dense layer with ReLU as the activation function. This was followed by setting the probability of a dropout layer to a value of 0.5 and the addition of eight nodes within the classification layer of the model. This produced a modified version of the MobileNetV2 architecture having eight classification nodes, which was better suited to investigate the problem addressed in this study and more suitable for transfer learning. Figure 4 shows the proposed model for date fruit identification and classification.
Despite being successful in many practical studies and achieving splendid success, traditional machine learning techniques still possess limitations in addressing specific real-world scenarios [32]. On the other hand, new techniques, with the help of transferred knowledge, have improved the performance of target learners in target domains. The dependency of target learners on a large volume of data has been reduced [32,33], minimizing the issues related to the unavailability of sufficient training data. There are two ways to incorporate transfer learning: in the first case, the model is trained from scratch based on the new dataset, whereas in the other case only the newly added layers are trained based on the new dataset, and the existing layers' weights are kept unchanged. In our proposed model, a hybrid approach was adopted in which, for the first 20 iterations only, the customized head (newly added layers) was trained based on the date fruit dataset and the rest of the layers were frozen. After that, the layers were unfrozen so that there would be a slight weight adjustment to the trained layers for the date fruit dataset (a minimal code sketch of this construction and training schedule is given after Figure 4).
Figure 4. The Proposed Model for Date Fruit Identification and Classification: MobileNetV2 backbone (bottleneck blocks with shortcut connections) followed by the customized head (average pooling, flatten, dense (ReLU), dropout with probability 0.5, and dense (softmax)).
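A minimal Keras sketch of the construction and the freeze-then-unfreeze (hybrid) schedule described above is given below. It follows the stated layer composition, while names such as train_ds and val_ds (from the earlier loading sketch), the width of the dense layer, and the phase-two optimizer settings are assumptions rather than the authors' exact code:

# Sketch of the modified MobileNetV2: ImageNet backbone plus the customized head
# (average pooling, flatten, dense ReLU, dropout 0.5, dense softmax with 8 nodes).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1, input_shape=(224, 224, 3)),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.AveragePooling2D(pool_size=(7, 7)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),   # width of the dense layer is an assumption
    layers.Dropout(0.5),
    layers.Dense(8, activation="softmax"),  # eight date fruit classes
])

# Phase 1: freeze the pre-trained layers and train only the customized head
# for the first 20 iterations (epochs).
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)

# Phase 2: unfreeze the backbone so its weights are slightly adjusted to the
# date fruit dataset, and continue training up to 100 epochs in total.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=80)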
Furthermore, in the proposed model, different preprocessing and/or model tuning techniques were used to avoid the issue of model overfitting. These techniques are explained briefly as follows (a code sketch combining them is given after this list):
• Data Augmentation: In data augmentation, different types of images are artificially created in various processing manners or by combining multiple processing methods, such as random rotations, shifts, shears, and flips. The MobileNetV2 architecture was modified by adding five different layers, as mentioned earlier, and by incorporating many preprocessing techniques. To generate augmented images, this research work used an inbuilt function in the Keras library [34]. For each image, 10 different images were created randomly by incorporating different methods such as zooming the image by 20%, rotating by 30%, width shifting by 10%, and adjusting height by 10%.
• Adaptive Learning Rate: Learning rate schedules seek to adjust the learning rate during the training process by reducing the learning rate according to a pre-defined schedule. Common learning rate schedules include time-based decay, step decay, and exponential decay. For this work, the initial learning rate was set to INIT_LR = 0.0001, and then a decay of the form decay = INIT_LR/EPOCHS was used.
• Model Checkpointing: This is the technique where checkpoints are set to save the weights of the model whenever there is a positive change in the classification accuracy on the validation dataset. It is used to control and monitor ML models during training at some frequency (for example, at the end of each epoch/batch). It allows us to specify a quantity to monitor, such as loss or accuracy on a training or validation dataset, and thereafter it can save the model weights or an entire model whenever the monitored quantity is optimal compared to the last epoch/batch. In this research work, a model checkpoint of the form checkpoint = ModelCheckpoint(fname, monitor = "val_loss", mode = "min", save_best_only = True, verbose = 1) was used. This callback monitored the validation loss of the model and would overwrite the trained model only when there was a decrease in the loss as compared to the previous best model.
• Dropout: This technique is used to avoid the overfitting of a model. In this technique, neurons are randomly selected and ignored/dropped out during training. This means that the contribution of these neurons to the activation of downstream neurons is temporarily ignored, and no weight updates are applied to them on the backward pass.
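The fragments below sketch how the four techniques above map onto standard Keras utilities, using the values quoted in the list (20% zoom, 30-degree rotation, 10% shifts, INIT_LR = 0.0001, and val_loss-based checkpointing); the saved file name and the exact wiring into compile()/fit() are assumptions, not the authors' script:

# Sketch of the preprocessing / tuning techniques described above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam

INIT_LR, EPOCHS = 1e-4, 100

# Data augmentation with the stated settings: 20% zoom, 30-degree rotation, 10% shifts.
augmenter = ImageDataGenerator(
    zoom_range=0.2,
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
)

# Decayed learning rate of the form decay = INIT_LR / EPOCHS
# ('decay' is the legacy Keras optimizer argument; newer releases use a LearningRateSchedule).
optimizer = Adam(learning_rate=INIT_LR, decay=INIT_LR / EPOCHS)

# Model checkpointing: keep only the model with the lowest validation loss so far.
checkpoint = ModelCheckpoint(
    "best_date_model.h5",        # assumed file name
    monitor="val_loss",
    mode="min",
    save_best_only=True,
    verbose=1,
)

# Dropout (rate 0.5) is already part of the customized head, so no extra step is needed here.
# These pieces would then be passed to the training calls, e.g.:
#   model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
#   model.fit(..., epochs=EPOCHS, callbacks=[checkpoint])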
3.3. Testing
At the testing stage, the best-trained model obtained from training was tested. To test
the model, a subset of the dataset was used which contained 25% of the images of the entire
dataset from each class. It is essential to note that the images used in testing the model
were not exposed to the model before. This is to ensure the validity of the model in terms
of accuracy. The results obtained during training and testing were compared to validate
the proposed model.
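One possible way to run this held-out evaluation, assuming the checkpointed model file from the previous sketch and a directory holding the unseen 25% of each class (both names are placeholders), is:

# Sketch: evaluate the best checkpointed model on the unseen 25% test split.
import tensorflow as tf

test_ds = tf.keras.utils.image_dataset_from_directory(
    "dates_dataset_test",        # assumed folder holding the unseen 25% of each class
    image_size=(224, 224),
    batch_size=32,
    label_mode="categorical",
    shuffle=False,               # fixed order so predictions align with labels later
)

best_model = tf.keras.models.load_model("best_date_model.h5")  # file saved by ModelCheckpoint
loss, accuracy = best_model.evaluate(test_ds)
print(f"Held-out accuracy: {accuracy:.3f}")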
4. Experimental Setup
The objective of this research is to propose a model which helps in the identification
and classification of different types of date fruits. The performance of the proposed model
was measured along with that of the other models found in the literature. This section describes (i) model selection, (ii) the data used for training and testing the proposed model and the other models, and (iii) the performance of the other models compared with the proposed model. The proposed model was implemented using Python 3.0 on the Windows 10 operating system, on a machine configured with an i7 processor and 16 GB of RAM. The same configuration was used for the training and testing of the other models.
Table 2. Precision, Recall, and F1-score of Different Models while Trained on a Date Fruit Dataset.
4.3. Performance Measures
To measure the classification performance of the proposed model, a wide range of metrics was used, derived from the 2 × 2 confusion matrix.
A confusion matrix (C) is theoretically defined as Ci,j, which is equal to the number of the known observations in group (i) and predicted observations in group (j), as shown in Figure 5.
Figure 5. Confusion Matrix (Ci,j).
There were four different cases in the confusion matrix, as follows, from which more advanced metrics were obtained.
• True Positive: If, for instance, a presented image is of a Lubana date fruit, and the model classifies it as a Lubana date fruit image.
• True Negative: If, for instance, a presented image is not of a Lubana date fruit, and the model does not classify it as a Lubana date fruit image.
• False Positive: If, for instance, a presented image is not of a Lubana date fruit; however, the model incorrectly classifies it as a Lubana date fruit image.
• False Negative: If, for instance, a presented image is of a Lubana date fruit; however, the model incorrectly classifies it as something else.
In the machine learning life cycle, model evaluation is an important phase to check a model's performance. To check the performance of our proposed model, the following metrics were measured (a short computation sketch is given after the list).
• Accuracy: The proportion of the total number of correct predictions, which is the sum of total correct positive and total correct negative instances over the total number of instances.
Accuracy = (TP + TN)/(TP + FP + TN + FN)
• Precision: The ratio of correct positive predictions over the total number of positively predicted instances, computed as true positives over the sum of true positives and false positives.
Precision = TP/(TP + FP)
• Recall: The ratio of correct positive predictions over the total number of actual positive instances, computed as true positives over the sum of true positives and false negatives.
Recall = TP/(TP + FN)
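For completeness, these per-class metrics can be computed directly from the model's predictions; the short sketch below uses scikit-learn and reuses the assumed best_model and test_ds objects from the earlier sketches:

# Sketch: per-class precision, recall, and F1-score derived from the confusion matrix.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = np.concatenate([np.argmax(labels, axis=1) for _, labels in test_ds])
y_pred = np.argmax(best_model.predict(test_ds), axis=1)

print(confusion_matrix(y_true, y_pred))       # C[i, j]: known class i, predicted class j
print(classification_report(y_true, y_pred))  # precision, recall, F1-score, support per class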
5. Results
This section presents results from our experiments comparing the performance of the proposed model with Model I, Model II, and Model III (explained in detail in the following section) based on the dataset created. After that, the training and validation curves of all models are discussed. Finally, the class-wise precision and recall of all models are discussed.
The proposed model was compared with three different models, as mentioned below.
Model I: Model I was the MobileNetV2 architecture which had the classification layer modified and contained only eight nodes to classify the aforementioned date fruits.
Model II: Model II was the MobileNetV2 architecture which had the classification layer replaced with five different layers (the same as the proposed model), and the last layer was customized to fit the dataset used.
Model III: Model III was the same as Model II, but in Model III the pre-existing layers of the model were frozen for the first 20 iterations and the customized head was trained alone during these iterations. After the 20th iteration, the whole model was trained.
Figure 6 shows the overall accuracy of all four models which were trained on the date fruit dataset. From the figure, it can be concluded that Model I has the lowest accuracy of 64%, whereas the results of Model II were better (85%) than Model I, as Model II contained newly added layers. The results of Model III were slightly better than Model II, with an accuracy of 88%. This was because the newly added layers were trained alone for the first 20 iterations, and only then was the whole model trained for the rest of the iterations. The accuracy of the proposed model was 99% because the proposed model contained the customized head along with other preprocessing techniques. These preprocessing techniques helped to increase the prediction rate.
Figure 6. Accuracy Rate of All Models.
Figure 7a–d presents the training and validation loss curves as well as the training and validation accuracy curves of Model I, Model II, Model III, and the proposed model, respectively. Figure 7a is the model based on the MobileNetV2 architecture, in which only the last classification node was removed. The plot represents training and validation accuracy as well as training and validation loss. The overall accuracy of this model was 64%. It can be noticed that, even after 100 epochs, the model was not able to learn much about the date fruit dataset. From Figure 7a it can be noticed that there is not a smooth learning curve, which indicates that the model was not able to learn much, though the performance of the model on the training set was better compared to the validation set. Nonetheless, the performance of Model I was not promising. The training accuracy rate of this model started from 50% and increased steadily until the 100th epoch, when it touched around 100%. It can also be noticed that the training loss started with an initial value of 2 and kept on reducing till the last epoch. Though the performance of the model on the training set was somewhat stable, it failed miserably on the test set, wherein the best accuracy reported was 64%, which is far from acceptable. One of the reasons for this poor performance was that Model I was not very suitable for the problem.
Figure 7b presents the results of Model II, in which MobileNetV2 was customized by replacing the classification layer with a customized head containing five different layers. The model was trained over 100 epochs, and the performance of the model over the training and validation set is presented in this figure. Model II resulted in a total accuracy of 85%, which is around 31% better than Model I. It can be noticed from the figure that the accuracy of this model on the training and validation set increased whereas the loss kept decreasing. However, the results are still not promising when it comes to the classification of date fruits.
Figure 7c presents the results of Model III, in which the customized head (as mentioned in Model II) was trained for the first 20 iterations, without training the rest of the layers of the MobileNetV2. The main idea behind this was to make sure that the newly added layers learned some features about the date fruit dataset. By the 20th iteration, it was believed that the customized head had learned enough features related to the provided dataset. From the 20th iteration until the 100th iteration, the whole model (frozen layers along with the newly added layers) was trained. Figure 7c presents the training and validation curves of Model III. From the figure, it can be noticed that the accuracy of Model III improved (88%) compared to Model II. This was because the customized head was trained separately for the first 20 iterations. From the figure, it can be seen that there was an improvement in both the training and the validation sets. There were positive changes in the curves in comparison with Model II. In other words, the curves of the training and validation accuracy kept increasing, and the loss curves kept decreasing. However, it can be noticed that there was a deviation between the performance of the training and validation sets. The model had higher accuracy on the training set, but when it came to validation accuracy, it was lower, which indicates that the model was overfitting. Due to that, we concluded that this model was not performing as expected.
Figure 7d depicts the performance of the proposed model over the training and
validation sets. As mentioned in Section 3.2, the proposed model used the architecture
of MobileNetV2, in which the classification layer was replaced by adding five different
layers as explained in the previous section. In addition, different preprocessing techniques
such as Data Augmentation, Adaptive Learning Rate, Model Checkpointing, and Dropout
techniques were employed to reduce the impact of overfitting.
From Figure 7d, it can be noticed that the accuracy of the proposed model started
at 40% from the first iteration and dramatically increased reaching 99% within the first
10 iterations. From the 10th iteration onwards, the accuracy of the model remained the
same until the end of the 100th iteration. From the figure, it can also be noticed that the
training loss started with a high value at the beginning and then dropped dramatically in
successive iterations, which is considered a good characteristic of a model. Throughout
the training, the proposed model maintained the lowest loss value. From the results, it
can be seen that during the training, the model performed well and outperformed all
the other models in terms of accuracy. This was due to the fact that adding new layers
and incorporating different preprocessing techniques was very effective and helped in
increasing the accuracy rate.
During the validation process, it could also be seen that the proposed model showed
stability. This was due to incorporating different preprocessing techniques. This began
with the process of data collection, in which a balanced dataset was created containing
almost the same number of images in each class. Further, the incorporation of the data
augmentation technique helped to expose different variations of the dataset to the model.
Similarly, the dropout technique played a vital role in the model’s validation performance.
It made sure that the model did not deviate much from its training performance.
Table 3 presents the validation results of the proposed model for each date fruit class
in terms of Precision, Recall, F1-Score, and Support (number of the instances/images of
a particular class before data augmentation). From the table, it can be noticed that the
precision value for almost all date fruit classes was 1. It can also be noticed that the lowest precision value, 0.97, was for Khudri and Safawi. This was because the sizes of the Khudri and Safawi are almost the same. Likewise, the proposed model reported the maximum value
for recall for most of the classes. Table 2 also reports the F1-score of all the classes.
Figure 8 presents the precision and recall of all models based on the date fruit dataset.
From Figure 8a it can be noticed that Model I had the worst precision rate in all classes
as compared to the other models except in the Shishi class where its precision rate was 1.
Model I had poor performance over the Safawi class because it somehow resembled other
classes. When it came to the performance of Model II and Model III, their precision rate
was somehow acceptable as their precision rate among all classes was above 0.8, except for
Model II in the Safawi class, where the precision rate was around 0.75. The proposed model outperformed all the models in precision in all classes, as its precision rate was around 1 for most of the classes. For Khudri and Safawi, the proposed model acquired its lowest precision rate of 0.97, which is still far better than any other model. Figure 8b
presents the recall of all the models. From the figure, it can be noticed that Model I was
poorly performing in all the classes and had the worst rate in the Ambir and Shishi classes
which was less than 0.2. Whereas it can be noticed that Model II and Model III had better
recall rates than Model I. Among all the models, the proposed model outperformed all
the models and constantly had the highest recall across all the different date fruit classes,
highlighting the success of the proposed model.
This architecture performed as well and as efficiently as the other architectures but was faster and occupied less space while processing. Additionally, this architecture is best suited to the collected dataset, as the dataset was collected using a smartphone.
6. Conclusions
In this study, a CNN-based model is proposed capable of classifying eight different
popular date fruits in Saudi Arabia. The proposed model is trained on an in-house dataset
which contains around 1750 images of eight different date fruits with a frequency between
204 and 240 for each class. Different preprocessing techniques have been incorporated into
the proposed model to improve the accuracy rate such as decayed learning rate, model
checkpointing, image augmentation, and dropout. An existing architecture (MobileNetV2)
has been adopted for a proposed model for classification. In the adopted model, the last
layer has been replaced by five different layers including the average pooling layer, the
flatten layer, the dense layer, the dropout layer, and the softmax layer. To improve the
classification performance, the model was fine-tuned. After thorough experimentation, it
was concluded that the architecture with the proposed composition of layers performed
better than other compositions for the specific classification task addressed in this study.
Hence, this architecture was adopted for the classification as well as the comparison with
other state-of-the-art approaches. The proposed model has achieved 99% accuracy and has
been compared with other existing well-known models. The results have shown that the
proposed model outperformed all the models in terms of accuracy. As a future avenue,
more date fruit varieties can be added and a mobile application to guide the users can be
developed and published. Other CNN models and the latest transformers can also be tried
on the dataset. Furthermore, the current work has compared the results obtained from four
different variations of architectures and adopted the best architecture for testing. However,
a detailed ablation study may be conducted in the future to further analyze the possibilities.
Author Contributions: Conceptualization, Y.G. and Y.H.; Data curation, K.A. and Y.G.; Formal
analysis, Y.G. and Y.H.; Funding acquisition, K.A.; Investigation, Y.G., Y.H., and A.M.; Methodology,
K.A., Y.G., Y.H., and A.B.S.; Project administration, K.A. and Y.G.; Resources, Y.G. and A.B.S.;
Software, Y.H.; Supervision, Y.G.; Validation, Y.G., Y.H., A.M., and A.B.S.; Visualization, K.A., Y.G.,
Y.H., A.M., and A.B.S.; Writing—original draft, K.A., Y.G., Y.H., A.M., and A.B.S.; Writing—review
and editing, K.A., Y.G., A.M., and A.B.S. All authors have read and agreed to the published version
of the manuscript.
Funding: This work was supported by the Deanship of Scientific Research, King Faisal University,
Saudi Arabia grant number RA00017.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The dataset used in this study is an in-house (private) dataset created by the authors.
Conflicts of Interest: The authors declare no conflict of interest. The funder had no role in the design
of the study and in the writing of the manuscript, or in the decision to publish the results.
References
1. Timmer, C. Agriculture and economic development revisited. Agric. Syst. 1992, 40, 21–58. [CrossRef]
2. King, A. Technology: The Future of Agriculture. Nature 2017, 544, S21–S23. [CrossRef] [PubMed]
3. Yang, X.; Shu, L.; Chen, J.; Ferrag, M.A.; Wu, J.; Nurellari, E.; Huang, K. A Survey on Smart Agriculture: Development Modes,
Technologies, and Security and Privacy Challenges. IEEE/CAA J. Autom. Sin. 2020, 8, 273–302. [CrossRef]
4. Escamilla-García, A.; Soto-Zarazúa, G.M.; Toledano-Ayala, M.; Rivas-Araiza, E.; Gastélum-Barrios, A. Applications of Artificial
Neural Networks in Greenhouse Technology and Overview for Smart Agriculture Development. Appl. Sci. 2020, 10, 3835.
[CrossRef]
5. Liu, W.; Shao, X.-F.; Wu, C.-H.; Qiao, P. A systematic literature review on applications of information and communication
technologies and blockchain technologies for precision agriculture development. J. Clean. Prod. 2021, 298, 126763. [CrossRef]
6. Ali, Q.; Ahmar, S.; Sohail, M.A.; Kamran, M.; Ali, M.; Saleem, M.H.; Rizwan, M.; Ahmed, A.M.; Mora-Poblete, F.; Júnior, A.T.D.A.;
et al. Research advances and applications of biosensing technology for the diagnosis of pathogens in sustainable agriculture.
Environ. Sci. Pollut. Res. 2021, 28, 9002–9019. [CrossRef]
7. Chen, X.; Zhou, G.; Chen, A.; Pu, L.; Chen, W. The fruit classification algorithm based on the multi-optimization convolutional
neural network. Multimed. Tools Appl. 2021, 80, 11313–11330. [CrossRef]
8. Hossain, M.S.; Al-Hammadi, M.H.; Muhammad, G. Automatic Fruit Classification Using Deep Learning for Industrial Applica-
tions. IEEE Trans. Ind. Informatics 2018, 15, 1027–1034. [CrossRef]
9. Alresheedi, K.M.; Aladhadh, S.; Khan, R.U.; Qamar, A.M. Dates Fruit Recognition: From Classical Fusion to Deep Learning.
Comput. Syst. Sci. Eng. 2022, 40, 151–166. [CrossRef]
10. Faisal, M.; Albogamy, F.; Elgibreen, H.; Algabri, M.; Alqershi, F.A. Deep Learning and Computer Vision for Estimating Date Fruits
Type, Maturity Level, and Weight. IEEE Access 2020, 8, 206770–206782. [CrossRef]
11. Available online: Https://Www.Mewa.Gov.Sa/En/MediaCenter/News/Pages/News201220.Aspx (accessed on 21 April 2022).
12. Meshram, V.; Patil, K.; Meshram, V.; Hanchate, D.; Ramkteke, S. Machine learning in agriculture domain: A state-of-art survey.
Artif. Intell. Life Sci. 2021, 1, 100010. [CrossRef]
13. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep Learning
vs. Traditional Computer Vision. In Proceedings of the Science and Information Conference; Springer: Berlin/Heidelberg,
Germany, 2019; pp. 128–144.
14. Behera, S.K.; Rath, A.K.; Mahapatra, A.; Sethy, P.K. Identification, classification & grading of fruits using machine learning &
computer intelligence: A review. J. Ambient Intell. Humaniz. Comput. 2020, 1–11. [CrossRef]
15. Bhargava, A.; Bansal, A. Fruits and vegetables quality evaluation using computer vision: A review. J. King Saud Univ.-Comput. Inf.
Sci. 2021, 33, 243–257. [CrossRef]
16. Sarvamangala, D.R.; Kulkarni, R.V. Convolutional neural networks in medical image understanding: A survey. Evol. Intell. 2021,
15, 1–22. [CrossRef] [PubMed]
17. Zheng, M.; Xu, J.; Shen, Y.; Tian, C.; Li, J.; Fei, L.; Zong, M.; Liu, X. Attention-based CNNs for Image Classification: A Survey. J.
Phys. Conf. Ser. 2022, 2171, 012068. [CrossRef]
18. Altaheri, H.; Alsulaiman, M.; Muhammad, G. Date Fruit Classification for Robotic Harvesting in a Natural Environment Using
Deep Learning. IEEE Access 2019, 7, 117115–117133. [CrossRef]
19. Nasiri, A.; Taheri-Garavand, A.; Zhang, Y.-D. Image-based deep learning automated sorting of date fruit. Postharvest Biol. Technol.
2019, 153, 133–141. [CrossRef]
20. Magsi, A.; Mahar, J.A.; Danwar, S.H. Date Fruit Recognition using Feature Extraction Techniques and Deep Convolutional Neural
Network. Indian J. Sci. Technol. 2019, 12, 1–12. [CrossRef]
21. Pérez-Pérez, B.; Vázquez, J.G.; Salomón-Torres, R. Evaluation of Convolutional Neural Networks’ Hyperparameters with Transfer
Learning to Determine Sorting of Ripe Medjool Dates. Agriculture 2021, 11, 115. [CrossRef]
22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In
Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–28 June
2018; pp. 4510–4520.
23. Hamid, Y.; Wani, S.; Soomro, A.B.; Alwan, A.A.; Gulzar, Y. Smart Seed Classification System Based on MobileNetV2 Architecture.
In Proceedings of the 2022 2nd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi
Arabia, 25–27 January 2022; pp. 217–222.
24. Gulzar, Y.; Hamid, Y.; Soomro, A.B.; Alwan, A.A.; Journaux, L. A Convolution Neural Network-Based Seed Classification System.
Symmetry 2020, 12, 2018. [CrossRef]
25. Naik, S.; Patel, B. Machine Vision based Fruit Classification and Grading—A Review. Int. J. Comput. Appl. 2017, 170, 22–34.
[CrossRef]
26. Sharmila, A.; Dhivya Priya, E.L.; Gokul Anand, K.R.; Sujin, J.S.; Soundarya, B.; Krishnaraj, R. Fruit Recognition Approach by
Incorporating MultilayerConvolution Neural Network. In Proceedings of the 2022 4th International Conference on Smart Systems
and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 January 2022; pp. 1042–1046.
27. Gill, H.S.; Khehra, B.S. An integrated approach using CNN-RNN-LSTM for classification of fruit images. Mater. Today Proc. 2021,
51, 591–595. [CrossRef]
28. Shoshan, T.; Bechar, A.; Cohen, Y.; Sadowsky, A.; Berman, S. Segmentation and motion parameter estimation for robotic
Medjoul-date thinning. Precis. Agric. 2021, 23, 514–537. [CrossRef]
29. Altaheri, H.; Alsulaiman, M.; Muhammad, G.; Amin, S.U.; Bencherif, M.; Mekhtiche, M. Date fruit dataset for intelligent
harvesting. Data Brief 2019, 26, 104514. [CrossRef] [PubMed]
30. Haidar, A.; Dong, H.; Mavridis, N. Image-Based Date Fruit Classification. In Proceedings of the 2012 IV International Congress
on Ultra Modern Telecommunications and Control Systems, St. Petersburg, Russia, 3–5 October 2012; pp. 357–363.
31. Alavi, N. Quality determination of Mozafati dates using Mamdani fuzzy inference system. J. Saudi Soc. Agric. Sci. 2012, 12,
137–142. [CrossRef]
32. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE
2021, 109, 43–76. [CrossRef]
33. Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. In Proceedings of the International
Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2018; pp. 270–279.
34. Arnold, T.B. kerasR: R Interface to the Keras Deep Learning Library. J. Open Source Softw. 2017, 2, 296. [CrossRef]
35. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. NIPS 2012, 60, 84–90.
[CrossRef]
36. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014,
arXiv:1409.1556v6.
37. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June
2016; pp. 2818–2826.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.