
A Project Report

on

Skin Disease Detection using Deep Learning

Submitted in partial fulfillment of the requirements of the degree of

Bachelor of Engineering

by

Pratham Saraiya (20UF15670CM046)
Pranjal Sawant (20UF15479CM047)
Mehul Phatangare (20UF15734CM041)
Ravi Rathod (20UF15520CM045)

Under the guidance of

Ms. Priyanka Ghule

Department of Computer Engineering


Shah and Anchor Kutchhi Engineering College
Chembur, Mumbai – 400088.
2023 – 2024
CERTIFICATE
This is to certify that the report of the project entitled

Skin Disease Detection using Deep Learning

is a bonafide work of

Pratham Saraiya (20UF15670CM046)
Pranjal Sawant (20UF15479CM047)
Mehul Phatangare (20UF15734CM041)
Ravi Rathod (20UF15520CM045)

submitted to the
UNIVERSITY OF MUMBAI
during semester VIII in partial fulfilment of the requirement for the award of
the degree of
BACHELOR OF ENGINEERING
in
COMPUTER ENGINEERING.

Ms. Priyanka Ghule


Guide

Prof. Uday Bhave
Head of the Department

Dr. Bhavesh Patel
Principal
Attendance Certificate
Date:

To,
The Principal,
Shah and Anchor Kutchhi Engineering College,
Chembur, Mumbai-88

Subject: Confirmation of Attendance

Respected Sir,

This is to certify that final year students Pratham Saraiya, Pranjal Sawant, Ravi Rathod and
Mehul Phatangare have duly attended the sessions on the days allotted to them during the
period from 08 January 2024 to 03 April 2024 for performing the project titled “Skin Disease
Detection using Deep Learning”. They were punctual and regular in their attendance.
Following is the detailed record of the students’ attendance.

Attendance Record:

Date    Pratham Saraiya    Pranjal Sawant    Ravi Rathod    Mehul Phatangare
        Present/Absent     Present/Absent    Present/Absent Present/Absent

Ms. Priyanka Ghule
Approval for Project Report for B. E.
Semester VIII

This project report entitled “Skin Disease Detection using Deep Learning” by Pratham Saraiya,
Pranjal Sawant, Mehul Phatangare and Ravi Rathod is approved for semester VIII in partial
fulfilment of the requirement for the award of the degree of Bachelor of Engineering.

Examiners

1.

2.

Guide

1.

Guide

2.

Date:

Place: Mumbai

Declaration

We declare that this written submission represents our ideas in our own words and where
others’ ideas or words have been included, we have adequately cited and referenced the orig-
inal sources. We also declare that we have adhered to all principles of academic honesty and
integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in
our submission. We understand that any violation of the above will be cause for disciplinary
action by the Institute and can also evoke penal action from the sources which have thus not
been properly cited or from whom proper permission has not been taken when needed.

Name of the Student Roll No. Signature

Pratham Saraiya 20UF15670CM046

Pranjal Sawant 20UF15479CM047

Mehul Phatangare 20UF15734CM041

Ravi Rathod 20UF15520CM045

Date:

Place: Mumbai

Acknowledgement

We would like to express our sincere gratitude to all those who have supported and guided
us throughout the process of conducting this report on ”Skin Disease Detection using Deep
Learning”. This endeavor would not have been possible without their valuable contributions
and assistance.

We are thankful to our college Shah and Anchor Kutchhi Engineering College for consider-
ing our project and extending help at all stages needed during our work of collecting infor-
mation regarding the project.

We are deeply indebted to our Principal Dr. Bhavesh Patel and Head of the Computer
Engineering Department Prof. Uday Bhave for giving us this valuable opportunity to do this
project. We express our hearty thanks to them for their assistance without which it would
have been difficult in finishing this project synopsis and project review successfully.

We take this opportunity to express our profound gratitude and deep regards to our guide
Ms. Priyanka Ghule for her exemplary guidance, monitoring and constant encouragement
throughout the course of this project.

This work would not have been possible without the collective efforts of these individuals
and organizations. While any shortcomings in this report are solely our responsibility, their
contributions have significantly enriched its content.

Abstract

Early and accurate detection of melanoma is crucial for successful treatment outcomes. In
this study, we leverage deep learning techniques to automate the classification of skin lesions
into benign and malignant categories. We developed and fine-tuned two distinct models:
a custom Proposed CNN and MobileNetV2 with transfer learning. After extensive train-
ing over various epochs and meticulous fine-tuning, MobileNetV2 emerged as the superior
model, demonstrating the highest accuracy in lesion classification. MobileNetV2, a lightweight
convolutional neural network architecture, underwent rigorous validation to verify its performance
consistency and reliability. The model was trained on a comprehensive dataset, ensuring
broad representation of skin lesions. Subsequent validation techniques, including cross-
validation and performance on unseen data, substantiated the model’s robustness, with the
results reflecting a high level of precision in distinguishing between benign and malignant
skin lesions.
The performance of the system was methodically assessed using standard evaluation metrics,
confirming the model’s potential as a reliable tool for the early detection of melanoma. The
application of such advanced computer-aided diagnosis systems marks a significant contri-
bution to medical diagnostics. By facilitating prompt and reliable lesion classification, our
approach paves the way for enhancing the clinical workflow and ultimately improving pa-
tient care outcomes.

Table of Contents
Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Survey of Existing system . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Limitation of Existing system or research gap . . . . . . . . . . . . . . . . 7
2.3 Problem Statement and Objective . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3. Software Requirement Specification . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Overall Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 External Interface Requirements . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 System Features and Requirements . . . . . . . . . . . . . . . . . . . . . . 13
3.4.1 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.2 Non-functional Requirements . . . . . . . . . . . . . . . . . . . . 13
3.4.3 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.4 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 Other Nonfunctional Requirements . . . . . . . . . . . . . . . . . . . . . . 14
4. Project Scheduling and Planning . . . . . . . . . . . . . . . . . . . . . . . . . 16
5. Proposed System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2 Details of Hardware & Software . . . . . . . . . . . . . . . . . . . . . . 18
5.3 Design Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.5 MobileNetV2 Model using Transfer Learning Source Code and Result . . . 27
5.6 Proposed CNN Model Source Code and Result . . . . . . . . . . . . . . . 32
6. Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.1 Testing with Multiple Datasets . . . . . . . . . . . . . . . . . . . . . . . . 36

6.2 Application Testing and Integration of MobileNetV2 . . . . . . . . . . . . 37
7. Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8. Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A. Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
A.1 Plagiarism Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

List of Figures

Figure 4.1 Gantt Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Figure 5.1 MobileNetV2 Architecture[21] . . . . . . . . . . . . . . . . . . . . 20


Figure 5.2 Process flow of MobileNetV2 Model . . . . . . . . . . . . . . . . . 22
Figure 5.3 Process Flow of Proposed CNN . . . . . . . . . . . . . . . . . . . . 25
Figure 5.4 Importing Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Figure 5.5 Defining Path and Count of Classes . . . . . . . . . . . . . . . . . . 28
Figure 5.6 Loading and Augmenting Training Dataset . . . . . . . . . . . . . . 28
Figure 5.7 Building MobileNetV2 model using Transfer Learning . . . . . . . . 29
Figure 5.8 MobileNetV2 Model Summary . . . . . . . . . . . . . . . . . . . . 29
Figure 5.9 Printing Accuracies . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 5.10 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 5.11 Validation Loss and Validation Accuracy achieved using MobileNetV2
using transfer learning . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 5.12 Final Accuracy achieved by incorporating MobileNetV2 model using
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Figure 5.13 Importing Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Figure 5.14 Loading and Augmenting training dataset . . . . . . . . . . . . . . . 33
Figure 5.15 Building CNN model . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 5.16 Plots for Proposed CNN . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 5.17 Epoch Readings for Proposed CNN . . . . . . . . . . . . . . . . . . 35

Figure 7.1 MobileNetV2 plot . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


Figure 7.2 Proposed CNN Plot . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 7.3 Classification Module . . . . . . . . . . . . . . . . . . . . . . . . . 42
Figure 7.4 Image Classified as Malignant . . . . . . . . . . . . . . . . . . . . . 43
Figure 7.5 Image Classified as Benign . . . . . . . . . . . . . . . . . . . . . . 44

Chapter 1

Introduction

The skin is an important organ of the body and consists of three layers: the epidermis, dermis and
hypodermis. Of these, the epidermis, which is exposed to the external environment, is
prone to various skin diseases, and melanoma is a major concern. Within the epidermis are
melanocytes, which are responsible for regulating skin pigmentation. Skin cancer develops
when these melanocyte cells multiply uncontrollably. The main cause of this condition is
excessive exposure to ultraviolet (UV) radiation from the sun, tanning beds and sunlamps.
It is very important to note that skin cancer can affect people regardless of age, skin color
or gender[1]. With revolutionary advances in the field of deep learning, it has become feasible
to automate the process of disease detection using tools and techniques provided by deep
learning algorithms. The use of CNN architectures, which work well for image processing
and analysis tasks, provides promising results in classifying skin lesions, such as accurately
detecting melanoma and melanocytic nevi.
The advent of the deep learning era, especially the development of Convolutional Neural
Networks (CNN), has revolutionized image classification and expanded its scope to der-
matology. However, the computational requirements of these network models make them
unsuitable for real-time image classification on mobile devices due to hardware constraints
such as limited memory, power and mobile computing capability. Thus, the need to address
these challenges prompted the development of lightweight
architectures such as MobileNetV2, which aim to reduce model parameters and computa-
tional cost while maintaining efficiency [5].
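The parameter savings that motivate such lightweight architectures can be illustrated with a short calculation. This is an illustrative sketch only; the kernel and channel sizes below are assumed example values, not figures from this report:

```python
# Weight counts for a standard convolution versus the depthwise separable
# convolution used by the MobileNet family (biases ignored for simplicity).

def standard_conv_params(k, c_in, c_out):
    # A k x k standard convolution mixes all input channels per filter.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise k x k filtering per channel, then a 1x1 pointwise mix.
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18432 weights
sep = depthwise_separable_params(3, 32, 64)  # 2336 weights
print(std, sep, round(std / sep, 1))
```

The reduction factor is roughly 1/c_out + 1/k^2, which is why depthwise separable designs cut both parameters and multiply-adds so sharply on mobile hardware.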
Timely identification of these diseases allows treatment at the right time, which can help
save lives. A reliable automated system can significantly reduce manual interventions and
examinations while providing more accurate and consistent results. It can also reduce the
high cost of testing for these diseases by assisting medical professionals in identifying the
problem early. Successful implementation could revolutionize dermatological diagnostics,
making way for faster, more accessible and more accurate healthcare solutions, with a
positive and substantial impact on public health. Such a system can serve medical
professionals and even non-medical users for research, preliminary diagnosis and treatment
planning. We aim to contribute towards digital healthcare advancements and provide
easy-to-use, accessible and efficient diagnostic tools.

1.1 Background
The project aims to develop a system that detects whether a skin lesion is melanoma using
deep learning technology. The approach combines knowledge of dermatology with deep
learning techniques. The key components include gathering a dataset of diverse skin lesion
images labelled as Benign or Malignant, designing a convolutional neural network
architecture, in particular MobileNetV2, for extracting features, and fine-tuning the
parameters.
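The components listed above can be sketched in Keras. This is a minimal illustrative sketch, not the project's actual source code (which appears in Chapter 5); `weights=None` is used here only to keep the sketch offline, whereas transfer learning as described would load `weights="imagenet"`:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical sketch of a binary (Benign / Malignant) lesion classifier
# built on MobileNetV2. weights=None avoids downloading pretrained weights;
# the real pipeline would use weights="imagenet" and then fine-tune.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the convolutional feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Fine-tuning would then unfreeze some of the top layers of `base` and continue training at a low learning rate.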

1.2 Motivation
Skin diseases are a common global health problem that affects people in all population
groups and significantly reduces their quality of life. The importance of timely and accu-
rate diagnosis cannot be overstated as it is crucial for effective treatment and cure of these
diseases. However, access to dermatology expertise is often limited, particularly in under-
served and remote areas. Such disparities in access to health care underscore the critical
need for innovative solutions. In response to this challenge, the integration of artificial intel-
ligence (AI) and deep learning technologies offers transformative potential. By automating
and improving the diagnostic process, these advanced computing technologies can signif-
icantly support healthcare professionals. They can help reduce the dermatology treatment
gap and ensure that more people receive an accurate diagnosis quickly. This project aims
to harness the power of artificial intelligence to create a reliable, efficient and accessible
diagnostic tool that will have a positive impact on global dermatology healthcare.

Computer Engineering 2
Chapter 2

Literature Review

2.1 Survey of Existing system


Authors S. Mathur and T. Jain proposed the use of the Dermnet dataset. The dataset contains
19,000 images, of which 85% and 15% are used for training and testing respectively. A
comparison between MobileNet, Inception V2, Inception V3 and Xception is presented, with
accuracies of 85.704%, 89.868%, 92.97% and 95.031% respectively. For MobileNet only 30
layers are used; for Inception V2 and Inception V3, the Google framework is used[1].
Author Rarasmaya Indraswari suggested using multiple datasets for comparison, where
data is classified into two classes for melanoma disease. Transfer learning is applied here,
which allows accurate classification with a minimal dataset. MobileNetV2 uses 19 inverted
residual bottleneck layers, with 32 filters applied on the first layer, producing an output of
size 7x7x1280 pixels. For comparison, the four algorithms used are MobileNetV2,
ResNet50V2, InceptionV3 and InceptionResNetV2, whose accuracies are 85%, 84%, 81%
and 78% respectively[2].
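An inverted residual bottleneck expands the channels with a 1x1 convolution, filters them with a depthwise convolution, and linearly projects back down with another 1x1 convolution. Its approximate weight count can be sketched as follows; the expansion factor t = 6 is MobileNetV2's usual default, and the channel sizes are assumed examples rather than values from the cited paper:

```python
def inverted_residual_params(c_in, c_out, t=6, k=3):
    """Approximate weight count of one MobileNetV2 inverted residual block
    (batch-norm parameters and biases ignored for simplicity)."""
    expanded = t * c_in
    expand = c_in * expanded      # 1x1 expansion convolution
    depthwise = k * k * expanded  # k x k depthwise convolution
    project = expanded * c_out    # 1x1 linear projection
    return expand + depthwise + project

# Example block: 32 -> 64 channels with the default expansion factor.
print(inverted_residual_params(32, 64))
```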
A comparative analysis by author Debasis Prasad Sahoo used 9,605 skin cancer images
categorized as malignant and benign for training and 1,000 samples for testing, applied to
various CNN models such as pre-trained VGG-16, ResNet50 and GoogleNet. Among these,
a basic CNN attained 83.30% accuracy, VGG gave 87.80% and ResNet gave 89.30% for the
two classes. Of these, ResNet performed best and gave the highest accuracy among the
models[3].
A hybrid model was implemented by author S. Kusuma for skin disease classification using
transfer learning. The authors implemented skin cancer detection using the AlexNet
architecture and a hybrid AlexNet model. They imported the dataset from Kaggle, which
consists of 1,800 melanoma images of both benign and malignant classes; after splitting, the
AlexNet model provided an accuracy of 99.25%, while the hybrid AlexNet architecture
provided a boosted accuracy of 99.75%[4].
Author Evgin Goceri put forth lightweight models like SqueezeNet, ShuffleNet and
MobileNet, which are used along with MobileNetV2 and RMNV2. The five diseases
identified are Hemangioma, Acne Vulgaris, Psoriasis, Rosacea and Seborrheic Dermatitis.
SqueezeNet achieved a training accuracy of 92.08% and a testing accuracy of 68.26%, while
MobileNetV2 achieved a training accuracy of 88.75% and a testing accuracy of 81.19%[5].
Authors C. K, P. C. Siddalingaswamy proposed a classification system that leverages con-
volutional neural networks (CNNs) which are based on transfer-learning to classify dermo-
scopic images into eight distinct categories. Multi-headed neural networks are used, in which
state-of-the-art pre-trained models are integrated through a functional model-based approach. The
ensemble technique of blending is employed to efficiently combine predictions, achieving a
balanced multi-class accuracy of 81.2% on the ISIC 2019 dataset[6].
Authors J. Samraj and R. Pavithra evaluated various techniques for Melanoma detection by
focusing on the importance of early accurate diagnosis. Computer-aided diagnosis (CAD)
systems play an important role in providing accurate classification results. The detection
rates are enhanced by using image processing methods and neural network-based models.
This paper focuses on enhancing melanoma image analysis. Key features include improving
image quality by applying filters, utilizing texture pattern extraction to enhance feature
magnitude, and optimal arrangement of feature extraction with careful attribute selection[7].
Author Wiem Abbes suggested a deep learning approach that gives promising results:
dermoscopic images require specialized equipment, whereas optical skin images
captured with a standard camera offer a cost-effective alternative. A transfer learning ap-
proach is used to overcome the scarcity of a large database for training deep networks in this
modality. A new CNN architecture and deep learning networks have been developed, with
the convolutional neural network achieving an impressive 97% detection rate, a significant
advancement in melanoma diagnosis[8].
The model built by author C. A. Hartanto involves the detection of actinic keratosis and
melanoma. In this paper a comparison between MobileNetV2 and R-CNN is shown. For
MobileNetV2 the maximum accuracy was 86.1%, while for R-CNN the maximum accuracy
was 87.2%. The loss values for MobileNetV2 and R-CNN were 2% and 0.3% respectively;
R-CNN had both the minimum loss and the maximum accuracy[9].
Author E. Megha put forth a method using deep learning with an R-CNN architecture and
Inception-V2, where a dataset of five skin classes such as tinea and cold sore was collected
and trained using R-CNN, and the model was integrated into a website. It provided an
accuracy of 98% with Inception-V2, which was the best, and achieved 90% accuracy with
R-CNN[10].
Author S. D. Sharma covered various methods for skin disease detection, including color-
based approaches, feature extraction, segmentation algorithms, and integration of multiple
algorithms. Studies utilize artificial neural networks (ANNs) and CNN models combined
with algorithms such as SVM. Techniques like data augmentation and transfer learning are
employed to improve accuracy. Overall, the research emphasizes the importance of combining
different methodologies for effective and early detection of skin diseases, contributing to
the development of reliable diagnostic tools[11].
The research by author L. Vincent showcased various innovative approaches to skin dis-
ease detection and classification. These methods leverage advanced technologies such as
deep learning and machine learning algorithms like Convolutional Neural Networks (CNNs)
and Support Vector Machines (SVMs), and image processing techniques. For instance, one
study employs a deep learning-based minimized U-Net model to effectively detect psoriasis
lesions from RGB images with high accuracy. Another paper introduces a fully automated
system for dermatological disease recognition using machine learning algorithms like CNN
and SVM, achieving impressive accuracy rates. Additionally, a CAD framework utilizing
sophisticated image processing methods and classification techniques achieves excellent per-
formance in early detection and classification of skin lesions. These studies demonstrate the
potential of combining advanced algorithms and image processing techniques for accurate
and efficient skin disease diagnosis[12].
Author E. Kanca highlighted the significance of early and accurate diagnosis of skin can-
cer, a prevalent global health concern. Dermoscopy, a non-invasive imaging technique, en-
hances visualization of skin lesions, aiding in diagnosis. Initially, machine learning-based
approaches focused on manual feature extraction, utilizing shape, color, and texture attributes
for classification. Recent studies propose advanced algorithms like the K-NN model for
classifying melanoma, nevus, and seborrheic keratosis. These methods entail preprocess-
ing, segmentation, feature extraction, and classification steps, aiming to improve diagnostic
accuracy. Evaluation metrics are utilized to assess performance. Challenges include class
imbalance in datasets and difficulties in distinguishing lesion classes. Future research aims
to address these issues through artifact removal, color enhancement, and exploring alterna-
tive machine learning approaches to enhance classification accuracy[13].
Author S. Shrimali introduced a mobile application named SkinScan, leveraging deep learn-
ing to diagnose various types of skin cancer accurately and efficiently. It addresses the press-
ing need for early detection due to the increasing incidence of skin cancer globally. The
application incorporates a fine-tuned EfficientNetB7 CNN model, achieving a validation ac-
curacy of 95% and an F1 score of 0.94. Through comparative analysis and optimization,
SkinScan surpasses existing tools, offering a user-friendly interface, self-assessment tests for
cancer risk, UV radiation guidelines, and comprehensive information on skin cancer types
and treatments. The application aims to be globally accessible and scalable, potentially re-
ducing skin cancer rates worldwide by facilitating early diagnosis. Future plans involve
clinical testing and deployment on major app stores[14].
Author S. Kohli introduced Dermatobot, an intelligent chatbot designed to diagnose com-
mon skin diseases using a combination of computer vision and natural language processing
(NLP). By leveraging image classification models like EfficientNet B4 and semantic similarity
measures from Universal Sentence Encoders, Dermatobot offers accurate diagnoses
based on user-provided symptoms and images. The chatbot also recommends mild remedies,
though it’s emphasized that it’s not a substitute for professional dermatological care. The ar-
chitecture employs microlithic backend components, REST API endpoints, and React.js for
the frontend. Future work includes expanding the disease classes, improving image dataset
quality, incorporating time series data for better diagnosis, and extending language support
beyond English[15].
Author V. Nivedita proposed a method for determining skin diseases utilizing image pro-
cessing techniques, Python, and the YOLOV3 tool. It addresses the challenge of accurately
diagnosing skin conditions, which can arise from various factors like genetics, aging, aller-
gies, and environmental factors. By analyzing images of the affected skin area, the system
aims to provide fast and reliable diagnosis without the need for physical examination. The
research covers four common skin diseases: acne, melanoma, blisters, and cold sores, pre-
senting a technique for diagnosing each. Previous related works are discussed, highlighting
methods such as k-means clustering, color image processing, and segmentation for disease
detection. The proposed method involves data collection, preprocessing, feature extraction,
and classification stages, leveraging tools like OpenCV and YOLOV3. Results show suc-
cessful detection and classification of skin diseases, paving the way for future advancements
in computer-aided diagnosis and treatment[16].
Author N. Abhvankar outlines various approaches and deep learning algorithms used for
skin cancer detection, focusing on melanoma and non-melanoma types. It discusses the rise
of skin cancer cases due to factors like ozone layer depletion and emphasizes the need for ac-
curate detection methods. Deep learning algorithms, mainly Convolutional Neural Networks
(CNNs), are highlighted as promising tools for aiding dermatologists in precise diagnosis.
Different models and combinations of algorithms are compared, showcasing their accuracy
and effectiveness in classifying various types of skin diseases. Pre-processing techniques
such as data augmentation, SMOTE, and image enhancement are described to improve model
performance. The survey also presents results from implemented models, including CNN
and ResNet50, indicating high accuracies achieved through preprocessing methods. Addi-
tionally, the survey touches upon the distribution of skin cancer cases by gender, anatomical
location, and age groups. It concludes by suggesting future directions for research, such as
hybrid models and further training with larger datasets, to enhance detection accuracy[17].
Author M. Hossain proposed a method for detecting skin cancer using Convolutional Neu-
ral Networks (CNNs), focusing on various versions of the ResNet model. With a dataset
of 6,599 images from Kaggle, the study trains and tests ResNet18, ResNet50, ResNet101,
and ResNet152 models. Results show increasing accuracy with deeper ResNet architectures,
with ResNet152 achieving the highest accuracy of 89.64%. The study emphasizes how
important deep learning frameworks like PyTorch and CNN models are in medical image
classification.
Overall, the research demonstrates the potential of ResNet models in accurately diagnosing
skin cancer, which could greatly assist dermatologists in improving diagnostic efficiency and
patient outcomes. Future work may involve exploring larger datasets and investigating novel
CNN architectures for further performance enhancements[18].
Author J. Alam proposed an effective way for detecting skin diseases using deep learning,
aiming to address the limitations of expensive and limited medical equipment for diagno-
sis. It suggests leveraging image-based diagnosis systems coupled with image processing
and deep learning techniques to detect skin diseases at an early stage. The proposed system
focuses on feature extraction and classification using convolutional neural networks (CNN)
and support vector machines (SVM). The study reviews existing techniques, highlighting
methods such as color image processing, deep learning architectures like CNN, and spe-
cific approaches for detecting various skin diseases. It utilizes the HAM10000 dataset for
training and evaluates multiple CNN models with different filter patterns and dropout rates.
The results demonstrate improvements in accuracy and efficiency, with the proposed models
achieving up to 85.14% accuracy in skin disease detection[19].
Authors M. S. Junayed, A. N. M. Sakib presented an innovative method utilizing deep con-
volutional neural networks (CNNs) to classify five distinct categories of Eczema, leveraging
a dataset gathered for this purpose. Addressing the scarcity of detection systems for Eczema,
the study employs data augmentation and regularization techniques to enhance performance.
The suggested model achieves an accuracy of 96.2%, surpassing previous state-of-the-art
methods. Evaluation metrics such as sensitivity, specificity, precision, and accuracy demon-
strate the effectiveness of the model, outperforming pre-trained models InceptionV3 and
MobileNetV1. The research underscores the importance of dataset enrichment and proposes
future directions for further improvement, including expanding the dataset and implementing
segmentation and detection techniques[20].
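The metrics named above (sensitivity, specificity, precision and accuracy) follow directly from the confusion-matrix counts of a binary classifier. The sketch below uses made-up counts for illustration, not results from any study cited in this survey:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision and accuracy from the
    confusion-matrix counts of a binary classifier."""
    sensitivity = tp / (tp + fn)  # recall on the positive (malignant) class
    specificity = tn / (tn + fp)  # recall on the negative (benign) class
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

# Illustrative counts only.
sens, spec, prec, acc = binary_metrics(tp=90, fp=10, tn=85, fn=15)
print(sens, spec, prec, acc)
```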

2.2 Limitation of Existing system or research gap


The existing system faces challenges due to an imbalance between the Benign and Malignant
classes within a limited dataset, which can constrain the model’s effectiveness. Moreover,
navigating compliance-related approval processes for medical applications adds complex-
ity to the development and deployment of such systems. Additionally, hardware limitations
may hinder real-time detection capabilities, impacting the system’s performance. Further-
more, the variance in skin lesion appearances presents a significant hurdle, requiring robust
algorithms capable of accurately identifying diverse types of lesions for effective diagnosis
and treatment. Addressing these challenges will be crucial for advancing the efficacy and
reliability of skin lesion detection systems in medical applications.

Computer Engineering 7
Chapter 2. Literature Review

2.3 Problem Statement and Objective


Problem Statement
Melanoma is a malignant form of skin cancer, and its timely and accurate diagnosis is
critical for providing successful treatment. Limited access to dermatologists and variability
in the appearance of skin lesions often lead to inaccurate or delayed diagnoses, which in
turn drive up diagnostic costs and worsen patient outcomes. Diagnosis on a purely visual
basis is difficult even for experienced dermatologists, which motivates the need for advanced,
efficient diagnostic tools. By accounting for false alarms and applying deep learning
techniques, this project aims to provide a reliable and robust tool for accurate, timely
diagnosis of melanoma by automating the diagnostic process.

Objective
We address the imperative need for accurate and timely melanoma diagnosis by harnessing
advanced technologies, particularly deep learning. Our project aims to leverage deep
learning techniques to create efficient and reliable diagnostic tools for melanoma.
Through the analysis of skin images, our focus lies on achieving early detection of poten-
tially malignant skin lesions, significantly enhancing the chances of successful treatment and
favorable patient outcomes.

• Early Detection: The primary goal of melanoma detection is to detect potentially
malignant skin lesions at an early stage. Early detection increases the likelihood of
successful treatment and better patient outcomes, as melanoma, when caught early, is
often highly curable.
• Accurate Diagnosis: The designed system accurately differentiates between benign
moles and potentially malignant ones, minimizing false positives and negatives and
ensuring that only truly suspicious lesions are flagged for further examination.

2.4 Scope
In healthcare, the primary emphasis in the scope of melanoma detection revolves around
early identification and diagnosis of this serious form of skin cancer. This entails extensive
education efforts directed at both healthcare professionals and the general public, aiming to
enhance the recognition of early signs and symptoms of melanoma. Additionally, dermatol-
ogy services are a cornerstone of melanoma detection within healthcare systems, ensuring
that individuals at risk or those with suspicious skin lesions have access to the expertise of
dermatologists.

Chapter 3

Software Requirement Specification

3.1 Introduction
Purpose
The main purpose of the proposed system is early, accurate and automatic detection of be-
nign or malignant skin lesions using the MobileNetV2 architecture. Skin disease has a
wide-ranging impact, with the potential to cause discomfort and pain, and it can sometimes
even prove fatal. In underserved or remote areas, access to dermatologists can be limited;
this project aims to bridge that gap by offering a reliable and accessible tool for preliminary
diagnosis. Convolutional Neural Networks (CNNs), a class of deep learning models, have
shown promising results in tasks involving image processing, video processing and object
detection. MobileNetV2 is a CNN architecture that is well suited for deploying the proposed
system in a mobile application. The main aim is to leverage deep learning techniques for
timely, accurate detection of skin disease, in turn contributing to the healthcare industry by
providing enhanced quality of care and improved outcomes.

Intended Audience
• Developers: By understanding the working, technical details, algorithms and code
structure, this group will be responsible for implementing and modifying the Mo-
bileNetV2 architecture.
• Project Stakeholders: It will help this group of people to get a high-level understanding
of the purpose and impact of the proposed system on the healthcare industry.
• Healthcare professionals and Dermatologists: The proposed system can be used for
assistance by the professionals in their work.
• End users : The proposed system will be used by individuals facing issues with skin
lesions who do not have proper dermatologist guidance within their vicinity.

Overall, the system will be used to analyse whether an individual has a lesion that indicates
skin cancer or not. The system is designed to assist in the early recognition of suspicious
moles or skin lesions, potentially indicating the presence of melanoma.

Product Scope
By leveraging deep learning techniques, the proposed system will automate skin disease
detection based on input images provided by end users. The developed system will be
accessible to users with varying hardware capabilities. It will assist users in the preliminary
screening of skin lesions. It is not intended to replace professional medical opinions or
advice. The system undergoes a rigorous testing process to ensure that accuracy is well
maintained, resulting in accurate, reliable and robust detection of skin lesions.

3.2 Overall Description


Product Perspective
The proposed system will give end users an intuitive user interface that can be adjusted to
fit a variety of devices, guaranteeing user accessibility. During detection, the system can
use existing dataset images as a point of reference. For some features, the system may
depend on external libraries, frameworks, or APIs.

Product Functions
1. Image upload: Using the interface, end users can upload pictures of their skin lesions.
2. Image processing: To improve quality and standardize input for the detection model,
the uploaded images will undergo pre-processing.
3. Skin Disease Detection: The submitted photos are analyzed using the MobileNetV2
deep learning model to determine whether the skin lesion is benign or malignant.
4. Result Delivery: The model’s integration with the interface will make the detection
findings accessible to dermatologists.

Together, these features make up the fundamental capabilities of the MobileNetV2-based
skin disease detection system. Every feature is designed to improve the system’s overall
usability, accessibility, and efficacy for the targeted users.

User Classes and Characteristics


• Patients: They do not need any prior medical training; a user-friendly interface will
make identification simple for them. They only need to click and upload the photographs
to the software in order to receive findings right away.
• Dermatologists and other medical professionals: They have medical expertise and
experience, which will aid in the diagnosis of various skin illnesses, and they may use
the system for early detection.
• Administrators: They are in charge of system maintenance and user management.
Administrators are in charge of monitoring system performance and ensuring data
privacy and security for users.
• Developers: They have technical competence as well as domain understanding, and
they can deploy, alter, and configure the system, as well as manage error reporting and
debugging tools.

Use cases
Users interact with a user-friendly web interface designed to safely capture images of their
skin condition. Once submitted, these images are processed by the MobileNetV2 deep
learning model, carefully trained to detect signs of melanoma. Using advanced image
analysis techniques, the system evaluates the images and promptly provides a possible
diagnosis. This streamlined process not only shortens the diagnostic stage but also improves
accuracy, allowing for timely medical intervention and potentially improving outcomes.

Operating Environment
• Backend and Server: Backend services will be provided with the help of Flask,
which can run on any OS such as Windows or macOS. The deep learning model
will be integrated with the Flask backend and exposed to the system as an API.
Flask will serve as the backend framework running all backend operations of the
system; Flask must therefore be installed, along with Python, since Flask is a
Python framework.
• Frontend : For the UI, a Flutter and Dart-based mobile application will be developed
and integrated with the Flask backend so that the model can provide predictions
based on the given input. To run the Flutter services, the Flutter environment must
first be installed on the system. Dart, an object-oriented language, is used within
the Flutter framework.
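As an illustrative sketch of how the Flask backend could expose the model as an API (the route name, JSON fields, and the stubbed model_predict function are hypothetical; in the real system the trained MobileNetV2 model would be loaded and called instead):

```python
# Hedged sketch of the Flask API that serves predictions to the Flutter
# frontend. model_predict is a placeholder; the real system would load the
# trained MobileNetV2 model via TensorFlow and call model.predict instead.
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def model_predict(arr: np.ndarray) -> float:
    """Placeholder for the real model; returns a fake malignancy score."""
    return float(arr.mean())  # stand-in value in [0, 1]

@app.route("/predict", methods=["POST"])
def predict():
    file = request.files["image"]  # image uploaded from the mobile app
    img = Image.open(io.BytesIO(file.read())).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype="float32") / 255.0  # normalize pixel values
    score = model_predict(arr)
    label = "malignant" if score >= 0.5 else "benign"
    return jsonify({"label": label, "score": score})

# To serve the API for the Flutter frontend:
# app.run(host="0.0.0.0", port=5000)
```

A mobile client would POST the image as multipart form data to the prediction route and parse the returned JSON.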

Design and Implementation Constraints


• Hardware Constraints:
– Server Requirements: The hardware of the server hosting the Flask application
and the TensorFlow-based model should meet the requirements for running the
model efficiently, i.e. adequate CPU, RAM and GPU.
– Device Compatibility: Performance of the app, built on a Flutter/Dart-based
frontend, may vary across devices, so it must be ensured that each device meets
the required capabilities.


• Software Constraints:
– Compatibility: Ensure that the Flask backend, TensorFlow, the API and all
Python libraries are compatible so they work seamlessly, and keep them up to
date.
– Libraries and Frameworks: The choice of a specific library or framework can
affect the functionality and stability of the system.

Assumptions and Dependencies


• Assumptions:
– Data Quality: It is assumed that the data used for training the model consists of
high-quality images that provide all required fields and accurately labelled
classes.
– Connectivity: Users should have stable internet connectivity while uploading
images in the mobile application in order to get an accurate prediction.
– User Proficiency: The user interface is designed to be accessible to all types of
users, provided the user has a compatible mobile phone.
• Dependencies:
– Deep Learning Models: The entire system depends on the model itself; any
changes or updates to MobileNetV2 or the CNN model, or outdated TensorFlow
libraries, can affect the system.
– Frontend: The application frontend is developed using Flutter and Dart; the
system relies on both remaining stable and compatible for efficient operation.
– Backend: The main body of the application is the backend, which is developed
using Flask, a framework of Python.

3.3 External Interface Requirements


User Interfaces
• User Image Upload Interface : The proposed system will offer an image upload interface
that allows users to upload images of skin lesions through the GUI via an image
upload button.
• Result Display Interface : The results that the model produces for the uploaded skin
lesion will be displayed, indicating whether the condition is malignant or benign.

Hardware Interfaces
• Mobile Device Interfaces: The proposed system should be compatible with various
mobile devices so that end users can interact with the application and upload images.
Components include a touchscreen interface and a camera to capture images of skin
lesions.

Software Interfaces
• Deep learning Framework Interfaces : The proposed system will interact with the deep
learning framework TensorFlow for implementing the MobileNetV2 model.
• Image Processing Module : The module will pre-process the uploaded images, before
sending them for detection.
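As a hedged sketch of this pre-processing step, an uploaded image might be resized and normalized before being sent to the model (the 224x224 target size, MobileNetV2’s default input, is an assumption here):

```python
# Sketch of the image pre-processing module: resize an uploaded image and
# scale pixel values to [0, 1] before passing it to the model. The 224x224
# target is MobileNetV2's default input size, assumed here for illustration.
import numpy as np
from PIL import Image

def preprocess(image: Image.Image, size=(224, 224)) -> np.ndarray:
    """Return a (1, H, W, 3) float32 batch ready for model.predict()."""
    image = image.convert("RGB").resize(size)
    arr = np.asarray(image, dtype="float32") / 255.0  # normalize to [0, 1]
    return arr[np.newaxis, ...]  # add a batch dimension

demo = Image.new("RGB", (600, 400), color=(180, 120, 90))  # placeholder image
batch = preprocess(demo)
print(batch.shape)  # (1, 224, 224, 3)
```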

Communications Interfaces
The proposed system will use Flask API for integration with the application for data ex-
change.

3.4 System Features and Requirements


3.4.1 Functional Requirements
• User Access: The application requires no registration or professional credentials;
any user can open and access it.
• Image Uploading: Images are uploaded through the mobile app; accepted formats
are JPEG and PNG.
• Detection of Melanoma: From the provided image, the model runs its analysis and
produces an output based on its prediction.
• Display of Result: Users provide images of skin lesions to the app’s classification
module, click the “Classify” button, and the app screen displays whether the
condition is benign or malignant.

3.4.2 Non-functional Requirements


• Accuracy and reliability: The system should provide reliable results within a few seconds.
• Availability: The system should be highly available and not down for maintenance.
• Accuracy: MobileNetV2 achieves the highest accuracy among the neural
architectures compatible with a mobile application.

3.4.3 Hardware Requirements


• Processor: Intel i5-10th Gen / AMD Ryzen-5 3500U
• Graphics: 2.00 GB Integrated Graphics / Nvidia MX-350 2.00 GB Graphics
• System type: 64-bit Operating System, x64-based processor.


3.4.4 Software Requirements


• Operating system : Windows 10 or 11 Home , Mac OS,Linux.
• IDE : Python IDLE, VsCode, Jupyter.
• Google Colaboratory
• Flutter Environment , Android Studio
• Tensorflow Libraries

3.5 Other Nonfunctional Requirements


Performance Requirements
1. Response Time : The proposed system should ensure a responsive user experience by
providing detection results in seconds after the user uploads the image.
2. Accuracy : The proposed system should ensure accurate detection results.
3. Resource Utilization : The proposed system requires sufficient storage on user’s de-
vice.

Safety Requirements
The planned system will not offer any medical services or prescriptions, protecting users
against inaccurate recommendations. It will respect user privacy and obtain informed
consent before conducting any examinations or sharing medical information.

Security Requirements
The proposed system will not provide any medical recommendations, and users will be
advised to consult a professional for advice and treatment. It will not retain any kind of
sensitive or personal user data. The system will provide simple, understandable instructions
to users regarding its usage.

Software Quality Attributes


• Usability : The proposed system user interface must be user-friendly ensuring easy
use for individuals with different technology proficiency.
• Reliability : The proposed system must be available to the users 99% of the time.
• Maintainability : The code of the proposed system must be properly documented for
maintenance and future updates.


Business Rules
The proposed system will not involve any payments or financial transactions from users,
as it is a diagnostic tool, not a commercial product.

Chapter 4

Project Scheduling and Planning

Figure 4.1: Gantt Chart

Chapter 5

Proposed System

5.1 Algorithm
Convolutional Neural Networks (CNNs) are a category of deep learning models particularly
suited to handling grid-like data such as images and videos. These models are widely used in
visual recognition tasks because of their ability to process and analyze visual data efficiently.
CNNs employ convolutional layers equipped with filters that systematically extract features
from the input. These layers are proficient at recognizing patterns and relationships at
different spatial scales, capturing complex features essential for detailed image analysis.
The architecture of a CNN typically includes multiple convolutional layers followed by
pooling layers that perform downsampling to reduce spatial dimensions and computational
complexity.
The flexibility of CNNs extends beyond basic image classification; they are fundamental
to computer vision tasks such as video processing, object detection, and image segmentation.
The convolutional layers serve not only to detect features but also, through subsequent dense
layers, to classify visual data, making CNNs highly effective for comprehensive image
processing tasks. Advances in CNN architectures have significantly affected fields requiring
detailed visual understanding, including medical image analysis, autonomous driving, and
other areas where accuracy and efficiency are crucial.
MobileNetV2 stands out as a specialized CNN architecture designed to operate on mobile
and edge devices where computational resources are constrained. It leverages depthwise
separable convolutions, a technique that significantly reduces the computational load without
compromising model accuracy. This feature is crucial for creating lightweight models that
are deployable on devices with limited processing capabilities, such as smartphones, IoT
devices, and embedded systems. MobileNetV2’s efficient network structure enables it to
support real-time applications on mobile platforms, making it increasingly popular for
on-device deep learning applications where speed and efficiency are essential.


Layers of CNN architecture


1. Convolutional Layer: This is the core component of a CNN. It uses a variety of filters on
the input image to generate feature maps that emphasise edges, textures, and objects. Each
filter identifies distinct parts of the image.

2. Activation Layer: After convolution, the ReLU (Rectified Linear Unit) activation
function is employed to introduce nonlinearity into the model, helping it learn more
complicated patterns.

3. Pooling Layer: Also referred to as subsampling or downsampling, this layer minimizes
the spatial size of feature maps to simplify computation and extract dominant features,
allowing for feature identification that is invariant to scale and orientation changes.

4. Fully Connected Layer: Following numerous convolutional layers along with pooling
layers, the neural network performs high-level reasoning. The features are flattened into a
vector and fed into fully connected layers that behave similarly to a regular neural network.

5. Output Layer: The final layer generates a probability distribution over the target classes
using a softmax or sigmoid activation function (depending on the task).
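As an illustrative sketch, the five layer types above can be assembled into a minimal Keras model; the filter counts and layer sizes are assumptions for demonstration, not the exact configuration used in this project.

```python
# Minimal sketch of the five CNN layer types described above, assembled
# into a small Keras model. Filter counts and dense sizes are illustrative
# assumptions, not the project's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),           # input images
    layers.Conv2D(32, (3, 3), activation="relu"),  # 1-2: convolution + ReLU
    layers.MaxPooling2D((2, 2)),                   # 3: pooling (downsampling)
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # flatten feature maps
    layers.Dense(128, activation="relu"),          # 4: fully connected layer
    layers.Dense(1, activation="sigmoid"),         # 5: output (benign/malignant)
])
print(model.output_shape)  # (None, 1)
```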

5.2 Details of Hardware & Software


• Hardware
1. CPU (Central Processing Unit) : A multicore CPU is preferable for tasks such as
data processing.
2. GPU (Graphics Processing Unit) : A high-end GPU helps speed up the
computations.
3. RAM (Random Access Memory) : Sufficient RAM, preferably 8 GB or more, supports
handling large datasets.
4. Internet Connectivity : Stable internet connectivity for accessing online resources.
• Software
1. Operating system : Choose an operating system compatible with deep learning
tasks.
2. Deep learning Framework : Select a framework for developing and training deep
learning models, such as Keras or PyTorch.
3. Python : Ensure that a proper Python version is installed with the relevant libraries.
4. IDE (Integrated Development Environment) : Choose an appropriate IDE for cod-
ing, debugging and managing project files.
5. Model Deployment tools : For deploying the model, choose a framework like
Flask or TensorFlow.

5.3 Design Details


The project design revolves around MobileNetV2, a potent convolutional neural network
(CNN) tailored for image classification in skin cancer detection. The meticulously cu-
rated dataset undergoes rigorous preprocessing and augmentation to enhance diversity. Mo-
bileNetV2 serves as the core architecture, augmented with additional layers for binary clas-
sification. Training incorporates Binary Cross-Entropy loss, Adam optimizer, and early stop-
ping to mitigate overfitting. Model evaluation involves standard metrics, including accuracy
and sensitivity. We have compared alternative architectures along with fine-tuning, and de-
veloped a user-friendly application. The design prioritizes MobileNetV2’s efficiency, lever-
ages transfer learning advantages, and facilitates multi-scale feature extraction, establishing
a robust foundation for accurate skin cancer classification.

5.4 Methodology
MobileNetV2, a core component of this skin cancer classification project, stands out as a
convolutional neural network (CNN) architecture designed to excel in both efficiency and
high-performance image classification tasks. It represents an evolutionary step from the orig-
inal MobileNet, introducing several pivotal innovations that render it a compelling choice
across a broad spectrum of computer vision applications. Notably, its efficiency is a hall-
mark, particularly in the context of medical imaging tasks such as skin cancer classifica-
tion, where considerations of model size and computational resources are paramount. Mo-
bileNetV2 achieves this by harnessing depth-wise separable convolutions, a critical element
that effectively reduces parameters and computational demands, resulting in a model that is
both lightweight and highly efficient. Such efficiency is vital for real-time applications and
resource-constrained settings, ensuring that the model’s performance remains responsive and
accessible.

Further enhancing MobileNetV2’s capabilities is the introduction of “inverted residuals”
and “linear bottlenecks.” These innovations are pivotal for enhancing information flow within
the network. By incorporating residual connections, the model gains the capacity to learn
both shallow and deep features, amplifying its ability to capture intricate patterns within skin
lesion images. The linear bottlenecks further enrich feature learning, making MobileNetV2
an exceptional choice for tasks requiring the analysis of intricate and multi-scale visual pat-
terns. The architecture’s ability to facilitate multi-scale feature extraction is especially valu-
able in the domain of skin cancer classification, given the significant variations in size and
characteristics among skin lesions. This multi-scale approach equips the model to detect
subtle details as well as more prominent malignancy indicators, providing a comprehensive
foundation for image analysis.

Figure 5.1: MobileNetV2 Architecture[21]


In the context of this project, MobileNetV2’s unique ability to balance efficiency with ef-
fectiveness enables the development of a skin cancer classification model that upholds high
accuracy without overwhelming computational resources. This attribute holds substantial
significance in healthcare applications, where seamless integration into clinical workflows
is essential without imposing excessive computational demands. Leveraging the pre-trained
weights of MobileNetV2, often derived from extensive image datasets, provides a valuable
head start in the development of accurate skin cancer classification models. These pre-trained
weights already encapsulate a wide spectrum of image features learned from diverse data
sources, rendering MobileNetV2 as a robust feature extractor. This transfer learning capa-
bility not only accelerates the model development process but also elevates its performance,
saving time and resources while ensuring the model’s effectiveness in skin cancer classifica-
tion.
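A hedged sketch of this transfer-learning setup is shown below: MobileNetV2 as a frozen feature extractor with a small binary-classification head (the 1024-unit dense layer and Dropout(0.5) follow Table 5.1; weights=None keeps the snippet self-contained, whereas the project would use the ImageNet pre-trained weights).

```python
# Sketch of the transfer-learning setup described above: MobileNetV2 as a
# frozen feature extractor with a binary-classification head. weights=None
# avoids a download here; the project would use weights="imagenet".
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pre-trained layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1024, activation="relu"),  # head sizes follow Table 5.1
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
```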

Data Collection and Splitting: The ISIC 2019 and ISIC 2020 datasets were obtained
from the International Society for Digital Imaging of the Skin and its ISIC project[22].
The total number of images is 10,192 where data is split into 60:20:20 for training, testing,
and validation. We utilized a dataset consisting of 6112 instances for training, with an equal
distribution of 3056 malignant and 3056 benign cases. To evaluate the performance of our
model, we employed a test set comprising 2040 instances, equally split between 1020 malig-
nant and 1020 benign cases. Additionally, we employed a validation set of 2040 instances,
mirroring the test set’s distribution with 1020 malignant and 1020 benign cases. This dataset
distribution ensures a balanced representation of both malignant and benign cases across all
phases of model development, facilitating robust evaluation and validation of our proposed
approach. Unlike the larger ISIC datasets, the Dermis dataset contains a more detailed col-
lection of 1000 dermatology images[23]. This dataset comes from a collaboration involving
dermatology institutions that provide annotated clinical images for educational and research
purposes. In particular, the Dermis dataset contains 500 malignant and 500 benign cases,
which provides a balanced basis for training and testing machine learning mod-
els. In our project, the Dermis dataset was strategically divided into training, testing, and
validation segments to ensure comprehensive model training and accurate performance eval-
uation. The training consisted of 720 uniformly distributed images with 360 malignant and
360 benign cases. The test set contained 200 images, again equally divided into 100 malig-
nant and 100 benign cases, to evaluate the initial performance of our models. In addition,
the validation set contained 80 images, including 40 malignant and 40 benign cases, which
were used to fine-tune the models and verify their diagnostic accuracy under controlled con-
ditions. This balanced distribution in a smaller dataset like Dermis is particularly useful for
high-fidelity model training where data quality and specificity are critical to developing reli-
able diagnostic tools.
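The split described above can be sketched as follows; the file names are placeholders, and splitting each class separately is one way to keep the benign/malignant balance reported in the text.

```python
# Sketch of the 60:20:20 train/test/validation split described above.
# File names are placeholders; splitting each class separately preserves
# the benign/malignant balance reported in the text.
import random

def split_60_20_20(items, seed=42):
    """Shuffle a list and return (train, test, val) at roughly 60:20:20."""
    rng = random.Random(seed)
    items = items[:]
    rng.shuffle(items)
    n_train = int(len(items) * 0.6)
    n_test = int(len(items) * 0.2)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])

malignant = [f"mal_{i}.jpg" for i in range(5096)]  # placeholder file names
benign = [f"ben_{i}.jpg" for i in range(5096)]
mal_train, mal_test, mal_val = split_60_20_20(malignant)
ben_train, ben_test, ben_val = split_60_20_20(benign)
train = mal_train + ben_train  # balanced training set
print(len(train), len(mal_test) + len(ben_test), len(mal_val) + len(ben_val))
```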

Model Building and Training: The Dermis dataset undergoes training using a neural net-
work architecture meticulously crafted to extract meaningful features and classify data ef-
fectively. The model architecture comprises several layers, each tailored to handle specific
aspects of the data and optimize learning. At the forefront of the architecture are convo-
lutional layers, where feature maps are generated by convolving input data with learnable
filters. These filters vary in number and size across different configurations. Some
configurations start with a large number of filters, such as 1024 or 512, gradually decreasing
to 256, 128, or 64 in subsequent layers. The choice of filter size, along with the kernel
size, is crucial for
capturing relevant patterns at different scales within the data. A pooling layer follows the
convolutional layer to downsample the feature map, reducing computational complexity and
preventing overfitting. Max pooling and global average pooling are used in different
configurations, providing different mechanisms for feature selection and abstraction. Max
pooling keeps the strongest activation in each pooling window, while global average pooling
computes the average value over the entire feature map, providing a more general representation.
Dropout layers are strategically inserted after certain convolutional layers to improve
generalization and prevent the model from memorizing noise in the training data. Dropout
randomly disables some neurons during training, allowing the network to learn more robust
features and reducing interdependencies between neurons. The learning rate is an important
hyperparameter that determines the step size of gradient descent optimization and is carefully
chosen to balance training speed and convergence. It varies between 0.001 and 0.0001
across different configurations, with occasional adjustments during training using learning
rate reduction strategies.

Figure 5.2: Process flow of MobileNetV2 Model

Training occurs over multiple epochs, typically 25 or 50 iterations
through the entire dataset, allowing the model to gradually improve its performance. How-
ever, to prevent overfitting and ensure optimal generalization, early stopping mechanisms are
employed. These mechanisms monitor validation losses and stop training if no improvement
is observed over a predefined number of epochs specified by the patience parameter.
The model’s performance is evaluated through accuracy, measured as the percentage of cor-
rectly classified instances. Across various configurations, the model achieves accuracies
ranging from 75% to 83%, reflecting its ability to effectively classify the Dermis dataset.
In addition to these core elements, certain configurations incorporate additional fine-tuning
techniques, such as learning rate reduction by a factor of 5 at specific epochs. This com-
prehensive approach to model training and hyperparameter tuning underscores the iterative
nature of deep learning model development, where adjustments are made systematically to
achieve optimal performance on the target task.
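The early stopping and learning-rate reduction mechanisms described above can be sketched with Keras callbacks; the tiny model and random data below are placeholders so the snippet runs standalone, while the monitor and patience settings mirror the mechanisms described in the text.

```python
# Sketch of the training safeguards described above: early stopping watches
# validation loss with a patience parameter, and the learning rate is reduced
# when validation loss plateaus. The tiny random dataset is a stand-in; the
# real model trains on 256x256 lesion images.
import numpy as np
import tensorflow as tf
from tensorflow.keras import callbacks, layers, models

model = models.Sequential([
    tf.keras.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

cbs = [
    callbacks.EarlyStopping(monitor="val_loss", patience=10,
                            restore_best_weights=True),
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=5),
]

x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
history = model.fit(x, y, validation_split=0.25, epochs=5,
                    callbacks=cbs, verbose=0)
print(len(history.history["loss"]))
```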
The ISIC Archive Dataset undergoes training using a meticulously crafted neural network ar-
chitecture tailored to optimize feature extraction and classification performance. The model
architecture includes convolutional layers, pooling layers, and dense layers, each serving a
specific purpose in the learning process. Beginning with convolutional layers, the model
employs varying numbers of filters, ranging from 1024 to 32, and diverse kernel sizes to
capture essential features at different scales within the data. Following the convolutional
layer, a pooling layer using max pooling or global average pooling is applied to downsample
the feature map, thereby reducing the computational complexity and preventing overfitting.


Dense layers come into play after the feature extraction stage, facilitating the learning of
high-level representations. The architecture incorporates dense layers with different config-
urations of neurons, ranging from 256 to 1024, followed by dropout layers with dropout rates
of 0.25 or 0.5 to mitigate overfitting and enhance generalization. The learning rate, a critical
hyperparameter governing the rate of model optimization, is carefully selected to balance
training speed and convergence. It remains consistent at 0.001 across various configurations,
with occasional adjustments during training using learning rate reduction strategies, such as
reducing the learning rate by a factor of 5 or 6 at specific epochs. Training proceeds over
multiple epochs, typically 25 or 100 iterations through the entire dataset, allowing the model
to gradually refine its predictive capabilities. However, to prevent overfitting and ensure op-
timal generalization, the model employs strategies such as freezing base layers and reducing
the learning rate during training. The model’s performance is assessed through accuracy,
measured as the percentage of correctly classified instances. Across different configurations,
the model achieves accuracies ranging from 70.64% to an impressive 92.01%, demonstrating
its effectiveness in classifying the ISIC Archive Dataset. This comprehensive approach to
model training and hyperparameter tuning underscores the iterative nature of deep learning
model development, where adjustments are made systematically to achieve optimal perfor-
mance on the target task.


Dermis Dataset
1. Conv: 1024 (Max) | Dense: 1024, Dropout(0.5) | LR: 0.001 | Epochs: 25 | Accuracy: 77.00
2. Conv: 512 (Max) | Dense: 1024, Dropout(0.5) | LR: 0.001 | Epochs: 25 | Accuracy: 75.00
3. Conv: 256 (Max) -> 128 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.001 | Epochs: 25 | Accuracy: 80.00
4. Conv: 512 (Max) -> 256 (Max) -> 128 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.0001, EarlyStopping(patience=10) | Epochs: 25 | Accuracy: 79.50
5. Conv: 512 (Max) -> 256 (Max) -> 128 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.0001, EarlyStopping(patience=15) | Epochs: 50 | Accuracy: 77.50
6. Conv: 512 (Max) -> 256 (Max) -> 128 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.0001 | Epochs: 50 | Accuracy: 80.00
7. Conv: 256 (Max) -> 128 (Max) -> 64 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.0001 | Epochs: 50 | Accuracy: 82.00
8. Conv: 256 (Max) -> 128 (Max) -> 64 (Global avg) | Dense: 512, Dropout(0.5) | LR: 0.001, ReduceLR | Epochs: 50 | Accuracy: 82.00
9. Conv: 256 (Max) -> 128 (Max) -> 64 (Global avg) | Dense: 1024, Dropout(0.5) | LR: 0.001 | Epochs: 50 | Accuracy: 83.00

ISIC Archive Dataset
10. Conv: 1024 (Max) -> 512 (Max) -> 256 (Global avg), base[-10] frozen | Dense: 1024, Dropout(0.5) | LR: 0.001, ReduceLR (factor 5) | Epochs: 25 | Accuracy: 70.64
11. Conv: 1024 (Max) -> 512 (Max) -> 256 (Global avg), base[-10] frozen | Dense: 1024, Dropout(0.5) | LR: 0.001, ReduceLR (factor 6) | Epochs: 25 | Accuracy: 77.18
12. Conv: 32 -> 64 -> 128 -> 256 -> 512 (all Max) | Dense: 1024, Dropout(0.25) + Dropout(0.5) | LR: 0.001 | Epochs: 25 | Accuracy: 90.04
13. Conv: 32 -> 64 -> 128 -> 256 -> 512 -> 512 (all Max) | Dense: 1024, Dropout(0.5) | LR: 0.001 | Epochs: 100 | Accuracy: 92.01

Table 5.1: Applied Transfer Learning
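The "base[-10] frozen" entries in Table 5.1 refer to freezing all pre-trained layers except the last ten. A minimal illustration of the bookkeeping involved (in Keras this corresponds roughly to setting `layer.trainable = False` on `base_model.layers[:-10]`; the layer count used below is an illustrative assumption):

```python
def freeze_base(num_layers, keep_last=10):
    """Return a per-layer trainable mask: only the final `keep_last`
    layers remain trainable; all earlier base layers are frozen."""
    return [i >= num_layers - keep_last for i in range(num_layers)]

# Illustrative layer count for a MobileNetV2-style backbone.
mask = freeze_base(num_layers=154, keep_last=10)
```

Freezing the bulk of the backbone keeps the pre-trained ImageNet features intact while the small trainable tail and the new dense head adapt to the lesion data.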


Implementing with Proposed CNN Model:

Figure 5.3: Process Flow of Proposed CNN

The development of our proposed CNN model followed a structured iterative refinement ap-
proach, designed to optimize performance in melanoma classification. The model architec-
ture commenced with a simple CNN configuration, incorporating an input layer designed to
process 256x256 pixel images, multiple convolutional layers with respective kernels for fea-
ture extraction, activation functions to introduce non-linearities, pooling layers for downsam-
pling, and densely connected layers for classification, concluding with a sigmoid-activated
output layer for binary classification (benign versus malignant). The initial configuration was trained on the Dermis dataset using a stochastic gradient descent optimizer with a learning rate of 0.001 for 25 epochs. This first iteration yielded a training accuracy of 50.14% and a validation accuracy of 48.75%, which served as the baseline for subsequent refinements. The second iteration, still on the Dermis dataset, increased the number of convolutional layers and reduced the dense layers while maintaining a dropout rate of 0.5 to curb overfitting. These changes improved accuracy to 76.11% in training and 81.25% in validation. The third iteration introduced global average pooling in the final layers to enhance generalization and extended training to 30 epochs. These modifications significantly narrowed the performance gap, achieving 82.92% training accuracy and 85% validation accuracy on the Dermis dataset. To overcome data limitations, we transitioned from the Dermis dataset to the more extensive ISIC dataset while maintaining consistent parameters, which boosted the accuracies to 87.60% in training and 88.87% in validation. However, attempts to push the model's performance through an extended 100-epoch training regimen revealed a susceptibility to overfitting, as evidenced by a decline in testing accuracy after an initial rise. This comprehensive, iterative process not only honed the architectural specifics of our model but also underscored the critical balance between model complexity and generalization capability required for robust melanoma detection.
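The downsampling behaviour of the stacked pooling layers can be traced with simple arithmetic: starting from a 256x256 input, each 2x2 stride-2 max-pooling stage halves the spatial resolution. A sketch of that bookkeeping (assuming 'same'-padded convolutions between pooling stages, so pooling alone sets the spatial size):

```python
def spatial_sizes(input_size=256, pool_stages=5):
    """Track the feature-map side length after each 2x2, stride-2
    pooling stage (assumes 'same'-padded convolutions in between)."""
    sizes = [input_size]
    for _ in range(pool_stages):
        sizes.append(sizes[-1] // 2)
    return sizes

# A 256x256 input shrinks stage by stage: [256, 128, 64, 32, 16, 8].
# Global average pooling then collapses each 8x8 map to a single value.
print(spatial_sizes())
```

This is why the final global-average-pooling layer produces a compact feature vector whose length equals the filter count of the last convolutional block.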

Dermis Dataset
1. Conv: 32 (Max) -> 64 (Max) -> 128 (Global avg) | Dense: 1024, Dropout(0.5) | Epochs: 25 | Testing Accuracy: 50.00
2. Conv: 32 -> 64 -> 128 -> 256 -> 512 -> 1024 (all Max) | Dense: 64, 128, Dropout(0.5) | Epochs: 25 | Testing Accuracy: 78.00
3. Conv: 32 -> 64 -> 128 -> 256 -> 512 (Max) -> 1024 (Global avg) | Dense: 64, 128, Dropout(0.5) | Epochs: 30 | Testing Accuracy: 84.50

ISIC Archive
4. Conv: 32 -> 64 -> 128 -> 256 -> 512 (Max) -> 1024 (Global avg) | Dense: 1024, Dropout(0.5) | Epochs: 30 | Testing Accuracy: 88.92

Table 5.2: Proposed CNN


5.5 MobileNetV2 Model using Transfer Learning Source Code and Result

Figure 5.4: Importing Libraries


Figure 5.5: Defining Path and Count of Classes

Figure 5.6: Loading and Augmenting Training Dataset
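The augmentation step shown in Figure 5.6 typically combines random flips, rotations, and zooms (in Keras, via an `ImageDataGenerator` or augmentation layers). As a minimal stand-in for one such transform, a horizontal flip on a pixel grid is just a per-row reversal; the tiny 2x3 "image" below is a toy illustration, not data from our pipeline:

```python
def horizontal_flip(image):
    """Mirror an image left-to-right: reverse each pixel row."""
    return [list(reversed(row)) for row in image]

tiny = [[1, 2, 3],
        [4, 5, 6]]
print(horizontal_flip(tiny))  # [[3, 2, 1], [6, 5, 4]]
```

Applying such label-preserving transforms at training time effectively enlarges the dataset and discourages the model from memorizing lesion orientation.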


Figure 5.7: Building MobileNetV2 model using Transfer Learning

Figure 5.8: MobileNetV2 Model Summary


Figure 5.9: Printing Accuracies

Figure 5.10: Parameters

Figure 5.11: Validation Loss and Validation Accuracy achieved using MobileNetV2 with transfer learning


Figure 5.12: Final Accuracy achieved by incorporating MobileNetV2 model using Transfer
Learning


5.6 Proposed CNN Model Source Code and Result

Figure 5.13: Importing Libraries


Figure 5.14: Loading and Augmenting training dataset


Figure 5.15: Building CNN model


Figure 5.16: Plots for Proposed CNN

Figure 5.17: Epoch Readings for Proposed CNN

Chapter 6

Testing

6.1 Testing with Multiple Datasets


Integration of Diverse Datasets: During the testing phase, our approach capitalized on the
robustness provided by two distinct datasets: the Dermis dataset and the ISIC dataset. This
strategy was instrumental in exposing our models—both the custom-designed CNN and the
MobileNetV2 with transfer learning—to a wide and varied collection of data. The inclusion
of diverse datasets ensured that our models encountered a broad spectrum of melanoma im-
ages, which is critical for developing a more generalized and effective classification system.

Testing Methodology and Execution: Each dataset presents unique challenges and characteristics: the Dermis dataset is known for its detailed clinical case reports, while the ISIC dataset is recognized for its extensive library of skin lesion images. Our testing involved a thorough evaluation of the models' performance on these datasets, both individually and in combination.
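Evaluating each dataset individually and in combination reduces to computing and comparing per-dataset accuracies. A minimal sketch of that bookkeeping (the label lists below are fabricated toy data for illustration, not our actual test results):

```python
def accuracy_pct(y_true, y_pred):
    """Percentage of correctly classified instances."""
    assert len(y_true) == len(y_pred) and y_true
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

# Toy labels: 0 = benign, 1 = malignant (illustrative only).
dermis_acc = accuracy_pct([0, 1, 1, 0], [0, 1, 0, 0])    # 75.0
isic_acc = accuracy_pct([1, 0, 1, 1], [1, 0, 1, 1])      # 100.0
combined = accuracy_pct([0, 1, 1, 0, 1, 0, 1, 1],
                        [0, 1, 0, 0, 1, 0, 1, 1])        # 87.5
```

Reporting the metric per dataset as well as on the pooled test set makes it visible when a model has overfit the characteristics of one source.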

Performance Analysis: The models were rigorously tested against a multitude of image
variations, encompassing a range of melanoma presentations. This extensive testing revealed
that the MobileNetV2 model, utilizing transfer learning, consistently outperformed the cus-
tom CNN model. The efficiency of MobileNetV2 can be attributed to its advanced archi-
tectural design, which has been fine-tuned for high-precision tasks across different datasets,
proving its adaptability and accuracy in our application.

Comparative Results: The results indicate that the MobileNetV2 model achieved superior
accuracy, demonstrating its robustness and reliability. The extensive data provided by both
Dermis and ISIC datasets facilitated the comprehensive training of the models, enhancing
their ability to identify melanoma accurately. These findings underscore the importance of
using heterogeneous data to train deep learning models for medical diagnosis, particularly in dermatology.

6.2 Application Testing and Integration of MobileNetV2


Objective of Application Testing: The focal point of application testing was to rigorously
evaluate the successful integration of the MobileNetV2 model within our Flask API-powered
application. The MobileNetV2 model, carefully trained on the ISIC dataset, plays a key
role in accurately classifying skin lesions as benign or malignant. Ensuring the seamless
functionality of this model within the application’s ecosystem was paramount, as it directly
influences the utility and reliability of the application in clinical decision-making processes.

Testing Methodology: Our methodology for application testing was two-pronged: First, we
conducted a series of functional tests to verify the model’s integration, scrutinizing every
aspect of the application’s operational pipeline for potential discrepancies or failures. This
encompassed input validation, model response times, and accurate output rendering. Second,
we executed a set of user-centric tests to ascertain the application’s usability, focusing on the
user interface (UI) and the user experience (UX) to ensure intuitiveness and ease of use.

Integration Verification: A significant portion of the testing phase was devoted to confirming the integration of the MobileNetV2 model. This entailed validating the Flask API endpoints, ensuring they communicated correctly with the model and maintained data integrity throughout the process. The application's back-end logs and front-end response elements were thoroughly inspected for consistency and accuracy after each classification request.
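The endpoint validation described above can be pictured against a minimal version of such a Flask API. This is a sketch under stated assumptions: the `/classify` route name, the 0.5 threshold, and the stubbed `predict_probability` function are illustrative choices, not the application's actual code, which would load and call the trained MobileNetV2 model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_probability(image_bytes):
    """Stub for the trained model: returns a malignancy probability.
    The real application would preprocess the image and call
    model.predict on the loaded MobileNetV2 network."""
    return 0.5

def label_from_score(p, threshold=0.5):
    """Map the sigmoid output of the binary classifier to a label."""
    return "malignant" if p >= threshold else "benign"

@app.route("/classify", methods=["POST"])
def classify():
    # Read the raw request body as the image payload and classify it.
    p = predict_probability(request.get_data())
    return jsonify({"score": p, "label": label_from_score(p)})
```

A functional test then posts an image to the endpoint and asserts on the JSON fields, mirroring the back-end log and front-end response checks described above.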

Classification Robustness: In parallel with the integration tests, we placed a high emphasis on the robustness of the classification functionality. A comprehensive suite of tests with varied
images was executed to evaluate the model’s classification consistency. This step was vital in
measuring the application’s performance and reliability, especially given the critical nature
of medical diagnostics.

Performance Evaluation: Performance metrics such as classification accuracy, response time, and system stability were meticulously recorded. These metrics provided invaluable
insights into the application’s efficiency and identified any areas that required optimization.

Analysis: The results from our application testing demonstrated that the MobileNetV2
model was effectively integrated into the application framework. The model showcased
high accuracy in classification, with prompt response times that meet the expectations for
real-time analysis. The stability of the application during these tests further reinforced the model's readiness for deployment in a clinical environment.

Chapter 7

Results and Analysis

In the quest to develop a reliable application for melanoma detection, we explored two
distinct approaches: a custom-designed convolutional neural network (CNN) and the es-
tablished MobileNetV2 architecture through transfer learning. After rigorous testing and
analysis, MobileNetV2 emerged superior in terms of accuracy, prompting us to leverage
its strengths for our application. We chose Flutter, a versatile UI framework, to create a
seamless integration of the machine learning model into a user-friendly mobile interface.
Utilizing Dart, we crafted an intuitive user interface that simplifies the user journey, enabling
easy navigation and use. This integration of MobileNetV2 with our Flutter-based applica-
tion culminated in a robust classification tool, readily available to users. Our application now
adeptly provides melanoma classification services, offering a streamlined, accessible solu-
tion for early detection and awareness, thereby serving as a testament to the synergy between
advanced AI methodologies and user-centric design.

Model                                  Accuracy (%)
Proposed CNN Model                     88.92
MobileNetV2 with Transfer Learning     92.01
Table 7.1: Model Accuracy Comparison


Figure 7.1: MobileNetV2 plot


Figure 7.2: Proposed CNN Plot


Figure 7.3: Classification Module


Figure 7.4: Image Classified as Malignant


Figure 7.5: Image Classified as Benign

Chapter 8

Future Scope

The future scope of the proposed system includes several promising directions. Technically, we aim to improve the accuracy of our model by incorporating additional deep learning architectures and exploring ensemble techniques. We also plan to expand the scope of diagnosis to other skin diseases, increasing the system's usefulness in dermatological medicine. To make the app available to a wider audience, we intend to introduce multilingual support and integrate accessibility features. Additionally, partnerships with healthcare providers and integration with electronic health record (EHR) systems are planned to optimize clinical workflows.

Chapter 9

Conclusion

Our study demonstrated the effectiveness of transfer learning with MobileNetV2 in the con-
text of melanoma detection, achieving a notable testing accuracy of 92.01%. This model
leveraged the pre-trained MobileNetV2 architecture, known for its efficiency and compact
structure suitable for mobile applications, and applied a layer of custom features to enhance
its diagnostic capabilities. By incorporating additional convolutional layers and dropout
functionalities at strategic points, we effectively tailored the model to handle the unique
challenges of image classification. The integration of these handcrafted features with the
robust MobileNetV2 allowed for the extraction of features essential for accurately distin-
guishing between benign and malignant skin lesions.
In contrast, our Proposed CNN model, designed from scratch, involved multiple iterations
to refine its architecture and improve performance. Despite these efforts, which included in-
creasing convolutional depth and adjusting pooling strategies, this model achieved lower ac-
curacy rates compared to the transfer learning approach. The iterative enhancements helped
in understanding the critical balance necessary between model complexity and generaliza-
tion capabilities, culminating in a final testing accuracy of 88.92%. The direct comparison
highlights the advantages of utilizing transfer learning, particularly using a pre-trained net-
work like MobileNetV2. This approach not only saves computational resources but also
leverages the vast amount of pre-existing knowledge, which can be especially beneficial in
domains like medical imaging where precision is paramount. The success of MobileNetV2
in our tests underscores its potential in embedded systems and applications requiring high
efficiency, presenting a compelling case for its adoption in clinical settings where rapid and
accurate diagnosis is critical.

Bibliography

[1] S. Mathur and T. Jain, "Dermatological Disease Detection Employing Transfer Learning," 2023 11th International Conference on Internet of Everything, Microwave Engineering, Communication and Networks (IEMECON), Jaipur, India, 2023, pp. 1-6, doi: 10.1109/IEMECON56962.2023.10092304.

[2] Rarasmaya Indraswari, Rika Rokhana, Wiwiet Herulambang, “Melanoma image classi-
fication based on MobileNetV2 network,” Procedia Computer Science, Volume 197, 2022,
Pages 198-207, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2021.12.132.

[3] D. P. Sahoo, M. Rout, P. K. Mallick and S. R. Samanta, "Comparative Analysis of Medical Images using Transfer Learning Based Deep Learning Models," 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), Bhubaneswar, India, 2022, pp. 1-8, doi: 10.1109/ASSIC55218.2022.10088373.

[4] S. Kusuma, G. Vasundharadevi and D. M. Abhinay Kanth, ”A Hybrid Model for Skin
Disease Classification using Transfer Learning,” 2022 Third International Conference on
Intelligent Computing Instrumentation and Control Technologies (ICICICT), Kannur, India,
2022, pp. 1093-1096, doi: 10.1109/ICICICT54557.2022.9917705.

[5] Evgin Goceri,”Diagnosis of skin diseases in the era of deep learning and mobile technol-
ogy,” Computers in Biology and Medicine, Volume 134, 2021, 104458, ISSN 0010-4825,
https://doi.org/10.1016/j.compbiomed.2021.104458.

[6] C. K, P. C. Siddalingaswamy, S. Pathan and N. D'souza, "A Multiclass Skin Lesion classification approach using Transfer learning based convolutional Neural Network," 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 2021, pp. 1-6, doi: 10.1109/ICBSII51839.2021.9445175.

[7] J. Samraj and R. Pavithra, "Deep Learning Models of Melonoma Image Texture Pattern Recognition," 2021 IEEE International Conference on Mobile Networks and Wireless Communications (ICMNWC), Tumkur, Karnataka, India, 2021, pp. 1-6, doi: 10.1109/ICMNWC52512.2021.9688345.

[8] Wiem Abbes, Dorra Sellami,”Deep Neural Networks for Melanoma Detection from Op-
tical Standard Images using Transfer Learning,” Procedia Computer Science, Volume 192,
2021, Pages 1304-1312, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2021.08.134.

[9] C. A. Hartanto and A. Wibowo, ”Development of Mobile Skin Cancer Detection using
Faster R-CNN and MobileNet v2 Model,” 2020 7th International Conference on Informa-
tion Technology, Computer, and Electrical Engineering (ICITACEE), Semarang, Indonesia,
2020, pp. 58-63, doi: 10.1109/ICITACEE50144.2020.9239197.

[10] E. Megha and R. Jones S.B., ”Real Time Application of Deep Learning Approach
in Exogenous Skin Problem Identification,” 2020 Third International Conference on Smart
Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 2020, pp. 842-847, doi:
10.1109/ICSSIT48917.2020.9214293.

[11] S. D. Sharma, S. Sharma, A. K. Pathak and N. Mohamed, "Real-time Skin Disease Prediction System using Deep Learning Approach," 2023 2nd Edition of IEEE Delhi Section Flagship Conference (DELCON), Rajpura, India, 2023, pp. 1-6, doi: 10.1109/DELCON57910.2023.10127569.

[12] L. Vincent and J. Roopa Jayasingh, ”Comparison of Psoriasis Disease Detection and
Classification Through Various Image Processing Techniques-A Review,” 2022 6th Interna-
tional Conference on Devices, Circuits and Systems (ICDCS), Coimbatore, India, 2022, pp.
122-124, doi: 10.1109/ICDCS54290.2022.9780692.

[13] E. Kanca and S. Ayas, "Learning Hand-Crafted Features for K-NN based Skin Disease Classification," 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 2022, pp. 1-4, doi: 10.1109/HORA55278.2022.9799834.

[14] S. Shrimali, ”Development of a Mobile Application for the Early Detection of Skin Can-
cer using Image Processing Algorithms, Transfer Learning, and AutoKeras,” 2022 5th Inter-
national Conference of Computer and Informatics Engineering (IC2IE), Jakarta, Indonesia,
2022, pp. 100-105, doi: 10.1109/IC2IE56416.2022.9970048.

[15] S. Kohli, U. Verma, V. V. Kirpalani and R. Srinath, "Dermatobot: An Image Processing Enabled Chatbot for Diagnosis and Tele-remedy of Skin Diseases," 2022 3rd International Conference for Emerging Technology (INCET), Belgaum, India, 2022, pp. 1-5, doi: 10.1109/INCET54531.2022.9824756.

[16] V. Nivedita, K. Subramaniam, M. Ramya and B. D. Parameshachari, "Machine Learning Based Skin Disease Analyzer with Image Processing," 2022 IEEE North Karnataka Subsection Flagship International Conference (NKCon), Vijaypur, India, 2022, pp. 1-6, doi: 10.1109/NKCon56289.2022.10127040.

[17] N. Abhvankar, H. Pingulkar, K. Chindarkar and A. P. I. Siddavatam, "Detection of Melanoma and Non-Melanoma type of Skin Cancer using CNN and RESNET," 2021 Asian Conference on Innovation in Technology (ASIANCON), PUNE, India, 2021, pp. 1-6, doi: 10.1109/ASIANCON51346.2021.9544656.

[18] M. Hossain, K. Sadik, M. M. Rahman, F. Ahmed, M. N. Hossain Bhuiyan and M. M. Khan, "Convolutional Neural Network Based Skin Cancer Detection (Malignant vs Benign)," 2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 2021, pp. 0141-0147, doi: 10.1109/IEMCON53756.2021.9623192.

[19] J. Alam, ”An Efficient Approach for Skin Disease Detection using Deep Learning,”
2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE),
Brisbane, Australia, 2021, pp. 1-8, doi: 10.1109/CSDE53843.2021.9718427.

[20] M. S. Junayed, A. N. M. Sakib, N. Anjum, M. B. Islam and A. A. Jeny, "EczemaNet: A Deep CNN-based Eczema Diseases Classification," 2020 IEEE 4th International Conference on Image Processing, Applications and Systems (IPAS), Genova, Italy, 2020, pp. 174-179, doi: 10.1109/IPAS50080.2020.9334929.

[21] https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/amp/

[22] https://www.kaggle.com/datasets/sallyibrahim/skin-cancer-isic-2019-2020-malignant-
or-benign

[23] https://www.kaggle.com/datasets/farhatullah8398/skin-lesion-dermis-dataset

Appendix A

Appendices

A.1 Plagiarism Report

