Sample Final Project Report
on
Skin Disease Detection using Deep Learning
submitted to the
UNIVERSITY OF MUMBAI
during semester VIII in partial fulfilment of the requirement for the award of
the degree of
BACHELOR OF ENGINEERING
in
COMPUTER ENGINEERING.
To,
The Principal,
Shah and Anchor Kutchhi Engineering College,
Chembur, Mumbai-88
Respected Sir,
This is to certify that Final year students Pratham Saraiya, Pranjal Sawant, Ravi Rathod,
Mehul Phatangare have duly attended the sessions on the day allotted to them during the
period from 08 January 2024 to 03 April 2024 for performing the Project titled "Skin Disease
Detection using Deep Learning". They were punctual and regular in their attendance.
Following is the detailed record of the students' attendance.
Attendance Record:
Ms. Priyanka Ghule
Approval for Project Report for B. E.
Semester VIII
This project report entitled "Skin Disease Detection using Deep Learning" by Pratham Saraiya,
Pranjal Sawant, Mehul Phatangare and Ravi Rathod is approved for semester VIII in partial
fulfilment of the requirement for the award of the degree of Bachelor of Engineering.
Examiners
1.
2.
Guide
1.
Guide
2.
Date:
Place: Mumbai
Declaration
We declare that this written submission represents our ideas in our own words and where
others’ ideas or words have been included, we have adequately cited and referenced the orig-
inal sources. We also declare that we have adhered to all principles of academic honesty and
integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in
our submission. We understand that any violation of the above will be cause for disciplinary
action by the Institute and can also evoke penal action from the sources which have thus not
been properly cited or from whom proper permission has not been taken when needed.
Date:
Place: Mumbai
Acknowledgement
We would like to express our sincere gratitude to all those who have supported and guided
us throughout the preparation of this report on "Skin Disease Detection using Deep
Learning". This endeavor would not have been possible without their valuable contributions
and assistance.
We are thankful to our college, Shah and Anchor Kutchhi Engineering College, for considering
our project and extending help at all stages of our work in collecting information
regarding the project.
We are deeply indebted to our Principal Dr. Bhavesh Patel and Head of the Computer
Engineering Department Prof. Uday Bhave for giving us this valuable opportunity to do this
project. We express our hearty thanks to them for their assistance, without which it would
have been difficult to finish this project synopsis and project review successfully.
We take this opportunity to express our profound gratitude and deep regards to our guide
Ms. Priyanka Ghule for her exemplary guidance, monitoring and constant encouragement
throughout the course of this project.
This work would not have been possible without the collective efforts of these individuals
and organizations. While any shortcomings in this report are solely our responsibility, their
contributions have significantly enriched its content.
Abstract
Early and accurate detection of melanoma is crucial for successful treatment outcomes. In
this study, we leverage deep learning techniques to automate the classification of skin lesions
into benign and malignant categories. We developed and fine-tuned two distinct models:
a custom Proposed CNN and MobileNetV2 with transfer learning. After extensive train-
ing over various epochs and meticulous fine-tuning, MobileNetV2 emerged as the superior
model, demonstrating the highest accuracy in lesion classification. MobileNetV2, a cutting-edge
CNN architecture, underwent rigorous validation to verify its performance
consistency and reliability. The model was trained on a comprehensive dataset, ensuring
broad representation of skin lesions. Subsequent validation techniques, including cross-
validation and performance on unseen data, substantiated the model’s robustness, with the
results reflecting a high level of precision in distinguishing between benign and malignant
skin lesions.
The performance of the system was methodically assessed using standard evaluation metrics,
confirming the model’s potential as a reliable tool for the early detection of melanoma. The
application of such advanced computer-aided diagnosis systems marks a significant contri-
bution to medical diagnostics. By facilitating prompt and reliable lesion classification, our
approach paves the way for enhancing the clinical workflow and ultimately improving pa-
tient care outcomes.
Table of Contents
Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Survey of Existing system . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Limitation of Existing system or research gap . . . . . . . . . . . . . . . . 7
2.3 Problem Statement and Objective . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3. Software Requirement Specification . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Overall Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 External Interface Requirements . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 System Features and Requirements . . . . . . . . . . . . . . . . . . . . . . 13
3.4.1 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.2 Non-functional Requirements . . . . . . . . . . . . . . . . . . . . 13
3.4.3 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . 13
3.4.4 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 Other Nonfunctional Requirements . . . . . . . . . . . . . . . . . . . . . . 14
4. Project Scheduling and Planning . . . . . . . . . . . . . . . . . . . . . . . . . 16
5. Proposed System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2 Details of Hardware& Software . . . . . . . . . . . . . . . . . . . . . . . . 18
5.3 Design Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.5 MobileNetV2 Model using Transfer Learning Source Code and Result . . . 27
5.6 Proposed CNN Model Source Code and Result . . . . . . . . . . . . . . . 32
6. Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.1 Testing with Multiple Datasets . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2 Application Testing and Integration of MobileNetV2 . . . . . . . . . . . . 37
7. Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8. Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A. Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
A.1 Plagiarism Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
List of Figures
Chapter 1
Introduction
The skin is an important part of the body and consists of these layers: epidermis, dermis and
hypodermis. Of these layers, the epidermis, which is exposed to the external environment, is
prone to various skin diseases, and melanoma is a major concern. Within the epidermis are
melanocytes, which are responsible for regulating skin pigmentation. Skin cancer develops
when these melanocyte cells multiply uncontrollably. The main cause of this condition is
excessive exposure to ultraviolet (UV) radiation from the sun, tanning beds and sunlamps.
It is very important to note that skin cancer can affect people regardless of age, skin color
or gender [1]. With revolutionary changes in the field of deep learning, it has become easy
to automate the process of disease detection using the tools and techniques provided by deep
learning algorithms. A CNN architecture, which works well for image processing and
analysis tasks, can provide promising results in classifying skin lesions, such as accurately
detecting melanoma and melanocytic nevi.
The advent of the deep learning era, especially the development of Convolutional Neural
Networks (CNN), has revolutionized image classification and expanded its scope to der-
matology. However, the computational requirements of these network models make them
unsuitable for real-time image classification on mobile devices due to hardware constraints
such as limited memory, power, and computing capability.
Thus, the need to address these challenges prompted the development of lightweight
architectures such as MobileNetV2, which aim to reduce model parameters and computa-
tional cost while maintaining efficiency [5].
Timely identification of these diseases allows treatment to begin at the right time, which can
help save lives. A reliable automated system can significantly reduce manual intervention and
examination while providing more accurate and consistent results. It can also reduce the high
cost of testing for these diseases by assisting medical professionals in identifying the problem.
Successful implementation can revolutionize dermatological diagnostics, making way for faster,
more accessible, and more accurate healthcare solutions, and could have a positive and
substantial impact on public health.
It can help medical professionals and even non-medical users, serving as a basis for research
and for personal diagnosis and treatment planning. We aim to contribute
towards digital healthcare advancements and provide easy-to-use, accessible, and efficient
diagnostic tools.
1.1 Background
The project aims to develop a system that helps in detecting whether the skin lesion is
melanoma or not using deep learning technology. The approach combines knowledge of
dermatology with deep learning techniques. The important components include gathering
a dataset of diverse skin lesion images labelled as Benign or Malignant,
designing a convolutional neural network architecture, particularly MobileNetV2, for extracting
various features, and fine-tuning the parameters.
1.2 Motivation
Skin diseases are a common global health problem that affects people in all population
groups and significantly reduces their quality of life. The importance of timely and accu-
rate diagnosis cannot be overstated as it is crucial for effective treatment and cure of these
diseases. However, access to dermatology expertise is often limited, particularly in under-
served and remote areas. Such disparities in access to health care underscore the critical
need for innovative solutions. In response to this challenge, the integration of artificial intel-
ligence (AI) and deep learning technologies offers transformative potential. By automating
and improving the diagnostic process, these advanced computing technologies can signif-
icantly support healthcare professionals. They can help reduce the dermatology treatment
gap and ensure that more people receive an accurate diagnosis quickly. This project aims
to harness the power of artificial intelligence to create a reliable, efficient and accessible
diagnostic tool that will have a positive impact on global dermatology healthcare.
Chapter 2
Literature Review
bileNet, which are used along with MobileNetV2 and RMNV2. The five diseases identified
are Hemangioma, Acne Vulgaris, Psoriasis, Rosacea, and Seborrheic Dermatitis. SqueezeNet
achieved a training accuracy of 92.08% and a testing accuracy of 68.26%, while MobileNetV2
achieved a training accuracy of 88.75% and a testing accuracy of 81.19% [5].
Authors C. K, P. C. Siddalingaswamy proposed a classification system that leverages con-
volutional neural networks (CNNs) which are based on transfer-learning to classify dermo-
scopic images into eight distinct categories. Multi-headed neural networks are used, in which
state-of-the-art pre-trained models are integrated through a functional model-based approach. The
ensemble technique of blending is employed to efficiently combine predictions, achieving a
balanced multi-class accuracy of 81.2% on the ISIC 2019 dataset[6].
Authors J. Samraj and R. Pavithra evaluated various techniques for Melanoma detection by
focusing on the importance of early accurate diagnosis. Computer-aided diagnosis (CAD)
systems play an important role in providing accurate classification results. The detection
rates are enhanced by using image processing methods and neural network-based models.
This paper focuses on enhancing melanoma image analysis. Key features include improving
image quality by applying filters, utilizing texture pattern extraction to enhance feature
magnitude, and optimal arrangement of feature extraction with careful attribute selection [7].
Author Wiem Abbes suggested a deep learning approach that gives promising results; while
dermoscopic images require specialized equipment, optical skin images
captured with a standard camera offer a cost-effective alternative. A transfer learning ap-
proach is used to overcome the scarcity of a large database for training deep networks in this
modality. A new CNN architecture and deep learning networks have been developed, with
the convolutional neural network achieving an impressive 97% detection rate, a significant
advancement in melanoma diagnosis[8].
The model built by author C. A. Hartanto involves the detection of Actinic Keratosis and
Melanoma. In this paper, a comparison between MobileNetV2 and R-CNN is shown. For
MobileNetV2 the maximum accuracy was 86.1%, while for R-CNN the maximum accuracy was
87.2%. The loss values for MobileNetV2 and R-CNN were 2% and 0.3%, respectively;
R-CNN thus achieved both the minimum loss and the maximum accuracy [9].
Author E. Megha put forth a deep learning method using an R-CNN architecture and
Inception-V2, where a dataset of five skin classes (such as tinea and cold sores) was collected
and trained using R-CNN, and the model was integrated into a website. It provided an
accuracy of 98% with Inception-V2, which was the best result, while R-CNN achieved 90%
accuracy [10].
Author S. D. Sharma covered various methods for skin disease detection, including color-
based approaches, feature extraction, segmentation algorithms, and integration of multiple
algorithms. Studies utilize artificial neural networks (ANNs) and CNN models combined
with algorithms such as SVM. Techniques like data augmentation and transfer learning are
employed to improve accuracy. Overall, the research emphasizes the importance of combin-
ing different methodologies for effective and early detection of skin diseases, contributing to
the development of reliable diagnostic tools[11].
The research by author L. Vincent showcased various innovative approaches to skin dis-
ease detection and classification. These methods leverage advanced technologies such as
deep learning and machine learning algorithms like Convolutional Neural Networks (CNNs)
and Support Vector Machines (SVMs), and image processing techniques. For instance, one
study employs a deep learning-based minimized U-Net model to effectively detect psoriasis
lesions from RGB images with high accuracy. Another paper introduces a fully automated
system for dermatological disease recognition using machine learning algorithms like CNN
and SVM, achieving impressive accuracy rates. Additionally, a CAD framework utilizing
sophisticated image processing methods and classification techniques achieves excellent per-
formance in early detection and classification of skin lesions. These studies demonstrate the
potential of combining advanced algorithms and image processing techniques for accurate
and efficient skin disease diagnosis[12].
Author E. Kanca highlighted the significance of early and accurate diagnosis of skin can-
cer, a prevalent global health concern. Dermoscopy, a non-invasive imaging technique, en-
hances visualization of skin lesions, aiding in diagnosis. Initially, machine learning-based
approaches focused on manual feature extraction, utilizing shape, color, and texture attributes
for classification. Recent studies propose advanced algorithms like the K-NN model for
classifying melanoma, nevus, and seborrheic keratosis. These methods entail preprocess-
ing, segmentation, feature extraction, and classification steps, aiming to improve diagnostic
accuracy. Evaluation metrics are utilized to assess performance. Challenges include class
imbalance in datasets and difficulties in distinguishing lesion classes. Future research aims
to address these issues through artifact removal, color enhancement, and exploring alterna-
tive machine learning approaches to enhance classification accuracy[13].
Author S. Shrimali introduced a mobile application named SkinScan, leveraging deep learn-
ing to diagnose various types of skin cancer accurately and efficiently. It addresses the press-
ing need for early detection due to the increasing incidence of skin cancer globally. The
application incorporates a fine-tuned EfficientNetB7 CNN model, achieving a validation ac-
curacy of 95% and an F1 score of 0.94. Through comparative analysis and optimization,
SkinScan surpasses existing tools, offering a user-friendly interface, self-assessment tests for
cancer risk, UV radiation guidelines, and comprehensive information on skin cancer types
and treatments. The application aims to be globally accessible and scalable, potentially re-
ducing skin cancer rates worldwide by facilitating early diagnosis. Future plans involve
clinical testing and deployment on major app stores[14].
Author S. Kohli introduced Dermatobot, an intelligent chatbot designed to diagnose com-
mon skin diseases using a combination of computer vision and natural language processing
(NLP). By leveraging image classification models like EfficientNet B4 and semantic sim-
ilarity measures from Universal Sentence Encoders, Dermatobot offers accurate diagnoses
based on user-provided symptoms and images. The chatbot also recommends mild remedies,
though it’s emphasized that it’s not a substitute for professional dermatological care. The ar-
chitecture employs microlithic backend components, REST API endpoints, and React.js for
the frontend. Future work includes expanding the disease classes, improving image dataset
quality, incorporating time series data for better diagnosis, and extending language support
beyond English[15].
Author V. Nivedita proposed a method for detecting skin diseases utilizing image pro-
cessing techniques, Python, and the YOLOV3 tool. It addresses the challenge of accurately
diagnosing skin conditions, which can arise from various factors like genetics, aging, aller-
gies, and environmental factors. By analyzing images of the affected skin area, the system
aims to provide fast and reliable diagnosis without the need for physical examination. The
research covers four common skin diseases: acne, melanoma, blisters, and cold sores, pre-
senting a technique for diagnosing each. Previous related works are discussed, highlighting
methods such as k-means clustering, color image processing, and segmentation for disease
detection. The proposed method involves data collection, preprocessing, feature extraction,
and classification stages, leveraging tools like OpenCV and YOLOV3. Results show suc-
cessful detection and classification of skin diseases, paving the way for future advancements
in computer-aided diagnosis and treatment[16].
Author N. Abhvankar outlines various approaches and deep learning algorithms used for
skin cancer detection, focusing on melanoma and non-melanoma types. It discusses the rise
of skin cancer cases due to factors like ozone layer depletion and emphasizes the need for ac-
curate detection methods. Deep learning algorithms, mainly Convolutional Neural Networks
(CNNs), are highlighted as promising tools for aiding dermatologists in precise diagnosis.
Different models and combinations of algorithms are compared, showcasing their accuracy
and effectiveness in classifying various types of skin diseases. Pre-processing techniques
such as data augmentation, SMOTE, and image enhancement are described to improve model
performance. The survey also presents results from implemented models, including CNN
and ResNet50, indicating high accuracies achieved through preprocessing methods. Addi-
tionally, the survey touches upon the distribution of skin cancer cases by gender, anatomical
location, and age groups. It concludes by suggesting future directions for research, such as
hybrid models and further training with larger datasets, to enhance detection accuracy[17].
Author M. Hossain proposed a method for detecting skin cancer using Convolutional Neu-
ral Networks (CNNs), focusing on various versions of the ResNet model. With a dataset
of 6,599 images from Kaggle, the study trains and tests ResNet18, ResNet50, ResNet101,
and ResNet152 models. Results show increasing accuracy with deeper ResNet architectures,
with ResNet152 achieving the highest accuracy of 89.64%. The study emphasizes how important
deep learning frameworks like PyTorch and CNN models are in medical image classification.
Overall, the research demonstrates the potential of ResNet models in accurately diagnosing
skin cancer, which could greatly assist dermatologists in improving diagnostic efficiency and
patient outcomes. Future work may involve exploring larger datasets and investigating novel
CNN architectures for further performance enhancements[18].
Author J. Alam proposed an effective way for detecting skin diseases using deep learning,
aiming to address the limitations of expensive and limited medical equipment for diagno-
sis. It suggests leveraging image-based diagnosis systems coupled with image processing
and deep learning techniques to detect skin diseases at an early stage. The proposed system
focuses on feature extraction and classification using convolutional neural networks (CNN)
and support vector machines (SVM). The study reviews existing techniques, highlighting
methods such as color image processing, deep learning architectures like CNN, and spe-
cific approaches for detecting various skin diseases. It utilizes the HAM10000 dataset for
training and evaluates multiple CNN models with different filter patterns and dropout rates.
The results demonstrate improvements in accuracy and efficiency, with the proposed models
achieving up to 85.14% accuracy in skin disease detection[19].
Authors M. S. Junayed, A. N. M. Sakib presented an innovative method utilizing deep con-
volutional neural networks (CNNs) to classify five distinct categories of Eczema, leveraging
a dataset gathered for this purpose. Addressing the scarcity of detection systems for Eczema,
the study employs data augmentation and regularization techniques to enhance performance.
The suggested model achieves an accuracy of 96.2%, surpassing previous state-of-the-art
methods. Evaluation metrics such as sensitivity, specificity, precision, and accuracy demon-
strate the effectiveness of the model, outperforming pre-trained models InceptionV3 and
MobileNetV1. The research underscores the importance of dataset enrichment and proposes
future directions for further improvement, including expanding the dataset and implementing
segmentation and detection techniques [20].
Objective
We have addressed the imperative need for accurate and timely melanoma diagnosis by
harnessing advanced technologies, particularly deep learning. Our project aims to lever-
age deep learning techniques to create efficient and reliable diagnostic tools for melanoma.
Through the analysis of skin images, our focus lies on achieving early detection of poten-
tially malignant skin lesions, significantly enhancing the chances of successful treatment and
favorable patient outcomes.
2.4 Scope
In healthcare, the primary emphasis in the scope of melanoma detection revolves around
early identification and diagnosis of this serious form of skin cancer. This entails extensive
education efforts directed at both healthcare professionals and the general public, aiming to
enhance the recognition of early signs and symptoms of melanoma. Additionally, dermatol-
ogy services are a cornerstone of melanoma detection within healthcare systems, ensuring
that individuals at risk or those with suspicious skin lesions have access to the expertise of
dermatologists.
Chapter 3
Software Requirement Specification
3.1 Introduction
Purpose
The main purpose of the proposed system is early, accurate and automatic detection of be-
nign or malignant skin lesions using the MobileNetV2 architecture. Skin disease has a
wide-ranging impact, with the potential to cause discomfort and pain, and sometimes even
proves fatal. In underserved or remote areas, access to dermatologists can be limited; this
project aims to bridge that gap by offering a reliable and accessible tool for preliminary
diagnosis. Convolutional Neural Networks (CNNs), a class of deep learning models, have shown
promising results in tasks involving image processing, video processing and object detection.
MobileNetV2 is a CNN architecture that is well suited for deploying the proposed system as a
mobile application. The main aim is to leverage deep learning techniques for timely and
accurate detection of skin disease, in turn contributing to the healthcare industry by providing
enhanced quality of care and improved outcomes.
Intended Audience
• Developers: By understanding the working, technical details, algorithms and code
structure, this group will be responsible for implementing and modifying the
MobileNetV2 architecture.
• Project Stakeholders: It will help this group get a high-level understanding
of the purpose and impact of the proposed system on the healthcare industry.
• Healthcare professionals and Dermatologists: The proposed system can be used by
professionals for assistance in their work.
• End users: The proposed system will be used by individuals facing issues with skin
lesions who do not have proper dermatologist guidance within their vicinity.
Overall, the system will be used to analyse whether the individual has a lesion that depicts
skin cancer or not. The system is designed to assist in the early recognition of suspicious moles
or skin lesions, potentially indicating the presence of melanoma.
Product Scope
By leveraging deep learning techniques, the proposed system will automate skin disease
detection based on input images provided by the end users. The developed system will be
accessible to users with varying hardware capabilities. It will assist users in the preliminary
screening of skin lesions, but it is not intended to replace professional medical opinion or
advice. The system will undergo a rigorous testing process to ensure accuracy is maintained,
resulting in accurate, reliable and robust detection of skin lesions.
Product Functions
1. Image upload: Using the interface, end users can upload pictures of their skin lesions.
2. Image processing: To improve quality and standardize input for the detection model,
the uploaded images will undergo pre-processing.
3. Skin Disease Detection: The submitted photos are analyzed using the MobileNetV2
deep learning model to determine if the skin lesion is benign or malignant.
4. The model’s integration with the interface will make the detection findings accessible
to dermatologists.
Together, these features make up the fundamental capabilities of the MobileNetV2-based skin
disease detection system. Every feature is designed to improve the system's overall usability,
accessibility, and efficacy for the targeted users.
perience, which will aid in the diagnosis of various skin illnesses, and they may use
the system for early detection.
• Administrators: They are in charge of system maintenance and user management.
Administrators are in charge of monitoring system performance and ensuring data
privacy and security for users.
• Developers: They have technical competence as well as domain understanding, and
they can deploy, alter, and configure the system, as well as manage error reporting and
debugging tools.
Use cases
Users interact with a user-friendly web interface designed to capture images of their skin
condition safely. Once submitted, these images are processed by the MobileNetV2 machine
learning model, carefully trained to detect signs of melanoma. Using advanced image analysis
techniques, the system accurately evaluates the images and provides a possible diagnosis
promptly. This streamlined process not only shortens the diagnostic stage but also improves
accuracy, allowing for timely medical intervention and potentially improving outcomes.
Operating Environment
• Backend and Server: The backend services will be provided with the help of Flask, which
can run on any OS such as Windows or Mac. The deep learning model will be integrated
with the Flask backend, where it will be exposed to the rest of the system as an API.
Flask will serve as the backend framework running all backend operations of the system;
for this, Flask must be installed on the system, along with Python, since Flask is a
Python framework. (A minimal sketch of such an endpoint is given after this list.)
• Frontend: For the UI, a Flutter and Dart-based mobile application will be developed and
integrated with the Flask backend so that the model can provide predictions based on the
given input. To run the Flutter services, the Flutter environment must first be installed
on the system. Dart, an object-oriented language, is used within the Flutter framework.
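A minimal sketch of how the Flask backend could expose the trained model as a prediction endpoint is shown below; the model file name, endpoint path and 224x224 input size are assumptions for illustration, not the project's exact implementation.

# Minimal sketch (assumed file name and endpoint path) of a Flask service
# wrapping the trained MobileNetV2 model.
import io
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model("skin_model.h5")  # assumed model file name

@app.route("/predict", methods=["POST"])
def predict():
    # The mobile client uploads the lesion photo as multipart form data.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    img = img.resize((224, 224))                       # assumed input size
    x = np.expand_dims(np.array(img) / 255.0, axis=0)  # scale to [0, 1], add batch dim
    prob = float(model.predict(x)[0][0])               # sigmoid output: P(malignant)
    label = "Malignant" if prob >= 0.5 else "Benign"
    return jsonify({"label": label, "probability": prob})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)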
• Software Constraints:
– Compatibility: Ensure that the Flask backend, TensorFlow, the API and all the
Python libraries are compatible so that they work seamlessly, and keep them up to
date.
– Libraries and Framework: The choice of a specific library or framework can
affect the functionality and stability of the system.
Hardware Interfaces
• Mobile Device Interfaces: The proposed system should be compatible with various
mobile devices so that end users can interact with the application and upload images.
Components include a touchscreen interface and a camera to capture images of skin
lesions.
Software Interfaces
• Deep learning Framework Interfaces: The proposed system will interact with the deep
learning framework TensorFlow for implementing the MobileNetV2 model.
• Image Processing Module: The module will pre-process the uploaded images before
sending them for detection.
Communications Interfaces
The proposed system will use Flask API for integration with the application for data ex-
change.
Safety Requirements
The planned system will not offer any medical services or prescriptions, protecting
against inaccurate recommendations. User privacy will be respected and informed consent
obtained before conducting any examinations or sharing medical information.
Security Requirements
The proposed system will not provide any medical recommendations, and the users will be
advised to consult a professional for advice and treatment. It will not retain any kind of user
sensitive or personal data. The system will provide easy and understandable instructions to
the users regarding the usage of the system.
Business Rules
The proposed system will not involve any payments or financial transactions from the users
as it is a diagnostic tool not a commercial tool.
Chapter 4
Project Scheduling and Planning
Chapter 5
Proposed System
5.1 Algorithm
Convolutional Neural Networks (CNNs) are a category of deep learning models particularly
suited to grid-like data such as images and videos. These models are widely used in visual
recognition tasks because of their ability to process and analyse visual information efficiently.
CNNs use convolutional layers equipped with filters that systematically extract features from
the input data. These layers are proficient at recognizing patterns and relationships at different
spatial scales, capturing the complex features essential for detailed image analysis. The
architecture of a CNN typically consists of multiple convolutional layers followed by pooling
layers that perform downsampling to reduce spatial dimensions and computational complexity.
The flexibility of CNNs extends beyond basic image classification; they are fundamental to
computer vision tasks such as video processing, object detection, and image segmentation.
The convolutional layers serve not only to detect features but also, through subsequent dense
layers, to classify visual information, making CNNs highly effective for comprehensive image
processing tasks. Advances in CNN architectures have significantly affected areas requiring
detailed visual understanding, including medical image analysis, autonomous driving, and
other domains where accuracy and efficiency are critical.
MobileNetV2 stands out as a specialized CNN architecture designed to run on mobile and
edge devices where computational resources are constrained. It leverages depthwise separable
convolutions, a technique that substantially reduces the computational load without
compromising model accuracy. This property is crucial for building lightweight models that
are deployable on devices with limited processing capability, such as smartphones, IoT devices,
and embedded systems. MobileNetV2's efficient network structure enables it to support
real-time applications on mobile platforms, making it increasingly popular for on-device deep
learning applications where speed and efficiency are essential.
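To make the parameter savings from depthwise separable convolutions concrete, the short sketch below compares a standard convolution with its separable counterpart in Keras; the 32-to-64-channel layer and 3x3 kernel are arbitrary illustrative assumptions, not values taken from the report.

# Illustrative comparison (assumed 32 -> 64 channel layer, 3x3 kernel) of parameter
# counts for a standard convolution versus a depthwise separable convolution.
import tensorflow as tf

inp = tf.keras.Input(shape=(56, 56, 32))
standard = tf.keras.layers.Conv2D(64, 3, padding="same")(inp)
separable = tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inp)

print(tf.keras.Model(inp, standard).count_params())   # 3*3*32*64 + 64        = 18,496
print(tf.keras.Model(inp, separable).count_params())  # 3*3*32 + 32*64 + 64   =  2,400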
1. Convolutional Layer: Learnable filters are convolved over the input to produce feature
maps that capture local patterns in the image.
2. Activation Layer: After convolution, ReLU (Rectified Linear Unit), an activation function,
is employed to provide non-linearity to the model, helping it to learn more complicated
patterns.
3. Pooling Layer: This is also referred to as subsampling or downsampling. This layer
minimizes the spatial size of feature maps to simplify computation and extract dominant
features, allowing for feature identification that is invariant to scale and orientation changes.
4. Fully Connected Layer: Following numerous convolutional layers along with pooling
layers, the neural network performs high-level reasoning. The features are flattened into a
vector and fed into fully connected layers that behave similarly to a regular neural network.
5. Output Layer: The final layer generates a probability distribution over the target classes
using a softmax or sigmoid activation function (depending on the task).
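A minimal Keras sketch of the layer stack just described (convolution, ReLU activation, pooling, fully connected layers, and a sigmoid output for the benign/malignant decision) is given below; the filter counts and 224x224 input size are illustrative assumptions rather than the report's exact configuration.

# Illustrative CNN layer stack (filter counts and input size are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution + ReLU activation
    layers.MaxPooling2D((2, 2)),                   # pooling / downsampling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # flatten feature maps into a vector
    layers.Dense(128, activation="relu"),          # fully connected layer
    layers.Dense(1, activation="sigmoid"),         # output: benign vs. malignant
])
model.summary()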
5.4 Methodology
MobileNetV2, a core component of this skin cancer classification project, stands out as a
convolutional neural network (CNN) architecture designed to excel in both efficiency and
high-performance image classification tasks. It represents an evolutionary step from the orig-
inal MobileNet, introducing several pivotal innovations that render it a compelling choice
across a broad spectrum of computer vision applications. Notably, its efficiency is a hall-
mark, particularly in the context of medical imaging tasks such as skin cancer classifica-
tion, where considerations of model size and computational resources are paramount. Mo-
bileNetV2 achieves this by harnessing depth-wise separable convolutions, a critical element
that effectively reduces parameters and computational demands, resulting in a model that is
both lightweight and highly efficient. Such efficiency is vital for real-time applications and
resource-constrained settings, ensuring that the model’s performance remains responsive and
accessible.
Data Collection and Splitting: The datasets used, ISIC 2019 and ISIC 2020, were obtained
from the International Society for Digital Imaging of the Skin and its ISIC project [22].
The total number of images is 10,192, and the data is split 60:20:20 for training, testing,
and validation. We utilized a dataset consisting of 6112 instances for training, with an equal
distribution of 3056 malignant and 3056 benign cases. To evaluate the performance of our
model, we employed a test set comprising 2040 instances, equally split between 1020 malig-
nant and 1020 benign cases. Additionally, we employed a validation set of 2040 instances,
mirroring the test set’s distribution with 1020 malignant and 1020 benign cases. This dataset
distribution ensures a balanced representation of both malignant and benign cases across all
phases of model development, facilitating robust evaluation and validation of our proposed
approach. Unlike the larger ISIC datasets, the Dermis dataset contains a more detailed col-
lection of 1000 dermatology images[23]. This dataset comes from a collaboration involving
dermatology institutions that provide annotated clinical images for educational and research
purposes. In particular, the Dermis dataset contains 500 malignant and 500 benign cases
equally, which provides a balanced approach for training and testing machine learning mod-
els. In our project, the Dermis dataset was strategically divided into training, testing, and
validation segments to ensure comprehensive model training and accurate performance eval-
uation. The training consisted of 720 uniformly distributed images with 360 malignant and
360 benign cases. The test set contained 200 images, again equally divided into 100 malig-
nant and 100 benign cases, to evaluate the initial performance of our models. In addition,
the validation set contained 80 images, including 40 malignant and 40 benign cases, which
were used to fine-tune the models and verify their diagnostic accuracy under controlled con-
ditions. This balanced distribution in a smaller dataset like Dermis is particularly useful for
high-fidelity model training where data quality and specificity are critical to developing reli-
able diagnostic tools.
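One plausible way to load such pre-split image folders with Keras is sketched below; the directory names, image size and batch size are assumptions for illustration, since the report does not reproduce the loading code here.

# Sketch of loading the pre-split image folders (directory names are assumed).
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH = 32              # assumed batch size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/test", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")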
Model Building and Training: The Dermis dataset undergoes training using a neural net-
work architecture meticulously crafted to extract meaningful features and classify data ef-
fectively. The model architecture comprises several layers, each tailored to handle specific
aspects of the data and optimize learning. At the forefront of the architecture are convo-
lutional layers, where feature maps are generated by convolving input data with learnable
filters. These filters vary in number and size across different configurations. The model
starts with a large number of filters, such as 1024, which gradually decreases in subsequent
layers. The choice of the number of filters, along with the kernel size, is crucial for
capturing relevant patterns at different scales within the data. A pooling layer follows the
convolutional layer to downsample the feature map, reducing computational complexity and
preventing overfitting. Max pooling and global average pooling are used in different
configurations, providing different mechanisms for feature selection and abstraction. Max
pooling preserves the most important feature within each pooling window, while global average
pooling calculates
the average value over the entire feature map, providing a more general representation.
Dropout layers are strategically inserted after certain convolutional layers to improve
generalization and prevent the model from memorizing noise in the training data. Dropout
randomly disables some neurons during training, allowing the network to learn more robust
features and reducing interdependencies between neurons. The learning rate is an important
hyperparameter that determines the step size of gradient descent optimization and is carefully
chosen to balance training speed and convergence. It ranges between 0.001 and 0.0001
across different configurations, with occasional adjustments during training using learning
rate reduction strategies. Training occurs over multiple epochs, typically 25 or 50 iterations
through the entire dataset, allowing the model to gradually improve its performance. How-
ever, to prevent overfitting and ensure optimal generalization, early stopping mechanisms are
employed. These mechanisms monitor validation losses and stop training if no improvement
is observed over a predefined number of epochs specified by the patience parameter.
The model’s performance is evaluated through accuracy, measured as the percentage of cor-
rectly classified instances. Across various configurations, the model achieves accuracies
ranging from 75% to 83%, reflecting its ability to effectively classify the Dermis dataset.
In addition to these core elements, certain configurations incorporate additional fine-tuning
techniques, such as learning rate reduction by a factor of 5 at specific epochs. This com-
prehensive approach to model training and hyperparameter tuning underscores the iterative
nature of deep learning model development, where adjustments are made systematically to
achieve optimal performance on the target task.
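The early-stopping and learning-rate-reduction behaviour described above can be expressed with standard Keras callbacks, as in the sketch below; the patience value and reduction factor echo the ranges mentioned in the text but are otherwise illustrative assumptions.

# Sketch of the training callbacks described above (values are illustrative).
import tensorflow as tf

callbacks = [
    # Stop training when validation loss has not improved for `patience` epochs.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True),
    # Reduce the learning rate when validation loss plateaus (factor 0.2 = reduce by 5x).
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
]

# history = model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)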
The ISIC Archive Dataset undergoes training using a meticulously crafted neural network ar-
chitecture tailored to optimize feature extraction and classification performance. The model
architecture includes convolutional layers, pooling layers, and dense layers, each serving a
specific purpose in the learning process. Beginning with convolutional layers, the model
employs varying numbers of filters, ranging from 1024 to 32, and diverse kernel sizes to
capture essential features at different scales within the data. Following the convolutional
layer, a pooling layer using max pooling or global average pooling is applied to downsample
the feature map, thereby reducing the computational complexity and preventing overfitting.
Dense layers come into play after the feature extraction stage, facilitating the learning of
high-level representations. The architecture incorporates dense layers with different config-
urations of neurons, ranging from 256 to 1024, followed by dropout layers with dropout rates
of 0.25 or 0.5 to mitigate overfitting and enhance generalization. The learning rate, a critical
hyperparameter governing the rate of model optimization, is carefully selected to balance
training speed and convergence. It remains consistent at 0.001 across various configurations,
with occasional adjustments during training using learning rate reduction strategies, such as
reducing the learning rate by a factor of 5 or 6 at specific epochs. Training proceeds over
multiple epochs, typically 25 or 100 iterations through the entire dataset, allowing the model
to gradually refine its predictive capabilities. However, to prevent overfitting and ensure op-
timal generalization, the model employs strategies such as freezing base layers and reducing
the learning rate during training. The model’s performance is assessed through accuracy,
measured as the percentage of correctly classified instances. Across different configurations,
the model achieves accuracies ranging from 70.64% to an impressive 92.01%, demonstrating
its effectiveness in classifying the ISIC Archive Dataset. This comprehensive approach to
model training and hyperparameter tuning underscores the iterative nature of deep learning
model development, where adjustments are made systematically to achieve optimal perfor-
mance on the target task.
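The full source code and results for this model appear in Section 5.5; as an illustration only, the sketch below shows one plausible way the MobileNetV2 transfer-learning setup described above could be assembled in Keras, with a frozen ImageNet base, global average pooling, a 1024-unit dense head with dropout, and a sigmoid output. The head sizes follow Table 5.1, while the remaining details are assumptions rather than the report's exact code.

# Sketch of the MobileNetV2 transfer-learning setup (head sizes follow Table 5.1;
# other details are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained base layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1024, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])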
Convolutional Layers | Pooling | Dense Layers | Learning rate | Epoch | Accuracy (%)

Dermis Data Set
1024 | Max | 1024, Dropout(0.5) | 0.001 | 25 | 77
512 | Max | 1024, Dropout(0.5) | 0.001 | 25 | 75
256, 128 | Max, Global avg | 1024, Dropout(0.5) | 0.001 | 25 | 80
512, 256, 128 | Max, Max, Global avg | 1024, Dropout(0.5), Patience(10), Early Stopping | 0.0001 | 25 | 79.50
512, 256, 128 | Max, Max, Global avg | 1024, Dropout(0.5), Patience(15), Early Stopping | 0.0001 | 50 | 77.50
512, 256, 128 | Max, Max, Global avg | 1024, Dropout(0.5) | 0.0001 | 50 | 80
256, 128, 64 | Max, Max, Global avg | 1024, Dropout(0.5) | 0.0001 | 50 | 82
256, 128, 64 | Max, Max, Global avg, ReduceLR | 512, Dropout(0.5) | 0.001 | 50 | 82
256, 128, 64 | Max, Max, Global avg | 1024, Dropout(0.5) | 0.001 | 50 | 83

ISIC Archive Dataset
1024, 512, 256 | Max, Max, Global avg | 1024, Dropout(0.5), Reduce LR - 5, Base[-10] freeze | 0.001 | 25 | 70.64
1024, 512, 256 | Max, Max, Global avg | 1024, Dropout(0.5), Reduce LR - 6, Base[-10] freeze | 0.001 | 25 | 77.18
32, 64, 128, 256, 512 | Max (all) | 1024, Dropout(0.25), Dropout(0.5) | 0.001 | 25 | 90.04
32, 64, 128, 256, 512, 512 | Max (all) | 1024, Dropout(0.5) | 0.001 | 100 | 92.01
Table 5.1: Applied Transfer Learning
The development of our proposed CNN model followed a structured iterative refinement ap-
proach, designed to optimize performance in melanoma classification. The model architec-
ture commenced with a simple CNN configuration, incorporating an input layer designed to
process 256x256 pixel images, multiple convolutional layers with respective kernels for fea-
ture extraction, activation functions to introduce non-linearities, pooling layers for downsam-
pling, and densely connected layers for classification, concluding with a sigmoid-activated
output layer for binary classification (benign versus malignant). Initial model specification
involved training the Dermis dataset with a stochastic gradient descent optimizer with a
learning rate of 0.001 for 25 epochs. The results of the first iteration on this dataset gave
a training accuracy of 50.14% and a validation accuracy of 48.75%, which was the basis
for subsequent refinements. The second iteration, still using the Dermis dataset, involved
increasing the number of convolutional layers and reducing dense layers while maintaining
a dropout rate of 0.5 to reduce overfitting. These changes improved accuracy to 76.11%
in training and 81.25% in validation. Further improvements in the third iteration included
the introduction of global average pooling in the final layers to enhance generalization, and
extended the training to 30 epochs. These modifications significantly narrowed the
performance gap, achieving 82.92% training accuracy and 85% validation accuracy on the
Dermis dataset. To overcome data limitations, we transitioned from the Dermis dataset to
the more extensive ISIC dataset, maintaining consistent parameters, which boosted the accu-
racies to 87.60% in training and 88.87% in validation. However, attempts to push the model
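The complete source code and results of the Proposed CNN are given in Section 5.6; the sketch below is only an illustration consistent with the description above (256x256 input, stacked convolutional blocks, global average pooling, dropout, a sigmoid output, and SGD with a learning rate of 0.001), and the filter counts are assumptions rather than the final configuration.

# Illustrative sketch of the Proposed CNN (filter counts are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, (3, 3), activation="relu"), layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),        # introduced in the later iterations
    layers.Dropout(0.5),                    # dropout to curb overfitting
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])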
Figure 5.11: Validation Loss and Validation Accuracy achieved using MobileNetV2 with
transfer learning
Figure 5.12: Final Accuracy achieved by incorporating MobileNetV2 model using Transfer
Learning
Chapter 6
Testing
6.1 Testing with Multiple Datasets
Testing Methodology and Execution: Each dataset presents its own challenges and
characteristics: the Dermis dataset is known for its detailed clinical case reports, while the
ISIC dataset is recognized for its extensive library of skin lesion images. Our testing involved
a thorough evaluation of the models' performance on these datasets individually and in
combination.
Performance Analysis: The models were rigorously tested against a multitude of image
variations, encompassing a range of melanoma presentations. This extensive testing revealed
that the MobileNetV2 model, utilizing transfer learning, consistently outperformed the cus-
tom CNN model. The efficiency of MobileNetV2 can be attributed to its advanced archi-
tectural design, which has been fine-tuned for high-precision tasks across different datasets,
proving its adaptability and accuracy in our application.
Comparative Results: The results indicate that the MobileNetV2 model achieved superior
accuracy, demonstrating its robustness and reliability. The extensive data provided by both
Dermis and ISIC datasets facilitated the comprehensive training of the models, enhancing
their ability to identify melanoma accurately. These findings underscore the importance of
using heterogeneous data to train deep learning models for medical diagnosis, particularly in
dermatology.
6.2 Application Testing and Integration of MobileNetV2
Testing Methodology: Our methodology for application testing was two-pronged. First, we
conducted a series of functional tests to verify the model’s integration, scrutinizing every
aspect of the application’s operational pipeline for potential discrepancies or failures. This
encompassed input validation, model response times, and accurate output rendering. Second,
we executed a set of user-centric tests to ascertain the application’s usability, focusing on the
user interface (UI) and the user experience (UX) to ensure intuitiveness and ease of use.
Integration Verification: A significant portion of the testing phase was devoted to confirm-
ing the integration of the MobileNetV2 model. This entailed validating the Flask API end-
points, ensuring they correctly communicated with the model, and maintained data integrity
throughout the process. The application’s back-end logs and front-end response elements
were thoroughly inspected for consistency and accuracy after each classification request.
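A simple way to exercise such an endpoint during integration testing is a scripted client request, as sketched below; the URL, form field and response keys are assumptions matching the earlier backend sketch, not the application's confirmed interface.

# Sketch of an integration check against the prediction endpoint
# (URL and response fields are assumptions matching the backend sketch).
import requests

with open("sample_lesion.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000/predict", files={"image": f})

assert resp.status_code == 200                        # endpoint reachable and healthy
result = resp.json()
assert result["label"] in {"Benign", "Malignant"}     # output rendered as expected
print(result["label"], result["probability"])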
Analysis: The results from our application testing demonstrated that the MobileNetV2
model was effectively integrated into the application framework. The model showcased
high accuracy in classification, with prompt response times that meet the expectations for
real-time analysis. The stability of the application during these tests further reinforced the
Chapter 7
Results and Analysis
In the quest to develop a reliable application for melanoma detection, we explored two
distinct approaches: a custom-designed convolutional neural network (CNN) and the es-
tablished MobileNetV2 architecture through transfer learning. After rigorous testing and
analysis, MobileNetV2 emerged superior in terms of accuracy, prompting us to leverage
its strengths for our application. We chose Flutter, a versatile UI framework, to create a
seamless integration of the machine learning model into a user-friendly mobile interface.
Utilizing Dart, we crafted an intuitive user interface that simplifies the user journey, enabling
easy navigation and use. This integration of MobileNetV2 with our Flutter-based applica-
tion culminated in a robust classification tool, readily available to users. Our application now
adeptly provides melanoma classification services, offering a streamlined, accessible solu-
tion for early detection and awareness, thereby serving as a testament to the synergy between
advanced AI methodologies and user-centric design.
Chapter 8
Future Scope
The future scope of the proposed system includes several promising directions. Technically,
we would like to improve the accuracy of our model by incorporating additional deep learn-
ing architectures and considering ensemble techniques. In the future, we plan to expand
the scope of diagnosis to other skin diseases and increase its usefulness in dermatological
medicine. To make our app more accessible, we plan to introduce multilingual support and
integrate accessibility features so that the app is available to a wider audience. Additionally, partner-
ships with healthcare providers and integration with electronic health record (EHR) systems
are planned to optimize clinical workflow.
Chapter 9
Conclusion
Our study demonstrated the effectiveness of transfer learning with MobileNetV2 in the con-
text of melanoma detection, achieving a notable testing accuracy of 92.01%. This model
leveraged the pre-trained MobileNetV2 architecture, known for its efficiency and compact
structure suitable for mobile applications, and applied a layer of custom features to enhance
its diagnostic capabilities. By incorporating additional convolutional layers and dropout
functionalities at strategic points, we effectively tailored the model to handle the unique
challenges of image classification. The integration of these handcrafted features with the
robust MobileNetV2 allowed for the extraction of features essential for accurately distin-
guishing between benign and malignant skin lesions.
In contrast, our Proposed CNN model, designed from scratch, involved multiple iterations
to refine its architecture and improve performance. Despite these efforts, which included in-
creasing convolutional depth and adjusting pooling strategies, this model achieved lower ac-
curacy rates compared to the transfer learning approach. The iterative enhancements helped
in understanding the critical balance necessary between model complexity and generaliza-
tion capabilities, culminating in a final testing accuracy of 88.92%. The direct comparison
highlights the advantages of utilizing transfer learning, particularly using a pre-trained net-
work like MobileNetV2. This approach not only saves computational resources but also
leverages the vast amount of pre-existing knowledge, which can be especially beneficial in
domains like medical imaging where precision is paramount. The success of MobileNetV2
in our tests underscores its potential in embedded systems and applications requiring high
efficiency, presenting a compelling case for its adoption in clinical settings where rapid and
accurate diagnosis is critical.
Bibliography
[1] S. Mathur and T. Jain, ”Dermatological Disease Detection Employing Transfer Learn-
ing,” 2023 11th International Conference on Internet of Everything, Microwave Engineering,
Communication and Networks (IEMECON), Jaipur, India, 2023, pp. 1-6, doi: 10.1109/IEME-
CON56962.2023.10092304.
[2] Rarasmaya Indraswari, Rika Rokhana, Wiwiet Herulambang, “Melanoma image classi-
fication based on MobileNetV2 network,” Procedia Computer Science, Volume 197, 2022,
Pages 198-207, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2021.12.132.
[4] S. Kusuma, G. Vasundharadevi and D. M. Abhinay Kanth, ”A Hybrid Model for Skin
Disease Classification using Transfer Learning,” 2022 Third International Conference on
Intelligent Computing Instrumentation and Control Technologies (ICICICT), Kannur, India,
2022, pp. 1093-1096, doi: 10.1109/ICICICT54557.2022.9917705.
[5] Evgin Goceri,”Diagnosis of skin diseases in the era of deep learning and mobile technol-
ogy,” Computers in Biology and Medicine, Volume 134, 2021, 104458, ISSN 0010-4825,
https://doi.org/10.1016/j.compbiomed.2021.104458.
[7] J. Samraj and R. Pavithra, ”Deep Learning Models of Melonoma Image Texture Pat-
tern Recognition,” 2021 IEEE International Conference on Mobile Networks and Wireless
Communications (ICMNWC), Tumkur, Karnataka, India, 2021, pp. 1-6, doi: 10.1109/ICMNWC52512.2021.9688345.
[8] Wiem Abbes, Dorra Sellami,”Deep Neural Networks for Melanoma Detection from Op-
tical Standard Images using Transfer Learning,” Procedia Computer Science, Volume 192,
2021, Pages 1304-1312, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2021.08.134.
[9] C. A. Hartanto and A. Wibowo, ”Development of Mobile Skin Cancer Detection using
Faster R-CNN and MobileNet v2 Model,” 2020 7th International Conference on Informa-
tion Technology, Computer, and Electrical Engineering (ICITACEE), Semarang, Indonesia,
2020, pp. 58-63, doi: 10.1109/ICITACEE50144.2020.9239197.
[10] E. Megha and R. Jones S.B., ”Real Time Application of Deep Learning Approach
in Exogenous Skin Problem Identification,” 2020 Third International Conference on Smart
Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 2020, pp. 842-847, doi:
10.1109/ICSSIT48917.2020.9214293.
[12] L. Vincent and J. Roopa Jayasingh, ”Comparison of Psoriasis Disease Detection and
Classification Through Various Image Processing Techniques-A Review,” 2022 6th Interna-
tional Conference on Devices, Circuits and Systems (ICDCS), Coimbatore, India, 2022, pp.
122-124, doi: 10.1109/ICDCS54290.2022.9780692.
[13] E. Kanca and S. Ayas, ”Learning Hand-Crafted Features for K-NN based Skin Disease
Classification,” 2022 International Congress on Human-Computer Interaction, Optimization
and Robotic Applications (HORA), Ankara, Turkey, 2022, pp. 1-4, doi: 10.1109/HORA55278.2022.9799834.
[14] S. Shrimali, ”Development of a Mobile Application for the Early Detection of Skin Can-
cer using Image Processing Algorithms, Transfer Learning, and AutoKeras,” 2022 5th Inter-
national Conference of Computer and Informatics Engineering (IC2IE), Jakarta, Indonesia,
2022, pp. 100-105, doi: 10.1109/IC2IE56416.2022.9970048.
tional Conference for Emerging Technology (INCET), Belgaum, India, 2022, pp. 1-5, doi:
10.1109/INCET54531.2022.9824756.
[19] J. Alam, ”An Efficient Approach for Skin Disease Detection using Deep Learning,”
2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE),
Brisbane, Australia, 2021, pp. 1-8, doi: 10.1109/CSDE53843.2021.9718427.
[21] https://blog.roboflow.com/how-to-train-mobilenetv2-on-a-custom-dataset/amp/
[22] https://www.kaggle.com/datasets/sallyibrahim/skin-cancer-isic-2019-2020-malignant-or-benign
[23] https://www.kaggle.com/datasets/farhatullah8398/skin-lesion-dermis-dataset
Appendix A
Appendices