Mini Document 2
on
SIGNATURE FORGERY DETECTION USING MACHINE LEARNING
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING
by
21WH1A05C5 Ch Mallika
21WH1A0598 D Tanusha
22WH5A0511 T Abhinaya
CERTIFICATE
This is to certify that the Industrial Oriented Mini Project entitled “SIGNATURE FORGERY DETECTION USING MACHINE LEARNING” is a bonafide work carried out by Ms. Ch Mallika (21WH1A05C5), Ms. D Tanusha (21WH1A0598), and Ms. T Abhinaya (22WH5A0511) in partial fulfillment of the requirements for the award of the B.Tech. degree in Computer Science & Engineering at BVRIT HYDERABAD College of Engineering for Women, Bachupally, Hyderabad, affiliated to Jawaharlal Nehru Technological University Hyderabad, Hyderabad, under my guidance and supervision. The results embodied in the project work have not been submitted to any other University or Institute for the award of any degree or diploma.
External Examiner
DECLARATION
We hereby declare that the work presented in this project entitled “SIGNA-
TURE FORGERY DETECTION USING MACHINE LEARNING” submitted to-
wards completion of Project Work in IV year of B.Tech., CSE at ‘BVRIT HYDER-
ABAD College of Engineering for Women’, Hyderabad is an authentic record of our
original work carried out under the guidance of Dr. G Naga Satish, Professor, Depart-
ment of CSE.
Ch Mallika
(21WH1A05C5)
D Tanusha
(21WH1A0598)
T Abhinaya
(22WH5A0511)
ACKNOWLEDGMENT
Ch Mallika
(21WH1A05C5)
D Tanusha
(21WH1A0598)
T Abhinaya
(22WH5A0511)
ABSTRACT
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
LIST OF TERMS AND ABBREVIATIONS
1 INTRODUCTION
1.1 Objectives
1.2 Existing Work
1.3 Proposed Work
2 LITERATURE WORK
2.1 Related Work
2.1.1 Literature Survey
2.2 Research Gaps
2.3 Tools and Technologies
2.3.1 Programming Language
2.3.2 Libraries and Frameworks
2.3.3 Tools for Signature Analysis
3 METHODOLOGY
3.1 Proposed Architecture
3.2 Datasets
3.3 Algorithms
3.4 Performance Metrics
4 RESULTS
5 CONCLUSION
6 REFERENCES
Appendices
LIST OF FIGURES
LIST OF TABLES
LIST OF TERMS AND ABBREVIATIONS
CHAPTER 1
INTRODUCTION
In today’s rapidly evolving digital landscape, ensuring the authenticity of signatures has become a critical challenge due to the increasing prevalence of forgery and fraudulent activities. Legal, financial, and personal transactions alike require authentication, and that authentication most often takes the form of a handwritten signature. The conventional method of signature verification is largely manual, which makes it tedious and slow, prone to significant human error, and unable to keep pace with the ever-increasing volume of documents to be verified. With this in mind, it is imperative to automate signature verification in a more reliable and efficient manner.
With the emergence of machine learning (ML) and image processing technologies, these challenges can now be addressed far more effectively. To that end, the present project is designed to construct a system that can swiftly and accurately differentiate a genuine signature from a forged one. The system first preprocesses the signature images to remove noise and then converts them into binarized images. From these, key features such as area ratio, centroid, eccentricity, and skewness are extracted, and all of them are provided to a multilayer perceptron for classification.
The system can be deployed in fields that require fast and effective means of authentication, such as banking, law, and e-commerce. Automated verification is embedded in the system to improve reliability, reduce human error, and increase overall security compared with manual verification. In addition, the system is designed to be robust and flexible, which supports its application in real-world environments involving extensive datasets and intricate forgery cases.
1.1 Objectives
The primary objective of this project is to develop an accurate and reliable forgery detec-
tion model that classifies signatures as genuine or forged, including intricate and skillful
forgeries. This will involve building a robust machine-learning-based system that lever-
ages advanced methodologies to ensure precision and reliability. To achieve this, the
project will employ state-of-the-art image processing techniques, including noise re-
moval, grayscale conversion, binarization, and signature region cropping, enhancing
the quality of input data and significantly improving the accuracy of subsequent analy-
ses.
A key component of this project is extracting geometric and statistical features,
such as area ratio, centroid, eccentricity, skewness, and kurtosis, which are essential for
distinguishing forgeries. These features form the foundation for classification, enabling
the model to differentiate between genuine and forged signatures with high accuracy.
The project will design and train a Multilayer Perceptron (MLP) neural network on
a diverse dataset to ensure generalizability to real-world scenarios, including unseen
forgeries.
In addition to achieving accurate detection, the project aims to enhance the reliabil-
ity and security of signature verification processes, reducing human error and improving
efficiency in applications like banking and legal documentation. By optimizing the sys-
tem’s architecture, computational overheads will be minimized, ensuring high accuracy
while maintaining efficiency for real-time environments. Ultimately, the project aspires
to deliver a robust, scalable solution for forgery detection, addressing critical challenges
in signature verification and fostering trust in automated systems.
1.2 Existing Work
Over the years, various approaches have been developed to tackle the problem of sig-
nature forgery detection. These methods range from traditional image processing tech-
niques to advanced machine learning and deep learning models. Researchers have also
explored hybrid models that combine feature-based and learning-based approaches to
improve detection accuracy. Key works in this domain include:
1. Time-Aligned Recurrent Neural Networks (TA-RNNs): Tolosana et al. in-
troduced TA-RNNs combined with Dynamic Time Warping (DTW) for online
signature verification. Their model demonstrated robustness against skilled forg-
eries but struggled with random forgery detection in some datasets.
1.3 Proposed Work
The proposed system incorporates multiple stages to ensure efficient and accurate sig-
nature forgery detection, starting with image preprocessing, feature extraction, clas-
sification, and performance evaluation. Each stage plays a critical role in the overall
functionality of the system and contributes to building a reliable and scalable solution
for real-world applications.
Image Preprocessing: The first step in the process involves enhancing the quality
of the signature images to ensure they are suitable for analysis. This includes applying
techniques such as grayscale conversion to simplify the image by removing color in-
formation, Gaussian blurring to reduce noise and smoothen the image, and binarization
using Otsu’s thresholding to segment the signature from the background. Additionally,
the signature region is cropped to focus the analysis on the relevant area, eliminating
unnecessary parts of the image. These preprocessing steps are essential to standardize
the input data and improve the accuracy of subsequent stages.
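As a minimal sketch of these steps (assuming scikit-image and NumPy are used, and with a hypothetical helper name preprocess_signature; the project's exact parameters may differ):

import numpy as np
from skimage import io, color, filters

def preprocess_signature(path):
    # Load the image and convert RGB to grayscale
    img = io.imread(path)
    if img.ndim == 3:
        img = color.rgb2gray(img)
    # Gaussian blur suppresses scanning noise
    blurred = filters.gaussian(img, sigma=1)
    # Otsu's threshold separates ink from background; pixels darker than the
    # threshold are treated as signature strokes
    thresh = filters.threshold_otsu(blurred)
    binary = blurred < thresh  # True where the signature ink is
    # Crop to the bounding box of the signature pixels
    rows = np.any(binary, axis=1)
    cols = np.any(binary, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return binary[rmin:rmax + 1, cmin:cmax + 1]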
Feature Extraction: After preprocessing, the system extracts geometric and sta-
tistical features that capture the unique characteristics of each signature. Key features
include the ratio of white pixels to total pixels, centroid, eccentricity, solidity, skew-
ness, and kurtosis. These features provide a detailed and robust representation of the
signatures, allowing the model to effectively distinguish between genuine and forged
samples.
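A brief sketch of how such features can be computed from a binarized, cropped signature is given below; it assumes scikit-image's regionprops and SciPy's statistics routines, and the helper name extract_features is illustrative rather than the project's actual function.

import numpy as np
from scipy import stats
from skimage.measure import regionprops

def extract_features(binary):
    # binary: 2-D boolean array, True where the signature ink is
    h, w = binary.shape
    ratio = binary.sum() / float(h * w)  # signature pixels / total pixels
    # Treat all ink pixels as a single region for the shape descriptors
    props = regionprops(binary.astype(int))[0]
    cy, cx = props.centroid
    centroid = (cy / h, cx / w)          # normalized center of mass
    eccentricity = props.eccentricity
    solidity = props.solidity            # region area / convex hull area
    # Skewness and kurtosis of the ink distribution along each axis
    ys, xs = np.nonzero(binary)
    skewness = (stats.skew(xs), stats.skew(ys))
    kurtosis = (stats.kurtosis(xs), stats.kurtosis(ys))
    return ratio, centroid, eccentricity, solidity, skewness, kurtosis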
CHAPTER 2
LITERATURE WORK
2.1 Related Work
The field of signature forgery detection has advanced significantly over the years, with
early research predominantly relying on handcrafted features and traditional statisti-
cal methods for both online and offline signature verification. While these approaches
achieved moderate success, they often fell short in detecting skilled and dynamic forg-
eries due to their inability to capture complex variations in signature patterns.
Recent advancements have seen a shift toward deep learning-based methods, which
leverage automated feature extraction and robust architectures like Convolutional Neu-
ral Networks (CNNs) and Siamese networks. These techniques have demonstrated
remarkable improvements in accuracy and scalability. However, their application to
offline signature verification still faces notable challenges, including computational in-
efficiency and dependence on high-quality datasets. Additionally, ensuring consistent
performance across diverse signature styles and forgery types remains a critical area of
ongoing research.
2.1.1 Literature Survey
1. DeepSign: Deep On-Line Signature Verification [1]
Tolosana et al. proposed a robust system for online signature verification us-
ing Time-Aligned Recurrent Neural Networks (TA-RNNs). The study utilized a
dataset of 70,000 signatures acquired using stylus and finger inputs from 1,526
users. The method incorporated Dynamic Time Warping (DTW) for aligning sig-
natures and addressed variations in signing style. TA-RNNs demonstrated excep-
tional performance in detecting skilled forgeries, making them suitable for high-
security applications. However, the system faced challenges in random forgery
detection, where the differences between signatures were minimal. The reliance
on online signature data also limited its applicability for offline signature verifi-
cation tasks.
2. Siamese Network for Learning Genuine and Forged Offline Signature Verification [2]
Amruta B. Jagtap et al. introduced a Siamese neural network approach for offline
signature verification. This model utilized datasets like GPDS and MCYT, com-
prising thousands of genuine and forged signatures. The Siamese architecture
focused on learning embeddings to capture subtle differences between genuine
and forged signatures. It effectively handled small datasets and offered a simpli-
fied training process. However, the model’s performance was constrained when
applied to large datasets, where variations in writing styles posed additional chal-
lenges. Despite its limitations, the study demonstrated the potential of Siamese
networks in signature verification.
4. Signature Analysis for Forgery Detection [4]
Dinesh Rao Adithya et al. focused on using Artificial Neural Networks (ANNs)
for offline signature verification. Their methodology combined basic image pre-
processing techniques with geometric feature extraction, such as area and perime-
ter calculations. The ANN model demonstrated adaptability to different datasets
and signature styles, making it a promising solution for diverse applications.
However, the limited set of features restricted the model’s ability to identify in-
tricate forgeries. This research emphasized the importance of robust feature en-
gineering in achieving higher accuracy in forgery detection.
5. Handwritten Signatures Forgery Detection Using Pre-Trained Deep Learning Methods [5]
Dr. Nader Ebrahimpour investigated the use of pre-trained models like Mo-
bileNetv2, ResNet18, and DenseNet121 for offline signature verification. These
models eliminated the need for manual feature extraction, achieving accuracy lev-
els as high as 98.75%. MobileNetv2, in particular, proved effective for resource-
constrained devices due to its small size and fast processing. However, models
like ResNet18 required extensive computational resources and longer training
times. This study highlighted the trade-offs between model complexity and real-
time applicability in signature verification.
Table 2.1 Literature Analysis on Signature Forgery Detection Techniques
[1] DeepSign: Deep On-Line Signature Verification
Authors: Tolosana et al. | Techniques: TA-RNNs, Dynamic Time Warping (DTW)
Advantages: High accuracy in skilled forgery detection; suitable for high-security applications.
Limitations: Limited applicability to offline tasks; struggles with random forgery detection.
[2] Siamese Network for Learning Genuine and Forged Offline Signature Verification
Authors: Amruta B. Jagtap et al. | Techniques: Siamese Neural Networks
Advantages: Effective for small datasets; simplified training process.
Limitations: Performance constrained for large datasets; difficulties with diverse writing styles.
[3] Signature Forgery and Real Verification System
Authors: G. Prabhakar Reddy et al. | Techniques: Convolutional Neural Networks (CNNs)
Advantages: Automated feature extraction; adapts well to complex forgery cases.
Limitations: Requires substantial computational resources; unsuitable for real-time applications.
[4] Signature Analysis for Forgery Detection
Authors: Dinesh Rao Adithya et al. | Techniques: Artificial Neural Networks (ANNs), Geometric Feature Extraction
Advantages: Adaptable to diverse datasets and styles.
Limitations: Limited feature set affects the detection of intricate forgeries.
[5] Handwritten Signatures Forgery Detection Using Pre-Trained Deep Learning Methods
Authors: Dr. Nader Ebrahimpour | Techniques: MobileNetv2, ResNet18, DenseNet121
Advantages: Eliminates manual feature extraction; achieves high accuracy.
Limitations: High computational demand; resource-heavy models like ResNet18.
[6] Average Intensity Sign (AIS) Feature-Based Offline Signature Verification Using Machine Learning
Authors: Sathya et al. | Techniques: Gaussian Filters, Support Vector Machines (SVM)
Advantages: Achieves high accuracy; efficient for small datasets.
Limitations: Highly sensitive to noise in larger datasets; requires intensive preprocessing.
[7] Offline Signature Recognition and Forgery Detection Using Deep Learning
Authors: Jivesh Poddar et al. | Techniques: CNNs, Crest-Trough Methods, Harris Corner Detection, SURF Algorithms
Advantages: Eliminates manual feature selection.
Limitations: Inefficient for real-time applications due to high processing complexity.
[8] Online Signature Verification for Forgery Detection
Authors: Muhammad Rizwan et al. | Techniques: DFT, LFDA, Fast Euclidean Distance
Advantages: Efficient for real-time applications.
Limitations: Lacks robust noise removal preprocessing, affecting accuracy in noisy scenarios.
[9] An Efficient Transfer Learning Model for Predicting Forged Signatures Using Machine Learning
Authors: Muhammad Rafsun Sheikh et al. | Techniques: VGG-16, Neural Networks, SVM, KNN, Random Forest, Decision Tree
Advantages: Achieves high accuracy (96.7%); effective for small datasets.
Limitations: Resource-intensive; reliant on pre-trained models, limiting flexibility.
[10] Machine Learning-Based Offline Signature Verification Systems: A Systematic Review
Authors: M. Muzaffar Hameed et al. | Techniques: Systematic Review of Machine Learning Techniques (e.g., CNNs)
Advantages: Consolidates performance metrics; highlights effectiveness of deep learning.
Limitations: Challenges in handling diverse handwriting styles; reliance on large labeled datasets.
2.2 Research Gaps
A key research gap concerns generalization across diverse signature styles: models trained on specific datasets often fail to perform effectively across various populations, writing instruments, or cultural signature styles. This limitation underscores the need for adaptable and universally effective models.
Finally, the integration of advanced features into existing systems remains underex-
plored. Current approaches often rely on a limited set of features, such as geometric or
intensity-based properties. Incorporating advanced statistical or deep-learned features
could significantly enhance the ability to distinguish subtle differences between genuine
and forged signatures.
2.3 Tools and Technologies
2.3.1 Programming Language
• Python: The entire project is implemented in Python, which is widely used for machine learning, computer vision, and GUI applications.
2.3.2 Libraries and Frameworks
1. NumPy:
• For numerical computations and array manipulation.
2. Pandas:
• To handle CSV files and perform data analysis.
Machine Learning
1. Scikit-learn:
• Logistic Regression: Used for modeling and predicting outcomes.
• Train-Test Split: For splitting the dataset into training and testing sets.
• Metrics: For calculating accuracy and generating classification reports.
Deep Learning
1. TensorFlow:
• For building and training the neural network for signature classification.
• Includes features like placeholder usage and optimizer definitions.
2. Keras:
• A high-level API built on TensorFlow for simplifying model creation and
evaluation.
3. TF v1 Compatibility Mode:
• To run the TensorFlow 1.x graph-style code (placeholders, sessions) used in this project on newer TensorFlow releases.
1. SciPy:
• For statistical computations such as the skewness and kurtosis of the signature pixel distribution.
1. Matplotlib:
• For visualizing signature images and intermediate processing results.
2. Scikit-image:
• For image processing operations such as filtering, thresholding, and region property measurement.
3. Pillow (PIL):
• For loading and displaying signature images in the GUI.
1. Tkinter:
• To create the GUI for the application, allowing users to input data and inter-
act with the system.
1. OS:
• For handling file paths and creating directories for saving features and re-
sults.
Others
1. Time:
• For measuring the execution time of processing and training steps.
2.3.3 Tools for Signature Analysis
1. Custom Preprocessing:
• In-house routines for grayscale conversion, Gaussian blurring, Otsu binarization, and cropping of the signature region.
2. Feature Extraction:
• Routines that compute the geometric and statistical features (area ratio, centroid, eccentricity, solidity, skewness, kurtosis) used for classification.
CHAPTER 3
METHODOLOGY
3.1 Proposed Architecture
The proposed model for signature forgery detection integrates various advanced tech-
niques from machine learning and image processing to identify whether a given sig-
nature is genuine or forged. The model is designed to address common challenges in
signature verification, such as the need for high accuracy, scalability, and the ability to
generalize well across different datasets. The architecture is composed of several key
stages, each focused on specific tasks to improve the overall efficiency and accuracy of
the detection process.
Data Acquisition and Preprocessing:
Data preprocessing plays a crucial role in improving the quality and robustness of the
signature forgery detection system. The dataset used consists of a collection of genuine
and forged signature images, collected from various sources, ensuring diversity and
complexity in the dataset. These images are typically scanned or collected in RGB
format, and are organized into training and testing sets. The preprocessing steps include
several image enhancement and transformation techniques to ensure that the system can
work efficiently with varying image qualities and characteristics.
• Noise Reduction: The raw signature images may contain noise due to background interference, scan quality, or inconsistencies during image capture. Gaussian blur is applied to smooth the image and remove small variations that could otherwise distort the extracted features.
Fig. 3.1 Methodology of Proposed Model
• Cropping: The next step is cropping the image around the signature. This is
done by identifying the bounding box of the signature, based on pixel intensity,
which removes the unnecessary background and focuses the model’s attention on
the signature itself.
Feature Extraction:
After preprocessing, the next step is to extract important features from the signature
images. These features are critical as they help the model make informed decisions
during classification. Various geometric and statistical measures are computed to form
the feature set. These features encapsulate key characteristics of the signature, including
its shape, structure, and overall form.
• Area Ratio: The area ratio feature calculates the proportion of signature pixels to the total pixels in the cropped signature region. This feature is particularly useful in detecting forgeries where there are inconsistencies in the overall area covered by the signature.
• Centroid: The centroid represents the normalized center of mass of the signa-
ture. It is a measure of how evenly distributed the pixels are within the signature.
Forgeries may exhibit shifts in this centroid due to variations in writing pressure
or stroke dynamics.
• Solidity: Solidity is the ratio of the area of the signature to the area of its convex
hull (the smallest convex shape that can contain the signature). A high solidity
value indicates a compact and well-defined signature, while forgeries often ex-
hibit lower solidity.
Classification: The extracted feature vector is passed to a Multilayer Perceptron (MLP) neural network for classification.
• Input Layer: The input layer receives the extracted features from the signature images, such as area ratio, centroid, eccentricity, and others. These features are used as the basis for making predictions.
• Hidden Layers: The hidden layers of the MLP use the tanh activation function to
capture the complex relationships between features. The use of multiple hidden
layers allows the model to learn intricate patterns in the data that are not immedi-
ately obvious.
• Output Layer: The output layer generates a binary output, indicating whether
a signature is genuine or forged. This binary classification helps to simplify the
decision-making process for the user.
The model uses the Adam optimizer to adjust the weights during training. This
optimizer is known for its efficiency and ability to converge quickly, making it an ideal
choice for this task. Additionally, a loss function based on categorical cross-entropy is
used to measure the difference between the predicted and true labels, ensuring that the
model learns effectively.
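The appendix lists the project's TensorFlow v1 graph code; purely as an illustration of the architecture described above, an equivalent network can be sketched with the Keras API (the hidden-layer sizes 7, 10, and 30 are taken from the sample code in the appendix, and n_features stands for the length of the extracted feature vector):

from tensorflow import keras

def build_mlp(n_features):
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(7, activation="tanh"),    # hidden layers use tanh
        keras.layers.Dense(10, activation="tanh"),
        keras.layers.Dense(30, activation="tanh"),
        keras.layers.Dense(2, activation="softmax"), # genuine vs. forged
    ])
    # Adam optimizer with categorical cross-entropy, as described above
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model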
Evaluation and Validation:
The model is evaluated using various metrics to ensure its performance is robust and
reliable. The dataset is divided into training and testing subsets, and the model’s ability
to generalize is tested by evaluating it on previously unseen data. Metrics such as
accuracy, precision, recall, and F1-score are used to measure the effectiveness of the
model. Cross-validation techniques are also employed to avoid overfitting and ensure
that the model is generalizable across different signature datasets.
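One way to realize this split and cross-validation is sketched below with scikit-learn; the estimator, fold count, and placeholder data are illustrative assumptions, not the project's actual configuration.

import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder data; in the project, X holds the extracted feature vectors
# and y the labels (e.g., 1 = genuine, 0 = forged)
rng = np.random.default_rng(0)
X = rng.random((200, 9))
y = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = MLPClassifier(hidden_layer_sizes=(7, 10, 30), activation="tanh", max_iter=1000)
# 5-fold cross-validation on the training split guards against overfitting
scores = cross_val_score(clf, X_train, y_train, cv=5)
print("Cross-validation accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))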
GUI Integration:
To provide a user-friendly interface, a Tkinter-based graphical user interface (GUI) is
developed. The GUI allows users to easily upload signature images and interact with the
model. After uploading, the system preprocesses the image, extracts relevant features,
and passes the data through the trained model for classification. The final result is
displayed, indicating whether the signature is genuine or forged. This interactive system
ensures that the model is accessible to users with minimal technical expertise.
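The appendix contains the project's actual GUI code; the following compact sketch only illustrates the upload-and-classify flow described here, where classify_signature is a stand-in for the project's preprocessing, feature extraction, and prediction pipeline.

import tkinter as tk
from tkinter import filedialog

def classify_signature(path):
    # Stub: in the real system this preprocesses the image, extracts the
    # features described above, and queries the trained MLP
    return "Genuine"

def verify_file():
    path = filedialog.askopenfilename(title="Select a signature image")
    if path:
        result_var.set("Result: " + classify_signature(path))

root = tk.Tk()
root.title("Signature Forgery Detection")
result_var = tk.StringVar(value="Result: -")
tk.Button(root, text="Upload signature", command=verify_file).pack(padx=20, pady=10)
tk.Label(root, textvariable=result_var).pack(padx=20, pady=10)
root.mainloop()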
3.2 Datasets
For training and testing the model, a diverse dataset of signature images is crucial.
The dataset used in this project consists of nearly 2000 images, collected from Kaggle,
which includes both genuine and forged signatures. Each image is scanned and con-
verted to RGB format, ensuring that the data is consistent and ready for preprocessing.
The dataset is divided into two main categories: genuine signatures and forged signa-
tures. These images are collected from different individuals, providing a broad range of
signature styles and variations.
In addition to the primary dataset, various augmentation techniques such as rotation,
scaling, and translation are applied to artificially increase the size of the dataset. This
ensures that the model is exposed to a wide variety of signature styles and forgery
techniques, improving its ability to generalize.
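One common way to apply such augmentation is sketched below with Keras' ImageDataGenerator; the ranges are illustrative choices, not the project's exact settings.

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Small random rotations, shifts, and zooms approximate natural signing variation
augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1,
                               fill_mode="constant",
                               cval=255)  # pad with white background

def augment(images, copies=5):
    # images: array of shape (n, height, width, 1) holding grayscale signatures
    batches = augmenter.flow(images, batch_size=len(images), shuffle=False)
    return np.concatenate([next(batches) for _ in range(copies)])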
3.3 Algorithms
The success of the proposed system relies on the integration of several algorithms from
both image processing and machine learning. These algorithms work together to pre-
process the images, extract features, train the classification model, and make predic-
tions.
• Gaussian Blur: This algorithm is used to reduce image noise by applying a filter
that averages the pixel values in a neighborhood around each pixel. This results
in a smoother image, which helps in extracting clean and accurate features.
• Otsu’s Thresholding: This method is used to convert grayscale images into bi-
nary images by determining an optimal threshold. This technique automatically
calculates the threshold that minimizes the within-class variance of pixel values,
making it ideal for signature segmentation.
• Adam Optimizer: The Adam optimizer is used to adjust the weights of the MLP
during training. It combines the benefits of both Momentum and RMSProp, en-
suring faster convergence and better performance in complex models.
3.4 Performance Metrics
Evaluating the performance of the signature forgery detection system is essential to assess its effectiveness and ensure it meets the required standards. Various metrics are
employed for this purpose. Accuracy measures the percentage of correctly classified
signatures, both genuine and forged, out of all test cases, serving as a common evalu-
ation metric, though it may be insufficient for imbalanced datasets. Precision focuses
on the proportion of genuine signatures that are correctly identified as genuine, mini-
mizing false positives that could misclassify authentic signatures as forgeries. Recall,
on the other hand, emphasizes the detection of forged signatures by measuring the per-
centage of forgeries correctly identified, ensuring that false negatives are minimized.
The F1-score, a harmonic mean of precision and recall, provides a balanced evaluation
of the model’s performance, particularly when both false positives and false negatives
are equally significant. Additionally, the confusion matrix offers a detailed breakdown
of the model’s performance by presenting true positives, true negatives, false positives,
and false negatives, enabling a deeper understanding of areas that require improvement.
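These metrics can be computed directly with scikit-learn once the model's predictions are available; in the brief sketch below, y_test and y_pred are assumed to hold the true and predicted labels (1 = genuine, 0 = forged).

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def report(y_test, y_pred):
    print("Accuracy :", accuracy_score(y_test, y_pred))
    print("Precision:", precision_score(y_test, y_pred))
    print("Recall   :", recall_score(y_test, y_pred))
    print("F1-score :", f1_score(y_test, y_pred))
    # Rows correspond to actual classes, columns to predicted classes
    print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))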
CHAPTER 4
RESULTS
The result of this system is a robust offline signature verification model that can deter-
mine the authenticity of a given signature. By preprocessing the input image to extract
key features such as ratio, centroid, eccentricity, solidity, skewness, and kurtosis, the
model leverages a neural network-based approach for classification. The system eval-
uates whether the signature is genuine or forged by comparing extracted features from
the test image with those in the training dataset. The GUI provides a user-friendly
interface to load and analyze signature images, offering accurate results based on the
implemented deep learning model. This system effectively aids in forgery detection,
ensuring reliability and efficiency in signature verification tasks.
Fig. 4.2 Original image
Fig. 4.4 Binary image
Fig. 4.6 Output for Case 1
Fig. 4.8 Original image
Fig. 4.10 Binary image
Fig. 4.12 Output for Case 2
CHAPTER 5
CONCLUSION
This project addresses the growing need for reliable and efficient signature forgery de-
tection, leveraging advanced machine learning and image processing techniques. Tra-
ditional methods often struggle to capture the complex patterns and variations inherent
in signature data. The proposed framework utilizes a combination of geometric and
statistical feature extraction methods, along with a Multilayer Perceptron (MLP) neural
network, to classify signatures as genuine or forged. By streamlining the feature extrac-
tion and classification process, the framework ensures enhanced accuracy, scalability,
and robustness.
The results highlight the framework’s effectiveness in detecting forged signatures.
With the use of MLP, accuracy improved significantly from 85.10% to 94.72% after
incorporating the feature extraction techniques. Additionally, metrics such as precision,
recall, and F1-score show substantial improvements, further reducing false positives
and false negatives. The integration of advanced image processing techniques, such as
Gaussian blur for noise reduction and Otsu’s thresholding for binarization, contributes
to the system’s high precision and reliability in distinguishing between genuine and
forged signatures.
While the proposed model demonstrates significant advantages, there is room for
further enhancements. Future work could focus on incorporating additional features,
such as temporal dynamics and pressure patterns, to capture more detailed charac-
teristics of signatures. Moreover, exploring deep learning models, such as Convolu-
tional Neural Networks (CNNs), could potentially improve the system’s ability to rec-
ognize more intricate forgeries. Expanding the evaluation with more diverse datasets
and testing the system under real-world conditions will further solidify the robustness
and adaptability of the model.
Overall, this project provides a valuable contribution to the field of signature veri-
fication, offering a scalable, efficient, and accurate solution for forgery detection. The
proposed system holds promise for a wide range of applications, from banking to legal
document verification, where secure and reliable authentication is paramount.
References
tional Conference on Computer, Communication, Chemical, Materials and Elec-
tronic Engineering (IC4ME2). IEEE, 2021, pp. 1–4.
[10] M. M. Hameed, R. Ahmad, M. L. M. Kiah, and G. Murtaza, “Machine learning-
based offline signature verification systems: A systematic review,” Signal Pro-
cessing: Image Communication, vol. 93, p. 116139, 2021.
Appendices
Appendix A
Sample Code
# Excerpted and lightly cleaned sample code. Imports are added for readability;
# helpers such as preproc(), testing(), the entry widgets e1/e2, the label
# array 'correct' and the feature count n_input are defined elsewhere in the
# project and are not reproduced here.
import tkinter as tk
from PIL import Image, ImageTk
from tensorflow import keras
import tensorflow.compat.v1 as tf  # the project relies on the TF v1 graph API

def gui():
    def check_signature():
        global myimg
        # Read the person id and the test-image path entered in the main window
        train_person_id = e1.get()
        test_image_path = e2.get()
        # Show the selected signature in a new window while it is checked
        top = tk.Toplevel()
        myimg = ImageTk.PhotoImage(Image.open(test_image_path).convert("RGB"))
        imglabel = tk.Label(top, image=myimg, bg="white")
        imglabel.grid(row=0, column=0)
        label3 = tk.Label(top, text="Checking Signature for Forgery")
        label3.grid(row=1, column=0)
        # Preprocess the test image and locate the stored feature CSVs
        preproc(test_image_path)
        train_path = ('C:\\Users\\tnush\\Desktop\\Mini project\\Features\\Training'
                      '/training_' + train_person_id + '.csv')
        testing(test_image_path)
        test_path = 'C:\\Users\\tnush\\Desktop\\Mini project\\TestFeatures/testcsv.csv'
        # ... (remaining GUI code omitted)

def readCSV(train_path, test_path, type2=False):
    # ... (CSV loading omitted)
    corr_train = keras.utils.to_categorical(correct, 2)  # converting labels to one-hot
    # ...

# Network parameters
n_hidden_1 = 7   # 1st layer number of neurons
n_hidden_2 = 10  # 2nd layer number of neurons
n_hidden_3 = 30  # 3rd layer number of neurons
n_classes = 2    # number of classes (genuine or forged)

# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])

weights = {
    # ... (weights for the first two hidden layers omitted)
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes], seed=2))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1], seed=3)),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes], seed=4))
}