Unit-7 Evaluation Notes

Evaluation is the final stage in the AI project cycle, assessing a model's performance using testing data to ensure reliability. The Confusion Matrix is a key tool for comparing predictions against actual outcomes, including terms like True Positive and False Negative. Four main evaluation metrics are discussed: Accuracy, Precision, Recall, and F1 Score, each providing insights into the model's effectiveness.
CLASS-10 UNIT-7 EVALUATION NOTES

What is Evaluation?
- Evaluation is the final stage in the AI Project Cycle. Once a model has been made and trained, it needs to go through proper testing so that one can calculate the efficiency and performance of the model. Hence, the model is tested with the help of Testing Data.
- Evaluation is the process of understanding the reliability and final performance of any AI model by feeding the test data set into the model and comparing its output with the actual answers.

Why do we need evaluation?


While modelling, we make different types of models. A decision then has to be taken about which model is better than another, so proper testing and evaluation is needed to calculate the efficiency and performance of each model.
An efficient evaluation proves helpful in selecting the most suitable modelling method to represent our data.
Note: the training data should not be reused for evaluation, because the model may simply remember its answers and then perform poorly when a new dataset is introduced to it. This situation is known as overfitting.
Evaluation is basically done by comparing two things:
1. Prediction: The output given by the machine after training and testing is known as Prediction.
2. Reality: Reality is the real situation or actual scenario about which the prediction has been made by the machine.
CONFUSION MATRIX
1. The comparison between the results of Prediction and Reality is called the Confusion Matrix.
2. It is a record that helps in evaluation.
3. It is not a calculation in itself; it is a performance measurement for machine learning classification problems, where the output can be two or more classes.

TERMINOLOGIES OF CONFUSION MATRIX:


1. Positive: The prediction is positive for the scenario. For example: there will be board exams.
2. Negative: The prediction is negative for the scenario. For example: there will be no board exams conducted this year.
3. True Positive (TP): The predicted value matches the actual value, i.e. the actual value was positive and the model predicted a positive value.
4. True Negative (TN): The predicted value matches the actual value, i.e. the actual value was negative and the model predicted a negative value.
5. False Positive (FP, Type 1 error): The actual value was negative but the model predicted a positive value.
6. False Negative (FN, Type 2 error): The actual value was positive but the model predicted a negative value.
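The four outcomes above can be counted by comparing each prediction with reality. A minimal sketch, assuming an example list of yes/no results (1 = positive, 0 = negative); the data values are illustrative, not taken from these notes:

```python
# Counting TP, TN, FP, FN by comparing Prediction with Reality.
# 1 = positive, 0 = negative (assumed example data).
prediction = [1, 0, 1, 1, 0, 1, 0, 0]
reality    = [1, 0, 0, 1, 1, 1, 0, 1]

tp = tn = fp = fn = 0
for p, r in zip(prediction, reality):
    if p == 1 and r == 1:
        tp += 1  # True Positive: predicted positive, actually positive
    elif p == 0 and r == 0:
        tn += 1  # True Negative: predicted negative, actually negative
    elif p == 1 and r == 0:
        fp += 1  # False Positive (Type 1 error)
    else:
        fn += 1  # False Negative (Type 2 error)

print(tp, tn, fp, fn)  # for this data: 3 2 1 2
```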

PARAMETERS TO EVALUATE THE MODEL


There are four parameters to evaluate a model:
1. Accuracy: It is the percentage of correct predictions out of all the observations. A prediction is correct if it matches the reality. All True Positive and True Negative cases are the ones in which the Prediction matches the Reality.
Accuracy = (TP + TN) / (TP + TN + FP + FN) x 100%
2. Precision: It is defined as the percentage of True Positive cases out of all the cases where the prediction is positive. It takes True Positives and False Positives into account.
Precision = TP / (TP + FP)
Note: if Precision is high, it means the True Positive cases are more, and the model gives fewer False Positive predictions.
3. Recall (Sensitivity): It is the fraction of positive cases that are correctly identified, i.e. the number of correctly identified positive results divided by the number of all samples that should have been identified as positive.
Recall = TP / (TP + FN)
The numerator in both Precision and Recall is the same: True Positive. But in the denominator, Precision counts the False Positives while Recall takes the False Negatives into consideration.
4. F1 Score: The F1 score, also called F-score or F-measure, is a measure of a test's accuracy. It can be defined as the measure of balance between Precision and Recall. The F1 score is a number between 0 and 1 and is the harmonic mean of Precision and Recall.
F1 Score = 2 x (Precision x Recall) / (Precision + Recall)
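The four parameters can be computed together from the confusion-matrix counts. A minimal sketch, where the TP, TN, FP and FN counts are assumed example values chosen only for illustration:

```python
# Computing the four evaluation parameters from confusion-matrix counts.
tp, tn, fp, fn = 60, 25, 5, 10  # assumed example values

accuracy  = (tp + tn) / (tp + tn + fp + fn)         # correct predictions / all cases
precision = tp / (tp + fp)                          # TP out of all predicted positives
recall    = tp / (tp + fn)                          # TP out of all actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```

Because F1 is the harmonic mean, it stays low unless both Precision and Recall are high, which is why it is used as the balance between them.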
QUESTION:-
Calculate accuracy, precision, recall, and F1 score for the
following Confusion Matrix. Suggest which metric would
not be a good evaluation parameter and why?
