Problem 4: Classification Report
There are metrics other than accuracy that can be used to evaluate a classification model. Precision and recall are two such
metrics that can be used to convey how well the model performs on observations in specific classes. Before we can officially
define these metrics, we need to introduce a few preliminary definitions. In a binary classification problem, there are two
possible classes. We will refer to one of the classes as the positive class and will refer to the other as the negative class.
This designation is often arbitrary. Assume that we have used a classification model to generate class predictions for a data
set. We can group the observations using the following designations:
An observation is considered to be:
A true positive if it was predicted to be in the positive class, and actually was in the positive class.
A false positive if it was predicted to be in the positive class, but actually was in the negative class.
A true negative if it was predicted to be in the negative class, and actually was in the negative class.
A false negative if it was predicted to be in the negative class, but actually was in the positive class.
For a set of predictions, let TP, FP, TN, and FN denote the number of true positives, false positives, true negatives, and false
negatives, respectively.
The model's positive precision, positive recall, negative precision, and negative recall scores are defined as follows:
Positive Precision = (Number of True Positives) / (Number of Positive Predictions) = TP / (TP + FP)
Positive Recall = (Number of True Positives) / (Number of Positive Observations) = TP / (TP + FN)
Negative Precision = (Number of True Negatives) / (Number of Negative Predictions) = TN / (TN + FN)
Negative Recall = (Number of True Negatives) / (Number of Negative Observations) = TN / (TN + FP)
The precision for a particular class is an estimate of the probability of a correct classification, given that the model has
classified an observation as that class. The recall for a particular class is an estimate of the probability of a correct
classification, given that the observation is actually a member of that class.
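As a quick illustration of these definitions, the four counts and the corresponding scores can be computed directly with NumPy comparisons. The labels below ("pos"/"neg") are hypothetical, made up for this example:

```python
import numpy as np

# Hypothetical labels, for illustration only.
true_y = np.array(["pos", "pos", "pos", "neg", "neg", "neg", "neg", "pos"])
pred_y = np.array(["pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos"])

TP = np.sum((pred_y == "pos") & (true_y == "pos"))  # predicted pos, actually pos -> 3
FP = np.sum((pred_y == "pos") & (true_y == "neg"))  # predicted pos, actually neg -> 1
TN = np.sum((pred_y == "neg") & (true_y == "neg"))  # predicted neg, actually neg -> 3
FN = np.sum((pred_y == "neg") & (true_y == "pos"))  # predicted neg, actually pos -> 1

pos_precision = TP / (TP + FP)  # correct among positive predictions
pos_recall    = TP / (TP + FN)  # correct among actual positives
neg_precision = TN / (TN + FN)  # correct among negative predictions
neg_recall    = TN / (TN + FP)  # correct among actual negatives
```

For this toy data, each of the four scores works out to 3/4 = 0.75.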
Write a function called classification_report that accepts two parameters: true_y and pred_y. This function
will print several metrics used to evaluate the performance of a classification model based on the supplied values of
true_y and pred_y. The function should perform the following steps:
Create a local variable called classes that stores the unique values that appear in true_y. You may use
np.unique for this. Going forward, treat the first value in classes as the "negative class" and the second value in
classes as the "positive class".
Use find_accuracy to calculate and store the model's accuracy.
Use NumPy and no loops to calculate TP, FP, TN, and FN.
Calculate the positive precision, positive recall, negative precision, and negative recall.
Print several lines displaying the results of these calculations, as shown below. The first two lines of this output
should display the names of the positive and negative classes.
Format your results as shown below. Include a blank line between "Negative Class" and "Accuracy". Make sure
that the values used to replace the xxxx symbols are left-aligned. All numeric output should be rounded to four
decimal places.
Positive Class: xxxx
Negative Class: xxxx
Accuracy: xxxx
Positive Precision: xxxx
Positive Recall: xxxx
Negative Precision: xxxx
Negative Recall: xxxx
This function should not return any value.
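The steps above can be sketched as follows. This is one possible implementation, not the official solution; find_accuracy is assumed to come from an earlier problem, so a minimal stand-in version is included here:

```python
import numpy as np

def find_accuracy(true_y, pred_y):
    # Stand-in for the helper assumed from an earlier problem:
    # the proportion of predictions that match the true labels.
    return np.mean(np.asarray(true_y) == np.asarray(pred_y))

def classification_report(true_y, pred_y):
    true_y = np.asarray(true_y)
    pred_y = np.asarray(pred_y)

    # Step 1: unique class values; classes[0] is the negative class,
    # classes[1] is the positive class.
    classes = np.unique(true_y)
    neg, pos = classes[0], classes[1]

    # Step 2: accuracy via the helper function.
    accuracy = find_accuracy(true_y, pred_y)

    # Step 3: vectorized counts, no loops.
    TP = np.sum((pred_y == pos) & (true_y == pos))
    FP = np.sum((pred_y == pos) & (true_y == neg))
    TN = np.sum((pred_y == neg) & (true_y == neg))
    FN = np.sum((pred_y == neg) & (true_y == pos))

    # Step 4: precision and recall for each class.
    pos_precision = TP / (TP + FP)
    pos_recall    = TP / (TP + FN)
    neg_precision = TN / (TN + FN)
    neg_recall    = TN / (TN + FP)

    # Step 5: print the report; values rounded to four decimal places
    # and left-aligned after padded labels.
    print('Positive Class:     ', pos)
    print('Negative Class:     ', neg)
    print()
    print('Accuracy:           ', round(accuracy, 4))
    print('Positive Precision: ', round(pos_precision, 4))
    print('Positive Recall:    ', round(pos_recall, 4))
    print('Negative Precision: ', round(neg_precision, 4))
    print('Negative Recall:    ', round(neg_recall, 4))
```

Note that np.unique returns its values in sorted order, so the negative/positive designation here simply follows that ordering, consistent with the arbitrary designation described above.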
We will now apply this function to the examples in Problem
Use the classification_report function to display a report for the medical diagnosis model from Problem
Use the classification_report function to display a report for the image classification model from Problem
