Question: You are asked to evaluate the performance of two classification models, M1 and M2. The test set you have chosen contains 26 binary attributes, labeled A through Z. Table 5.14 shows the posterior probabilities obtained by applying the models to the test set. (Only the posterior probabilities for the positive class are shown.) As this is a two-class problem, P(-) = 1 - P(+) and P(-|A, ..., Z) = 1 - P(+|A, ..., Z). Assume that we are mostly interested in detecting instances from the positive class.
(a) Plot the ROC curve for both M1 and M2. (You should plot them on the same graph.) Which model do you think is better? (A plotting sketch follows the table below.)
(b) For model M1, suppose you choose the cutoff threshold to be t = 0.5. In other words, any test instance whose posterior probability is greater than t will be classified as a positive example. Compute the precision, recall, and F-measure for the model at this threshold value. (A computation sketch for parts (b)-(d) appears at the end of this page.)
(c) Repeat the analysis for part (b) using the same cutoff threshold on model M2. Compare the F-measure results for both models. Which model is better? Are the results consistent with what you expect from the ROC curve?
(d) Repeat part (c) for model M1 using the threshold t = 0.1. Which threshold do you prefer, t = 0.5 or t = 0.1? Are the results consistent with what you expect from the ROC curve?
Table 5.14. Posterior probabilities for Exercise 17

    Instance   True Class   P(+|A,...,Z, M1)   P(+|A,...,Z, M2)
    1          +            0.73               0.61
    2          +            0.69               0.03
    3          -            0.44               0.68
    4          -            0.55               0.31
    5          +            0.67               0.45
    6          +            0.47               0.09
    7          -            0.08               0.38
    8          -            0.15               0.05
    9          +            0.45               0.01
    10         -            0.35               0.04
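
One way to approach part (a) is sketched below. This is a minimal Python sketch, not part of the original exercise: it assumes scikit-learn and matplotlib are installed, encodes the positive class as 1 and the negative class as 0, and takes the scores directly from Table 5.14; the names y_true and scores are illustrative choices made here.

    import matplotlib.pyplot as plt
    from sklearn.metrics import auc, roc_curve

    # True class of instances 1-10 from Table 5.14 (+ encoded as 1, - as 0).
    y_true = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]

    # Posterior probabilities P(+|A,...,Z) assigned by each model.
    scores = {
        "M1": [0.73, 0.69, 0.44, 0.55, 0.67, 0.47, 0.08, 0.15, 0.45, 0.35],
        "M2": [0.61, 0.03, 0.68, 0.31, 0.45, 0.09, 0.38, 0.05, 0.01, 0.04],
    }

    for name, probs in scores.items():
        fpr, tpr, _ = roc_curve(y_true, probs)  # ROC points over all thresholds
        plt.plot(fpr, tpr, marker="o", label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

    plt.plot([0, 1], [0, 1], "k--", label="random guessing")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.title("ROC curves for M1 and M2")
    plt.legend()
    plt.show()

The model whose curve lies closer to the top-left corner (equivalently, the one with the larger area under the curve) is the better detector of the positive class.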
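For parts (b)-(d), precision = TP / (TP + FP), recall = TP / (TP + FN), and the F-measure is their harmonic mean, F = 2 * precision * recall / (precision + recall). The sketch below is a minimal illustration of computing these at a cutoff t, again assuming the Table 5.14 values; the helper name threshold_metrics is introduced here for illustration and is not defined in the exercise.

    def threshold_metrics(y_true, probs, t):
        """Classify an instance as positive when P(+|A,...,Z) > t, then score it."""
        y_pred = [1 if p > t else 0 for p in probs]
        tp = sum(1 for actual, pred in zip(y_true, y_pred) if actual == 1 and pred == 1)
        fp = sum(1 for actual, pred in zip(y_true, y_pred) if actual == 0 and pred == 1)
        fn = sum(1 for actual, pred in zip(y_true, y_pred) if actual == 1 and pred == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0   # TP / (TP + FP)
        recall = tp / (tp + fn) if tp + fn else 0.0      # TP / (TP + FN)
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)     # harmonic mean
        return precision, recall, f_measure

    y_true = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]  # true classes from Table 5.14
    m1 = [0.73, 0.69, 0.44, 0.55, 0.67, 0.47, 0.08, 0.15, 0.45, 0.35]
    m2 = [0.61, 0.03, 0.68, 0.31, 0.45, 0.09, 0.38, 0.05, 0.01, 0.04]

    print("M1 at t = 0.5:", threshold_metrics(y_true, m1, 0.5))  # part (b)
    print("M2 at t = 0.5:", threshold_metrics(y_true, m2, 0.5))  # part (c)
    print("M1 at t = 0.1:", threshold_metrics(y_true, m1, 0.1))  # part (d)

Comparing the three printed (precision, recall, F-measure) triples against the ROC curves from part (a) is what the consistency questions in parts (c) and (d) ask about.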
