Question: The following table summarizes a data set with three attributes A, B, C and two class labels +, −. Build a two-level decision tree.
| A | B | C | Number of + | Number of − |
|---|---|---|---|---|
| T | T | T | 5 | 0 |
| F | T | T | 0 | 20 |
| T | F | T | 20 | 0 |
| F | F | T | 0 | 5 |
| T | T | F | 0 | 0 |
| F | T | F | 25 | 0 |
| T | F | F | 0 | 0 |
| F | F | F | 0 | 25 |
(a) According to the classification error rate, which attribute would be chosen as the first splitting attribute? For each attribute, show the contingency table and the gains in classification error rate.
(b) Repeat for the two children of the root node.
(c) How many instances are misclassified by the resulting decision tree?
(d) Repeat parts (a), (b), and (c) using C as the splitting attribute.
(e) Use the results in parts (c) and (d) to conclude about the greedy nature of the decision tree induction algorithm.
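For part (a), a node that predicts its majority class misclassifies min(n+, n−) of the instances reaching it, and the gain of a split is the parent's error minus the summed errors of the two children. Below is a minimal Python sketch of that bookkeeping, assuming the table above; the row encoding and the helper names (`ROWS`, `ATTRS`, `errors`, `split`) are mine, not part of the original solution:

```python
# Sketch of part (a): error-rate gain for each candidate root split.
# Each row is (A, B, C, count_plus, count_minus) from the table above.
ROWS = [
    (True,  True,  True,   5,  0),
    (False, True,  True,   0, 20),
    (True,  False, True,  20,  0),
    (False, False, True,   0,  5),
    (True,  True,  False,  0,  0),
    (False, True,  False, 25,  0),
    (True,  False, False,  0,  0),
    (False, False, False,  0, 25),
]
ATTRS = {"A": 0, "B": 1, "C": 2}

def errors(rows):
    """Misclassifications when a leaf predicts its majority class."""
    plus = sum(r[3] for r in rows)
    minus = sum(r[4] for r in rows)
    return min(plus, minus)

def split(rows, attr):
    """Partition rows into the attr=T and attr=F branches."""
    i = ATTRS[attr]
    return [r for r in rows if r[i]], [r for r in rows if not r[i]]

base = errors(ROWS)  # 50 of the 100 instances are misclassified with no split
for attr in ATTRS:
    t_branch, f_branch = split(ROWS, attr)
    after = errors(t_branch) + errors(f_branch)
    # Divide the counts by 100 to express the gain as an error *rate*.
    print(f"split on {attr}: {after} errors, gain {base - after}")
# Prints gains of 25 for A, 10 for B, and 0 for C, so A wins at the root.
```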
Step-by-Step Solution
(a) The error rate for the data without partitioning on any attribute is 0.5, since each class accounts for 50 of the 100 instances. After splitting on attribute A the total error drops to 25 instances (gain 0.25); splitting on B leaves 40 (gain 0.10); splitting on C leaves 50 (gain 0). Attribute A is therefore chosen as the first splitting attribute.
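Parts (b) through (d) follow the same pattern one level down. The sketch below continues the previous one (reusing `ROWS`, `ATTRS`, `errors`, and `split`): it greedily picks the best second-level split under a fixed root and totals the misclassifications of the resulting two-level tree. The function name `two_level_errors` is mine.

```python
def two_level_errors(root_attr):
    """Greedy second-level splits under a fixed root attribute."""
    total = 0
    for branch, child in zip("TF", split(ROWS, root_attr)):
        # Pick the remaining attribute with the fewest resulting errors.
        best = min((a for a in ATTRS if a != root_attr),
                   key=lambda a: sum(errors(p) for p in split(child, a)))
        err = sum(errors(p) for p in split(child, best))
        print(f"  {root_attr}={branch}: split on {best} -> {err} errors")
        total += err
    return total

print("root A:", two_level_errors("A"), "misclassified")  # parts (b), (c)
print("root C:", two_level_errors("C"), "misclassified")  # part (d)
```

With the table above, rooting the tree at the greedy choice A leaves 20 instances misclassified, while rooting it at C, whose gain at the root is zero, yields a tree with no errors. That contrast is the intended conclusion for part (e): the induction algorithm is greedy, so the locally best split does not guarantee the globally best tree.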
