Suppose you are running a learning experiment on a new algorithm. You have a data set consisting of two examples of each of two classes. You plan to use leave-one-out cross-validation. As a baseline, you run your experimental setup on a simple majority classifier. (A majority classifier is given a set of training data and then always outputs the class that is in the majority in the training set, regardless of the input.) You expect the majority classifier to score about 50% on leave-one-out cross-validation, but to your surprise, it scores zero. Can you explain why?
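To make the setup concrete, here is a minimal Python sketch (not part of the original exercise; the label names and helper function are hypothetical) that runs leave-one-out cross-validation of a majority classifier on a balanced four-example dataset:

```python
from collections import Counter

def majority_classifier(train_labels):
    """Return the label in the majority in the training set."""
    # Counter.most_common(1) yields [(label, count)] for the most frequent label.
    return Counter(train_labels).most_common(1)[0][0]

labels = ["A", "A", "B", "B"]  # two examples of each of two classes

correct = 0
for i, held_out in enumerate(labels):
    train = labels[:i] + labels[i + 1:]  # leave one example out
    prediction = majority_classifier(train)
    # Removing one example leaves the *other* class in the majority (2 vs 1),
    # so the prediction never matches the held-out example.
    correct += (prediction == held_out)

print(f"leave-one-out accuracy: {correct / len(labels):.0%}")  # prints 0%
```

Running the sketch reproduces the surprising result: each time an example is held out, its own class loses the majority, so the classifier is wrong on every fold.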