Question: [6 pts] Suppose $S = \{(x_i, y_i)\}_{i=1}^{n} \subseteq \mathbb{R}^d \times \{-1, +1\}$ is a linearly separable training dataset. We saw in class that there exists a $w^* \in \mathbb{R}^d$ such that $y_i \langle w^*, x_i \rangle \ge 1$ for all $i \in [n]$. Recall that the Perceptron algorithm outputs a hyperplane that separates the positive and negative examples (i.e., $y_i \langle w, x_i \rangle > 0$ for all $i \in [n]$).

(a) Devise a new algorithm called MARGIN-PERCEPTRON that outputs a $\widehat{w}$ that separates the positive and negative examples by a margin, that is, $y_i \langle \widehat{w}, x_i \rangle \ge 1$ for all $i \in [n]$.

(b) Suppose, as in class, that $R \ge \max_i \|x_i\|_2$ and $B \ge \min\{\|w\|_2 : y_i \langle w, x_i \rangle \ge 1 \text{ for all } i \in [n]\}$. Show, using the technique we used in class, that MARGIN-PERCEPTRON halts after at most $B^2(R^2 + 2)$ update steps.
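For part (a), one natural construction (a sketch of a standard variant, not necessarily the intended classroom answer) is to run the Perceptron but trigger an update whenever an example's margin falls below 1, rather than only when its sign is wrong. A minimal Python sketch, with hypothetical names (margin_perceptron, X, y):

import numpy as np

def margin_perceptron(X, y, max_passes=100000):
    """Perceptron variant that updates on margin violations.

    X: (n, d) array of examples; y: (n,) array of labels in {-1, +1}.
    On termination, y[i] * <w, X[i]> >= 1 holds for every i.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_passes):
        updated = False
        for i in range(n):
            # Update on a margin violation, not merely a misclassification.
            if y[i] * np.dot(w, X[i]) < 1:
                w = w + y[i] * X[i]
                updated = True
        if not updated:
            # Every example now has margin at least 1, so return.
            return w
    raise RuntimeError("no convergence within max_passes")

The only change from the vanilla Perceptron is the update test $y_i \langle w, x_i \rangle < 1$ in place of $y_i \langle w, x_i \rangle \le 0$; the termination condition then gives the required margin by construction.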
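For part (b), the worked steps on this page are locked, but the stated bound matches the outcome of the usual Perceptron convergence argument. The LaTeX sketch below records that standard route, assuming each update adds $yx$ to $w$ when $y \langle w, x \rangle < 1$, with $w_0 = 0$:

Let $w_t$ denote the hypothesis after $t$ updates, and suppose update $t$
is triggered by an example $(x, y)$ with $y \langle w_{t-1}, x \rangle < 1$.

% Progress: each update gains at least 1 against $w^*$.
\[
  \langle w_t, w^* \rangle
    = \langle w_{t-1}, w^* \rangle + y \langle x, w^* \rangle
    \ge \langle w_{t-1}, w^* \rangle + 1
  \;\Longrightarrow\;
  \langle w_T, w^* \rangle \ge T.
\]

% Norm growth: each update adds less than $R^2 + 2$ to $\|w\|_2^2$.
\[
  \|w_t\|_2^2
    = \|w_{t-1}\|_2^2 + 2 y \langle w_{t-1}, x \rangle + \|x\|_2^2
    < \|w_{t-1}\|_2^2 + 2 + R^2
  \;\Longrightarrow\;
  \|w_T\|_2^2 \le T (R^2 + 2).
\]

% Combine via Cauchy--Schwarz, using $\|w^*\|_2 \le B$.
\[
  T \le \langle w_T, w^* \rangle
    \le \|w_T\|_2 \, \|w^*\|_2
    \le B \sqrt{T (R^2 + 2)}
  \;\Longrightarrow\;
  T \le B^2 (R^2 + 2).
\]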
