Question: Here we want to prove that the averaged perceptron training algorithm (shown below) learns a hyperplane described by the averaged weights $\frac{1}{N}\sum_{k=1}^{K} c_k w_k$ and the averaged bias $\frac{1}{N}\sum_{k=1}^{K} c_k b_k$, where $c_k$ is the number of training points on which $w_k$ is not updated (including the training point that results in $w_k$). To simplify the proof we assume there is only one epoch. To make the problem easier, we introduce the following notation:

- Let $I_k$ be the index of the example that results in $w_k$. Then $c_k = I_{k+1} - I_k$.
- Let $I_{K+1} = N$.
- Let $x_{I_k}$ be the training example that results in $w_k$ and $y_{I_k}$ be its label. Given our notation so far, we can write $w_k = w_{k-1} + y_{I_k} x_{I_k}$.
- We can rewrite line 9 as $u = u + y_{I_k} I_k x_{I_k}$.

Show that $\frac{1}{N}\sum_{k=1}^{K} c_k w_k = w_K - \frac{1}{N} U$, where $U = \sum_{k=1}^{K} y_{I_k} I_k x_{I_k}$. The bias part follows the same logic, so we ignore it in the proof. Hint: use the definition of the updates and expand the left-hand side.
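
Following the hint, one way the expansion can go is sketched below; it assumes $w_0 = 0$ (so that $w_k = \sum_{j=1}^{k} y_{I_j} x_{I_j}$) and uses the telescoping sum $\sum_{k=j}^{K}(I_{k+1} - I_k) = I_{K+1} - I_j = N - I_j$:

$$
\begin{aligned}
\frac{1}{N}\sum_{k=1}^{K} c_k w_k
&= \frac{1}{N}\sum_{k=1}^{K} (I_{k+1}-I_k)\sum_{j=1}^{k} y_{I_j} x_{I_j}
 = \frac{1}{N}\sum_{j=1}^{K} y_{I_j} x_{I_j}\sum_{k=j}^{K} (I_{k+1}-I_k) \\
&= \frac{1}{N}\sum_{j=1}^{K} (N - I_j)\, y_{I_j} x_{I_j}
 = \sum_{j=1}^{K} y_{I_j} x_{I_j} - \frac{1}{N}\sum_{j=1}^{K} I_j\, y_{I_j} x_{I_j}
 = w_K - \frac{1}{N} U .
\end{aligned}
$$

The key step is exchanging the order of summation: the term $y_{I_j} x_{I_j}$ appears in every $w_k$ with $k \ge j$, so it is weighted by $\sum_{k=j}^{K} c_k = N - I_j$.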

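As a quick sanity check (not a substitute for the algebraic proof), the identity can also be verified numerically. The sketch below assumes a standard single-epoch perceptron with $w_0 = 0$, the mistake condition $y_i (w \cdot x_i) \le 0$, zero-based example indices (so $I_{K+1} = N$ still holds), and random data; the variable names are illustrative and do not come from the original pseudocode.

```python
import numpy as np

# Numerical check of  (1/N) * sum_k c_k w_k  ==  w_K - U/N
# for an assumed single-epoch perceptron with w_0 = 0 and 0-based indices.
rng = np.random.default_rng(0)
N, d = 50, 3
X = rng.normal(size=(N, d))
y = rng.choice([-1, 1], size=N)

w = np.zeros(d)      # current weight vector (w_0 = 0)
ws = []              # w_1, ..., w_K: weight vector after each update
idx = []             # I_1, ..., I_K: index of the example causing each update
u = np.zeros(d)      # accumulates U = sum_k y_{I_k} * I_k * x_{I_k}

for i in range(N):                        # one epoch over the data
    if y[i] * (w @ X[i]) <= 0:            # mistake -> perceptron update
        w = w + y[i] * X[i]
        ws.append(w.copy())
        idx.append(i)
        u += y[i] * i * X[i]              # the rewritten "line 9" update

K = len(ws)
I = idx + [N]                             # append I_{K+1} = N
c = [I[k + 1] - I[k] for k in range(K)]   # c_k = I_{k+1} - I_k

lhs = sum(c[k] * ws[k] for k in range(K)) / N
rhs = ws[-1] - u / N                      # w_K - U/N

print(np.allclose(lhs, rhs))              # expected: True
```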
