Question 3. Consider the perceptron learning algorithm using (batch) gradient descent, as discussed in class. We will assume that the data is defined using two features (x1 and x2). We are interested in learning the perceptron boundary by training on the following training data:
a) The gradient of the objective function with respect to the augmented weights, just before the second update, is [-1.95, 0.14].
b) The squared-loss value for the updated perceptron, after the second update, on the training data is 2.05.
c) The weight vector w after two iterations will be [-0.61, 0.31, 0.11].
d) After the second update, the perceptron makes 0 mistakes on the training data.
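The statements above can be checked mechanically once the training data is known. Since the training-data table is not reproduced in the source, the sketch below uses hypothetical data and a hypothetical learning rate `eta = 0.1` purely to illustrate the procedure: augment each input with a bias feature, run two batch gradient-descent updates on the squared loss, then inspect the weight vector, the loss, and the number of mistakes.

```python
import numpy as np

# Hypothetical training data (the original table is missing from the source):
# each row is (x1, x2); labels y are in {-1, +1}.
X = np.array([[1.0, 1.0],
              [2.0, 0.5],
              [-1.0, -1.0],
              [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Augment inputs with a constant bias feature so w = [w0, w1, w2].
Xa = np.hstack([np.ones((X.shape[0], 1)), X])

w = np.zeros(3)   # initial augmented weight vector (assumed zero)
eta = 0.1         # learning rate (assumed)

for step in range(2):          # two batch updates, as in the question
    pred = Xa @ w              # linear outputs w . x for every example
    grad = -(y - pred) @ Xa    # gradient of 0.5 * sum((y - w.x)^2) w.r.t. w
    w = w - eta * grad         # batch gradient-descent update

loss = 0.5 * np.sum((y - Xa @ w) ** 2)   # squared loss after two updates
mistakes = int(np.sum(np.sign(Xa @ w) != y))  # classification mistakes
```

With the actual table from the question substituted for `X` and `y` (and the learning rate used in class), the computed `grad` just before the second update, the final `w`, `loss`, and `mistakes` can be compared directly against statements a) through d).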