Question 1: The following code is a perceptron implementation (with three do-nothing lines 59-61).

import numpy as np

class Perceptron(object):
    """Perceptron classifier.

    Parameters
    ------------
    n_iter : int
        Passes over the training dataset.
    random_state : int
        Random number generator seed for random weight
        initialization.

    Attributes
    ------------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Number of misclassifications (updates) in each epoch.
    iter_trained : int
        The number of iterations it took for training.
    """
    def __init__(self, n_iter=50, random_state=1):
        self.n_iter = n_iter
        self.random_state = random_state
        self.iter_trained = -1

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_examples, n_features]
            Training vectors, where n_examples is the number of examples and
            n_features is the number of features.
        y : array-like, shape = [n_examples]
            Target values.

        Returns
        -------
        self : object
        """
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []
        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.predict(xi) - target
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        ####### New code for doing nothing. - MEH
        this_code_does_nothing = True
        #######
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)
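Before hunting for the errors, it can help to see how net_input and predict interact: net_input computes the weighted sum w_[1:] . x + w_[0], and predict thresholds it at zero. A minimal sketch with hand-picked weights and made-up toy data (both are illustrative, not from the notebook):

```python
import numpy as np

# Hypothetical toy data: two linearly separable clusters.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

# Hand-picked weights: bias first, then one weight per feature.
w_ = np.array([0.0, 1.0, 1.0])

# net_input: weighted sum plus bias; predict: unit step at 0.
net = np.dot(X, w_[1:]) + w_[0]
pred = np.where(net >= 0.0, 1, -1)
print(pred)  # [ 1  1 -1 -1]
```

With these weights every point already lands on the correct side of the decision boundary x1 + x2 = 0, so the unit step reproduces y exactly.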
There are significant errors and omissions in the above perceptron implementation. Work on the above cell and modify the code so that:
(i) The lines containing errors are commented out, and new lines are added with corrected code.
(ii) The omissions are corrected.
(iii) The fit function stops when no more iterations are necessary, and stores the number of iterations required for the training.
(iv) The perceptron maintains a history of its weights, i.e. the set of weights after each point is processed.
At each place where you have modified the code, please add clear comments surrounding it, similarly to the "do-nothing" code. Make sure you evaluate the cell again, so that the following cells will be using the modified perceptron.

Question 2: Experimenting with hyperparameters
ppn = Perceptron(eta=0.0001, n_iter=20, random_state=1)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_)+1), ppn.errors_, marker='o')
plt.xticks(range(1,21)) # Set integer x-axis labels
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.show()
Running the above code, you can verify whether your modification in Question 1 works correctly. The point of this question is to experiment with the hyperparameter eta, the learning rate. Here are some specific questions:
(i) Find values of eta for which the process requires 10, 20, 30, and 40 iterations to converge.
(ii) Is it always the case that raising eta leads to a reduced (or equal) number of iterations? Explain with examples.
(iii) Find two different settings for the random state that give different convergence patterns for the same value of eta.
(iv) Based on your experiences in parts (i)-(iii), would binary search be an appropriate strategy for determining values of eta for which the perceptron converges within a desired number of iterations?
Please give your answers in the cell below.
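For part (i), one way to probe convergence counts is a brute-force sweep over candidate learning rates. The sketch below assumes the Question 1 fixes are in place (an eta-scaled update with the corrected sign, and stopping at the first error-free epoch); it inlines a compact training loop and uses made-up separable data in place of the notebook's X, y:

```python
import numpy as np

# Stand-in separable data; the notebook's actual X, y would be used instead.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)

def epochs_to_converge(eta, n_iter=200, random_state=1):
    """Inline perceptron training loop: returns the epoch at which the
    first error-free pass occurs, or None if none occurs within n_iter."""
    rgen = np.random.RandomState(random_state)
    w = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    for epoch in range(1, n_iter + 1):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if np.dot(xi, w[1:]) + w[0] >= 0.0 else -1
            update = eta * (target - pred)
            w[1:] += update * xi
            w[0] += update
            errors += int(update != 0.0)
        if errors == 0:
            return epoch
    return None

for eta in (0.0001, 0.001, 0.01, 0.1):
    print(eta, epochs_to_converge(eta))
```

Sweeping a logarithmic grid like this also gives material for parts (ii)-(iv): the epoch counts need not decrease monotonically as eta grows, which bears directly on whether binary search over eta is well-founded.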