Question 1. Practice with logistic regression

Let's first load the textbook's implementation of logistic regression with gradient descent.

```python
import numpy as np

class LogisticRegressionGD(object):
    """Logistic Regression Classifier using gradient descent.

    Parameters
    ------------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.
    random_state : int
        Random number generator seed for random weight initialization.

    Attributes
    -----------
    w_ : 1d-array
        Weights after fitting.
    loss_ : list
        Logistic loss function value in each epoch.
    """

    def __init__(self, eta=0.05, n_iter=100, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_examples, n_features]
            Training vectors, where n_examples is the number of examples
            and n_features is the number of features.
        y : array-like, shape = [n_examples]
            Target values.

        Returns
        -------
        self : object
        """
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.loss_ = []

        for i in range(self.n_iter):
            net_input = self.net_input(X)
            output = self.activation(net_input)
            errors = (y - output)
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            # compute the logistic loss
            loss = -y.dot(np.log(output)) - ((1 - y).dot(np.log(1 - output)))
            self.loss_.append(loss)
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def activation(self, z):
        """Compute logistic sigmoid activation"""
        return 1. / (1. + np.exp(-np.clip(z, -250, 250)))

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, 0)
        # equivalent to:
        # return np.where(self.activation(self.net_input(X)) >= 0.5, 1, 0)
```

Below you can see the first 3 data points of the data set, all labeled as 'setosa'. Let's set the numerical value for 'setosa' to 1 (i.e. y = 1).
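As a quick sanity check of the activation used above, here is a minimal standalone sketch of the same clipped logistic sigmoid (the standalone `sigmoid` helper is mine, not part of the assignment's code):

```python
import numpy as np

def sigmoid(z):
    # same clipped logistic used in LogisticRegressionGD.activation;
    # the clip keeps np.exp from overflowing for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -250, 250)))

print(sigmoid(0.0))    # sigmoid(0) = 0.5
print(sigmoid(250.0))  # saturates near 1
```

The decision boundary at `net_input >= 0` in `predict` is exactly the point where this sigmoid crosses 0.5, which is why the two commented variants of `predict` are equivalent.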
```python
>>> X[0:3]
array([[5.1, 1.4],
       [4.9, 1.4],
       [4.7, 1.3]])
```

Suppose the initial weights of the logistic neuron are w0 = 0.1, w1 = -0.2, w2 = 0.1.

Q1-1. Write the weights after processing data points 0, 1, 2, with learning rate η = 0.1, and show your calculations. This is similar to the previous assignment, only done now for the logistic neuron. You can also use LogisticRegressionGD to check your calculations.

Q1-2. Given our data X, let X_{d=2} and X_{d=3} be the quadratic and cubic features. Using code from the notebook on polynomial regression, generate X_{d=2} and X_{d=3}.

Q1-3. Using LogisticRegressionGD, fit X, X_{d=2} and X_{d=3}. Here you should set η = 0.1 and n_iter > 1000. For each of these three cases, report the loss function value for the model computed by LogisticRegressionGD. Here it is expected that the loss value decreases as d increases.

Show your calculations and code. Answer all three questions.
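For checking the Q1-1 hand calculations, here is a minimal sketch that applies the same gradient-descent update rule one data point at a time. Note a subtlety: `LogisticRegressionGD.fit` updates on the whole batch once per epoch, whereas Q1-1 asks for processing points 0, 1, 2 sequentially, so the weights after one epoch of `fit` will differ slightly from this sequential pass.

```python
import numpy as np

# The three setosa points (y = 1) and initial weights from the question
X = np.array([[5.1, 1.4], [4.9, 1.4], [4.7, 1.3]])
y = 1
w = np.array([0.1, -0.2, 0.1])  # [w0, w1, w2]
eta = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for xi in X:
    # net input, sigmoid output, and error for this single example
    output = sigmoid(w[0] + xi.dot(w[1:]))
    error = y - output
    # same update rule as the class, restricted to one example
    w[1:] += eta * error * xi
    w[0] += eta * error
    print(w)
```

Since y = 1 and the sigmoid output is always below 1, the error is positive at every step, so all three weights move upward on this data.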
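For Q1-2, one plausible sketch of the feature generation, assuming (as in a typical polynomial-regression notebook) that X_{d=2} and X_{d=3} are formed by appending element-wise powers of the original columns; the actual notebook may instead use scikit-learn's `PolynomialFeatures`, which would also add cross terms.

```python
import numpy as np

X = np.array([[5.1, 1.4], [4.9, 1.4], [4.7, 1.3]])

# append squared columns for d = 2, squared and cubed columns for d = 3
X_d2 = np.hstack([X, X**2])          # shape (3, 4)
X_d3 = np.hstack([X, X**2, X**3])    # shape (3, 6)

print(X_d2.shape)
print(X_d3.shape)
```

For Q1-3, each of `X`, `X_d2`, and `X_d3` can then be fit with `LogisticRegressionGD(eta=0.1, n_iter=2000)` and the final entry of the model's `loss_` list compared across the three cases.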
