Question: Problem 1. [30 points] Consider binary training data S = {(x_1, y_1), ..., (x_n, y_n)}, where the feature vectors are x_i ∈ R^d and y_i ∈ {0, 1}, i = 1, ..., n. Note that in the lectures we assumed y_i ∈ {−1, +1}.
Show that

P(y | x; w) = P(y = 1 | x; w)^y · P(y = 0 | x; w)^(1 − y).
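Since y can only take the values 0 and 1, the identity can be verified case by case (a sketch of the check):

```latex
y = 1:\quad P(y{=}1 \mid x; w)^{1}\, P(y{=}0 \mid x; w)^{0} = P(y{=}1 \mid x; w)
\qquad
y = 0:\quad P(y{=}1 \mid x; w)^{0}\, P(y{=}0 \mid x; w)^{1} = P(y{=}0 \mid x; w)
```

In both cases the right-hand side reduces to the probability of the observed label, which is exactly P(y | x; w).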
Following the derivation of logistic regression in the lectures, derive the log-likelihood for the training data when the label of each training example is y_i ∈ {0, 1} and the sigmoid function is used to convert the linear predictions to probabilities.
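Assuming the standard setup, with p_i = σ(w^T x_i), σ(z) = 1/(1 + e^{−z}), and independent training examples, the log-likelihood takes the familiar Bernoulli form (a sketch; the lecture notation may differ):

```latex
\ell(w) = \log \prod_{i=1}^{n} p_i^{\,y_i} (1 - p_i)^{1 - y_i}
        = \sum_{i=1}^{n} \Big[ y_i \log \sigma(w^\top x_i)
          + (1 - y_i) \log\!\big(1 - \sigma(w^\top x_i)\big) \Big]
```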
Then calculate the gradient, write down the gradient descent (GD) update rule for the resulting optimization problem, and discuss the contribution of each training example to the updated solution in every iteration of GD. In particular, compare the contribution of a misclassified example with that of a correctly classified example to the gradient.
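A minimal numerical sketch of the update rule and the per-example gradient contributions (function names are my own, assuming the log-likelihood gradient ∇ℓ(w) = Σ_i (y_i − σ(w^T x_i)) x_i):

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping linear predictions to probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def per_example_gradients(w, X, y):
    # Row i is the gradient of the log-likelihood contributed by example i:
    # (y_i - sigma(w^T x_i)) * x_i.  The residual y_i - p_i is near 0 for a
    # confidently correct prediction and near +/-1 for a confident mistake,
    # so misclassified examples dominate the update.
    p = sigmoid(X @ w)
    return (y - p)[:, None] * X

def gd_step(w, X, y, lr=0.1):
    # One gradient *ascent* step on the log-likelihood (equivalently,
    # gradient descent on the negative log-likelihood):
    #   w <- w + lr * sum_i (y_i - p_i) x_i
    return w + lr * per_example_gradients(w, X, y).sum(axis=0)
```

For instance, with w = (1, 0) and two positive examples x = (3, 0) and x = (−3, 0), the correctly classified first example has residual 1 − σ(3) ≈ 0.05, while the misclassified second example has residual 1 − σ(−3) ≈ 0.95 and therefore contributes a far larger share of the update.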