Question:

In this problem, we will walk through a single step of the gradient descent algorithm for logistic regression. Assume a two-dimensional input. Recap:
Model: $f(x; w, b) = \sigma(w^\top x + b)$
Cross-entropy loss: $L(y, \hat{y}) = -\left[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\right]$
Single update step: $\theta_{t+1} = \theta_t - \eta \nabla_\theta L(f(x; \theta), y)$, where $\theta = [w_1, w_2, b]^\top$
Now given:
Initial parameters: $w_1 = w_2 = b = 0$ (i.e., $\theta_0 = [0, 0, 0]^\top$)
Learning rate: $\eta = 0.1$
Data example: $x = [3, 2]^\top$, $y = 1$
(a) (4 pts) Compute the first gradient $\nabla_\theta L(f(x; \theta), y)$.
(b) (4 pts) Compute the updated parameter vector $\theta_1$ from the single update step.

Step-by-Step Solution

The solution involves three steps: a forward pass to compute the prediction $\hat{y}$, the gradient computation, and the parameter update.

Step 1: Forward pass. With $\theta_0 = [0, 0, 0]^\top$, the pre-activation is $z = w_1 x_1 + w_2 x_2 + b = 0 \cdot 3 + 0 \cdot 2 + 0 = 0$, so the prediction is $\hat{y} = \sigma(0) = 0.5$.

Step 2: Gradient. By the chain rule, $\frac{\partial L}{\partial \hat{y}} = -\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}$ and $\frac{\partial \hat{y}}{\partial z} = \hat{y}(1 - \hat{y})$, which combine to the well-known simplification $\frac{\partial L}{\partial z} = \hat{y} - y = 0.5 - 1 = -0.5$. Since $z = w_1 x_1 + w_2 x_2 + b$, the gradient with respect to $\theta = [w_1, w_2, b]^\top$ is
$\nabla_\theta L = (\hat{y} - y)\,[x_1, x_2, 1]^\top = -0.5 \cdot [3, 2, 1]^\top = [-1.5, -1.0, -0.5]^\top,$
which answers (a).

Step 3: Update. Applying the single update step with $\eta = 0.1$:
$\theta_1 = \theta_0 - \eta \nabla_\theta L = [0, 0, 0]^\top - 0.1 \cdot [-1.5, -1.0, -0.5]^\top = [0.15, 0.10, 0.05]^\top,$
which answers (b).
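
As a quick numerical check, here is a minimal NumPy sketch that reproduces both answers. The problem statement does not include any code, so the variable names (theta, eta, grad, theta_next) are illustrative choices; only the numbers and formulas come from the problem.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid: sigma(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

# Given data from the problem statement
x = np.array([3.0, 2.0])   # two-dimensional input
y = 1.0                    # label
theta = np.zeros(3)        # theta_0 = [w1, w2, b]
eta = 0.1                  # learning rate

# Step 1: forward pass, y_hat = sigma(w.x + b)
w, b = theta[:2], theta[2]
y_hat = sigmoid(w @ x + b)               # sigma(0) = 0.5

# Step 2: gradient of cross-entropy w.r.t. [w1, w2, b].
# For sigmoid + cross-entropy this simplifies to (y_hat - y) * [x1, x2, 1].
grad = (y_hat - y) * np.append(x, 1.0)   # [-1.5, -1.0, -0.5]

# Step 3: single gradient-descent update
theta_next = theta - eta * grad          # [0.15, 0.10, 0.05]

print("gradient:", grad)        # -> [-1.5 -1.  -0.5]
print("theta_1: ", theta_next)  # -> [0.15 0.1  0.05]
```

Running this prints the gradient $[-1.5, -1.0, -0.5]^\top$ and $\theta_1 = [0.15, 0.10, 0.05]^\top$, matching Steps 2 and 3 above.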
