Question: Implement the Gradient Descent method as a function:

function xstar = gradDescent(f, alpha, x1, tol)
We seek a local minimum of the anonymous function f given an initial guess x1. The objective function f is a scalar-valued function with vector input. The scalar alpha is the scale on the step size in the gradient direction. We will return the final iterate as our approximation xstar.
As a convention, please use the variable n to count the number of steps. This is important because if alpha is chosen too small, the algorithm may take a very long time to converge. If the algorithm exceeds the maximum number of iterations, stop and return a warning.
We will stop iterating once the norm of the gradient is smaller than the tolerance tol, OR once the maximum number of iterations has been reached. The second condition indicates that the method is slow for that particular example.
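A minimal MATLAB sketch of the function described above is given below. Since the excerpt does not show the maximum iteration count or how the gradient is obtained, both are assumptions here: the cap nmax is set to an illustrative value, and the gradient is approximated by a hypothetical central-difference helper numGrad (an analytic gradient could be substituted if available).

```matlab
function xstar = gradDescent(f, alpha, x1, tol)
% Gradient descent sketch: step against a finite-difference gradient
% until norm(gradient) < tol or an assumed iteration cap is reached.
    nmax = 10000;           % assumed iteration cap (not given in the excerpt)
    h = 1e-6;               % finite-difference step size (assumption)
    x = x1(:);              % work with a column vector
    for n = 1:nmax
        g = numGrad(f, x, h);
        if norm(g) < tol    % first stopping condition: small gradient
            break
        end
        x = x - alpha * g;  % step in the negative gradient direction
    end
    if n == nmax
        % second stopping condition: too many iterations
        warning('gradDescent:maxIter', ...
            'Iteration cap reached; the method is slow for this example.');
    end
    xstar = x;
end

function g = numGrad(f, x, h)
% Hypothetical helper: central-difference gradient of f at x.
    g = zeros(size(x));
    for i = 1:numel(x)
        e = zeros(size(x));
        e(i) = h;
        g(i) = (f(x + e) - f(x - e)) / (2 * h);
    end
end
```

For example, calling gradDescent(@(x) sum(x.^2), 0.1, [1; 2], 1e-8) should drive the iterate toward the minimizer at the origin, while a very small alpha would instead trigger the iteration-cap warning.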
