Question: Consider a one-layer non-linear neural network y = Sigmoid(Wx + b). If we use a large initialization of the parameter W, would this cause a vanishing gradient or an exploding gradient problem? Briefly discuss the reason.
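A large initialization of W causes a vanishing gradient rather than an exploding one. With large |W|, the pre-activation z = Wx + b is large in magnitude, which pushes the sigmoid into its saturated region. Since the sigmoid's derivative is σ'(z) = σ(z)(1 − σ(z)), which peaks at 0.25 at z = 0 and decays exponentially as |z| grows, the gradient dL/dW, which carries the factor σ'(z) by the chain rule, becomes close to zero. Below is a minimal numerical sketch of this saturation effect, assuming a scalar input and a few illustrative values for x, b, and W (not from the original solution):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z)),
    # which is at most 0.25 (at z = 0) and decays as |z| grows.
    s = sigmoid(z)
    return s * (1.0 - s)

x = 1.0  # hypothetical scalar input
b = 0.0  # hypothetical bias

for W in [0.5, 5.0, 50.0]:  # small vs. large initializations (illustrative)
    z = W * x + b
    # By the chain rule, dL/dW includes the factor sigmoid'(z) * x,
    # so a tiny sigmoid'(z) means a vanishing gradient for W.
    print(f"W = {W:5.1f}  ->  z = {z:5.1f},  sigmoid'(z) = {sigmoid_grad(z):.2e}")
```

Running this shows σ'(z) dropping from about 0.24 at W = 0.5 to roughly 1e-22 at W = 50: the large initialization saturates the unit and the gradient vanishes; it does not explode, because the sigmoid's derivative is bounded above by 0.25.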
