Question: Given below is an implementation of the sigmoid activation function s during the backward pass. Observe the input and output values for the function:
[Figure: (a) input values to the function, (b) output values of the function]
Note that the input values to the function are all at the extremes, where the sigmoid saturates, so the gradient values it produces are all near 0.
Now explain why the values in (b) are not good for learning, and how that problem is overcome by ReLU.
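
The implementation referred to in the question is not reproduced above, so the following is a minimal sketch of what such a backward pass typically looks like, assuming a NumPy-style implementation in which the forward pass caches its output s. The function names (sigmoid_forward, sigmoid_backward, relu_backward) and the sample inputs are illustrative, not taken from the question:

```python
import numpy as np

def sigmoid_forward(x):
    """Forward pass: s = 1 / (1 + exp(-x)); cache s for the backward pass."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s, s  # (output, cache)

def sigmoid_backward(dout, cache):
    """Backward pass: dL/dx = dout * s * (1 - s).
    The local gradient s * (1 - s) peaks at 0.25 (when s = 0.5) and
    approaches 0 as s approaches 0 or 1, so saturated units pass
    almost no gradient back."""
    s = cache
    return dout * s * (1.0 - s)

def relu_backward(dout, cache):
    """Backward pass for ReLU: dL/dx = dout where x > 0, else 0.
    For positive inputs the local gradient is exactly 1, so the
    upstream gradient flows through undiminished (no saturation)."""
    x = cache
    return dout * (x > 0)

# Inputs at the extremes: the sigmoid outputs saturate near 0 and 1.
x = np.array([-10.0, -8.0, 8.0, 10.0])
s, cache = sigmoid_forward(x)
dout = np.ones_like(x)  # assume an upstream gradient of 1 everywhere

print(s)                              # ~[0.0000, 0.0003, 0.9997, 1.0000]
print(sigmoid_backward(dout, cache))  # ~[4.5e-05, 3.4e-04, 3.4e-04, 4.5e-05]
print(relu_backward(dout, x))         # [0., 0., 1., 1.]
```

Because the sigmoid's local gradient s(1 - s) is at most 0.25 and vanishes as s approaches 0 or 1, saturated units pass back almost no signal, so the weights feeding into them barely update. ReLU's local gradient is exactly 1 for any positive input, so the upstream gradient flows through unchanged for active units, which is how it avoids this vanishing-gradient problem (at the cost that units with negative inputs receive zero gradient).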
