Question 6. [Neural Networks] (5 pts)

a. (1 pt) True or False: For any neural network, the validation loss will always decrease monotonically with the number of iterations of gradient descent, provided the step size is sufficiently small.
True
False
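For reference, here is a minimal sketch of the setting part (a) describes: full-batch gradient descent with a small fixed step size, tracking both the training and validation losses at each iteration. The synthetic data, logistic-regression model, train/validation split, step size, and iteration count are all illustrative assumptions, not part of the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (illustrative assumption).
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) + 0.5 * rng.normal(size=200) > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

w = np.zeros(X.shape[1])
step_size = 0.1  # "sufficiently small" is the premise being examined

for t in range(500):
    # Full-batch gradient of the logistic loss on the training split.
    grad = X_tr.T @ (sigmoid(X_tr @ w) - y_tr) / len(y_tr)
    w -= step_size * grad
    if t % 100 == 0:
        print(t, log_loss(w, X_tr, y_tr), log_loss(w, X_va, y_va))
```

Plotting or printing the two loss curves over iterations is the natural way to examine the claim in part (a) empirically.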
b. Let $f$ be a fully-connected neural network with input $x \in \mathbb{R}^M$, $P$ hidden layers with $K$ nodes per layer and logistic activation functions, and a single logistic output. Let $g$ be the same network as $f$, except that we insert another hidden layer with $K$ nodes that have no activation function (or, equivalently, the identity activation function), so that $g$ has $P + 1$ hidden layers. Denote this new layer $L_{\text{new}}$. Assume that there are no bias terms for any layer, nor for the input. Please select one option for each of the following questions.
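To make the construction concrete, the sketch below builds $f$ and $g$ in PyTorch under illustrative assumptions: $M = 4$, $K = 8$, $P = 3$ (the question leaves these symbolic), `bias=False` on every `Linear` layer to match the no-bias assumption, and $L_{\text{new}}$ realized as a `Linear(K, K)` with no activation, inserted between two hidden layers.

```python
import torch.nn as nn

M, K, P = 4, 8, 3  # illustrative sizes; the question leaves these symbolic

def make_f():
    # P hidden layers of K logistic units, then a single logistic output.
    layers = [nn.Linear(M, K, bias=False), nn.Sigmoid()]
    for _ in range(P - 1):
        layers += [nn.Linear(K, K, bias=False), nn.Sigmoid()]
    layers += [nn.Linear(K, 1, bias=False), nn.Sigmoid()]
    return nn.Sequential(*layers)

def make_g(insert_at=2):
    # Same as f, plus one K-node layer with no activation (L_new),
    # inserted between hidden layer `insert_at` and the next one.
    layers = list(make_f())
    layers.insert(2 * insert_at, nn.Linear(K, K, bias=False))
    return nn.Sequential(*layers)

f, g = make_f(), make_g()
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(f), count(g), count(g) - count(f))
```

Counting parameters on both models gives a direct way to check part (ii) for any concrete choice of $M$, $K$, and $P$.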
i. (1 pt) $f$ can learn the same decision boundary as $g$ if the additional linear layer is placed:
immediately after the input.
immediately before the last sigmoid activation function.
anywhere in between the above two choices.
none of the above.
ii. (1 pt) Assume that $L_{\text{new}}$ is placed in between two other hidden layers in $g$. How many more parameters does $g$ learn compared to $f$?
$K$
$K^2$
$KP$
$KM$
$2K^2$
iii. (1 pt) True or False: After training $f$ and $g$ to convergence, $g$ can have a lower training loss than $f$. (Use the same assumption as in ii.)
True
False
iv. (1 pt) True or False: After training $f$ and $g$ to convergence, $f$ can have a lower training loss than $g$. (Use the same assumption as in ii.)
True
False
