Question: An MLP has two input nodes, one hidden layer, and two outputs. Recall that the output of layer $l$ is given by $a^{(l)} = h_l\left(W_l\, a^{(l-1)} + b_l\right)$. The two sets of weights and biases are given by:

$$W_1 = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix}, \quad b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 0 \\ 4 \end{bmatrix}$$

The non-linear activation for the hidden layer is ReLU (rectified linear unit), that is, $h(x) = \max(x, 0)$. The output layer is linear (i.e., identity activation function). What is the output activation for the input $x = [+1, -1]^T$?
Step by Step Solution
Step 1: Compute the hidden pre-activation: $z^{(1)} = W_1 x + b_1 = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}\begin{bmatrix} +1 \\ -1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ -2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}$.

Step 2: Apply ReLU: both components of $z^{(1)}$ are negative, so $a^{(1)} = \max(z^{(1)}, 0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$; both hidden units are inactive.

Step 3: Compute the linear output: $a^{(2)} = W_2 a^{(1)} + b_2 = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 4 \end{bmatrix}$.

The output activation is therefore $a^{(2)} = [0, 4]^T$.
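As a sanity check, here is a minimal NumPy sketch of the forward pass. The matrices use the row-major reading of the flattened values in the question (an assumption about the original layout); note that the final answer $[0, 4]^T$ is unchanged under a column-major reading, since ReLU zeroes out the hidden layer either way.

```python
import numpy as np

# Weights and biases as reconstructed from the question
# (assumed row-major reading of the flattened matrices).
W1 = np.array([[1.0, 3.0],
               [2.0, 4.0]])
b1 = np.array([1.0, 0.0])
W2 = np.array([[2.0, 2.0],
               [2.0, 3.0]])
b2 = np.array([0.0, 4.0])

def relu(z):
    """Rectified linear unit: elementwise max(z, 0)."""
    return np.maximum(z, 0.0)

def forward(x):
    """Forward pass: ReLU hidden layer, linear (identity) output layer."""
    a1 = relu(W1 @ x + b1)   # hidden activation: relu([-1, -2]) = [0, 0]
    a2 = W2 @ a1 + b2        # output activation (no nonlinearity)
    return a2

x = np.array([1.0, -1.0])    # input x = [+1, -1]^T
print(forward(x))            # -> [0. 4.]
```

Because the ReLU drives both hidden units to zero for this input, the output collapses to $b_2$, which is why the answer is insensitive to the exact arrangement of the weight matrices.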
