Question: An MLP has two input nodes, one hidden layer, and two outputs. Recall that the output for layer $l$ is given by $a^{(l)} = h_l\left(W_l\, a^{(l-1)} + b_l\right)$. The two sets of weights and biases are given by

$$W_1 = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}, \qquad W_2 = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix}, \qquad b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad b_2 = \begin{bmatrix} 0 \\ 4 \end{bmatrix}.$$

The non-linear activation for the hidden layer is ReLU (rectified linear unit), that is, $h(x) = \max(x, 0)$. The output layer is linear (i.e., identity activation function). What is the output activation for input $x = [+1, -1]^\top$?

Step by Step Solution

There are 3 steps involved:

Step 1: Hidden-layer pre-activation. Compute $z^{(1)} = W_1 x + b_1$:
$$z^{(1)} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} +1 \\ -1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 - 3 + 1 \\ 2 - 4 + 0 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}.$$

Step 2: Apply the ReLU activation. Since both pre-activations are negative, both hidden units are clipped to zero:
$$a^{(1)} = \max\left(z^{(1)}, 0\right) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Step 3: Linear output layer. With the identity activation, $a^{(2)} = W_2 a^{(1)} + b_2$. Because the hidden activations are zero, the output reduces to the output bias:
$$a^{(2)} = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 4 \end{bmatrix}.$$

The output activation is $a^{(2)} = [0, 4]^\top$.
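
As a quick numerical check, here is a minimal NumPy sketch of the forward pass. It assumes the row-major reading of the weight matrices and the input $x = [+1, -1]^\top$ used above; the variable names are illustrative, not from the original question.

```python
import numpy as np

# Weights and biases as given in the question (row-major reading).
W1 = np.array([[1.0, 3.0],
               [2.0, 4.0]])
b1 = np.array([1.0, 0.0])
W2 = np.array([[2.0, 2.0],
               [2.0, 3.0]])
b2 = np.array([0.0, 4.0])

x = np.array([1.0, -1.0])  # input x = [+1, -1]^T

# Hidden layer: a1 = ReLU(W1 @ x + b1)
z1 = W1 @ x + b1
a1 = np.maximum(z1, 0.0)   # ReLU clips both negative pre-activations to zero

# Output layer: linear (identity activation)
a2 = W2 @ a1 + b2

print("z1 =", z1)  # [-1. -2.]
print("a1 =", a1)  # [0. 0.]
print("a2 =", a2)  # [0. 4.]
```

Running this prints $a^{(2)} = [0, 4]^\top$, matching the hand calculation: because ReLU zeroes both hidden units, the network's output is exactly the output bias $b_2$.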
