Question:

1. Derive a gradient descent based learning rule for a perceptron that uses tanh as the activation function. Assume that the perceptron has D inputs and a bias term, and that we have N training examples (Xi, Yi), i = 1, 2, ..., N, to learn the weights of the perceptron. Please note that the derivative of tanh(x) is 1 - tanh^2(x).
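A minimal derivation sketch for part 1, assuming a squared-error loss over the N training examples (the question does not fix the loss, so that choice is an assumption), using the tanh derivative given above:

\begin{align*}
\hat{y}_i &= \tanh\Big(\sum_{j=1}^{D} w_j x_{ij} + b\Big), &
E &= \frac{1}{2}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2 \\
\frac{\partial E}{\partial w_j}
  &= -\sum_{i=1}^{N} (y_i - \hat{y}_i)\big(1 - \tanh^2(\cdot)\big)\, x_{ij}
   = -\sum_{i=1}^{N} (y_i - \hat{y}_i)(1 - \hat{y}_i^2)\, x_{ij} \\
\frac{\partial E}{\partial b}
  &= -\sum_{i=1}^{N} (y_i - \hat{y}_i)(1 - \hat{y}_i^2) \\
w_j &\leftarrow w_j + \eta \sum_{i=1}^{N} (y_i - \hat{y}_i)(1 - \hat{y}_i^2)\, x_{ij}, &
b &\leftarrow b + \eta \sum_{i=1}^{N} (y_i - \hat{y}_i)(1 - \hat{y}_i^2)
\end{align*}

For stochastic (per-example) gradient descent, the sums over i are dropped and the weights are updated after each training example (Xi, Yi).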
2. Show the mathematical working of an Artificial Neural Network by taking the case in the figure below. The first two columns are the input values for X1 and X2, and the third column is the desired output. Show only 3 iterations.
Learning rate = 0.2
Threshold = 0.5
Actual output: Y = W1*X1 + W2*X2
Next weight: Wn(new) = Wn + ΔWn
Change in weight: ΔWn = learning rate * (desired output - actual output) * Xn
Show two complete iterations for reaching the desired output.
Table columns: X1 | X2 | W1 | W2 | D (desired output) | Y (actual output) | W1 (new) | W2 (new)
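For part 2, here is a short Python sketch of the iteration procedure as stated above. Because the figure with the real X1, X2 and desired-output rows is not reproduced here, the sample_rows and initial weights below are purely hypothetical placeholders, and the optional thresholding of the weighted sum (use_threshold) is only an assumption about how Threshold = 0.5 might be applied:

# Sketch of the stated update rule: Y = W1*X1 + W2*X2 and
# dWn = learning_rate * (D - Y) * Xn.  The training rows are HYPOTHETICAL
# placeholders, since the figure with the real data is not shown here.

LEARNING_RATE = 0.2
THRESHOLD = 0.5  # given in the question; see actual_output() below

def actual_output(w1, w2, x1, x2, use_threshold=False):
    """Weighted sum as stated in the question; optionally binarised.

    use_threshold=True is an assumption about how Threshold = 0.5 is meant
    to be used (output 1 if the weighted sum exceeds it, else 0)."""
    net = w1 * x1 + w2 * x2
    if use_threshold:
        return 1.0 if net > THRESHOLD else 0.0
    return net

def train(rows, w1=0.0, w2=0.0, iterations=3):
    """rows: list of (x1, x2, desired) tuples.
    Prints a table in the column order X1 X2 W1 W2 D Y W1(new) W2(new)."""
    print("X1  X2  W1      W2      D    Y       W1new   W2new")
    for _ in range(iterations):
        for x1, x2, d in rows:
            y = actual_output(w1, w2, x1, x2)
            dw1 = LEARNING_RATE * (d - y) * x1   # change in weight for W1
            dw2 = LEARNING_RATE * (d - y) * x2   # change in weight for W2
            new_w1, new_w2 = w1 + dw1, w2 + dw2  # next weight = Wn + dWn
            print(f"{x1:<3} {x2:<3} {w1:<7.3f} {w2:<7.3f} {d:<4} {y:<7.3f} "
                  f"{new_w1:<7.3f} {new_w2:<7.3f}")
            w1, w2 = new_w1, new_w2
    return w1, w2

if __name__ == "__main__":
    # Hypothetical rows standing in for the missing figure (NOT the real data).
    sample_rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
    train(sample_rows, w1=0.3, w2=-0.1, iterations=3)

Each printed line follows the same column order as the table header above: X1, X2, the current W1 and W2, desired output D, actual output Y, and the adjusted weights.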