Question: Use a fully connected two-layer neural network to classify the same dataset of HW#1 (for both questions). The first layer (input layer) has four neurons and the second layer (output layer) has only one neuron. Use the sigmoid activation function and the same learning rule that was discussed in class. Print the input, weights (Ws), and output values after training. Also explain your experience with this network compared with the one you built in HW#1.
*** HW1 :
- Use Jupyter to train a simple perceptron model to classify the patterns of a NAND function. To begin training, select your initial weights randomly between -2 and 2.
| X1 | X2 | Target |
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
Extra Points- 2% : Make an animation movie to show the movement of your classification line from beginning to final position.
- Use Jupyter to train a simple perceptron model to classify a set of randomly generated patterns. Generate two random data sets, each with 50 data patterns, in two dimensions x1 and x2. One data set has 0 <= x1 <= 3 and 0 <= x2 <= 3 with a target of 0, and the second data set has 6 <= x1 <= 9 and 6 <= x2 <= 9 with a target of 1. To begin training, select your initial weights randomly between -2 and 2.
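The second HW#1 question above has no code listed in this post; a minimal sketch of it might look as follows, assuming the standard perceptron update rule (the seed, learning rate, and epoch count are illustrative choices, not part of the assignment):

```python
import numpy as np

np.random.seed(1)

# 50 patterns in [0,3]x[0,3] with target 0, 50 in [6,9]x[6,9] with target 1
A = np.random.uniform(0, 3, size=(50, 2))
B = np.random.uniform(6, 9, size=(50, 2))
X = np.vstack([A, B])
y = np.concatenate([np.zeros(50), np.ones(50)])

W = np.random.uniform(-2, 2, size=2)  # initial weights between -2 and 2, per the assignment
b = np.random.uniform(-2, 2)
lr = 0.1

for epoch in range(100):
    for xi, ti in zip(X, y):
        pred = 1.0 if xi @ W + b > 0 else 0.0  # threshold activation
        err = ti - pred
        W += lr * err * xi                     # perceptron update rule
        b += lr * err
    if np.all(((X @ W + b) > 0).astype(float) == y):
        break  # all 100 patterns classified correctly

print("W:", W, "b:", b)
```

Because the two clusters are separated by a wide gap (3 <= x < 6 contains no points), the perceptron converges in very few epochs.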
***** HW1 code :
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

NUM_FEATURES = 2
Max_Iter = 100
learning_rate = 0.7

x = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])  # training input features
y = np.array([1, 1, 1, 0])                      # training labels for the NAND function

W = np.random.uniform(-2, 2, size=2)  # initial weights drawn randomly between -2 and 2
b = np.random.rand(1)

N, d = np.shape(x)  # number of samples and number of features

for k in range(Max_Iter):
    for j in range(N):
        yHat_j = x[j, :].dot(W) + b
        yHat_j = 1.0 / (1.0 + np.exp(-yHat_j))  # sigmoid activation
        err = y[j] - yHat_j
        deltaW = err * x[j, :]
        deltaB = err
        W = W + learning_rate * deltaW
        b = b + learning_rate * deltaB
        # print('W:' + str(W))  # print weights and threshold after each update
        # print('b:' + str(b))

    # Plot the fitted line; two points are enough to draw it
    plot_x = np.array([np.min(x[:, 0]) - 0.2, np.max(x[:, 0]) + 0.2])
    plot_y = -1 / W[1] * (W[0] * plot_x + b)  # from w0*x + w1*y + b = 0, so y = (-1/w1)(w0*x + b)
    plt.scatter(x[:, 0], x[:, 1])
    plt.xlim([-0.2, 1.2])
    plt.ylim([-0.2, 1.25])
    plt.plot(plot_x, plot_y, color='k', linewidth=2)
    plt.show()

    activation = x.dot(W) + b
    loss = (activation > 0) != y
    if np.sum(loss) == 0:  # stop once every pattern is classified correctly
        break

print("Final values of weights & threshold:")
print('W:' + str(W))
print('b:' + str(b))

# testing
activation = x.dot(W) + b
print("Testing:")
print("Input \t: Output")
for i in range(4):
    if activation[i] > 0.0:
        print(x[i], " : ", 1)
    else:
        print(x[i], " : ", 0)
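For the main question, one possible sketch of the fully connected two-layer network follows, assuming the "four neurons" of the first layer form a sigmoid layer fed by the two inputs x1 and x2, and that the learning rule is gradient descent on the squared error with backpropagation (the seed, learning rate, and epoch count are illustrative, not prescribed by the assignment):

```python
import numpy as np

np.random.seed(0)

# NAND training data (HW#1, question 1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[1], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.7
W1 = np.random.uniform(-2, 2, size=(2, 4))  # weights into the four first-layer neurons
b1 = np.random.uniform(-2, 2, size=(1, 4))
W2 = np.random.uniform(-2, 2, size=(4, 1))  # weights into the single output neuron
b2 = np.random.uniform(-2, 2, size=(1, 1))

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: squared-error gradient, sigmoid derivative s*(1-s)
    d_out = (y - out) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out
    b2 += lr * d_out.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ d_h
    b1 += lr * d_h.sum(axis=0, keepdims=True)

# print the input, Ws, and output values after training, as the question asks
print("Inputs:\n", X)
print("W1:\n", W1, "\nb1:", b1)
print("W2:\n", W2, "\nb2:", b2)
print("Outputs after training:\n", out.round(3))
```

Compared with the single perceptron of HW#1, this network needs more updates per pattern (two weight matrices instead of one vector) and its outputs approach 0 and 1 asymptotically rather than switching at a hard threshold, but it learns the same linearly separable NAND boundary.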