Question:
You must hand in both your Python code and your output to receive credit for all questions in this assignment.

Questions
1. Complete the program in the file grad_descent_shell.py in order to implement the gradient descent algorithm for the linear neural network. To compute the gradient of the error function, you may use either the column-by-column form
\nabla E = \sum_{k=1}^{N} (W A_k - Y_k) A_k^T

where N is the number of training pairs, Y_k is the kth output training vector, and A_k represents a column in the matrix A discussed in class, or the matrix form

\nabla E = (W A - Y) A^T
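The two forms are interchangeable: summing the per-column outer products is exactly the matrix product. A quick NumPy check (illustrative sizes only, not the assignment's values) confirming that the column-sum and matrix forms give the same gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 3, 2, 5                      # illustrative dimensions
x = rng.normal(size=(n, N))
y = rng.normal(size=(k, N))
a = np.concatenate((x, np.ones((1, N))), axis=0)   # inputs with a bias row
w = rng.normal(size=(k, n + 1))

# column-by-column sum: sum over k of (W A_k - Y_k) A_k^T
grad_sum = np.zeros_like(w)
for j in range(N):
    avec = a[:, j:j + 1]               # keep as an (n+1, 1) column vector
    grad_sum += np.dot(np.dot(w, avec) - y[:, j:j + 1], avec.T)

# matrix form: (W A - Y) A^T
grad_mat = np.dot(np.dot(w, a) - y, a.T)

print(np.allclose(grad_sum, grad_mat))   # prints True
```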
Complete the program by filling in Python code at the specified points within the file.
a) Upload your working Python script file.
b) Make sure to output your training matrices X and Y, or you will not receive credit for this assignment.
c) Upload an image of your mserror plot in .png or .jpg format.
d) Make sure to output your weight matrices W for both pinv and linearnn, along with the sum of the absolute differences between them: w_sum_abs_diff.
# Complete the program below by filling in Python code
# at the specified points.
#
# To be handed in to Canvas assignments:
# A) Upload your Python script file.
# B) Upload an image of your MSE plot in .png or .jpg format.
# C) Make sure to show your weight matrices for both pinv and linearnn,
#    along with the sum of absolute differences: w_sum_abs_diff.
import numpy as np
import matplotlib.pyplot as plt

def linearnn(x, y, nits, eta):
    # initialize dimensions
    xshape = x.shape
    yshape = y.shape
    n = xshape[0]    # input dimension
    N = xshape[1]    # number of training pairs
    k = yshape[0]    # output dimension
    # build the input data matrix (inputs augmented with a bias row of ones)
    z = np.ones((1, N))
    a = np.concatenate((x, z), axis=0)
    # initialize random weights
    w = np.random.normal(size=(k, n + 1))
    # initialize mserror storage
    mserror = np.zeros(nits)
    #
    # train the network using gradient descent
    for itcount in range(nits):
        # --- Section of code to update network weights ---
        # Compute the gradient of the mean squared error
        # and store it in a variable named 'gradient_of_mse'.
        # Given the variable 'gradient_of_mse_norm', update the
        # network weight matrix w using the gradient descent formula
        # given in class.
        gradient_of_mse_norm = np.sqrt(np.sum(gradient_of_mse * gradient_of_mse))
        # calculate MSE
        etemp = 0.0
        for trcount in range(N):
            # get the f(X_k) - Y_k column vector
            avec = np.concatenate((x[:, trcount:trcount + 1], np.ones((1, 1))), axis=0)
            vk = np.dot(w, avec) - y[:, trcount:trcount + 1]
            # accumulate the squared error
            etemp = etemp + np.dot(vk.transpose(), vk)[0, 0]
        mserror[itcount] = etemp / N
    # test the output
    netout = np.dot(w, a)
    return netout, mserror, w
#
# 'main' starts here
# set up the training set
# (the dimension values in the original shell did not survive extraction;
#  the values below are placeholders -- use the ones assigned in class)
n = 4     # input dimension
k = 2     # output dimension
N = 100   # number of training pairs
x = np.random.normal(size=(n, N))
y = np.random.normal(size=(k, N))
# compute the closed-form solution
z = np.ones((1, N))
a = np.concatenate((x, z), axis=0)
wpinv = np.dot(y, np.linalg.pinv(a))
print("Weight matrix computed using the pseudoinverse:")
print(wpinv)
print()
# --- Section of code to call the gradient descent function to compute the weight matrix ---
# You must input a value for the number of iterations, nits.
# You must input a value for the learning rate, eta.
# You should run numerical tests with a few different values.
netout, mserror, w = linearnn(x, y, nits, eta)
print("Weight matrix computed via gradient descent:")
print(w)
print()
# Compute the sum of the absolute value of the
# difference between the weight matrices
# from pinv and gradient descent
w_sum_abs_diff = np.sum(np.absolute(wpinv - w))
print("Sum of absolute value of the difference between weight matrices:")
print(w_sum_abs_diff)
# plot the mean squared error
plt.plot(mserror)
plt.title('Mean Squared Error versus Iteration')
plt.xlabel('Iteration #')
plt.ylabel('MSE')
# plt.show()
plt.savefig('plot.png')
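For reference, one plausible completion of the blank weight-update section can be sketched as below. This is an assumption, not necessarily the exact formula given in class: it uses the matrix-form gradient (W A - Y) A^T with the shell's normalized step (any constant scaling such as 2/N is absorbed by the normalization), and the sizes and hyperparameters are illustrative. With enough iterations the gradient-descent weights should end up much closer to the pseudoinverse solution than the random initial weights were:

```python
import numpy as np

# illustrative sizes and hyperparameters (not the assignment's actual values)
rng = np.random.default_rng(1)
n, k, N = 3, 2, 50
x = rng.normal(size=(n, N))
y = rng.normal(size=(k, N))
a = np.concatenate((x, np.ones((1, N))), axis=0)   # inputs plus bias row

wpinv = np.dot(y, np.linalg.pinv(a))               # closed-form least-squares weights

w = rng.normal(size=(k, n + 1))                    # random initial weights
dist_before = np.sum(np.absolute(wpinv - w))       # starting distance from the closed form

nits, eta = 5000, 0.05
for _ in range(nits):
    # matrix-form gradient of the squared error: (W A - Y) A^T
    gradient_of_mse = np.dot(np.dot(w, a) - y, a.T)
    gradient_of_mse_norm = np.sqrt(np.sum(gradient_of_mse * gradient_of_mse))
    # normalized gradient-descent step, matching the shell's use of the norm
    w = w - eta * gradient_of_mse / gradient_of_mse_norm

w_sum_abs_diff = np.sum(np.absolute(wpinv - w))
print(w_sum_abs_diff, "<", dist_before)            # the gap shrinks substantially
```

Note that with a fixed, normalized step the iterates settle into a small neighborhood of the minimizer (radius on the order of eta) rather than converging exactly, which is why running "numerical tests with a few different values" of eta and nits, as the shell instructs, matters.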