Question: use python jupyter lab:

Q3. Parameters Tuning
For the dataset in X, find good parameters that can achieve a very small loss.
For example, you could get a loss
Use the original gradient descent without early stopping for this question.
Run the gradient descent function for various settings:
use eta from this set,
use n_epochs from this set.
import sys
from packaging import version
import sklearn
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import add_dummy_feature
from sklearn import metrics  # Import scikit-learn metrics module for accuracy calculation
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.datasets import make_blobs

print("Python version:", sys.version_info)
print("Sklearn package:", sklearn.__version__)
assert sys.version_info >= (3, 7)
# assert version.parse(sklearn.__version__) >= version.parse("1.0.1")

plt.rc('font', size=14)
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
##
def f(x, w):
    return (x.T @ w).item()

def J_vectorized(X, y, w):
    # Mean squared error of the linear model w on (X, y)
    m = X.shape[0]
    V = X @ w - y
    sum_squared_error = V.T @ V
    return (sum_squared_error / m).item()

def J_delta_vectorized(X, y, w):
    # Gradient of the loss (the constant factor from differentiating
    # the squared error is absorbed into the learning rate eta)
    m = X.shape[0]
    sum_w = X.T @ (X @ w - y)
    return sum_w / m

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    theta_path = [theta]
    for epoch in range(n_epochs):
        gradients = J_delta_vectorized(X, y, theta)
        theta = theta - eta * gradients
        theta_path.append(theta)
    return theta_path
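As a quick sanity check, the functions above can be exercised end to end on a tiny made-up dataset (the assignment's actual `X` and `y` are not shown here, so the data, shapes, and hyperparameter values below are illustrative assumptions only):

```python
import numpy as np

def J_vectorized(X, y, w):
    # Mean squared error, as defined above
    m = X.shape[0]
    V = X @ w - y
    return ((V.T @ V) / m).item()

def J_delta_vectorized(X, y, w):
    # Gradient of the loss (constant factor absorbed into eta)
    m = X.shape[0]
    return X.T @ (X @ w - y) / m

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    theta_path = [theta]
    for epoch in range(n_epochs):
        theta = theta - eta * J_delta_vectorized(X, y, theta)
        theta_path.append(theta)
    return theta_path

# Made-up data: a bias column plus one standard-normal feature,
# with a noise-free linear target so the loss can go to ~0.
rng = np.random.default_rng(0)
X = np.c_[np.ones((50, 1)), rng.standard_normal((50, 1))]
true_w = np.array([[2.0], [3.0]])
y = X @ true_w

theta0 = rng.standard_normal((2, 1))
path = simple_gradient_vectorized(X, y, theta0, n_epochs=500, eta=0.5)
print(J_vectorized(X, y, path[0]), "->", J_vectorized(X, y, path[-1]))
```

On this well-conditioned toy problem the final loss should be many orders of magnitude below the initial one, confirming that the update rule and gradient agree.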
##
eta = 0.1        # learning rate (placeholder: value not shown in the original)
n_epochs = 1000  # placeholder: value not shown in the original
m = len(X)       # number of instances
print("X.shape:", X.shape)
np.random.seed(42)  # seed value assumed; not shown in the original
theta = np.random.randn(X.shape[1], 1)  # randomly initialized model parameters
print("Loss at initial theta:", J_vectorized(X, y, theta))
theta_path = simple_gradient_vectorized(X, y, theta, n_epochs, eta)
print("Loss at final theta:", J_vectorized(X, y, theta_path[-1]))
##
np.random.seed(42)  # seed value assumed; not shown in the original
theta = np.random.randn(X.shape[1], 1)  # randomly initialized model parameters
best_theta = theta
best_eta = None
best_n_epochs = None
best_loss = J_vectorized(X, y, theta)
## Your code here...
## Iterate over eta and n_epochs and update the best values
## whenever the loss improves (gets smaller than best_loss)
print("Best Loss:", best_loss)
print("Best theta:", best_theta.ravel())
print("Best eta:", best_eta)
print("Best n_epochs:", best_n_epochs)
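One way the tuning loop above can be filled in is a plain grid search over the two hyperparameters. The sketch below is self-contained, so it repeats the loss and gradient-descent functions; the dataset and the candidate sets `etas` and `n_epochs_list` are made up, since the assignment's actual values are not shown:

```python
import numpy as np

def J_vectorized(X, y, w):
    m = X.shape[0]
    V = X @ w - y
    return ((V.T @ V) / m).item()

def J_delta_vectorized(X, y, w):
    m = X.shape[0]
    return X.T @ (X @ w - y) / m

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    theta_path = [theta]
    for epoch in range(n_epochs):
        theta = theta - eta * J_delta_vectorized(X, y, theta)
        theta_path.append(theta)
    return theta_path

# Made-up stand-ins for the assignment's data and candidate sets
rng = np.random.default_rng(0)
X = np.c_[np.ones((50, 1)), rng.standard_normal((50, 1))]
y = X @ np.array([[2.0], [3.0]])
etas = [0.01, 0.1, 0.5]
n_epochs_list = [100, 500, 1000]

theta0 = rng.standard_normal((2, 1))
best_theta, best_eta, best_n_epochs = theta0, None, None
best_loss = J_vectorized(X, y, theta0)

# Start every run from the same theta0 so settings are comparable
for eta in etas:
    for n_epochs in n_epochs_list:
        path = simple_gradient_vectorized(X, y, theta0, n_epochs, eta)
        loss = J_vectorized(X, y, path[-1])
        if loss < best_loss:  # keep the setting with the smallest loss
            best_theta, best_eta, best_n_epochs, best_loss = \
                path[-1], eta, n_epochs, loss

print("Best Loss:", best_loss)
print("Best eta:", best_eta, "Best n_epochs:", best_n_epochs)
```

Restarting from the same initial `theta0` for every `(eta, n_epochs)` pair keeps the comparison fair; only the hyperparameters vary between runs.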
##
# Using the sklearn package.
# This code is provided to give you an idea of how small the loss can be.
# You should not aim to get the exact number, but you should get a close one.
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression(fit_intercept=False)
lin_reg.fit(X, y)
best_theta = lin_reg.coef_.reshape(-1, 1)
print(J_vectorized(X, y, best_theta))
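`LinearRegression(fit_intercept=False)` computes the ordinary least-squares solution directly, which is why its loss is the benchmark gradient descent should approach. The same reference coefficients can be obtained with `np.linalg.lstsq`, as this sketch on made-up data shows (the dataset here is illustrative, not the assignment's `X` and `y`):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: bias column plus two features, slightly noisy target
rng = np.random.default_rng(0)
X = np.c_[np.ones((50, 1)), rng.standard_normal((50, 2))]
y = X @ np.array([[1.0], [2.0], [-3.0]]) + 0.01 * rng.standard_normal((50, 1))

# Closed-form least squares via scikit-learn
lin_reg = LinearRegression(fit_intercept=False)
lin_reg.fit(X, y)
w_sklearn = lin_reg.coef_.reshape(-1, 1)

# The same least-squares solution via NumPy
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_sklearn, w_lstsq))
```

Both routes minimize the same squared error, so the coefficients agree to numerical precision; no learning rate or epoch count is involved.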