Question:

def grad(beta, b, xTr, yTr, xTe, yTe, C, kerneltype, kpar=1):
    """
    INPUT:
    beta       : n-dimensional vector that stores the linear combination coefficients
    xTr        : nxd dimensional matrix (training set); each row is an input vector
    yTr        : n-dimensional vector of training labels; each entry is a label
    b          : scalar bias
    xTe        : mxd dimensional matrix (test set); each row is an input vector
    yTe        : m-dimensional vector of test labels; each entry is a label
    C          : scalar constant that controls the trade-off between the l2-regularizer and the hinge loss
    kerneltype : either 'linear', 'polynomial', or 'rbf'
    kpar       : kernel parameter (the inverse kernel width gamma for 'rbf', the degree for 'polynomial')

    OUTPUTS:
    betagrad   : n-dimensional vector, the gradient of the hinge loss with respect to the coefficients beta
    bgrad      : scalar, the gradient of the hinge loss with respect to the bias b
    """
    n, d = xTr.shape
    betagrad = np.zeros(n)
    bgrad = np.zeros(1)

    # compute the kernel values between xTr and xTr
    kerneltrain = computeK(kerneltype, xTr, xTr, kpar)
    # compute the kernel values between xTe and xTr
    kerneltest = computeK(kerneltype, xTe, xTr, kpar)

    # YOUR CODE HERE
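A hedged sketch of what could go under "# YOUR CODE HERE". The question does not spell out the exact objective, so this assumes the commonly used kernelized squared-hinge objective
loss(beta, b) = beta^T K beta + C * sum_i max(1 - y_i * (K beta + b)_i, 0)^2,
where K is the train-train kernel matrix (kerneltrain above). Under that assumption the gradients are
d loss / d beta = 2 K beta - 2 C K (s * y) and d loss / d b = -2 C sum_i s_i y_i, with slack s_i = max(1 - y_i (K beta + b)_i, 0).
The standalone helper name hinge_grad below is illustrative and not part of the assignment; inside grad you would apply the same formulas directly to kerneltrain, yTr, beta, b, and C and assign the results to betagrad and bgrad. If your course's loss omits the regularizer from these outputs or uses the unsquared hinge, the corresponding terms change accordingly.

import numpy as np

def hinge_grad(beta, b, K, y, C):
    # margins y_i * (K beta + b)_i for every training point
    margins = y * (K @ beta + b)
    # squared-hinge slack s_i = max(1 - margin_i, 0); only violated points contribute
    s = np.maximum(1.0 - margins, 0.0)
    # regularizer term: d/dbeta [beta^T K beta] = 2 K beta (K is symmetric)
    # hinge term:       d/dbeta [C sum_i s_i^2] = -2 C K (s * y)
    betagrad = 2.0 * (K @ beta) - 2.0 * C * (K @ (s * y))
    # bias gets only the hinge term: d/db [C sum_i s_i^2] = -2 C sum_i s_i y_i
    bgrad = -2.0 * C * np.sum(s * y)
    return betagrad, bgrad

A quick self-contained check with a linear kernel (K = X X^T), so the sketch runs without the course's computeK helper:

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
K = X @ X.T
betagrad, bgrad = hinge_grad(np.zeros(5), 0.0, K, y, C=10.0)
print(betagrad.shape, bgrad)   # (5,) and a scalar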