Question:

import numpy as np

def svm_loss(X, y, Weight, reg_para, batch_size, num_classes):
    # Calculate the hinge loss with L2 regularization and the gradient dW for updating the weights
    margin_delta = 1  # delta that controls the margin width
    loss = 0.0  # initialize the loss to 0.0
    dW = np.zeros(Weight.shape).astype('float')  # initialize the gradient to zero
    # %%%%%%%%%%%%%% implement your code below (10~15 lines) %%%%%%%%%%%%%%
    scores = np.dot(X, Weight)  # class scores: a (number of samples, number of classes) array
    correct_class_scores = scores[np.arange(batch_size), y]  # score of the correct class per sample: shape (number of samples,)
    correct_class_scores = np.reshape(correct_class_scores, (batch_size, 1))  # reshape to (number of samples, 1) for broadcasting
    margins = scores - correct_class_scores + margin_delta  # margin of every class against the correct one
    margins[np.arange(batch_size), y] = 0  # the correct class contributes no margin
    margins[margins <= 0] = 0  # hinge: negative margins do not add to the loss
    loss += np.sum(margins) / batch_size + reg_para * np.sum(Weight * Weight)  # data loss plus L2 regularization
    margins[margins > 0] = 1  # indicator of margin-violating classes, reused for the gradient
    row_sum = np.sum(margins, axis=1)  # number of violating classes per data point (length N)
    margins[np.arange(batch_size), y] = -row_sum  # correct-class column accumulates the negative count
    dW += np.dot(X.T, margins) / batch_size + 2 * reg_para * Weight  # gradient of the loss w.r.t. Weight
    # %%%%%%%%%%%%%% your code ends here %%%%%%%%%%%%%%
    return loss, dW  # return the loss and gradient
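A standard way to verify a hand-derived gradient like dW is a centered numerical difference check. The sketch below is self-contained (it includes one possible completed implementation of the skeleton, with the same signature and the same regularization convention, reg_para * sum(W^2)); the tiny random batch sizes N, D, C are illustrative, not from the original question.

```python
import numpy as np

def svm_loss(X, y, Weight, reg_para, batch_size, num_classes):
    # One possible vectorized completion of the skeleton above.
    margin_delta = 1
    scores = np.dot(X, Weight)                                        # (N, C) class scores
    correct = scores[np.arange(batch_size), y].reshape(batch_size, 1) # correct-class score per row
    margins = np.maximum(0, scores - correct + margin_delta)          # hinge margins
    margins[np.arange(batch_size), y] = 0                             # correct class contributes nothing
    loss = np.sum(margins) / batch_size + reg_para * np.sum(Weight * Weight)
    ind = (margins > 0).astype(float)                                 # 1 where the margin is violated
    ind[np.arange(batch_size), y] = -np.sum(ind, axis=1)              # correct class gets minus the count
    dW = np.dot(X.T, ind) / batch_size + 2 * reg_para * Weight
    return loss, dW

# Tiny random problem (sizes are illustrative)
rng = np.random.default_rng(0)
N, D, C = 5, 4, 3
X = rng.standard_normal((N, D))
y = rng.integers(0, C, size=N)
W = rng.standard_normal((D, C)) * 0.01
loss, dW = svm_loss(X, y, W, 0.1, N, C)

# Centered numerical gradient: perturb each weight by +/- h
h = 1e-5
num_dW = np.zeros_like(W)
for i in range(D):
    for j in range(C):
        Wp = W.copy(); Wp[i, j] += h
        Wm = W.copy(); Wm[i, j] -= h
        lp, _ = svm_loss(X, y, Wp, 0.1, N, C)
        lm, _ = svm_loss(X, y, Wm, 0.1, N, C)
        num_dW[i, j] = (lp - lm) / (2 * h)

max_err = np.max(np.abs(dW - num_dW))
print(max_err)  # should be tiny unless a margin sits exactly at zero
```

The check can report a large error at points where some margin is exactly zero, since the hinge is not differentiable there; with random inputs this is vanishingly unlikely.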
