Question: The available solution on the site does not work for all of the test cases shown; it passes only the first two. Please provide a full solution that passes every test case below.

## Part Two: Compute Loss [Graded]

Now you will implement the function `loss`. The function takes in model parameters `beta` and `b`, training points `xTr`, `yTr`, and testing points `xTe`, `yTe`, along with the hyperparameters `C`, `kerneltype`, and `kpar`. You will need to calculate both kernel matrices, using `computeK` on `(xTr, xTr)` and `(xTr, xTe)` respectively.

When we use the loss function later on, we are going to be a little clever: we will use it for both the training and the testing loss. During training, we will call `loss(beta, b, xTr, yTr, xTr, yTr, C, kerneltype, kpar)` so that the hinge loss gets calculated on the training set. During testing, we will just call `loss(beta, b, xTr, yTr, xTe, yTe, C, kerneltype, kpar)` so that the hinge loss gets calculated on the test set. Therefore, you should implement `loss` keeping in mind how we will call it during training and testing.

```python
def loss(beta, b, xTr, yTr, xTe, yTe, C, kerneltype, kpar=1):
    """
    Calculates the loss (regularizer + squared hinge loss) for testing data
    against training data and parameters beta, b.

    Input:
        beta       : n-dimensional vector that stores the linear combination coefficients
        b          : bias term, a scalar
        xTr        : nxd dimensional data matrix (training set, each row is an input vector)
        yTr        : n-dimensional vector (training labels, each entry is a label)
        xTe        : mxd dimensional matrix (test set, each row is an input vector)
        yTe        : m-dimensional vector (test labels, each entry is a label)
        C          : scalar (constant that controls the tradeoff between the
                     l2-regularizer and the hinge loss)
        kerneltype : one of ['linear', 'polynomial', 'rbf']
        kpar       : kernel parameter (inverse sigma^2 in case of 'rbf',
                     degree p in case of 'polynomial')

    Output:
        loss_val : the total loss obtained with (beta, xTr, yTr, b) on xTe and yTe, a scalar
    """
    loss_val = 0.0

    # compute the kernel values between xTr and xTr
    kernel_train = computeK(kerneltype, xTr, xTr, kpar)
    # compute the kernel values between xTr and xTe
    kernel_test = computeK(kerneltype, xTr, xTe, kpar)

    # YOUR CODE HERE

    return loss_val
```

**Test cases.** Make sure the solution provided works for all of the test cases below:

```python
# These tests check whether your loss() is implemented correctly
xTr_test, yTr_test = generate_data()
n, d = xTr_test.shape

# Check whether your loss() returns a scalar
def loss_test1():
    beta = np.zeros(n)
    b = np.zeros(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    return np.isscalar(loss_val)

# Check whether your loss() returns a nonnegative scalar
def loss_test2():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    return loss_val >= 0

# Check whether you implement the l2-regularizer correctly
def loss_test3():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 0, 'rbf')
    loss_val_grader = loss_grader(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 0, 'rbf')
    return (np.linalg.norm(loss_val - loss_val_grader) < 1e-5)

# Check whether you implement the squared hinge loss correctly
def loss_test4():
    beta = np.zeros(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    loss_val_grader = loss_grader(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    return (np.linalg.norm(loss_val - loss_val_grader) < 1e-5)

# Check whether you implement the squared hinge loss correctly
def loss_test5():
    beta = np.zeros(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    loss_val_grader = loss_grader(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 10, 'rbf')
    return (np.linalg.norm(loss_val - loss_val_grader) < 1e-5)

# Check whether you implement the loss correctly
def loss_test6():
    beta = np.zeros(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 100, 'rbf')
    loss_val_grader = loss_grader(beta, b, xTr_test, yTr_test, xTr_test, yTr_test, 100, 'rbf')
    return (np.linalg.norm(loss_val - loss_val_grader) < 1e-5)

# Check whether you implement the loss correctly for testing data
def loss_test7():
    xTe_test, yTe_test = generate_data()
    m, _ = xTe_test.shape
    beta = np.zeros(n)
    b = np.random.rand(1)
    loss_val = loss(beta, b, xTr_test, yTr_test, xTe_test, yTe_test, 100, 'rbf')
    loss_val_grader = loss_grader(beta, b, xTr_test, yTr_test, xTe_test, yTe_test, 100, 'rbf')
    return (np.linalg.norm(loss_val - loss_val_grader) < 1e-5)

runtest(loss_test1, 'loss_test1')
runtest(loss_test2, 'loss_test2')
runtest(loss_test3, 'loss_test3')
runtest(loss_test4, 'loss_test4')
runtest(loss_test5, 'loss_test5')
runtest(loss_test6, 'loss_test6')
runtest(loss_test7, 'loss_test7')
```
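Answer: Below is one complete implementation intended to pass all seven tests above. The objective it computes is the standard kernelized squared-hinge SVM loss, `beta^T K_train beta + C * sum_i max(0, 1 - yTe_i * f(xTe_i))^2` with decision function `f(x) = sum_j beta_j * k(xTr_j, x) + b`. This form is consistent with the tests: `loss_test3` sets `C = 0` to isolate the l2-regularizer, so `C` must scale the hinge term rather than the regularizer. Two assumptions are worth flagging, since `computeK` and `loss_grader` are defined elsewhere in the assignment: `computeK(kerneltype, xTr, xTe, kpar)` is assumed to return an n x m matrix whose (i, j) entry is k(xTr_i, xTe_j), and the grader is assumed to use exactly the objective above. Treat this as a sketch to verify against `loss_grader`, not the official solution.

```python
import numpy as np

def loss(beta, b, xTr, yTr, xTe, yTe, C, kerneltype, kpar=1):
    """
    Calculates the loss (regularizer + squared hinge loss) for testing data
    against training data and parameters beta, b. Returns a scalar.
    """
    # kernel between training points (n x n); used by the l2-regularizer
    kernel_train = computeK(kerneltype, xTr, xTr, kpar)
    # kernel between training and evaluation points (assumed n x m);
    # used to evaluate the decision function on xTe
    kernel_test = computeK(kerneltype, xTr, xTe, kpar)

    # l2-regularizer in the kernel-induced feature space: beta^T K_train beta
    regularizer = beta @ kernel_train @ beta

    # decision values on the evaluation set:
    # f(xTe_i) = sum_j beta_j * k(xTr_j, xTe_i) + b
    preds = kernel_test.T @ beta + b

    # squared hinge loss on (xTe, yTe), scaled by the tradeoff constant C
    margins = np.maximum(0.0, 1.0 - yTe * preds)
    squared_hinge = C * np.sum(margins ** 2)

    # float() collapses the numpy 0-d result into a Python scalar,
    # so np.isscalar(loss_val) is True as loss_test1 requires
    loss_val = float(regularizer + squared_hinge)
    return loss_val
```

A note on the orientation assumption: `kernel_test.T @ beta` treats `kernel_test` as n x m; if your `computeK` returns the transpose (m x n), drop the `.T`. Passing `b` as a length-1 array, as the tests do, is also fine, since NumPy broadcasts it across the m decision values.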
