Question: For problem 5, instead of implementing the validation set approach, proceed to use leave-one-out cross-validation (function knn.cv()). Run it for K = 1, 3, 10 and compare the resulting CV errors. Use all observations of the Auto data set for the relevant predictors, not just the "training subset" (as we are not doing any train/test subdivision here).

(a) Run the set.seed(1) command prior to each knn.cv() call, for uniformity of the results.

(b) First do it for the data in its original form (as you had in 5(d), no scaling). What are the test errors? Which method wins? Show the code for just one of the K values (e.g. K = 3); see the sketch after this list.

(c) Then do it for the scaled data (as you had in 5(e)). What are the test errors? Which method wins? Show the code for just one of the K values (e.g. K = 3); the sketch below covers this case as well.

(d) Which results should we trust more: the validation set approach from problem 5, or the CV results here? Why?
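A minimal sketch for parts (b) and (c), assuming the common ISLR setup in which the response is a binary high/low-mileage indicator (here called mpg01) and the predictors are cylinders, displacement, horsepower, and weight; substitute whatever response and predictors problem 5 actually used. knn.cv() from the class package runs leave-one-out CV internally and returns one predicted class per observation:

library(ISLR)   # Auto data set
library(class)  # knn.cv()

# Assumed setup (hypothetical): binary response and predictor choice
# taken from the usual ISLR exercise; adjust to match problem 5.
mpg01 <- as.factor(ifelse(Auto$mpg > median(Auto$mpg), 1, 0))
X <- Auto[, c("cylinders", "displacement", "horsepower", "weight")]

# (b) Unscaled predictors, K = 3
set.seed(1)                     # knn.cv() breaks tied votes at random
pred.raw <- knn.cv(train = X, cl = mpg01, k = 3)
mean(pred.raw != mpg01)         # LOOCV error rate

# (c) Scaled predictors, K = 3
X.scaled <- scale(X)
set.seed(1)
pred.scaled <- knn.cv(train = X.scaled, cl = mpg01, k = 3)
mean(pred.scaled != mpg01)      # LOOCV error rate

Note that LOOCV itself involves no random splitting; set.seed(1) only affects how knn.cv() breaks tied votes among the nearest neighbours.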

Step by Step Solution

(a) K = 1: Leave-one-out cross-validation error = 0.1464286 ...
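To compare across K = 1, 3, 10 as the question asks, the same call can simply be looped (a sketch continuing the assumed setup above, not the expert's locked solution):

for (k in c(1, 3, 10)) {
  set.seed(1)
  pred <- knn.cv(train = X, cl = mpg01, k = k)  # reuses X and mpg01 from the sketch above
  cat("K =", k, " LOOCV error:", mean(pred != mpg01), "\n")
}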

