
Question 7.0
{points: 1}
Next, we'll use cross-validation on our training data to choose k. In k-nn classification, we used accuracy to see how well our predictions matched the true labels.
In the context of k-nn regression, we will use RMSPE as the scoring metric instead. Interpreting an RMSPE value can be tricky, but generally speaking, if the predicted
values are very close to the true values, the RMSPE will be small; conversely, if the predicted values are far from the true values, the RMSPE will be quite
large.
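
For reference, RMSPE (root mean squared prediction error) is computed on held-out data as

    RMSPE = sqrt( (1/n) * sum_{i=1}^{n} (y_i - yhat_i)^2 )

where y_i is the true value and yhat_i is the predicted value for observation i; squaring the errors means large misses are penalized heavily, and taking the square root puts the result back on the scale of the response variable.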
Let's perform a cross-validation and choose the optimal k!
First, create a pipeline for k-nn. We are still using the k-nearest neighbours algorithm, and we will also use the StandardScaler to standardize the numerical
values. Store your pipeline in an object called marathon_pipe. Finally, perform a cross-validation with 5 folds using the cross_validate function, storing the
results in an object called marathon_cv. Remember that since the cross_validate function always maximizes its "score", and here we're using RMSPE (lower is
better!), we need to specify the negative RMSPE ("neg_root_mean_squared_error") as the scoring metric.
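
A minimal sketch of what the answer cell might look like. Here X_train and y_train are placeholders standing in for the marathon train/test split the worksheet would have created earlier, and n_neighbors=5 is an arbitrary starting value, since the whole point of the cross-validation is to choose k:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import cross_validate

    # Placeholder training data; in the worksheet, X_train and y_train
    # would come from the marathon train/test split created earlier.
    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 100, size=(50, 1))
    y_train = 2 * X_train.ravel() + rng.normal(0, 5, size=50)

    # Pipeline: standardize the numerical predictor(s), then apply
    # k-nn regression (n_neighbors=5 is an arbitrary starting value).
    marathon_pipe = make_pipeline(
        StandardScaler(),
        KNeighborsRegressor(n_neighbors=5),
    )

    # 5-fold cross-validation. cross_validate always maximizes its
    # score, so we pass negative RMSE; the values stored in
    # marathon_cv["test_score"] will therefore be <= 0.
    marathon_cv = cross_validate(
        marathon_pipe,
        X_train,
        y_train,
        cv=5,
        scoring="neg_root_mean_squared_error",
    )
    marathon_cv

Because the scores are negative RMSE values, a positive RMSPE estimate can be recovered by negating the mean of the test scores, e.g. -marathon_cv["test_score"].mean().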