Question: K-Nearest Neighbors: Classification Algorithm

K-Nearest Neighbors: Classification Algorithm
Given a positive integer K and an observation x0, we classify Y0 in the following manner:
(1) Identify the K closest observations to x0 in Euclidean distance, i.e., the usual distance metric between two points in Euclidean space
(2) For each class l = 1, ..., J, estimate the conditional probability $P(Y = l \mid x_0) = \frac{1}{K}\sum_{i \in N_0} I(y_i = l)$, where $N_0$ is the set of the K closest observations from step (1) and $I(\cdot)$ is the indicator function
(3) Identify the class l for which $P(Y = l \mid x_0)$ is largest and classify $y_0 = l$
(4) If there is a tie (for example, you set K = 6 and 3 votes occur for each of two classes, i.e. both estimated probabilities are 0.5, so the largest is not well-defined), there is no single consensus on how to proceed:
Some suggest using only odd values of K (which avoids ties when there are only two classes)
Some suggest decreasing the value of K for this observation until the tie is broken (see the sketch below)
K-nearest neighbors is derived from intuition and heuristics, not hardcore mathematics, so this question does not necessarily have a right answer; however, some truly brilliant methods have their roots in human intuition and were eventually backed by rigorous mathematics
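
As a concrete illustration of steps (1) through (4), here is a minimal NumPy sketch of the classifier. The function name knn_classify, the toy data, and the choice to break ties by shrinking K (one of the heuristics above) are illustrative assumptions, not part of the original text.

```python
import numpy as np

def knn_classify(X_train, y_train, x0, K):
    """Classify a single observation x0 with K-nearest neighbors.

    Estimates P(Y = l | x0) as the fraction of the K nearest neighbors
    belonging to class l, and breaks ties by decreasing K (one of the
    heuristics mentioned above).
    """
    # Step (1): Euclidean distances from x0 to every training observation
    dists = np.sqrt(((X_train - x0) ** 2).sum(axis=1))
    order = np.argsort(dists)

    while K >= 1:
        # Labels of the K closest observations (the neighborhood N_0)
        neighbor_labels = y_train[order[:K]]

        # Step (2): estimated probability for each class = fraction of neighbors in it
        classes, counts = np.unique(neighbor_labels, return_counts=True)
        probs = dict(zip(classes, counts / K))

        # Step (3): classify as the class with the largest estimated probability;
        # if the maximum is tied, shrink K and revote (step (4))
        winners = classes[counts == counts.max()]
        if len(winners) == 1:
            return winners[0], probs
        K -= 1

    # With K >= 1 the loop always returns (K = 1 cannot tie)
    raise ValueError("K must be a positive integer")


# Toy two-class data in 2-D (illustrative, not from the text)
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 3.5], [3.5, 3.0], [2.9, 2.5]])
y_train = np.array(["blue", "blue", "yellow", "yellow", "yellow"])

pred, probs = knn_classify(X_train, y_train, x0=np.array([2.5, 2.5]), K=3)
print(pred, probs)  # 'yellow', with P(blue) = 1/3 and P(yellow) = 2/3
```

On this toy data, two of the three nearest neighbors of (2.5, 2.5) are yellow, so the point is classified as yellow with estimated probability 2/3.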
K-Nearest Neighbors: Visualizing the Classification Algorithm
The left panel of Figure 2.14 shows how the prediction for a new point x would be made when K = 3: two of the three closest observations belong to the blue class, so the observation is predicted to be blue. The background of the right panel shows the resulting decision rule for every possible point in the figure, with the true observed data superimposed: yellow Os fall in the yellow region and blue Os in the blue region.
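
A picture in the spirit of that right panel can be produced by evaluating the K = 3 rule over a dense grid and overlaying the training points. The sketch below uses synthetic data as a stand-in for the figure's blue and yellow classes; the data, colors, and plot settings are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative two-class data standing in for the blue/yellow classes of Figure 2.14
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 1.0], 0.6, size=(20, 2)),
               rng.normal([2.5, 2.5], 0.6, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)  # 0 = blue, 1 = yellow

def knn_predict(x0, K=3):
    # Majority vote among the K nearest training points (Euclidean distance)
    dists = np.sqrt(((X - x0) ** 2).sum(axis=1))
    return np.bincount(y[np.argsort(dists)[:K]]).argmax()

# Evaluate the K = 3 decision rule on a dense grid, as in the right panel
xx, yy = np.meshgrid(np.linspace(-1, 4.5, 200), np.linspace(-1, 4.5, 200))
zz = np.array([knn_predict(p) for p in np.c_[xx.ravel(), yy.ravel()]]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3, cmap="coolwarm")                  # decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, cmap="coolwarm", edgecolors="k")   # observed data
plt.title("KNN decision regions (K = 3)")
plt.show()
```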

Step by Step Solution

There are 3 Steps involved in it

1 Expert Approved Answer
Step: 1 Unlock blur-text-image
Question Has Been Solved by an Expert!

Get step-by-step solutions from verified subject matter experts

Step: 2 Unlock
Step: 3 Unlock

Students Have Also Explored These Related Databases Questions!