Question:
Suppose we have four positive data points: (-0.2, 0.6), (-0.4, 1.2), (-0.6, 0.2), (-1.2, 0.4), and two negative data points: (-0.2, 0.2), (-0.8, 0.8). Unfortunately, this data set is not linearly separable. The kernel trick is to find a mapping of each data point x to some feature vector φ(x) such that there is a function K, called a kernel, which satisfies K(x, x') = φ(x)ᵀφ(x'). Here, we consider the following normalized kernel (where ||x|| is the length of vector x):

K(x, x') = (x · x') / (||x|| ||x'||)

[Figure: the six training points plotted in the plane; both axes run from -1.2 to 1.2.]

1. What is the feature vector φ(x) corresponding to this kernel? Plot the corresponding φ(x) for each training point in the training set.
2. Are the feature vectors linearly separable in the feature space? If yes, determine the weight vector of the maximum margin separator.

Step by Step Solution


Step 1:

The kernel given in the question is the normalized kernel K(x, x') = (x · x') / (||x|| ||x'||), which is the cosine similarity measure often used in vector space models. Since K(x, x') = (x/||x||) · (x'/||x'||), the corresponding feature map is φ(x) = x/||x||: each training point is projected onto the unit circle. Note that collinear points share a feature vector, e.g. (-0.4, 1.2) = 2·(-0.2, 0.6), so the six training points collapse to three distinct feature vectors.
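The feature map above can be checked numerically. A minimal sketch (my own illustration, not part of the original answer) that verifies K(x, x') = φ(x)·φ(x') on every pair of training points and shows the collapse of collinear points:

```python
import numpy as np

# Training points from the question (first four positive, last two negative).
positives = [(-0.2, 0.6), (-0.4, 1.2), (-0.6, 0.2), (-1.2, 0.4)]
negatives = [(-0.2, 0.2), (-0.8, 0.8)]

def phi(x):
    """Feature map for the normalized kernel: project x onto the unit circle."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def K(x, xp):
    """Normalized (cosine-similarity) kernel K(x, x') = x.x' / (||x|| ||x'||)."""
    x, xp = np.asarray(x, float), np.asarray(xp, float)
    return x @ xp / (np.linalg.norm(x) * np.linalg.norm(xp))

# Sanity check: K(x, x') equals phi(x).phi(x') for every pair of points.
pts = positives + negatives
for a in pts:
    for b in pts:
        assert np.isclose(K(a, b), phi(a) @ phi(b))

# Collinear points share a feature vector, e.g. (-0.4, 1.2) = 2*(-0.2, 0.6),
# so the six points collapse to three distinct unit vectors in feature space.
assert np.allclose(phi(positives[0]), phi(positives[1]))
assert np.allclose(phi(positives[2]), phi(positives[3]))
assert np.allclose(phi(negatives[0]), phi(negatives[1]))
```

The three distinct feature vectors are (-1, 3)/√10 and (-3, 1)/√10 (positive) and (-1, 1)/√2 (negative), all on the unit circle.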


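Question 2 asks whether the feature vectors are linearly separable and, if so, for the maximum-margin separator. In feature space the negative direction (-1, 1)/√2 lies strictly between the two positive directions on the unit circle, so a line (chord) close to it separates the classes. The following is a hedged numerical sketch (my own, not the expert's locked answer), assuming all three distinct feature vectors are support vectors of the hard-margin SVM, so the active constraints w·φ(xᵢ) + b = yᵢ can be solved as a linear system:

```python
import numpy as np

# Distinct feature vectors (training points projected onto the unit circle).
A = np.array([-1.0, 3.0]) / np.sqrt(10)  # positive, y = +1
B = np.array([-3.0, 1.0]) / np.sqrt(10)  # positive, y = +1
C = np.array([-1.0, 1.0]) / np.sqrt(2)   # negative, y = -1

# Assumption: all three are support vectors, so the margin constraints are
# tight: w.A + b = 1, w.B + b = 1, w.C + b = -1. Solve for (w1, w2, b).
M = np.array([[A[0], A[1], 1.0],
              [B[0], B[1], 1.0],
              [C[0], C[1], 1.0]])
rhs = np.array([1.0, 1.0, -1.0])
w1, w2, b = np.linalg.solve(M, rhs)
w = np.array([w1, w2])

# Check primal feasibility: y_i (w.x_i + b) >= 1 for every point. Together
# with nonnegative dual multipliers (which hold here by the symmetry of A, B
# about C), this confirms the maximum-margin separator.
for x, y in [(A, 1), (B, 1), (C, -1)]:
    assert y * (w @ x + b) >= 1 - 1e-9

print("w =", w, "b =", b, "margin =", 1 / np.linalg.norm(w))
```

Under this assumption the solve gives w ≈ (13.40, -13.40) with b ≈ 17.94, i.e. a geometric margin of 1/||w|| ≈ 0.053; by symmetry w points along (1, -1), the direction opposite the negative feature vector.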
