Question: Exercise 2 [4 points]. Given a training set $D = \{(x^{(i)}, y^{(i)}),\ i = 1, \ldots, M\}$, where $x^{(i)} \in \mathbb{R}^N$ and $y^{(i)} \in \{1, 2, \ldots, C\}$, derive the maximum likelihood estimates of the naive Bayes classifier for real-valued $x_j$ modeled with a Laplacian distribution, i.e.,

$$p(x_j \mid y = c) = \frac{1}{2\sigma_{j|c}} \exp\!\left(-\frac{|x_j - \mu_{j|c}|}{\sigma_{j|c}}\right).$$

Exercise 3 [4 points]. Prove that in binary classification, the posterior of linear discriminant analysis, i.e., $p(y = 1 \mid x;\ \pi, \mu, \Sigma)$, is in the form of a sigmoid function

$$p(y = 1 \mid x;\ \theta) = \frac{1}{1 + e^{-\theta^\top x}},$$

where $\theta$ is a function of $\{\pi, \mu, \Sigma\}$. Hint: remember to use the convention of letting $x_0 = 1$ that incorporates the bias term into the parameter vector $\theta$.
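For orientation on Exercise 3, here is a minimal sketch of the standard argument, assuming the usual LDA model: Gaussian class conditionals $\mathcal{N}(\mu_0, \Sigma)$ and $\mathcal{N}(\mu_1, \Sigma)$ with a shared covariance and prior $\pi = p(y = 1)$ (splitting $\mu$ into the two class means is my notation; this is the standard route, not necessarily the steps in the expert solution below). By Bayes' rule,

$$p(y = 1 \mid x) = \frac{\pi\,\mathcal{N}(x;\mu_1,\Sigma)}{\pi\,\mathcal{N}(x;\mu_1,\Sigma) + (1-\pi)\,\mathcal{N}(x;\mu_0,\Sigma)} = \frac{1}{1 + e^{-a(x)}}, \qquad a(x) = \log\frac{\pi\,\mathcal{N}(x;\mu_1,\Sigma)}{(1-\pi)\,\mathcal{N}(x;\mu_0,\Sigma)}.$$

Because both Gaussians share $\Sigma$, the quadratic term $x^\top \Sigma^{-1} x$ cancels in $a(x)$, leaving

$$a(x) = (\mu_1 - \mu_0)^\top \Sigma^{-1} x \;-\; \tfrac{1}{2}(\mu_1 + \mu_0)^\top \Sigma^{-1} (\mu_1 - \mu_0) + \log\frac{\pi}{1-\pi},$$

which is affine in $x$; with the convention $x_0 = 1$, stacking the bias into $\theta$ gives $a(x) = \theta^\top x$ and hence the sigmoid form.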

Step by Step Solution

Step 1:

Exercise 2: To derive the maximum likelihood estimates of the naive Bayes classifier for real-valued data modeled with a Laplacian distribution, we need to estimate ...
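For the Laplacian likelihood in Exercise 2, the estimates that come out of maximizing the log likelihood are the usual Laplace MLEs: $\hat\mu_{j|c}$ is the median of feature $j$ over the class-$c$ examples (it minimizes $\sum_i |x_j^{(i)} - \mu|$), $\hat\sigma_{j|c}$ is the mean absolute deviation from that median, and the class priors are the class frequencies. A minimal NumPy sketch under those assumptions (function and variable names are illustrative, not from the posted solution, and labels are taken as $0, \ldots, C-1$ for indexing convenience):

```python
import numpy as np

def fit_laplace_naive_bayes(X, y, num_classes):
    """MLE for naive Bayes with Laplacian class-conditionals (illustrative sketch).

    For each class c and feature j:
      mu[c, j]    = median of X[y == c, j]          (minimizes the sum of absolute deviations)
      sigma[c, j] = mean |X[y == c, j] - mu[c, j]|  (scale MLE of the Laplace density)
    Class priors are empirical class frequencies.
    """
    M, N = X.shape
    priors = np.zeros(num_classes)
    mu = np.zeros((num_classes, N))
    sigma = np.zeros((num_classes, N))
    for c in range(num_classes):
        Xc = X[y == c]                      # examples belonging to class c
        priors[c] = Xc.shape[0] / M
        mu[c] = np.median(Xc, axis=0)
        sigma[c] = np.abs(Xc - mu[c]).mean(axis=0)
    return priors, mu, sigma

def log_joint(x, priors, mu, sigma):
    """log p(x, y=c) = log p(y=c) + sum_j [ -log(2*sigma[c,j]) - |x_j - mu[c,j]| / sigma[c,j] ]."""
    per_feature = -np.log(2.0 * sigma) - np.abs(x - mu) / sigma   # shape (C, N)
    return np.log(priors) + per_feature.sum(axis=1)               # shape (C,)
```

Prediction would then be `np.argmax(log_joint(x, priors, mu, sigma))`; note this only sketches the resulting estimators, whereas the exercise asks for the derivation itself.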

