Question: 3. Now consider a probabilistic formulation of basis function regression. This is useful as a way to incorporate measurement noise. For example, images may contain white noise due to lighting variations, hardware issues, and other reasons. Here, we'll assume the target output y is equal to f(x) plus Gaussian noise. Specifically, we assume y given x follows a Gaussian distribution with mean f(x) and variance σ². We write this as y ~ N(f(x), σ²). As above, we assume a single-variable input/output problem, with training data {(x_i, y_i)}_{i=1}^N, and that f(x) is a weighted sum of basis functions evaluated at x, with weights w = [w_0, ..., w_K]^T.

(a) Formulate the Maximum Likelihood (ML) objective (without solving for the weights).

(b) What can you say about the negative log likelihood as compared to the LS objective above in Q1?

(c) Now, suppose that the model parameters (weights) follow a Gaussian distribution with zero mean and some fixed isotropic covariance α^{-1}I, i.e., w ~ N(0, α^{-1}I). Formulate and take the negative log of the Maximum a Posteriori (MAP) objective. Note: you may ignore the evidence term, as it does not depend on the parameters of interest.

(d) What can you say about minimizing the negative log posterior compared to the LS objective above?

(e) What happens if we assume that the model parameters follow a Uniform distribution? What can you say about the ML and MAP estimates in that case?
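For orientation, here is a minimal sketch (in LaTeX) of the quantities the question refers to, assuming i.i.d. training samples and fixed basis functions b_k(x); the notation follows the problem statement, and this sketch is a study aid rather than a verified expert solution:

    % Gaussian observation model and basis-function regressor
    p(y_i \mid x_i, \mathbf{w}) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(y_i - f(x_i))^2}{2\sigma^2} \right), \qquad f(x) = \sum_{k=0}^{K} w_k b_k(x)

    % (a) ML objective over i.i.d. data
    \hat{\mathbf{w}}_{\mathrm{ML}} = \arg\max_{\mathbf{w}} \prod_{i=1}^{N} p(y_i \mid x_i, \mathbf{w})

    % (b) negative log likelihood
    -\log L(\mathbf{w}) = \frac{1}{2\sigma^2} \sum_{i=1}^{N} \left( y_i - f(x_i) \right)^2 + \frac{N}{2} \log(2\pi\sigma^2)

    % (c) negative log posterior with prior w ~ N(0, \alpha^{-1} I), evidence term dropped
    -\log p(\mathbf{w} \mid \mathcal{D}) = \frac{1}{2\sigma^2} \sum_{i=1}^{N} \left( y_i - f(x_i) \right)^2 + \frac{\alpha}{2} \|\mathbf{w}\|^2 + \text{const.}

The negative log likelihood equals the LS objective up to a positive scale and an additive constant, so both are minimized by the same weights; the zero-mean Gaussian prior adds the quadratic penalty (α/2)||w||², making the negative log posterior a regularized (ridge) least-squares objective; and a uniform prior contributes no w-dependent term, so the MAP estimate reduces to the ML estimate.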

