Pattern Recognition and Machine Learning, 1st Edition, Christopher M. Bishop - Solutions
1.5 Using the definition (1.38), show that $\mathrm{var}[f(x)]$ satisfies (1.39).
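The identity in question is $\mathrm{var}[f(x)] = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2$. A minimal Monte Carlo sanity check in Python, with $f(x) = \sin x$ and $x$ drawn from a standard Gaussian (both arbitrary stand-ins, not part of the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = sin(x) with x ~ N(0, 1); both choices are illustrative only.
x = rng.normal(size=1_000_000)
f = np.sin(x)

lhs = np.mean((f - f.mean()) ** 2)       # definition (1.38)
rhs = np.mean(f ** 2) - f.mean() ** 2    # identity (1.39)
print(lhs, rhs)                          # agree to Monte Carlo accuracy
```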
1.4 www Consider a probability density $p_x(x)$ defined over a continuous variable $x$, and suppose that we make a nonlinear change of variable using $x = g(y)$, so that the density transforms according to (1.27). By differentiating (1.27), show that the location $\hat{y}$ of the maximum of the density in $y$ is not in general related to the location $\hat{x}$ of the maximum of the density over $x$ by the simple functional relation $\hat{x} = g(\hat{y})$, as a consequence of the Jacobian factor. This shows that the maximum of a probability density (in contrast to a simple function) is dependent on the choice of variable. Verify that, in the case of a linear transformation, the location of the maximum transforms in the same way as the variable itself.
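A quick numerical illustration of why the Jacobian factor matters. Assuming, purely for illustration, $p_x = \mathcal{N}(x \mid 1, 0.25)$ and $x = g(y) = e^y$ (so $g'(y) = e^y$), the mode of $p_y$ found by grid search is not $g^{-1}(\hat{x})$:

```python
import numpy as np
from scipy.stats import norm

p_x = norm(loc=1.0, scale=0.5).pdf   # illustrative density over x
g = np.exp                           # illustrative nonlinear change of variable

y = np.linspace(-2.0, 2.0, 200_001)
p_y = p_x(g(y)) * g(y)               # (1.27): p_y(y) = p_x(g(y)) |g'(y)|, and g'(y) = e^y > 0

y_hat = y[np.argmax(p_y)]
x_hat = 1.0                          # mode of p_x
print(y_hat)                         # ~0.188
print(np.log(x_hat))                 # g^{-1}(x_hat) = 0, a different point
```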
1.3 Suppose that we have three coloured boxes $r$ (red), $b$ (blue), and $g$ (green). Box $r$ contains 3 apples, 4 oranges, and 3 limes; box $b$ contains 1 apple, 1 orange, and 0 limes; and box $g$ contains 3 apples, 3 oranges, and 4 limes. If a box is chosen at random with probabilities $p(r) = 0.2$, $p(b) = 0.2$, and $p(g) = 0.6$, and a piece of fruit is removed from the box (with equal probability of selecting any of the items in the box), then what is the probability of selecting an apple? If we observe that the selected fruit is in fact an orange, what is the probability that it came from the green box?
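Both questions reduce to the sum and product rules plus Bayes' theorem. A short Python check, using the box contents and priors given above:

```python
# Priors over boxes and fruit counts per box, as given in the exercise.
p_box = {'r': 0.2, 'b': 0.2, 'g': 0.6}
counts = {'r': {'apple': 3, 'orange': 4, 'lime': 3},
          'b': {'apple': 1, 'orange': 1, 'lime': 0},
          'g': {'apple': 3, 'orange': 3, 'lime': 4}}

def p_fruit(fruit):
    # Sum rule: p(fruit) = sum over boxes of p(fruit | box) p(box)
    return sum(p_box[b] * counts[b][fruit] / sum(counts[b].values())
               for b in p_box)

print(p_fruit('apple'))               # 0.34
# Bayes' theorem: p(g | orange) = p(orange | g) p(g) / p(orange)
print(0.6 * (3 / 10) / p_fruit('orange'))   # 0.5
```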
1.2 Write down the set of coupled linear equations, analogous to (1.122), satisfied by the coefficients $w_i$ which minimize the regularized sum-of-squares error function given by (1.4).
1.1 www Consider the sum-of-squares error function given by (1.2) in which the function $y(x, \mathbf{w})$ is given by the polynomial (1.1). Show that the coefficients $\mathbf{w} = \{w_i\}$ that minimize this error function are given by the solution to the following set of linear equations: $\sum_{j=0}^{M} A_{ij} w_j = T_i$ (1.122), where $A_{ij} = \sum_{n=1}^{N} (x_n)^{i+j}$ and $T_i = \sum_{n=1}^{N} (x_n)^{i} t_n$.
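A sketch of both systems on synthetic data: Exercise 1.1's normal equations, and the regularized variant of Exercise 1.2, which simply adds $\lambda$ to the diagonal of $A$. The data (noisy samples of $\sin 2\pi x$) and all constants are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N, M, lam = 10, 3, 1e-3
x = rng.uniform(size=N)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=N)

# A_ij = sum_n x_n^(i+j),  T_i = sum_n x_n^i t_n
idx = np.arange(M + 1)
A = np.array([[np.sum(x ** (p + q)) for q in idx] for p in idx])
T = np.array([np.sum((x ** p) * t) for p in idx])

w = np.linalg.solve(A, T)                             # Exercise 1.1
w_reg = np.linalg.solve(A + lam * np.eye(M + 1), T)   # Exercise 1.2: lambda on the diagonal
print(w, w_reg)
```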
1.15 www In this exercise and the next, we explore how the number of independent parameters in a polynomial grows with the order $M$ of the polynomial and with the dimensionality $D$ of the input space. We start by writing down the $M$th order term for a polynomial in $D$ dimensions in the form $\sum_{i_1=1}^{D} \sum_{i_2=1}^{D} \cdots \sum_{i_M=1}^{D} w_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}$.
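The independent coefficients in this term correspond to distinct monomials, i.e. multisets of size $M$ drawn from $D$ variables. A brute-force count against the closed form $\binom{D+M-1}{M}$, which should be the result (1.135) that Exercise 1.16 refers to:

```python
from itertools import combinations_with_replacement
from math import comb

def n_params(D, M):
    # Count distinct monomials x_{i1} ... x_{iM}: multisets of size M from D variables.
    return sum(1 for _ in combinations_with_replacement(range(D), M))

for D, M in [(1, 3), (3, 2), (4, 3), (5, 4)]:
    assert n_params(D, M) == comb(D + M - 1, M)
print("ok")
```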
1.8 www By using a change of variables, verify that the univariate Gaussian distribution given by (1.46) satisfies (1.49). Next, by differentiating both sides of the normalization condition $\int_{-\infty}^{\infty} \mathcal{N}(x \mid \mu, \sigma^2)\, dx = 1$ (1.127) with respect to $\sigma^2$, verify that the Gaussian satisfies (1.50).
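A numerical check by direct quadrature, assuming (1.49) and (1.50) are the moment results $\mathbb{E}[x] = \mu$ and $\mathbb{E}[x^2] = \mu^2 + \sigma^2$ (an assumption, since the listing does not restate them); the parameter values are arbitrary:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, var = 1.3, 0.7
pdf = norm(loc=mu, scale=np.sqrt(var)).pdf

print(quad(pdf, -np.inf, np.inf)[0])                          # normalization: 1.0
print(quad(lambda x: x * pdf(x), -np.inf, np.inf)[0], mu)     # E[x] = mu
print(quad(lambda x: x**2 * pdf(x), -np.inf, np.inf)[0],
      mu**2 + var)                                            # E[x^2] = mu^2 + sigma^2
```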
1.9 www Show that the mode (i.e. the maximum) of the Gaussian distribution (1.46) is given by $\mu$. Similarly, show that the mode of the multivariate Gaussian (1.52) is given by $\boldsymbol{\mu}$.
1.10 www Suppose that the two variables $x$ and $z$ are statistically independent. Show that the mean and variance of their sum satisfy $\mathbb{E}[x + z] = \mathbb{E}[x] + \mathbb{E}[z]$ (1.128) and $\mathrm{var}[x + z] = \mathrm{var}[x] + \mathrm{var}[z]$ (1.129).
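A Monte Carlo check of both identities, with two deliberately different distributions for $x$ and $z$ (the choices are arbitrary; only independence matters):

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.normal(loc=2.0, scale=1.5, size=1_000_000)
z = rng.exponential(scale=0.5, size=1_000_000)

print(np.mean(x + z), np.mean(x) + np.mean(z))   # means add (1.128)
print(np.var(x + z), np.var(x) + np.var(z))      # variances add (1.129)
```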
1.17 www The gamma function is defined by $\Gamma(x) \equiv \int_{0}^{\infty} u^{x-1} e^{-u}\, du$ (1.141). Using integration by parts, prove the relation $\Gamma(x + 1) = x\Gamma(x)$. Show also that $\Gamma(1) = 1$ and hence that $\Gamma(x + 1) = x!$ when $x$ is an integer.
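All three claims are easy to spot-check numerically with scipy's gamma function before proving them:

```python
from math import factorial
from scipy.special import gamma

x = 3.7
print(gamma(x + 1), x * gamma(x))   # recurrence: Gamma(x+1) = x Gamma(x)
print(gamma(1.0))                   # Gamma(1) = 1
print(gamma(6), factorial(5))       # Gamma(n+1) = n! at integers: both 120
```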
1.16 In Exercise 1.15, we proved the result (1.135) for the number of independent parameters in the $M$th order term of a $D$-dimensional polynomial. We now find an expression for the total number $N(D, M)$ of independent parameters in all of the terms up to and including the $M$th order. First show that $N(D, M)$ satisfies $N(D, M) = \sum_{m=0}^{M} n(D, m)$ (1.136), where $n(D, m)$ is the number of independent parameters in the term of order $m$.
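Taking $n(D, m) = \binom{D+m-1}{m}$ from the previous exercise, the sum telescopes (by the hockey-stick identity) to $\binom{D+M}{M}$, which should be the closed form this exercise builds toward. A quick check:

```python
from math import comb

def N_total(D, M):
    # Sum of per-order counts n(D, m) = C(D + m - 1, m) over m = 0..M.
    return sum(comb(D + m - 1, m) for m in range(M + 1))

for D, M in [(1, 5), (3, 3), (10, 4)]:
    assert N_total(D, M) == comb(D + M, M)   # conjectured closed form C(D+M, M)
print("ok")
```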
1.14 Show that an arbitrary square matrix with elements $w_{ij}$ can be written in the form $w_{ij} = w_{ij}^{S} + w_{ij}^{A}$ where $w_{ij}^{S}$ and $w_{ij}^{A}$ are symmetric and anti-symmetric matrices, respectively, satisfying $w_{ij}^{S} = w_{ji}^{S}$ and $w_{ij}^{A} = -w_{ji}^{A}$ for all $i$ and $j$. Now consider the second order term in a higher order polynomial in $D$ dimensions, given by $\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j$, and show that the contribution from the anti-symmetric matrix vanishes, so that the term is determined by the symmetric matrix alone.
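The decomposition and the vanishing of the anti-symmetric contribution are one-liners to verify numerically; the matrix size here is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.normal(size=(4, 4))
W_S = (W + W.T) / 2          # symmetric part:      W_S == W_S.T
W_A = (W - W.T) / 2          # anti-symmetric part: W_A == -W_A.T

x = rng.normal(size=4)
print(np.allclose(W, W_S + W_A))   # decomposition is exact
print(x @ W @ x, x @ W_S @ x)      # equal: only the symmetric part contributes
print(x @ W_A @ x)                 # ~0
```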
1.13 Suppose that the variance of a Gaussian is estimated using the result (1.56) but with the maximum likelihood estimate $\mu_{\mathrm{ML}}$ replaced with the true value $\mu$ of the mean. Show that this estimator has the property that its expectation is given by the true variance $\sigma^2$.
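A Monte Carlo contrast between the two estimators: centring on the true $\mu$ is unbiased, while centring on the sample mean is biased by the factor $(N-1)/N$, matching the result (1.58) cited in Exercise 1.12. All constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, N, trials = 0.5, 2.0, 5, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))

est_true_mu = np.mean((x - mu) ** 2, axis=1)   # (1.56) with the true mean: unbiased
est_ml = np.var(x, axis=1)                     # (1.56) as-is: biased by (N-1)/N

print(est_true_mu.mean())                      # ~2.0  (= sigma^2)
print(est_ml.mean(), (N - 1) / N * sigma2)     # ~1.6  (= (N-1)/N * sigma^2)
```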
1.12 www Using the results (1.49) and (1.50), show that $\mathbb{E}[x_n x_m] = \mu^2 + I_{nm}\sigma^2$ (1.130), where $x_n$ and $x_m$ denote data points sampled from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, and $I_{nm}$ satisfies $I_{nm} = 1$ if $n = m$ and $I_{nm} = 0$ otherwise. Hence prove the results (1.57) and (1.58).
1.11 By setting the derivatives of the log likelihood function (1.54) with respect to $\mu$ and $\sigma^2$ equal to zero, verify the results (1.55) and (1.56).
1.18 www We can use the result (1.126) to derive an expression for the surface area $S_D$, and the volume $V_D$, of a sphere of unit radius in $D$ dimensions. To do this, consider the following result, which is obtained by transforming from Cartesian to polar coordinates: $\prod_{i=1}^{D} \int_{-\infty}^{\infty} e^{-x_i^2}\, dx_i = S_D \int_{0}^{\infty} e^{-r^2} r^{D-1}\, dr$ (1.142).
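The closed forms this derivation leads to should be $S_D = 2\pi^{D/2}/\Gamma(D/2)$ and $V_D = S_D/D$ (stated here as the expected result, not proved); a quick check against the familiar $D = 2, 3$ cases:

```python
import numpy as np
from scipy.special import gamma

def S(D):
    # Surface area of the unit sphere in D dimensions.
    return 2 * np.pi ** (D / 2) / gamma(D / 2)

def V(D):
    # Volume of the unit ball: integrate S(D) * r^(D-1) over r in [0, 1].
    return S(D) / D

print(S(2), 2 * np.pi)        # circumference of the unit circle
print(S(3), 4 * np.pi)        # area of the unit 2-sphere
print(V(3), 4 * np.pi / 3)    # volume of the unit ball
```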