Question:

In Example 2.2 we saw that the statistical error can be expressed (see (2.20)) as

\[ \int_{0}^{1}\left(\left[1, \ldots, u^{p-1}\right]\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right)\right)^{2} \mathrm{~d} u=\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right)^{\top} \mathbf{H}_{p}\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right) \]

By Exercise 10 the random vector \(\boldsymbol{Z}_{n}:=\sqrt{n}\left(\widehat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{p}\right)\) has asymptotically a multivariate normal distribution with mean vector \(\mathbf{0}\) and covariance matrix \(\mathbf{V}:=\ell^{*} \mathbf{H}_{p}^{-1}+\mathbf{H}_{p}^{-1} \mathbf{M}_{p} \mathbf{H}_{p}^{-1}\). Use Theorem C.2 to show that the expected statistical error is asymptotically

\[ \begin{equation*} \mathbb{E}\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right)^{\top} \mathbf{H}_{p}\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right) \simeq \frac{\ell^{*} p}{n}+\frac{\operatorname{tr}\left(\mathbf{M}_{p} \mathbf{H}_{p}^{-1}\right)}{n}, \quad n \rightarrow \infty \tag{2.54} \end{equation*} \]

We note a subtle technical detail: in general, convergence in distribution does not imply convergence in \(L_{p}\)-norm (see Example C.6), and so here we have implicitly assumed that \(\left\|\boldsymbol{Z}_{n}\right\|\) not only converges in distribution but also converges in \(L_{2}\) to the constant \(\lim _{n \uparrow \infty} \mathbb{E}\left\|\boldsymbol{Z}_{n}\right\|\).
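A sketch of the requested calculation, assuming Theorem C.2 is the standard result on the expectation of a quadratic form, \(\mathbb{E}\, \boldsymbol{X}^{\top} \mathbf{A} \boldsymbol{X}=\operatorname{tr}(\mathbf{A} \boldsymbol{\Sigma})+\boldsymbol{\mu}^{\top} \mathbf{A} \boldsymbol{\mu}\) for a random vector \(\boldsymbol{X}\) with mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\). Applying it to \(\boldsymbol{Z}_{n}\), which asymptotically has mean \(\mathbf{0}\) and covariance \(\mathbf{V}\), gives

\[ \mathbb{E}\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right)^{\top} \mathbf{H}_{p}\left(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{p}\right)=\frac{1}{n} \mathbb{E}\, \boldsymbol{Z}_{n}^{\top} \mathbf{H}_{p} \boldsymbol{Z}_{n} \simeq \frac{\operatorname{tr}\left(\mathbf{H}_{p} \mathbf{V}\right)}{n}=\frac{\operatorname{tr}\left(\ell^{*} \mathbf{I}_{p}+\mathbf{M}_{p} \mathbf{H}_{p}^{-1}\right)}{n}=\frac{\ell^{*} p+\operatorname{tr}\left(\mathbf{M}_{p} \mathbf{H}_{p}^{-1}\right)}{n}, \]

where the third step uses \(\mathbf{H}_{p} \mathbf{V}=\ell^{*} \mathbf{H}_{p} \mathbf{H}_{p}^{-1}+\mathbf{H}_{p} \mathbf{H}_{p}^{-1} \mathbf{M}_{p} \mathbf{H}_{p}^{-1}=\ell^{*} \mathbf{I}_{p}+\mathbf{M}_{p} \mathbf{H}_{p}^{-1}\) and \(\operatorname{tr}\left(\ell^{*} \mathbf{I}_{p}\right)=\ell^{*} p\).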

 
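The trace identity \(\operatorname{tr}\left(\mathbf{H}_{p} \mathbf{V}\right)=\ell^{*} p+\operatorname{tr}\left(\mathbf{M}_{p} \mathbf{H}_{p}^{-1}\right)\) underlying (2.54) can be checked numerically. The sketch below uses arbitrary placeholder values for \(p\), \(\ell^{*}\), and random symmetric positive definite stand-ins for \(\mathbf{H}_{p}\) and \(\mathbf{M}_{p}\); none of these specific values come from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4      # hypothetical dimension of beta_p
ell = 2.5  # placeholder value for ell*

# Random symmetric positive definite stand-ins for H_p and M_p.
A = rng.standard_normal((p, p))
H = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p))
M = B @ B.T

H_inv = np.linalg.inv(H)
# Asymptotic covariance of Z_n: V = ell* H^{-1} + H^{-1} M H^{-1}.
V = ell * H_inv + H_inv @ M @ H_inv

lhs = np.trace(H @ V)                # tr(H_p V)
rhs = ell * p + np.trace(M @ H_inv)  # ell* p + tr(M_p H_p^{-1})
print(np.isclose(lhs, rhs))  # True
```

Multiplying \(\mathbf{V}\) by \(\mathbf{H}_{p}\) cancels one \(\mathbf{H}_{p}^{-1}\) factor in each term, which is exactly what the two traces compare.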

Related Book:

Dirk P. Kroese, Thomas Taimre, Radislav Vaisman, Zdravko Botev. Data Science and Machine Learning: Mathematical and Statistical Methods, 1st Edition. ISBN: 9781118710852.
