
Question:

Let \(Y \in\{0,1\}\) be a response variable and let \(h(\boldsymbol{x})\) be the regression function

\[ h(\boldsymbol{x}):=\mathbb{E}[Y \mid \boldsymbol{X}=\boldsymbol{x}]=\mathbb{P}[Y=1 \mid \boldsymbol{X}=\boldsymbol{x}] \]

Recall that the Bayes classifier is \(g^{*}(\boldsymbol{x})=1\{h(\boldsymbol{x})>1 / 2\}\). Let \(g: \mathbb{R} \rightarrow\{0,1\}\) be any other classifier function. Below, we denote all probabilities and expectations conditional on \(\boldsymbol{X}=\boldsymbol{x}\) as \(\mathbb{P}_{x}[\cdot]\) and \(\mathbb{E}_{x}[\cdot]\).

(a) Show that

\[ \mathbb{P}_{x}[g(\boldsymbol{x}) \neq Y]=\overbrace{\mathbb{P}_{x}\left[g^{*}(\boldsymbol{x}) \neq Y\right]}^{\text {irreducible error }}+|2 h(\boldsymbol{x})-1|\, \mathbb{1}\left\{g(\boldsymbol{x}) \neq g^{*}(\boldsymbol{x})\right\} . \]

Hence, deduce that for a learner \(g_{\mathscr{T}}\) constructed from a training set \(\mathscr{T}\), we have

\[ \mathbb{E}\left[\mathbb{P}_{x}\left[g_{\mathscr{T}}(\boldsymbol{x}) \neq Y \mid \mathscr{T}\right]\right]=\mathbb{P}_{x}\left[g^{*}(\boldsymbol{x}) \neq Y\right]+|2 h(\boldsymbol{x})-1|\, \mathbb{P}\left[g_{\mathscr{T}}(\boldsymbol{x}) \neq g^{*}(\boldsymbol{x})\right], \]

where the first expectation and last probability operations are with respect to \(\mathscr{T}\).
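One way to verify the decomposition in (a) is the standard pointwise calculation; the following sketch (not from the source) conditions on \(\boldsymbol{X}=\boldsymbol{x}\) and writes the error of an arbitrary classifier \(g\) in terms of \(h\):

```latex
% Sketch for part (a): conditional error of a generic classifier g.
\begin{align*}
\mathbb{P}_{x}[g(\boldsymbol{x}) \neq Y]
  &= \mathbb{E}_{x}\!\left[\mathbb{1}\{g(\boldsymbol{x})=1\}(1-Y)
     + \mathbb{1}\{g(\boldsymbol{x})=0\}\,Y\right] \\
  &= \mathbb{1}\{g(\boldsymbol{x})=1\}\,(1-h(\boldsymbol{x}))
     + \mathbb{1}\{g(\boldsymbol{x})=0\}\,h(\boldsymbol{x}).
\end{align*}
```

Subtracting the same identity with \(g^{*}\) in place of \(g\), the difference is zero when \(g(\boldsymbol{x})=g^{*}(\boldsymbol{x})\) and equals \(|2h(\boldsymbol{x})-1|\) otherwise, because \(g^{*}\) always selects the larger of \(h(\boldsymbol{x})\) and \(1-h(\boldsymbol{x})\). Substituting \(g_{\mathscr{T}}\) for \(g\) and taking expectations over \(\mathscr{T}\) then yields the second display.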

(b) Using the previous result, deduce that for the unconditional error (that is, we no longer condition on \(\boldsymbol{X}=\boldsymbol{x}\) ), we have

\[ \mathbb{P}\left[g^{*}(\boldsymbol{X}) \neq Y\right] \leqslant \mathbb{P}\left[g_{\mathscr{T}}(\boldsymbol{X}) \neq Y\right] . \]
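For part (b), one route (a sketch, not necessarily the book's worked solution) is to take expectations of the identity from (a) over \(\boldsymbol{X}\) and drop the second term, which is nonnegative:

```latex
% Sketch for part (b): tower property over X, then drop a nonnegative term.
\[
\mathbb{P}\left[g_{\mathscr{T}}(\boldsymbol{X}) \neq Y\right]
 = \mathbb{P}\left[g^{*}(\boldsymbol{X}) \neq Y\right]
 + \mathbb{E}\!\left[\,|2 h(\boldsymbol{X})-1|\;
     \mathbb{P}\left[g_{\mathscr{T}}(\boldsymbol{X}) \neq g^{*}(\boldsymbol{X})
     \mid \boldsymbol{X}\right]\right]
 \geqslant \mathbb{P}\left[g^{*}(\boldsymbol{X}) \neq Y\right].
\]
```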

(c) Show that, if \(g_{\mathscr{T}}:=\mathbb{1}\left\{h_{\mathscr{T}}(\boldsymbol{x})>1 / 2\right\}\) is a classifier function such that as \(n \rightarrow \infty\)
\[ h_{\mathscr{T}} \stackrel{d}{\rightarrow} Z \sim \mathscr{N}\left(\mu(\boldsymbol{x}), \sigma^{2}(\boldsymbol{x})\right) \]
for some mean and variance functions \(\mu(\boldsymbol{x})\) and \(\sigma^{2}(\boldsymbol{x})\), respectively, then \[ \mathbb{P}_{x}\left[g_{\mathscr{T}}(\boldsymbol{x}) \neq g^{*}(\boldsymbol{x})\right] \rightarrow \Phi\left(\frac{\operatorname{sign}(1-2 h(\boldsymbol{x}))(2 \mu(\boldsymbol{x})-1)}{2 \sigma(\boldsymbol{x})}\right) \]
where \(\Phi\) is the cdf of a standard normal random variable.
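The limit in (c) can be sanity-checked numerically, assuming the limiting normal distribution holds exactly. The values \(h(\boldsymbol{x})=0.7\), \(\mu(\boldsymbol{x})=0.6\), \(\sigma(\boldsymbol{x})=0.2\) below are arbitrary illustrative choices at a fixed point \(\boldsymbol{x}\), not taken from the text:

```python
# Monte Carlo check of the part (c) limit, under the assumption that
# h_T(x) is exactly N(mu, sigma^2). Parameter values are hypothetical.
import math
import random


def Phi(t):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))


h, mu, sigma = 0.7, 0.6, 0.2  # illustrative values of h(x), mu(x), sigma(x)

# Closed-form limit from part (c).
sign = 1.0 if 1 - 2 * h > 0 else -1.0
limit = Phi(sign * (2 * mu - 1) / (2 * sigma))

# Simulate h_T(x) ~ N(mu, sigma^2) and compare the induced classifier
# 1{h_T(x) > 1/2} with the Bayes classifier 1{h(x) > 1/2}.
random.seed(0)
g_star = 1 if h > 0.5 else 0
n = 200_000
mismatches = sum(
    (1 if random.gauss(mu, sigma) > 0.5 else 0) != g_star for _ in range(n)
)
estimate = mismatches / n

print(round(limit, 4))  # Phi(-0.5) ~ 0.3085
print(abs(estimate - limit) < 0.01)
```

With \(h>1/2\) the Bayes classifier predicts 1, so a mismatch occurs exactly when \(h_{\mathscr{T}}(\boldsymbol{x}) \leqslant 1/2\), i.e. with limiting probability \(\Phi((1-2\mu)/(2\sigma))\), which is what the sign factor encodes.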


Source: *Data Science and Machine Learning: Mathematical and Statistical Methods*, 1st Edition, by Dirk P. Kroese, Thomas Taimre, Radislav Vaisman, and Zdravko Botev. ISBN: 9781118710852.
