Question:

a. Begin with one population and assume that \(y_{1}, \ldots, y_{n}\) is an i.i.d. sample from a Bernoulli distribution with mean \(\pi\). Show that the maximum likelihood estimator of \(\pi\) is \(\bar{y}\).
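As a quick numerical sanity check (not a proof), the Bernoulli log-likelihood can be evaluated on a small sample to confirm it peaks at the sample mean; the data below are made up for illustration.

```python
import numpy as np

# Hypothetical sample of 0/1 outcomes (made up for illustration).
y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])
ybar = y.mean()

def loglik(p, y):
    # Bernoulli log-likelihood: sum of y*log(p) + (1-y)*log(1-p).
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# The log-likelihood over a grid of candidate p's is maximized
# at (or next to) the sample mean ybar.
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmax([loglik(p, y) for p in grid])]
assert abs(best - ybar) < 0.01
```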

b. Now consider two populations. Suppose that \(y_{1}, \ldots, y_{n_{1}}\) is an i.i.d. sample from a Bernoulli distribution with mean \(\pi_{1}\) and that \(y_{n_{1}+1}, \ldots, y_{n_{1}+n_{2}}\) is an i.i.d. sample from a Bernoulli distribution with mean \(\pi_{2}\), where the samples are independent of one another.

b(i). Show that the maximum likelihood estimator of \(\pi_{2}-\pi_{1}\) is \(\bar{y}_{2}-\bar{y}_{1}\).

b(ii). Determine the variance of the estimator in part b(i).
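Because the two samples are independent, the variance of the difference is the sum of the variances of the two sample means. A short Monte Carlo sketch (with made-up values of \(\pi_1, \pi_2, n_1, n_2\)) can check the resulting formula \(\operatorname{Var}(\bar{y}_{2}-\bar{y}_{1})=\pi_{1}(1-\pi_{1})/n_{1}+\pi_{2}(1-\pi_{2})/n_{2}\):

```python
import numpy as np

# Made-up parameters for a Monte Carlo check of
# Var(ybar2 - ybar1) = pi1*(1-pi1)/n1 + pi2*(1-pi2)/n2.
rng = np.random.default_rng(0)
pi1, pi2, n1, n2 = 0.3, 0.6, 50, 80
reps = 200_000

# Each replicate draws both sample means and takes their difference.
diffs = (rng.binomial(n2, pi2, reps) / n2
         - rng.binomial(n1, pi1, reps) / n1)
theory = pi1 * (1 - pi1) / n1 + pi2 * (1 - pi2) / n2
assert abs(diffs.var() - theory) / theory < 0.02
```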

c. Now express the two population problem in a regression context using one explanatory variable. Specifically, suppose that \(x_{i}\) only takes on the values of zero and one. Of the \(n\) observations, \(n_{1}\) take on the value \(x=0\). These \(n_{1}\) observations have an average \(y\) value of \(\bar{y}_{1}\). The remaining \(n_{2}=n-n_{1}\) observations have value \(x=1\) and an average \(y\) value of \(\bar{y}_{2}\). Using the logit case, let \(b_{0, M L E}\) and \(b_{1, M L E}\) represent the maximum likelihood estimators of \(\beta_{0}\) and \(\beta_{1}\), respectively.

c(i). Show that the maximum likelihood estimators satisfy the equations

\[\bar{y}_{1}=\pi\left(b_{0, M L E}\right)\]

and

\[\bar{y}_{2}=\pi\left(b_{0, M L E}+b_{1, M L E}\right) .\]

c(ii). Use part c(i) to show that the maximum likelihood estimator for \(\beta_{1}\) is \(\pi^{-1}\left(\bar{y}_{2}\right)-\pi^{-1}\left(\bar{y}_{1}\right)\).
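In the logit case \(\pi(z)=1/(1+e^{-z})\), so \(\pi^{-1}\) is the logit function. A minimal numerical sketch (made-up data) checks that \(b_{0}=\pi^{-1}(\bar{y}_{1})\) and \(b_{1}=\pi^{-1}(\bar{y}_{2})-\pi^{-1}(\bar{y}_{1})\) solve the logistic score equations \(\mathbf{X}^{\prime}(\mathbf{y}-\mathbf{p})=\mathbf{0}\) for a 0/1 covariate:

```python
import numpy as np

# Sketch assuming the logit case: pi(z) = 1/(1 + exp(-z)),
# so pi^{-1} is the logit.  The data below are made up.
logit = lambda p: np.log(p / (1 - p))
expit = lambda z: 1 / (1 + np.exp(-z))

x = np.array([0] * 5 + [1] * 8)
y = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
ybar1, ybar2 = y[x == 0].mean(), y[x == 1].mean()

# Candidate MLEs implied by part c(i).
b0 = logit(ybar1)
b1 = logit(ybar2) - logit(ybar1)
p = expit(b0 + b1 * x)

# Score (gradient of the log-likelihood) should vanish at the MLE.
X = np.column_stack([np.ones_like(x), x])
score = X.T @ (y - p)
assert np.allclose(score, 0)
```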

c(iii). With the notation \(\pi_{1}=\pi\left(\beta_{0}\right)\) and \(\pi_{2}=\pi\left(\beta_{0}+\beta_{1}\right)\), confirm that the information matrix can be expressed as

\[\mathbf{I}\left(\beta_{0}, \beta_{1}\right)=n_{1} \pi_{1}\left(1-\pi_{1}\right)\left(\begin{array}{ll}1 & 0 \\0 & 0\end{array}\right)+n_{2} \pi_{2}\left(1-\pi_{2}\right)\left(\begin{array}{ll}1 & 1 \\1 & 1\end{array}\right) .\]

c(iv). Use the information matrix to determine the large sample variance of the maximum likelihood estimator for \(\beta_{1}\).
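One way to check the algebra numerically: with the shorthand \(w_{j}=n_{j}\pi_{j}(1-\pi_{j})\), inverting the \(2\times 2\) information matrix and reading off its \((2,2)\) element should give \(1/w_{1}+1/w_{2}\). A sketch with made-up \(\pi\)'s and \(n\)'s:

```python
import numpy as np

# Numeric check (made-up pi's and n's) that the (2,2) element of the
# inverse information matrix is 1/(n1*pi1*(1-pi1)) + 1/(n2*pi2*(1-pi2)).
pi1, pi2, n1, n2 = 0.3, 0.6, 50, 80
w1, w2 = n1 * pi1 * (1 - pi1), n2 * pi2 * (1 - pi2)

# Information matrix from part c(iii).
I = w1 * np.array([[1, 0], [0, 0]]) + w2 * np.array([[1, 1], [1, 1]])
var_b1 = np.linalg.inv(I)[1, 1]
assert np.isclose(var_b1, 1 / w1 + 1 / w2)
```

Algebraically, \(\mathbf{I}=\begin{pmatrix}w_1+w_2 & w_2\\ w_2 & w_2\end{pmatrix}\) has determinant \(w_1 w_2\), so the \((2,2)\) element of the inverse is \((w_1+w_2)/(w_1 w_2)=1/w_1+1/w_2\), matching the check above.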
