
Question:

This is based on Abrevaya (1997). Consider the fixed effects logit model given in (11.4) with \(T=2\). In (11.10) and (11.11) we showed that the conditional maximum likelihood estimator of \(\beta\), call it \(\widehat{\beta}_{CML}\), can be obtained by running a logit of the dependent variable \(1(\Delta y=1)\) on the independent variables \(\Delta x\) for the subsample of observations satisfying \(y_{i 1}+y_{i 2}=1\). Here \(1(\Delta y=1)\) is an indicator function taking the value one if \(\Delta y=1\). Therefore, \(\widehat{\beta}_{CML}\) maximizes the log-likelihood \[\ln L_{c}(\beta)=\sum_{i \in \vartheta}\left[1(\Delta y=1) \ln F\left(\Delta x^{\prime} \beta\right)+1(\Delta y=-1) \ln \left(1-F\left(\Delta x^{\prime} \beta\right)\right)\right]\]
where \(\vartheta=\left\{i: y_{i 1}+y_{i 2}=1\right\}\).
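
A minimal simulation sketch of this recipe, assuming numpy and statsmodels are available; the design values (N, K, beta_true) are hypothetical and chosen only for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, K = 500, 2
beta_true = np.array([1.0, -0.5])   # hypothetical true coefficients
mu = rng.normal(size=N)             # individual fixed effects mu_i
x = rng.normal(size=(N, 2, K))      # regressors for t = 1, 2

# Logistic DGP: Pr[y_it = 1] = F(x_it' beta + mu_i)
p = 1.0 / (1.0 + np.exp(-(x @ beta_true + mu[:, None])))
y = (rng.uniform(size=(N, 2)) < p).astype(int)

# Keep only the "switchers": y_i1 + y_i2 = 1 (the set vartheta)
sw = y.sum(axis=1) == 1
dy = (y[sw, 1] == 1).astype(int)    # 1(Delta y = 1), since y_i2 = 1 iff Delta y = 1 here
dx = x[sw, 1, :] - x[sw, 0, :]      # Delta x = x_i2 - x_i1

# Logit of 1(Delta y = 1) on Delta x gives beta_CML
beta_cml = sm.Logit(dy, dx).fit(disp=0).params
print(beta_cml)
```

No constant is included, matching the differenced model; since the conditional MLE is consistent, the estimate should land near beta_true in a sample of this size.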

(a) Maximize the unconditional log-likelihood for (11.4) given by \(\ln L\left(\beta, \mu_{i}\right)=\sum_{i=1}^{N} \sum_{t=1}^{2}\left[y_{i t} \ln F\left(x_{i t}^{\prime} \beta+\mu_{i}\right)+\left(1-y_{i t}\right) \ln \left(1-F\left(x_{i t}^{\prime} \beta+\mu_{i}\right)\right)\right]\)
with respect to \(\mu_{i}\) and show that
\[\widehat{\mu}_{i}= \begin{cases}-\infty & \text { if } y_{i 1}+y_{i 2}=0 \\ -\left(x_{i 1}+x_{i 2}\right)^{\prime} \beta / 2 & \text { if } y_{i 1}+y_{i 2}=1 \\ +\infty & \text { if } y_{i 1}+y_{i 2}=2\end{cases}\]
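
A sketch of the argument: with \(F\) logistic, \(\partial \ln F(z) / \partial z=1-F(z)\) and \(\partial \ln (1-F(z)) / \partial z=-F(z)\), so the first-order condition for \(\mu_{i}\) is \[\frac{\partial \ln L}{\partial \mu_{i}}=\sum_{t=1}^{2}\left[y_{i t}-F\left(x_{i t}^{\prime} \beta+\mu_{i}\right)\right]=0 .\] If \(y_{i 1}+y_{i 2}=0\) (or 2), this can only be approached as \(\mu_{i} \rightarrow-\infty\) (or \(+\infty\)). If \(y_{i 1}+y_{i 2}=1\), the condition reads \(F\left(x_{i 1}^{\prime} \beta+\mu_{i}\right)+F\left(x_{i 2}^{\prime} \beta+\mu_{i}\right)=1\); using \(1-F(z)=F(-z)\), this forces \(x_{i 2}^{\prime} \beta+\mu_{i}=-\left(x_{i 1}^{\prime} \beta+\mu_{i}\right)\), which yields \(\widehat{\mu}_{i}\) above.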

(b) Concentrate the likelihood by plugging \(\widehat{\mu}_{i}\) in the unconditional likelihood and show that \[\ln L\left(\beta, \widehat{\mu}_{i}\right)=\sum_{i \in \vartheta} 2\left[1(\Delta y=1) \ln F\left(\Delta x^{\prime} \beta / 2\right)+1(\Delta y=-1) \ln \left(1-F\left(\Delta x^{\prime} \beta / 2\right)\right)\right]\]
Use the symmetry of \(F\) and the fact that \[1(\Delta y=1)=y_{i 2}=1-y_{i 1} \quad \text { and } \quad 1(\Delta y=-1)=y_{i 1}=1-y_{i 2} \quad \text { for } i \in \vartheta .\]
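
The substitution that drives this result: for \(i \in \vartheta\), plugging in \(\widehat{\mu}_{i}=-\left(x_{i 1}+x_{i 2}\right)^{\prime} \beta / 2\) gives \[x_{i 1}^{\prime} \beta+\widehat{\mu}_{i}=-\Delta x^{\prime} \beta / 2, \qquad x_{i 2}^{\prime} \beta+\widehat{\mu}_{i}=\Delta x^{\prime} \beta / 2, \qquad \text { where } \Delta x=x_{i 2}-x_{i 1} .\] Hence, when \(\Delta y=1\) (so \(y_{i 1}=0, y_{i 2}=1\)), the two period contributions are \(\ln \left(1-F\left(-\Delta x^{\prime} \beta / 2\right)\right)+\ln F\left(\Delta x^{\prime} \beta / 2\right)=2 \ln F\left(\Delta x^{\prime} \beta / 2\right)\) by the symmetry \(1-F(-z)=F(z)\), and symmetrically for \(\Delta y=-1\). Observations with \(y_{i 1}+y_{i 2}\) equal to 0 or 2 contribute zero at \(\widehat{\mu}_{i}=\mp \infty\).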

(c) Conclude that \(\ln L\left(\beta, \widehat{\mu}_{i}\right)=2 \ln L_{c}(\beta / 2)\). This shows that a scale-adjusted maximum likelihood estimator is equivalent to the conditional maximum likelihood estimator, i.e., \(\widehat{\beta}_{ML}=2 \widehat{\beta}_{CML}\). Whether a similar result holds for \(T>2\) remains an open question.
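
To see the equivalence of the maximizers, reparametrize with \(\gamma=\beta / 2\): maximizing \(2 \ln L_{c}(\beta / 2)\) over \(\beta\) is the same problem as maximizing \(\ln L_{c}(\gamma)\) over \(\gamma\), so \[\widehat{\beta}_{ML} / 2=\widehat{\beta}_{CML}, \quad \text { i.e., } \quad \widehat{\beta}_{ML}=2 \widehat{\beta}_{CML} .\]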

For reference, equations (11.4), (11.10), and (11.11) are:

\[\begin{equation*}
\operatorname{Pr}\left[y_{i t}=1\right]=\operatorname{Pr}\left[y_{i t}^{*}>0\right]=\operatorname{Pr}\left[v_{i t}>-x_{i t}^{\prime} \beta-\mu_{i}\right]=F\left(x_{i t}^{\prime} \beta+\mu_{i}\right) \tag{11.4}
\end{equation*}\]

\[\begin{align*}
\operatorname{Pr}\left[y_{i 1}=1, y_{i 2}=0 \mid y_{i 1}+y_{i 2}=1\right] & =\frac{\operatorname{Pr}\left[y_{i 1}=1, y_{i 2}=0\right]}{\operatorname{Pr}\left[y_{i 1}+y_{i 2}=1\right]}  \tag{11.10}\\
& =\frac{e^{\mu_{i}+x_{i 1}^{\prime} \beta}}{e^{\mu_{i}+x_{i 1}^{\prime} \beta}+e^{\mu_{i}+x_{i 2}^{\prime} \beta}}=\frac{e^{x_{i 1}^{\prime} \beta}}{e^{x_{i 1}^{\prime} \beta}+e^{x_{i 2}^{\prime} \beta}}=\frac{1}{1+e^{\left(x_{i 2}-x_{i 1}\right)^{\prime} \beta}}
\end{align*}\]

\[\begin{equation*}
\operatorname{Pr}\left[y_{i 1}=0, y_{i 2}=1 \mid y_{i 1}+y_{i 2}=1\right]=\frac{e^{x_{i 2}^{\prime} \beta}}{e^{x_{i 1}^{\prime} \beta}+e^{x_{i 2}^{\prime} \beta}}=\frac{e^{\left(x_{i 2}-x_{i 1}\right)^{\prime} \beta}}{1+e^{\left(x_{i 2}-x_{i 1}\right)^{\prime} \beta}} \tag{11.11}
\end{equation*}\]
