# Get questions and answers for Categorical Data Analysis

Refer to Table 3.10. In the same survey, gender was cross-classified with party identification. Table 3.11 shows some results. Explain how to interpret all the results on this printout.

Table 3.10:

Table 3.11:

Refer to Table 3.10.

a. Using X2 and G2, test the hypothesis of independence between party identification and race. Report the P-values and interpret.

b. Partition chi-squared into components regarding the choice between Democrat and Independent and between these two combined and Republican. Interpret.

Table 3.10:
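For working problems like this, X2 and G2 can be computed directly from a two-way table. A minimal pure-Python sketch (the tables used in the test are illustrative, not Table 3.10):

```python
import math

def independence_stats(table):
    """Pearson X2 and likelihood-ratio G2 statistics for testing
    independence in a two-way contingency table, plus degrees of freedom."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    X2 = G2 = 0.0
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            mu = row_tot[i] * col_tot[j] / n  # fitted value under independence
            X2 += (nij - mu) ** 2 / mu
            if nij > 0:
                G2 += 2 * nij * math.log(nij / mu)
    df = (len(row_tot) - 1) * (len(col_tot) - 1)
    return X2, G2, df
```

Each statistic is referred to a chi-squared distribution with the returned df; for part (b), G2 values computed on the partitioned subtables sum to the overall G2, which is what makes the partitioning argument work.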

For testing independence, show that X2 ≤ n min(I – 1, J – 1). Hence V2 = X2 / [n min(I – 1, J – 1)] falls between 0 and 1 (Cramér 1946). For 2 × 2 tables, X2 / n is often called phi-squared; it equals Goodman and Kruskal’s tau. Other measures based on X2 include the contingency coefficient [X2/(X2 + n)]1/2 (Pearson 1904).
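A small sketch of Cramér’s V2 (illustrative Python, with X2 computed inline; the test tables are hypothetical):

```python
def cramers_v2(table):
    """V2 = X2 / [n min(I - 1, J - 1)], which falls between 0 and 1."""
    n = sum(sum(row) for row in table)
    rt = [sum(row) for row in table]
    ct = [sum(col) for col in zip(*table)]
    x2 = sum((nij - rt[i] * ct[j] / n) ** 2 / (rt[i] * ct[j] / n)
             for i, row in enumerate(table) for j, nij in enumerate(row))
    return x2 / (n * min(len(rt) - 1, len(ct) - 1))
```

For a 2 × 2 table, min(I – 1, J – 1) = 1, so V2 reduces to phi-squared, X2/n.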

For counts {ni}, the power divergence statistic for testing goodness of fit is 2/[λ(λ + 1)] ∑i ni[(ni/µ̂i)λ – 1], –∞ < λ < ∞ (Cressie and Read 1984).

a. For λ = 1, show that this equals X2.

b. As λ → 0, show that it converges to G2.

c. As λ → –1, show that it converges to 2∑µ̂i log(µ̂i/ni), the minimum discrimination information statistic (Gokhale and Kullback 1978).

d. For λ = –1/2, show that it equals 4∑(√ni – √µ̂i)2, the Freeman–Tukey statistic (Freeman and Tukey 1950).
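The limits in parts (a) and (b) are easy to verify numerically. A sketch of the power divergence statistic in its Cressie–Read form (the counts in the test are illustrative, not from the text):

```python
import math

def power_divergence(counts, fitted, lam):
    """Power divergence statistic 2/[lam(lam+1)] * sum n_i[(n_i/mu_i)^lam - 1]."""
    return (2.0 / (lam * (lam + 1))) * sum(
        ni * ((ni / mi) ** lam - 1) for ni, mi in zip(counts, fitted))
```

At λ = 1 the value matches Pearson’s X2 exactly; at λ near 0 it is numerically close to G2.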

Use a partitioning argument to explain why G2 for testing independence cannot increase after combining two rows (or two columns) of a contingency table.

Assume independence, and let pij = nij/n and π̂ij = pi+ p+j.

a. Show that pij and π̂ij are unbiased for πij = πi+ π+j.

b. Show that var(pij) = πi+ π+j(1 – πi+ π+j)/n.

c. Using E(pi+ p+j)2 = E(p2i+) E(p2+j) and E(p2i+) = var (pi+) + [E(pi+)]2, show that

When a test statistic has a continuous distribution, the P-value has a null uniform distribution, P(P-value ≤ α) = α for 0 < α < 1. For Fisher’s exact test, explain why under the null, P(P-value ≤ α) ≤ α for 0 < α < 1.

A contingency table for two independent binomial samples has counts (3, 0 / 0, 3) by row. For H0: π1 = π2 and Hα: π1 > π2, show that the P-value equals 1/64 for the exact unconditional test and 1/20 for Fisher’s exact test.
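Both P-values can be reproduced by brute force; a sketch in Python (a hypergeometric sum for Fisher’s test, and a grid search over the common π for the unconditional test):

```python
from math import comb

def fisher_one_sided(n11, n12, n21, n22):
    """P(N11 >= n11) under the hypergeometric distribution that fixes
    both margins (Fisher's exact test for Ha: pi1 > pi2)."""
    r1, r2, c1 = n11 + n12, n21 + n22, n11 + n21
    return sum(comb(r1, t) * comb(r2, c1 - t)
               for t in range(n11, min(r1, c1) + 1)) / comb(r1 + r2, c1)

# Unconditional exact test: sup over pi of P(3 successes in row 1 and
# 0 successes in row 2) = sup pi^3 (1 - pi)^3, attained at pi = 1/2.
p_unconditional = max(p ** 3 * (1 - p) ** 3
                      for p in (i / 1000 for i in range(1, 1000)))
```

For the (3, 0 / 0, 3) table, the conditional sum gives 1/20 and the unconditional supremum gives (1/2)6 = 1/64, matching the values in the problem.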

Refer to Problem 3.42 and exact tests using X2 with Hα: π1 ≠ π2. Explain why the unconditional P-value, evaluated at π = 0.5, is related to Fisher conditional P-values for various tables by

Thus, the unconditional P-value of 1/32 is a weighted average of the Fisher P-value for the observed column margins and P-values of 0 corresponding to the impossibility of getting results as extreme as observed if other margins had occurred, i.e.

Data from Prob. 3.42:

A contingency table for two independent binomial samples has counts (3, 0 / 0, 3) by row. For H0: π1 = π2 and Hα: π1 > π2, show that the P-value equals 1/64 for the exact unconditional test and 1/20 for Fisher’s exact test.

Consider exact tests of independence, given the marginals, for the I × I table having nii = 1 for i = 1, ..., I, and nij = 0 otherwise.

Show that (a) tests that order tables by their probabilities, X2, or G2 have P-value = 1.0, and (b) the one-sided test that orders tables by an ordinal statistic such as r or C – D has P-value = (1/I!).

A Monte Carlo scheme randomly samples M separate I × J tables having observed margins to approximate Po = P(X2 ≥ X2o) for an exact test. Let P̂ be the sample proportion of the M tables with X2 ≥ X2o. Show that P(|P̂ – Po| ≤ B) = 1 – α requires that M ≈ z2α/2 Po(1–Po)/B2.
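The required M follows from the normal approximation to the binomial proportion P̂. A short stdlib-only sketch:

```python
from statistics import NormalDist

def mc_sample_size(p0, bound, alpha=0.05):
    """Number of Monte Carlo tables M so that P(|Phat - P0| <= bound)
    is about 1 - alpha: M = z_{alpha/2}^2 * P0 * (1 - P0) / bound^2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z ** 2 * p0 * (1 - p0) / bound ** 2
```

For instance, estimating a P-value near 0.05 to within 0.01 with 95% confidence needs roughly 1825 sampled tables.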

Show that the conditional ML estimate of θ satisfies n11 = E(n11) for distribution (3.18).

Using graphs or tables, explain what is meant by no interaction in modeling response Y and explanatory X and Z when:

a. All variables are continuous (multiple regression).

b. Y and X are continuous, Z is categorical (analysis of covariance).

c. Y is continuous, X and Z are categorical (two-way ANOVA).

d. Y is binary, X and Z are categorical (logit model).

Let Yi be bin(ni, πi) at xi, and let pi = yi/ni. For binomial GLMs with logit link:

a. For pi near πi, show that

b. Show that zi(t) in (5.23) is a linearized version of the ith sample logit, evaluated at the approximation πi(t) for π̂i.

For an I × 2 contingency table, consider logit model (5.4).

Given {πi > 0}, show how to find {βi} satisfying βI = 0.

For the population of subjects having Y = j, X has a N(µj, σ2) distribution, j = 0, 1.

Using Bayes’ theorem, show that P(Y = 1|x) satisfies the logistic regression model with β = (µ1 – µ0)/σ2.
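A numerical check of the result (with illustrative parameter values): the log odds of Y = 1 given x, computed by Bayes’ theorem from the two normal densities, is linear in x with slope (µ1 – µ0)/σ2.

```python
import math

def posterior_prob(x, mu0, mu1, sigma, rho):
    """P(Y = 1 | x) via Bayes' theorem, with X | Y=j ~ N(mu_j, sigma^2)
    and prior P(Y = 1) = rho; the common normalizing constants cancel."""
    f0 = math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
    f1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    return rho * f1 / (rho * f1 + (1 - rho) * f0)

def logit(p):
    return math.log(p / (1 - p))
```

A one-unit change in x should change the logit of the posterior probability by exactly (µ1 – µ0)/σ2, whatever the prior ρ.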

Table 5.19 refers to a sample of subjects randomly selected for an Italian study on the relation between income and whether one possesses a travel credit card. At each level of annual income in millions of lira, the table indicates the number of subjects sampled and the number possessing at least one travel credit card. Analyze these data.

Table 5.19:

According to the Independent newspaper (London, Mar. 8, 1994), the Metropolitan Police in London reported 30,475 people as missing in the year ending March 1993. For those of age 13 or less, 33 of 3271 missing males and 38 of 2486 missing females were still missing a year later. For ages 14 to 18, the values were 63 of 7256 males and 108 of 8877 females; for ages 19 and above, the values were 157 of 5065 males and 159 of 3520 females. Analyze and interpret.

For a study using logistic regression to determine characteristics associated with remission in cancer patients, Table 5.10 shows the most important explanatory variable, a labeling index (LI). This index measures proliferative activity of cells after a patient receives an injection of tritiated thymidine, representing the percentage of cells that are “labeled.” The response Y measured whether the patient achieved remission (1 = yes). Software reports Table 5.11 for a logistic regression model using LI to predict the probability of remission.

Table 5.10:

Table 5.11:

a. Show how software obtained π̂ = 0.068 when LI = 8.

b. Show that π̂ = 0.5 when LI = 26.0.

c. Show that the rate of change in π̂ is 0.009 when LI = 8 and 0.036 when LI = 26.

d. The lower quartile and upper quartile for LI are 14 and 28. Show that π̂ increases by 0.42, from 0.15 to 0.57, between those values.

e. For a unit change in LI, show that the estimated odds of remission multiply by 1.16.

f. Explain how to obtain the confidence interval reported for the odds ratio. Interpret.

g. Construct a Wald test for the effect. Interpret.

h. Conduct a likelihood-ratio test for the effect, showing how to construct the test statistic using the –2 log L values reported.
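The reported numbers can be checked using only facts stated in the problem itself: part (b) gives π̂ = 0.5 at LI = 26, so α̂ = –26β̂, and part (e) gives β̂ ≈ log 1.16. Because 1.16 is a rounded odds ratio, the checks below are approximate (a sketch, not the software’s exact fit):

```python
import math

beta = math.log(1.16)   # part (e): odds multiply by 1.16 per unit of LI
alpha = -26.0 * beta    # part (b): pi-hat = 0.5 at LI = 26

def pi_hat(li):
    return 1 / (1 + math.exp(-(alpha + beta * li)))

p8 = pi_hat(8)                   # ~ 0.068, part (a)
rate8 = beta * p8 * (1 - p8)     # ~ 0.009, part (c)
rate26 = beta * 0.25             # ~ 0.036, part (c): beta * 0.5 * (1 - 0.5)
jump = pi_hat(28) - pi_hat(14)   # ~ 0.42, part (d)
```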

Consider the class of binary models (4.8) and (4.9). Suppose that the standard cdf Φ corresponds to a probability density function ϕ that is symmetric around 0.

a. Show that x at which π(x) = 0.5 is x = – α/β.

b. Show that the rate of change in π(x) when π(x) = 0.5 is βϕ(0).

Show this is 0.25β for the logit link and β/√(2π) for the probit link.

c. Show that the probit regression curve has the shape of a normal cdf with mean – α/β and standard deviation 1/ |β|.
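Part (b) can be confirmed by a finite-difference check: at x = –α/β the slope of π(x) is βϕ(0), giving 0.25β for the logit link and β/√(2π) for the probit link. An illustrative sketch with α = 0 and β = 2 (both values arbitrary):

```python
import math
from statistics import NormalDist

beta, h = 2.0, 1e-6  # slope is checked at x = -alpha/beta = 0 (alpha = 0 here)

def pi_logit(x):
    return 1 / (1 + math.exp(-beta * x))

def pi_probit(x):
    return NormalDist().cdf(beta * x)

slope_logit = (pi_logit(h) - pi_logit(-h)) / (2 * h)     # ~ 0.25 * beta
slope_probit = (pi_probit(h) - pi_probit(-h)) / (2 * h)  # ~ beta / sqrt(2 pi)
```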

Let yij be observation j of a count variable for group i, i = 1, ..., I, j = 1, ..., ni. Suppose that {Yij} are independent Poisson with E(Yij) = µi.

a. Show that the ML estimate of µi is µ̂i = y̅i = ∑j yij/ni.

b. Simplify the expression for the deviance for this model. [For testing this model, it follows from Fisher that the deviance and the Pearson statistic ∑ij (yij – y̅i)2/y̅i have approximate chi-squared distributions with df = ∑i(ni – 1). For a single group, Cochran (1954) referred to ∑j(y1j – y̅1)2/y̅1 as the variance test for the fit of a Poisson distribution, since it compares the sample variance to the estimated Poisson variance y̅1.]
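A sketch of both statistics, applied here to the treatment A wafer counts from Problem 4.6 (sample mean 5.0), for which the variance test gives ∑(y – y̅)2/y̅ = 38/5 = 7.6 on df = 9:

```python
import math

def poisson_fit_stats(groups):
    """Deviance G2 and Pearson (variance-test) X2 for the model with a
    constant Poisson mean within each group; the term sum(y - ybar) in
    the deviance vanishes because ybar is the group mean."""
    G2 = X2 = df = 0
    for ys in groups:
        ybar = sum(ys) / len(ys)
        G2 += 2 * sum(y * math.log(y / ybar) for y in ys if y > 0)
        X2 += sum((y - ybar) ** 2 / ybar for y in ys)
        df += len(ys) - 1
    return G2, X2, df
```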

A GLM has parameter β with sufficient statistic S. A goodness-of-fit test statistic T has observed value to. If β were known, a P-value is P = P(T ≥ to; β). Explain why P(T ≥ to | S) is the uniform minimum variance unbiased estimator of P.

A binomial GLM πi = Φ(∑j βj xij) with arbitrary inverse link function Φ assumes that niYi has a bin(ni, πi) distribution. Find wi in (4.27) and hence the estimated covariance matrix of β̂. For logistic regression, show that wi = ni πi(1 – πi).

Let Yi be a bin(ni, πi) variate for group i, i = 1, ..., N, with {Yi} independent. Consider the model that π1 = ··· = πN. Denote that common value by π. For observations {yi}, show that π̂ = (∑yi)/(∑ni).

When all ni = 1, for testing this model’s fit in the N × 2 table, show that X2 = n. Thus, goodness-of-fit statistics can be completely uninformative for ungrouped data.

For the logistic regression model with β > 0, show that (a) π(x) is monotone increasing in x, and (b) the curve for π(x) is the cdf of a logistic distribution having mean –α/β and standard deviation π/(|β|√3).

For binary data, define a GLM using the log link. Show that effects refer to the relative risk. Why do you think this link is not often used?

Describe the purpose of the link function of a GLM. What is the identity link? Explain why it is not often used with binomial or Poisson responses.

Refer to Problem 4.6. The wafers are also classified by thickness of silicon coating (z = 0, low; z = 1, high). The first five imperfection counts reported for each treatment refer to z = 0 and the last five refer to z = 1. Analyze these data.

Data from Prob. 4.6:

An experiment analyzes imperfection rates for two processes used to fabricate silicon wafers for computer chips. For treatment A applied to 10 wafers, the numbers of imperfections are 8, 7, 6, 6, 3, 4, 7, 2, 3, 4.

Treatment B applied to 10 other wafers has 9, 9, 8, 14, 8, 13, 11, 5, 7, 6 imperfections. Treat the counts as independent Poisson variates having means µA and µB.

Table 4.8 shows the free-throw shooting, by game, of Shaq O’Neal of the Los Angeles Lakers during the 2000 NBA (basketball) playoffs. Commentators remarked that his shooting varied dramatically from game to game. In game i, suppose that Yi = number of free throws made out of ni attempts is a bin(ni, πi) variate and the {Yi} are independent.

a. Fit the model, πi = α, and find and interpret α̂ and its standard error. Does the model appear to fit adequately?

b. Adjust the standard error for overdispersion. Using the original SE and its correction, find and compare 95% confidence intervals for α. Interpret.

Table 4.8:

For the negative binomial model fitted to the crab satellite counts with log link and width predictor, α̂ = –4.05, β̂ = 0.192 (SE = 0.048), k̂–1 = 1.106 (SE = 0.197). Interpret. Why is the SE for β̂ so different from SE = 0.020 for the corresponding Poisson GLM in Section 4.3.2? Which is more appropriate? Why?

In Section 4.3.2, refer to the Poisson model with identity link. The fit using least squares is µ̂ = –10.42 + 0.51x (SE = 0.11). Explain why the parameter estimates differ and why the SE values are so different.

Refer to Table 4.3.

a. Fit a Poisson loglinear model using both W = weight and C = color to predict Y = number of satellites. Assigning dummy variables, treat C as a nominal factor. Interpret parameter estimates.

b. Estimate E(Y) for female crabs of average weight (2.44 kg) that are (i) medium light, and (ii) dark.

c. Test whether color is needed in the model.

d. The estimated color effects are monotone across the four categories. Fit a simpler model that treats C as quantitative and assumes a linear effect. Interpret its color effect and repeat the analyses of parts (b) and (c). Compare the fit to the model in part (a). Interpret.

Table 4.3:

Refer to Problem 4.7. Using the identity link with x = weight, µ̂ = –2.60 + 2.264x, where β̂ = 2.264 has SE = 0.228. Repeat parts (a) through (c).

Data from Prob. 4.7:

For Table 4.3, Table 4.7 shows SAS output for a Poisson loglinear model fit using X = weight and Y = number of satellites.

a. Estimate E(Y) for female crabs of average weight, 2.44 kg.

b. Use β̂ to describe the weight effect. Show how to construct the reported confidence interval.

c. Construct a Wald test that Y is independent of X. Interpret.

Table 4.3:

Table 4.7:

For Table 4.3, Table 4.7 shows SAS output for a Poisson loglinear model fit using X = weight and Y = number of satellites.

a. Estimate E(Y) for female crabs of average weight, 2.44 kg.

b. Use β̂ to describe the weight effect. Show how to construct the reported confidence interval.

c. Construct a Wald test that Y is independent of X. Interpret.

d. Can you conduct a likelihood-ratio test of this hypothesis? If not, what else do you need?

e. Is there evidence of overdispersion? If necessary, adjust standard errors and interpret.

Table 4.3:

Table 4.7:

Refer to Problem 4.6. The sample mean and variance are 5.0 and 4.2 for treatment A and 9.0 and 8.4 for treatment B.

a. Is there evidence of overdispersion for the Poisson model having a dummy variable for treatment? Explain.

b. Fit the negative binomial loglinear model. Note that the estimated dispersion parameter is 0 and that estimates of treatment means and standard errors are the same as with the Poisson loglinear GLM.

c. For the overall sample of 20 observations, the sample mean and variance are 7.0 and 10.2. Fit the loglinear model having only an intercept term under Poisson and negative binomial assumptions. Compare results, and compare confidence intervals for the overall mean response. Why do they differ?

Data from Prob. 4.6:

An experiment analyzes imperfection rates for two processes used to fabricate silicon wafers for computer chips. For treatment A applied to 10 wafers, the numbers of imperfections are 8, 7, 6, 6, 3, 4, 7, 2, 3, 4.

Treatment B applied to 10 other wafers has 9, 9, 8, 14, 8, 13, 11, 5, 7, 6 imperfections. Treat the counts as independent Poisson variates having means µA and µB.

For games in baseball’s National League during nine decades, Table 4.6 shows the percentage of times that the starting pitcher pitched a complete game.

a. Treating the number of games as the same in each decade, the ML fit of the linear probability model is π̂ = 0.7578 – 0.0694x, where x = decade (x = 1, 2, ..., 9). Interpret 0.7578 and –0.0694.

b. Substituting x = 10, 11, 12, predict the percentages of complete games for the next three decades. Are these predictions plausible? Why?

c. The ML fit with logistic regression is π̂ = exp(1.148 – 0.315x)/[1 + exp(1.148 – 0.315x)]. Obtain π̂ for x = 10, 11, 12. Are these more plausible?

Table 4.6:

In the 2000 U.S. presidential election, Palm Beach County in Florida was the focus of unusual voting patterns (including a large number of illegal double votes) apparently caused by a confusing “butterfly ballot.” Many voters claimed that they voted mistakenly for the Reform Party candidate, Pat Buchanan, when they intended to vote for Al Gore. Figure 4.8 shows the total number of votes for Buchanan plotted against the number of votes for the Reform Party candidate in 1996 (Ross Perot), by county in Florida.

a. In county i, let πi denote the proportion of the vote for Buchanan and let xi denote the proportion of the vote for Perot in 1996. For the linear probability model fitted to all counties except Palm Beach County, π̂i = –0.0003 + 0.0304xi. Give the value of P in the interpretation: The estimated proportion of the vote for Buchanan in 2000 was roughly P% of that for Perot in 1996.

b. For Palm Beach County, πi = 0.0079 and xi = 0.0774. Does this result appear to be an outlier? Explain.

c. For logistic regression, log[π̂i/(1 – π̂i)] = –7.164 + 12.219xi. Find π̂i in Palm Beach County. Is that county an outlier for this model?

Figure 4.8:

For Table 3.7 with scores (0, 0.5, 1.5, 4.0, 7.0) for alcohol consumption, ML fitting of the linear probability model for malformation gives the output shown.

Interpret the model fit. Use it to estimate the relative risk of malformation for alcohol consumption levels 0 and 7.0.

Table 3.7:

For Table 4.2, refit the linear probability model or the logistic regression model using the scores (a) (0, 2, 4, 6), (b) (0, 1, 2, 3), and (c) (1, 2, 3, 4). Compare β̂ for the three choices. Compare fitted values. Summarize the effect of linear transformations of scores, which preserve relative sizes of spacings between scores.

Table 4.2:

For Table 4.3, let Y = 1 if a crab has at least one satellite, and Y = 0 otherwise. Using x = weight, fit the linear probability model.

a. Use ordinary least squares. Interpret the parameter estimates. Find the estimated probability at the highest observed weight (5.20 kg). Comment.

b. Fit the logistic regression model. Show that the fitted probability at a weight of 5.20 kg equals 0.9968.

c. Fit the probit model. Find the fitted probability at 5.20 kg.

Table 4.3:

An experiment analyzes imperfection rates for two processes used to fabricate silicon wafers for computer chips. For treatment A applied to 10 wafers, the numbers of imperfections are 8, 7, 6, 6, 3, 4, 7, 2, 3, 4.

Treatment B applied to 10 other wafers has 9, 9, 8, 14, 8, 13, 11, 5, 7, 6 imperfections. Treat the counts as independent Poisson variates having means µA and µB.

a. Fit the model log µ = α + βx, where x = 1 for treatment B and x = 0 for treatment A. Show that exp(β) = µB/µA, and interpret its estimate.

b. Test H0: µA = µB with the Wald or likelihood ratio test of H0: β = 0. Interpret.

c. Construct a 95% confidence interval for µB/µA.

d. Test H0: µA = µB based on this result: If Y1 and Y2 are independent Poisson with means µ1 and µ2, then (Y1|Y1 + Y2) is binomial with n = Y1 + Y2 and π = µ1/(µ1 + µ2).
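For parts (a) through (c), the ML fit has a closed form: µ̂A and µ̂B are the sample means, β̂ = log(y̅B/y̅A), and the Wald SE of β̂ is √(1/∑yA + 1/∑yB). A sketch using the wafer data (the 1.96 multiplier gives the 95% interval):

```python
import math

def poisson_two_group(ys_a, ys_b, z=1.96):
    """ML fit of log mu = alpha + beta*x (x = 1 for treatment B):
    beta-hat = log(ybar_B / ybar_A); exponentiating the Wald interval
    for beta gives a confidence interval for mu_B / mu_A."""
    sa, sb = sum(ys_a), sum(ys_b)
    beta = math.log((sb / len(ys_b)) / (sa / len(ys_a)))
    se = math.sqrt(1 / sa + 1 / sb)
    ci = (math.exp(beta - z * se), math.exp(beta + z * se))
    return beta, se, beta / se, ci

result = poisson_two_group([8, 7, 6, 6, 3, 4, 7, 2, 3, 4],
                           [9, 9, 8, 14, 8, 13, 11, 5, 7, 6])
```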

Refer to the tea-tasting data (Table 3.8). Construct the null distributions of the ordinary P-value and the mid-P-value for Fisher’s exact test with Hα: θ > 1. Find and compare their expected values.

Table 3.8:

Suppose that Yi is Poisson with g(µi) = α + βxi, where xi = 1 for i = 1, ..., nA from group A and xi = 0 for i = nA + 1, ..., nA + nB from group B. Show that for any link function g, the likelihood equations (4.22) imply that the fitted means µ̂A and µ̂B equal the sample means.

For binary data with sample proportion yi based on ni trials, we use quasi-likelihood to fit a model with variance function equal to the binomial variance multiplied by a constant ϕ. Show that parameter estimates are the same as for the binomial GLM but that the covariance matrix is multiplied by ϕ.

A 2 × J table has ordinal response. Let Fj|i = π1|i + ··· + πj|i. When Fj|2 ≤ Fj|1 for j = 1, ..., J, the conditional distribution in row 2 is stochastically higher than the one in row 1. Consider the cumulative odds ratios

Show that log θj > 0 for all j is equivalent to row 2 being stochastically higher than row 1. Explain why row 2 is then more likely than row 1 to have observations at the high end of the ordinal scale.

For a 2 × 2 table of counts {nij} show that the odds ratio is invariant to

(a) Interchanging rows with columns,

(b) Multiplication of cell counts within rows or within columns by c ≠ 0. Show that the difference of proportions and the relative risk do not have these properties.

For a diagnostic test of a certain disease, π1 denotes the probability that the diagnosis is positive given that a subject has the disease, and π2 denotes the probability that the diagnosis is positive given that a subject does not have it. Let ρ denote the probability that a subject does have the disease.

a. Given that the diagnosis is positive, show that the probability that a subject does have the disease is π1 ρ/[π1 ρ + π2(1 – ρ)].

b. Suppose that a diagnostic test for HIV+ status has both sensitivity and specificity equal to 0.95, and ρ = 0.005. Find the probability that a subject is truly HIV+ , given that the diagnostic test is positive. To better understand this answer, find the joint probabilities relating diagnosis to actual disease status, and discuss their relative sizes.

For the geometric distribution p(y) = πy(1 – π), y = 0, 1, 2, ..., show that the tail method for constructing a confidence interval [i.e., equating P(Y ≥ y) and P(Y ≤ y) to α/2] yields [(α/2)1/y, (1 – α/2)1/(y+1)]. Show that all π between 0 and 1 – α/2 never fall above a confidence interval, and hence the actual coverage probability exceeds 1 – α/2 over this region.
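A sketch of the tail-method limits, using the facts P(Y ≥ y) = πy and P(Y ≤ y) = 1 – πy+1 for this parameterization:

```python
def geometric_tail_ci(y, alpha=0.05):
    """Tail-method confidence interval for pi after observing Y = y:
    solve pi^y = alpha/2 (lower limit) and 1 - pi^(y+1) = alpha/2
    (upper limit); at y = 0 the lower limit is taken to be 0."""
    lower = (alpha / 2) ** (1 / y) if y > 0 else 0.0
    upper = (1 - alpha / 2) ** (1 / (y + 1))
    return lower, upper
```

Note that the upper limit (1 – α/2)1/(y+1) always exceeds 1 – α/2, which is the key step in the coverage claim.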

For a sequence of independent Bernoulli trials, Y is the number of successes before the kth failure. Explain why its probability mass function is the negative binomial,

[For it, E(Y) = kπ/(1 – π) and var(Y) = kπ/(1 – π)2, so var(Y) > E(Y); the Poisson is the limit as k → ∞ and π → 0 with kπ = µ fixed.]

Treatments A and B were compared on a binary response for 40 pairs of subjects matched on relevant covariates. For each pair, treatments were assigned to the subjects randomly. Twenty pairs of subjects made the same response for each treatment. Six pairs had a success for the subject receiving A and a failure for the subject receiving B, whereas the other 14 pairs had a success for B and a failure for A. Use the Cochran–Mantel–Haenszel procedure to test independence of response and treatment.
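With pair-matched strata, the CMH statistic collapses to McNemar’s statistic on the discordant pairs; the 20 concordant pairs contribute nothing. A sketch for these counts:

```python
def cmh_matched_pairs(b, c):
    """McNemar / CMH statistic (b - c)^2 / (b + c) for matched pairs,
    where b and c count the two kinds of discordant pairs (df = 1)."""
    return (b - c) ** 2 / (b + c)
```

Here (6 – 14)2/20 = 3.2, which falls below the χ2 critical value of 3.84 for df = 1 at the 0.05 level.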

For the horseshoe crab data, fit a model using weight and width as predictors. Conduct

(a) A likelihood-ratio test of H0: β1 = β2 = 0,

(b) Separate tests for the partial effects. Why does neither test in part (b) show evidence of an effect when the test in part (a) shows strong evidence?

The horseshoe crab width values in Table 4.3 have x̅ = 26.3 and sx = 2.1. If the true relationship were similar to the fitted equation in Section 5.1.3, about how large a sample yields P(type II error) = 0.10, with α = 0.05, for testing H0: β = 0 against Hα: β > 0?

Table 4.3:

Refer to Problem 5.1. Table 6.18 shows output for fitting a probit model. Interpret the parameter estimates (a) using characteristics of the normal cdf response curve, (b) finding the estimated rate of change in the probability of remission where it equals 0.5, and (c) finding the difference between the estimated probabilities of remission at the upper and lower quartiles of the labeling index, 14 and 28.

Table 6.18:

Data Problem 5.1:

For a study using logistic regression to determine characteristics associated with remission in cancer patients, Table 5.10 shows the most important explanatory variable, a labeling index (LI). This index measures proliferative activity of cells after a patient receives an injection of tritiated thymidine, representing the percentage of cells that are “labeled.” The response Y measured whether the patient achieved remission (1 = yes). Software reports Table 5.11 for a logistic regression model using LI to predict the probability of remission.

a. Show how software obtained π̂ = 0.068 when LI = 8.

b. Show that π̂ = 0.5 when LI = 26.0.

c. Show that the rate of change in π̂ is 0.009 when LI = 8 and 0.036 when LI = 26.

Use odds ratios in Table 8.3 to illustrate the collapsibility conditions.

a. For (A, C, M), all conditional odds ratios equal 1.0. Explain why all reported marginal odds ratios equal 1.0.

b. For (AC, M), explain why (i) all conditional odds ratios are the same as the marginal odds ratios, and (ii) all µ̂ac+ = nac+.

c. For (AM, CM), explain why (i) the AC conditional odds ratios of 1.0 need not be the same as the AC marginal odds ratio, (ii) the AM and CM conditional odds ratios are the same as the marginal odds ratios, and (iii) all µ̂a+m = na+m and µ̂+cm = n+cm.

d. For (AC, AM, CM), explain why (i) no conditional odds ratios need be the same as the related marginal odds ratios, and (ii) the fitted marginal odds ratios must equal the sample marginal odds ratios.

Table 8.3:

For model (AC, AM, CM) with Table 8.3, the standardized Pearson residual in each cell equals ±0.63. Interpret, and explain why each one has the same absolute value. By contrast, model (AM, CM) has standardized Pearson residual ±3.70 in each cell where M = yes (e.g., +3.70 when A = C = yes) and ±12.80 in each cell where M = no (e.g., +12.80 when A = C = yes). Interpret.

Table 8.3:

Refer to Table 2.6. Let D = defendant’s race, V = victim’s race, and P = death penalty verdict. Fit the loglinear model (DV, DP, PV).

a. Using the fitted values, estimate and interpret the odds ratio between D and P at each level of V. Note the common odds ratio property.

b. Calculate the marginal odds ratio between D and P, (i) using the fitted values, and (ii) using the sample data. Why are they equal? Contrast the odds ratio with part (a). Explain why Simpson’s paradox occurs.

c. Fit the corresponding logit model, treating P as the response. Show the correspondence between parameter estimates and fit statistics.

Table 2.6:

Prove that the Pearson residuals for the linear logit model applied to an I × 2 contingency table satisfy X2 = ∑i e2i. Note that this holds for a binomial GLM with any link.

For a sequence of s nested models M1,..., Ms, model Ms is the most complex. Let ν denote the difference in residual df between M1 and Ms.

a. Explain why for j < k, G2(Mj | Mk) ≤ G2(Mj | Ms).

b. Assume model Mj, so that Mk also holds when k > j. For all k > j, as n → ∞, P[G2(Mj | Mk) > χ2ν(α)] ≤ α. Explain why.

Refer to Problem 2.12.

a. Fit the model with G and D main effects. Using it, estimate the AG conditional odds ratio. Compare to the marginal odds ratio, and explain why they are so different. Test its goodness of fit.

b. Fit the two models excluding department A. Again consider lack of fit, and interpret.

Refer to Problem 11.1. Suppose that we expressed the data with a 3 × 2 partial table of drug-by-response for each subject, to use a generalized CMH procedure to test marginal homogeneity. Explain why the 911 + 279 subjects who make the same response for every drug have no effect on the test.

Data from Problem 11.1:

Refer to Table 8.3. Viewing the table as matched triplets, construct the marginal distribution for each substance. Find the sample proportions of students who used marijuana, alcohol, and cigarettes. Test the hypothesis of marginal homogeneity. Interpret results.

Table 8.3:

Consider the model µi = β, i = 1, ..., n, assuming that υ(µi) = µi. Suppose that actually var(Yi) = µi2. Using the univariate version of GEE described in Section 11.4, show that u(β) = ∑i(yi – β)/β and β̂ = y̅. Show that V in (11.10) equals β/n, that the actual asymptotic variance (11.11) simplifies to β2/n, and that its consistent estimate is ∑i(yi – y̅)2/n2.

Repeat Problem 11.23 assuming that υ(µi) = σ2 when actually var(Yi) = µi.

Data from Problem 11.23:

Consider the model µi = β, i = 1, ..., n, assuming that υ(µi) = µi. Suppose that actually var(Yi) = µi2. Using the univariate version of GEE described in Section 11.4, show that u(β) = ∑i(yi – β)/β and β̂ = y̅. Show that V in (11.10) equals β/n, that the actual asymptotic variance (11.11) simplifies to β2/n, and that its consistent estimate is ∑i(yi – y̅)2/n2.

Consider the model µi = β, i = 1,..., n, for independent Poisson observations. For β̂ = y̅, show that the model-based asymptotic variance estimate is y̅/n, whereas the robust estimate of the asymptotic variance is ∑i (yi – y̅)2/n2. Which would you expect to be better (a) if the Poisson model holds, and (b) if there is severe overdispersion?
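A sketch of the two variance estimates, for illustrative counts:

```python
def poisson_mean_variances(ys):
    """Model-based (ybar/n) vs. robust sandwich (sum (y - ybar)^2 / n^2)
    variance estimates for beta-hat = ybar under the constant-mean
    Poisson model."""
    n = len(ys)
    ybar = sum(ys) / n
    model = ybar / n
    robust = sum((y - ybar) ** 2 for y in ys) / n ** 2
    return model, robust
```

When the Poisson model holds the two agree in large samples, with the model-based one less variable; under severe overdispersion only the robust one estimates the true variance.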

a. For a univariate response, how is quasi-likelihood (QL) inference different from ML inference? When are they equivalent?

b. Explain the sense in which GEE methodology is a multivariate version of QL.

d. Describe conditions under which GEE parameter estimators are consistent and conditions under which they are not. For conditions in which they are consistent, explain why.

What is wrong with this statement: “For a first-order Markov chain, Yt is independent of Yt–2.”

Suppose that loglinear model (Y0, Y1,...,YT) holds. Is this a Markov chain?

Gamblers A and B have a total of I dollars. They play games of pool repeatedly. Each game they each bet $1, and the winner takes the other’s dollar. The outcomes of the games are statistically independent, and A has probability π and B has probability 1 – π of winning any game. Play stops when one player has all the money. Let Yt denote A’s monetary total after t games.

State the transition probability matrix. (For this gambler’s ruin problem, 0 and I are absorbing states. Eventually, the chain enters one of these and stays. The other states are transient.)
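A sketch of the matrix for illustrative values of I and π:

```python
def transition_matrix(total, p_win):
    """(total+1) x (total+1) transition matrix for A's fortune Y_t:
    states 0..total, with 0 and total absorbing; from a transient
    state k, A moves to k+1 with probability p_win, to k-1 otherwise."""
    P = [[0.0] * (total + 1) for _ in range(total + 1)]
    P[0][0] = P[total][total] = 1.0
    for k in range(1, total):
        P[k][k + 1] = p_win      # A wins a game
        P[k][k - 1] = 1 - p_win  # A loses a game
    return P
```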

Refer to Table 4.8 on the free-throw shooting of Shaq O’Neal. In game i, suppose that yi = number made out of ni attempts is a bin(ni, πi) variate and {yi} are independent.

a. Fit the model, logit(πi) = α. Find and interpret π̂. Does the model appear to fit adequately?

b. Fit the model, logit(πi) = α + ui, where {ui} are independent N(0, σ2). Use α̂ and σ̂ to summarize O’Neal’s free-throw shooting.

c. Explain how the model in part (a) is a special case of that in part (b). Is there evidence that the one in part (b) fits better?

Table 4.8:

Consider the logistic-normal model (12.10) for the abortion opinion data, under the constraint σ = 0.

a. Explain why the fit is the same as an ordinary logit model treating the three responses for each subject as if they were independent responses for three separate subjects.

b. Fit the model. Interpret, and explain why {β̂t – β̂u} are quite different from those in Section 12.3.2 allowing σ > 0.

In some situations, X2 and G2 take very similar values. Explain the joint influence on this event of (a) whether the model holds, (b) whether the sample size n is large, and (c) whether the number of cells N is large.

Refer to Section 1.5.6. Using the likelihood function to obtain the information, find the approximate standard error of π̂.

For the multinomial (n, {πj}) distribution with c > 2, confidence limits for πj are the solutions of

a. Using the Bonferroni inequality, argue that these c intervals simultaneously contain all {πj} (for large samples) with probability at least 1 – α.

b. Show that the standard deviation of π̂j – π̂k is {[πj + πk – (πj – πk)2]/n}1/2. For large n, explain why the probability is at least 1 – α that the Wald confidence intervals simultaneously contain the a = c(c – 1)/2 differences {πj – πk}.

The 1988 General Social Survey compiled by the National Opinion Research Center asked: “Do you support or oppose the following measures to deal with AIDS? (1) Have the government pay all of the health care costs of AIDS patients; (2) Develop a government information program to promote safe sex practices, such as the use of condoms.” Table 8.16 summarizes opinions about health care costs (H) and the information program (I), classified also by the respondent’s gender (G).

Table 8.16:

a. Fit loglinear models (GH, GI), (GH, HI), (GI, HI), and (GH, GI, HI). Show that models that lack the HI term fit poorly.

b. For model (GH, GI, HI), show that 95% Wald confidence intervals equal (0.55, 1.10) for the GH conditional odds ratio and (0.99, 2.55) for the GI conditional odds ratio. Interpret. Is it plausible that gender has no effect on opinion for these issues?

Refer to Table 8.17 from the 1991 General Social Survey. White subjects were asked: (B) “Do you favor busing of (Negro/Black) and white school children from one school district to another?”, (P) “If your party nominated a (Negro/Black) for President, would you vote for him if he were qualified for the job?”, (D) “During the last few years, has anyone in your family brought a friend who was a (Negro/Black) home for dinner?” The response scale for each item was (yes, no, don’t know). Fit model (BD, BP, DP).

a. Analyze the modelâ€™s goodness of fit. Interpret.

b. Conduct inference for the BP conditional association using a Wald or likelihood-ratio confidence interval and test. Interpret.

Table 8.17:

Refer to Section 8.3.2. Explain why software for which parameters sum to zero across levels of each index reports λ̂11AC = λ̂22AC = 0.514 and λ̂12AC = λ̂21AC = – 0.514, with SE = 0.044 for each term.

Table 8.18 refers to automobile accident records in Florida in 1988.

a. Find a loglinear model that describes the data well. Interpret associations.

b. Treating whether killed as the response, fit an equivalent logit model. Interpret the effects.

Table 8.18:

Refer to Table 8.19. Subjects were asked their opinions about government spending on the environment (E), health (H), assistance to big cities (C), and law enforcement (L).

a. Table 8.20 shows some results, including the two-factor estimates, for the homogeneous association model. Check the fit, and interpret.

b. All estimates at category 3 of each variable equal 0. Report the estimated conditional odds ratios using the too much and too little categories for each pair of variables. Summarize the associations. Based on these results, which term(s) might you consider dropping from the model? Why?

c. Table 8.21 reports {λ̂ehEH} when parameters sum to zero within rows and within columns, and when parameters are zero in the first row and first column. Show how these yield the estimated EH conditional odds ratio for the too much and too little categories. Compare to part (b). Construct a confidence interval for that odds ratio. Interpret.

Table 8.19:

Table 8.20:

Table 8.21:

Refer to the logit model in Problem 5.24. Let A = opinion on abortion.

a. Give the symbol for the loglinear model that is equivalent to this logit model.

b. Which logit model corresponds to loglinear model (AR, AP, GRP)?

c. State the equivalent loglinear and logit models for which (i) A is jointly independent of G, R, and P; (ii) there are main effects of R on A, but A is conditionally independent of G and P, given R; (iii) there is interaction between P and R in their effects on A, and G has main effects.

Table 5.24:

Let Y denote a subject’s opinion about current laws legalizing abortion (1 = support), for gender h (h = 1, female; h = 2, male), religious affiliation i (i = 1, Protestant; i = 2, Catholic; i = 3, Jewish), and political party affiliation j (j = 1, Democrat; j = 2, Republican; j = 3, Independent). For survey data, software for fitting the model logit[P(Y = 1)] = α + βhG + βiR + βjP reports α̂ = 0.62, β̂1G = 0.08, β̂2G = –0.08, β̂1R = –0.16, β̂2R = –0.25, β̂3R = 0.41, β̂1P = 0.87, β̂2P = –1.27, β̂3P = 0.40.

For a multiway contingency table, when is a logit model more appropriate than a loglinear model? When is a loglinear model more appropriate?

The book’s Web site (www.stat.ufl.edu/ ∼aa/cda/cda.html) has a 2 × 3 × 2 × 2 table relating responses on frequency of attending religious services, political views, opinion on making birth control available to teenagers, and opinion about a man and woman having sexual relations before marriage. Analyze these data using loglinear models.

Suppose that {µij = nπij} satisfy the independence model (8.1).

a. Show that λYa – λYb = log(π+a / π+b).

b. Show that {all λYj = 0} is equivalent to π+j = 1/J for all j.

Consider the model for a 2 × 2 table. π11 = θ2, π12 = π21 = θ(1 – θ), π22 = (1 – θ)2, where θ is unknown (Problems 3.31 and 10.34).

a. Find the matrix A in (14.14) for this model.

b. Use A to obtain the asymptotic variance of θ̂. (As a check, it is simple to find it directly using the inverse of – E∂2L/∂θ2, where L is the log likelihood.) For which θ value is the variance maximized? What is the distribution of θ̂ if θ = 0 or θ = 1?

c. Find the asymptotic covariance matrix of √n π̂.

d. Find df for testing fit using X2.
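As a numerical cross-check of part (b): under this model each cell probability is a product of two independent Bernoulli(θ) outcomes, so the ML estimator is θ̂ = (2n11 + n12 + n21)/(2n), with asymptotic variance θ(1 – θ)/(2n). A simulation sketch (the values of θ, n, and the replication count are arbitrary):

```python
import random
import statistics

random.seed(2)
theta, n = 0.3, 500
probs = [theta**2, theta*(1-theta), theta*(1-theta), (1-theta)**2]

def theta_hat():
    # one multinomial(n) draw over the four cells, by inverse-cdf sampling
    counts = [0, 0, 0, 0]
    for _ in range(n):
        u, cum, idx = random.random(), 0.0, 3
        for i, p in enumerate(probs):
            cum += p
            if u < cum:
                idx = i
                break
        counts[idx] += 1
    n11, n12, n21, _ = counts
    return (2*n11 + n12 + n21) / (2*n)    # ML estimator of theta

ests = [theta_hat() for _ in range(2000)]
emp_var = statistics.pvariance(ests)
asy_var = theta * (1 - theta) / (2 * n)   # asymptotic variance from part (b)
```

The Monte Carlo variance of θ̂ should track θ(1 – θ)/(2n) closely; this is the same value one gets directly from the inverse of –E∂²L/∂θ².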

For a multinomial (n, {πi}) distribution, show the correlation between pi and pj is –[πi πj/(1 – πi)(1 – πj)]1/2. What does this equal when πi = 1 – πj and πk = 0 for k ≠ i, j?
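The special case can be checked directly from the stated formula: when πi = 1 – πj and all other cells have probability zero, pi = 1 – pj exactly, so the correlation must be –1, and the formula agrees. A sketch:

```python
import math

def multinomial_corr(pi_i, pi_j):
    # corr(p_i, p_j) = -sqrt(pi_i * pi_j / ((1 - pi_i) * (1 - pi_j)))
    return -math.sqrt(pi_i * pi_j / ((1 - pi_i) * (1 - pi_j)))

# When pi_i = 1 - pi_j (all mass on cells i and j), p_i = 1 - p_j,
# so the correlation is exactly -1:
val = multinomial_corr(0.3, 0.7)
```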

a. Refer to Problem 14.6. If Tn is Poisson, show √Tn has asymptotic variance 1/4.

b. For a binomial sample with n trials and sample proportion p, show the asymptotic variance of sin⁻¹(√p) is 1/4n. [This transformation and the one in part (a) are variance stabilizing, producing variates with asymptotic variances that are the same for all values of the parameter. Traditionally, these transformations were employed to make ordinary least squares applicable to count data.]
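A small simulation illustrates the variance-stabilizing property of part (b): the sampled variance of sin⁻¹(√p) is close to 1/(4n) whether π is 0.1 or 0.5. (The sample size and replication count here are arbitrary.)

```python
import math
import random
import statistics

random.seed(1)
n = 400

def sim_var(pi, reps=4000):
    # Monte Carlo variance of asin(sqrt(p)) for binomial(n, pi) samples
    vals = []
    for _ in range(reps):
        y = sum(random.random() < pi for _ in range(n))
        vals.append(math.asin(math.sqrt(y / n)))
    return statistics.pvariance(vals)

v_small, v_mid = sim_var(0.1), sim_var(0.5)
target = 1 / (4 * n)   # asymptotic variance, the same for every pi
```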

Data from Problem 14.6:

Suppose that Tn has a Poisson distribution with mean λ = nµ, for fixed µ > 0. For large n, show that the distribution of log Tn is approximately normal with mean log(λ) and variance λ⁻¹. [By the central limit theorem, Tn/n is approximately N(µ, µ/n) for large n.]

Let p denote the sample proportion for n independent Bernoulli trials. Find the asymptotic distribution of the estimator [p(1 – p)]1/2 of the standard deviation. What happens when π = 0.5?
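By the delta method with g(p) = [p(1 – p)]^1/2, g′(π) = (1 – 2π)/(2[π(1 – π)]^1/2), so the asymptotic variance is (1 – 2π)²/(4n); at π = 0.5 this degenerates to 0, and a faster-rate, non-normal limit applies. A simulation sketch away from 0.5, at the arbitrary value π = 0.3:

```python
import math
import random
import statistics

random.seed(7)
n, pi, reps = 600, 0.3, 4000

vals = []
for _ in range(reps):
    y = sum(random.random() < pi for _ in range(n))
    p = y / n
    vals.append(math.sqrt(p * (1 - p)))   # plug-in estimator of the sd

emp_var = statistics.pvariance(vals)
delta_var = (1 - 2 * pi) ** 2 / (4 * n)   # delta-method asymptotic variance
```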

Let Y be a Poisson random variable with mean µ.

a. For a constant c > 0, show that

E[log(Y + c)] = log µ + (c – 1/2)/µ + O(µ⁻²)

(Note that log(Y + c) = log µ + log[1 + (Y + c – µ)/µ].)

b. Cell counts in a 2 × 2 table are independent Poisson random variables. Use part (a) to argue that to reduce bias in estimating the log odds ratio, a sensible estimator is the sample log odds ratio after adding 1/2 to each cell.
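The expansion in part (a) can be checked numerically by computing E[log(Y + c)] exactly from the Poisson pmf: with c = 1/2 the first-order bias term vanishes, while c = 1 leaves a bias of roughly 1/(2µ). A sketch (the value of µ and the truncation point are arbitrary):

```python
import math

def e_log_shifted(mu, c, kmax=400):
    """E[log(Y + c)] for Y ~ Poisson(mu), truncating the series at kmax."""
    total, logp = 0.0, -mu            # logp starts at log P(Y = 0)
    for k in range(kmax):
        total += math.exp(logp) * math.log(k + c)
        logp += math.log(mu) - math.log(k + 1)   # recursion for log pmf
    return total

mu = 10.0
bias_half = e_log_shifted(mu, 0.5) - math.log(mu)  # O(mu^-2): tiny
bias_one = e_log_shifted(mu, 1.0) - math.log(mu)   # ~ (1 - 1/2)/mu = 0.05
```

Applying the same expansion to each of the four cells shows why adding 1/2 before taking logs removes the leading 1/µij bias terms in the sample log odds ratio.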

Refer to Table 13.6. For those with race classified as "other," the sample counts for (0, 1, 2, 3, 4, 5, 6) homicides were (55, 5, 1, 0, 1, 0, 0). Fit an appropriate model simultaneously to these data and those for white and black race categories. Interpret by making pairwise comparisons of the three pairs of means.

Table 13.6:

For the counts of horseshoe-crab satellites in Table 4.3, Table 13.10 shows the results of ML fitting of the negative binomial model using width as the predictor, with the identity link.

a. State and interpret the prediction equation.

b. Show that at a predicted µ̂, the estimated variance is roughly µ̂ + µ̂².

c. The corresponding Poisson GLM has fit µ̂ = –11.53 + 0.55x (SE = 0.06). Compare 95% confidence intervals for the slopes for the two models. Interpret, and indicate whether overdispersion seems to exist relative to the Poisson GLM.

Table 4.3

Table 13.10

One question in the 1990 General Social Survey asked subjects how many times they had sexual intercourse in the preceding month. Table 13.9 shows responses, classified by gender.

a. The sample means were 5.9 for males and 4.3 for females; the sample variances were 54.8 and 34.4. The mode for each gender was 0. Does an ordinary Poisson GLM seem appropriate? Explain.

b. The Poisson GLM with log link and a dummy variable for gender (1 = males, 0 = females) has gender estimate 0.308 (SE = 0.038). Explain why this implies a ratio of 1.36 for the fitted means. (This is also the ratio of sample means, since this model has fitted means equal to sample means.) Show that the Wald 95% confidence interval for the ratio of means for males and females is (1.26, 1.47).

c. For the negative binomial model, the log likelihood increases by 248.7 (deviance decreases by 497.3). The estimated difference between the log means is also 0.308, but now SE = 0.127. Show that the 95% confidence interval for the ratio of means is (1.06, 1.75). Compare to the Poisson GLM, and interpret.
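The intervals reported in parts (b) and (c) follow from exponentiating estimate ± 1.96 × SE on the log scale:

```python
import math

def ratio_ci(est, se, z=1.96):
    # Wald CI for exp(beta): exponentiate the log-scale interval endpoints
    return math.exp(est - z * se), math.exp(est + z * se)

ratio = math.exp(0.308)                       # fitted ratio of means
pois_lo, pois_hi = ratio_ci(0.308, 0.038)     # Poisson GLM
nb_lo, nb_hi = ratio_ci(0.308, 0.127)         # negative binomial model
```

The point estimate of the ratio is the same under both models; only the SE (and hence the interval width) reflects the overdispersion.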

Table 13.9:

For the train accidents in Problem 9.19, a negative binomial model assuming constant log rate over the 14-year period has estimate –4.177 (SE = 0.153) and estimated dispersion parameter 0.012. Interpret.

Data from Problem 9.19:

A table at the text’s Web site (www.stat.ufl.edu/ ∼aa/cda/cda.html) shows the number of train miles (in millions) and the number of collisions involving British Rail passenger trains between 1970 and 1984. A Poisson model assuming a constant log rate α̂ over the 14-year period has α̂ = – 4.177 (SE = 0.1325) and X2 = 14.8 (df = 13). Interpret.

In Problem 12.2 about Shaq O’Neal’s free-throw shooting, the simple binomial model, πi = α, has lack of fit. Fit the beta-binomial model, or use the quasi-likelihood approach with that variance structure. Use the fit to summarize his free-throw shooting, by giving an estimated mean and standard deviation for πi.

Problem 12.2:

Refer to Table 4.8 on the free-throw shooting of Shaq O’Neal. In game i, suppose that yi = number made out of ni attempts is a bin(ni, πi) variate and {yi} are independent.

Table 4.8:

A data set on pregnancy rates among girls under 18 years of age in 13 north central Florida counties has information on a 3-year total for each county i on ni = number of births and yi = number of those for which mother had age under 18.

a. A beta-binomial model states that given {πi}, {Yi} are independent {bin(n, πi)} variates, and (πi) are independent from a beta(α, β) distribution. The ML estimated parameters are α̂ = 9.9 and β̂ = 240.8. Use the mean and variance to describe the estimated beta distribution and the estimated marginal distribution of Yi (as a function of ni).

b. Quasi-likelihood using variance function (13.10) for the model logit(µi) = α has α̂ = –3.18 and ρ̂ = 0.005. Describe the estimated mean and variance of Yi.

c. Quasi-likelihood using variance (13.11) for the model logit(µi) = α has α̂ = –3.35 and ϕ̂ = 8.3. Describe the estimated mean and variance of Yi.

d. The logistic-normal GLMM, logit(πi) = α + ui, yields α̂ = – 3.24 and σ̂ = 0.33. Describe the estimated mean of Yi [Recall (12.8)].

For capture–recapture experiments, Coull and Agresti (1999) used a loglinear model with exchangeable association and no higher-order terms. Explain why the model expected frequencies satisfy

log µ(y1,..., yT) = λ + β1y1 + ... + βT yT + β(y1y2 + y1y3 + ... + yT–1yT).

Show that the fit of this model to Table 12.6 yields N̂ = 90.5 and a 95% profile-likelihood confidence interval for N of (75, 125).

Table 12.6:

Summarize advantages and disadvantages of using a GLMM approach compared to a marginal model approach. Describe conditions under which parameter estimators are consistent for (a) marginal models using GEE, (b) marginal models using ML, (c) GLMM using PQL, and (d) GLMM using ML.

For ordinal square I × I tables of counts {nab}, model (12.3) for binary matched-pairs responses (Yi1, Yi2) for subject i extends to

logit[P(Yit ≤ j | ui)] = αj + βxt + ui

with {ui} independent N(0, σ²) variates and x1 = 0 and x2 = 1.

a. Explain how to interpret β, and compare to the interpretation of β in the corresponding marginal model (10.14).

b. This model implies model (12.3) for each 2 × 2 collapsing that combines categories 1 through j for one outcome and categories j + 1 through I for the other. Use the form of the conditional ML (or random effects ML) estimator for binary matched pairs to explain why

is a consistent estimator of β.

c. Treat these (I – 1) collapsed 2 × 2 tables naively as if they are independent samples. Show that adding the numerators and adding the denominators of the separate estimates of e^β motivates the summary estimator of β,

Explain why β̃ is consistent for β even recognizing the actual dependence.

Explain why the logistic-normal model is not helpful for capture–recapture experiments with only two captures.

Consider the matched-pairs random effects model (12.3). For given β0, let δ0 be such that µ̂12 = n12 + δ0 and µ̂21 = n21 – δ0 satisfies log(µ̂21/µ̂12) = β0. Suppose {µ̂ij} has nonnegative log odds ratio. Explain why:

a. The likelihood-ratio statistic for testing H0: β = β0 in this model equals

b. The likelihood-ratio test of H0: β = 0 is the test of symmetry.

In the Rasch model, logit[P(Yit = 1)] = αi + βt, αi is a fixed effect.

a. Assuming independence of responses for different subjects and for different observations on the same subject, show that the log likelihood is

b. Show that the likelihood equations are y+t = ∑i P(Yit = 1) and yi+ = ∑t P(Yit = 1) for all i and t. Explain why conditioning on {yi+} yields a distribution that does not depend on {αi}.

The GLMM for binary data using probit link function is

Φ–1[P(Yit = 1 | ui)] = x’it β + z’it ui,

where Φ is the N(0, 1) cdf and ui has N(0, ∑) pdf, f(ui; ∑).

a. Show that the marginal mean is

P(Yt = 1) = ∫ P(Z – z’it ui ≤ x’it β) f(ui; ∑) dui,

where Z is a standard normal variate that is independent of ui.

b. Since Z – z’it ui has a N(0, 1 + z’it ∑zit) distribution, deduce that

Φ–1[P(Yt = 1)] = x’it β[1 + z’it ∑zit]–1/2.

Hence, the marginal model is a probit model with attenuated effect. In the univariate random intercept case, show the marginal effect equals that from the GLMM divided by √(1 + σ²).
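The attenuation identity can be verified numerically for a random intercept: integrate Φ(xβ + u) against the N(0, σ²) density and apply Φ⁻¹; the result equals xβ/√(1 + σ²). A sketch (β, σ, and x are arbitrary illustrative values):

```python
import math
from statistics import NormalDist

nd = NormalDist()
beta, sigma, x = 1.2, 0.8, 0.7

# marginal P(Y = 1) = integral of Phi(x*beta + u) * phi(u/sigma)/sigma du,
# approximated by the midpoint rule over u in [-8*sigma, 8*sigma]
N = 4000
lo_u, hi_u = -8 * sigma, 8 * sigma
h = (hi_u - lo_u) / N
marg = sum(
    nd.cdf(beta * x + (lo_u + (i + 0.5) * h))
    * nd.pdf((lo_u + (i + 0.5) * h) / sigma) / sigma
    for i in range(N)
) * h

implied_linpred = nd.inv_cdf(marg)              # Phi^{-1} of marginal prob
attenuated = beta * x / math.sqrt(1 + sigma ** 2)
```

The implied marginal linear predictor matches the GLMM linear predictor shrunk by the factor 1/√(1 + σ²).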

For a binary response, consider the random effects model

logit[P(Yit = 1|ui)] = α + βt + ui, t = 1,..., T,

where {ui} are independent N(0, σ2), and the marginal model

logit[P(Yt = 1)] = α + βt*, t = 1,..., T.

For identifiability, βT = βT* = 0. Explain why all βt = 0 implies that all βt* = 0. Is the converse true?
