Plane Answers to Complex Questions, 4th Edition, by Ronald Christensen - Solutions
Exercise 4.3.1 An experiment was conducted to see which of four brands of blue jeans was most resistant to wearing out as a result of students kneeling before their linear models instructor begging for additional test points. In a class of 32 students, 8 students were randomly assigned to each…
Exercise 4.3.2 After the final exam of spring quarter, 30 of the subjects of the previous experiment decided to test the sturdiness of 3 brands of sport coats and 2 brands of shirts. In this study, sturdiness was measured as the length of time before tearing when the instructor was hung by his…
Exercise 5.1 Consider the ANOVA model $y_{ij} = \mu + \alpha_i + e_{ij}$, $i = 1,\dots,t$, $j = 1,\dots,N$, with the $e_{ij}$s independent $N(0,\sigma^2)$. Suppose it is desired to test the hypotheses $\alpha_i = \alpha_{i'}$ for all $i \neq i'$. Show that there is one number, called the LSD, so that the least significant difference rejects…
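For the balanced model in Exercise 5.1, the standard form of the least significant difference (a textbook fact recalled here for orientation, not the full worked solution) rejects $H_0\colon \alpha_i = \alpha_{i'}$ exactly when

$$\left|\bar y_{i\cdot} - \bar y_{i'\cdot}\right| > \mathrm{LSD} = t\!\left(1-\tfrac{\alpha}{2},\, dfE\right)\sqrt{\frac{2\,MSE}{N}},$$

and a single cutoff suffices because, in a balanced design, every pairwise difference $\bar y_{i\cdot} - \bar y_{i'\cdot}$ has the same standard error $\sqrt{2\,MSE/N}$.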
Exercise 5.2 In the model of Exercise 5.1, let $t = 4$. Suppose we want to use the LSD method to test contrasts defined by

Name  $\lambda_1$  $\lambda_2$  $\lambda_3$  $\lambda_4$
A      1       1       −1      −1
B      0       0       1       −1
C      1/3     1/3     1/3     −1

Describe the procedure. Give test statistics for each test that is to be performed.
Exercise 5.3 Show that for testing all hypotheses in a six-dimensional space with 30 degrees of freedom for error, if the subspace $F$ test is omitted and the nominal LSD level is $\alpha = 0.005$, then the true error rate must be less than 0.25.
Exercise 5.7.1 Compare all pairs of means for the blue jeans exercise of Chapter 4. Use the following methods: (a) Scheffé's method, $\alpha = 0.01$; (b) the LSD method, $\alpha = 0.01$; (c) the Bonferroni method, $\alpha = 0.012$; (d) Tukey's HSD method, $\alpha = 0.01$; (e) the Newman–Keuls method, $\alpha = 0.01$.
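As a computational companion to part (d), Tukey HSD comparisons for a balanced one-way layout take only a few lines. This is a sketch only: the jeans data are not reproduced on this page, so the responses below are simulated stand-ins, and `statsmodels` is used in place of the hand computations the exercise intends.

```python
# Tukey HSD pairwise comparisons for a balanced one-way ANOVA.
# The response values are simulated placeholders (8 observations on each
# of 4 hypothetical brands); substitute the real jeans data to reproduce (d).
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
y = rng.normal(loc=[10.0, 12.0, 11.0, 15.0], scale=2.0, size=(8, 4)).ravel()
brand = np.tile(["A", "B", "C", "D"], 8)   # labels aligned with y's layout

print(pairwise_tukeyhsd(y, brand, alpha=0.01).summary())
```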
Exercise 5.7.2 Test whether the four orthogonal contrasts you chose for the blue jeans exercise of Chapter 4 equal zero. Use all of the appropriate multiple comparison methods discussed in this chapter to control the experimentwise error rate at $\alpha = 0.05$ (or thereabouts).
Exercise 5.7.3 Compare all pairs of means in the coat–shirt exercise of Chapter 4. Use all of the appropriate multiple comparison methods discussed in this chapter to control the experimentwise error rate at $\alpha = 0.05$ (or thereabouts).
Exercise 5.7.4 Suppose that in a balanced one-way ANOVA the treatment means $\bar y_{1\cdot},\dots,\bar y_{t\cdot}$ are not independent but have some nondiagonal covariance matrix $V$. How can Tukey's HSD method be modified to accommodate this situation?
Exercise 5.7.5 For an unbalanced one-way ANOVA, give the contrast coefficients for the contrast whose sum of squares equals the sum of squares for treatments. Show the equality of the sums of squares.
Exercise 6.1 For simple linear regression, find the MSE, $\mathrm{Var}(\hat\beta_0)$, $\mathrm{Var}(\hat\beta_1)$, and $\mathrm{Cov}(\hat\beta_0, \hat\beta_1)$.
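For reference, under the usual simple linear regression model $y_i = \beta_0 + \beta_1 x_i + e_i$ with independent $N(0,\sigma^2)$ errors, the standard results the derivation should recover are, writing $S_{xx} = \sum_{i=1}^n (x_i - \bar x)^2$,

$$MSE = \frac{1}{n-2}\sum_{i=1}^n\big(y_i - \hat\beta_0 - \hat\beta_1 x_i\big)^2, \qquad \mathrm{Var}(\hat\beta_1) = \frac{\sigma^2}{S_{xx}},$$

$$\mathrm{Var}(\hat\beta_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{S_{xx}}\right), \qquad \mathrm{Cov}(\hat\beta_0, \hat\beta_1) = -\frac{\sigma^2\,\bar x}{S_{xx}}.$$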
Exercise 6.2 Use Scheffé's method of multiple comparisons to derive the Working–Hotelling simultaneous confidence band for a simple linear regression line $E(y) = \beta_0 + \beta_1 x$.
Exercise 6.3 For predicting $y = (y_1,\dots,y_q)'$ from $x = (x_1,\dots,x_{p-1})'$, we say that a predictor $f(x)$ is best if the scalar $E\{[y - f(x)]'[y - f(x)]\}$ is minimized. Show that with simple modifications, Theorems 6.3.1 and 6.3.4 hold for the extended problem, as does Proposition 6.3.5.
Exercise 6.4 Consider an inner product space $\mathcal X$ and a subspace $\mathcal X_0$. Suppose that any vector $y \in \mathcal X$ can be written uniquely as $y = y_0 + y_1$ with $y_0 \in \mathcal X_0$ and $y_1 \perp \mathcal X_0$. Let $M(x)$ be a linear operator on $\mathcal X$ in the sense that for any $x \in \mathcal X$, $M(x) \in \mathcal X$, and for any scalars $a_1, a_2$ and any vectors $x_1, x_2$, $M(a_1 x_1 + a_2 x_2) = a_1 M(x_1) + a_2 M(x_2)$…
Exercise 6.5 Show that for a linear model with an intercept, $R^2$ is simply the square of the correlation between the data $y_i$ and the predicted values $\hat y_i$.
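The identity is easy to check numerically before proving it. The sketch below uses simulated data (any data set fit with an intercept column works) and compares $R^2 = 1 - SSE/SSTot$ with the squared correlation of $y$ and $\hat y$.

```python
# Check numerically that R^2 equals corr(y, yhat)^2 in a model with intercept.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(50, 2))
y = 1.0 + x @ np.array([2.0, -1.0]) + rng.normal(size=50)

X = np.column_stack([np.ones(50), x])          # model matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ beta

r2 = 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)
corr2 = np.corrcoef(y, yhat)[0, 1]**2
print(r2, corr2)                               # the two numbers agree
```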
Exercise 6.5 Assume that $V_{xx}$ is nonsingular. Show that $\rho_{y_1 y_2 \cdot x} = 0$ if and only if the best linear predictor of $y_1$ based on $x$ and $y_2$ equals the best linear predictor of $y_1$ based on $x$ alone.
Exercise 6.6 If $(y_{i1}, y_{i2}, x_{i1}, x_{i2}, \dots, x_{i,p-1})'$, $i = 1,\dots,n$, are independent $N(\mu, V)$, find the distribution of $\sqrt{n-p-1}\; r_{y\cdot x}\big/\sqrt{1 - r^2_{y\cdot x}}$ when $\rho_{y\cdot x} = 0$.
Exercise 6.7 Show that if $M$ is the perpendicular projection operator onto $C(X)$ with $X = \begin{bmatrix} w_1' \\ \vdots \\ w_n' \end{bmatrix}$ and $M = \begin{bmatrix} T_1' \\ \vdots \\ T_n' \end{bmatrix}$, then $w_i = w_j$ if and only if $T_i = T_j$.
Exercise 6.8 Discuss the application of the traditional lack of fit test to the problem where $Y = X\beta + e$ is a simple linear regression model.
Exercise 6.9 Let $M_i$ be the perpendicular projection operator onto $C(X_i)$, $i = 1, 2$. Show that the perpendicular projection operator onto $C(Z)$ is
$$M_Z = \begin{bmatrix} M_1 & 0 \\ 0 & M_2 \end{bmatrix}.$$
Show that $SSE(Z) = SSE(X_1) + SSE(X_2)$, where $SSE(X_i)$ is the sum of squares for error from fitting $Y_i = X_i\beta_i + e_i$, $i = 1, 2$.
Exercise 6.10 Test the model $y_{ij} = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + e_{ij}$ for lack of fit using the data:

x_i:    1.00   2.00   0.00  −3.00   2.50
y_ij:   3.41  22.26  −1.74  79.47  37.96
        2.12  14.91   1.32  80.04  44.23
        6.26  23.41  −2.55  81.63  18.39
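A minimal computational sketch of the classical lack of fit test for this exercise, assuming the layout above (three replicates at each of the five $x_i$ values); the pure-error sum of squares comes from the within-$x$ deviations, and `scipy` supplies the $F$ reference distribution.

```python
# Lack of fit test for y = b0 + b1*x + b2*x^2 using replicated observations.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 0.0, -3.0, 2.5])
y = np.array([[3.41, 22.26, -1.74, 79.47, 37.96],
              [2.12, 14.91,  1.32, 80.04, 44.23],
              [6.26, 23.41, -2.55, 81.63, 18.39]])   # rows are replicates

xx, yy = np.tile(x, 3), y.ravel()
n = yy.size

X = np.column_stack([np.ones(n), xx, xx**2])          # quadratic model matrix
beta = np.linalg.lstsq(X, yy, rcond=None)[0]
sse = np.sum((yy - X @ beta)**2)                      # SSE from the model

sspe = np.sum((y - y.mean(axis=0))**2)                # pure error within x-values
df_pe = n - x.size                                    # 15 - 5 = 10
df_lof = x.size - X.shape[1]                          # 5 - 3 = 2

F = ((sse - sspe) / df_lof) / (sspe / df_pe)
print(F, stats.f.sf(F, df_lof, df_pe))                # statistic and p-value
```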
Exercise 6.11 Using the following data, test the model $y_{ij} = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + e_{ij}$ for lack of fit. Explain and justify your method.

X1  X2   Y        X1  X2   Y
31  9.0  122.41   61  2.2  70.08
43  8.0  115.12   36  4.7  66.42
50  2.8   64.90   52  9.4  150.15
38  5.0   64.91   38  1.5  38.15
38  5.1   74.52   41  1.0  45.67
51  4.6   75.02   41  …
Exercise 6.12 (a) Find the model matrix for the orthogonal polynomial model $Y = T\gamma + e$ corresponding to the model $y_{ij} = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + e_{ij}$, $i = 1,2,3,4$, $j = 1,\dots,N$, where $x_i = a + (i-1)t$. Hint: First consider the case $N = 1$. (b) For the model $y_{ij} = \mu + \alpha_i + e_{ij}$, $i = 1,2,3,4$, $j =$ …
Exercise 6.13 Repeat Exercise 6.12 with $N = 2$ and $x_1 = 2$, $x_2 = 3$, $x_3 = 5$, $x_4 = 8$.
Exercise 6.8.4 Consider the model $y_i = \beta_0 + \beta_1 x_i + e_i$, with the $e_i$s independent $N(0, \sigma^2 d_i)$, $i = 1,2,3,\dots,n$, where the $d_i$s are known numbers. Derive algebraic formulas for $\hat\beta_0$, $\hat\beta_1$, $\mathrm{Var}(\hat\beta_0)$, and $\mathrm{Var}(\hat\beta_1)$.
Exercise 6.8.5 Consider the model $y_i = \beta_0 + \beta_1 x_i + e_i$, $e_i$s i.i.d. $N(0, \sigma^2)$, $i = 1,2,3,\dots,n$. If the $x_i$s are restricted to be in the closed interval $[-10, 15]$, determine how to choose the $x_i$s to minimize (a) $\mathrm{Var}(\hat\beta_0)$; (b) $\mathrm{Var}(\hat\beta_1)$. (c) How would the choice of the $x_i$s change if they were…
Exercise 6.8.6 Find $E[y - \hat E(y|x)]^2$ in terms of the variances and covariances of $x$ and $y$. Give a “natural” estimate of $E[y - \hat E(y|x)]^2$.
Exercise 6.8.7 Test whether the data of Example 6.2.1 indicate that the multiple correlation coefficient is different from zero.
Exercise 6.8.8 Test whether the data of Example 6.2.1 indicate that the partial correlation coefficient $\rho_{y1\cdot 2}$ is different from zero.
Exercise 6.8.9 Show that
(a) $\rho_{12\cdot 3} = \dfrac{\rho_{12} - \rho_{13}\rho_{23}}{\sqrt{1-\rho_{13}^2}\,\sqrt{1-\rho_{23}^2}}$;
(b) $\rho_{12\cdot 34} = \dfrac{\rho_{12\cdot 4} - \rho_{13\cdot 4}\,\rho_{23\cdot 4}}{\sqrt{1-\rho_{13\cdot 4}^2}\,\sqrt{1-\rho_{23\cdot 4}^2}}$.
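Both identities can be sanity-checked numerically: the recursion must agree with the partial correlation computed directly as the correlation of least squares residuals. The sketch below checks (a) on simulated data.

```python
# Verify rho_{12.3}: recursion formula vs. correlation of residuals on y3.
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.5, 0.3],
                                          [0.0, 1.0, 0.4],
                                          [0.0, 0.0, 1.0]])   # correlated columns
r = np.corrcoef(z, rowvar=False)
r12, r13, r23 = r[0, 1], r[0, 2], r[1, 2]
formula = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))

def resid(u, v):
    """Residuals from the simple regression of u on v (with intercept)."""
    b = np.polyfit(v, u, 1)
    return u - np.polyval(b, v)

direct = np.corrcoef(resid(z[:, 0], z[:, 2]), resid(z[:, 1], z[:, 2]))[0, 1]
print(formula, direct)                                # identical up to rounding
```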
Exercise 6.8.10 Show that in Section 2, $\gamma_* = \beta_*$ and $\beta_0 = \gamma_0 - (1/n)J_n^1 Z\gamma_*$.
Exercise 7.1 Prove the claims of the previous paragraph. In particular, show that linear combinations of the vectors presented retain their structure, that the vectors are orthogonal to the columns of $X$ corresponding to the grand mean and the treatment effects, and that the vectors based on…
Exercise 7.2 Does the statement “the interactions add nothing to the model” mean that $\gamma_{11} = \gamma_{12} = \cdots = \gamma_{ab}$? If it does, justify the statement. If it does not, what does the statement mean?
Exercise 7.3 Find the ANOVA table for the two-way ANOVA without interaction model when there are proportional numbers. Find the least squares estimate of a contrast in the $\alpha_i$s. Find the variance of the contrast and give a definition of orthogonal contrasts that depends only on the contrast…
Exercise 7.4 Using proportional numbers, find the ANOVA table for the two-way ANOVA with interaction model.
Exercise 7.5 Analyze the following data as a two-factor ANOVA, where the subscripts $i$ and $j$ indicate the two factors. The $y_{ijk}$ values for $i = 1,2,3$ and $j = 1,2$ follow (the cell-by-cell layout of the original table is not preserved here):
j = 1: 0.620 1.228 0.615 1.342 3.762 2.245 0.669 2.219 2.077 0.687 4.207 3.357 0.155 2.000
j = 2: 1.182 3.080 2.240 1.068 2.741 0.330 2.545 2.522 3.453 2.233 1.647 1.527 2.664
Exercise 7.6 Analyze the following data as a two-factor ANOVA, where the subscripts $i$ and $j$ indicate the two factors. The $y_{ijk}$ values for $i = 1,2,3$ and $j = 1,2$ follow (the cell-by-cell layout of the original table is not preserved here):
j = 1: 1.620 2.228 2.999 1.669 3.219 1.615 1.155 4.080 2.182 3.545
j = 2: 1.342 3.762 2.939 0.687 4.207 2.245 2.000 2.741 1.527 1.068 0.809 2.233 1.942 2.664 1.002
The…
Exercise 7.7 Show that $C(M_\eta) \perp C(M_{\alpha\gamma})$ and that $C(M_{\eta\gamma}) \perp C(M_{\alpha\gamma})$. Give an explicit characterization of a typical vector in $C(M_{\alpha\eta\gamma})$ and show that your characterization is correct.
Exercise 7.8 Analyze the following three-way ANOVA: The treatments (amount of flour, brand of flour, and brand of shortening) are indicated by the subscripts $i$, $j$, and $k$, respectively. The dependent variable is a “chewiness” score for chocolate chip cookies. The amounts of flour correspond to…
Exercise 7.7.1 In the mid-1970s, a study on the prices of various motor oils was conducted in (what passes for) a large town in Montana. The study consisted of pricing 4 brands of oil at each of 9 stores. The data follow.

             Brand
Store   P    H    V    Q
1       87   95   95   82
2       96   104  106  97
3       75   87   81   70
4       81   94   91   77
…
Exercise 7.7.2 An experiment was conducted to examine thrust forces when drilling under different conditions. Data were collected for four drilling speeds and three feeds. The data are given below.

              Speed
Feed     100   250   400   550
0.005    121    98    83    58
         124   108    81    59
         104    87    88    60
         124    94    90    66
         110    91    86    56
         329   …
Exercise 7.7.3 Consider the model $y_{ijk} = \mu + \alpha_i + \eta_j + \gamma_{ij} + e_{ijk}$, $i = 1,2,3,4$, $j = 1,2,3$, $k = 1,\dots,N_{ij}$, where $N_{ij} = N$ for $(i,j) \neq (1,1)$ and $N_{11} = 2N$. This model could arise from an experimental design having $\alpha$ treatments of No Treatment (NT), $a_1$, $a_2$, $a_3$ and $\eta$ treatments of NT, $b_1$, $b_2$.
Exercise 7.7.4 Consider the linear model $y_{ij} = \mu + \alpha_i + \eta_j + e_{ij}$, $i = 1,\dots,a$, $j = 1,\dots,b$. As in Section 1, write $X = [X_0, X_1, \dots, X_a, X_{a+1}, \dots, X_{a+b}]$. If we write the observations in the usual order, we can use Kronecker products to write the model matrix. Write $X = [J, X_*, X_{**}]$, …
Exercise 7.7.5 Consider the balanced two-way ANOVA with interaction model $y_{ijk} = \mu + \alpha_i + \eta_j + \gamma_{ij} + e_{ijk}$, $i = 1,\dots,a$, $j = 1,\dots,b$, $k = 1,\dots,N$, with $e_{ijk}$s independent $N(0, \sigma^2)$. Find $E\big[Y'\big(\tfrac{1}{n}J_n^n + M_\alpha\big)Y\big]$ in terms of $\mu$, the $\alpha_i$s, the $\eta_j$s, and the $\gamma_{ij}$s.
Exercise 7.7.6 For Example 7.6.1, develop a test for $H_0: A_{10} + A_{11} = A_{20} + A_{21}$.
Exercise 8.1 Using a randomized complete block design is supposed to reduce the variability of treatment comparisons. If the randomized complete block model is taken as $y_{ij} = \mu + \alpha_i + \beta_j + e_{ij}$, $e_{ij}$s i.i.d. $N(0, \sigma^2)$, $i = 1,\dots,a$, $j = 1,\dots,b$, argue that the corresponding variance for a…
Exercise 8.2 In the 4×4 Latin square of the examples, show that the 9 degrees of freedom for $(\alpha\beta)$ interaction are being divided into 3 degrees of freedom for $\gamma$ and 6 degrees of freedom for error.

…the model $y_{hijk} = \mu + \alpha_h + \beta_i + \gamma_j + \eta_k + e_{hijk}$.

      C1      C2      C3      C4      C5
R1    T1 τ1   T2 τ3   T3 τ5   T4 τ2   T5 τ4
R2    …
Exercise 8.4 For model (1), $C(M_\tau)$ is given by Proposition 4.2.3. What is $C(M_\tau)$ in the notation of model (2)?
Exercise 8.5 Show that the contrasts in the $\tau_{ij}$s corresponding to the contrasts $\sum_i \lambda_i \alpha_i$ and $\sum_i\sum_j \lambda_i \eta_j (\alpha\beta)_{ij}$ are $\sum_i\sum_j \lambda_i \tau_{ij}$ and $\sum_i\sum_j \lambda_i \eta_j \tau_{ij}$, respectively.
Exercise 8.6.2 Show that the set of indices $i = 1,\dots,a$, $j = 1,\dots,a$, and $k = (i + j + a - 1) \bmod a$ determines a Latin square design.
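A direct check of the construction: generate the square for a small $a$ and confirm that every treatment index occurs exactly once in each row and each column. Here the treatment labels are taken as the residues $0, 1, \dots, a-1$ produced by the mod operation.

```python
# Build the square k = (i + j + a - 1) mod a and verify the Latin property.
a = 5
square = [[(i + j + a - 1) % a for j in range(1, a + 1)]
          for i in range(1, a + 1)]

for row in square:                       # each treatment once per row
    assert sorted(row) == list(range(a))
for col in zip(*square):                 # each treatment once per column
    assert sorted(col) == list(range(a))

print("\n".join(" ".join(map(str, row)) for row in square))
```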
Exercise 9.1 Consider a one-way ANOVA with one covariate. The model is $y_{ij} = \mu + \alpha_i + \xi x_{ij} + e_{ij}$, $i = 1,\dots,t$, $j = 1,\dots,N_i$. Find the BLUE of the contrast $\sum_{i=1}^t \lambda_i \alpha_i$. Find the variance of the contrast.
Exercise 9.2 Consider the problem of estimating $\beta_p$ in the regression model
$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + e_i. \tag{2}$$
Let $r_i$ be the ordinary residual from fitting $y_i = \alpha_0 + \alpha_1 x_{i1} + \cdots + \alpha_{p-1} x_{i,p-1} + e_i$ and $s_i$ be the residual from fitting $x_{ip} = \gamma_0 + \gamma_1 x_{i1} + \cdots + \gamma_{p-1} x_{i,p-1} + e_i$. Show that the least…
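The claim being set up here (a Frisch–Waugh–Lovell style result) is easy to illustrate numerically before proving it: the full-model coefficient of $x_p$ equals the no-intercept slope from regressing $r$ on $s$. The sketch below uses simulated data only.

```python
# Numerical illustration: beta_p from the full fit equals (s'r)/(s's).
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ rng.normal(size=p + 1) + rng.normal(size=n)

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

X0 = X[:, :-1]                                            # drop the last column
r = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]        # residuals of y on X0
s = X[:, -1] - X0 @ np.linalg.lstsq(X0, X[:, -1], rcond=None)[0]

print(beta_full[-1], (s @ r) / (s @ s))                   # the same number twice
```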
Exercise 9.3 Suppose $\lambda_1'\beta$ and $\lambda_2'\gamma$ are estimable in model (9.0.1). Use the normal equations to find least squares estimates of $\lambda_1'\beta$ and $\lambda_2'\gamma$. Hint: Reparameterize the model as $X\beta + Z\gamma = X\delta + (I - M)Z\gamma$ and use the normal equations on the reparameterized model. Note that $X\delta = X\beta$…
Exercise 9.4 Derive the test for model (9.0.1) versus the reduced model $Y = X\beta + Z_0\gamma_0 + e$, where $C(Z_0) \subset C(Z)$. Describe how the procedure would work for testing $H_0: \gamma_2 = 0$ in the model $y_{ij} = \mu + \alpha_i + \eta_j + \gamma_1 z_{ij1} + \gamma_2 z_{ij2} + e_{ij}$, $i = 1,\dots,a$, $j = 1,\dots,b$.
Exercise 9.5 An experiment was conducted with two treatments. There were four levels of the first treatment and five levels of the second treatment. Besides the data $y$, two covariates were measured, $x_1$ and $x_2$. The data are given below. Analyze the data with the assumption that there is no…
Exercise 9.6 Show that $P = \begin{bmatrix} M_* & 0 \\ 0 & I_r \end{bmatrix}$.
Exercise 9.7 Derive $\hat\gamma$ for a randomized complete block design when $r = 1$.
Exercise 9.8 Show that $\xi'\tau$ is estimable if and only if $\xi'\tau$ is a contrast.
Exercise 9.9 Show that if $\xi'\tau$ and $\eta'\tau$ are contrasts and if $\xi'\eta = 0$, then $\xi'\tau = 0$ and $\eta'\tau = 0$ put orthogonal constraints on $C(X, Z)$; i.e., the treatment sum of squares can be broken down with orthogonal contrasts in the usual way.
Exercise 9.10 Derive the analysis for a Latin square with one row missing.
Exercise 9.11 Eighty wheat plants were grown in each of 5 different fields. Each of 6 individuals (A, B, C, D, E, and F) was asked to pick 8 “representative” plants in each field and measure the plants' heights. Measurements were taken on 6 different days. The data consist of the differences…
Exercise 9.12 Prove that display (2) is true.
$$F = \frac{Y' M_{(I-M)\tilde Z}\, Y \big/ r[(I-M)\tilde Z]}{Y'(I-P)Y \big/ [n - r(X, \tilde Z)]} \sim F\big(r[(I-M)\tilde Z],\; n - r[X, \tilde Z]\big). \tag{2}$$
Exercise 9.6.1 Sulzberger (1953) and Williams (1959) examined the maximum compressive strength parallel to the grain ($y$) of 10 hoop trees and how it was affected by temperature. A covariate, the moisture content of the wood ($x$), was also measured. Analyze the data, which are reported…
Exercise 9.6.2 Suppose that in Exercise 7.7.1 on motor oil pricing, the observation on store 7, brand H was lost. Treat the stores as blocks in a randomized complete block design. Plug in an estimate of the missing value and analyze the data without correcting the MSTrts or any variance estimates.
Exercise 9.6.3 The missing value procedure that consists of analyzing the model $(Y - Z\hat\gamma) = X\beta + e$ has been shown to give the correct SSE and BLUEs; however, sums of squares explained by the model are biased upwards. For a randomized complete block design with $a$ treatments and $b$ blocks and the…
Exercise 9.6.4 State whether each design given below is a balanced incomplete block design, and if so, give the values of $b$, $t$, $k$, $r$, and $\lambda$. (a) The experiment involves 5 treatments: A, B, C, D, and E. The experiment is laid out as follows.

Block  Treatments    Block  Treatments
1      A, B, C       6      A, B, D
2      A, B, E       7      …
Exercise 10.1 Show that $A_0Y$ is a BLUE of $X\beta$ if and only if, for every estimable function $\lambda'\beta$ such that $\rho'X = \lambda'$, $\rho'A_0Y$ is a BLUE of $\lambda'\beta$.
Exercise 10.2 Show that if $\Lambda' = P'X$ and if $A_0Y$ is a BLUE of $X\beta$, then $P'A_0Y$ is a BLUE of $\Lambda'\beta$.
Exercise 10.3 The BLUE of $X\beta$ can be obtained by taking $T = V + XX'$. Prove this by showing that (a) $C(X) \subset C(T)$ if and only if $TT^-X = X$, and (b) if $T = V + XX'$, then $TT^-X = X$.
Exercise 10.4 Ferguson (1967, Section 2.7, page 74) proves the following: Lemma (3) If $S$ is a convex subset of $\mathbf R^n$, and $Z$ is an $n$-dimensional random vector for which $\Pr(Z \in S) = 1$ and for which $E(Z)$ exists and is finite, then $E(Z) \in S$. Use this lemma to show that $C(V) = C\big(\int_{e\in\mathscr M} ee'\,dP\big) \subset \mathscr M$.
Exercise 10.5 Prove observations (a), (b), and (c). (a) In practice, when a covariance matrix other than $\sigma^2 I$ is appropriate, most of the time the condition $C(X) \subset C(V)$ will be satisfied. When $C(X) \subset C(V)$, then $Y \in C(X, V) = C(V)$ a.s., so the condition $X\beta \in Y + C(V)$ a.s. merely indicates that…
Exercise 10.10 Show that ordinary least squares estimates are best linear unbiased estimates in the model $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = V$ if the columns of $X$ are eigenvectors of $V$.
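Before proving the claim, it can be checked numerically: build $V$ with a known eigenvector basis, take two of those eigenvectors as the columns of $X$, and compare the OLS estimate with the GLS (BLUE) estimate. Everything below is a simulated sketch of that check.

```python
# When the columns of X are eigenvectors of V, OLS and GLS coincide.
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.normal(size=(n, n))
eigval, Q = np.linalg.eigh(A @ A.T + n * np.eye(n))   # V = Q diag(eigval) Q'
V = Q @ np.diag(eigval) @ Q.T

X = Q[:, :2]                       # two eigenvectors of V as model matrix
y = rng.normal(size=n)

ols = np.linalg.solve(X.T @ X, X.T @ y)
Vinv = np.linalg.inv(V)
gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(ols, gls)                    # the two estimates agree
```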
Exercise 10.11 Use Definition B.31 and Proposition 10.4.6 to show that $M_{C(X)\cap C(V)} = M - M_W$, where $C(W) = C[M(I - M_V)]$.
Exercise 10.13 Show that $\mathrm{Cov}(BY) = 0$ if and only if $B = B_*Q$ for some matrix $B_*$.
Exercise 10.13 Show that the results in this section do not depend on the particular choice of Q.
Exercise 10.14 Let $C(V_0) = C(X) \cap C(V)$, $C(X) = C(V_0, X_1)$, and $C(V) = C(V_0, V_1)$, with the columns of $V_0$, $V_1$, and $X_1$ being orthonormal. Show that the columns of $[V_0, V_1, X_1]$ are linearly dependent.
Exercise 1.1 Write the following models in matrix notation: (a) Multiple regression, $y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + e_i$, $i = 1,\dots,6$. (b) Two-way ANOVA with interaction, $y_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + e_{ijk}$, $i = 1,2,3$, $j = 1,2$, $k = 1,2$. (c) Two-way analysis of covariance (ACOVA) with no interaction, $y_i$…
Exercise 1.2 Let $W$ be an $r \times s$ random matrix, and let $A$ and $C$ be $n \times r$ and $n \times s$ matrices of constants, respectively. Show that $E(AW + C) = AE(W) + C$. If $B$ is an $s \times t$ matrix of constants, show that $E(AWB) = AE(W)B$. If $s = 1$, show that $\mathrm{Cov}(AW + C) = A\,\mathrm{Cov}(W)A'$.
Exercise 1.3 Show that $\mathrm{Cov}(Y)$ is nonnegative definite for any random vector $Y$. The covariance of two random vectors with possibly different dimensions can be defined: if $W_{r\times 1}$ and $Y_{s\times 1}$ are random vectors with $E(W) = \gamma$ and $E(Y) = \mu$, then the covariance of $W$ and $Y$ is the $r \times s$ matrix $\mathrm{Cov}(W, Y) = E[(W - \gamma)(Y - \mu)']$.
Exercise 1.4 Show that the function $f(y)$ given above is the density of $Y$ when $Y \sim N(\mu, V)$ and $V$ is nonsingular. Hint: If $Z$ has density $f_Z(z)$ and $Y = G(Z)$, the density of $Y$ is $f_Y(y) = f_Z(G^{-1}(y))\,|\det(dG^{-1})|$, where $dG^{-1}$ is the derivative (matrix of partial derivatives) of $G^{-1}$ evaluated at $y$.
Exercise 1.5 Show that if $Y$ is an $r$-dimensional random vector with $Y \sim N(\mu, V)$ and if $B$ is a fixed $n \times r$ matrix, then $BY \sim N(B\mu, BVB')$. In linear model theory, Theorem 1.2.3 is often applied to establish independence of two linear transformations of the data vector $Y$.
Exercise 1.6 Show that if $Y$ is a random vector and if $E(Y) = 0$ and $\mathrm{Cov}(Y) = 0$, then $\Pr[Y = 0] = 1$. Hint: For a random variable $w$ with $\Pr[w \geq 0] = 1$ and $k > 0$, show that $\Pr[w \geq k] \leq E(w)/k$. Apply this result to $Y'Y$.
Exercise 1.7 (a) Show that if $V$ is nonsingular, then the three conditions in Theorem 1.3.6 reduce to $AVA = A$. (b) Show that $Y'V^-Y$ has a chi-squared distribution with $r(V)$ degrees of freedom when $\mu \in C(V)$.
Exercise 1.8 Prove Theorem 1.3.9. Hints: Let $V = QQ'$ and write $Y = \mu + QZ$, where $Z \sim N(0, I)$. Using $\perp$ to indicate independence, show that
$$\begin{bmatrix} Q'AQZ \\ \mu'AQZ \end{bmatrix} \perp \begin{bmatrix} Q'BQZ \\ \mu'BQZ \end{bmatrix}$$
and that, say, $Y'AY$ is a function of $Q'AQZ$ and $\mu'AQZ$.
Exercise 1.9 Let $M$ be the perpendicular projection operator onto $C(X)$. Show that $(I - M)$ is the perpendicular projection operator onto $C(X)^\perp$. Find $\mathrm{tr}(I - M)$ in terms of $r(X)$.
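The three defining facts ($I - M$ is symmetric, idempotent, and annihilates $C(X)$), together with the trace answer $n - r(X)$, can be confirmed on any example matrix:

```python
# Verify that I - M is the perpendicular projection onto C(X)-perp
# and that tr(I - M) = n - r(X), for an arbitrary example X.
import numpy as np

rng = np.random.default_rng(5)
n = 7
X = rng.normal(size=(n, 3))
M = X @ np.linalg.pinv(X.T @ X) @ X.T         # projection onto C(X)
I = np.eye(n)

assert np.allclose((I - M) @ (I - M), I - M)  # idempotent
assert np.allclose((I - M).T, I - M)          # symmetric
assert np.allclose((I - M) @ X, 0)            # annihilates C(X)
print(np.trace(I - M), n - np.linalg.matrix_rank(X))   # both equal n - r(X)
```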
Exercise 1.10 For a linear model $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = \sigma^2 I$, show that $E(Y) = X\beta$ and $\mathrm{Cov}(Y) = \sigma^2 I$.
Exercise 1.11 For a linear model $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = \sigma^2 I$, the residuals are $\hat e = Y - X\hat\beta = (I - M)Y$, where $M$ is the perpendicular projection operator onto $C(X)$. Find (a) $E(\hat e)$; (b) $\mathrm{Cov}(\hat e)$; (c) $\mathrm{Cov}(\hat e, MY)$; (d) $E(\hat e'\hat e)$. (e) Show that $\hat e'\hat e = Y'Y - Y'MY$.
Exercise 1.5.1 Let $Y = (y_1, y_2, y_3)'$ be a random vector. Suppose that $E(Y) \in \mathscr M$, where $\mathscr M$ is defined by $\mathscr M = \{(a, a-b, 2b)' \mid a, b \in \mathbf R\}$. (a) Show that $\mathscr M$ is a vector space. (b) Find a basis for $\mathscr M$. (c) Write a linear model for this problem (i.e., find $X$ such that $Y = X\beta + e$, $E(e) = 0$). (d) If $\beta = (\beta_1, \beta_2)'$ in…
Exercise 1.5.2 Let $Y = (y_1, y_2, y_3)'$ with $Y \sim N(\mu, V)$, where $\mu = (5, 6, 7)'$ and
$$V = \begin{bmatrix} 2 & 0 & 1 \\ 0 & 3 & 2 \\ 1 & 2 & 4 \end{bmatrix}.$$
Find (a) the marginal distribution of $y_1$; (b) the joint distribution of $y_1$ and $y_2$; (c) the conditional distribution of $y_3$ given $y_1 = u_1$ and $y_2 = u_2$; (d) the conditional distribution of $y_3$…
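Parts (c) and (d) rest on the standard partitioned-normal result, quoted here as a general fact rather than as the worked answer: if $Y = (Y_1', Y_2')' \sim N(\mu, V)$ with $\mu = (\mu_1', \mu_2')'$ and $V = \begin{bmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{bmatrix}$, and $V_{11}$ is nonsingular, then

$$Y_2 \mid Y_1 = u \;\sim\; N\big(\mu_2 + V_{21}V_{11}^{-1}(u - \mu_1),\; V_{22} - V_{21}V_{11}^{-1}V_{12}\big).$$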
Exercise 1.5.3 The density of $Y = (y_1, y_2, y_3)'$ is $(2\pi)^{-3/2}|V|^{-1/2}e^{-Q/2}$, where $Q = 2y_1^2 + y_2^2 + y_3^2 + 2y_1y_2 - 8y_1 - 4y_2 + 8$. Find $V^{-1}$ and $\mu$.
Exercise 1.5.4 Let $Y \sim N(J\mu, \sigma^2 I)$ and let $O = [n^{-1/2}J, O_1]$ be an orthogonal matrix. (a) Find the distribution of $O'Y$. (b) Show that $\bar y_\cdot = (1/n)J'Y$ and that $s^2 = Y'O_1O_1'Y/(n-1)$. (c) Show that $\bar y_\cdot$ and $s^2$ are independent. Hint: Show that $Y'Y = Y'OO'Y = Y'(1/n)JJ'Y + Y'O_1O_1'Y$.
Exercise 1.5.5 Let $Y = (y_1, y_2)'$ have a $N(0, I)$ distribution. Show that if
$$A = \begin{bmatrix} 1 & a \\ a & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & b \\ b & 1 \end{bmatrix},$$
then the conditions of Theorem 1.3.7 implying independence of $Y'AY$ and $Y'BY$ are satisfied only if $|a| = 1/|b|$ and $a = -b$. What are the possible choices for $a$ and $b$?
Exercise 1.5.6 Let $Y = (y_1, y_2, y_3)'$ have a $N(\mu, \sigma^2 I)$ distribution. Consider the quadratic forms defined by the matrices $M_1$, $M_2$, and $M_3$ given below. (a) Find the distribution of each $Y'M_iY$. (b) Show that the quadratic forms are pairwise independent. (c) Show that the quadratic forms are mutually…
Exercise 1.5.7 Let $A$ be symmetric, $Y \sim N(0, V)$, and let $w_1,\dots,w_s$ be independent $\chi^2(1)$ random variables. Show that for some value of $s$ and some numbers $\lambda_i$, $Y'AY \sim \sum_{i=1}^s \lambda_i w_i$. Hint: $Y \sim QZ$, so $Y'AY \sim Z'Q'AQZ$. Write $Q'AQ = PD(\lambda_i)P'$.
Exercise 1.5.8 Show that (a) for Example 1.0.1 the perpendicular projection operator onto $C(X)$ is
$$M = \frac{1}{6}J_6^6 + \frac{1}{70}\begin{bmatrix} 25 & 15 & 5 & -5 & -15 & -25 \\ 15 & 9 & 3 & -3 & -9 & -15 \\ 5 & 3 & 1 & -1 & -3 & -5 \\ -5 & -3 & -1 & 1 & 3 & 5 \\ -15 & -9 & -3 & 3 & 9 & 15 \\ -25 & -15 & -5 & 5 & 15 & 25 \end{bmatrix};$$
(b) for…
Exercise 2.1 Show that for $\lambda'\beta$ estimable,
$$\frac{\lambda'\hat\beta - \lambda'\beta}{\sqrt{MSE\;\lambda'(X'X)^-\lambda}} \sim t(dfE).$$
Find the form of an $\alpha$ level test of $H_0: \lambda'\beta = 0$ and the form for a $(1-\alpha)100\%$ confidence interval for $\lambda'\beta$. Hint: The test and confidence interval can be found using the methods of Appendix E.
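The resulting $t$ statistic and interval are easy to compute directly from the matrix formulas. The sketch below does so for a simulated regression, with $\lambda$ chosen to pick off the slope; `scipy` supplies the $t$ quantile.

```python
# t statistic and 95% confidence interval for an estimable lambda'beta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)              # (X'X)^- for full-rank X
beta = XtX_inv @ X.T @ y
dfe = n - np.linalg.matrix_rank(X)
mse = np.sum((y - X @ beta)**2) / dfe

lam = np.array([0.0, 1.0])                    # lambda'beta = slope
se = np.sqrt(mse * lam @ XtX_inv @ lam)
t_stat = lam @ beta / se                      # test of H0: lambda'beta = 0
ci = lam @ beta + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, dfe) * se
print(t_stat, ci)
```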
Exercise 2.2 Let $y_{11}, y_{12}, \dots, y_{1r}$ be $N(\mu_1, \sigma^2)$ and $y_{21}, y_{22}, \dots, y_{2s}$ be $N(\mu_2, \sigma^2)$ with all $y_{ij}$s independent. Write this as a linear model. For the rest of the problem use the results of Chapter 2. Find estimates of $\mu_1$, $\mu_2$, $\mu_1 - \mu_2$, and $\sigma^2$. Using Appendix E and Exercise 2.1, form an…
Exercise 2.3 Let $y_1, y_2, \dots, y_n$ be independent $N(\mu, \sigma^2)$. Write a linear model for these data. For the rest of the problem use the results of Chapter 2, Appendix E, and Exercise 2.1. Form an $\alpha = 0.01$ test for $H_0: \mu = \mu_0$, where $\mu_0$ is some known fixed number, and form a 95% confidence interval…
Exercise 2.4 (a) Show that $AVA' = AV = VA'$. (b) Show that $A'V^{-1}A = A'V^{-1} = V^{-1}A$. (c) Show that $A$ is the same for any choice of $(X'V^{-1}X)^-$.
Exercise 2.5 Show that $A$ is the perpendicular projection operator onto $C(X)$ when the inner product between two vectors $x$ and $y$ is defined as $x'V^{-1}y$.