Probability and Measure, 3rd Edition (Wiley Series in Probability and Mathematical Statistics), Patrick Billingsley - Solutions
35.14. (a) Show that $\{X_n\}$ is a martingale with respect to $\{\mathscr{F}_n\}$ if and only if, for all $n$ and all stopping times $\tau$ such that $\tau \le n$, $E[X_\tau] = E[X_n]$.
(b) Show that, if $\{X_n\}$ is a martingale and $\tau$ is a bounded stopping time, then $E[X_\tau] = E[X_1]$.
35.15. 31.9↑ Suppose that $\mathscr{F}_n \uparrow \mathscr{F}_\infty$ and $A \in \mathscr{F}_\infty$, and prove that $P[A \mid \mathscr{F}_n] \to I_A$ with probability 1. Compare Lebesgue's density theorem.
35.16. Theorems 35.6 and 35.9 have analogues in Hilbert space. For $n \le \infty$, let $P_n$ be the perpendicular projection on a subspace $M_n$. Then $P_n x \to P_\infty x$ for all $x$ if either (a) $M_1 \subset M_2 \subset \cdots$ and $M_\infty$ is the closure of $\bigcup_{n<\infty} M_n$, or (b) $M_1 \supset M_2 \supset \cdots$ and $M_\infty = \bigcap_{n<\infty} M_n$.
35.17. Suppose that $\theta$ has an arbitrary distribution, and suppose that, conditionally on $\theta$, the random variables $Y_1, Y_2, \dots$ are independent and normally distributed with mean $\theta$ and variance $\sigma^2$. Construct such a sequence $(\theta, Y_1, Y_2, \dots)$. Prove (35.31).
35.18. It is shown on p. 471 that optional stopping has no effect on likelihood ratios. This is not true of tests of significance. Suppose that $X_1, X_2, \dots$ are independent and identically distributed and assume the values 1 and 0 with probabilities $p$ and $1-p$. Consider the null hypothesis that $p = \frac12$ and the alternative that $p > \frac12$. The usual .05-level test of significance is to reject the null hypothesis if
$$\frac{1}{\frac12\sqrt{n}}\Bigl(X_1 + \cdots + X_n - \frac{n}{2}\Bigr) > 1.645. \qquad (35.40)$$
For this test the chance of falsely rejecting the null hypothesis is approximately $P[N > 1.645] = .05$ if $n$ is large and fixed. Suppose that $n$ is not fixed in advance of sampling, and show by the law of the iterated logarithm that, even if $p$ is, in fact, $\frac12$, there are with probability 1 infinitely many $n$ for which (35.40) holds.
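Not part of the text: a quick Monte Carlo sketch (all parameter choices ours) of the phenomenon in 35.18. Scanning the running $z$-statistic at every $n$ under the null $p = \frac12$ rejects far more often than the nominal 5%; by the law of the iterated logarithm, the scanning rejection rate tends to 1 as the horizon grows.

```python
import random

random.seed(7)

def first_rejection(max_n=2000):
    """Flip a fair coin max_n times, testing z = (S_n - n/2)/(sqrt(n)/2)
    at every n >= 10; return the first n with z > 1.645, else None."""
    s = 0
    for n in range(1, max_n + 1):
        s += random.random() < 0.5
        z = (s - n / 2) / (n ** 0.5 / 2)
        if n >= 10 and z > 1.645:
            return n
    return None

# A fixed-n test rejects about 5% of the time under p = 1/2, but
# scanning every n up to 2000 rejects noticeably more often.
hits = sum(first_rejection() is not None for _ in range(300))
print(hits, "of 300 runs rejected somewhere before n = 2000")
```

The horizon 2000 and 300 repetitions are arbitrary; pushing the horizon out only raises the count.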
35.19. (a) Suppose that (35.32) and (35.33) hold. Suppose further that, for constants $s_n$, $s_n^{-2}\sum_{k\le n}\sigma_k^2 \to_P 1$ and $s_n^{-2}\sum_{k\le n}E[Y_k^2 I_{[|Y_k|\ge\epsilon s_n]}] \to 0$, and show that $s_n^{-1}\sum_{k\le n}Y_k \Rightarrow N$. Hint: Simplify the proof of Theorem 35.11.
(b) The Lindeberg-Lévy theorem for martingales. Suppose that $\dots, Y_{-1}, Y_0, Y_1, \dots$ is stationary and ergodic (p. 494) and that $E[Y_0^2] < \infty$ and $E[Y_0 \mid Y_{-1}, Y_{-2}, \dots] = 0$. Prove that $\sum_{k\le n}Y_k/\sqrt{n}$ is asymptotically normal. Hint: Use Theorem 36.4 and the remark following the statement of Lindeberg's Theorem 27.2.
35.20. 24.4↑ Suppose that the $\sigma$-field $\mathscr{F}_\infty$ in Problem 24.4 is trivial. Deduce from Theorem 35.9 that $P[A \mid T^{-n}\mathscr{F}] \to P[A \mid \mathscr{F}_\infty] = P(A)$ with probability 1, and conclude that $T$ is mixing.
36.1. ↑ Suppose that $[X_t\colon t \in T]$ is a stochastic process on $(\Omega, \mathscr{F}, P)$ and $A \in \sigma[X_t\colon t \in T]$. Show that there is a countable subset $S$ of $T$ for which $P[A \mid X_t,\, t \in T] = P[A \mid X_t,\, t \in S]$ with probability 1. Replace $A$ by a random variable and prove a similar result.
36.2. Let $T$ be arbitrary and let $K(s,t)$ be a real function over $T \times T$. Suppose that $K$ is symmetric in the sense that $K(s,t) = K(t,s)$ and nonnegative-definite in the sense that $\sum_{i,j=1}^k K(t_i,t_j)x_ix_j \ge 0$ for $k \ge 1$, $t_1, \dots, t_k$ in $T$, and real $x_1, \dots, x_k$. Show that there exists a process $[X_t\colon t \in T]$ for which $(X_{t_1}, \dots, X_{t_k})$ has the centered normal distribution with covariances $K(t_i,t_j)$, $i,j = 1, \dots, k$.
36.3. Let $L$ be a Borel set on the line, let $\mathscr{L}$ consist of the Borel subsets of $L$, and let $L^T$ consist of all maps from $T$ into $L$. Define the appropriate notion of cylinder, and let $\mathscr{L}^T$ be the $\sigma$-field generated by the cylinders. State a version of Theorem 36.1 for $(L^T, \mathscr{L}^T)$. Assume $T$ countable, and …
36.4. Suppose that the random variables $X_1, X_2, \dots$ assume the values 0 and 1 and $P[X_n = 1 \text{ i.o.}] = 1$. Let $\mu$ be the distribution over $(0,1]$ of $\sum_{n=1}^\infty X_n/2^n$. Show that on the unit interval with the measure $\mu$, the digits of the nonterminating dyadic expansion form a stochastic process with the …
36.5. 36.3↑ There is an infinite-dimensional version of Fubini's theorem. In the construction in Problem 36.3, let $L = I = (0,1)$, $T = \{1, 2, \dots\}$, let $\mathscr{I}$ consist of the Borel subsets of $I$, and suppose that each $k$-dimensional distribution is the $k$-fold product of Lebesgue measure over the unit interval. Then $I^T$ is a countable product of copies of $(0,1)$, its elements are sequences $x = (x_1, x_2, \dots)$ of points of $(0,1)$, and Kolmogorov's theorem ensures the existence on $(I^T, \mathscr{I}^T)$ of a product probability measure $\pi$: $\pi[x\colon x_i \le a_i,\, i \le n] = a_1 \cdots a_n$.
(a) Define $\varphi\colon I^n \times I^T \to I^T$ by $\varphi((x_1, \dots, x_n), (y_1, y_2, \dots)) = (x_1, \dots, x_n, y_1, y_2, \dots)$. Show that $\varphi$ is measurable $\mathscr{I}^n \times \mathscr{I}^T/\mathscr{I}^T$ and $\varphi^{-1}$ is measurable $\mathscr{I}^T/\mathscr{I}^n \times \mathscr{I}^T$. Show that $(\lambda_n \times \pi)\varphi^{-1} = \pi$, where $\lambda_n$ is $n$-dimensional Lebesgue measure restricted to $I^n$.
(b) Let $f$ be a function measurable $\mathscr{I}^T$ and, for simplicity, bounded. Define
$$f_n(x_{n+1}, x_{n+2}, \dots) = \int \cdots \int f(y_1, \dots, y_n, x_{n+1}, \dots)\, dy_1 \cdots dy_n;$$
in other words, integrate out the coordinates one by one. Show by Problem 34.18, martingale theory, and the zero-one law that
$$f_n(x_{n+1}, x_{n+2}, \dots) \to \int f\, d\pi \qquad (36.30)$$
except for $x$ in a set of $\pi$-measure 0.
(c) Adopting the point of view of part (a), let $g_n(x_1, \dots, x_n)$ be the result of integrating the variables $(y_{n+1}, y_{n+2}, \dots)$ out (with respect to $\pi$) from $f(x_1, \dots, x_n, y_{n+1}, \dots)$. This may suggestively be written as
$$\int f(x_1, \dots, x_n, y_{n+1}, y_{n+2}, \dots)\, dy_{n+1}\, dy_{n+2} \cdots.$$
Show that $g_n(x_1, \dots, x_n) \to f(x_1, x_2, \dots)$ except for $x$ in a set of $\pi$-measure 0.
36.6. (a) Let $T$ be an interval of the line. Show that $\mathscr{R}^T$ fails to contain the sets of: linear functions, polynomials, constants, nondecreasing functions, functions of bounded variation, differentiable functions, analytic functions, functions continuous at a fixed $t_0$, Borel measurable functions. Show that it fails to contain the set of functions that: vanish somewhere in $T$, satisfy $x(s)$ …
(b) Let $C$ be the set of continuous functions on $T = [0, \infty)$. Show that $A \in \mathscr{R}^T$ and $A \subset C$ imply that $A = \varnothing$. Show, on the other hand, that $A \in \mathscr{R}^T$ and $C \subset A$ do not imply that $A = R^T$.
36.7. Not all systems of finite-dimensional distributions can be realized by stochastic processes for which $\Omega$ is the unit interval. Show that there is on the unit interval with Lebesgue measure no process $[X_t\colon t \ge 0]$ for which the $X_t$ are independent and assume the values 0 and 1 with …
36.8. Here is an application of the existence theorem in which $T$ is not a subset of the line. Let $(N, \mathscr{N}, \nu)$ be a measure space, and take $T$ to consist of the $\mathscr{N}$-sets of finite $\nu$-measure. The problem is to construct a generalized Poisson process, a stochastic process $[X_A\colon A \in T]$ such that (i) $X_A$ has the Poisson distribution with mean $\nu(A)$ and (ii) $X_{A_1}, \dots, X_{A_n}$ are independent if $A_1, \dots, A_n$ are disjoint. Hint: To define the finite-dimensional distributions, generalize this construction: For $A$, $B$ in $T$, consider independent random …
37.1. 36.2↑ Show that $K(s,t) = \min\{s,t\}$ is nonnegative-definite; use Problem 36.2 to prove the existence of a process with the finite-dimensional distributions prescribed for Brownian motion.
37.2. Let $X(t)$ be independent, standard normal variables, one for each dyadic rational $t$ (Theorem 20.4; the unit interval can be used as the probability space). Let $W(0) = 0$ and $W(n) = \sum_{k \le n} X(k)$. Suppose that $W(t)$ is already defined for dyadic rationals of rank $n$, and for $t$ of rank $n+1$ put
$$W(t) = \frac12\bigl[W(t - 2^{-(n+1)}) + W(t + 2^{-(n+1)})\bigr] + \frac{X(t)}{2^{(n+2)/2}}.$$
Prove by induction that the $W(t)$ for dyadic $t$ have the finite-dimensional distributions prescribed for Brownian motion. Now construct a Brownian motion with continuous paths by the argument leading to Theorem 37.1. This avoids an appeal to Kolmogorov's existence theorem.
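Not part of the text: a small numerical sketch of the midpoint (Lévy) refinement in 37.2 as reconstructed above, with our own parameter choices. After ten refinements of $[0,1]$, the increments over the finest grid should behave like i.i.d. $N(0, 2^{-10})$ variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_refine(w, n):
    """Given W at the dyadic rationals k/2^n in [0,1] (length 2^n + 1),
    return W at rank n+1: each new midpoint is the average of its two
    neighbors plus an independent N(0, 2^-(n+2)) perturbation."""
    mids = 0.5 * (w[:-1] + w[1:]) + rng.normal(0.0, 2 ** (-(n + 2) / 2), len(w) - 1)
    out = np.empty(2 * len(w) - 1)
    out[::2], out[1::2] = w, mids
    return out

w = np.array([0.0, rng.standard_normal()])  # W(0) = 0, W(1) ~ N(0,1)
for n in range(10):
    w = levy_refine(w, n)

# Increments over the rank-10 grid should be i.i.d. N(0, 2^-10).
scaled_var = np.diff(w).var() * 2 ** 10
print(round(scaled_var, 2))  # close to 1
```

The conditional variance $2^{-(n+2)}$ of the midpoint term is exactly what makes the interpolation consistent with Brownian increments.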
37.3. ↑ For each $n$ define new variables $W_n(t)$ by setting $W_n(k/2^n) = W(k/2^n)$ for dyadics of rank $n$ and interpolating linearly in between. Set $\Delta_n = \sup_{t \le 1}|W_{n+1}(t) - W_n(t)|$, and show that
$$\Delta_n = \max_{0 \le k < 2^n}\Bigl|W\Bigl(\frac{2k+1}{2^{n+1}}\Bigr) - \frac12\Bigl[W\Bigl(\frac{k}{2^n}\Bigr) + W\Bigl(\frac{k+1}{2^n}\Bigr)\Bigr]\Bigr|.$$
The construction in the preceding problem makes it clear that the difference here is normal with variance $1/2^{n+2}$. Find positive $x_n$ such that $\sum x_n$ and $\sum P[\Delta_n \ge x_n]$ both converge, and conclude that outside a set of probability 0, $W_n(t,\omega)$ converges uniformly over bounded intervals. Replace …
37.4. 36.6↑ Let $T = [0, \infty)$, and let $P$ be a probability measure on $(R^T, \mathscr{R}^T)$ having the finite-dimensional distributions prescribed for Brownian motion. Let $C$ consist of the continuous elements of $R^T$.
(a) Show that $P_*(C) = 0$, or $P^*(R^T - C) = 1$ (see (3.9) and (3.10)). Thus completing $(R^T, \mathscr{R}^T, P)$ will not give $C$ probability 1.
(b) Show that $P^*(C) = 1$.
37.5. Suppose that $[W_t\colon t \ge 0]$ is some stochastic process having independent, stationary increments satisfying $E[W_t] = 0$ and $E[W_t^2] = t$. Show that if the finite-dimensional distributions are preserved by the transformation (37.11), then they must be those of Brownian motion.
37.6. Show that $\bigcap_{t>0}\sigma[W_s\colon s \ge t]$ contains only sets of probability 0 and 1. Do the same for $\bigcap_{t>0}\sigma[W_s\colon 0 < s \le t]$.
37.7. Show by a direct argument that $W(\cdot, \omega)$ is with probability 1 of unbounded variation on $[0,1]$: Let $Y_n = \sum_{i \le 2^n}|W(i2^{-n}) - W((i-1)2^{-n})|$. Show that $Y_n$ has mean $2^{n/2}E[|W_1|]$ and variance at most $\mathrm{Var}[W_1]$. Conclude that $\sum_n P[Y_n < n]$ …
37.8. Show that the Poisson process as defined by (23.5) is measurable.
37.9. Show that for $T = [0, \infty)$ the coordinate-variable process $[Z_t\colon t \in T]$ on $(R^T, \mathscr{R}^T)$ is not measurable.
37.10. Extend Theorem 37.4 to the set $[t\colon W(t, \omega) = a]$.
37.11. Let $\tau_a$ be the first time the Brownian motion hits $a > 0$: $\tau_a = \inf[t\colon W_t \ge a]$. Show that the distribution of $\tau_a$ has over $(0, \infty)$ the density
$$h_a(t) = \frac{a}{\sqrt{2\pi t^3}}\, e^{-a^2/2t}. \qquad (37.46)$$
Show that $E[\tau_a] = \infty$. Show that $\tau_a$ has the same distribution as $a^2/N^2$, where $N$ is a standard normal variable.
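Not part of the text: a Monte Carlo sanity check (our discretization and parameters) of the claim in 37.11 that $\tau_a$ has the same distribution as $a^2/N^2$. Both samples are conditioned on hitting before a fixed horizon, since the simulated paths are truncated there.

```python
import numpy as np

rng = np.random.default_rng(1)
a, dt, horizon = 1.0, 2e-3, 8.0
n_steps, n_paths = int(horizon / dt), 2000

# First passage of level a by discretized Brownian paths.
paths = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)).cumsum(axis=1)
reached = (paths >= a).any(axis=1)
tau = (paths >= a).argmax(axis=1)[reached] * dt

# The claimed distribution a^2/N^2, also conditioned on <= horizon.
ref = a ** 2 / rng.standard_normal(200000) ** 2
ref = ref[ref <= horizon]
print(round(np.median(tau), 2), round(np.median(ref), 2))
```

The two medians should be close; the discretized path slightly overestimates $\tau_a$ because it misses excursions between grid points.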
37.12. ↑ (a) Show by the strong Markov property that $\tau_a$ and $\tau_{a+b} - \tau_a$ are independent and that the latter has the same distribution as $\tau_b$. Conclude that $h_a * h_b = h_{a+b}$. Show that $b^2\tau_a$ has the same distribution as $\tau_{ab}$.
(b) Show that each $h_a$ is stable (see Problem 28.10).
37.13. ↑ Suppose that $X_1, X_2, \dots$ are independent and each has the distribution (37.46).
(a) Show that $(X_1 + \cdots + X_n)/n^2$ also has the distribution (37.46). Contrast this with the law of large numbers.
(b) Show that $P[n^{-2}\max_{k \le n}X_k \le x] \to \exp(-a\sqrt{2/\pi x})$ for $x > 0$. Relate this to Theorem 14.3.
37.14. 37.11↑ Let $p(s,t)$ be the probability that a Brownian path has at least one zero in $(s,t)$. From (37.46) and the Markov property deduce
$$p(s,t) = \frac{2}{\pi}\arccos\sqrt{s/t}. \qquad (37.47)$$
Hint: Condition with respect to $W_s$.
37.15. ↑ (a) Show that the probability of no zero in $(t,1)$ is $(2/\pi)\arcsin\sqrt{t}$ and hence that the position of the last zero preceding 1 is distributed over $(0,1)$ with density $\pi^{-1}(t(1-t))^{-1/2}$.
(b) Similarly calculate the distribution of the position of the first zero following time 1.
(c) Calculate the joint distribution of the two zeros in (a) and (b).
37.16. ↑ (a) Show by Theorem 37.8 that $\inf_{u \le t}Y_n(u)$ and $\inf_{u \le t}Z_n(u)$ both converge in distribution to $\inf_{u \le t}W(u)$ …
(b) Let $A_n(s,t)$ be the event that $S_k$, the position at time $k$ in a symmetric random walk, is 0 for at least one $k$ in the range $sn \le k \le tn$, and show that $P(A_n(s,t)) \to (2/\pi)\arccos\sqrt{s/t}$.
(c) Let $T_n$ be the maximum $k$ such that $k \le n$ and $S_k = 0$. Show that $T_n/n$ has asymptotically the distribution with density $\pi^{-1}(t(1-t))^{-1/2}$ over $(0,1)$. As this density is larger at the ends of the interval than in the middle, the last time during a night's play a gambler was even is more likely to be either early or late than to be around midnight.
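Not part of the text: a simulation sketch (parameters ours) of the arcsine law in 37.16(c). The empirical distribution of $T_n/n$, the last return to 0 of a symmetric random walk, is compared with $(2/\pi)\arcsin\sqrt{t}$ at one point.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2000, 4000

walks = rng.choice([-1, 1], size=(reps, n)).cumsum(axis=1)
zero = walks == 0
# T_n = last k <= n with S_k = 0 (0 if the walk never returns).
last = np.where(zero.any(axis=1), n - zero[:, ::-1].argmax(axis=1), 0)
frac = last / n

# Arcsine law: P[T_n/n <= t] -> (2/pi) arcsin sqrt(t).
emp = (frac <= 0.2).mean()
theory = (2 / np.pi) * np.arcsin(np.sqrt(0.2))
print(round(emp, 3), round(theory, 3))
```

The U-shaped density means roughly 30% of the mass sits below $t = 0.2$ alone, which is the "early or late" effect described in (c).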
37.17. ↑ Show that $p(s,t) = p(t^{-1}, s^{-1}) = p(cs, ct)$. Check this by (37.47) and also by the fact that the transformations (37.11) and (37.12) preserve the properties of Brownian motion.
37.18. Deduce by the reflection principle that $(M_t, W_t)$ has density
$$\frac{2(2y-x)}{\sqrt{2\pi t^3}}\exp\Bigl(-\frac{(2y-x)^2}{2t}\Bigr)$$
on the set where $y \ge 0$ and $y \ge x$. Now deduce from Theorem 37.8 the corresponding limit theorem for symmetric random walk.
37.19. Show by means of the transformation (37.12) that for positive $a$ and $b$ the probability is 1 that the process is within the boundary $-at < W_t < bt$ for all sufficiently large $t$. Show that $a/(a+b)$ is the probability that it last touches above rather than below.
37.20. The martingale calculation used for (37.39) also works for slanting boundaries. For positive $a$, $b$, $r$, let $\tau$ be the smallest $t$ such that either $W_t = -a + rt$ or $W_t = b + rt$, and let $p(a,b,r)$ be the probability that the exit is through the upper barrier, that is, $W_\tau = b + r\tau$.
(a) For the martingale $Y_{\theta,t}$ in the proof of Lemma 2, show that $E[Y_{\theta,\tau}] = 1$. Operating formally at first, conclude that
$$E[e^{\theta W_\tau - \theta^2\tau/2}] = 1. \qquad (37.48)$$
Take $\theta = 2r$, and note that $\theta W_\tau - \theta^2\tau/2$ is then $2rb$ if the exit is above (probability $p(a,b,r)$) and $-2ra$ if the exit is below (probability $1 - p(a,b,r)$). Deduce
$$p(a,b,r) = \frac{1 - e^{-2ra}}{e^{2rb} - e^{-2ra}}.$$
(b) Show that $p(a,b,r) \to a/(a+b)$ as $r \to 0$, in agreement with (37.39).
(c) It remains to justify (37.48) for $\theta = 2r$. From $E[Y_{\theta,t}] = 1$ deduce
$$E[e^{\theta W_\sigma - \theta^2\sigma/2}] = 1 \qquad (37.49)$$
for nonrandom $\sigma$. By the arguments in the proofs of Lemmas 2 and 3, show that (37.49) holds for simple stopping times $\sigma$, for bounded ones, for $\sigma = \tau \wedge n$, for $\sigma = \tau$.
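Not part of the text: a numerical check of the exit probability derived in 37.20(a) and its limit in (b), using the formula as reconstructed above.

```python
import math

def p(a, b, r):
    """Exit-through-the-upper-barrier probability from part (a):
    p(a,b,r) = (1 - e^{-2ra}) / (e^{2rb} - e^{-2ra})."""
    return (1 - math.exp(-2 * r * a)) / (math.exp(2 * r * b) - math.exp(-2 * r * a))

a, b = 1.0, 2.0
for r in (1.0, 0.1, 0.01, 0.001):
    print(r, round(p(a, b, r), 4))
# As r -> 0 the values approach a/(a+b) = 1/3, which is part (b).
```

Expanding numerator and denominator to first order in $r$ gives $2ra/(2rb + 2ra) = a/(a+b)$, the flat-boundary answer in (37.39).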
27.1. Prove Theorem 23.2 by means of characteristic functions. Hint: Use (27.5) to compare the characteristic function of $\sum_k Z_{nk}$ with $\exp[\sum_k p_{nk}(e^{it} - 1)]$.
27.2. If $\{X_n\}$ is independent and the $X_n$ all have the same distribution with finite first moment, then $n^{-1}S_n \to E[X_1]$ with probability 1 (Theorem 22.1), so that $n^{-1}S_n \Rightarrow E[X_1]$. Prove the latter fact by characteristic functions. Hint: Use (27.5).
27.3. For a Poisson variable $Y_\lambda$ with mean $\lambda$, show that $(Y_\lambda - \lambda)/\sqrt{\lambda} \Rightarrow N$ as $\lambda \to \infty$. Show that (22.3) fails for …
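Not part of the text: a quick numerical illustration (parameters ours) of the normal approximation in 27.3, comparing standardized Poisson tail frequencies with the corresponding normal probabilities.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 400.0
y = rng.poisson(lam, 100000)
z = (y - lam) / np.sqrt(lam)

# Standardized Poisson tails vs. the normal benchmarks.
upper = (z > 1.645).mean()      # normal: 0.05
two_sided = (np.abs(z) > 1.96).mean()  # normal: 0.05
print(round(upper, 3), round(two_sided, 3))
```

Both frequencies land near 0.05, up to sampling noise and the lattice spacing $1/\sqrt{\lambda}$ of the standardized variable.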
27.4. Suppose that $|X_{nk}| \le M_n$ with probability 1 and $M_n/s_n \to 0$. Verify Lyapounov's condition and then Lindeberg's condition.
27.5. Suppose that the random variables in any single row of the triangular array are identically distributed. To what do Lindeberg's and Lyapounov's conditions reduce?
27.6. Suppose that $Z_1, Z_2, \dots$ are independent and identically distributed with mean 0 and variance 1, and suppose that $X_{nk} = \sigma_{nk}Z_k$. Write down the Lindeberg condition and show that it holds if $\max_{k \le r_n}\sigma_{nk}/s_n \to 0$.
27.7. Construct an example where Lindeberg's condition holds but Lyapounov's does not.
27.8. 22.9↑ Prove a central limit theorem for the number $R_n$ of records up to time $n$.
27.9. 6.3↑ Let $S_n$ be the number of inversions in a random permutation on $n$ letters. Prove a central limit theorem for $S_n$.
27.10. The $\delta$-method. Suppose that Theorem 27.1 applies to $\{X_n\}$, so that $\sqrt{n}\,\sigma^{-1}(\bar X_n - c) \Rightarrow N$, where $\bar X_n = n^{-1}\sum_{k \le n}X_k$. Use Theorem 25.6 as in Example 27.2 to show that, if $f(x)$ has a nonzero derivative at $c$, then $\sqrt{n}\,(f(\bar X_n) - f(c))/\sigma|f'(c)| \Rightarrow N$: $\bar X_n$ is approximately normal with mean $c$ and standard deviation $\sigma/\sqrt{n}$, and $f(\bar X_n)$ is approximately normal with mean $f(c)$ and standard deviation $|f'(c)|\sigma/\sqrt{n}$. Example 27.2 is the case $f(x) = 1/x$.
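Not part of the text: a simulation sketch (our choice of distribution) of the $\delta$-method in 27.10 with $f(x) = 1/x$, the case of Example 27.2.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 500, 20000

# X_i ~ Exp(1), so c = E[X_1] = 1, sigma = 1, and |f'(c)| = 1 for
# f(x) = 1/x; hence sqrt(n)(1/Xbar_n - 1) should be roughly N(0,1).
xbar = rng.exponential(1.0, (reps, n)).mean(axis=1)
t = np.sqrt(n) * (1.0 / xbar - 1.0)
print(round(t.mean(), 2), round(t.std(), 2))  # near 0 and 1
```

The small positive bias of order $1/\sqrt{n}$ in the mean comes from the curvature of $f$, which the first-order $\delta$-method ignores.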
27.11. Suppose independent $X_n$ have density $|x|^{-3}$ outside $(-1, +1)$. Show that $(n\log n)^{-1/2}S_n \Rightarrow N$.
27.12. There can be asymptotic normality even if there are no moments at all. Construct a simple example.
27.13. Let $d_n(\omega)$ be the dyadic digits of a point $\omega$ drawn at random from the unit interval. For a $k$-tuple $(u_1, \dots, u_k)$ of 0's and 1's, let $N_n(u_1, \dots, u_k; \omega)$ be the number of $m \le n$ for which $(d_m(\omega), \dots, d_{m+k-1}(\omega)) = (u_1, \dots, u_k)$. Prove a central limit theorem for $N_n(u_1, \dots, u_k; \omega)$. (See …
27.14. The central limit theorem for a random number of summands. Let $X_1, X_2, \dots$ be independent, identically distributed random variables with mean 0 and variance $\sigma^2$, and let $S_n = X_1 + \cdots + X_n$. For each positive $t$, let $\nu_t$ be a random variable assuming positive integers as values; it need … Suppose that there exist positive constants $a_t$ and $\theta$ such that $\nu_t/a_t \to_P \theta$ as $t \to \infty$. Show by the following steps that
$$\frac{S_{\nu_t}}{\sigma\sqrt{\nu_t}} \Rightarrow N, \qquad \frac{S_{\nu_t}}{\sigma\sqrt{\theta a_t}} \Rightarrow N. \qquad (27.27)$$
(a) Show that it may be assumed that $\theta = 1$ and the $a_t$ are integers.
(b) Show that it suffices to prove the second relation in (27.27).
(c) Show that it suffices to prove $(S_{\nu_t} - S_{a_t})/\sqrt{a_t} \Rightarrow 0$.
(d) Show that
$$P[|S_{\nu_t} - S_{a_t}| \ge \epsilon\sqrt{a_t}] \le P[|\nu_t - a_t| \ge \epsilon^3 a_t] + P\Bigl[\max_{|k - a_t| \le \epsilon^3 a_t}|S_k - S_{a_t}| \ge \epsilon\sqrt{a_t}\Bigr],$$
and conclude from Kolmogorov's inequality that the last probability is at most $2\epsilon$.
27.15. 21.2↑ 23.10↑ 23.14↑ A central limit theorem in renewal theory. Let $X_1, X_2, \dots$ be independent, identically distributed positive random variables with mean $m$ and variance $\sigma^2$, and as in Problem 23.10 let $N_t$ be the maximum $n$ for which $S_n \le t$. Prove by the following steps that
$$\frac{N_t - tm^{-1}}{\sigma t^{1/2}m^{-3/2}} \Rightarrow N.$$
(a) Show by the results in Problems 21.2 and 23.10 that $(S_{N_t} - t)/\sqrt{t} \Rightarrow 0$.
(b) Show that it suffices to prove that
$$\frac{N_t - S_{N_t}m^{-1}}{\sigma t^{1/2}m^{-3/2}} = \frac{-(S_{N_t} - mN_t)}{\sigma t^{1/2}m^{-1/2}} \Rightarrow N.$$
(c) Show (Problem 23.10) that $N_t/t \Rightarrow m^{-1}$, and apply the theorem in Problem 27.14.
27.16. Show by partial integration that
$$\int_x^\infty e^{-u^2/2}\, du \sim \frac{1}{x}e^{-x^2/2} \qquad (27.28)$$
as $x \to \infty$.
27.17. ↑ Suppose that $X_1, X_2, \dots$ are independent and identically distributed with mean 0 and variance 1, and suppose that $a_n \to \infty$. Formally combine the central limit theorem and (27.28) to obtain
$$P[S_n \ge a_n\sqrt{n}] \sim \frac{1}{\sqrt{2\pi}\,a_n}e^{-a_n^2(1+\theta_n)/2}, \qquad (27.29)$$
where $\theta_n \to 0$. For a case in which this does hold, see Theorem 9.4.