Probability and Measure (Wiley Series in Probability and Mathematical Statistics), 3rd Edition, Patrick Billingsley - Solutions
7.1. A gambler with initial capital a plays until his fortune increases b units or he is ruined. Suppose that p > ½. The chance of success is multiplied by 1 + θ if his initial capital is infinite instead of a. Show that 0 < θ ...

7.2. As shown on p. 94, there is probability 1 that the gambler either achieves his goal of c or is ruined. For p ≠ q, deduce this directly from the strong law of large numbers. Deduce it (for all p) via the Borel-Cantelli lemma from the fact that if play never terminates, there can never occur c successive +1's.

7.3. (6.12 ↑) If V_n is the set of n-long sequences of ±1's, the function b_n in (7.9) maps V_{n−1} into {0, 1}. A selection system is a sequence of such maps. Although there are uncountably many selection systems, how many have an effective description in the sense of an algorithm or finite set of instructions by means of which a deputy (perhaps a machine) could operate the system for the gambler? An analysis of the question is a matter for mathematical logic, but one can see that there can be only countably many algorithms or finite sets of rules expressed in finite alphabets. Let Y_1^(σ), Y_2^(σ), ... be the random variables of Theorem 7.1 for a particular system σ, and let C_σ be the ω-set where every k-tuple of ±1's (k arbitrary) occurs in Y_1^(σ)(ω), Y_2^(σ)(ω), ... with the right asymptotic relative frequency (in the sense of Problem 6.12). Let C be the intersection of C_σ over all effective selection systems σ. Show that C lies in the σ-field ℱ and that P(C) = 1. A sequence (X_1(ω), X_2(ω), ...) for ω in C is called a collective: a subsequence chosen by any of the effective rules σ contains all k-tuples in the correct proportions.

7.4. Let D_n be 1 or 0 according as X_{2n−1} ≠ X_{2n} or not, and let M_k be the time of the kth 1, that is, the smallest n such that Σ_{j≤n} D_j = k. Let Z_k = X_{2M_k}. In other words, look at successive nonoverlapping pairs (X_{2n−1}, X_{2n}), discard accordant (X_{2n−1} = X_{2n}) pairs, and keep the second element of discordant pairs. Show that this process simulates a fair coin: Z_1, Z_2, ... are independent and identically distributed and P[Z_k = +1] = P[Z_k = −1] = ½, whatever p may be. Follow the proof of Theorem 7.1.

7.5. Suppose that a gambler with initial fortune 1 stakes a proportion θ (0 < θ < 1) of his current fortune ...

7.6. In "doubling," W_1 = 1, W_n = 2W_{n−1}, and the rule is to stop after the first win. For any positive p, play is certain to terminate. Here F_τ = F_0 + 1, but of course infinite capital is required. If F_0 = 2^k − 1 and W_n cannot exceed F_{n−1}, the probability of F_τ = F_0 + 1 in the fair case is 1 − 2^{−k}. Prove this via Theorem 7.2 and also directly.

7.7. In "progress and pinch," the wager, initially some integer, is increased by 1 after a loss and decreased by 1 after a win, the stopping rule being to quit if the next bet is 0. Show that play is certain to terminate if and only if p ≥ ½. Show that F_τ = F_0 + ½W_1² + ½(τ − 1). Infinite capital is again required.

7.8. Here is a common martingale. Just before the nth spin of the wheel, the gambler has before him a pattern x_1, ..., x_k of positive numbers (k varies with n). He bets x_1 + x_k, or x_1 in case k = 1. If he loses, at the next stage he uses the pattern x_1, ..., x_k, x_1 + x_k (x_1, x_1 in case k = 1) ...

7.9. Suppose that W_n = 1, so that F_n = F_0 + S_n. Suppose that p ≥ q and τ is a stopping time such that 1 ≤ τ ≤ n with probability 1. Show that E[F_τ] ≤ E[F_n], with equality in case p = q. Interpret this result in terms of a stock option that must be exercised by time n, where F_0 + S_k is the price of the stock at time k.

7.10. For a given policy, let A_n* be the fortune of the gambler's adversary at time n. Consider these conditions on the policy: (i) W_n* ≤ F_{n−1}*; (ii) W_n* ≤ A_{n−1}*; (iii) F_n* + A_n* is constant. Interpret each condition, and show that together they imply that the policy is bounded in the sense of (7.24).

Show that F_τ has infinite range if F_0 = 1, W_n = 2^{−n}, and τ is the smallest n for which X_n = +1.
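Problem 7.4 is the classical device for extracting a fair coin from a biased one. The following minimal simulation sketch is not from the text; the bias p = 0.7, the number of pairs, the seed, and the helper name fair_bits are arbitrary illustrative choices.

import random

def fair_bits(p, n_pairs, seed=0):
    # Simulate problem 7.4: toss a p-biased coin in nonoverlapping pairs,
    # discard accordant pairs, and keep the second element of each discordant pair.
    rng = random.Random(seed)
    toss = lambda: 1 if rng.random() < p else -1   # X_n = +1 with probability p
    kept = []
    for _ in range(n_pairs):
        x1, x2 = toss(), toss()
        if x1 != x2:                               # discordant pair
            kept.append(x2)
    return kept

z = fair_bits(p=0.7, n_pairs=100000)
print(len(z), "values kept")
print("empirical P[Z = +1]:", sum(1 for v in z if v == 1) / len(z))   # close to 1/2

Whatever the bias p, a discordant pair is (+1, -1) or (-1, +1) with the same probability pq, so the kept values are fair; the simulation just makes that visible.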
Example 7.6. The gambler has initial capital a and plays at unit stakes until his capital increases to c (0 ≤ a ≤ c) or he is ruined. Here F_0 = a and W_n = 1, and so F_n = a + S_n. The policy is bounded by c, and F_τ is c or 0 according as the gambler succeeds or fails. If p = ½ and if s is the probability of success ...

Example 7.7. Suppose as before that F_0 = a and W_n = 1, so that F_n = a + S_n, but suppose the stopping rule is to quit as soon as F_n reaches a + b. Here F_n* is bounded above by a + b but is not bounded below. If p = ½, the gambler is by (7.8) certain to achieve his goal, so that F_τ = a + b. ...

For some other systems (gamblers call them "martingales"), see the problems. For most such systems there is a large chance of a small gain and a small chance of a large loss.

CASE 1. s ≤ ½. By the first part of (7.30), A(r, s) = pA(2r, 2s). Since 2r and 2s have the form k/2^n, the induction hypothesis implies that A(2r, 2s) ≥ 0.

CASE 2. ½ ≤ r. By the second part of (7.30), A(r, s) = qA(2r − 1, 2s − 1) ≥ 0.

CASE 3. r ≤ a ≤ ½ ≤ s. By (7.30), A(r, s) = pQ(2a) − p[p + qQ(2s − 1)] − q[pQ(2r)]. From ½ ≤ s ≤ r + s = 2a ≤ 1 follows Q(2a) = p + qQ(4a − 1); and from 0 ≤ 2a − ½ ≤ ½ follows Q(2a − ½) = pQ(4a − 1). Therefore, pQ(2a) = p² + qQ(2a − ½), and it follows that A(r, s) = q[Q(2a − ½) − pQ(2s − 1) − pQ(2r)]. Since p ...

CASE 4. r ≤ ½ ≤ a ≤ s. By (7.30), A(r, s) = pq + qQ(2a − 1) − pqQ(2s − 1) − pqQ(2r). From 0 ≤ 2a − 1 = r + s − 1 ≤ ½ follows Q(2a − 1) = pQ(4a − 2); and from ½ ≤ 2a − ½ = r + s − ½ ≤ 1 follows Q(2a − ½) = p + qQ(4a − 2). Therefore, qQ(2a − 1) = pQ(2a − ½) − p², and it follows that A(r, s) = p[q − p + Q(2a − ½) − qQ(2s − 1) − qQ(2r)] ...

Example 7.8. The gambler of Example 7.1 has capital $900 and goal $1000. For a fair game (p = ½) his chance of success is .9 whether he bets unit stakes or adopts bold play. At red-and-black (p = 18/38), his chance of success with unit stakes is .00003; an approximate calculation based on (7.31) shows that under bold play his chance Q(.9) of success increases to about .88, which compares well with the fair case.

Example 7.9. In Example 7.2 the capital is $100 and the goal $20,000. At unit stakes the chance of success is .005 for p = ½ and 3 × 10^{−911} for p = 18/38. Another approximate calculation shows that bold play at red-and-black gives the gambler probability about .003 of success, which again compares well with the fair case.

This example illustrates the point of Theorem 7.3. The gambler enters the casino knowing that he must by dawn convert his $100 into $20,000 or face certain death at the hands of criminals to whom he owes that amount. Only red-and-black is available to him. The question is not whether to gamble; he must gamble. The question is how to gamble so as to maximize the chance of survival, and bold play is the answer.

There are policies other than the bold one that achieve the maximum success probability Q(x). Suppose that as long as the gambler's fortune x is less than ½ he bets x for x ≤ ¼ and ½ − x for ¼ ≤ x < ½. This is, in effect, the bold-play strategy scaled down to the interval [0, ½], and so the chance he ever reaches ½ is Q(2x) for an initial fortune of x. Suppose further that if he does reach the goal of ½, or if he starts with fortune at least ½ in the first place, then he continues, but with ordinary bold play. For an initial fortune x ≥ ½, the overall chance of success is of course Q(x), and for an initial fortune x < ½, it is Q(2x)Q(½) = pQ(2x) = Q(x). The success probability is indeed Q(x) as for bold play, although the policy is different. With this example in mind, one can generate a whole series of distinct optimal policies.
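The CASE 1-4 computations above repeatedly apply the two halves of (7.30), the functional equation for the bold-play success probability Q: Q(x) = pQ(2x) for 0 ≤ x ≤ ½ and Q(x) = p + qQ(2x − 1) for ½ ≤ x ≤ 1, with Q(0) = 0 and Q(1) = 1. The sketch below simply iterates that equation numerically; it is not the book's calculation (which is based on (7.31)), and the truncation depth is an arbitrary choice. It reproduces the figures quoted in Examples 7.8 and 7.9.

def Q(x, p, depth=200):
    # Bold-play success probability from the functional equation
    # Q(x) = p*Q(2x) on [0, 1/2] and Q(x) = p + q*Q(2x - 1) on [1/2, 1].
    # Cutting off at `depth` changes the answer by at most max(p, q)**depth.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    if depth == 0:
        return 0.0
    q = 1.0 - p
    if x <= 0.5:
        return p * Q(2.0 * x, p, depth - 1)
    return p + q * Q(2.0 * x - 1.0, p, depth - 1)

p = 18 / 38                      # red-and-black
print(round(Q(0.9, p), 3))       # about .88  (Example 7.8: capital 900, goal 1000)
print(round(Q(0.005, p), 4))     # about .003 (Example 7.9: capital 100, goal 20000)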
6.1. Show that Z_n → Z with probability 1 if and only if for every positive ε there exists an n such that P[|Z_k − Z| < ε for all k ≥ n] > 1 − ε.

6.2. Show in Example 6.3 that P[|S_n − L_n| ≥ L_n^{1/2+ε}] → 0.

6.3. As in Examples 5.6 and 6.3, let ω be a random permutation of 1, 2, ..., n. Each k, 1 ≤ k ≤ n, occupies some position in the bottom row of the permutation ω; let X_k(ω) be the number of smaller elements (between 1 and k − 1) lying to the right of k in the bottom row. The sum S_n = X_1 + ⋯ + X_n is the total number of inversions, the number of pairs appearing in the bottom row in reverse order of size. For the permutation in Example 5.6 the values ...

6.4. For a function f on [0, 1], write ‖f‖ = sup_x |f(x)|. Show that, if f has a continuous derivative f′, then ‖f − B_n‖ ≤ ε‖f′‖ + 2‖f‖/(nε²). Conclude that ‖f − B_n‖ = O(n^{−1/3}).

6.5. Prove Poisson's theorem: If A_1, A_2, ... are independent events, p̄_n = n^{−1} Σ_{k=1}^n P(A_k), and N_n = Σ_{k=1}^n I_{A_k}, then n^{−1}N_n − p̄_n → 0 in probability.

In the following problems, S_n = X_1 + ⋯ + X_n.

6.6. Prove Cantelli's theorem: If X_1, X_2, ... are independent, E[X_n] = 0, and E[X_n^4] is bounded, then n^{−1}S_n → 0 with probability 1. The X_n need not be identically distributed.

6.7. (a) Let x_1, x_2, ... be a sequence of real numbers, and put s_n = x_1 + ⋯ + x_n. Suppose that n^{−2}s_{n²} → 0 and that the x_n are bounded, and show that n^{−1}s_n → 0.
(b) Suppose that n^{−2}S_{n²} → 0 with probability 1 and that the X_n are uniformly bounded (sup_{n,ω} |X_n(ω)| < ∞). Show that n^{−1}S_n → 0 with probability 1. Here the X_n need not be identically distributed or even independent.

6.8. ↑ Suppose that X_1, X_2, ... are independent and uniformly bounded and E[X_n] = 0. Using only the preceding result, the first Borel-Cantelli lemma, and Chebyshev's inequality, prove that n^{−1}S_n → 0 with probability 1.

6.9. ↑ Use the ideas of Problem 6.8 to give a new proof of Borel's normal number theorem, Theorem 1.2. The point is to return to first principles and use only negligibility and the other ideas of Section 1, not the apparatus of Sections 2 through 6; in particular, P(A) is to be taken as defined ...

6.10. (5.11, 6.7 ↑) Suppose that (in the notation of (5.41)) β_n − α_n² = O(1/n). Show that n^{−1}N_n − α_n → 0 with probability 1. What condition on β_n − α_n² will imply a weak law? Note that independence is not assumed here.

6.11. Suppose that X_1, X_2, ... are m-dependent in the sense that random variables more than m apart in the sequence are independent. More precisely, let ℱ_i^j = σ(X_i, ..., X_j), and assume that ℱ_{k_1}^{l_1}, ..., ℱ_{k_r}^{l_r} are independent if k_{i+1} − l_i > m for each i ...

6.12. ↑ Suppose that the X_n are independent and assume the values x_1, ..., x_r with probabilities p(x_1), ..., p(x_r). For u_1, ..., u_k a k-tuple of the x_i's, let N_n(u_1, ..., u_k) be the frequency of the k-tuple in the first n + k − 1 trials, that is, the number of t such that 1 ≤ t ≤ n and X_t = u_1, ..., X_{t+k−1} = u_k. Show that with probability 1, all asymptotic relative frequencies are what they should be; that is, with probability 1, n^{−1}N_n(u_1, ..., u_k) → p(u_1) ⋯ p(u_k) for all k and all k-tuples.

6.13. ↑ A number ω in the unit interval is completely normal if, for every base b and every k and every k-tuple of base-b digits, the k-tuple appears in the base-b expansion of ω with asymptotic relative frequency b^{−k}. Show that the set of completely normal numbers has Lebesgue measure 1.

6.14. Shannon's theorem. Suppose that X_1, X_2, ... are independent, identically distributed random variables taking on the values 1, ..., r with positive probabilities p_1, ..., p_r. If p_n(i_1, ..., i_n) = p_{i_1} ⋯ p_{i_n} and p_n(ω) = p_n(X_1(ω), ..., X_n(ω)), then p_n(ω) is the probability that a new sequence of n trials would reproduce the particular sequence observed. Show that −n^{−1} log p_n(ω) → h with probability 1, where h = −Σ_i p_i log p_i. In information theory X_1, X_2, ... are the successive letters produced by an information source, and h is the entropy of the source. Prove the asymptotic equipartition property: for large n there is probability exceeding 1 − ε that the probability p_n(ω) of the observed n-long sequence, or message, is in the range (e^{−n(h+ε)}, e^{−n(h−ε)}).

6.15. In the terminology of Example 6.5, show that log₂ n + log₂ log₂ n + θ log₂ log₂ log₂ n is an outer or inner boundary as θ > 1 or θ ≤ 1. Generalize. (Compare Problem 4.12.)

6.16. (5.20 ↑) Let g(m) = Σ_p δ_p(m) be the number of distinct prime divisors of m. For a_n = E_n[g] (see (5.46)), show that a_n − Σ_{p≤n} p^{−1} → 0. Show that (6.8) holds for p ≠ q, and hence that the variance of g under P_n satisfies Var_n[g] ≤ 3 Σ_{p≤n} p^{−1} (6.9). Prove the Hardy-Ramanujan theorem: for each positive ε, lim_n P_n[m: |g(m)/log log n − 1| ≥ ε] = 0. Since a_n ~ log log n (see Problem 18.17), most integers under n have something like log log n distinct prime divisors. Since log log 10^7 is a little less than 3, the typical integer under 10^7 has about three prime factors, remarkably few.

6.17. Suppose that X_1, X_2, ... are independent and P[X_n = 0] = p. Let L_n be the length of the run of 0's starting at the nth place: L_n = k if X_n = ⋯ = X_{n+k−1} = 0 ≠ X_{n+k}. Show that P[L_n ≥ r_n i.o.] is 0 or 1 according as Σ_n p^{r_n} converges or diverges. Example 6.5 covers the case p = ½.
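The conclusion of Problem 6.14 is easy to check numerically. The sketch below is not from the text; the alphabet, the probabilities, n, and the seed are arbitrary illustrative choices. It compares −n^{−1} log p_n(ω) for one simulated message with the entropy h.

import math
import random

probs = [0.5, 0.25, 0.125, 0.125]              # illustrative p_1, ..., p_r
h = -sum(p * math.log(p) for p in probs)       # entropy h = -sum_i p_i log p_i

rng = random.Random(1)
n = 100000
letters = rng.choices(range(len(probs)), weights=probs, k=n)

# -n^{-1} log p_n(omega) is the average of -log p_i over the observed letters
empirical = -sum(math.log(probs[i]) for i in letters) / n
print("h =", round(h, 4))
print("-n^-1 log p_n(omega) =", round(empirical, 4))
# The two agree for large n, so p_n(omega) is close to exp(-n*h),
# which is what the asymptotic equipartition property asserts.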
5.1. (a) Show that X is measurable with respect to the σ-field 𝒢 if and only if σ(X) ⊂ 𝒢. Show that X is measurable σ(Y) if and only if σ(X) ⊂ σ(Y).
(b) Show that, if 𝒢 = {∅, Ω}, then X is measurable 𝒢 if and only if X is constant.
(c) Suppose that P(A) is 0 or 1 for every A in 𝒢. This holds, for example, if 𝒢 is the tail field of an independent sequence (Theorem 4.5), or if 𝒢 consists of the countable and cocountable sets on the unit interval with Lebesgue measure. Show that if X is measurable 𝒢, then P[X = c] = 1 for some constant c.

5.2. (2.19 ↑) Show that the unit interval can be replaced by any nonatomic probability measure space in the proof of Theorem 5.3.

5.3. Show that m = E[X] minimizes E[(X − m)²].

5.4. Suppose that X assumes the values m − a, m, m + a with probabilities p, 1 − 2p, p, and show that there is equality in (5.32). Thus Chebyshev's inequality cannot be improved without special assumptions on X.

5.5. Suppose that X has mean m and variance σ².
(a) Prove Cantelli's inequality: P[X − m ≥ a] ≤ σ²/(σ² + a²) for a ≥ 0.
(b) Show that P[|X − m| ≥ a] ≤ 2σ²/(a² + σ²). When is this better than Chebyshev's inequality?
(c) By considering a random variable assuming two values, show that Cantelli's inequality is sharp.

5.6. The polynomial E[(t|X| + |Y|)²] in t has at most one real zero. Deduce Schwarz's inequality once more.

5.7. (a) Write (5.37) in the form E^{β/α}[|X|^α] ≤ E[|X|^β] and deduce it directly from Jensen's inequality.
(b) Prove that E[1/X^p] ≥ 1/E^p[X] for p > 0 and X a positive random variable.

5.8. (a) Let f be a convex real function on a convex set C in the plane. Suppose that (X(ω), Y(ω)) ∈ C for all ω and prove a two-dimensional Jensen's inequality:
(5.38) f(E[X], E[Y]) ≤ E[f(X, Y)].
(b) Show that f is convex if it has continuous second derivatives that satisfy
(5.39) f₁₁ ≥ 0, f₂₂ ≥ 0, f₁₁f₂₂ ≥ f₁₂².

5.9. ↑ Hölder's inequality is equivalent to E[X^{1/p} Y^{1/q}] ≤ E^{1/p}[X] E^{1/q}[Y] (p^{−1} + q^{−1} = 1), where X and Y are nonnegative random variables. Derive this from (5.38).

5.10. ↑ Minkowski's inequality is
(5.40) E^{1/p}[|X + Y|^p] ≤ E^{1/p}[|X|^p] + E^{1/p}[|Y|^p],
valid for p > 1. It is enough to prove that E[(X^{1/p} + Y^{1/p})^p] ...

5.11. For events A_1, A_2, ..., not necessarily independent, let N_n = Σ_{k=1}^n I_{A_k} be the number to occur among the first n. Let
(5.41) α_n = n^{−1} Σ_{k=1}^n P(A_k), β_n = [2/(n(n−1))] Σ_{1≤j<k≤n} P(A_j ∩ A_k).
Show that
(5.42) E[n^{−1}N_n] = α_n, Var[n^{−1}N_n] = β_n − α_n² + (α_n − β_n)/n.
Thus Var[n^{−1}N_n] → 0 if and only if β_n − α_n² → 0, which holds if the A_n are independent and P(A_n) = p (Bernoulli trials), because then α_n = p and β_n = p².

5.12. Show that, if X has nonnegative integers as values, then E[X] = Σ_{n=1}^∞ P[X ≥ n].

5.13. Let I_1, ..., I_n be the indicators of n events having union A. Let S_k = Σ I_{i_1} ⋯ I_{i_k}, where the summation extends over all k-tuples satisfying 1 ≤ i_1 < ⋯ < i_k ≤ n. Then s_k = E[S_k] are the terms in the inclusion-exclusion formula P(A) = s_1 − s_2 + ⋯ + (−1)^{n+1} s_n. Deduce the inclusion-exclusion formula ...

5.14. Let f_n(x) be n²x or 2n − n²x or 0 according as 0 ≤ x ≤ 1/n, 1/n ≤ x ≤ 2/n, or 2/n ≤ x ≤ 1 ...

5.15. By Theorem 5.3, for any prescribed sequence of probabilities p_n, there exists (on some space) an independent sequence of events A_n satisfying P(A_n) = p_n. Show that if p_n → 0 but Σ_n p_n = ∞, this gives a counterexample (like Example 5.4) to the converse of Theorem 5.2(ii).

5.16. ↑ Suppose that 0 ≤ p_n ≤ 1 and put α_n = min(p_n, 1 − p_n). Show that, if Σ_n α_n converges, then on some discrete probability space there exist independent events A_n satisfying P(A_n) = p_n. Compare Problem 1.1(b).

5.17. (a) Suppose that X_n →_P X and that f is continuous. Show that f(X_n) →_P f(X).
(b) Show that E[|X_n − X|] → 0 implies X_n →_P X. Show that the converse is false.

5.18. (2.20 ↑) The proof given for Theorem 5.3 for the special case where the μ_n are all the same can be extended to cover the general case: use Problem 2.20.

5.19. (2.18 ↑) For integers m and primes p, let α_p(m) be the exact power of p in the prime factorization of m: m = Π_p p^{α_p(m)}. Let δ_p(m) be 1 or 0 as p divides m or not. Under each P_n (see (2.34)) the α_p and δ_p are random variables. Show that for distinct primes p_1, ..., p_u,
(5.43) P_n[α_{p_1} ≥ k_1, ..., α_{p_u} ≥ k_u] ...
According to (5.44), the α_p are for large n approximately independent under P_n, and according to (5.45), the same is true of the δ_p. For a function f of positive integers, let
(5.46) E_n[f] = n^{−1} Σ_{m=1}^n f(m)
be its expected value under the probability measure P_n. Show that
(5.47) E_n[α_p] → (p − 1)^{−1},
which says roughly that (p − 1)^{−1} is the average power of p in the factorization of large integers.

5.20. ↑ (a) From Stirling's formula, deduce
(5.48) E_n[log] = log n + O(1).
From this, the inequality E_n[α_p] ≤ 2/p, and the relation log m = Σ_p α_p(m) log p, conclude that Σ_p p^{−1} log p diverges and that there are infinitely many primes.
(b) Let log* m = Σ_p δ_p(m) log p. Show that
(5.49) E_n[log*] = n^{−1} Σ_{p≤n} ⌊n/p⌋ log p = log n + O(1).
(c) Show that ⌊2n/p⌋ − 2⌊n/p⌋ is always nonnegative and equals 1 in the range n < p ≤ 2n. Deduce E_{2n}[log*] − E_n[log*] = O(1) and conclude that
(5.50) Σ_{p≤x} log p = O(x).
Use this to estimate the error removing the integral-part brackets introduces into (5.49), and show that
(5.51) Σ_{p≤x} p^{−1} log p = log x + O(1).
(d) Restrict the range of summation in (5.51) to ... and deduce (5.52), in the sense that the ratio of the two sides is bounded away from 0 and ∞.
(e) Use (5.52) and truncation arguments to prove, for the number π(x) of primes not exceeding x, that
(5.53) π(x) ≍ x/log x.
(By the prime number theorem the ratio of the two sides in fact goes to 1.) Conclude that the rth prime p_r satisfies p_r ≍ r log r and that (5.54) ...
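The sketch below (not from the text; the particular values of m, a, p, and σ² are arbitrary) checks Problems 5.4 and 5.5(c) numerically: the three-point distribution of 5.4 attains equality in Chebyshev's inequality, and a suitably chosen two-point distribution attains equality in Cantelli's inequality.

# Problem 5.4: X takes m - a, m, m + a with probabilities p, 1 - 2p, p.
m, a, p = 0.0, 2.0, 0.1
var = 2 * p * a * a                    # Var[X] = 2*p*a^2
chebyshev_bound = var / a**2           # P[|X - m| >= a] <= Var[X]/a^2
exact = 2 * p                          # P[|X - m| >= a] = 2p
print("Chebyshev bound:", chebyshev_bound, " exact:", exact)    # equal

# Problem 5.5(c): Cantelli's bound P[X - m >= a] <= sigma^2/(sigma^2 + a^2) is attained
# by the two-point distribution X = a with probability theta, X = -a*theta/(1 - theta)
# with probability 1 - theta, where theta = sigma^2/(sigma^2 + a^2) (mean 0, variance sigma^2).
sigma2, a = 1.0, 3.0
theta = sigma2 / (sigma2 + a * a)
x_lo = -a * theta / (1 - theta)
mean = theta * a + (1 - theta) * x_lo
variance = theta * a**2 + (1 - theta) * x_lo**2 - mean**2
print("mean:", mean, " variance:", variance)                    # 0 and sigma^2
print("Cantelli bound:", sigma2 / (sigma2 + a * a), " exact P[X >= a]:", theta)   # equal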
1.1. (a) Show that a discrete probability space (see Example 2.8 for the formal definition) cannot contain an infinite sequence A_1, A_2, ... of independent events each of probability ½. Since A_n could be identified with heads on the nth toss of a coin, the existence of such a sequence would make ...
(b) Suppose that 0 ≤ p_n ≤ 1, and put α_n = min{p_n, 1 − p_n}. Show that, if Σ_n α_n diverges, then no discrete probability space can contain independent events A_1, A_2, ... such that A_n has probability p_n.
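For 1.1(a), one standard argument (a sketch; not necessarily the route the book intends) runs as follows. If A_1, A_2, ... were independent with P(A_n) = ½, then for any point ω of the space,

\[
P(\{\omega\}) \le P(B_1 \cap \cdots \cap B_n) = 2^{-n} \quad\text{for every } n,
\qquad\text{where } B_k = A_k \text{ or } A_k^c \text{ according as } \omega \in A_k \text{ or not.}
\]

Hence every point has probability 0, which is impossible in a discrete space, where the total probability 1 is a countable sum of point masses.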
1.2. Show that N and N^c are dense [A15] in (0, 1].
1.3. ↑ Define a set A to be trifling if for each ε there exists a finite sequence of intervals I_k satisfying (1.22) and (1.23). This definition and the definition of negligibility apply as they stand to all sets on the real line, not just to subsets of (0, 1].