Probability: An Introduction, 2nd Edition, Geoffrey Grimmett and Dominic Welsh - Solutions
13. Show that the symmetric random walk on the integer points Z^d = {(i_1, i_2, ..., i_d) : i_j = ..., −1, 0, 1, ..., j = 1, 2, ..., d} is recurrent if d = 1, 2 and transient if d ≥ 3. You should use the results of Problems 10.5.8 and 10.5.12, and you need do no more calculations.
12. Consider a symmetric random walk on the integer points of the cubic lattice {(i, j, k) : i, j, k = ..., −1, 0, 1, ...} in three dimensions, in which the particle moves to one of its six neighbouring positions, chosen with equal probability 1/6, at each stage. Show that the probability w_n that the particle revisits its starting point at the nth stage is given by w_{2m+1} = 0 and

    w_{2m} = (1/2^{2m}) C(2m, m) Σ_{(i,j,k): i+j+k=m} (m! / (3^m i! j! k!))^2.

Use Stirling's formula to show that Σ_{n=0}^∞ w_n < ∞. Deduce that the symmetric random walk in three dimensions is transient (the general argument in Problem 10.5.6 may be useful here).
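The stated formula for w_{2m} can be sanity-checked exactly against a direct convolution of the walk's step distribution. This is a sketch, not part of the original problem; the function names are ours, and exact rational arithmetic is used so the comparison is not muddied by rounding.

```python
from fractions import Fraction
from math import comb, factorial

def w_formula(m):
    # w_{2m} = (1/2^{2m}) * C(2m, m) * sum over i+j+k=m of (m!/(3^m i! j! k!))^2
    total = Fraction(0)
    for i in range(m + 1):
        for j in range(m - i + 1):
            k = m - i - j
            term = Fraction(factorial(m),
                            3 ** m * factorial(i) * factorial(j) * factorial(k))
            total += term * term
    return Fraction(comb(2 * m, m), 2 ** (2 * m)) * total

def w_direct(n):
    # Probability of being at the origin after n steps, by convolving the
    # uniform step distribution on the six neighbouring lattice points.
    dist = {(0, 0, 0): Fraction(1)}
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n):
        new = {}
        for pos, p in dist.items():
            for dx, dy, dz in steps:
                q = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
                new[q] = new.get(q, Fraction(0)) + p / 6
        dist = new
    return dist.get((0, 0, 0), Fraction(0))
```

For instance, w_formula(1) gives 1/6, matching the six equally likely ways of reversing the first step.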
* 11. A particle performs a random walk on the integers starting at the origin. At discrete intervals of time, it takes a step of unit size. The steps are independent and equally likely to be in the positive or negative direction. Determine the probability generating function of the time at which [...]

In a two-dimensional random walk, a particle can be at any of the points (x, y) which have integer coordinates. The particle starts at (0, 0) and, at discrete intervals of time, takes a step of unit size. The steps are independent and equally likely to be any of the four nearest points. Show that [...] Let (X, Y) be the random point on the line x + y = m which is reached first. What is the probability generating function of X − Y? (Oxford 1979F)
10. In the two-dimensional random walk of Problem 10.5.8, let D_n be the Euclidean distance between the origin and S_n. Prove that, if the walk is symmetric, E(D_n^2) = E(D_{n−1}^2) + 1 for n = 1, 2, ..., and deduce that E(D_n^2) = n.
9. Here is another way of approaching the symmetric random walk in two dimensions of Problem 10.5.8. Make the following change of variables: if S_n = (i, j), set X_n = i + j and Y_n = i − j; this is equivalent to rotating the axes through an angle of π/4. Show that X_0, X_1, ... and Y_0, Y_1, ... are independent symmetric random walks in one dimension. Deduce by Theorem 10.6 that [...] Deduce directly that the symmetric random walk in two dimensions is recurrent.
8. Consider the two-dimensional random walk of Exercise 10.11, in which a particle inhabits the integer points {(i, j) : i, j = ..., −1, 0, 1, ...} of the plane, moving rightwards, upwards, leftwards or downwards with respective probabilities p, q, r, and s at each step, where p + q + r + s = 1. [...] Use Stirling's formula to show that Σ_{n=0}^∞ v_n = ∞ if p = q = r = s = 1/4.
7. A slot machine functions as follows. At the first pull, the player wins with probability 1/2. At later pulls, the player wins with probability 1/2 if the previous pull was lost, and with probability p (< 1/2) if it was won. Show that the probability u_n that the player wins at the nth pull satisfies u_n + (1/2 − p) u_{n−1} = 1/2 for n > 1. Deduce that

    u_n = (1 + (−1)^{n−1} (1/2 − p)^n) / (3 − 2p)   for n ≥ 1.
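The closed form for u_n can be checked exactly against the recurrence, again as an unofficial sketch using exact rationals (function names are ours):

```python
from fractions import Fraction

def u_recurrence(n, p):
    # Iterate u_n = 1/2 - (1/2 - p) * u_{n-1}, starting from u_1 = 1/2.
    u = Fraction(1, 2)
    for _ in range(n - 1):
        u = Fraction(1, 2) - (Fraction(1, 2) - p) * u
    return u

def u_closed(n, p):
    # The claimed closed form (1 + (-1)^{n-1} (1/2 - p)^n) / (3 - 2p).
    return (1 + (-1) ** (n - 1) * (Fraction(1, 2) - p) ** n) / (3 - 2 * p)
```

Note that u_n converges to 1/(3 − 2p) as n grows, the fixed point of the recurrence.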
6. Let N be the number of times that an asymmetric simple random walk revisits its starting point. Show that N has mass function P(N = k) = α(1 − α)^k for k = 0, 1, 2, ..., where α = |p − q| and p is the probability that each step of the walk is to the right.
5. Consider a simple random walk with an absorbing barrier at 0 and a 'retaining' barrier at N. That is to say, the walk is not allowed to pass to the right of N, so that its position S_n at time n satisfies P(S_{n+1} = N | S_n = N) = p and P(S_{n+1} = N − 1 | S_n = N) = q, where p + q = 1. Set up a [...]
4. Consider a random walk on the integers in which the particle moves either two units to the right (with probability p) or one unit to the left (with probability q = 1 − p) at each stage, where 0 < p < 1. There is an absorbing barrier at 0 and the particle starts at the point a (> 0). Show that [...] Suppose that the particle is absorbed whenever it hits either N or N + 1. Find the probability π_N(a) that it is absorbed at 0 rather than at N or N + 1, having started at a, where 0 ≤ a ≤ N + 1. Deduce that, as N → ∞,

    π_N(a) → 1 if p ≤ 1/3,   and   π_N(a) → θ^a if p > 1/3,

where θ = (1/2){√(1 + 4q/p) − 1}. Deduce that if a fair coin is tossed repeatedly, the probability that the number of heads ever exceeds twice the number of tails is (1/2)(√5 − 1).
3. A particle performs a random walk on the set {−N, −N + 1, ..., N − 1, N} and is absorbed if it reaches −N or N, where N > 1. The probability of a step of size −1 is q = 1 − p, with 0 < p < 1. Suppose that the particle starts at 0. By conditioning on the first step and using Theorem [...] What is this probability when p = q? (Oxford 1983M)

2. Consider a random walk on the integers with absorbing barriers at 0 and N in which, at each stage, the particle may jump one unit to the left (with probability α), remain where it is (with probability β), or jump one unit to the right (with probability γ), where α, β, γ > 0 and α + β + γ = 1. [...] Find the mean number of stages before the particle is absorbed at one or other of the barriers.
1. Two particles perform independent and simultaneous symmetric random walks starting from the origin. Show that the probability that they are at the same position after n steps is

    (1/2^{2n}) Σ_{k=0}^{n} C(n, k)^2.

Hence or otherwise show that Σ_{k=0}^{n} C(n, k)^2 = C(2n, n).
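The coincidence probability and the resulting binomial identity are easy to sanity-check numerically; this small sketch (names ours) exploits the fact that the two walks are at the same position exactly when they take equal numbers of rightward steps.

```python
from fractions import Fraction
from math import comb

def p_same_position(n):
    # P(same position after n steps) = (1/2^{2n}) * sum_k C(n, k)^2:
    # both walks must take the same number k of rightward steps.
    return Fraction(sum(comb(n, k) ** 2 for k in range(n + 1)), 2 ** (2 * n))
```

The "hence" part corresponds to this probability also equalling C(2n, n)/2^{2n}.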
Exercise 10.41 Show that, in the Gambler's Ruin Problem, the game terminates with probability 1. You may find it useful to partition the sequence of coin tosses into disjoint runs of length N, and to consider the event that one of these runs contains only steps to the right.

Exercise 10.42 Use [...]
Exercise 10.22 Show that a symmetric random walk starting from the origin visits the point 1 with probability 1.
Exercise 10.21 Show that a symmetric random walk revisits its starting point infinitely often with probability 1.
Exercise 10.20 Consider a simple random walk with p ≠ q. Show that, conditional on the walk returning to its starting point at some time, the expected number of steps taken before this occurs is 4pq / (|p − q|(1 − |p − q|)).
6. (a) Explain what is meant by the term 'branching process'.
(b) Let X_n be the size of the nth generation of a branching process in which each family size has probability generating function G, and assume that X_0 = 1. Show that the probability generating function G_n of X_n satisfies G_{n+1}(s) = G_n(G(s)) for n ≥ 1.
(c) Show that G(s) = 1 − α(1 − s)^β is the probability generating function of a non-negative integer-valued random variable when α, β ∈ (0, 1), and find G_n explicitly when G is thus given.
(d) Find the probability that X_n = 0, and show that it converges as n → ∞ to 1 − α^{1/(1−β)}. Explain why this implies that the probability of ultimate extinction equals 1 − α^{1/(1−β)}. (Cambridge 2001)
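For G(s) = 1 − α(1 − s)^β, the extinction probabilities G_n(0) can be generated by simple fixed-point iteration and compared with the claimed limit 1 − α^{1/(1−β)}. A quick sketch (the function name is ours):

```python
def extinction_by_generation(alpha, beta, n):
    # G_n(0) = P(extinct by generation n), computed by iterating
    # s <- G(s) = 1 - alpha * (1 - s)**beta starting from s = 0.
    s = 0.0
    for _ in range(n):
        s = 1 - alpha * (1 - s) ** beta
    return s
```

With alpha = beta = 1/2 the limit is 1 − (1/2)^2 = 0.75, and the iterates match the explicit form G_n(0) = 1 − α^{(1−β^n)/(1−β)} from part (c).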
5. A branching process (X_n : n ≥ 0) has P(X_0 = 1) = 1. Let the total number of individuals in the first n generations of the process be Z_n, with probability generating function Q_n. Prove that, for n ≥ 2, Q_n(s) = s P_1(Q_{n−1}(s)), where P_1 is the probability generating function of the family-size distribution.
4. If (Z_n : 0 ≤ n < ∞) is a branching process in which Z_0 = 1 and the size of the rth generation Z_r has the generating function P_r(s), prove that P_n(s) = P_r(P_{n−r}(s)) for 1 ≤ r ≤ n − 1. Suppose that the process is modified so that the initial generation Z_0 is Poisson with parameter λ. [...]
3. By using the partition theorem and conditioning on the value of Z_m, show that if Z_0, Z_1, ... is a branching process with mean family size μ, then E(Z_m Z_n) = μ^{n−m} E(Z_m^2) if m < n.
2. Use the result of Problem 9.6.1 to show that, if Z_0, Z_1, ... is a branching process whose family sizes have mean μ (> 1) and variance σ^2, then var(Z_n/μ^n) → σ^2/[μ(μ − 1)] as n → ∞.
1. Let X_1, X_2, ... be independent random variables, each with mean μ and variance σ^2, and let N be a random variable which takes values in the positive integers {1, 2, ...} and which is independent of the X_i. Show that the sum S = X_1 + X_2 + ... + X_N has variance given by var(S) = σ^2 E(N) + μ^2 var(N). Deduce an expression for var(Z_n) in terms of μ, σ^2, and n.
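The variance identity var(S) = σ^2 E(N) + μ^2 var(N) can be verified by exact enumeration on a tiny example of our choosing (X uniform on {1, 2}, so μ = 3/2 and σ^2 = 1/4, and N uniform on {1, 2}); this is an illustrative sketch, not the requested proof.

```python
from fractions import Fraction
from itertools import product

x_vals = [(1, Fraction(1, 2)), (2, Fraction(1, 2))]   # (value, probability)
n_vals = [(1, Fraction(1, 2)), (2, Fraction(1, 2))]

def var_S():
    # Enumerate S = X_1 + ... + X_N exactly over all outcomes.
    es = Fraction(0)
    es2 = Fraction(0)
    for n, pn in n_vals:
        for xs in product(x_vals, repeat=n):
            p = pn
            s = 0
            for x, px in xs:
                p *= px
                s += x
            es += p * s
            es2 += p * s * s
    return es2 - es ** 2
```

Here σ^2 E(N) + μ^2 var(N) = (1/4)(3/2) + (9/4)(1/4) = 15/16, which the enumeration reproduces.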
Exercise 9.24 If each family size of a branching process has the binomial distribution with parameters 2 and p (= 1 − q), show that the probability of ultimate extinction is e = 1 if 0 ≤ p ≤ 1/2, and e = (q/p)^2 if 1/2 ≤ p ≤ 1.
Exercise 9.23 If the family-size distribution of a branching process has mass function p_k = p q^k for k = 0, 1, 2, ... and 0 < p = 1 − q < 1, use Theorem 9.19 to show that the probability that the process becomes extinct ultimately is p/q if p ≤ 1/2.
Exercise 9.17 Find the mean and variance of Z_n when the family-size distribution is given by p_k = p q^k for k = 0, 1, 2, ..., and 0 < p = 1 − q < 1. Deduce that var(Z_n) → 0 if and only if p > 1/2.
Exercise 9.11 Suppose that each family size of a branching process contains either one member only (with probability p) or is empty (with probability 1 − p). Find the probability that the process becomes extinct at or before the nth generation.

Exercise 9.12 Let μ and σ^2 be the mean and variance [...]
(a) What are the mean and variance of Z_n?
(b) What is the mass function of Z_n?
(c) What is the probability that nomadkind is extinct by time n?
(d) What is the probability that nomadkind ultimately becomes extinct?
(a) branching processes: modelling the growth of a self-reproducing population (such as mankind);
(b) random walks: modelling the movement of a particle which moves erratically within a medium (a dust particle in the atmosphere, say);
(c) Poisson processes and related processes: modelling processes such as the emission of radioactive particles from a slowly decaying source, or the length of the queue at the supermarket cash register.
24. State the central limit theorem. The cumulative distribution function F of the random variable X is continuous and strictly increasing. Show that Y = F(X) is uniformly distributed. Find the probability density function of the random variable −log(1 − Y), and calculate its mean and variance.
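A concrete instance of the probability integral transform here: take X exponential with rate 1, so F(x) = 1 − e^{−x} is continuous and strictly increasing. Then −log(1 − Y) with Y = F(X) recovers X exactly, consistent with −log(1 − Y) being exponential(1) (mean 1, variance 1). A sketch with our own function names:

```python
import math

def F(x):
    # Distribution function of the exponential(1) distribution.
    return 1 - math.exp(-x)

def transform(x):
    # -log(1 - Y) with Y = F(x); algebraically this collapses back to x.
    y = F(x)
    return -math.log(1 - y)
```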
23. Let u(t) and v(t) be the real and imaginary parts, respectively, of the characteristic function of the random variable X. Prove that
(a) E(cos^2 tX) = (1/2)[1 + u(2t)],
(b) E(cos sX cos tX) = (1/2)[u(s + t) + u(s − t)].
Hence, find the variance of cos tX and the covariance of cos tX and cos sX.
* 22. X and Y are independent, identically distributed random variables with mean 0, variance 1, and characteristic function φ. If X + Y and X − Y are independent, prove that φ(2t) = φ(t)^3 φ(−t). By making the substitution γ(t) = φ(t)/φ(−t) or otherwise, show that, for any positive [...]
21. Let X_1, X_2, ..., X_n be independent and identically distributed random variables such that P(X_1 = 1) = P(X_1 = −1) = 1/2. Derive the moment generating function of the random variable Y_n = Σ_{j=1}^{n} a_j X_j, where a_1, a_2, ..., a_n are constants. In the special case a_j = 2^{−j} for j ≥ 1, show that Y_n converges in distribution as n → ∞ to the uniform distribution on the interval (−1, 1).
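Since E(e^{tX_1}) = cosh(t), the moment generating function of Y_n is the product of cosh(a_j t), and with a_j = 2^{−j} this product should approach sinh(t)/t, the moment generating function of the uniform distribution on (−1, 1). A numerical sketch of that convergence (the function name is ours):

```python
import math

def mgf_partial(t, n):
    # prod_{j=1}^{n} cosh(t / 2^j), the MGF of Y_n when a_j = 2^{-j}.
    prod = 1.0
    for j in range(1, n + 1):
        prod *= math.cosh(t / 2 ** j)
    return prod
```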
* 20. Let X_j, j = 1, 2, ..., n, be independent identically distributed random variables with probability density function e^{−x^2/2}/√(2π), −∞ < x < ∞. Show that the characteristic function [...] Consider a sequence of independent trials where the probability of success is p for each trial. Let N be the number of trials required to obtain a fixed number of k successes. Show that, as p tends to zero, the distribution of 2Np tends to the distribution of Y with n = 2k. (Oxford 1979F)

* 19. Let (X_k : k = 1, 2, ...) and (Y_k : k = 1, 2, ...) be two sequences of independent random variables with P(X_k = 1) [...]
(a) S_n converges in distribution to Z,
(b) the mean and variance of T_n converge to the mean and variance of Z,
(c) T_n converges in distribution to Z.
State carefully any theorems which you use. (Oxford 1980F)

18. Let Z_n have the geometric distribution with parameter λ/n, where λ is fixed. Show that Z_n/n converges in distribution as n → ∞, and find the limiting distribution.
17. The sequence (X_i) of independent, identically distributed random variables is such that P(X_i = 0) = 1 − p, P(X_i = 1) = p. If f is a continuous function on [0, 1], prove that B_n(p) = E f((X_1 + ... + X_n)/n) is a polynomial in p of degree at most n. Use [...]
* 16. Let X_1, X_2, ... be independent random variables each having distribution function F and density function f. The order statistics X_(1), X_(2), ..., X_(n) of the subsequence X_1, X_2, ..., X_n are obtained by rearranging the values of the X_i in non-decreasing order. That is to say, X_(1) is [...] Assume that n = 2r + 1 is odd, and show that Y_n has density function

    f_n(y) = (r + 1) C(n, r) F(y)^r [1 − F(y)]^r f(y).

Deduce that, if F has a unique median m, then P(Z_n ≤ x) → ∫_{−∞}^{x} (1/√(2π)) e^{−u^2/2} du for x ∈ R, where Z_n = (Y_n − m) √(4n f(m)^2).
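As a sanity check on the density f_n(y) = (r + 1) C(n, r) F(y)^r [1 − F(y)]^r f(y), take F uniform on (0, 1): the integral over (0, 1) becomes (r + 1) C(n, r) B(r + 1, r + 1) with the Beta integral B(r + 1, r + 1) = r! r!/(2r + 1)!, and should equal 1. An exact sketch (names ours):

```python
from fractions import Fraction
from math import comb, factorial

def integral_check(r):
    # Integral of (r+1) * C(n, r) * y^r * (1-y)^r over (0, 1) with n = 2r+1,
    # using the Beta integral B(r+1, r+1) = r! r! / (2r+1)!.
    n = 2 * r + 1
    beta = Fraction(factorial(r) * factorial(r), factorial(2 * r + 1))
    return (r + 1) * comb(n, r) * beta
```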
15. Let Z have the normal distribution with mean 0 and variance 1. Find E(Z^2) and E(Z^4), and find the probability density function of Y = Z^2.
14. Let (X_n : n ≥ 1) be a sequence of random variables which converges in mean square. Show that E([X_n − X_m]^2) → 0 as m, n → ∞. If E(X_n) = μ and var(X_n) = σ^2 for all n, show that the correlation between X_n and X_m converges to 1 as m, n → ∞.
13. Show that X_n → 0 in probability if and only if E(|X_n|/(1 + |X_n|)) → 0 as n → ∞.

12. Let X be a random variable which takes values in the interval [−M, M] only. Show that P(|X| ≥ a) ≥ (E|X| − a)/(M − a) if 0 ≤ a < M.

11. Adapt the proof of Chebyshev's inequality to show that, if X is a random variable and a > 0, then P(|X| ≥ a) ≤ (1/g(a)) E(g(X)), for any function g : R → R which satisfies
(a) g(x) = g(−x) for x ∈ R,
(b) g(x) > 0 for x ≠ 0,
(c) g is increasing on [0, ∞).
10. Let X_1, X_2, ... and Y_1, Y_2, ... be independent random variables each having mean μ and non-zero variance σ^2. Show that

    U_n = (1/√(2nσ^2)) (Σ_{i=1}^{n} X_i − Σ_{i=1}^{n} Y_i)

satisfies, as n → ∞, P(U_n ≤ x) → ∫_{−∞}^{x} (1/√(2π)) e^{−u^2/2} du for x ∈ R.
9. If Xn → X in probability and Yn → Y in probability, show that Xn + Yn → X + Y in probability.
8. Use the Cauchy–Schwarz inequality to prove that if X_n → X in mean square, then E(X_n) → E(X). Give an example of a sequence X_1, X_2, ... such that X_n → X in probability but E(X_n) does not converge to E(X).
7. Use the Cauchy–Schwarz inequality to prove that if Xn → X in mean square and Yn → Y in mean square, then Xn + Yn → X + Y in mean square.
6. (a) Let 0 < a < 1 and T_n = Σ_{k: |k − n/2| > an/2} C(n, k). By considering the binomial distribution or otherwise, show that

    T_n^{1/n} → 2/√((1 + a)^{1+a} (1 − a)^{1−a}).

(b) Find the asymptotic behaviour of T_n^{1/n}, where a > 0 and T_n = Σ_{k: k > n(1+a)} n^k/k!.
5. By applying the central limit theorem to a sequence of random variables with the Poisson distribution, or otherwise, prove that

    e^{−n} (1 + n + n^2/2! + ... + n^n/n!) → 1/2   as n → ∞.
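The sum here is P(X ≤ n) for X Poisson with parameter n, which is why the central limit theorem applies. A numerical sketch (name ours) evaluating it stably through logarithms, since the raw terms overflow for large n:

```python
from math import lgamma, log, exp

def poisson_cdf_at_mean(n):
    # e^{-n} * sum_{k=0}^{n} n^k / k! = P(Poisson(n) <= n),
    # with each term computed as exp(k log n - log k! - n).
    return sum(exp(k * log(n) - lgamma(k + 1) - n) for k in range(n + 1))
```

Convergence to 1/2 is slow, on the order of 1/√n, so the check below uses a fairly large n and a loose tolerance.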
4. Binomial–Poisson limit. Let Z_n have the binomial distribution with parameters n and λ/n, where λ is fixed. Use characteristic functions to show that Z_n converges in distribution to the Poisson distribution with parameter λ, as n → ∞.
3. Let X_n be a discrete random variable with the binomial distribution, parameters n and p. Show that n^{−1} X_n converges to p in probability as n → ∞.
2. By applying the central limit theorem to a sequence of random variables with the Bernoulli distribution, or otherwise, prove the following result in analysis. If 0 < p = 1 − q < 1 and x > 0, then

    Σ_k C(n, k) p^k q^{n−k} → 2 ∫_{0}^{x} (1/√(2π)) e^{−u^2/2} du   as n → ∞,

where the summation is over all values of k satisfying |k − np| ≤ x√(npq).
1. Let X_1, X_2, ... be independent random variables, each having the uniform distribution on the interval (0, a), and let Z_n = max{X_1, X_2, ..., X_n}. Show that
(a) Z_n → a in probability as n → ∞,
(b) √Z_n → √a in probability as n → ∞,
(c) if U_n = n(1 − Z_n) and a = 1, then P(U_n ≤ x) → 1 − e^{−x} if x > 0, and → 0 otherwise, so that U_n converges in distribution to the exponential distribution as n → ∞.
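For part (c), reading Z_n as the maximum of n uniform(0, 1) variables (an assumption consistent with parts (a) to (c)), the tail probability is exact: P(U_n > x) = P(Z_n < 1 − x/n) = (1 − x/n)^n, which tends to e^{−x}. A tiny sketch of that limit:

```python
import math

def tail(n, x):
    # P(U_n > x) = P(max of n uniforms < 1 - x/n) = (1 - x/n)^n.
    return (1 - x / n) ** n
```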
Exercise 8.55 Let X_1, X_2, ... be independent random variables, each having the Cauchy distribution. Show that A_n = n^{−1}(X_1 + X_2 + ... + X_n) converges in distribution to the Cauchy distribution as n → ∞. Compare this with the conclusion of the weak law of large numbers.

Exercise 8.56 Let [...]
Example 8.49 Let U be a random variable which takes the values −1 and 1, each with probability 1/2. We define the sequence Z_1, Z_2, ... by Z_n = U if n is odd, and Z_n = −U if n is even.
Exercise 8.33 For n = 1, 2, ..., let X_n be a random variable having the gamma distribution with parameters n and 1. Show that the moment generating function of Z_n = (X_n − n)/√n is M_n(t) = e^{−t√n} (1 − t/√n)^{−n}. [...]
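Completing the truncated formula as M_n(t) = e^{−t√n}(1 − t/√n)^{−n} (the gamma(n, 1) moment generating function (1 − t)^{−n}, shifted and scaled), the central limit theorem suggests M_n(t) → e^{t^2/2}, the standard normal MGF. A numerical sketch (name ours), computed via logarithms to avoid huge intermediate values:

```python
import math

def M(n, t):
    # M_n(t) = exp(-t*sqrt(n)) * (1 - t/sqrt(n))^{-n}, valid for t < sqrt(n).
    r = math.sqrt(n)
    return math.exp(-t * r - n * math.log(1 - t / r))
```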
Exercise 8.32 A fair die is thrown 12,000 times. Use the central limit theorem to find values of a and b such that P(1900 < S < 2200) ≈ ∫_{a}^{b} (1/√(2π)) e^{−x^2/2} dx, where S is the total number of sixes thrown.
Exercise 8.21 Use Chebyshev's inequality to show that the probability that, in n throws of a fair die, the number of sixes lies between n/6 − √n and n/6 + √n is at least 31/36.

Exercise 8.22 Show that if Z_n → Z in probability then, as n → ∞, aZ_n + b → aZ + b in probability [...]
Exercise 8.20 Prove the following alternative form of Chebyshev's inequality: if X is a random variable with finite variance and a > 0, then P(|X − E(X)| > a) ≤ (1/a^2) var(X).
Exercise 8.11 Show that the conclusion of the mean-square law of large numbers, Theorem 8.6, remains valid if the assumption that the X_i are independent is replaced by the weaker assumption that they are uncorrelated.
Exercise 8.10 Let N_n be the number of occurrences of 5 or 6 in n throws of a fair die. Use Theorem 8.6 to show that, as n → ∞, (1/n) N_n → 1/3 in mean square.
Exercise 8.9 Let Z1, Z2, . . . be a sequence of random variables which converges to the random variable Z in mean square. Show that aZn + b → aZ + b in mean square as n → ∞, for any real numbers a and b.
26. Lyapunov's inequality. Let Z be a positive random variable. By Jensen's inequality or otherwise, show that E(Z^r)^{1/r} ≥ E(Z^s)^{1/s} when r ≥ s > 0. Thus, if Z has finite rth moment, then it has finite sth moment, for r ≥ s > 0.
25. Let p ≥ 1. By Jensen's inequality or otherwise, find the smallest value of the constant c_p such that (a + b)^p ≤ c_p (a^p + b^p) for all a, b ≥ 0. (Cambridge 2006)
(ii) M_1(t) M_2(t),
(iii) [M_1(t)]^2,
(iv) ∫_{0}^{1} M_1(ut) du.

(c) Suppose Y has moment generating function M_Y(t) = 1/(2(1 − t)) + 1/(2 − t). Find P(Y ≤ 1). (Oxford 2010)
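Part (c)'s generating function M_Y(t) = 1/(2(1 − t)) + 1/(2 − t) can be read, under our interpretation, as an equal mixture of 1/(1 − t) (exponential, rate 1) and 2/(2 − t) (exponential, rate 2), giving density f(y) = (1/2)e^{−y} + e^{−2y} on y > 0 and hence P(Y ≤ 1) = 1 − e^{−1}/2 − e^{−2}/2. A sketch checking that closed form against a crude numerical integral of the density (names ours):

```python
import math

def p_y_le_1():
    # P(Y <= 1) = 1 - e^{-1}/2 - e^{-2}/2 under the mixture reading of M_Y.
    return 1 - math.exp(-1) / 2 - math.exp(-2) / 2

def density_integral(upper, steps=200000):
    # Midpoint-rule integral of f(y) = (1/2)e^{-y} + e^{-2y} over (0, upper).
    h = upper / steps
    return sum((0.5 * math.exp(-y) + math.exp(-2 * y)) * h
               for y in (i * h + h / 2 for i in range(steps)))
```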