- 17. Apply Proposition 15.5.3 and prove that A[ϕ(N)/N] = ζ(2)^{−1}.
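The identity behind Problem 17 can be checked numerically. The sketch below (an editorial illustration, not part of the text) truncates the series defining E_s[ϕ(N)/N] and compares it with 1/ζ(s+1); letting s → 1 then gives A[ϕ(N)/N] = ζ(2)^{−1} = 6/π². The truncation point M and the choice s = 2 are arbitrary.

```python
# Sanity check (not a proof) that E_s[phi(N)/N] = 1/zeta(s+1) under Zipf's
# distribution P(N = n) = n^{-s}/zeta(s); sending s -> 1 then yields
# A[phi(N)/N] = zeta(2)^{-1} = 6/pi^2.
import numpy as np

M = 200_000
phi = np.arange(M + 1)              # phi[n] will hold Euler's totient
for p in range(2, M + 1):
    if phi[p] == p:                 # p is prime: untouched by smaller primes
        phi[p::p] -= phi[p::p] // p # multiply multiples of p by (1 - 1/p)

def zipf_mean_phi_over_n(s, m=M):
    n = np.arange(1, m + 1, dtype=float)
    return np.sum(phi[1:] / n * n ** -s) / np.sum(n ** -s)

zeta3 = np.sum(np.arange(1, M + 1, dtype=float) ** -3.0)
print(zipf_mean_phi_over_n(2.0), 1 / zeta3)   # nearly equal
print(6 / np.pi ** 2)                         # the limiting value, about 0.608
```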
- 16. In Proposition 15.5.3, show that the assumption that f(n) is nonnegative can be relaxed to inf_n f(n) > −∞ or sup_n f(n) < ∞.
- 15. Derive formula (15.9).
- 14. If lim_{n→∞} f(n) = c, then demonstrate that A(f) = lim_{s→1} E_s[f(N)] = c.
- 13. Suppose q is a positive integer and c_m is a sequence of numbers with c_{m+q} = c_m for all m. Prove that the series Σ_{m=1}^∞ c_m/m converges if and only if Σ_{m=1}^q c_m = 0. (Hint: Apply Proposition
- 12. Liouville's arithmetic function is defined by λ(n) = 1 for n = 1 and λ(n) = (−1)^{e1+···+ek} for n = p1^{e1} ··· pk^{ek}. Prove that Σ_{d|n} λ(d) = 1 if n is a square and 0 otherwise.
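The divisor-sum identity in Problem 12 is easy to confirm empirically; the brute-force check below is illustrative only and proves nothing beyond the range tested.

```python
# Empirical check: sum_{d|n} lambda(d) equals 1 when n is a perfect square
# and 0 otherwise, for all n below a small bound.
import math

def liouville(n):
    # (-1)^(number of prime factors of n, counted with multiplicity)
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

for n in range(1, 1000):
    total = sum(liouville(d) for d in range(1, n + 1) if n % d == 0)
    assert total == (1 if math.isqrt(n) ** 2 == n else 0)
print("identity verified for n < 1000")
```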
- 11. Demonstrate that neither ϕ(n) nor μ(n) is completely multiplicative. Show that the Dirichlet convolution of two completely multiplicative functions need not be completely multiplicative.
- 10. Verify that the identities 1 = Σ_{d|n} μ(d)τ(n/d) and n = Σ_{d|n} μ(d)σ(n/d) hold for all natural numbers n.
- 9. Check that the Dirichlet inverse f^{[−1]} of a multiplicative arithmetic function f is multiplicative. (Hints: Assume otherwise, and consider the least product mn of two relatively prime natural
- 8. Prove the two inequalities (15.6). (Hints: Show that the function f(t) = t^{−s} ln t is decreasing on the interval [e^{1/s}, ∞). Use this to prove that
  2 ln(2n)/(2n)^s ≤ ln(2n−1)/(2n−1)^s + ln(2n)/(2n)^s
  2 ln(2n)/(2n)^s ≥ ln(2n+1)/(2n+1)^s + ln(2n)/(2n)^s    (15.11)
  for appropriate values of n and s. Use these to prove that
  Σ_{n=1}^∞ ln n/n^s ≥ 2 Σ_{n=1}^∞ ln(2n)/(2n)^s − ln 2/2^s
  Σ_{n=1}^∞ ln n/n^s ≤ 2 Σ_{n=1}^∞ ln(2n)/(2n)^s.
  Note that the second inequality in (15.11) can fail when n = 1 and s < 1/ln 2. In this special case, apply
- 7. Suppose the independent realizations M and N of Zipf's distribution generate arithmetic functions Y = f(M) and Z = g(N) with finite expectations. Show that the random
- 6. Let N1, ..., Nm be an i.i.d. sample from the Zipf distribution with values n1, ..., nm. If at least one ni > 1, then prove that the maximum likelihood estimate of s is uniquely determined by the
  What happens in the exceptional case when all ni = 1?
- 5. Suppose the arithmetic function Y = f(N) satisfies E_s(Y) = 0 for all s ≥ r > 1. Show that Y ≡ 0. (Hint: Prove that f(n) = 0 for all n by induction and sending s to ∞ in the equation n^s
- 4. Choose two independent random numbers M and N according to Zipf's distribution. Prove that M and N are relatively prime with probability ζ(2s)^{−1}. (Hints: Let Xp and Yp be the powers of the
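A truncated double series makes the coprimality probability of Problem 4 plausible before proving it; the parameters below (s = 2, truncation at N terms) are arbitrary illustrative choices.

```python
# Truncated-series check (illustrative) of Problem 4: for independent Zipf(s)
# variables M and N, Pr(gcd(M, N) = 1) = 1/zeta(2s).  With s = 2 both sides
# should be near 90/pi^4.
import numpy as np

s, N = 2.0, 2000
n = np.arange(1, N + 1)
w = n ** -s / np.sum(n ** -s)               # truncated Zipf(s) probabilities
coprime = np.gcd.outer(n, n) == 1
prob = np.sum(np.outer(w, w)[coprime])
zeta4 = np.sum(n ** -(2 * s))               # truncated zeta(2s)
print(prob, 1 / zeta4)                      # both near 0.9239
```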
- 3. Let N follow Zipf's distribution. Demonstrate that
  E_s(N^k) = ζ(s−k)/ζ(s)
  E_s(N^k | q divides N) = q^k ζ(s−k)/ζ(s)
  E_s(N^k | N prime) = Σ_p p^{k−s} / Σ_p p^{−s}
  for k a natural number with s > k + 1.
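The second identity of Problem 3 can be checked with truncated sums; the specific values s = 4, k = 2, q = 3 below are arbitrary.

```python
# Truncated-series check (an editorial illustration) of the second identity
# in Problem 3: conditioning Zipf's distribution on q dividing N multiplies
# the k-th moment by q^k.
import numpy as np

s, k, q, M = 4.0, 2, 3, 400_000
n = np.arange(1, M + 1, dtype=float)
m = np.arange(q, M + 1, q, dtype=float)          # the multiples of q
lhs = np.sum(m ** k * m ** -s) / np.sum(m ** -s) # E_s(N^k | q divides N)
rhs = q ** k * np.sum(n ** -(s - k)) / np.sum(n ** -s)
print(lhs, rhs)                                  # nearly equal
```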
- 2. Show that the gap g between successive prime numbers can be arbitrarily large. (Hint: Consider the sequence of numbers beginning with (g + 1)! + 2.)
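The hint of Problem 2 can be verified directly: each number in the suggested run has a small proper divisor. A quick check for a few values of g:

```python
# (g+1)! + k is divisible by k for k = 2, ..., g+1, so these g consecutive
# integers are all composite, forcing a prime gap of length at least g.
import math

def composite_run(g):
    start = math.factorial(g + 1) + 2
    return list(range(start, start + g))

for g in (5, 10, 20):
    for k, m in zip(range(2, g + 2), composite_run(g)):
        assert m % k == 0 and m // k > 1   # k is a proper divisor: m composite
print("verified runs of length 5, 10, 20")
```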
- 1. Demonstrate that
  Π_{n=2}^∞ (1 − 1/n²) = 1/2 and Π_{n=0}^∞ (1 + z^{2^n}) = 1/(1 − z), |z| < 1.
- 14. Verify formula (14.13) by induction on s.
- 13. In the coupling method demonstrate the bound Pr(S > 0) ≥ Σ_{α∈I} pα/[1 + E(Vα)]. See reference [170] for some numerical examples. (Hints: Choose T appropriately in equality (14.8) and apply
- 12. Consider an infinite sequence W1, W2, ... of independent Bernoulli random variables with common success probability p. Let Xα be the indicator of the event that a success run of length t or more begins at position α, so that X1 = Π_{k=1}^{t} Wk and Xj = (1 − W_{j−1}) Π_{k=j}^{j+t−1} Wk for j > 1. The number of such success runs starting in the first n positions is given by S = Σ_{α∈I} Xα, where the index set I = {1, ..., n}. The Poisson heuristic suggests that S is approximately Poisson. Show that p_{αβ} = 0 regardless of which β ∈ Nα\{α} is chosen [76], so that b2 = Σ_{α∈I} Σ_{β∈Nα\{α}} p_{αβ} = 0. Finally, show that b1 = Σ_{α∈I} Σ_{β∈Nα} pα pβ ≤ λ²(2t + 1)/n + 2λp^t. (Hint: b1 = p^{2t} + 2t p^{2t}(1 − p) + [2nt − t² + n − 3t − 1] p^{2t}(1 − p)² exactly. Note that the pairs α and β entering into the double sum for b1
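A short simulation makes the success-run setup of Problem 12 concrete. The sketch below assumes the run-start indicators described there (X1 needs no preceding failure; later starts do); the parameters n, t, p, and the number of replicates are arbitrary.

```python
# Simulation sketch: S counts positions alpha <= n at which a success run of
# length >= t begins.  Its mean is lambda = p^t + (n-1)(1-p)p^t, and the
# Poisson heuristic says S is roughly Poisson(lambda).
import random

def count_run_starts(w, n, t):
    s = 0
    for j in range(n):
        if all(w[k] for k in range(j, j + t)) and (j == 0 or not w[j - 1]):
            s += 1
    return s

random.seed(1)
n, t, p, reps = 50, 3, 0.4, 10_000
total = 0
for _ in range(reps):
    w = [random.random() < p for _ in range(n + t)]  # a few spare trials
    total += count_run_starts(w, n, t)
lam = p ** t + (n - 1) * (1 - p) * p ** t
print(total / reps, lam)       # sample mean should hover near lambda
```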
- 11. In the somatic cell hybrid model, suppose that one knows a priori that the number of assay errors does not exceed some positive integer d. Prove that assay error can be detected if the minimum
- 10. In the somatic cell hybrid model, suppose that the retention probability p = 1/2. Define w_{n,d12,d13} = Pr[ρ(C1^n, C2^n) = d12, ρ(C1^n, C3^n) = d13] for a random panel with n clones. Setting r = p(1 − p), verify the recurrence relation
  w_{n+1,d12,d13} = r(w_{n,d12−1,d13} + w_{n,d12,d13−1} + w_{n,d12−1,d13−1}) + (1 − 3r) w_{n,d12,d13}.
- 9. Suppose n balls (people) are uniformly and independently distributed into m boxes (days of the year). The birthday problem involves finding the approximate distribution of the number of boxes containing d or more balls. This is a special case of the Poisson approximation treated in Example 14.2.2 by the coupling method. In this exercise we attack the birthday problem by the neighborhood method. To get started, define the neighborhoods Nα so that Xα is independent of those Xβ with β outside Nα. Demonstrate that
  Σ_{α∈I} Σ_{β∈Nα} pα pβ = C(n, d)[C(n, d) − C(n − d, d)] / m^{2(d−1)},
  where C(n, d) denotes a binomial coefficient.
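For orientation, the classical case d = 2 of the birthday problem in Problem 9 can be simulated; the Poisson heuristic predicts Pr(no shared birthday) ≈ exp(−n(n−1)/(2m)). The n = 23, m = 365 instance below is only an illustration.

```python
# Monte Carlo sketch of the d = 2 birthday problem: the number of pairs
# sharing a birthday is approximately Poisson with mean n(n-1)/(2m).
import math
import random

random.seed(2)
n, m, reps = 23, 365, 40_000
none = 0
for _ in range(reps):
    days = [random.randrange(m) for _ in range(n)]
    none += (len(set(days)) == n)            # no box holds two or more balls
approx = math.exp(-n * (n - 1) / (2 * m))    # Poisson approximation
exact = math.prod((m - i) / m for i in range(n))
print(none / reps, approx, exact)            # all three are close
```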
- 8. A graph with n nodes is created by randomly connecting some pairs of nodes by edges. If the connection probability per pair is p, then all pairs from a triple of nodes are connected with probability p³.
- 7. Consider the n-dimensional unit cube [0, 1]^n. Suppose that each of its n2^{n−1} edges is independently assigned one of two equally likely orientations. Let S be the number of vertices at which all n edges point toward the vertex. (Hints: Let I be the set of all 2^n vertices, Xα the indicator that vertex α has all of its edges directed toward α, and Nα = {β : ‖β − α‖ ≤ 1}. Note that Xα is
- 6. In the context of Example 14.3.1 on the law of rare events, prove the less stringent bound ‖πS − πZ‖_TV ≤ Σ_{α=1}^{n} pα² by invoking Problems 29 and 30 of Chapter 7.
- 5. In certain situations the hypergeometric distribution can be approximated by a Poisson distribution. Suppose that w white balls and b black balls occupy a box. If you extract n balls at random without replacement, the number S of white balls drawn is hypergeometric. Calculate an explicit Chen-Stein bound, and give conditions under which the Poisson approximation to S will be good.
- 4. In the ménage problem, prove that Var(S) = 2 − 2/(n − 1).
- 3. In Problem 2 prove that the total variation inequality can be improved to ‖πS − πZ‖_TV ≤ (2^{n+1} + e^{−1})/(n + 1)!. This obviously represents much faster convergence. (Hints: Use the exact
- 2. For a random permutation σ1, ..., σn of {1, ..., n}, let Xα = 1{σα = α} be the indicator of a match at position α. Show that the total number of matches S = Σ_{α=1}^{n} Xα satisfies the coupling
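As an empirical companion to Problem 2, the number of fixed points of a random permutation has mean 1 and is close in distribution to Poisson(1); the sketch below is illustrative, with arbitrary n and replicate count.

```python
# Simulate the number of matches (fixed points) of a random permutation and
# compare its distribution with Poisson(1) probabilities e^{-1}/k!.
import math
import random

random.seed(3)
n, reps = 10, 50_000
counts = [0] * (n + 1)
for _ in range(reps):
    sigma = list(range(n))
    random.shuffle(sigma)
    matches = sum(sigma[i] == i for i in range(n))
    counts[matches] += 1
for k in range(4):
    print(k, counts[k] / reps, math.exp(-1) / math.factorial(k))
```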
- 1. Verify that λ^{−1}(1 − e^{−λ}) ≤ min{1, λ^{−1}} for all λ > 0.
- 22. Prove that the linear density f_{ij0} + f_{ij1}x is nonnegative throughout the interval (a_{ij}, a_{i,j+1}) if and only if its center of mass c_{ij} lies in the middle third of the interval. (Hint: Without
- 21. Consider a continuous-time branching process in which the possible number of daughter particles is bounded above by a common integer b for all particle types. Show how the process can be
- 20. In Moran's population genetics model, n genes evolve by substitution and mutation. Suppose each gene can be classified as one of d alleles, and let X_{ti} denote the number of alleles of type i at time t. The count process Xt moves from state to state by randomly selecting two genes, which may coincide. The first gene dies, and the second gene reproduces a replacement. If the
- 19. In counting jumps in a Markov chain, it is possible to explicitly calculate the matrix exponential (13.12) for a two-state chain. Show that the eigenvalues of the matrix diag(Λ) + C(u)
- 18. A square matrix M = (m_{ij}) is said to be diagonally dominant if it satisfies |m_{ii}| > Σ_{j≠i} |m_{ij}| for all i. Demonstrate that a diagonally dominant matrix is invertible. (Hint: Suppose Mx = 0.
- 17. Consider n equally spaced points on the boundary of a circle. Turing suggested a simple model for the diffusion of a morphogen, a chemical important in development, that
  (c) Prove that the finite Fourier transform maps the convolution (a ∗ b)_j into the pointwise product n âk b̂k.
  (d) Solve the transformed equations (d/dt) p̂k(t) = n p̂k(t) d̂k.
  (e) Inverse transform to find the solutions pj(t).
  (f) Compute d̂k, and show that d̂0 = 0 and all other d̂k are negative.
  (g) Deduce that lim_{t→∞} pj(t) = p̂0(0) = 1/n for all j.
- 16. In the coin tossing example, prove that the probabilities un satisfy un = qp^r/(1 − p^r) + O(r^{−n}) for any r
- 15. Let F(s) = Σ_{n=1}^∞ fn s^n be a probability generating function. Show that the equation F(s) = 1 has only the solution s = 1 on |s| = 1 if and only if the set {n : fn > 0} has greatest common divisor 1.
- 14. For a complex number c with |c| > 1, show that the periodic function f(x) = (c − e^{2πix})^{−1} has the simple Fourier series coefficients ck = c^{−k−1} 1{k ≥ 0}. Argue from equation (13.5) that
- 13. Continuing Problems 11 and 12, suppose a constant a ≥ 0 and positive integer p exist such that |ck| ≤ a/|k|^{p+1} for all k ≠ 0. Integration by parts shows that this criterion holds if
- 12. Continuing Problem 11, let ck be the kth Fourier series coefficient of a general periodic function f(x). If |ck| ≤ a r^{|k|} for constants a ≥ 0 and 0 ≤ r < 1, then verify using equation
  for |k| < n. Functions analytic around 0 automatically possess Fourier coefficients satisfying the bound |ck| ≤ a r^{|k|}.
- 11. For 0 ≤ m ≤ n − 1 and a periodic function f(x) on [0, 1], define the sequence bm = f(m/n). If b̂k is the finite Fourier transform of the sequence bm, then we can approximate f(x) by
- 10. From a periodic sequence ck with period n, form the circulant matrix
  C = ⎛ c0      c_{n−1}  c_{n−2}  ···  c1 ⎞
      ⎜ c1      c0       c_{n−1}  ···  c2 ⎟
      ⎜ ⋮       ⋮        ⋮        ⋱    ⋮  ⎟
      ⎝ c_{n−1} c_{n−2}  c_{n−3}  ···  c0 ⎠
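The circulant matrix of Problem 10 is diagonalized by the discrete Fourier basis: each Fourier column vector is an eigenvector, with eigenvalue the corresponding (unnormalized) DFT coefficient of the first column. The sequence below is an arbitrary illustration.

```python
# Numerical confirmation that C v_k = lam_k v_k, where v_k is the k-th
# Fourier vector and lam_k the k-th DFT coefficient of the first column.
import numpy as np

c = np.array([3.0, 1.0, -2.0, 0.5, 4.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
lam = np.fft.fft(c)                  # eigenvalues of C
assert np.allclose(C @ F, F * lam)   # column k of F is scaled by lam[k]
print(np.round(lam, 4))
```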
- 9. Consider a power series f(x) = Σ_{m=0}^∞ cm x^m with radius of convergence r > 0. Prove that Σ_{m ≡ k mod n} cm x^m = (1/n) Σ_{j=0}^{n−1} un^{−jk} f(un^j x) for un = e^{2πi/n} and any x with |x| < r. As a special
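The root-of-unity filter of Problem 9 can be checked numerically; f(x) = e^x with n = 5, k = 2 below is an arbitrary test case.

```python
# Check: summing c_m x^m over m = k (mod n) equals (1/n) sum_j u^{-jk} f(u^j x)
# for f(x) = exp(x), whose coefficients are c_m = 1/m!.
import cmath
import math

n, k, x = 5, 2, 0.7
u = cmath.exp(2j * cmath.pi / n)
direct = sum(x ** m / math.factorial(m) for m in range(k, 60, n))
filtered = sum(u ** (-j * k) * cmath.exp(u ** j * x) for j in range(n)) / n
print(direct, filtered)          # agree up to rounding error
```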
- 8. Prove parts (a) through (c) of Proposition A.3.1.
- 7. For 0 ≤ r < n/2, define the rectangular and triangular smoothing sequences cj = (2r + 1)^{−1} 1{−r ≤ j ≤ r} and dj = r^{−1}(1 − |j|/r) 1{−r ≤ j ≤ r}, and extend them to have period n. Show that ĉk =
- 6. Show that the sequence cj = j on {0, 1, ..., n − 1} has finite Fourier transform
  ĉk = (n − 1)/2 for k = 0, and ĉk = −1/2 + (i/2) cot(kπ/n) for k ≠ 0.
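Problem 6's formula is easy to confirm with a library FFT, assuming the text's convention ĉk = (1/n) Σ_j cj e^{−2πijk/n}; the choice n = 16 is arbitrary.

```python
# Verify the transform of c_j = j against the closed form
# c_hat_0 = (n-1)/2 and c_hat_k = -1/2 + (i/2) cot(k pi / n) for k != 0.
import numpy as np

n = 16
c_hat = np.fft.fft(np.arange(n)) / n        # numpy's kernel matches e^{-2*pi*i*j*k/n}
assert abs(c_hat[0] - (n - 1) / 2) < 1e-12
for k in range(1, n):
    predicted = -0.5 + 0.5j / np.tan(np.pi * k / n)
    assert abs(c_hat[k] - predicted) < 1e-10
print("formula confirmed for n =", n)
```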
- 5. Explicitly calculate the finite Fourier transforms of the four sequences cj = 1, cj = 1{0}(j), cj = (−1)^j, and cj = 1{0,1,...,n/2−1}(j) defined on {0, 1, ..., n − 1}. For the last two sequences, assume that n is even.
- 4. Assume X and Y are independent random variables whose ranges are the nonnegative integers. Specify a finite Fourier transform method for computing the discrete density of the difference X − Y.
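One concrete way to carry out the program of Problem 4 is to reverse the density of Y (giving −Y up to a shift) and convolve via the FFT; the toy densities and padding scheme below are illustrative choices, not the text's.

```python
# FFT computation of the density of X - Y for bounded-support toy densities.
import numpy as np

px = np.array([0.2, 0.5, 0.3])          # P(X = 0, 1, 2), arbitrary example
py = np.array([0.6, 0.4])               # P(Y = 0, 1)
L = len(px) + len(py) - 1               # support size of X - Y after shifting
fx = np.fft.fft(px, L)
fy = np.fft.fft(py[::-1], L)            # reversed py is the shifted density of -Y
diff = np.fft.ifft(fx * fy).real        # index d corresponds to X - Y = d - (len(py) - 1)
# brute-force check of the same density
brute = np.zeros(L)
for i, a in enumerate(px):
    for j, b in enumerate(py):
        brute[i - j + len(py) - 1] += a * b
assert np.allclose(diff, brute)
print(np.round(diff, 4))
```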
- 3. Suppose you are given a transition probability matrix P and desire the n-step transition probability matrix P^n for a large value of n. Devise a method of computing P^n based on the binary expansion of n.
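The idea asked for in Problem 3 is repeated squaring along the binary digits of n, which costs O(log n) matrix multiplications; the 2×2 chain below is an arbitrary example.

```python
# Compute P^n by squaring P along the binary expansion of n.
import numpy as np

def matrix_power(P, n):
    result = np.eye(P.shape[0])
    Q = P.copy()
    while n > 0:
        if n & 1:                 # current binary digit of n is 1
            result = result @ Q
        Q = Q @ Q                 # square for the next binary digit
        n >>= 1
    return result

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
assert np.allclose(matrix_power(P, 13), np.linalg.matrix_power(P, 13))
print(matrix_power(P, 100))       # rows approach the stationary distribution
```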
- 2. Show that the entries of the block Gauss-Seidel update (13.3) are nonnegative.
- 1. Prove that the matrix R defined by equation (13.1) is invertible. (Hints: Apply Proposition 7.6.1 and the Sherman-Morrison formula
  (A + uv^T)^{−1} = A^{−1} − A^{−1} u v^T A^{−1} / (1 + v^T A^{−1} u)
  for the
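The Sherman-Morrison formula quoted in Problem 1's hint is easy to confirm numerically on a random well-conditioned matrix:

```python
# Numerical check of (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # comfortably invertible
u = rng.standard_normal((5, 1))
v = rng.standard_normal((5, 1))
Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + u @ v.T)
rhs = Ainv - (Ainv @ u @ v.T @ Ainv) / (1 + v.T @ Ainv @ u)
assert np.allclose(lhs, rhs)
print("Sherman-Morrison verified")
```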
- 8.11 Show that (B1) implies (a) (8.24) and (b) (8.26).
- 8.10 (a) If sup |Yn(t)| →P 0 and sup |Xn(t) − c| →P 0 as n → ∞, then sup |Xn(t) − c e^{Yn(t)}| →P 0, where the sup is taken over a common set t ∈ T. (b) Use (a) to show that (8.22) and
- 8.9 Prove Lemma 8.7.
- 8.8 Let X1, ..., Xn be iid as N(θ, 1) and consider the improper density π(θ) = e^{θ^4}. Then the posterior will be improper for all n.
- 8.7 Prove the result stated preceding Example 8.6.
- 8.6 Give an example in which the posterior density is proper (with probability 1) after two observations but not after one. [Hint: In the preceding example, let π(τ) = 1/τ².]
- 8.5 Let X1, ..., Xn be independent, positive variables, each with density (1/τ)f(xi/τ), and let τ have the improper density π(τ) = 1/τ (τ > 0). The posterior density after one observation is a
- 8.4 In Example 8.5, the posterior density of θ after one observation is f(x1 − θ); it is a proper density, and it satisfies (B5) provided Eθ|X1| < ∞.
- 8.3 The assumptions of Theorem 2.6 imply (8.1) and (8.2).
- 8.2 Referring to Example 8.1, consider, instead, the minimax estimator δn of p given by (1.11), which corresponds to the sequence of beta priors with a = b = √n/2. Then
  √n[δn − p] = [n/(n + √n)] {√n[X̄n − p] + (1/2 − p)},
  so that √n[δn − p] converges in law to N(1/2 − p, p(1 − p)).
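A simulation sketch of Problem 8.2 (with arbitrary n and p, assuming δn = (Σ Xi + √n/2)/(n + √n)): the normalized error √n(δn − p) centers near 1/2 − p rather than 0.

```python
# Simulate sqrt(n)(delta_n - p) for the minimax binomial estimator and
# compare its sample mean with the limiting mean 1/2 - p.
import math
import random

random.seed(4)
n, p, reps = 2500, 0.3, 2000
vals = []
for _ in range(reps):
    x = sum(random.random() < p for _ in range(n))            # Binomial(n, p)
    delta = (x + math.sqrt(n) / 2) / (n + math.sqrt(n))
    vals.append(math.sqrt(n) * (delta - p))
mean = sum(vals) / reps
print(mean, 0.5 - p)       # sample mean near the asymptotic bias 1/2 - p
```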
- 8.1 Determine the limit distribution of the Bayes estimator corresponding to squared error loss, and verify that it is asymptotically efficient, in each of the following cases: (a) The observations
- 7.34 The derivatives of all orders of the density (7.37) tend to zero as x → ξ .
- 7.33 Let X1, ..., Xn be iid according to the three-parameter lognormal distribution (7.37). Show that
- 7.32 In Example 7.15, (a) verify equation (7.39), (b) show that the choice a = −2 produces the estimator with the best second-order efficiency, (c) show that the limiting risk ratio of the MLE (a =
- 7.31 In the preceding problem, compare (a) the asymptotic distribution of the MLE and the UMVU estimator of c; (b) the normalized expected squared error of these two estimators.
- 7.30 In Example 7.13, determine the UMVU estimators of a and c, and the asymptotic distributions of these estimators.
- 7.29 In Example 7.13, show that (a) ĉ and â are independent and have the stated distributions; (b) X(1) and Σi log[Xi/X(1)] are complete sufficient statistics on the basis of a sample from (7.33).
- 7.28 In Example 7.12, show that (a) √n(b̂̂ − b) →L N(0, b²) and (b) √n(b̂ − b) →L N(0, b²).
- 7.27 In Example 7.12, compare (a) the asymptotic distributions of ξˆ and δn; (b) the normalized expected squared error of ξˆ and δn.
- 7.26 For each of the following estimates, write out the ψ function that determines it, and show that the estimator is consistent and asymptotically normal under the conditions of Theorems 9.2 and
- 7.25 Theorem 9.3: Under the conditions of Theorem 9.2, if, in addition, (i) Eθ0[∂/∂t ψ(X, t)|_{t=t0}] is finite and nonzero and (ii) Eθ0[ψ²(X, t0)] < ∞, then √n(T0 − t0) →L N(0, σ²_{T0}), where σ²_{T0} = Eθ0[ψ²(X, t0)] / (Eθ0[∂/∂t ψ(X, t)|_{t=t0}])².
- 7.24 To have consistency of M-estimators, a sufficient condition is that the root of the estimating function be unique and isolated. Establish the following theorem. Theorem 9.2: Assume that conditions
- 7.23 Let F have a differentiable density f and let ∫ψ²f < ∞. (a) Use integration by parts to write the denominator of (7.27) as [∫ψ(x)f′(x)dx]². (b) Show that σ²(F, ψ) ≥ [∫(f′/f)²f]^{−1} = I(F)^{−1}.
- 7.22 Show that if ρ is defined by (7.24), then ρ and ρ′ are everywhere continuous.
- 7.21 Show that the likelihood (7.21) is unbounded.
- 7.20 Verify the roots (7.22).
- 7.19 Suppose that in (7.21), the ξ's are themselves random variables, which are iid as N(µ, γ²). (a) Show that the joint density of the (Xi, Yi) is that of a sample from a bivariate normal
- 7.18 When τ = σ in (7.21), show that the MLE exists and is consistent.
- 7.17 In Example 7.8: (a) Show that for j > 1 the expected value of the conditional information (given Xj−1) that Xj contains about β is 1/(1 − β²). (b) Determine the information X1 contains about
- 7.16 (a) In Example 7.8, show that the likelihood equation has a unique solution, that it is the MLE, and that it has the same asymptotic distribution as δn = Σ_{i=1}^{n} Xi Xi+1 / Σ_{i=1}^{n} Xi². (b) Show
- 7.15 Prove that the sequence X1, X2, ... of Example 7.8 is stationary provided it satisfies (7.17).