Questions and Answers of Introduction To Statistical Investigations
Let \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) be a sequence of real functions defined by \(f_{n}(x)=\left(1+n^{-1}\right) \delta\{x ;(0,1)\}\) for all \(n \in \mathbb{N}\). a. Prove that \(\lim _{n \rightarrow \infty} \ldots\)
Let \(g(x)=\exp (-|x|)\) and define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=g(x) \delta\{|x| ;(n, \infty)\}\) for all \(n \in \mathbb{N}\). a. \(\ldots\)
Define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=n^{2} x(1-x)^{n}\) for \(x \in \mathbb{R}\) and for all \(n \in \mathbb{N}\). a. Calculate \(f(x)=\lim _{n \rightarrow \infty} f_{n}(x)\).
Define a sequence of functions \(\left\{f_{n}(x)\right\}_{n=1}^{\infty}\) as \(f_{n}(x)=n^{2} x(1-x)^{n}\) for \(x \in[0,1]\). Determine whether \(\lim _{n \rightarrow \infty} \int_{0}^{1} f_{n}(x)\,dx \ldots\)
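The truncated question above appears to concern interchanging the limit and the integral for \(f_{n}(x)=n^{2} x(1-x)^{n}\) on \([0,1]\); that reading is an assumption. A minimal numerical sketch, using the exact Beta-function value of the integral, shows the integrals tend to 1 while the pointwise limit of \(f_{n}\) is 0, so the two operations cannot be interchanged here:

```python
# Numerical sketch (assumption: the question compares lim_n ∫ f_n with
# ∫ lim_n f_n for f_n(x) = n^2 x (1-x)^n on [0,1]).

def integral_fn(n):
    # Beta-function identity: ∫_0^1 x(1-x)^n dx = 1/((n+1)(n+2)),
    # so ∫_0^1 f_n(x) dx = n^2 / ((n+1)(n+2)).
    return n**2 / ((n + 1) * (n + 2))

def fn(x, n):
    # The function itself, to illustrate the pointwise limit.
    return n**2 * x * (1 - x)**n

for n in [10, 100, 1000, 10000]:
    print(n, integral_fn(n), fn(0.5, n))
```

The integrals approach 1 as \(n\) grows, while \(f_{n}(1/2)\) collapses to 0, the classic failure of naive limit-integral interchange.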
Suppose that \(f\) is a quadratic polynomial. Prove that for \(\delta \in \mathbb{R}\),\[f(x+\delta)=f(x)+\delta f^{\prime}(x)+\frac{1}{2} \delta^{2} f^{\prime \prime}(x) .\]
Suppose that \(f\) is a cubic polynomial. Prove that for \(\delta \in \mathbb{R}\),\[f(x+\delta)=f(x)+\delta f^{\prime}(x)+\frac{1}{2} \delta^{2} f^{\prime \prime}(x)+\frac{1}{6} \delta^{3} f^{\prime \prime \prime}(x) .\]
Prove that if \(f\) is a polynomial of degree \(p\) then\[f(x+\delta)=\sum_{i=0}^{p} \frac{\delta^{i} f^{(i)}(x)}{i !} .\]
Prove Theorem 1.13 using induction. That is, assume that\[E_{1}(x, \delta)=\int_{x}^{x+\delta}(x+\delta-t) f^{\prime \prime}(t) d t,\]which has been shown to be true, and that \(E_{p}(x, \delta)=\ldots\)
Given that \(E_{p}(x, \delta)\) from Theorem 1.13 can be written as\[E_{p}(x, \delta)=\frac{1}{p !} \int_{x}^{x+\delta}(x+\delta-t)^{p} f^{(p+1)}(t) d t,\]show that \(E_{p}(x, \delta)=\delta^{p+1} \ldots\)
Use Theorem 1.13 with \(p=1,2\), and \(3\) to find approximations for each of the functions listed below for small values of \(\delta\). a. \(f(\delta)=1 /(1+\delta)\) b. \(f(\delta)=\sin ^{2}(\pi / \ldots)\)
Prove that the \(p^{\text {th }}\)-order Taylor expansion of a function \(f(x)\) has the same derivatives of order \(1, \ldots, p\) as \(f(x)\). That is, show that \(\frac{d^{j}}{d \delta^{j}} \ldots\)
Show, by taking successive derivatives of the standard normal density, that \(H_{3}(x)=x^{3}-3 x\), \(H_{4}(x)=x^{4}-6 x^{2}+3\), and \(H_{5}(x)=x^{5}-10 x^{3}+15 x\).
Use Theorem 1.13 (Taylor) to find fourth- and fifth-order polynomials that approximate the standard normal distribution function \(\Phi(x)\). Is there a difference between the \(\ldots\)
Prove Part 1 of Theorem 1.14 using induction. That is, prove that for any non-negative integer \(k\),\[H_{k}(x)=\sum_{i=0}^{\lfloor k / 2\rfloor}(-1)^{i} \frac{(2 i) !}{2^{i} i !} \ldots\]
Prove Part 2 of Theorem 1.14. That is, prove that for any integer \(k \geq 2\),\[H_{k}(x)=x H_{k-1}(x)-(k-1) H_{k-2}(x) .\]The simplest approach is to use Definition 1.6.
Prove Part 3 of Theorem 1.14 using only Definition 1.6. That is, prove that for any non-negative integer \(k\),\[\frac{d}{d x} H_{k}(x)=k H_{k-1}(x)\]Do not use the result of Part 1 of Theorem 1.14.
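The recurrence in Part 2 of Theorem 1.14 gives a direct way to evaluate the Hermite polynomials numerically, which also serves as a quick check on the closed forms \(H_{3}\), \(H_{4}\), and \(H_{5}\) quoted in the earlier exercise. A minimal sketch (the starting values \(H_{0}(x)=1\), \(H_{1}(x)=x\) are the standard probabilists' convention):

```python
# Probabilists' Hermite polynomials via the recurrence of Theorem 1.14,
# Part 2: H_k(x) = x H_{k-1}(x) - (k-1) H_{k-2}(x), with H_0 = 1, H_1 = x.

def hermite(k, x):
    h_prev, h = 1.0, x  # H_0(x), H_1(x)
    if k == 0:
        return h_prev
    for j in range(2, k + 1):
        # Advance the recurrence one order at a time.
        h_prev, h = h, x * h - (j - 1) * h_prev
    return h

# Compare with the closed forms H_3(x) = x^3 - 3x,
# H_4(x) = x^4 - 6x^2 + 3, H_5(x) = x^5 - 10x^3 + 15x at x = 2.
print(hermite(3, 2.0), hermite(4, 2.0), hermite(5, 2.0))
```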
The Hermite polynomials are often called a set of orthogonal polynomials. Consider the Hermite polynomials up to a specified order \(d\). Let \(\mathbf{h}_{k}\) be a vector in \(\mathbb{R}^{d}\) \(\ldots\)
In Theorem 1.15 prove that \(E_{p}(x, \delta)=o\left(\delta^{p}\right)\) as \(\delta \rightarrow 0\). Theorem 1.15. Let \(f\) be a function that has \(p+1\) bounded and continuous derivatives in the interval \(\ldots\)
Consider approximating the normal tail integral\[\bar{\Phi}(z)=\int_{z}^{\infty} \phi(t) d t\]for large values of \(z\) using integration by parts as discussed in Example 1.24. Use repeated \(\ldots\)
Using integration by parts, show that the exponential integral\[\int_{z}^{\infty} t^{-1} e^{-t} d t\]has asymptotic expansion\[z^{-1} e^{-z}-z^{-2} e^{-z}+2 z^{-3} e^{-z}-6 z^{-4} e^{-z}+\cdots\]
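The expansion in the exponential-integral exercise can be checked numerically: compare a brute-force quadrature of \(\int_{z}^{\infty} t^{-1} e^{-t}\,dt\) against the first few terms of the series. The truncation point and step count below are arbitrary choices, not part of the exercise:

```python
import math

def exp_integral(z, upper=60.0, steps=100000):
    # Composite trapezoidal approximation of ∫_z^upper t^{-1} e^{-t} dt;
    # the integrand is negligible beyond `upper` for moderate z.
    h = (upper - z) / steps
    total = 0.5 * (math.exp(-z) / z + math.exp(-upper) / upper)
    for i in range(1, steps):
        t = z + i * h
        total += math.exp(-t) / t
    return total * h

def asymptotic(z, terms=4):
    # Partial sums of e^{-z} (z^{-1} - z^{-2} + 2 z^{-3} - 6 z^{-4} + ...);
    # the coefficient of z^{-(k+1)} is (-1)^k k!.
    s, fact = 0.0, 1.0
    for k in range(terms):
        s += (-1)**k * fact / z**(k + 1)
        fact *= k + 1
    return s * math.exp(-z)

z = 10.0
print(exp_integral(z), asymptotic(z))
```

At \(z=10\) the four-term expansion already agrees with the quadrature to well under one percent, illustrating how an asymptotic series can be accurate even though it diverges for fixed \(z\).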
Prove the second and third results of Theorem 1.18. That is, let \(\left\{a_{n}\right\}_{n=1}^{\infty}\), \(\left\{b_{n}\right\}_{n=1}^{\infty}\), \(\left\{c_{n}\right\}_{n=1}^{\infty}\), and \(\ldots\)
Prove the remaining three results of Theorem 1.19. That is, consider two real sequences \(\left\{a_{n}\right\}_{n=1}^{\infty}\) and \(\left\{b_{n}\right\}_{n=1}^{\infty}\) and positive integers \(k\) and \(\ldots\)
For each specified pair of functions \(G(t)\) and \(g(t)\), determine the value of \(\alpha\) and \(c\) so that \(G(t) \asymp c t^{\alpha-1}\) as \(t \rightarrow \infty\), and determine if there is a \(\ldots\)
Consider a real function \(f\) that can be approximated with the asymptotic expansion\[f_{n}(x)=\pi x+\frac{1}{2} n^{-1 / 2} \pi^{2} x^{1 / 2}-\frac{1}{3} n^{-1} \pi^{3} x^{1 / 4}+O\left(n^{-3 / 2}\right) .\]
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the function, along with the three \(\ldots\)
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the error terms \(E_{1}(x, \delta)\), \(E_{2}(x, \delta)\), \(\ldots\)
Refer to the three approximations derived for each of the four functions in Exercise 26. For each function use \(\mathrm{R}\) to construct a line plot of the error terms \(E_{2}(x, \delta)\) and \(\ldots\)
Consider the approximation for the normal tail integral \(\bar{\Phi}(z)\) studied in Example 1.24 given by\[\bar{\Phi}(z) \simeq z^{-1} \phi(z)\left(1-z^{-2}+3 z^{-4}-15 z^{-6}+105 z^{-8}\right) .\] \(\ldots\)
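The five-term tail approximation quoted above is easy to evaluate against the exact tail probability, which is available through the complementary error function as \(\bar{\Phi}(z)=\tfrac{1}{2}\operatorname{erfc}(z/\sqrt{2})\). A minimal sketch:

```python
import math

def phi(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def tail_exact(z):
    # Φ̄(z) via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def tail_series(z):
    # The five-term expansion quoted in the exercise.
    return phi(z) / z * (1 - z**-2 + 3 * z**-4 - 15 * z**-6 + 105 * z**-8)

for z in [3.0, 4.0, 5.0]:
    print(z, tail_exact(z), tail_series(z))
```

The relative error shrinks rapidly as \(z\) grows, the defining behavior of an asymptotic (rather than convergent) approximation.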
Verify that \(\mathcal{F}=\left\{\emptyset, \omega_{1}, \omega_{2}, \omega_{3}, \omega_{1} \cup \omega_{2}, \omega_{1} \cup \omega_{3}, \omega_{2} \cup \omega_{3}, \omega_{1} \cup \omega_{2} \cup \omega_{3}\right\}\) \(\ldots\)
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of monotonically increasing events from a \(\sigma\)-field \(\mathcal{F}\) of subsets of a sample space \(\Omega\). Prove that the sequence \(\ldots\)
Consider a probability space \((\Omega, \mathcal{F}, P)\) where \(\Omega=(0,1) \times(0,1)\) is the unit square and \(P\) is a bivariate extension of Lebesgue measure. That is, if \(R\) is a \(\ldots\)
Prove Theorem 2.4. That is, prove that\[P\left(\bigcup_{i=1}^{n} A_{i}\right) \leq \sum_{i=1}^{n} P\left(A_{i}\right) .\]The most direct approach is based on mathematical induction using the general \(\ldots\)
Prove Theorem 2.6 (Markov) for the case when \(X\) is a discrete random variable on \(\mathbb{N}\) with probability distribution function \(p(x)\). Theorem 2.6 (Markov). Consider a random variable \(X\) \(\ldots\)
Prove Theorem 2.7 (Tchebysheff). That is, prove that if \(X\) is a random variable such that \(E(X)=\mu\) and \(V(X)=\sigma^{2}<\infty\), then \(P(|X-\mu|>\delta) \leq \delta^{-2} \sigma^{2}\).
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from a \(\sigma\)-field \(\mathcal{F}\) of subsets of a sample space \(\Omega\). Prove that if \(A_{n+1} \subset A_{n}\) for all \(n\) \(\ldots\)
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=(0,1)\), defined by \(A_{n}=\left(\tfrac{1}{3}, \ldots\right)\) \(\ldots\)
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=\mathbb{R}\), defined by \(A_{n}=\left(-1-n^{-1}, \ldots\right)\) \(\ldots\)
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=(0,1)\), defined by \(A_{n}=B\) if \(n\) \(\ldots\)
Let \(\left\{A_{n}\right\}_{n=1}^{\infty}\) be a sequence of events from \(\mathcal{F}\), a \(\sigma\)-field on the sample space \(\Omega=\mathbb{R}\), defined by \(A_{n}=\ldots\)
Consider a probability space \((\Omega, \mathcal{F}, P)\) where \(\Omega=(0,1)\), \(\mathcal{F}=\mathcal{B}\{(0,1)\}\), and \(P\) is Lebesgue measure on \((0,1)\). Let \(\ldots\)
Consider tossing a fair coin repeatedly and define \(H_{n}\) to be the event that the \(n^{\text {th }}\) toss of the coin yields a head. Prove that\[P\left(\limsup _{n \rightarrow \infty} H_{n}\right)=1 .\]
Consider the case where \(\left\{A_{n}\right\}_{n=1}^{\infty}\) is a sequence of independent events that all have the same probability \(p \in(0,1)\). Prove that\[P\left(\limsup _{n \rightarrow \infty} A_{n}\right)=1 .\]
Let \(\left\{U_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent \(\operatorname{Uniform}(0,1)\) random variables. For each definition of \(A_{n}\) given below, calculate \(P\left(\limsup _{n \rightarrow \infty} A_{n}\right)\).
Let \(X\) be a random variable that has moment generating function \(m(t)\) that converges on some radius \(|t| \leq b\) for some \(b>0\). Using induction, prove \(\ldots\)
Let \(X\) be a \(\operatorname{Poisson}(\lambda)\) random variable. a. Prove that the moment generating function of \(X\) is \(\exp \{\lambda[\exp (t)-1]\}\). b. Prove that the characteristic function of \(X\) is \(\exp \{\lambda[\exp (i t)-1]\}\).
Let \(Z\) be a \(\mathrm{N}(0,1)\) random variable. a. Prove that the moment generating function of \(Z\) is \(\exp \left[\frac{1}{2} t^{2}\right]\). b. Prove that the characteristic function of \(Z\) is \(\exp \left[-\frac{1}{2} t^{2}\right]\).
Let \(Z\) be a \(\mathrm{N}(0,1)\) random variable and define \(X=\mu+\sigma Z\) for some \(\mu \in \mathbb{R}\) and \(0<\sigma<\infty\). \(\ldots\)
Let \(X\) be a \(\mathrm{N}\left(\mu, \sigma^{2}\right)\) random variable. Using the moment generating function, derive the first three moments of \(X\). Repeat the process using the characteristic function.
Let \(X\) be a \(\operatorname{Uniform}(\alpha, \beta)\) random variable. a. Prove that the moment generating function of \(X\) is \([t(\beta-\alpha)]^{-1}[\exp (t \beta)-\exp (t \alpha)]\). b. \(\ldots\)
Let \(X\) be a random variable. Prove that the characteristic function of \(X\) is real valued if and only if \(X\) has the same distribution as \(-X\).
Prove Theorem 2.24. That is, suppose that \(X\) is a random variable with moment generating function \(m_{X}(t)\) that exists and is finite for \(|t|<b\) for some \(b>0\). Suppose that \(Y\) is a new random variable \(\ldots\)
Prove Theorem 2.32. That is, suppose that \(X\) is a random variable with characteristic function \(\psi(t)\). Let \(Y=\alpha X+\beta\) where \(\alpha\) and \(\beta\) are real constants. Prove that \(\psi_{Y}(t)=\exp (i t \beta) \psi(\alpha t)\).
Prove Theorem 2.33. That is, let \(X_{1}, \ldots, X_{n}\) be a sequence of independent random variables where \(X_{i}\) has characteristic function \(\psi_{i}(t)\), for \(i=1, \ldots, n\). \(\ldots\)
Let \(X_{1}, \ldots, X_{n}\) be a sequence of independent random variables where \(X_{i}\) has a \(\operatorname{Gamma}\left(\alpha_{i}, \beta\right)\) distribution for \(i=1, \ldots, n\). \(\ldots\)
Suppose that \(X\) is a discrete random variable that takes on non-negative integer values and has characteristic function \(\psi(t)=\exp \{\theta[\exp (i t)-1]\}\). Use Theorem 2.29 to find the \(\ldots\)
Suppose that \(X\) is a discrete random variable that takes on the values \(\{-1,1\}\) and has characteristic function \(\psi(t)=\cos (t)\). Use Theorem 2.29 to find the probability that \(X\) equals \(\ldots\)
Suppose that \(X\) is a discrete random variable that takes on positive integer values and has characteristic function\[\psi(t)=\frac{p \exp (i t)}{1-(1-p) \exp (i t)} .\]Use Theorem 2.29 to find the \(\ldots\)
Suppose that \(X\) is a continuous random variable that takes on real values and has characteristic function \(\psi(t)=\exp (-|t|)\). Use Theorem 2.28 to find the density of \(X\).
Suppose that \(X\) is a continuous random variable that takes on values in \((0,1)\) and has characteristic function \(\psi(t)=[\exp (i t)-1] / i t\). Use Theorem 2.28 to find the density of \(X\).
Suppose that \(X\) is a continuous random variable that takes on positive real values and has characteristic function \(\psi(t)=(1-\theta i t)^{-\alpha}\). Use Theorem 2.28 to find the density of \(X\).
Let \(X\) be a random variable with characteristic function \(\psi\). Suppose that \(E\left(|X|^{n}\right)<\infty\). \(\ldots\)
a. Prove that \(\kappa_{4}=\mu_{4}^{\prime}-4 \mu_{3}^{\prime} \mu_{1}^{\prime}-3\left(\mu_{2}^{\prime}\right)^{2}+12 \mu_{2}^{\prime}\left(\mu_{1}^{\prime}\right)^{2}-6\left(\mu_{1}^{\prime}\right)^{4}\). \(\ldots\)
a. Prove that\[\kappa_{5}=\mu_{5}^{\prime}-5 \mu_{4}^{\prime} \mu_{1}^{\prime}-10 \mu_{3}^{\prime} \mu_{2}^{\prime}+20 \mu_{3}^{\prime}\left(\mu_{1}^{\prime}\right)^{2}+30\left(\mu_{2}^{\prime}\right)^{2} \mu_{1}^{\prime}-60 \mu_{2}^{\prime}\left(\mu_{1}^{\prime}\right)^{3}+24\left(\mu_{1}^{\prime}\right)^{5} .\] \(\ldots\)
Prove Theorem 2.34. That is, let \(X_{1}, \ldots, X_{n}\) be a sequence of independent random variables where \(X_{i}\) has cumulant generating function \(c_{i}(t)\) for \(i=1, \ldots, n\). \(\ldots\)
Suppose that \(X\) is a \(\operatorname{Poisson}(\lambda)\) random variable, so that the moment generating function of \(X\) is \(m(t)=\exp \{\lambda[\exp (t)-1]\}\). Find the cumulant generating function of \(X\).
Suppose that \(X\) is a \(\operatorname{Gamma}(\alpha, \beta)\) random variable, so that the moment generating function of \(X\) is \(m(t)=(1-t \beta)^{-\alpha}\). Find the cumulant generating function of \(X\).
Suppose that \(X\) is a \(\operatorname{Laplace}(\alpha, \beta)\) random variable, so that the moment generating function of \(X\) is \(m(t)=\left(1-t^{2} \beta^{2}\right)^{-1} \exp (t \alpha)\) when \(|t|<\beta^{-1}\). \(\ldots\)
One consequence of defining the cumulant generating function in terms of the moment generating function is that the cumulant generating function will not exist any time the moment generating function \(\ldots\)
For each of the distributions listed below, use \(\mathrm{R}\) to compute \(P(|X-\mu|>\delta)\) and compare the result to the bound given by Theorem 2.7 as \(\delta^{-2} \sigma^{2}\) for \(\ldots\)
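The exercise asks for the computation in R; a minimal Python sketch of the same comparison, using the standard normal as an assumed example distribution (so \(\mu=0\), \(\sigma^{2}=1\)), illustrates how conservative the Tchebysheff bound is:

```python
import math

def normal_tail(delta):
    # Exact two-sided tail P(|Z| > delta) for Z ~ N(0,1),
    # via the complementary error function.
    return math.erfc(delta / math.sqrt(2.0))

for delta in [1.0, 1.5, 2.0, 3.0]:
    bound = 1.0 / delta**2  # Tchebysheff bound with sigma^2 = 1
    print(delta, normal_tail(delta), bound)
```

For every \(\delta\) the exact tail sits well below \(\delta^{-2}\sigma^{2}\), as Theorem 2.7 guarantees; the gap widens quickly as \(\delta\) grows.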
For each distribution listed below, plot the corresponding characteristic function of the density as a function of \(t\) if the characteristic function is real-valued, or as a function of \(t\) on \(\ldots\)
For each value of \(\mu\) and \(\sigma\) listed below, plot the characteristic function of the corresponding \(\mathrm{N}\left(\mu, \sigma^{2}\right)\) distribution as a function of \(t\) in the \(\ldots\)
Random walks are a special type of discrete stochastic process that can change from one state to any adjacent state according to a conditional probability distribution. This experiment will \(\ldots\)
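The truncated experiment above describes a random walk simulation; since its exact transition distribution is cut off, the simple symmetric walk on the integers below is an assumed stand-in (and Python stands in for any R the exercise may request):

```python
import random

# Minimal sketch: a simple symmetric random walk on the integers,
# stepping +1 or -1 with equal probability from the current state.
# (The exercise's actual transition distribution is an assumption here.)

def random_walk(n_steps, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    path = [0]                 # start at the origin
    for _ in range(n_steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

walk = random_walk(1000)
print(walk[-1])
```

Each state change is to an adjacent state, which is the defining property of the walks the exercise introduces.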
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent random variables where \(X_{n}\) is a \(\operatorname{Gamma}(\alpha, \beta)\) random variable with \(\alpha=n\) and \(\ldots\)
Let \(Z\) be a \(\mathrm{N}(0,1)\) random variable and let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables such that \(X_{n}=Y_{n}+Z\) where \(Y_{n}\) is a \(\ldots\)
Consider a sequence of independent random variables \(\left\{X_{n}\right\}_{n=1}^{\infty}\) where \(X_{n}\) has a \(\operatorname{Binomial}(1, \theta)\) distribution. Prove that the \(\ldots\)
Let \(U\) be a \(\operatorname{Uniform}(0,1)\) random variable and define a sequence of random variables \(\left\{X_{n}\right\}_{n=1}^{\infty}\) as \(X_{n}=\delta\left\{U ;\left(0, n^{-1}\right)\right\}\).
Let \(\left\{c_{n}\right\}_{n=1}^{\infty}\) be a sequence of real constants such that\[\lim _{n \rightarrow \infty} c_{n}=c\]for some constant \(c \in \mathbb{R}\). Let \(\ldots\)
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a shifted exponential density of the form\[f(x)= \begin{cases}\exp [-(x-\theta)] & x \geq \theta \\ 0 & \text { otherwise. }\end{cases}\] \(\ldots\)
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\) with variance \(\mu_{2}\), where \(E\left(\left|X_{1}\right|^{4}\right)<\infty\). a. Prove \(\ldots\)
Consider a sequence of independent random variables \(\left\{X_{n}\right\}_{n=1}^{\infty}\) where \(X_{n}\) has probability distribution function \(f_{n}(x)=2^{-(n+1)}\) for \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables such that\[\lim _{n \rightarrow \infty} E\left(\left|X_{n}-c\right|\right)=0\]for some \(c \in \mathbb{R}\). Prove that \(X_{n}\) converges in probability to \(c\) as \(n \rightarrow \infty\).
Let \(U\) be a \(\operatorname{Uniform}[0,1]\) random variable and let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables such that \(X_{n}=\delta\left\{U ;\left(0, \ldots\right)\right\}\) \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent random variables where \(X_{n}\) has probability distribution function\[f(x)= \begin{cases}1-n^{-1} & x=0 \\ n^{-1} & x=\ldots\end{cases}\]
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables following the distribution \(F\). Prove that for a fixed value of \(t \in \mathbb{R}\), the empirical \(\ldots\)
Prove Theorem 3.3 using the theorems of Borel and Cantelli. That is, let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables that converges completely to a random variable \(X\) \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of monotonically increasing random variables that converge in probability to a random variable \(X\). That is, \(P\left(X_{n} \leq X_{n+1}\right)=1\) \(\ldots\)
Prove Part 2 of Theorem 3.6. That is, let \(\left\{\mathbf{X}_{n}\right\}_{n=1}^{\infty}\) be a sequence of \(d\)-dimensional random vectors and let \(\mathbf{X}\) be another \(d\)-dimensional random vector. \(\ldots\)
Let \(\left\{\mathbf{X}_{n}\right\}_{n=1}^{\infty}\) be a sequence of random vectors that converge almost certainly to a random vector \(\mathbf{X}\) as \(n \rightarrow \infty\). Prove that for every \(\ldots\)
Let \(\left\{U_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent and identically distributed \(\operatorname{Uniform}(0,1)\) random variables and let \(U_{(n)}\) be the largest order statistic \(\ldots\)
Prove the first part of Theorem 3.7. That is, let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables, \(c\) be a real constant, and \(g\) be a Borel function on \(\mathbb{R}\) \(\ldots\)
Prove the second part of Theorem 3.8. That is, let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables, \(X\) be a random variable, and \(g\) be a Borel function on \(\mathbb{R}\). \(\ldots\)
Let \(\left\{\mathbf{X}_{n}\right\}\) be a sequence of \(d\)-dimensional random vectors, \(\mathbf{X}\) be a \(d\)-dimensional random vector, and \(g: \mathbb{R}^{d} \rightarrow \mathbb{R}^{q}\) be a \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\), \(\left\{Y_{n}\right\}_{n=1}^{\infty}\), and \(\left\{Z_{n}\right\}_{n=1}^{\infty}\) be independent sequences of random variables that converge in probability to the \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of random variables. Suppose that for every \(\varepsilon>0\) we have that \(\limsup _{n \rightarrow \infty} \ldots\)
A result from calculus is Kronecker's Lemma, which states that if \(\left\{b_{n}\right\}_{n=1}^{\infty}\) is a monotonically increasing sequence of real numbers such that \(b_{n} \rightarrow \infty\) as \(n \rightarrow \infty\) \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent and identically distributed random variables from a \(\operatorname{Cauchy}(0,1)\) distribution. Prove that the mean of the \(\ldots\)
Let \(\left\{X_{n}\right\}_{n=1}^{\infty}\) be a sequence of independent and identically distributed random variables from a density of the form\[f(x)= \begin{cases}c\left[x^{2} \log (|x|)\right]^{-1} & |x|>2 \\ \ldots\end{cases}\]
Prove Theorem 3.17. That is, let \(F\) and \(G\) be two distribution functions. Show that\[\|F-G\|_{\infty}=\sup _{t \in \mathbb{R}}|F(t)-G(t)|\]is a metric in the space of distribution functions.
In the proof of Theorem 3.18, verify that \(\hat{F}_{n}(t)-F(t) \geq \hat{F}_{n}(t)-F\left(t_{i-1}\right)-\varepsilon\). Theorem 3.18 (Glivenko and Cantelli). Let \(X_{1}, \ldots, X_{n}\) be a set of independent \(\ldots\)
Let \(X_{1}, \ldots, X_{n}\) be a set of independent and identically distributed random variables from a distribution \(F\). a. Prove that if \(E\left(\left|X_{1}\right|^{k}\right)<\infty\) \(\ldots\)