Probability Theory And Statistical Inference 2nd Edition Aris Spanos - Solutions
Explain how the moment generating function can be used to derive the moments.
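A hedged worked illustration (the exponential case is my own choice, not the book's): differentiating the moment generating function at t = 0 recovers the raw moments, E(X^r) = M_X^(r)(0).

```latex
M_X(t) = E\!\left(e^{tX}\right) = \frac{\lambda}{\lambda - t}, \quad t < \lambda,
\qquad \text{for } X \sim \text{Exp}(\lambda), \\[4pt]
E(X) = M_X'(0) = \frac{1}{\lambda}, \qquad
E(X^2) = M_X''(0) = \frac{2}{\lambda^2}, \qquad
\text{hence } \operatorname{Var}(X) = \frac{1}{\lambda^2}.
```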
Explain the concept of skewness and discuss why α3 = 0 does not imply that the distribution in question is symmetric.
Explain the concept of kurtosis and discuss why it is of limited value when the distribution is non-symmetric.
For a Weibull distribution with parameters (α = 3.345, β = 3.45), derive the kurtosis coefficient using the formulae in Appendix 3.A.
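A minimal numerical sketch of the same calculation, assuming the shape-scale parameterization with raw moments E(X^r) = β^r·Γ(1 + r/α); Appendix 3.A may parameterize the Weibull differently, so treat this only as a cross-check, and note the helper names are hypothetical.

```python
from math import gamma

def weibull_raw_moment(r, alpha, beta):
    # Assumed parameterization: E(X^r) = beta^r * Gamma(1 + r/alpha)
    return beta**r * gamma(1 + r / alpha)

def weibull_kurtosis(alpha, beta):
    m1, m2, m3, m4 = (weibull_raw_moment(r, alpha, beta) for r in (1, 2, 3, 4))
    mu2 = m2 - m1**2                           # central second moment (variance)
    mu4 = m4 - 4*m1*m3 + 6*m1**2*m2 - 3*m1**4  # central fourth moment
    return mu4 / mu2**2                        # kurtosis coefficient alpha_4

print(weibull_kurtosis(alpha=3.345, beta=3.45))
```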
Explain why matching moments between two distributions can lead to misleading conclusions.
Compare and contrast the cumulative distribution function and the quantile function.
Explain the concepts of a percentile and a quantile and how they are related.
Why do we care about probabilistic inequalities?
“Moments do not characterize distributions in general and when they do we often need an infinite number of moments for the characterization.” Discuss.
Explain the probability integral and the probability integral transformations. How useful can they be in simulating non-uniform random variables?
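A minimal simulation sketch of the probability integral transform, using the exponential as the target distribution (an illustrative choice, not one specified in the exercise): uniform draws pushed through the inverse cdf are draws from the target.

```python
import math
import random

def exponential_via_pit(lam, n, seed=0):
    # If U ~ Uniform(0, 1) and F is a continuous cdf, then X = F^{-1}(U) has cdf F.
    # For the exponential, F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1 - u)/lam.
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

print(exponential_via_pit(lam=2.0, n=5))
```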
Consider the discrete uniform distribution with density fx(x; θ) = 1/(n + 1), where n is an integer, x = 0, 1, 2, ..., n. Derive E(X) and Var(X); note that Σ_{x=0}^{n} x = n(n+1)/2 and Σ_{x=0}^{n} x² = n(n+1)(2n+1)/6.
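A sketch of the algebra, using the two summation identities quoted in the exercise (not the book's own solution):

```latex
E(X) = \frac{1}{n+1}\sum_{x=0}^{n} x = \frac{n}{2}, \qquad
E(X^2) = \frac{1}{n+1}\sum_{x=0}^{n} x^2 = \frac{n(2n+1)}{6}, \\[4pt]
\operatorname{Var}(X) = E(X^2) - [E(X)]^2
  = \frac{n(2n+1)}{6} - \frac{n^2}{4} = \frac{n(n+2)}{12}.
```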
“Marginalizing amounts to throwing away all the information relating to the random variable we are summing (integrating) out.” Comment.
Consider the random experiment of tossing a coin twice and define the random variables X=number of heads, and Y=|number of heads − number of tails|. Derive the joint distribution of (X, Y), assuming a fair coin, and check whether the two random variables are independent.
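A short enumeration sketch (assuming a fair coin, as the exercise states) that tabulates the joint distribution and makes the independence check explicit.

```python
from collections import Counter
from itertools import product

# Enumerate the four equally likely outcomes of tossing a fair coin twice,
# with X = number of heads and Y = |number of heads - number of tails|.
joint = Counter()
for outcome in product("HT", repeat=2):
    heads = outcome.count("H")
    tails = 2 - heads
    joint[(heads, abs(heads - tails))] += 0.25

print(dict(joint))  # {(2, 2): 0.25, (1, 0): 0.5, (0, 2): 0.25}

# Not independent: P(X=1)*P(Y=2) = 0.5*0.5 = 0.25, yet P(X=1, Y=2) = 0.
```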
Let the joint density function of two random variables X and Y be as given below. (a) Derive the marginal distributions of X and Y. (b) Determine whether X and Y are independent. (c) Verify your answer in (b) using the conditional distribution(s).
x\y    -1     0     1
-1     .2    .2    .2
 1     .1    .1    .2
Define the concept of independence for two random variables X and Y in terms of the joint, marginal, and conditional density functions.
Explain the concept of a random sample and explain why it is often restrictive for most economic data series.
Describe briefly the formalization of condition [c], “the experiment can be repeated under identical conditions,” in the form of the concept of a random sample.
Explain intuitively why it makes sense that when the joint distribution f (x, y) is Normal, the marginal distributions fx(x) and fy(y) are also Normal.
Define the raw and central joint moments and show that Cov(X, Y) = E(XY) −E(X)·E(Y). Why do we care about these moments?
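The covariance identity itself follows in one line from the definition of the central joint moment; a sketch of the algebra:

```latex
\operatorname{Cov}(X,Y) = E\big[(X - E(X))(Y - E(Y))\big]
  = E(XY) - E(X)E(Y) - E(Y)E(X) + E(X)E(Y)
  = E(XY) - E(X)\,E(Y).
```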
(a) Explain the concept of an ordered sample. (b) Explain intuitively why an ordered random sample is neither independent nor identically distributed.
Explain the concepts of identifiability and parameterization.
“In relating statistical models to (economic) theoretical models we often need to reparameterize/restrict the former in order to render the estimated parameters theoretically meaningful.” Explain.
Explain the concept of a random sample and its restrictiveness in the case of most economic data series.
How do we assess the distributional features of a data series using a t-plot?
“A smoothed histogram is more appropriate in assessing the distributional features of a data series than the histogram itself because the former is less data specific.” Explain.
Explain how one can distinguish between a t-plot of NIID and a t-plot of Student’s t IID observations.
Explain the relationship between the abstract concept of independence and the corresponding chance regularity pattern in a t-plot of a data series.
Explain how any form of dependence will help the modeler in prediction.
Explain the relationship between the abstract concept of identical distribution and the corresponding chance regularity pattern in a t-plot of a data series.
“Without an ordering of the observations one cannot talk about dependence and heterogeneity.” Discuss.
Explain the notion of a P–P plot and the Normal P–P plot in particular.
Compare and contrast a Normal P–P and a Normal Q–Q plot.
Explain how the standardized Student’s t P–P plot can be used to evaluate the degrees of freedom parameter.
Explain the notion of a reference distribution in a P–P plot. Why does the Cauchy reference distribution take different shapes in the context of a standardized Normal and a Student’s t P–P plot?
The data in Appendix 5.A (see Lai and Xing (2008), p. 71) denote log-returns on six stocks, monthly observations from August 2000 to October 2005, where PFE=Pfizer, INTEL=Intel, CITI=Citigroup, AXP=American Express, XOM=Exxon-Mobil, GM=General Motors. In addition, the data include log-returns for
Why do we care about heterogeneity and dependence in statistical models?
Explain how the idea of sequential conditioning helps to deal with the problem of many dimensions of the joint distribution of a non-random sample.
(a) Define the following concepts: (i) joint moments, (ii) conditional moments, (iii) non-correlation, (iv) orthogonality. (b) Explain the difference between (i) dependence vs. correlation and (ii) correlation vs. non-orthogonality.
For X ~ N(0, 1), define the random variable Y = X² − 1 and show that Cov(X, Y) = 0, but the two random variables are not independent.
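A sketch of the covariance part of the argument, using the fact that the odd moments of N(0, 1) are zero:

```latex
\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = E\big(X(X^{2}-1)\big) - 0
  = E(X^{3}) - E(X) = 0.
```

Yet Y is a deterministic function of X (for instance, P(Y = 3 | X = 2) = 1 while P(Y = 3) < 1), so X and Y cannot be independent.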
Let the joint density function of two random variables X and Y be as given below. (a) Derive the conditional distributions f(y|x), x = 0, 1. (b) Derive the following moments: E(X), E(Y), Var(X), Var(Y), Cov(X, Y), E(XY), Corr(X, Y), E(Y|X = 0), E(Y|X = 1), Var(Y|X = 0).
x\y    0     1     2
0     .1    .2    .2
1     .2    .1    .2
Explain the notion of rth-order conditional dependence and compare it with that of (m, k)th-order dependence.
Explain and compare conditional independence and Markov dependence.
Explain why non-correlation implies independence in the case of a bivariate Normal distribution. How does one assess the correlation by looking at a scatterplot of observed data?
Explain how one can distinguish between the equal-probability contours of the Normal, Student’s t and Pearson type II bivariate densities.
Explain why zero correlation does not imply independence in the case of the Student’s t and Pearson type II bivariate distributions.
Explain how an increase in correlation will affect the bivariate exponential density. What does that mean for the scatterplot?
Explain why the notion of correlation makes no sense in the case of random variables measured on the nominal scale.
(a) Define the following measures of dependence: (i) cross-product ratio, (ii) Yule’s Q, (iii) gamma coefficient. (b) Explain why these measures can be used for nominal- and ordinal-scale data. (c) For the 2 × 2 table in Example 6.12(a), evaluate the following: (i) the concordance and discordance
The data in Appendix 5.A (Lai and Xing, 2008, p. 71) denote log-returns on six stocks, with monthly observations from August 2000 to October 2005, where PFE=Pfizer, INTEL=Intel, CITI=Citigroup, AXP=American Express, XOM=Exxon-Mobil, GM=General Motors. In addition, the data include log-returns for
Explain how the notion of conditioning enables us to deal with the dimensionality problem raised by joint distributions of samples.
Explain why the reduction f(x, y) = f(y|x)·fx(x) raises a problem due to the fact that {f(y|X = x; ϕ1), ∀x∈RX} represents as many conditional distributions as there are possible values of x in RX.
Define and explain the following concepts: (a) conditional moment functions; (b) regression function; (c) skedastic function; (d) homoskedasticity; (e) heteroskedasticity.
Consider the joint distribution as given below: (a) Derive the conditional distributions of (Y|X = x) for all values of X. (b) Derive the regression and skedastic functions for the distributions in (a).
x\y      1      2      3    fx(x)
-1      .10    .08    .02    .2
 0      .15    .06    .09    .3
 1      .20    .20    .10    .5
fy(y)   .45    .34    .21     1
Let the joint density function of two random variables X and Y be as given below.
x\y    0     1     2
0     .1    .2    .2
1     .2    .1    .2
Compare and contrast the concepts E[Y|X = x] and E[Y|σ(X)].
From the bivariate distributions in Chapter 7: (a) Collect the regression functions which are: (i) linear in the conditioning variable, (ii) linear in parameters, and (iii) the intersection of (i) and (ii). (b) Collect the skedastic functions which are: (iv) homoskedastic, (v) heteroskedastic. (c) In
Explain the notion of linear regression. Explain the difference between linearity in x and linearity in the parameters.
Consider the joint Normal distribution denoted by (X, Y) ~ N((μ1, μ2), (σ11, σ12; σ12, σ22)) (7.65). (a) For values μ1 = 1, μ2 = 1.5, σ11 = 1, σ12 = 0.8, σ22 = 2, plot E(Y|X = x) and Var(Y|X = x) for x = 0, 1, 2. (b) Plot E(Y|X = x) and Var(Y|X = x) for x = 0, 1, 2 for a bivariate Student’s t distribution with the parameters
Explain the concept of stochastic conditional moment functions.
Explain the notion of weak exogeneity. Why do we care?
Explain what one could do when weak exogeneity does not hold for a particular reduction as given in (7.64).
Explain the concept of a statistical generating mechanism and discuss its role in empirical modeling.
Let Y be a random variable and define the error term by u = Y − E(Y|σ(X)). Show that, by definition, this random variable satisfies the following properties: [i] E(u|σ(X)) = 0, [ii] E(u·X|σ(X)) = 0, [iii] E(u) = 0, [iv] E{u·[E(Y|σ(X))]|σ(X)} = 0.
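A sketch of properties [i] and [iii], which follow from the fact that E(Y|σ(X)) is σ(X)-measurable and from the law of iterated expectations; the remaining two properties follow the same pattern:

```latex
[i]\;\; E(u \mid \sigma(X)) = E\big(Y - E(Y \mid \sigma(X)) \,\big|\, \sigma(X)\big)
  = E(Y \mid \sigma(X)) - E(Y \mid \sigma(X)) = 0, \\[4pt]
[iii]\; E(u) = E\big[E(u \mid \sigma(X))\big] = E(0) = 0.
```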
Explain the difference between temporal and contemporaneous dependence.
Compare and contrast the statistical GMs of: (a) the simple Normal model, (b) the linear/Normal regression model, and (c) the linear/Normal autoregressive model.
Compare and contrast the simple Normal and Normal/linear regression models in terms of their probability and sampling models.
Compare and contrast the Normal/linear and Student’s t regression models in terms of their probability and sampling models.
Explain why the statistical and structural (substantive) models are based on very different information.
Discuss why a statistical misspecified model provides a poor basis for reliable inference concerning the substantive questions of interest.
Explain how the purely probabilistic construal of a statistical model enables one to draw a clear line between the statistical and substantive models.
Explain how the discussion about dependence and regression models in relation to the bivariate distribution f(x, y; φ) can be used to represent both synchronous and temporal dependence, by transforming the generic notation (X, Y) to stand for both (Xt, Yt) and (Yt, Yt−1).
Discuss the purely probabilistic construal of a regression model proposed in this chapter and explain why it enables one to distinguish between a statistical and a substantive (structural) model. Explain the relationship between the two types of models.
(a) Why do we need the notion of a stochastic process? How does it differ from the concept of a random variable? (b) Explain why the claim that when one assumes an IID sample, there is no need to use an ordering for the data is highly misleading.
(a) Explain the notion of a sample path of a stochastic process and contrast it to viewing the process as a sequence of random variables. (b) Compare the notions of index and probability averages. When do the two coincide?
(a) Explain the classification of stochastic processes using the index set and the state space. (b) Explain why the notion of a stochastic process is relevant in modeling all types of data: time-series, cross-section, and panel data.
Explain intuitively the Kolmogorov extension theorem for a stochastic process {Xt, t∈N} and discuss its significance for modeling purposes.
What is the difference between the joint distribution and functional perspectives on stochastic processes? Why should we care?
(a) Explain the concepts of Markov dependence and homogeneity. (b) Explain the relationships between Markov, partial sums, and independent increments processes. (c) What is the relationship between identically distributed increment processes and a stationary process?
(a) Explain the concept of a partial sum stochastic process and the type of separable heterogeneity it gives rise to. (b) Explain why separable heterogeneity often gives rise to operational models. (c) Compare and contrast separable heterogeneity with non-stationarity.
(a) Compare and contrast a partial sum process and a martingale process. (b) Explain the probabilistic structure of a random walk process and contrast it to that of a Normal random walk.
Compare and contrast a Brownian motion and a Wiener process.
Explain the following notions of dependence: (a) independence, (b) Markov dependence, (c) Markov dependence of order p, (d) m-dependence, (e) asymptotic independence, (f) non-correlation, (g) α-mixing, (h) φ-mixing.
(a) Explain the relationship between ψ-mixing and a mixingale, and discuss the appropriateness of such probabilistic assumptions for empirical modeling purposes. (b) Explain the notion of ergodicity and explain why it might be non-testable in the context of statistical models.
(a) Explain the following notions of heterogeneity: (i) identical distribution, (ii) strict stationarity, (iii) first-order stationarity, (iv) second-order stationarity, (v) mth-order stationarity. (b) Explain why a strictly stationary process is not necessarily second-order stationary.
Compare and contrast the stochastic processes: (a) white-noise process, (b) innovation process, (c) martingale difference process. (d) Explain why a martingale difference process can accommodate dynamic heteroskedasticity but the innovation process cannot.
Explain the notion of a Markov chain process and discuss the case where it is homogeneous.
Explain the notion of a Poisson process and relate it to the associated “waiting times” process.
Explain the probabilistic structure of a martingale process and relate it to that of a martingale difference process.
Consider the IID process {Xt, t = 1, 2, 3, ...} such that E(Xt) = μ = 0, t = 1, 2, .... Show that the process {Mk = Σ_{t=1}^{k} Xt, k = 1, 2, 3, ...} is a martingale.
Let {Sk = Σ_{t=1}^{k} Yt, k∈N} be a simple random walk process, where Yt = 2Zt − 1, RY := {−1, 1}, t∈N, and {Zt, t∈N} is an IID Bernoulli process, Zt ~ BerIID(p, p(1 − p)). Show that for p = .5, {Sk, k∈N} is a martingale process.
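Reading the random-walk exercise above as reconstructed, the martingale property reduces to checking that the increments Yt have zero mean when p = .5; a sketch of the argument:

```latex
E(Y_t) = 2E(Z_t) - 1 = 2p - 1 = 0 \quad \text{for } p = .5, \\[4pt]
E\big(S_k \mid S_{k-1}, \ldots, S_1\big)
  = S_{k-1} + E(Y_k \mid S_{k-1}, \ldots, S_1) = S_{k-1} + E(Y_k) = S_{k-1},
```

together with E|Sk| ≤ k < ∞, so {Sk, k∈N} is a martingale.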
Compare and contrast a white-noise process {Zt, t∈N} when the underlying distribution is Normal vs. Student’s t in terms of the respective conditional processes {Zt|σ(Zt−1, ..., Z1), t∈N}.
Explain how a Gaussian, Markov, stationary process can give rise to an AR(1) model. Compare the resulting model when the process is Student’s t, Markov, and stationary.
“An ARMA(p, q) representation constitutes a parsimonious version of an MA(∞).” Discuss.
Explain the probabilistic structure of a Wiener process and compare it with that of a Brownian motion process.
Explain how a Brownian motion process can be changed into a second-order stationary process by a transformation of the index.
Compare and contrast the Normal autoregressive model and the Normal/linear regression model specified in Chapter 7.
Compare and contrast the statistical GM of an AR(1) model (Table 8.3) with that of a Weibull hazard-based model (Table 8.8).
Explain the statistical parameterization of the AR(1) model (Table 8.3) and discuss any problems that arise when testing the following hypotheses: (a) H0: α1 = 1 vs. H1: α1 < 1; (b) H0: α1 = 1 vs. H1: α1 > 1.
(a) “When using weekly observations on speculative prices, using the average of daily prices, one could invoke the central limit theorem to justify using the Normal distribution for modeling such prices.” Explain the fallacy in this argument. (b) In a fair coin-tossing experiment a gambler is
“There is no point in using the Student’s t distribution in modeling speculative prices. One should adopt the Normal distribution at the outset because as the number of observations increases the Student’s t converges to the Normal.” Explain the fallacy in this argument.