Foundations of Mathematical Economics 1st edition Michael Carter - Solutions
Let f: X → Y be a linear homeomorphism (remark 2.12). Then there exist constants m and M such that for all x1, x2 ∊ X, m||x1 - x2|| ≤ ||f(x1) - f(x2)|| ≤ M||x1 - x2||.
Let X and Y be Banach spaces. Any linear function f: X → Y is continuous if and only if its graph, graph(f) = {(x, y) : y = f(x), x ∊ X}, is a closed subset of X × Y.
Show that f(x) = 2x + 3 violates both the additivity and the homogeneity requirements of linearity.
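One quick way to see both failures (a sketch; the test points are ours): with f(x) = 2x + 3,
\[
f(x_1 + x_2) = 2(x_1 + x_2) + 3 \neq (2x_1 + 3) + (2x_2 + 3) = f(x_1) + f(x_2),
\qquad
f(\alpha x) = 2\alpha x + 3 \neq \alpha(2x + 3) = \alpha f(x) \text{ whenever } \alpha \neq 1.
\]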
A function f: X → Y is affine if and only if f(x) = g(x) + y where g: X → Y is linear and y ∊ Y
Show that the function f: ℜ3 → ℜ2 defined by f(x1, x2, x3) = (x1 + x2, 0) is a linear function. Describe this mapping geometrically.
Show that an affine function maps affine sets to affine sets, and vice versa. That is, if S is an affine subset of X, then f (S) is an affine subset of Y; if T is an affine subset of Y, then f-1 (T) is an affine subset of X.
An affine function preserves convexity; that is, S ⊆ X convex implies that f (S) is convex.
If the production plan y ∊ Y maximizes profits at prices p > 0, then y is efficient (example 1.61).
Assume that the sample space S is finite. Then the expectation functional E takes the form E(x) = Σs∊S ps x(s), with ps ≥ 0 and Σs∊S ps = 1, where ps = P({s}) is the probability of state s.
Let X = C[0, 1] be the space of all continuous functions x(t) on the interval [0, 1]. Show that the functional defined by f(x) = x(1/2) is a linear functional on C[0, 1].
Let {S1, S2, ..., Sn} be a collection of subsets of a linear space X with S = S1 + S2 + ... + Sn. Let f be a linear functional on X. Then x* = x*1 + x*2 + ... + x*n maximizes f over S if and only if x*i maximizes f over Si for every i. That is, f(x*) ≥ f(x) for every x ∊ S ⇔ f(x*i) ≥ f(xi) for every xi ∊ Si, i = 1, 2, ..., n.
Let c0 denote the subspace of l∞ consisting of all infinite sequences converging to zero, that is, c0 = {(xt) ∊ l∞ : xt → 0}. Show that 1. l1 ⊂ c0 ⊂ l∞ 2. l1 is the dual of c0 3. l∞ is the dual of l1. The next two results will be used in subsequent applications. It implies the
Let X be a linear space and
Let f, g1, g2, ..., gm be linear functionals on a linear space X. f is linearly dependent on g1, g2, ..., gm, that is, f ∊ lin{g1, g2, ..., gm}, if and only if
H is a hyperplane in a linear space X if and only if there exists a nonzero linear functional f ∊ X' such that H = {x ∊ X : f(x) = c} for some c ∊ ℜ. We use Hf (c) to denote the specific hyperplane corresponding to the c-level contour of the linear functional f.
Describe the action of the mapping f: ℜ2 → ℜ2 defined by
Let H be a hyperplane in a linear space that is not a subspace. Then there is a unique linear functional f ∊ X′ such that H = {x ∊ X : f(x) = 1}. On the other hand, where H is a subspace, we have the following primitive form of the Hahn-Banach theorem (section 3.9.1).
Let H be a maximal proper subspace of a linear space X and x0 ∉ H. (H is a hyperplane containing 0.) There exists a unique linear functional f ∊ X′ such that H = {x ∊ X : f(x) = 0} and f(x0) = 1.
For any f, g ∊ X′, kernel f = kernel g ⇔ f =
Let f be a nonzero linear functional on a normed linear space X. The hyperplane H = {x ∊ X : f(x) = c} is closed if and only if f is continuous.
Show that the function defined in the previous example is bilinear. There is an intimate relationship between bilinear functionals and matrices, paralleling the relationship between linear functions and matrices (theorem 3.1). The previous example shows that every matrix defines a bilinear functional.
Let f: X × Y → ℜ be a bilinear functional on finite-dimensional linear spaces X and Y. Let m = dim X and n = dim Y. For every choice of bases for X and Y, there exists an m × n matrix of numbers A = (aij) that represents f in the sense that f(x, y) = xT A y for every x ∊ X and y ∊ Y.
Show that the function f defined in the preceding example is bilinear.
Let BiL(X × Y, Z) denote the set of all continuous bilinear functions from X × Y to Z. Show that BiL(X × Y, Z) is a linear space. The following result may seem rather esoteric but is really a straightforward application of earlier definitions and results. It will be used in the next chapter.
Let X, Y, Z be linear spaces. The set BL(Y, Z) of all bounded linear functions from Y to Z is a linear space (exercise 3.33). Let BL(X, BL(Y, Z)) denote the set of bounded linear functions from X to the set BL(Y, Z). Show that 1. BL(X, BL(Y, Z)) is a linear space. 2. Let
Every symmetric, nonnegative definite bilinear functional f satisfies the inequality (f(x, y))2 ≤ f(x, x)f(y, y) for every x, y ∊ X. A symmetric, positive definite bilinear functional on a linear space X is called an inner product. It is customary to use a special notation to denote the inner product.
Show that the Shapley value ϕ defined by (1) is linear.
For every x, y in an inner product space, |xT y| ≤ ||x|| ||y||
The inner product is a continuous bilinear functional.
The functional ||x|| = √xTx is a norm on X.
Every element y in an inner product space X defines a continuous linear functional on X by fy(x) = xT y.
A nonempty compact convex set in an inner product space has at least one extreme point.
In an inner product space, ||x + y||² + ||x - y||² = 2||x||² + 2||y||².
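One way to derive this parallelogram law, sketched using only bilinearity and symmetry of the inner product:
\[
\|x + y\|^2 + \|x - y\|^2 = (x^T x + 2x^T y + y^T y) + (x^T x - 2x^T y + y^T y) = 2\|x\|^2 + 2\|y\|^2.
\]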
Show that C(X) (exercise 2.85) is not an inner product space. Two vectors x and y in an inner product space X are orthogonal if xT y = 0. We symbolize this by x ⊥ y. The orthogonal complement S⊥ of a subset S ⊆ X is the set of all vectors that are orthogonal to every element of S.
Any pairwise orthogonal set of nonzero vectors is linearly independent.
Let the matrix A = (aij) represent a linear operator f with respect to an orthonormal basis x1, x2, ..., xn for an inner product space X. Then aij = xiT f(xj) for every i, j. A link between the inner product and the familiar geometry of ℜ3 is established in the following exercise, which shows that the
For any two nonzero elements x and y in an inner product space X, define the angle θ between x and y by cos θ = xT y / (||x|| ||y||) for 0 ≤ θ ≤ π. Show that 1. −1 ≤ cos θ ≤ 1 2. x ⊥ y if and only if θ = 90 degrees. The angle between two
If x ⊥ y, then ||x + y||² = ||x||² + ||y||². The next result provides the crucial step in establishing the separating hyperplane theorem (section 3.9).
Let S be a nonempty, closed, convex set in a Euclidean space X and y a point outside S (figure 3.4). Show that 1. There exists a point x0 ∊ S which is closest to y, that is, ||x0 - y|| ≤ ||x - y|| for every x ∊ S 2. x0 is unique 3. (x0 - y)T(x - x0) ≥ 0 for every x ∊ S. Finite dimensionality
Generalize the preceding exercise to any Hilbert space. Specifically, let S be a nonempty, closed, convex set in a Hilbert space X and y ∉ S. Let d = inf{||x - y|| : x ∊ S}. Then there exists a sequence (xn) in S such that ||xn - y|| → d. Show that 1. (xn) is a Cauchy sequence. 2. There exists a unique point x0 ∊ S
Let S be a closed convex subset of a Euclidean space X and T be another set containing S. There exists a continuous function g: T → S that retracts T onto S, that is, for which g(x) = x for every x ∊ S. Earlier (exercise 3.64) we showed that every element in an inner product space defines a continuous linear functional.
Let f ∊ X* be a continuous linear functional on a Hilbert space X. There exists a unique element y ∊ X such that f(x) = xT y for every x ∊ X
If X is a Hilbert space, then so is X*.
Every Hilbert space is reflexive.
Let f ∊ L(X, Y) be a linear function between Hilbert spaces X and Y. For every y ∊ Y, define fy(x) = f(x)T y. 1. There exists a unique x* ∊ X such that fy(x) = xT x*. 2. Define f*: Y → X by f*(y) = x*. Then f* satisfies f(x)T y = xT f*(y). 3. f* is a linear function, known as the adjoint of f.
Verify that the Shapley value is a feasible allocation, that is, Σi∊N ϕiw = w(N). This condition is sometimes called Pareto optimality in the literature of game theory.
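A minimal illustration of this feasibility condition (a hypothetical two-player game; we use the standard Shapley formula, which the book states as (1), possibly in different notation):
\[
\phi_i w = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\bigl(w(S \cup \{i\}) - w(S)\bigr).
\]
For N = {1, 2} with w(∅) = 0, w({1}) = a, w({2}) = b, w({1, 2}) = c,
\[
\phi_1 w = \tfrac{1}{2}a + \tfrac{1}{2}(c - b), \qquad \phi_2 w = \tfrac{1}{2}b + \tfrac{1}{2}(c - a), \qquad \phi_1 w + \phi_2 w = c = w(N).
\]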
Let A, B, and C be matrices that differ only in their ith row, with the ith row of C being a linear combination of the ith rows of A and B; that is, ci = αai + βbi, where ai, bi, and ci denote the ith rows of A, B, and C. Then det(C) = α det(A) + β det(B).
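A 2 × 2 sketch of the row-linearity claim (our own illustration), with the second row playing the role of the ith row:
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\quad
B = \begin{pmatrix} a & b \\ e & f \end{pmatrix},\quad
C = \begin{pmatrix} a & b \\ \alpha c + \beta e & \alpha d + \beta f \end{pmatrix}
\;\Longrightarrow\;
\det C = a(\alpha d + \beta f) - b(\alpha c + \beta e) = \alpha \det A + \beta \det B.
\]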
Show that the eigenvectors corresponding to a particular eigenvalue, together with the zero vector 0X, form a subspace of X.
A linear operator is singular if and only if it has a zero eigenvalue.
Let f be a linear operator on a Euclidean space, and let the matrix A = (aij) represent f with respect to an orthonormal basis. Then f is a symmetric operator if and only if A is a symmetric matrix, that is, A = AT.
For a symmetric operator, the eigenvectors corresponding to distinct eigenvalues are orthogonal.
Let f be a symmetric operator on a Euclidean space X. Let S be the unit sphere in X, that is S = {x ∊ X: ||x|| = 1}, and define g: X × X → ℜ by g(x, y) = (
Let S be defined as in the preceding proof. Show that 1. S is a subspace of dimension n - 1 2. f(S) ⊆ S
The determinant of a symmetric operator is equal to the product of its eigenvalues.
Two players i and j are substitutes in a game (N, w) if their contributions to all coalitions are identical, that is, if w(S ∪ {i}) = w(S ∪ {j}) for every S ⊆ N \ {i, j}. Verify that the Shapley value treats substitutes symmetrically, that is, i, j substitutes ⇒ ϕiw = ϕjw.
Let the matrix A = (aij) represent a linear operator f with respect to the orthonormal basis x1, x2, ..., xn. Then the sum Q(x) = Σi Σj aij xi xj defines a quadratic form on X, where x1, x2, ..., xn are the coordinates of x relative to the basis.
For any quadratic form Q(x) = xTAx, there exists a basis x1, x2, ..., xn and numbers λ1, λ2, ..., λn such that Q(x) = λ1x1² + λ2x2² + ... + λnxn².
1. Show that the quadratic form (11) can be rewritten as ..., assuming that a11 ≠ 0. This procedure is known as "completing the square." 2. Deduce (12). 3. Deduce (13). This is an example of the principal axis theorem (exercise 3.91).
Show that Q(0) = 0 for every quadratic form Q. Since every quadratic form passes through the origin (exercise 3.93), a positive definite quadratic form has a unique minimum (at 0). Similarly a negative definite quadratic form has a unique maximum at 0. This hints at their practical importance in
A positive (negative) definite matrix is nonsingular
A positive definite matrix A = (aij) has a positive diagonal, that is, A positive definite ⇒ aii > 0 for every i. One of the important uses of eigenvalues is to characterize definite matrices, as shown in the following exercise.
A symmetric matrix is
A nonnegative definite matrix A is positive definite if and only if it is nonsingular.
Verify these assertions directly.
Show that the function f(x) = 10x - x² represents the total revenue function for a monopolist facing the market demand curve x = 10 - p, where x is the quantity demanded and p is the market price. In this context, how should we interpret g(x) = 9 + 4x?
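The arithmetic behind the claim, plus one reading of g (we take g to be the affine, i.e. tangent, approximation of f at x = 3; the surrounding context is not shown here):
\[
R(x) = p\,x = (10 - x)x = 10x - x^2 = f(x), \qquad
f(3) = 21,\quad f'(3) = 10 - 2(3) = 4,\quad f(3) + f'(3)(x - 3) = 9 + 4x = g(x).
\]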
Let f: X → Y be differentiable at x0 with derivative Df[x0], and let x be a vector of unit norm (||x|| = 1). Show that the directional derivative of f at x0 in the direction x is the value of the linear function Df[x0] at x, that is,
Show that the ith partial derivative of the function f: ℜn → ℜ at some point x0 corresponds to the directional derivative of f at x0 in the direction ei, where ei = (0, 0, . . . , 1, . . . , 0) is the ith unit vector. That is,
Calculate the directional derivative of the function at the point (8, 8) in the direction (1, 1).
Show that the gradient of a differentiable functional on ℜn comprises the vector of its partial derivatives, that is,
Show that the derivative of a functional on ℜn can be expressed as the inner product
If a differentiable functional f is increasing, then ∇f(x) ≥ 0 for every x ∈ X; that is, every partial derivative Dxi f[x] is nonnegative.
Show that the gradient of a differentiable function f points in the direction of greatest increase.
f: X → ℜm, X ⊂ ℜn, is differentiable at x0 if and only if each component fj is differentiable at x0. The matrix representing the derivative, the Jacobian, comprises the partial derivatives of the components of f.
A point x0 is a regular point of a C1 operator if and only if det Jf(x0) ≠ 0.
Suppose that nominal GDP rose 10 percent in your country last year, while prices rose 5 percent. What was the growth rate of real GDP?
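As a worked calculation (the usual approximation, with the exact figure depending on compounding):
\[
\text{real GDP growth} \approx 10\% - 5\% = 5\%, \qquad \text{exactly } \frac{1.10}{1.05} - 1 \approx 4.8\%.
\]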
A point x0 is a critical point of a C1 functional if and only if ∇f(x0) = 0.
Every continuous bilinear function f: X × Y → Z is differentiable with Df[x0, y0] = f(x0, ∙) + f(∙, y0), that is, Df[x0, y0](x, y) = f(x0, y) + f(x, y0).
Let f: X → ℜ be differentiable at x and g: Y → ℜ be differentiable at y. Then their product f g: X × Y → ℜ is differentiable at (x, y) with derivative Dfg[x, y] = f(x)Dg[y] + g(y)Df[x].
The power function (example 2.2) f(x) = x^n, n = 1, 2, . . ., is differentiable with derivative Df[x] = f′[x] = nx^(n-1).
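A sketch of the derivation via the binomial expansion, with all higher-order terms absorbed into the o(h) remainder:
\[
(x + h)^n = x^n + n x^{n-1} h + o(h) \quad\Longrightarrow\quad Df[x](h) = n x^{n-1} h, \text{ i.e. } f'(x) = n x^{n-1}.
\]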
Assume that the inverse demand function for some good is given by p = f(x) where x is the quantity sold. Total revenue is given by R(x) = f(x)x Find the marginal revenue at x0.
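Applying the product rule from the result above gives the standard answer (a sketch):
\[
R'(x_0) = f(x_0) + x_0 f'(x_0),
\]
that is, marginal revenue equals the price plus the quantity effect x0 f′(x0) of the price change on existing sales.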
Suppose that f: X → Y is differentiable at x and its derivative is nonsingular. Suppose further that f has an inverse f-1: Y → X that is continuous (i.e., f is a homeomorphism). Then f-1 is differentiable at x with
When the roles are reversed in the general power function, we have the general exponential function defined as f(x) = a^x where a ∈ ℜ+. Show that the general exponential function is differentiable with derivative Dxf(x) = a^x log a.
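One derivation, assuming the derivative of e^x and the chain rule are already available:
\[
a^x = e^{x \log a} \quad\Longrightarrow\quad D_x a^x = e^{x \log a} \log a = a^x \log a.
\]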
Let f: X → ℜ be differentiable at x where f(x) ≠ 0; then 1/f is differentiable with derivative
Show that the definition (2) can be equivalently expressed as
Let f: X → ℜ be differentiable at x and g: Y → ℜ be differentiable at y with g(y) ≠ 0. Then their quotient f/g: X × Y → ℜ is differentiable at (x, y) with derivative
Calculate the value of the partial derivatives of the function f(k, l) = k^(2/3) l^(1/3) at the point (8, 8).
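A worked evaluation (ours) against which the answer can be checked:
\[
f_k(k, l) = \tfrac{2}{3} k^{-1/3} l^{1/3}, \qquad f_l(k, l) = \tfrac{1}{3} k^{2/3} l^{-2/3}, \qquad
f_k(8, 8) = \tfrac{2}{3}, \quad f_l(8, 8) = \tfrac{1}{3}.
\]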
Show that the gradient of the Cobb-Douglas function can be expressed as
Compute the partial derivatives of the CES function (exercise 2.35)
Suppose that f ∈ C[a, b] is differentiable on the open interval (a, b). Then there exists some x ∈ (a, b) such that f(b) - f(a) = f′[x](b - a)
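A concrete instance of the mean value theorem (our own example), showing how such an x is found:
\[
f(x) = x^2 \text{ on } [0, 2]: \quad f(2) - f(0) = 4 = f'(x)(2 - 0) = 4x \;\Longrightarrow\; x = 1 \in (0, 2).
\]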
A differentiable functional f on a convex set S ⊆ ℜn is increasing if and only if ∇f(x) ≥ 0 for every x ∈ S, that is, if every partial derivative Dxi f[x] is nonnegative.
A differentiable functional f on a convex set S ⊆ ℜn is strictly increasing if ∇f(x) > 0 for every x ∈ S, that is, if every partial derivative Dxi f[x] is positive.
Let f be a functional on an open subset S of ℜn. Then f is continuously differentiable (C1) if and only if each of the partial derivatives Dif[x] exists and is continuous on S. We now present some extensions and alternative forms of the mean value theorem which are useful in certain
A differentiable function f on a convex set S is constant if and only if Df[x] = 0 for every x ∈ S.
Let fn: S → Y be a sequence of C1 functions on an open set S, and define f(x) = limn→∞ fn(x). Suppose that the sequence of derivatives Dfn converges uniformly to a function g: S → BL(X, Y). Then f is differentiable with derivative Df = g.
The derivative of a function is unique.
Prove that ex+y = exey for every x, y ∈ ℜ.
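One standard route (a sketch), using the result above that a differentiable function with zero derivative on a convex set is constant: fix y and set g(x) = e^(x+y) e^(-x). Then
\[
g'(x) = e^{x+y} e^{-x} - e^{x+y} e^{-x} = 0 \;\Longrightarrow\; g(x) = g(0) = e^{y} \;\Longrightarrow\; e^{x+y} = e^{x} e^{y}.
\]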
The elasticity of a function f is defined to be E(x) = x f′(x)/f(x). In general, the elasticity varies with x. Show that the elasticity of a function is constant if and only if it is a power function, that is, f(x) = Ax^a.
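For the 'if' direction, a one-line check (using the standard elasticity definition E(x) = x f′(x)/f(x)):
\[
f(x) = A x^{a} \;\Longrightarrow\; E(x) = \frac{x \cdot a A x^{a-1}}{A x^{a}} = a \quad \text{(constant)}.
\]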
Let f: S → Y be a differentiable function on an open convex set S ⊆ X. For every x0, x1, x2 ∊ S,
Let f: X → Y be C1. For every x0 ∈ X and
Let f: X → Y be C1. For every x0 ∈ X and Discuss.