Applied Linear Algebra, 1st Edition, Peter J. Olver and Cheri Shakiban: Solutions
(a) Show that if y1^2 + y2^2 + y3^2 + y4^2 = 1, then the matrix is a proper orthogonal matrix. The numbers y1, y2, y3, y4 are known as Cayley-Klein parameters. (b) Write down a formula for Q^-1. (c) Prove the formulas relating the Cayley-Klein parameters and the Euler angles of Exercise 5.3.4.
Show that if Q is a proper orthogonal matrix, and R is obtained from Q by interchanging two rows, then R is an improper orthogonal matrix.
(a) Prove that the transpose of an orthogonal matrix is also orthogonal. (b) Explain why the rows of an n × n orthogonal matrix also form an orthonormal basis of Rn.
Write the following polynomials as linear combinations of monic Legendre polynomials. Use orthogonality to compute the coefficients: (a) t^3 (b) t^4 + t^2 (c) 7t^4 + 2t^3 - t
(a) Find the roots, Pn(t) = 0, of the Legendre polynomials P2, P3, and P4. (b) Prove that for 0 < j < k, the polynomial Rj,k(t) defined in (5.49) has roots of order k - j at t = ±1, and j additional simple roots lying between -1 and 1. (c) Conclude that all k roots of the Legendre polynomial Pk(t) lie in the interval (-1, 1).
Construct polynomials P0, P1, P2, and P3 of degree 0, 1, 2, and 3, respectively, which are orthogonal with respect to the inner products (a) (b) (c) (d)
Find the first four orthogonal polynomials on the interval [0, 1] for the weighted L2 inner product with weight w(t) = t2.
Write down an orthogonal basis for the vector space P(5) of quintic polynomials under the inner product
Use the Gram-Schmidt process based on the L2 inner product on [0, 1] to construct a system of orthogonal polynomials of degree ≤ 4. Verify that your polynomials are multiples of the modified Legendre polynomials found in Example 5.34.
Find the first four orthogonal polynomials under the Sobolev H1 inner product; cf. Exercise 3.1.25.
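The Gram-Schmidt constructions in these exercises can be checked mechanically. A minimal sketch (my own illustration, not the book's solution), using exact rational arithmetic and the standard L2 inner product on [0, 1], so that integration reduces to ∫ t^(i+j) dt = 1/(i+j+1):

```python
from fractions import Fraction

def inner(p, q):
    # L2 inner product on [0, 1] of polynomials given as coefficient
    # lists (index = power), using  integral of t^(i+j) = 1/(i+j+1).
    return sum(a * b * Fraction(1, i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt_polys(n):
    # Monic orthogonal polynomials of degree 0..n via Gram-Schmidt
    # applied to the monomials 1, t, t^2, ...
    basis = []
    for k in range(n + 1):
        p = [Fraction(0)] * k + [Fraction(1)]     # the monomial t^k
        for q in basis:
            c = inner(p, q) / inner(q, q)
            p = [a - c * b for a, b in
                 zip(p, q + [Fraction(0)] * (len(p) - len(q)))]
        basis.append(p)
    return basis

polys = gram_schmidt_polys(4)
# q1(t) = t - 1/2 and q2(t) = t^2 - t + 1/6, the monic shifted
# (modified) Legendre polynomials on [0, 1].
print(polys[1], polys[2])
```

The weight-w(t) variants of the neighboring exercises only change the `1/(i+j+1)` moment formula.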
Find the monic Laguerre polynomials of degrees 4 and 5 and their norms.
Prove the integration formula (5.54).
The Hermite polynomials are orthogonal with respect to the inner product. Find the first five monic Hermite polynomials.
(a) Find the monic Legendre polynomial of degree 5 using the Gram-Schmidt process. Check your answer by using the Rodrigues formula. (b) Use orthogonality to write t^5 as a linear combination of Legendre polynomials. (c) Repeat the exercise for degree 6.
The Chebyshev polynomials: (a) Prove that Tn(t) = cos(n arccos t), n = 0, 1, 2, ..., form a system of orthogonal polynomials under the weighted inner product. (b) What is || Tn ||? (c) Write out the formulas for T0(t), ..., T6(t) and plot their graphs.
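A quick numerical check of the identity in part (a). The three-term recurrence used below is an assumption of this sketch (the exercise defines Tn only by the cosine formula), but the two constructions agree on [-1, 1]:

```python
import math

def chebyshev_T(n, t):
    # Three-term recurrence T_{n+1}(t) = 2 t T_n(t) - T_{n-1}(t),
    # with T_0 = 1 and T_1 = t.
    a, b = 1.0, t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * t * b - a
    return b

# Compare against the trigonometric definition T_n(t) = cos(n arccos t):
for n in range(7):
    for t in [-0.9, -0.3, 0.0, 0.5, 1.0]:
        assert abs(chebyshev_T(n, t) - math.cos(n * math.acos(t))) < 1e-12
```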
Does the Gram-Schmidt process for the inner product (5.62) lead to the Chebyshev polynomials Tn(t) defined in the preceding exercise? Explain why or why not.
Find an orthogonal basis for the space of solutions to the differential equation y''' - y'' + y' - y = 0 for the L2 inner product on [-π, π].
Explain how to adapt the numerically stable Gram-Schmidt method in (5.28) to construct a system of orthogonal polynomials. Test your algorithm on one of the preceding exercises.
In this exercise, we investigate the effect of more general changes of variables on orthogonal polynomials. (a) Prove that t = 2s^2 - 1 defines a one-to-one map from the interval 0 ≤ s ≤ 1 to the interval -1 ≤ t ≤ 1. (b) Let pk(t) denote the monic Legendre polynomials, which are orthogonal
(a) Show that the change of variables s = e^(-t) maps the Laguerre inner product (5.53) to the standard L2 inner product on [0, 1]. However, explain why this does not allow you to change Legendre polynomials into Laguerre polynomials. (b) Describe the functions resulting from applying the change of
(a) Explain why qn is the unique monic polynomial that satisfies (5.43). (b) Use this characterization to directly construct q5(t).
Prove that the even (odd) degree Legendre polynomials are even (odd) functions of t.
Write out an explicit Rodrigues-type formula for the monic Legendre polynomial qk(t) and its norm.
Write out an explicit Rodrigues-type formula for an orthonormal basis Q0(t), ..., Qn(t) for the space of polynomials of degree ≤ n under the inner product (5.40).
Use the Rodrigues formula to prove (5.47): Pk(1) = 1.
A proof of the formula in (5.48) for the norm of the Legendre polynomial is based on the following steps. (a) First, prove that by a repeated integration by parts. (b) Second, prove that by using the change of variables t = cos θ in the integral. The resulting trigonometric integral can
Determine which of the vectors is orthogonal to (a) the line spanned by (b) the plane spanned by (c) the plane defined by x - y - z = 0 (d) the kernel of the matrix (e) the range of the matrix (f) the cokernel of the matrix
Find the least squares solutions to the following linear systems. (a) (b) (c)
Find the closest point to b = (1, 2, -1, 3)^T in the subspace W = span {(1, 0, 2, 1)^T, (1, 1, 0, 1)^T, (2, 0, 1, -1)^T} by first constructing an orthogonal basis of W and then applying the orthogonal projection formula (5.64).
Repeat Exercise 5.5.13 using the weighted norm ||v||^2 = v1^2 + 2v2^2 + v3^2 + 3v4^2. Exercise 5.5.13: Find the closest point to b = (1, 2, -1, 3)^T in the subspace W = span {(1, 0, 2, 1)^T, (1, 1, 0, 1)^T, (2, 0, 1, -1)^T} by first constructing an orthogonal basis of W and then applying the orthogonal projection formula (5.64).
Use the orthogonal sample vectors (5.71) to find the best polynomial least squares fits of degree 1, 2, and 3 for the following sets of data: (a) (b) (c)
(a) Verify the orthogonality of the sample polynomial vectors in (5.71). (b) Construct the next orthogonal sample polynomial q4(t) and the norm of its sample vector. (c) Use your result to compute the quartic least squares approximation for the data in Example 5.42.
Use the result of Exercise 5.5.16 to find the best approximating polynomial of degree 4 to the data in Exercise 5.5.15. (a) (b) (c)
The formulas (5.71) only apply when the sample times are symmetric around 0. When the sample points t1, ..., tn are equally spaced, so ti+1 - ti = h for all i = 1, ..., n - 1, then there is a simple trick to convert the least squares problem into a symmetric form. (a) Show that the translated
Find the orthogonal projection of the vector v = (1, 1, 1)^T onto the following subspaces, using the indicated orthonormal/orthogonal bases: (a) the line in the direction (b) the line spanned by (2, -1, 3)^T (c) the plane spanned by (1, 1, 0)^T, (-2, 2, 1)^T (d) the plane spanned by
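The projection computations in this run of exercises can be verified in exact arithmetic. A sketch (my own, not the text's solution) that orthogonalizes a spanning set and projects, using the data of Exercise 5.5.13 with the ordinary dot product:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Orthogonalize (without normalizing), keeping exact rationals.
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(v, u) / dot(u, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

def project(b, vectors):
    # Orthogonal projection of b onto span(vectors): sum of the
    # component of b along each orthogonalized basis vector.
    p = [F(0)] * len(b)
    for u in gram_schmidt(vectors):
        c = dot(b, u) / dot(u, u)
        p = [pi + c * ui for pi, ui in zip(p, u)]
    return p

# Data of Exercise 5.5.13 (dot-product case):
b = [F(1), F(2), F(-1), F(3)]
W = [[F(1), F(0), F(2), F(1)],
     [F(1), F(1), F(0), F(1)],
     [F(2), F(0), F(1), F(-1)]]
p = project(b, W)
r = [bi - pi for bi, pi in zip(b, p)]
assert all(dot(r, w) == 0 for w in W)   # residual is orthogonal to W
```

For the weighted-norm variant of Exercise 5.5.14, only `dot` would change.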
Construct the first three orthogonal basis elements for sample points t1,... , tm that are in general position.
John knows that the least squares solution to Ax = b can be identified with the closest point on the subspace rng A spanned by the columns of the coefficient matrix. Therefore, he tries to find the solution by first orthonormalizing the columns using Gram-Schmidt, and then finding the least squares
Let A be an m × n matrix with ker A = {0}. Suppose that we use the Gram-Schmidt algorithm to factor A = QR as in Exercise 5.3.33. Prove that the least squares solution to the linear system Ax = b is found by solving the triangular system Rx = Q^T b by Back Substitution.
Apply the method in Exercise 5.5.22 to find the least squares solutions to the systems in Exercise 4.3.14. (a) x + 2y = 1, 3x - y = 0, -x + 2y = 3; (b) 4x - 2y = 1, 2x + 3y = -4, x - 2y = -1, 2x + 2y = 2; (c) 2u + v - 2w = 1, 3u - 2w = 0, u - v + 3w = 2; (d) x - z = -1, 2x - y + 3z = 1, y - 3z
Which is the more efficient algorithm: direct least squares based on solving the normal equations by Gaussian Elimination, or using Gram-Schmidt orthonormalization and then solving the resulting triangular system by Back Substitution as in Exercise 5.5.22? Justify your answer.
(a) Find a formula for the least squares error (4.30) in terms of an orthonormal basis of the subspace. (b) Generalize your formula to the case of an orthogonal basis.
Let w1, ..., wn be any basis of the subspace W ⊂ R^m. Let A = (w1, ..., wn) be the m × n matrix whose columns are the basis vectors, so that W = rng A and rank A = n. Let P = A(A^T A)^(-1) A^T be the corresponding projection matrix, as defined in Exercise 2.5.8. (a) Prove that the orthogonal projection of
Use the projection matrix method of Exercise 5.5.26 to find the orthogonal projection of v = (1, 0, 0, 0)^T onto the range of the following matrices: (a) (b) (c) (d)
Repeat Exercise 5.5.28 using the L2 norm on [0, 1]. Exercise 5.5.28: (a) Quadratic, and (b) cubic approximation to t^4, based on the L2 norm on [-1, 1].
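The QR route of Exercise 5.5.22 can be sketched directly. This is my own minimal float implementation (classical Gram-Schmidt on the columns, no pivoting or reorthogonalization), applied to system (a) of Exercise 4.3.14:

```python
import math

def qr_least_squares(A, b):
    # Least squares via A = Q R (Gram-Schmidt on the columns of A),
    # then Back Substitution on R x = Q^T b.
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = [float(vi) for vi in v]
        for k, q in enumerate(Q):
            R[k][j] = sum(qi * vi for qi, vi in zip(q, v))
            w = [wi - R[k][j] * qi for wi, qi in zip(w, q)]
        R[j][j] = math.sqrt(sum(wi * wi for wi in w))
        Q.append([wi / R[j][j] for wi in w])
    y = [sum(Q[j][i] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in reversed(range(n)):
        x[j] = (y[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

sol = qr_least_squares([[1, 2], [3, -1], [-1, 2]], [1, 0, 3])
# sol ≈ [1/15, 41/45], agreeing with the normal equations (A^T A) x = A^T b.
```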
Find the orthogonal projection of the vector onto the range of
Find the best cubic approximation to f(t) = e^t based on the L2 norm on [0, 1].
Find the (a) linear, (b) quadratic, (c) cubic polynomials q(t) that minimize the following integral. What is the minimum value in each case?
Find the best quadratic and cubic approximations for sin t for the L2 norm on [0, π] by using an orthogonal basis. Graph your results and estimate the maximal error.
Answer Exercise 5.5.30 when f(t) = sin t. Use a computer to numerically evaluate the integrals. Exercise 5.5.30: Find the best cubic approximation to f(t) = e^t based on the L2 norm on [0, 1].
Find the degree 6 least squares polynomial approximation to e^t on the interval [-1, 1] under the L2 norm.
(a) Use the polynomials and weighted norm from Exercise 5.4.12 to find the quadratic least squares approximation to f(t) = 1/t. In what sense is your quadratic approximation "best"? (b) Now find the best approximating cubic polynomial. (c) Compare the graphs of the quadratic and cubic approximants
Use the Laguerre polynomials (5.55) to find the quadratic and cubic polynomial least squares approximations to f(t) = tan^(-1) t relative to the weighted inner product (5.53). Use a computer to evaluate the coefficients. Graph your result and discuss what you observe.
Find the orthogonal projection of the vector v = (1, 3, -1)^T onto the plane spanned by (-1, 2, 1)^T, (2, 1, -3)^T by first using the Gram-Schmidt process to construct an orthogonal basis.
Find the orthogonal projection of v = (1, 2, -1, 2)^T onto the following subspaces: (a) the span of (b) the range of the matrix (c) the kernel of the matrix (d) the subspace orthogonal to a = (1, -1, 0, 1)^T. Warning: Make sure you have an orthogonal basis before applying formula (5.64)!
Redo Exercise 5.5.2 using (i) the weighted inner product (v, w) = 2v1 w1 + 2v2 w2 + v3 w3, (ii) the inner product induced by the positive definite matrix
Let u1, ..., uk be an orthonormal basis for the subspace W ⊂ R^m. Let A = (u1 u2 ... uk) be the m × k matrix whose columns are the orthonormal basis vectors, and define P = A A^T to be the corresponding projection matrix. (a) Given v ∈ R^m, prove that its orthogonal
(a) Prove that the set of all vectors orthogonal to a given subspace V ⊂ R^m forms a subspace. (b) Find a basis for the set of all vectors in R^4 that are orthogonal to the subspace spanned by (1, 2, 0, -1)^T, (2, 0, 3, 1)^T.
Find the orthogonal complement W⊥ to the subspaces W ⊂ R3 spanned by the indicated vectors. What is the dimension of W⊥ in each case?
Let V be an inner product space. Prove that (a) V⊥ = {0}, (b) {0}⊥ = V.
Show that if W1 ⊂ W2 are finite dimensional subspaces of an inner product space, then W1⊥ ⊃ W2⊥.
(a) Show that if W, Z ⊂ Rn are complementary subspaces, then W⊥ and Z⊥ are also complementary subspaces. (b) Sketch a picture illustrating this result when W and Z are lines in R2.
Fill in the details of the proof of Proposition 5.53. If W is a finite-dimensional subspace of an inner product space, then (W⊥)⊥ = W. This result is a direct corollary of the orthogonal decomposition derived in Proposition 5.49.
Prove Lemma 5.44: If w1, ..., wk span W and z1, ..., zl span Z, then W and Z are orthogonal subspaces if and only if (wi, zj) = 0 for all i = 1, ..., k and j = 1, ..., l.
Let W ⊂ V with dim V = n. Suppose w1, ..., wm is an orthogonal basis for W and wm+1, ..., wn is an orthogonal basis for W⊥. (a) Prove that the combination w1, ..., wn forms an orthogonal basis of V. (b) Show that if v = c1 w1 + ... + cn wn is any vector in V, then its orthogonal decomposition v = w
Consider the subspace W = {u(a) = 0 = u(b)} of the vector space C0[a, b] with the usual L2 inner product. (a) Show that W has a complementary subspace of dimension 2. (b) Prove that there does not exist an orthogonal complement to W. Thus, an infinite-dimensional subspace may not admit an orthogonal complement.
For each of the following matrices A, (i) find a basis for each of the four fundamental subspaces; (ii) verify that the range and cokernel are orthogonal complements; (iii) verify that the corange and kernel are orthogonal complements: (a) (b) (c) (d) (e) (f) (g)
For each of the following matrices, use Gaussian Elimination on the augmented matrix (A | b) to determine a basis for its cokernel: (a) (b) (c) (d)
Let (a) Find a basis for corng A. (b) Use Proposition 5.58 to find a basis of rng A. (c) Write each column of A as a linear combination of the basis vectors you found in part (b).
Find a basis for the orthogonal complement to the following subspaces of R3: (a) the plane 3x + 4y - 5z = 0 (b) the line in the direction (-2, 1, 3)^T (c) the range of the matrix (d) the cokernel of the same matrix
Write down the compatibility conditions on the following systems of linear equations by first computing a basis for the cokernel of the coefficient matrix. (a) 2x + y = a, x + 4y = b, -3x + 2y = c; (b) x + 2y + 3z = a, -x + 5y - 2z = b, 2x - 3y + 5z = c; (c) x1 + 2x2 + 3x3 = b1, x2 + 2x3 = b2,
For each of the following m × n matrices, decompose the first standard basis vector e1 = w + z ∈ R^n, where w ∈ corng A and z ∈ ker A. Verify your answer by expressing w as a linear combination of the rows of A. (a) (b) (c) (d)
For each of the following linear systems, (i) verify compatibility using the Fredholm alternative, (ii) find the general solution, and (iii) find the solution of minimum Euclidean norm. (a) 2x - 4y = -6, -x + 2y = 3; (b) 2x + 3y = -1, 3x + 7y = 1, -3x + 2y = 8; (c) 6x - 3y + 9z = 12, 2x - y + 3z = 4; (d) x
Let (a) Find an orthogonal basis for corng A. (b) Find an orthogonal basis for ker A. (c) If you combine your bases from parts (a) and (b), do you get an orthogonal basis of R4? Why or why not?
Suppose v1, ..., vn span a subspace V ⊂ R^m. Prove that w is orthogonal to V if and only if w ∈ coker A, where A = (v1 v2 ... vn) is the matrix with the indicated columns.
Prove that if v1, ..., vr are a basis of corng A, then their images A v1, ..., A vr are a basis for rng A.
Prove that if K is a positive semi-definite matrix, and f ∉ rng K, then the quadratic function p(x) = x^T K x - 2 x^T f + c has no minimum value.
Find a basis for the orthogonal complement to the following subspaces of R4: (a) the set of solutions to -x - 3y - 2z + w = 0 (b) the subspace spanned by (1, 2, -1, 3)^T, (-2, 0, 1, -2)^T, (-1, 2, 0, 1)^T (c) the kernel of the matrix in Exercise 5.6.2c (d) the corange of the same matrix.
Is Theorem 5.54 true as stated for complex matrices? If not, can you formulate a similar theorem that is true? What is the Fredholm alternative for complex matrices?
Decompose each of the following vectors with respect to the indicated subspace as v = w + z, where w ∈ W and z ∈ W⊥.
Redo Exercise 5.6.1 using the weighted inner product (v, w) = v1w1 + 2v2w2 + 3v3w3 instead of the dot product.
Redo Example 5.52 using the weighted inner product (v, w) = v1w1 + 2v2w2 + 3v3w3 + 4v4 w4 instead of the dot product.
Let V = P(4) denote the space of quartic polynomials, with the L2 inner product. Let W = P(2) be the subspace of quadratic polynomials. (a) Write down the conditions that a polynomial p ∈ P(4) must satisfy in order to belong to the orthogonal complement W⊥. (b) Find a basis for and
Prove that the orthogonal complement W⊥ to a sub space W ⊂ V is itself a subspace.
Let W ⊂ V. Prove that (a) W ∩ W⊥ = {0}, (b) W ⊆ (W⊥)⊥.
Find (i) the discrete Fourier coefficients, and (ii) the low frequency trigonometric interpolant, for the following functions using the indicated number of sample points: (a) sin x, n = 4; (b) |x - π|, n = 6; (c) (d) sign(x - π), n = 8
Construct the discrete Fourier coefficients for the function, based on n = 128 sample points. Then graph the reconstructed function when using the data compression algorithm that retains only the 11 and 21 lowest frequency modes. Discuss what you observe.
Answer Exercise 5.7.10 when f(x) = (a) x (b) x^2 (2π - x)^2 (c)
Let f(x) = x(2π - x) be sampled on n = 128 equally spaced points between 0 and 2π . Use a random number generator with - 1 < rj < 1 to add noise by replacing each sample value fj = f(xj) by gj = fj + εrj. Investigate, for different values of ε, how many discrete Fourier modes are required to
The signal in Figure 5.15 was obtained from the explicit formula. Noise was added by using a random number generator. Experiment with different intensities of noise and different numbers of sample points and discuss what you observe.
If we use the original form (5.88) of the discrete Fourier representation, we might be tempted to de- noise/compress the signal by only retaining the first 0 ≤ k ≤ l terms in the sum. Test this method on the signal in Exercise 5.7.10 and discuss what you observe.
True or false: If f(x) is real, the compressed/denoised signal (5.107) is a real trigonometric polynomial.
Use the Fast Fourier Transform to find the discrete Fourier coefficients for the following functions using the indicated number of sample points. Carefully indicate each step in your analysis. (a) x/π, n = 4; (b) sin x, n = 8; (c) |x - π|, n = 8; (d) sign(x - π), n = 16
Use the Inverse Fast Fourier Transform to reassemble the sampled function data corresponding to the following discrete Fourier coefficients. Carefully indicate each step in your analysis. (a) c0 = c2 = 1, c1 = c3 = -1; (b) c0 = c1 = c4 = 2, c2 = c6 = 0, c3 = c5 = c7 = -1
In this exercise, we show how the Fast Fourier Transform is equivalent to a certain matrix factorization. Let c = (c0, c1, ..., c7)^T be the vector of Fourier coefficients, and let f(k) = (f0(k), f1(k), ..., f7(k))^T, k = 0, 1, 2, 3, be vectors containing the coefficients defined in the reconstruction algorithm
Find (i) the sample values, and (ii) the trigonometric interpolant corresponding to the following discrete Fourier coefficients: (a) c-1 = c1 = 1, c0 = 0; (b) c-2 = c0 = c2 = 1, c-1 = c1 = -1; (c) c-2 = c0 = c1 = 2, c-1 = c2 = 0; (d) c0 = c2 = c4 = 1, c1 = c3 = c5 = -1
Let f(x) = x. Compute its discrete Fourier coefficients based on n = 4, 8 and 16 sample points. Then, plot f(x) along with the resulting (real) trigonometric interpolants and discuss their accuracy.
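The coefficient computation behind these exercises can be sketched in a few lines. This is one common convention, c_k = (1/n) Σ_j f(x_j) e^(-i k x_j) on x_j = 2πj/n; the sign and scaling should be checked against the text's (5.88) before comparing answers:

```python
import cmath
import math

def dft_coefficients(f, n):
    # Discrete Fourier coefficients  c_k = (1/n) * sum_j f(x_j) e^{-i k x_j}
    # on the equally spaced sample points x_j = 2*pi*j/n, j = 0, ..., n-1.
    xs = [2 * math.pi * j / n for j in range(n)]
    return [sum(f(x) * cmath.exp(-1j * k * x) for x in xs) / n
            for k in range(n)]

c = dft_coefficients(lambda x: x, 4)
# c_0 is the mean of the samples 0, pi/2, pi, 3pi/2, namely 3*pi/4.
```

The interpolant is recovered by the inverse sum f(x_j) = Σ_k c_k e^(i k x_j), which reproduces the samples exactly.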
Answer Exercise 5.7.3 for the functions (a) x^2 (b) (x - π)^2 (c) sin x (d) cos(x/2) (e) (f)
(a) Draw a picture of the complex plane with the complex solutions to z^6 = 1 marked. (b) What is the exact formula (no trigonometric functions allowed) for the primitive sixth root of unity ζ6? (c) Verify explicitly that 1 + ζ6 + ζ6^2 + ζ6^3 + ζ6^4 + ζ6^5 = 0. (d) Give a geometrical explanation of
(a) Explain in detail why the nth roots of 1 lie on the vertices of a regular n-gon. What is the angle between two consecutive sides? (b) Explain why this is also true for the nth roots of any non-zero complex number z ≠ 0. (c) Sketch a picture of the hexagon corresponding to 6√z for a given z
In general, an nth root of unity ζ is called primitive if all the nth roots of unity are obtained by raising it to successive powers: 1, ζ, ζ^2, ζ^3, .... (a) Find all primitive (i) fourth, (ii) fifth, (iii) ninth roots of unity. (b) Can you characterize all the primitive nth roots of unity?
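The roots-of-unity identities in the last few exercises are easy to spot-check numerically (my own sketch, not a proof):

```python
import cmath
import math

def nth_roots_of_unity(n):
    # The n solutions of z^n = 1, equally spaced on the unit circle.
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

roots = nth_roots_of_unity(6)
zeta6 = roots[1]                       # primitive sixth root, e^{i pi/3}
assert all(abs(z ** 6 - 1) < 1e-12 for z in roots)
assert abs(sum(roots)) < 1e-12         # 1 + z + z^2 + ... + z^5 = 0
# Exact formula for part (b): zeta6 = 1/2 + i*sqrt(3)/2.
assert abs(zeta6 - complex(0.5, math.sqrt(3) / 2)) < 1e-12
```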