Questions and Answers of Optimization Models
1. How would you efficiently draw an ellipsoid in R2, if the ellipsoid is described by a quadratic inequality of the form xTAx + 2bTx + c ≤ 0, where A is 2 x 2 and symmetric, positive-definite, b ∈ R2, and c ∈ R?
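A minimal sketch of one standard approach (not part of the original exercise; the function name and interface are hypothetical): locate the center, factor A = V diag(λ) Vᵀ, and map the unit circle through the inverse square root of A.

```python
import numpy as np

def ellipse_boundary(A, b, c, n_points=200):
    """Boundary points of {x : x^T A x + 2 b^T x + c <= 0} for 2x2 SPD A."""
    x0 = -np.linalg.solve(A, b)          # center of the ellipsoid
    r = b @ np.linalg.solve(A, b) - c    # boundary: (x - x0)^T A (x - x0) = r
    lam, V = np.linalg.eigh(A)           # A = V diag(lam) V^T, lam > 0
    theta = np.linspace(0, 2 * np.pi, n_points)
    circle = np.vstack([np.cos(theta), np.sin(theta)])
    # map the unit circle through sqrt(r) * V diag(1/sqrt(lam))
    return x0[:, None] + np.sqrt(r) * (V / np.sqrt(lam)) @ circle
```

The returned 2 x n_points array can be passed directly to a plotting routine.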
Consider the unconstrained optimization problem …, where … are given. The goal of this exercise is to determine the optimal value p* and the set of optimal solutions, in terms of c and the eigenvalues and
Let p, q ∈ Rn be two linearly independent vectors, with unit norm. Define the symmetric matrix …. In your derivations, it may be useful to use the notation …. 1. Show that p + q and p – q are eigenvectors
Linear dynamical systems are a common way to (approximately) model the behavior of physical phenomena, via recurrence equations of the form …, where t is the (discrete) time, x(t) ∈ Rn describes the
A matrix A ∈ Rn,n is said to be nonnegative (resp. positive) if aij ≥ 0 (resp. aij > 0) for all i, j = 1, . . . , n. The notation A ≥ 0 (resp. A > 0) is used to denote
Let A ∈ Rm,n be a matrix. Assume that u ∈ Rn is a vector-valued random variable, with zero mean and covariance matrix In. That is, E{u} = 0, and E{uuT} = In. 1. What is the covariance matrix of
Let A ∈ Rn,n and let p(λ) be the characteristic polynomial of A. 1. Assume A is diagonalizable. Prove that A annihilates its own characteristic polynomial, that is, p(A) = 0. 2. Prove that p(A) = 0 holds in
Consider the image in Figure 3.6, a gray-scale rendering of a painting by Mondrian (1872-1944). We build a 256 x 256 matrix A of pixels based on this image by ignoring grey zones, assigning +1 to
Let A, B ∈ Rm,n be two matrices. Show that the fact that the null space of B is contained in that of A implies that the range of BT contains that of AT.
We are given a graph as a set of vertices V = {1, . . . , n}, with an edge joining any pair of vertices in a set E. We assume that the graph is undirected (without arrows), meaning that (i, j) ∈ E
For each of the following cases, determine the shape of the region generated by the quadratic constraint xTAx ≤ 1. 1. A = [2 1; 1 2]. 2. A = [1 –1; –1 1]. 3. A = [–1 0; 0 –1].
Let A, B ∈ Sn be two symmetric matrices. Define the component-wise product of A, B, by a matrix C ∈ Sn with elements Cij = Aij Bij, 1 ≤ i, j ≤ n. Show that C is
Let A, B ∈ Sn be such that …. 1. Show that all eigenvalues of BA are real and positive (despite the fact that BA is not symmetric, in general). 2. …. 3. With all terms defined as in the previous
Let A ∈ Sn+ be a symmetric, positive semi-definite matrix. 1. Show that the trace, trace A, and the Frobenius norm, ||A||F, depend only on its eigenvalues, and express both in terms of
Let A ∈ Sn be positive semidefinite. Prove that …. Distinguish the cases det A = 0 and det A ≠ 0. In the latter case, consider the normalized matrix …, where D = diag(…), and use the
Let … be a symmetric, positive definite matrix. Show that …. You may assume known that the result holds true when n = 1. The above shows that the function p : Rn → R with (non-negative)
Assume a matrix A = [a1, . . . , am] has columns that are orthogonal to each other. Find an SVD for A, in terms of the ai’s. Be as explicit as you can.
Let A ∈ Rn,m, with n ≥ m, have singular values σ1, . . . , σm. 1. Show that the singular values of the (n + m) x m matrix Ã, obtained by stacking A on top of the m x m identity matrix, are √(1 + σi²), i = 1, . . . , m. 2. Find an SVD of the matrix Ã.
An exam with m questions is given to n students. The instructor collects all the grades in an n x m matrix G, with Gij the grade obtained by student i on question j. We would like to assign a
Let f : Rn → Rm be a linear map. Show how to compute the (unique) matrix A such that f(x) = Ax for every x ∈ Rn, in terms of the values of f at appropriate vectors, which you will determine.
Latent semantic indexing is an SVD-based technique that can be used to discover text documents similar to each other. Assume that we are given a set of m documents D1, . . . , Dm. Using a
Consider a least-squares problem min_x ||Ax – y||₂²,
Lotka’s law describes the frequency of publication by authors in a given field. It states that X^a Y = b, where X is the number of publications, Y the relative frequency of authors with X
The Michaelis-Menten model for enzyme kinetics relates the rate y of an enzymatic reaction to the concentration x of a substrate, as follows: y = β1 x / (β2 + x), where βi, i = 1, 2, are positive
Assume now that we would like to delete a measurement, and update the least-squares solution accordingly. Assume now we delete the last measurement, that is, replace (am, ym) by (0, 0). We assume that
Computing the standard inner product between two vectors a, b ∈ Rn requires n multiplications and additions. When the dimension n is huge (say, e.g., of the order of 10^12, or larger), even
We are given a set of points p1, . . . , pm ∈ Rn, which are collected in the n x m matrix P = [p1, . . . , pm]. We consider the problem min_X F(X) = Σ_{i=1}^m ||xi – pi||₂² + Σ_{(i,j)} ||xi – xj||₂²
Recall from Section 3.4.2 that a matrix is said to be sparse if most of its entries are zero. More formally, assume an m x n matrix A has a given sparsity coefficient
We consider a problem with linear equality constraints …, where A ∈ Rm,n, with A full row rank: rank A = m ≤ n, and where we assume that the objective function f0 is decomposable, that is, …, with each hi
Formulate the problem … for different functions fj, j = 1, . . . , 5, with values given in Table 9.2, as QPs or LPs, or, if you cannot, explain why. In our formulations, we always use x ∈ Rn as
Consider the LASSO problem …. Compare the following algorithms. Try to write your code in a way that minimizes computational requirements; you may find the result in … useful. 1. A coordinate-descent
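As a hedged illustration of the kind of first-order method the exercise compares, here is a minimal proximal-gradient (ISTA) sketch for the LASSO objective 0.5*||Ax – y||₂² + λ||x||₁; the function names are placeholders, and this is not the book's reference implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For large λ (at least ||Aᵀy||∞) the iterate stays at zero, as expected from the optimality conditions.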
Let X = [x1, . . . , xm] ∈ Rn,m, and p ∈ [1, +∞]. We consider the problem …. If the data is centered, that is, X1 = 0, the above amounts to finding a direction of largest “deviation” from the
Show that the indicator function IX of a convex set X is convex. Show that this function is closed whenever X is a closed set.
A matrix P ∈ Rn,n is a permutation matrix if its columns are a permutation of the columns of the n x n identity matrix. 1. For an n x n matrix A, we consider the products PA and AP. Describe in
1. Let ƒ : Rm → Rk and g : Rn → Rm be two maps. Let h : Rn→ Rk be the composite map h = ƒ ο g, with values h(x) = ƒ(g(x)) for x ∈ Rn. Show that the derivatives of h can be expressed
In this exercise, you derive a bound on the largest absolute value of the derivative of a polynomial of a given order, in terms of the size of the coefficients. 1. For ω ∈ Rk+1, we define the
Prove Hölder’s inequality. Consider the normalized vectors u = x/||x||p, v = y/||y||q, and observe that …. Then, apply Young’s inequality to the products |uk vk|, k = 1, . . . , n.
1. Show that the following inequalities hold for any vector x: …. 2. Show that for any non-zero vector x, …, where card(x) is the cardinality of the vector x, defined as the number of non-zero elements in x.
Let x, y ∈ Rn be two unit-norm vectors, that is, such that ΙΙxΙΙ2 = ΙΙyΙΙ2 = 1. Show that the vectors x – y and x + y are orthogonal. Use this to find an orthogonal basis for
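The orthogonality claim follows from one line of algebra, using bilinearity of the inner product and the unit-norm assumption:

```latex
\[
(x-y)^\top (x+y)
  = \|x\|_2^2 + x^\top y - y^\top x - \|y\|_2^2
  = \|x\|_2^2 - \|y\|_2^2
  = 1 - 1 = 0 .
\]
```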
Let x, y ∈ Rn. Under which condition on α ∈ Rn does the function f(x, y) = Σ_{k=1}^n αk xk yk define an inner product on Rn?
1. Find the projection z of the vector x = (2, 1) on the line that passes through x0 = (1, 2) and with direction given by the vector u = (1, 1). 2. Determine the angle between the following two
Consider the set in R3 defined by the equation …. 1. Show that the set P is an affine set of dimension 2. To this end, express it as x(0) + span(x(1), x(2)), where x(0) ∈ P, and x(1),
Consider the set S of points such that x1 + 2x2 + 3x3 = 0, 3x1 + 2x2 + x3 = 0.Show that S is a subspace. Determine its dimension, and find a basis for it.
We are given m data points and we seek a hyperplane that best “fits” the given points, in the sense of a minimum sum of squared distances criterion. Formally, we need to
A rigid transformation is a mapping from Rn to Rn that is the composition of a translation and a rotation. Mathematically, we can express a rigid transformation Φ using an orthogonal
Find the least-squares line and the total least-squares line for the data points (xi, yi), i = 1, . . . , 4, with x = (–1, 0, 1, 2), y = (0, 0, 1, 1). Plot both lines on the same set of axes.
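For these specific data, both lines can be computed numerically. The sketch below (assuming NumPy) uses one standard construction for the total least-squares line, via the smallest right singular vector of the centered data; this is an illustration, not necessarily the book's derivation.

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

# Ordinary least squares: minimize vertical distances, y ~ a*x + b.
a, b = np.polyfit(x, y, 1)

# Total least squares: minimize orthogonal distances. The line passes
# through the centroid, with normal given by the smallest right singular
# vector of the centered data matrix.
Z = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(Z)
n_tls = Vt[-1]          # unit normal of the TLS line
# TLS line: n_tls . ((x, y) - centroid) = 0
```

For this data set the least-squares line is y = 0.4 x + 0.3, while the total least-squares line is slightly steeper.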
Consider the linear equation Ax = y, where …. Answer the following questions to the best of your knowledge. 1. The time required to solve the general system depends on the sizes m, n and the
Let A = (aij) ∈ Rn,n, b ∈ Rn, with aii ≠ 0 for every i = 1, . . . , n. The Jacobi method for solving the square linear system Ax = b consists in decomposing A as a sum A = D + R, where D = diag(a11, . . . , ann) contains the diagonal of A.
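A minimal sketch of the Jacobi iteration just described (the function name is hypothetical; convergence is guaranteed, for example, when A is strictly diagonally dominant, which is a sufficient but not necessary condition):

```python
import numpy as np

def jacobi(A, b, n_iter=100):
    """Jacobi method for Ax = b: split A = D + R with D the diagonal part,
    then iterate x <- D^{-1} (b - R x)."""
    d = np.diag(A)            # diagonal entries a_ii (assumed nonzero)
    R = A - np.diag(d)        # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x = (b - R @ x) / d   # componentwise division by a_ii
    return x
```

Each sweep costs one matrix-vector product, which is why the method is attractive for large sparse systems.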
Consider linear iterations of the form x(t + 1) = F x(t) + c, where F ∈ Rn,n, c ∈ Rn, and the iterations are initialized with x(0) = x0. We assume that the iterations admit a stationary point, i.e., that there exists x*
In this exercise we introduce some “equivalent” formulations of a system of linear equations Ax = b, A ∈ Rm,n, and then study a linear recursive algorithm for the solution of this system. 1. Consider
1. For x, y both positive scalars, show that …. Use the above result to prove that the function f defined as … is convex. 2. Show that for r ≥ 1, the function fr : Rm+ → R, with values …, is concave.
Solve the following optimization problems. Make sure to determine an optimal primal solution. 1. Show that, for given scalars α, β, …. 2. Show that for an arbitrary vector z ∈ Rm, …. 3. Show that for an
Consider the optimization problems (no assumption of convexity here) …. 1. Prove that p*1 ≥ p*2 (i.e., enlarging the feasible set cannot worsen the optimal objective). 2. Prove that, if
Let X ∈ Rn,m be a matrix with non-negative entries, and p, r ∈ [1, +∞], with p ≥ r. We consider the problem …. 1. Show that the function fX : Rm+ → R, with values …, is
A two-dimensional skier must slalom down a slope, by going through n parallel gates of known position (xi, yi), and of width ci, i = 1, . . . , n. The initial position (x0, y0) is given, as well as
We are given a set of n = 3 types of food, each of which has the nutritional characteristics described in Table 9.4. Find the optimal composition (amount of servings per each food) of a breakfast
Consider the set defined by the following inequalities …. 1. Draw the set. Is it convex? 2. Show that it can be described as a single quadratic inequality of the form q(x) = xTAx + 2bTx + c ≤ 0, for
The line segment linking two points p, q ∈ Rn (with p ≠ q) is the set L = {λp + (1 – λ)q : 0 ≤ λ ≤ 1}. 1. Show that the minimum distance D* from a point a ∈ Rn to the line
For a given vector v ∈ Rn, the average can be found as the solution to the optimization problem min_x ||v – x1||₂², where 1 is the vector of ones in Rn. Similarly, it turns out that the median (any value x such that
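The two variational characterizations (mean minimizes the sum of squared deviations, median the sum of absolute deviations) can be checked numerically on a small example. The grid search below is purely illustrative, not an efficient method:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
xs = np.linspace(v.min(), v.max(), 100001)   # fine grid of candidate x

# mean minimizes the sum of squared deviations ||v - x*1||_2^2
sq_loss = ((v[:, None] - xs[None, :]) ** 2).sum(axis=0)
# median minimizes the sum of absolute deviations ||v - x*1||_1
abs_loss = np.abs(v[:, None] - xs[None, :]).sum(axis=0)

x_mean = xs[sq_loss.argmin()]
x_median = xs[abs_loss.argmin()]
```

Note how the outlier 10.0 pulls the squared-loss minimizer but barely moves the absolute-loss one.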
Recall from Exercise 3.11 that a positive matrix A > 0 has a dominant eigenvalue and corresponding left eigenvector ω > 0 and right eigenvector ν > 0 (i.e., ωTA = λωT, Aν = λν).
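A dominant eigenpair of a positive matrix can be approximated by power iteration; the sketch below is an illustration of the Perron-Frobenius setting the exercise recalls (the function name is hypothetical):

```python
import numpy as np

def power_iteration(A, n_iter=1000):
    """Power iteration: for A > 0 (entrywise), converges to the dominant
    eigenvalue and a positive right eigenvector (Perron-Frobenius)."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])   # positive starting vector
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)    # renormalize to avoid overflow
    return v @ A @ v, v              # Rayleigh quotient, eigenvector
```

Starting from a positive vector, every iterate stays positive, so the limit eigenvector is positive as the theorem predicts.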
Consider a linear least squares problem where the matrix involved is random. Precisely, the residual vector is of the form A(δ)x – b, where the m x n matrix A is affected by stochastic
When considering a second-order cone constraint, a temptation might be to square it in order to obtain a classical convex quadratic constraint. This might not always work. Consider the constraint …, and
We would like to minimize the function f : R3 → R, with values …, with the constraint ||x||∞ < 1. Explain precisely how to formulate the problem as an SOCP in standard form.
Consider Figure 10.12, in which a point starting at the origin must move to reach the point p = [4, 2.5]T, crossing three layers of fluids having different densities. In the first layer, the point can travel at a maximum
Consider k points x1, . . . , xk in R2. For a given positive number d, we define the k-ellipse with radius d as the set of points x ∈ R2 such that the sum of the distances from x to the
The returns on n = 4 assets are described by a Gaussian (Normal) random vector r ∈ Rn, having the following expected value r̂ and covariance matrix Σ: …. The last (fourth) asset corresponds to a
Consider the following problem, known as Boolean Least Squares: …. Here, the variable is x ∈ Rn, where A ∈ Rm,n and b ∈ Rm are given. This is a basic problem arising, for instance, in
The following problem is known as Robust Principal Component Analysis: …, where ||·||* stands for the nuclear norm, and ||·||1 here denotes the sum of the absolute values of the elements
In this exercise, we revisit Exercise 9.3, and approach it using the S-procedure of Section 11.3.3.1. 1. Show that the minimum distance from the line segment L to the origin is above a given number R
A version of the so-called (convex) trust-region problem amounts to finding the minimum of a convex quadratic function over a Euclidean ball, that is, …, where the radius of the ball is given.
Let Bi, i = 1, . . . ,m, be m given Euclidean balls in Rn, with centers xi, and radii ρi ≥ 0. We wish to find a ball B of minimum radius that contains all the Bi, i = 1, . . . ,m. Explain how
We consider a process described by the difference equation …, where u(t) ∈ R is the input, y(t) ∈ R the output, and the coefficient vector a(t) ∈ R3 is time-varying. We seek to compute bounds
A second-degree polynomial with values p(x) = y0 + y1x + y2x² is non-negative everywhere if and only if …, which in turn can be written as an LMI in y = (y0, y1, y2): …. In this exercise, you show a
For X ∈ Sn, and i ∈ {1, . . . , n}, we denote by λi(X) the i-th largest eigenvalue of X. For k ∈ {1, . . . , n}, we define the function fk : Sn → R with values …. This function is an
Consider a system of linear inequalities Ax ≤ b, with A ∈ Rm,n, where aiT , i = 1, . . . ,m, denote the rows of A, which are assumed, without loss of generality, to be nonzero. Each
Consider a constrained minimization problem …, where f0 is convex and smooth and the feasible set is convex and compact. Clearly, a projected gradient or proximal gradient algorithm could be applied to this
The bisection method applies to one-dimensional convex problems of the form …, where xl < xu are both finite, and f : R → R is convex. The algorithm is initialized with the upper and lower bounds on
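A sketch of one common bisection variant, assuming access to the derivative (or a subgradient sign) of f; this oracle assumption goes beyond the truncated exercise statement, which may instead rely on function-value comparisons:

```python
def bisection_min(fprime, xl, xu, tol=1e-8):
    """Bisection for minimizing a convex differentiable f over [xl, xu],
    using only the sign of f'(x): if f'(m) > 0 the minimizer lies to the
    left of the midpoint m, otherwise to the right."""
    while xu - xl > tol:
        m = 0.5 * (xl + xu)
        if fprime(m) > 0:
            xu = m        # f increasing at m: shrink from the right
        else:
            xl = m        # f non-increasing at m: shrink from the left
    return 0.5 * (xl + xu)
```

The interval halves at every step, so the method reaches tolerance tol in about log2((xu - xl)/tol) iterations.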
In Section 13.2, we examined the problem of fitting a polynomial of degree d through m data points (ui, yi) ∈ R2, i = 1, . . . ,m. Without loss of generality, we assume that the input satisfies
We consider the following problem in a symmetric n x n matrix variable X: …, where … is an empirical covariance matrix, and ||X||1 denotes the sum of the absolute values of the elements of the positive
Let xi, i = 1, . . . , n, be given real numbers, which we assume without loss of generality to be ordered as x1 ≤ x2 ≤ · · · ≤ xn, and consider the scalar equation, in a single unknown, that we
Assume you are given a data set in the form of a n x m term-by document matrix X corresponding to a large collection of news articles. Precisely, the (i, j) entry in X is the frequency of the word i
We are given a data matrix X = [x(1), . . . , x(m)], with x(i) ∈ Rn, i = 1, . . . ,m. We assume that the data is centered: x(1) + . . . + x(m) = 0. An (empirical) estimate of the covariance matrix
We have a historical data set containing the values of a time series r(1), . . . , r(T). Our goal is to predict if the time series is going up or down. The basic idea is to use a prediction based on
We consider the problem …, where λ ≥ 0 is a regularization parameter that allows one to control the cardinality (number of non-zeros) of the solution. This in turn allows better interpretability of the
Consider the problems …, where f is a convex function, || · || is an arbitrary vector norm, and λ > 0, μ > 0 are parameters. Assume that for every choice of these parameters, the
Prove that, for any matrix A ∈ Rm,n, it holds that N(ATA) = N(A) and R(ATA) = R(AT).
Return to the variant of PCA examined in Exercise 11.2. Using a (possibly synthetic) data set of your choice, compare the classical PCA and the variant examined here, especially in terms of its
We consider a single- period optimization problem involving n assets, and a decision vector x ∈ Rn which contains our position in each asset. Determine which of the following objectives or
1. For an n-vector x, with n = 2m – 1 odd, we define the median of x as the scalar value xa such that exactly m of the values in x are ≤ xa and m are ≥ xa (i.e., xa leaves half of the
Consider a continuous-time LTI system ẋ(t) = Ax(t), t ≥ 0, with no input (such a system is said to be autonomous), and output y(t) = Cx(t). We wish to evaluate the energy contained in the system's
We consider a single-period portfolio optimization problem with n assets. We use past samples, consisting of single-period return vectors, where each vector collects the returns of the assets from period t – 1
1. Consider the following portfolio optimization problem: …, where … is the expected return vector, … is the return covariance matrix, and p is a target level of expected portfolio return. Assume
1. Consider the following portfolio optimization problem …, where r̂ ∈ Rn is the expected return vector, … is the return covariance matrix, and μ is a target level of expected portfolio return. Assume
We consider a multi-stage, single-asset investment decision problem over n periods. For any given time period i = 1 ,..., n, we denote by yi the predicted return, σi the associated
Consider the following personal finance problem. You are to be paid for a consulting job, for a total of C = $30,000, over the next six months. You plan to use this payment to cover some past credit
We consider the following portfolio optimization problem: …, where C is the empirical covariance matrix, λ > 0 is a risk parameter, and r̂ is the time-average return for each asset for the given
Prove that the continuous-time LTI system (15.20) is asymptotically stable (or stable, for short) if and only if all the eigenvalues of the A matrix, λi(A), i = 1, . . . , n, have (strictly) negative
A continuous-time signal w(t) is a function mapping time t ∈ R to values w(t) in either Cm or Rm. The energy content of a signal w(t) is defined as …, where ||w||2 is the 2-norm of the
The gain of a system is the maximum energy amplification from the input signal to the output. Any input signal u(t) having finite energy is mapped by a stable system to an output signal y(t) which also
A matrix A ∈ Rn,n is said to be continuous-time extended superstable (which we denote by A ∈ Ec) if there exists d ∈ Rn such that …. Similarly, a matrix A ∈ Rn,n is said to be discrete-time