Question:

Problem 1. Solve the following problems on linear algebra and probability theory. Here, we assume a vector is a column vector instead of a row vector. Let $\cdot^T$ denote the transpose operation; i.e., let $x^T \in \mathbb{R}^{1 \times d}$ be the transpose of $x \in \mathbb{R}^d = \mathbb{R}^{d \times 1}$. In problems P1-A and P1-B, let

$$f(A, b, C, D, e, x) = (Ax - b)^T C (Dx - e)$$

be a function of $x \in \mathbb{R}^d$, $A \in \mathbb{R}^{m \times d}$, $b \in \mathbb{R}^m$, $C \in \mathbb{R}^{m \times n}$, $D \in \mathbb{R}^{n \times d}$, and $e \in \mathbb{R}^n$.

P1-A. Derive the following derivatives: $\partial f / \partial x$, $\partial f / \partial A$, $\partial f / \partial b$, $\partial f / \partial D$, and $\partial f / \partial e$, respectively.

P1-B. Here $f$ is regarded as a function of $x$ alone, and we denote the optimal value of $x$ that minimizes it by $\hat{x} = \operatorname{argmin}_{x \in \mathbb{R}^d} f(x)$. Derive the analytical solution for $\hat{x}$. Here we assume $A^T C D$ is a positive definite matrix.

P1-C. Let $A \in \mathbb{R}^{d \times d}$ be a square symmetric matrix, and let $\lambda_i \in \mathbb{R}$, $i = 1, \dots, d$, be the eigenvalues of $A$. Rewrite $\operatorname{trace} A$ and $|A|$ in terms of $\lambda_1, \dots, \lambda_d$, respectively. You should include the derivation process leading to the solution. Here $\operatorname{trace} B$ is the operator that sums the diagonal elements of a square matrix $B \in \mathbb{R}^{d \times d}$, i.e., $\operatorname{trace} B = \sum_{i=1}^{d} b_{ii}$, where the $i$-th row, $i$-th column element of $B$ is denoted $b_{ii}$.

(Option) P1-D. If $A \in \mathbb{R}^{d \times d}$ is a symmetric matrix, prove that $A^{-1}$ is also a symmetric matrix.

(Option) P1-E. Let the following two distributions $p_1$ and $p_2$ be defined as $p_i(x) = \mathcal{N}(x \mid \mu_i, \Sigma)$, $i = 1, 2$, where $x \in \mathbb{R}^d$ represents a vectorial random variable, and the parameters of the Gaussian distribution $\mathcal{N}$ are $\mu_1, \mu_2 \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$. Compute the Kullback-Leibler divergence between the distributions $p_1$ and $p_2$, $\mathrm{KL}(p_1 \,\|\, p_2)$. The Kullback-Leibler divergence is defined as

$$\mathrm{KL}(p_1 \,\|\, p_2) = \int p_1(x) \ln \frac{p_1(x)}{p_2(x)} \, dx.$$
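
For P1-A, a minimal sketch of one standard derivation, assuming the denominator-layout convention so that each derivative has the same shape as the variable it is taken with respect to. Expanding $f = x^T A^T C D x - x^T A^T C e - b^T C D x + b^T C e$ and applying the identities $\partial (a^T x)/\partial x = a$ and $\partial (u^T M v)/\partial M = u v^T$ (for $u, v$ not depending on $M$) gives:

$$
\begin{aligned}
\frac{\partial f}{\partial x} &= A^T C (Dx - e) + D^T C^T (Ax - b), \\
\frac{\partial f}{\partial A} &= C (Dx - e)\, x^T, \\
\frac{\partial f}{\partial b} &= -C (Dx - e), \\
\frac{\partial f}{\partial D} &= C^T (Ax - b)\, x^T, \\
\frac{\partial f}{\partial e} &= -C^T (Ax - b).
\end{aligned}
$$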

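For P1-B, a sketch building on the gradient above: set $\partial f / \partial x = 0$ and solve for $x$. Since $D^T C^T A = (A^T C D)^T$, the coefficient matrix is $M + M^T$ with $M = A^T C D$; positive definiteness of $M$ makes $M + M^T$ (the Hessian of $f$) positive definite, hence invertible, so the stationary point is the unique minimizer:

$$
A^T C (D\hat{x} - e) + D^T C^T (A\hat{x} - b) = 0
\quad\Longrightarrow\quad
\hat{x} = \left( A^T C D + D^T C^T A \right)^{-1} \left( A^T C e + D^T C^T b \right).
$$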

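For P1-C, a sketch via the spectral theorem: a real symmetric $A$ admits an eigendecomposition $A = Q \Lambda Q^T$ with $Q$ orthogonal ($Q^T Q = Q Q^T = I$) and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_d)$. The cyclic property of the trace and the multiplicativity of the determinant then give:

$$
\operatorname{trace} A = \operatorname{trace}(Q \Lambda Q^T) = \operatorname{trace}(\Lambda Q^T Q) = \operatorname{trace} \Lambda = \sum_{i=1}^{d} \lambda_i,
$$

$$
|A| = |Q| \, |\Lambda| \, |Q^T| = |\Lambda| \, |Q Q^T| = |\Lambda| \, |I| = \prod_{i=1}^{d} \lambda_i.
$$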

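For P1-D, a sketch; this assumes $A$ is invertible, which the problem statement leaves implicit. Transpose both sides of $A A^{-1} = I$ and use the symmetry $A^T = A$:

$$
I = I^T = (A A^{-1})^T = (A^{-1})^T A^T = (A^{-1})^T A,
$$

and right-multiplying by $A^{-1}$ gives $(A^{-1})^T = A^{-1}$, i.e., $A^{-1}$ is symmetric.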

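For P1-E, a sketch, assuming $\Sigma$ is positive definite so that $\Sigma^{-1}$ exists. Because both Gaussians share the covariance $\Sigma$, their normalizing constants cancel in the log-ratio, leaving only a difference of quadratic forms. Using $\mathbb{E}_{p_1}[(x - \mu_1)^T \Sigma^{-1} (x - \mu_1)] = d$ and $\mathbb{E}_{p_1}[(x - \mu_2)^T \Sigma^{-1} (x - \mu_2)] = d + (\mu_1 - \mu_2)^T \Sigma^{-1} (\mu_1 - \mu_2)$:

$$
\mathrm{KL}(p_1 \,\|\, p_2)
= \mathbb{E}_{p_1}\!\left[ \tfrac{1}{2} (x - \mu_2)^T \Sigma^{-1} (x - \mu_2) - \tfrac{1}{2} (x - \mu_1)^T \Sigma^{-1} (x - \mu_1) \right]
= \tfrac{1}{2} (\mu_1 - \mu_2)^T \Sigma^{-1} (\mu_1 - \mu_2).
$$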
