Question:

J = \sum_{(a,i) \in D} \frac{\left( Y_{ai} - [UV^T]_{ai} \right)^2}{2} + \frac{\lambda}{2} \left( \sum_{a,k} U_{ak}^2 + \sum_{i,k} V_{ik}^2 \right)
In order to break a big optimization problem into smaller pieces that we know how to
solve, we fix U and find the best V for that U. But a subtle and important point is
that even if V* is the best V for that U, the U* that is best for V* might not be the
original U! It's like how we might be some lonely person's best friend, even though
they are not our best friend. In light of this, we repeat the procedure: we fix U and solve
for V, then fix V to be the result from the previous step and solve for U, and we keep
alternating until the iterates converge. This is an example of iterative
optimization, where we greedily take steps in a good direction, but as we do so the
context shifts, so "the good direction" evolves. Gradient descent, which we've already
seen, has the same structure.
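To make the alternating scheme concrete, here is a minimal sketch in Python. The names (`als`, `Y`, `mask`, `lam`) are illustrative and not from the original problem: `Y` holds the ratings, `mask` marks the observed entries in D, and `lam` plays the role of λ. Each pass fixes one factor and solves a small regularized least-squares problem for every row of the other factor.

```python
import numpy as np

def als(Y, mask, k, lam, n_iters=50, seed=0):
    """Alternating minimization sketch for
    J = sum_{(a,i) in D} (Y_ai - [U V^T]_ai)^2 / 2
        + (lam/2) * (||U||_F^2 + ||V||_F^2).
    Y    : (n_users, n_items) ratings, arbitrary where unobserved
    mask : (n_users, n_items) boolean, True on observed entries D
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = rng.normal(size=(n, k))
    V = rng.normal(size=(m, k))
    for _ in range(n_iters):
        # Fix V; for each user a, solve the regularized least squares over observed items.
        for a in range(n):
            obs = mask[a]                      # items user a has rated
            Vo = V[obs]                        # (n_obs, k)
            A = Vo.T @ Vo + lam * np.eye(k)
            b = Vo.T @ Y[a, obs]
            U[a] = np.linalg.solve(A, b)
        # Fix the U just computed; for each item i, solve over the users who rated it.
        for i in range(m):
            obs = mask[:, i]
            Uo = U[obs]
            A = Uo.T @ Uo + lam * np.eye(k)
            b = Uo.T @ Y[obs, i]
            V[i] = np.linalg.solve(A, b)
    return U, V
```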
Consider the case k = 1. The matrices U and V reduce to vectors u and v, such
that u_a = U_{a1} and v_i = V_{i1}.
When v is fixed, finding the u that minimizes J becomes equivalent to finding the u
that minimizes which of the following?
\frac{(Y_{ai} - u_a v_i)^2}{2} + \frac{\lambda}{2} \sum_a (u_a)^2

\sum_{(a,i) \in D} \frac{(Y_{ai} - u_a v_i)^2}{2} + \frac{\lambda}{2} \sum_a (u_a)^2

\sum_{(a,i) \in D} \frac{(Y_{ai} - u_a v_i)^2}{2}

\sum_{(a,i) \in D} \frac{(Y_{ai} - u_a v_i)^2}{2} + \frac{\lambda}{2} \sum_i (v_i)^2
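For intuition about the k = 1 case, here is a small numerical sketch (again with illustrative names) of what "fix v, solve for u" looks like: any term that does not involve u acts as a constant during this step, and each u_a then has a scalar closed-form update.

```python
import numpy as np

def update_u_k1(Y, mask, v, lam):
    """k = 1 step with v held fixed: for each user a, set the derivative of
    sum over observed i of (Y_ai - u_a v_i)^2 / 2 plus (lam/2) u_a^2 to zero,
    which gives a scalar closed form for u_a.
    """
    n, m = Y.shape
    u = np.zeros(n)
    for a in range(n):
        obs = mask[a]                           # items user a has rated
        num = np.dot(v[obs], Y[a, obs])         # sum_i v_i * Y_ai over observed i
        den = np.dot(v[obs], v[obs]) + lam      # sum_i v_i^2 + lam
        u[a] = num / den
    return u

# Toy usage with made-up data: two users, three items, some entries unobserved.
Y = np.array([[5., 0., 3.],
              [0., 4., 1.]])
mask = np.array([[True, False, True],
                 [False, True, True]])
v = np.array([1., 1., 1.])
print(update_u_k1(Y, mask, v, lam=1.0))   # one scalar u_a per user
```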