Question: Suppose $p_\theta(x)$ and $p_{\text{data}}(x)$ both place probability mass in only a very small part of the domain; that is, consider the limit $\to 0$. What happens to $\mathrm{KL}(p_\theta(x)\,\|\,p_{\text{data}}(x))$ and its derivative with respect to $\theta$, assuming that $\theta \neq \theta_0$? Why is this problematic for a GAN trained with the loss function $L_G$ defined in problem 3c?
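The concrete densities below are not given in the question, so treat them as purely illustrative assumptions: take $p_\theta = \mathcal{N}(\theta, \sigma^2)$ and $p_{\text{data}} = \mathcal{N}(\theta_0, \sigma^2)$, where $\sigma$ stands in for the unnamed width parameter going to zero, pair them with the optimal discriminator $D^*(x) = p_{\text{data}}(x)/(p_{\text{data}}(x) + p_\theta(x))$, and use the saturating generator objective $L_G(\theta) = \mathbb{E}_{x \sim p_\theta}[\log(1 - D^*(x))]$ as a stand-in for the $L_G$ of problem 3c, which is not reproduced here. A minimal NumPy sketch under those assumptions, showing what happens numerically as the two distributions stop overlapping:

```python
import numpy as np

rng = np.random.default_rng(0)
THETA0 = 0.0                        # location of p_data (hypothetical choice)
THETA = 2.0                         # location of p_theta, with theta != theta_0
Z = rng.standard_normal(200_000)    # common noise, so x = theta + sigma * Z ~ p_theta

def log_normal_pdf(x, mu, sigma):
    """Log-density of N(mu, sigma^2) evaluated elementwise at x."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def generator_loss(theta, sigma):
    """Monte-Carlo estimate of L_G(theta) = E_{x ~ p_theta}[log(1 - D*(x))],
    where D*(x) = p_data(x) / (p_data(x) + p_theta(x)) is the optimal discriminator."""
    x = theta + sigma * Z
    log_pm = log_normal_pdf(x, theta, sigma)    # log p_theta(x)
    log_pd = log_normal_pdf(x, THETA0, sigma)   # log p_data(x)
    # log(1 - D*(x)) = log p_theta(x) - log(p_theta(x) + p_data(x)), done stably in log-space
    return np.mean(log_pm - np.logaddexp(log_pm, log_pd))

def kl_divergence(theta, sigma):
    """Closed-form KL(p_theta || p_data) for two equal-variance Gaussians."""
    return (theta - THETA0) ** 2 / (2.0 * sigma ** 2)

for sigma in [1.0, 0.5, 0.25, 0.1]:
    eps = 1e-3 * sigma              # step for a central finite-difference gradient
    grad = (generator_loss(THETA + eps, sigma) - generator_loss(THETA - eps, sigma)) / (2 * eps)
    print(f"sigma={sigma:5.2f}  KL={kl_divergence(THETA, sigma):7.1f}  "
          f"L_G={generator_loss(THETA, sigma): .3e}  dL_G/dtheta={grad: .3e}")
```

The printed table illustrates the issue the question is driving at: as the width shrinks, the closed-form KL blows up, while the discriminator separates the two distributions almost perfectly, so $L_G$ saturates and its finite-difference gradient with respect to $\theta$ collapses toward zero, leaving the generator with essentially no training signal.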
