
Question:

Derive the formulas (7.14) by minimizing the cross-entropy training loss:

\[ -\frac{1}{n} \sum_{i=1}^{n} \ln g\left(\boldsymbol{x}_{i}, y_{i} \mid \boldsymbol{\theta}\right) \]

where \(g(\boldsymbol{x}, y \mid \boldsymbol{\theta})\) is such that:

\[ \ln g(\boldsymbol{x}, y \mid \boldsymbol{\theta})=\ln \alpha_{y}-\frac{1}{2} \ln \left|\boldsymbol{\Sigma}_{y}\right|-\frac{1}{2}\left(\boldsymbol{x}-\boldsymbol{\mu}_{y}\right)^{\top} \boldsymbol{\Sigma}_{y}^{-1}\left(\boldsymbol{x}-\boldsymbol{\mu}_{y}\right)-\frac{p}{2} \ln (2 \pi) \]


Step by Step Answer:
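A sketch of the derivation follows, assuming that formulas (7.14) are the standard sample-based estimators (class proportions, class means, and within-class covariances); the notation below is taken from the problem statement, with \(n_c = |\{i : y_i = c\}|\) denoting the number of training points with label \(c\).

Grouping the sum by class label, the cross-entropy training loss decomposes as

\[ -\frac{1}{n} \sum_{i=1}^{n} \ln g\left(\boldsymbol{x}_{i}, y_{i} \mid \boldsymbol{\theta}\right) = \frac{p}{2} \ln (2 \pi) - \frac{1}{n} \sum_{c}\left[n_{c} \ln \alpha_{c}-\frac{n_{c}}{2} \ln \left|\boldsymbol{\Sigma}_{c}\right|-\frac{1}{2} \sum_{i: y_{i}=c}\left(\boldsymbol{x}_{i}-\boldsymbol{\mu}_{c}\right)^{\top} \boldsymbol{\Sigma}_{c}^{-1}\left(\boldsymbol{x}_{i}-\boldsymbol{\mu}_{c}\right)\right] \]

so each parameter can be optimized class by class.

Step 1 (mixing weights). Minimize \(-\frac{1}{n} \sum_{c} n_{c} \ln \alpha_{c}\) subject to \(\sum_{c} \alpha_{c}=1\). The Lagrangian stationarity condition gives \(n_{c} / \alpha_{c}=\lambda\) for every \(c\); summing over \(c\) and using the constraint yields \(\lambda=n\), hence

\[ \widehat{\alpha}_{c}=\frac{n_{c}}{n}. \]

Step 2 (means). For fixed \(\boldsymbol{\Sigma}_{c}\), setting the gradient with respect to \(\boldsymbol{\mu}_{c}\) to zero gives \(\boldsymbol{\Sigma}_{c}^{-1} \sum_{i: y_{i}=c}\left(\boldsymbol{x}_{i}-\boldsymbol{\mu}_{c}\right)=\boldsymbol{0}\), hence

\[ \widehat{\boldsymbol{\mu}}_{c}=\frac{1}{n_{c}} \sum_{i: y_{i}=c} \boldsymbol{x}_{i}. \]

Step 3 (covariances). Using the matrix-calculus identities \(\partial \ln |\boldsymbol{\Sigma}| / \partial \boldsymbol{\Sigma}=\boldsymbol{\Sigma}^{-1}\) and \(\partial\left(\boldsymbol{d}^{\top} \boldsymbol{\Sigma}^{-1} \boldsymbol{d}\right) / \partial \boldsymbol{\Sigma}=-\boldsymbol{\Sigma}^{-1} \boldsymbol{d} \boldsymbol{d}^{\top} \boldsymbol{\Sigma}^{-1}\), setting the derivative with respect to \(\boldsymbol{\Sigma}_{c}\) to zero yields

\[ \widehat{\boldsymbol{\Sigma}}_{c}=\frac{1}{n_{c}} \sum_{i: y_{i}=c}\left(\boldsymbol{x}_{i}-\widehat{\boldsymbol{\mu}}_{c}\right)\left(\boldsymbol{x}_{i}-\widehat{\boldsymbol{\mu}}_{c}\right)^{\top}. \]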

Related Book:

Data Science and Machine Learning: Mathematical and Statistical Methods, 1st Edition, by Dirk P. Kroese, Thomas Taimre, Radislav Vaisman, and Zdravko Botev. ISBN: 9781118710852.
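The closed-form estimators can be sanity-checked numerically: a minimal sketch (my own synthetic two-class data, not from the book) evaluates the cross-entropy loss at the sample-proportion, sample-mean, and sample-covariance estimates and confirms that perturbing the means never decreases the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (p = 2); sizes and parameters are arbitrary
n0, n1, p = 60, 40, 2
X = np.vstack([rng.normal(0.0, 1.0, (n0, p)),
               rng.normal(3.0, 1.5, (n1, p))])
y = np.array([0] * n0 + [1] * n1)
n = n0 + n1

def loss(alpha, mu, Sigma):
    """Cross-entropy loss: -(1/n) * sum_i ln g(x_i, y_i | theta)."""
    total = 0.0
    for xi, yi in zip(X, y):
        d = xi - mu[yi]
        _, logdet = np.linalg.slogdet(Sigma[yi])
        total += (np.log(alpha[yi]) - 0.5 * logdet
                  - 0.5 * d @ np.linalg.solve(Sigma[yi], d)
                  - 0.5 * p * np.log(2 * np.pi))
    return -total / n

# Closed-form minimizers: class proportions, class means,
# and within-class (MLE, i.e. biased) covariances
alpha_hat = np.array([np.mean(y == c) for c in (0, 1)])
mu_hat = [X[y == c].mean(axis=0) for c in (0, 1)]
Sigma_hat = [np.cov(X[y == c].T, bias=True) for c in (0, 1)]

base = loss(alpha_hat, mu_hat, Sigma_hat)

# Random perturbations of the means should never decrease the loss
for _ in range(200):
    mu_pert = [m + 0.1 * rng.normal(size=p) for m in mu_hat]
    assert loss(alpha_hat, mu_pert, Sigma_hat) >= base
```

For fixed covariances the loss is a convex quadratic in each \(\mu_c\), so the sample mean is the unique minimizer; the assertion in the loop exercises exactly that property.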
