Question: Suppose I am running negative binomial regression (NBR) in SPSS 29 using two separate, independent datasets. I run NBR on the dataset designated as the training dataset to generate the model, then run NBR on the dataset designated as the testing dataset for external validation, and additionally use one of the datasets for K-fold cross-validation.
1. For external validation, if the performance metrics (AIC, deviance, log-likelihood) for the training dataset show only a small difference from those of the testing dataset, what conclusions can be drawn?
2. For external validation, if the training and testing datasets show a small difference for one metric but a large difference for another, what conclusions can be drawn, and how can this be corrected?
3. Why do we conduct external validation?
4. For K-fold cross-validation, if the performance metrics (AIC, deviance, log-likelihood) are averaged after running NBR on each fold, what conclusions can be drawn from the averages? What should the averages be compared against to judge whether the difference is small or large, and in which SPSS output can I find the metric used for that comparison?
5. For K-fold cross-validation, if after averaging the fold metrics I see that the difference is small for one metric and large for another, what can I do to correct this?
6. Why do we conduct K-fold cross-validation?
7. If a training metric (AIC, deviance, or log-likelihood) is higher than the corresponding testing metric, what does that suggest, and should I keep the model or correct it?
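To make the fold-averaging step in question 4 concrete, here is a minimal pure-Python sketch, not SPSS syntax. Everything in it is an illustrative assumption: the toy counts, the interleaved fold scheme, and the crude intercept-only negative binomial fit by the method of moments (SPSS estimates the model by maximum likelihood). The point is only to show what "average the per-fold AIC, deviance, and log-likelihood, then compare against the full-data fit" looks like mechanically.

```python
import math
import random

def nb_loglik(y, mu, r):
    """Negative binomial log-likelihood with common mean mu and size r."""
    ll = 0.0
    for yi in y:
        ll += (math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
               + r * math.log(r / (r + mu)) + yi * math.log(mu / (r + mu)))
    return ll

def fit_metrics(y, n_params=2):
    """Crude intercept-only NB fit by moments; returns (logLik, AIC, deviance).
    Illustrative stand-in for the ML fit SPSS would perform."""
    n = len(y)
    mu = sum(y) / n
    var = sum((yi - mu) ** 2 for yi in y) / (n - 1)
    r = mu ** 2 / max(var - mu, 1e-9)      # size parameter by moments
    ll = nb_loglik(y, mu, r)
    aic = 2 * n_params - 2 * ll
    # Saturated model: each observation fitted exactly (mu_i = y_i)
    ll_sat = sum(nb_loglik([yi], max(yi, 1e-9), r) for yi in y)
    deviance = 2 * (ll_sat - ll)
    return ll, aic, deviance

random.seed(1)
# Toy overdispersed counts (sum of two discrete uniforms), purely illustrative
y = [random.randint(0, 8) + random.randint(0, 8) for _ in range(200)]

k = 5
folds = [y[i::k] for i in range(k)]        # simple interleaved 5-fold split
per_fold = []
for i in range(k):
    # Refit on the k-1 training folds and record that fit's metrics;
    # one could also evaluate the held-out fold's log-likelihood instead.
    train = [v for j, f in enumerate(folds) if j != i for v in f]
    per_fold.append(fit_metrics(train))

avg = [sum(m[j] for m in per_fold) / k for j in range(3)]
full = fit_metrics(y)                      # reference fit on the full dataset
print("fold-averaged  logLik=%.1f  AIC=%.1f  deviance=%.1f" % tuple(avg))
print("full-dataset   logLik=%.1f  AIC=%.1f  deviance=%.1f" % full)
```

In SPSS itself, the corresponding numbers for a fitted negative binomial model appear in the Goodness of Fit table of the Generalized Linear Models (GENLIN) procedure, which reports deviance, log-likelihood, and AIC for each run.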
