Question: After vectorizing the training and test sets, apply Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and represent the features in the lower-dimensional space (k << 15). Then apply the SVM on both lower-dimensional feature spaces and analyze the results.
Steps to be considered:
Data pre-processing: apply PCA and LDA to reduce the feature dimensionality.
Model implementation: train the kernel-SVM method with the new features generated by PCA and LDA.
Test set preparation: apply the fitted PCA and LDA transforms to the test samples.
Evaluation: test the models with an independent test set (20% of the samples).
Finally, compare the results generated by both methods and report accordingly. Performance metrics: confusion matrix, precision, recall, F1-score, ROC curve, etc. If you implement PCA and LDA from scratch instead of using sklearn, you will be given bonus points for the second task.
Can you explain this technically, with Python code?
Step by Step Solution
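A minimal sketch of the pre-processing, model-implementation, and test-set-preparation steps, assuming sklearn and a synthetic 15-feature, 3-class dataset as a stand-in for the (unspecified) vectorized sets:

```python
# Steps 1-3: reduce dimensionality with PCA and LDA, then train a kernel SVM
# on each reduced feature space. Synthetic data is assumed; substitute your
# own vectorized training/test matrices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Stand-in data: 15 features, 3 classes.
X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)

# Hold out 20% as the independent test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize: fit on the training set only, then apply to both sets.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# PCA to k components (k << 15); unsupervised, fit on train only.
k = 3
pca = PCA(n_components=k).fit(X_train_s)
X_train_pca, X_test_pca = pca.transform(X_train_s), pca.transform(X_test_s)

# LDA: supervised; at most (n_classes - 1) = 2 components here.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train_s, y_train)
X_train_lda, X_test_lda = lda.transform(X_train_s), lda.transform(X_test_s)

# Kernel SVM (RBF) trained separately on each reduced feature space.
svm_pca = SVC(kernel="rbf").fit(X_train_pca, y_train)
svm_lda = SVC(kernel="rbf").fit(X_train_lda, y_train)

print("PCA-space test accuracy:", svm_pca.score(X_test_pca, y_test))
print("LDA-space test accuracy:", svm_lda.score(X_test_lda, y_test))
```

Note that PCA and LDA are fitted on the training data only and then applied to the test data, which is exactly the "test set preparation" step: the test samples must be projected with the transforms learned from training, never refitted.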
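The evaluation step can be sketched as follows. A binary synthetic dataset is assumed so that a single ROC curve applies; `sklearn.metrics` supplies the metrics the task lists (confusion matrix, precision, recall, F1-score, ROC/AUC):

```python
# Step 4: evaluate the PCA-space and LDA-space SVMs on the held-out 20% test
# set and compare the two methods. Binary synthetic data is assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.metrics import (confusion_matrix, classification_report,
                             roc_curve, roc_auc_score)

X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# With 2 classes, LDA yields at most 1 discriminant component.
reducers = {"PCA": PCA(n_components=3),
            "LDA": LinearDiscriminantAnalysis(n_components=1)}

for name, red in reducers.items():
    # Fit the reducer on training data only, then project both sets
    # (PCA's fit accepts y but ignores it).
    Z_tr = red.fit(X_tr, y_tr).transform(X_tr)
    Z_te = red.transform(X_te)
    clf = SVC(kernel="rbf", probability=True).fit(Z_tr, y_tr)
    y_pred = clf.predict(Z_te)
    scores = clf.predict_proba(Z_te)[:, 1]     # positive-class scores
    fpr, tpr, _ = roc_curve(y_te, scores)      # points for an ROC plot
    print(f"--- {name} ---")
    print(confusion_matrix(y_te, y_pred))
    print(classification_report(y_te, y_pred))  # precision / recall / F1
    print("ROC AUC:", roc_auc_score(y_te, scores))
```

For a multi-class problem, `roc_auc_score(..., multi_class="ovr")` would replace the single ROC curve. The printed reports are what the "compare the results generated by both methods" step asks you to discuss.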

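For the bonus, PCA can be implemented from scratch with NumPy via eigendecomposition of the covariance matrix. The function names `pca_fit` and `pca_transform` below are illustrative, not part of the assignment:

```python
# Bonus sketch: PCA without sklearn, using the covariance eigendecomposition.
import numpy as np

def pca_fit(X, k):
    """Return (mean, top-k principal components) learned from X."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)    # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]       # sort by descending variance
    return mean, eigvecs[:, order[:k]]

def pca_transform(X, mean, components):
    """Project samples onto the learned principal components."""
    return (X - mean) @ components

# Illustrative usage on random 15-dimensional data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 15))
mean, comps = pca_fit(X_train, k=3)
Z = pca_transform(X_train, mean, comps)
print(Z.shape)  # (100, 3)
```

A from-scratch LDA would follow the same pattern but solve the generalized eigenproblem on the between-class and within-class scatter matrices instead of the plain covariance.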