Question: Using Python, perform bagging with bootstrapping as follows. Write a for loop with loop variable i = 0...18. In each iteration of the loop, you have to:

1. Make a bootstrap sample of the original "Training" dataset (built in part (b)) with size bootstrap_size = 0.8 * (size of the original dataset). You can use the following command to generate a random bootstrap dataset ("i" is the loop variable, so random_state changes in each iteration):

resample(X_train, n_samples=bootstrap_size, random_state=i, replace=True)

2. Define and train a new base decision tree classifier on this dataset in each iteration:

Base_DecisionTree = DecisionTreeClassifier(random_state=3)

3. Perform prediction using this base classifier on the original "Testing" dataset X_test (built in part (b)), and save the prediction results for all testing samples.

After finishing the for loop, you should have 19 different predictions for EACH sample in your testing set. Then perform voting to make the final decision on each data sample based on the votes of all 19 classifiers. Finally, calculate and report the final accuracy of your Bagging (Voting) method.

Note: You do NOT need to calculate the accuracy of each one of the base classifiers in each round of the loop! You only have to perform voting to make the final decision on each data sample, and then calculate the accuracy on the final results.
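The three steps above can be sketched as the following loop. This is a minimal sketch, not the official solution: it assumes X_train, y_train, X_test, and y_test come from part (b), so a toy dataset is substituted here to make it self-contained. Note that X and y are passed to resample together (a slight extension of the command in the question) so that the bootstrap labels stay aligned with the features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample
from sklearn.metrics import accuracy_score

# Stand-in for the dataset built in part (b) (assumption for this sketch)
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bootstrap_size = int(0.8 * len(X_train))
all_preds = []

for i in range(19):  # loop variable i = 0 ... 18
    # Step 1: bootstrap sample of the training set; random_state=i
    # changes the sample each iteration
    X_bs, y_bs = resample(X_train, y_train, n_samples=bootstrap_size,
                          random_state=i, replace=True)
    # Step 2: train a fresh base decision tree on this bootstrap sample
    base_tree = DecisionTreeClassifier(random_state=3)
    base_tree.fit(X_bs, y_bs)
    # Step 3: predict on the ORIGINAL test set and save the predictions
    all_preds.append(base_tree.predict(X_test))

# Voting: 19 predictions per test sample -> majority vote per column
votes = np.array(all_preds)  # shape (19, n_test_samples)
final_pred = np.array([np.bincount(col).argmax() for col in votes.T])

print("Bagging (voting) accuracy:", accuracy_score(y_test, final_pred))
```

Only the final voted predictions are scored, matching the note in the question: no per-classifier accuracy is computed inside the loop.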
