Question: 7.8 Simplified AdaBoost. Suppose we simplify AdaBoost by setting the parameter $\alpha_t$ to a fixed value $\alpha_t = \alpha > 0$, independent of the boosting round $t$.

(a) Let $\gamma$ be such that $\left(\frac{1}{2} - \epsilon_t\right) \geq \gamma > 0$. Find the best value of $\alpha$ as a function of $\gamma$ by analyzing the empirical error.

(b) For this value of $\alpha$, does the algorithm assign the same probability mass to correctly classified and misclassified examples at each round? If not, which set is assigned a higher probability mass?

(c) Using the previous value of $\alpha$, give a bound on the empirical error of the algorithm that depends only on $\gamma$ and the number of rounds of boosting $T$.

(d) Using the previous bound, show that for $T > \frac{\log m}{2\gamma^2}$, the resulting hypothesis is consistent with the sample of size $m$.
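For context, one way to see the consistency claim in (d), assuming the bound from part (c) takes the standard AdaBoost form $\widehat{R}_S(h) \leq e^{-2\gamma^2 T}$ (an assumption here; deriving that bound is the point of (c)):

```latex
\[
T > \frac{\log m}{2\gamma^2}
\;\Longrightarrow\;
\widehat{R}_S(h) \;\leq\; e^{-2\gamma^2 T} \;<\; e^{-\log m} \;=\; \frac{1}{m}.
\]
```

Since the empirical error on a sample of size $m$ is an integer multiple of $1/m$, any value strictly below $1/m$ forces $\widehat{R}_S(h) = 0$, i.e. the hypothesis is consistent with the sample.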

(e) Let $s$ be the VC-dimension of the base learners used. Give a bound on the generalization error of the consistent hypothesis obtained after $T = \left\lfloor \frac{\log m}{2\gamma^2} \right\rfloor + 1$ rounds of boosting. (Hint: use the fact that the VC-dimension of the family of functions $\{\mathrm{sgn}(\sum_{t=1}^{T} \alpha_t h_t) : \alpha_t \in \mathbb{R}\}$ is bounded by $2(s+1)T \log_2(eT)$.) Suppose now that $\gamma$ varies with $m$. Based on the bound derived, what can be said if $\gamma(m) = O\left(\sqrt{\frac{\log m}{m}}\right)$?
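As an illustration of the algorithm the exercise describes, here is a minimal sketch of simplified AdaBoost with a fixed step size $\alpha$. Decision stumps are used as base learners purely for concreteness (the exercise leaves the base-learner family abstract), and all function and variable names are assumptions for this example:

```python
import numpy as np

def simplified_adaboost(X, y, alpha, T):
    """AdaBoost with the step size fixed to alpha in every round.

    X: (m, d) feature matrix; y: labels in {-1, +1}.
    Base learners are axis-aligned decision stumps, an illustrative
    choice; the exercise does not fix the base-learner family.
    """
    m = len(y)
    D = np.full(m, 1.0 / m)  # initial distribution D_1 over the sample
    stumps = []
    for _ in range(T):
        best = None
        # pick the stump (feature, threshold, sign) with the smallest
        # weighted error under the current distribution D
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = D[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        _, j, thr, s = best
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        stumps.append((j, thr, s))
        # usual AdaBoost reweighting, but with the fixed alpha
        # instead of a round-dependent alpha_t
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()
    return stumps

def predict(stumps, alpha, X):
    """Sign of the uniformly alpha-weighted ensemble score."""
    score = np.zeros(len(X))
    for j, thr, s in stumps:
        score += alpha * s * np.where(X[:, j] <= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
```

On a toy linearly separable sample, a few rounds with a modest fixed $\alpha$ already drive the empirical error to zero, in line with the bound in part (d).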
