Question:

Task 1. We have learned that a bagged model has a $\sigma^2/m$ error guarantee based on the assumption that each base model suffers independent $N(0, \sigma^2)$ noise. Now consider a weighted bagged model
$$f(x) = \sum_{i=1}^{m} \alpha_i f_i(x),$$
where the $f_i$'s are learned in the same way as in the bagged model (not as in the boosting model).
Prove that, under the same noise assumption, this weighted bagged model has a better error guarantee than $\sigma^2/m$ if it is a convex combination of the base models (i.e., $\sum_{i=1}^{m} \alpha_i = 1$ and $\alpha_i \ge 0$) and $\|\alpha\|_2 \le \frac{1}{\sqrt{m}}$, where $\alpha = [\alpha_1, \dots, \alpha_m]$ is an $m$-dimensional vector and $\|\cdot\|_2$ is the $\ell_2$ norm.
Tip: The analysis should be similar to the bagged model analysis, except at the end we take a further argument based on the Cauchy-Schwarz inequality $\sum_{i=1}^{m} \alpha_i p_i \le \sqrt{\sum_{i=1}^{m} \alpha_i^2}\,\sqrt{\sum_{i=1}^{m} p_i^2}$ for any $p_1, \dots, p_m$.
Please show the solution by hand.
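A minimal sketch of the main computation, under the assumption (as in the bagged-model analysis this task refers to) that each base prediction decomposes as $f_i(x) = h(x) + \epsilon_i$ with target function $h$ and independent noise $\epsilon_i \sim N(0, \sigma^2)$; the symbol $h$ and this decomposition are not stated above and are introduced here only for the sketch:

$$
f(x) - h(x) = \sum_{i=1}^{m} \alpha_i \big(h(x) + \epsilon_i\big) - h(x) = \sum_{i=1}^{m} \alpha_i \epsilon_i
\qquad \text{(using } \textstyle\sum_{i=1}^{m} \alpha_i = 1\text{)},
$$
$$
\mathbb{E}\big[(f(x) - h(x))^2\big] = \operatorname{Var}\Big(\sum_{i=1}^{m} \alpha_i \epsilon_i\Big) = \sigma^2 \sum_{i=1}^{m} \alpha_i^2 = \sigma^2 \|\alpha\|_2^2 \le \frac{\sigma^2}{m}
\qquad \text{when } \|\alpha\|_2 \le \tfrac{1}{\sqrt{m}}.
$$

One way to use the tip's inequality is with $p_i = 1$, which gives $1 = \sum_{i=1}^{m} \alpha_i \le \sqrt{m}\,\|\alpha\|_2$ for any convex combination, relating the convexity constraint to the $\ell_2$ bound on $\alpha$.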