(a) Fix a single arbitrary hypothesis h : X -> Y produced by A and determine a lower
bound on the number of examples, k, such that P[err(h) > \epsilon] <= \delta. (The contrapositive
view would be: with probability at least 1 - \delta, it must be the case that err(h) <= \epsilon.
Make sure this makes sense.)
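For intuition on part (a): if err(h) > \epsilon, then a single random example from D is classified correctly with probability at most 1 - \epsilon, so all k i.i.d. examples are classified correctly with probability at most (1 - \epsilon)^k <= e^{-\epsilon k}. Setting this <= \delta gives k >= (1/\epsilon) ln(1/\delta). A minimal sketch of the resulting bound (the helper name `block_size` is our own, not from the problem):

```python
import math

def block_size(epsilon: float, delta: float) -> int:
    """Smallest k guaranteeing (1 - epsilon)^k <= delta, via the
    relaxation (1 - epsilon)^k <= e^{-epsilon * k}:
    take k = ceil(ln(1/delta) / epsilon)."""
    return math.ceil(math.log(1.0 / delta) / epsilon)

# Example: epsilon = 0.1, delta = 0.05 gives k = ceil(ln(20) / 0.1) = 30
print(block_size(0.1, 0.05))
```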
(b) From part 5a we know that as long as a block is at least of size k, then if that block is
classified correctly by a fixed arbitrary hypothesis h, we can effectively upper bound the
probability of the bad event (i.e. A outputs h such that err(h) > \epsilon) by \delta. However, our
bound must apply to every h that our algorithm B could output for an arbitrary distribution
D over examples. With this in mind, how large should m be so that we can bound
all hypotheses that could be output? (You may assume that algorithm B will know the
mistake bound throughout the question.)
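For part (b), the usual route is a union bound: if A has mistake bound M, it can produce at most M + 1 distinct hypotheses, so we can give each one its own block and charge it \delta/(M+1) of the failure probability. A sketch under that assumption (with M known to B, as the problem allows; the helper name `sample_size` is ours):

```python
import math

def sample_size(epsilon: float, delta: float, M: int) -> int:
    """Total number of examples m for algorithm B.
    At most M + 1 hypotheses can ever be output, so size each block
    so that (1 - epsilon)^k <= delta / (M + 1), i.e.
    k = ceil(ln((M + 1) / delta) / epsilon), and use M + 1 blocks."""
    k = math.ceil(math.log((M + 1) / delta) / epsilon)
    return (M + 1) * k

# Example: epsilon = 0.1, delta = 0.05, M = 9 gives k = 53 and m = 530
print(sample_size(0.1, 0.05, 9))
```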
(c) Put everything together and fully describe (with proof) a PAC learner that is able to
output a hypothesis with true error at most \epsilon with probability at least 1 - \delta, given
a mistake-bounded learner A. To do this you should first describe your pseudocode for
algorithm B, which will use A as a sub-routine (no need for minute details or code; broad
strokes suffice).
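One way part (c) can be sketched: feed i.i.d. examples to A in blocks; if A's current hypothesis survives an entire block without a mistake, output it, and otherwise let A update and start a fresh block. Since A makes at most M mistakes, at most M blocks are interrupted, so M + 1 blocks suffice. The sketch below assumes a hypothetical interface for A (`initial_hypothesis()` and `update(x, y)`) and a `sample()` oracle drawing one labeled example from D; none of these names come from the problem:

```python
import math

def pac_from_mistake_bounded(A, sample, epsilon, delta, M):
    """Sketch of algorithm B.
    Run A's current hypothesis h on a block of k fresh examples:
    if h survives the whole block, output it; on a mistake, A updates
    and we start a new block for the new hypothesis. By the union
    bound over the at most M + 1 hypotheses, the returned h satisfies
    err(h) <= epsilon with probability at least 1 - delta."""
    k = math.ceil(math.log((M + 1) / delta) / epsilon)
    h = A.initial_hypothesis()
    for _ in range(M + 1):
        survived = True
        for _ in range(k):
            x, y = sample()
            if h(x) != y:
                h = A.update(x, y)   # mistake: A moves to its next hypothesis
                survived = False
                break
        if survived:
            return h                 # h classified k i.i.d. examples correctly
    return h                         # reached only if A exceeds its mistake bound
```

Correctness in broad strokes: a hypothesis with err(h) > \epsilon survives its block with probability at most (1 - \epsilon)^k <= \delta/(M+1), and B uses at most m = (M + 1) k examples in total.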
