Question: In class we saw that finite hypothesis classes are always PAC-learnable. Here, we will prove that there are also infinite hypothesis classes that are PAC-learnable.

Let's first start with an easier case; that is, we consider a specific distribution rather than proving the result for any distribution. Let $\mathcal{X} = \{x \in \mathbb{R}^2 : \|x\|_2 \le 1\}$ and $\mathcal{Y} = \{0, 1\}$. Consider the class of classifiers given by concentric circles in the plane that predict 1 inside the circle and 0 outside, that is, $\mathcal{F} = \{h_r : h_r(x) = \mathbb{1}[\|x\|_2 \le r],\ r > 0\}$. Note that the cardinality of this hypothesis class is infinite. Denote by $h_S$ the solution of the ERM problem.

(a) Assume that the samples $x$ are uniformly distributed in $\mathcal{X}$ and assume the realizability assumption. Prove that if the number of training samples is bigger than the sample complexity function $m_{\mathcal{F}}(\varepsilon, \delta) = \lceil \frac{1}{\varepsilon} \log \frac{1}{\delta} \rceil$, then with probability at least $1 - \delta$ we have $L_{\mathcal{D}}(h_S) \le \varepsilon$.

(b) Let's now move to the harder case: prove that $\mathcal{F}$ is PAC-learnable with the same sample complexity, up to constants. In other words, prove that the above guarantee holds for any distribution over $\mathcal{X}$.
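Although the expert solution is not reproduced here, the following is a minimal sketch of the standard "missed annulus" argument that both parts call for. It assumes the ERM rule returns the smallest radius consistent with the sample, $r_S = \max\{\|x_i\|_2 : y_i = 1\}$ (with $r_S = 0$ if there are no positive examples); the symbols $r^\star$, $r_\varepsilon$, and $A_\varepsilon$ are notation introduced for this sketch, not from the problem statement.

For part (a), realizability gives some $r^\star$ with $L_{\mathcal{D}}(h_{r^\star}) = 0$ (take $r^\star \le 1$ without loss of generality). Under the uniform distribution on the unit disk, the mass of the disk of radius $r \le 1$ is $\pi r^2 / \pi = r^2$, so the annulus
\[
A_\varepsilon = \{x : r_\varepsilon \le \|x\|_2 \le r^\star\}, \qquad r_\varepsilon = \sqrt{(r^\star)^2 - \varepsilon},
\]
has mass exactly $\varepsilon$ (if $(r^\star)^2 < \varepsilon$, then $L_{\mathcal{D}}(h_S) \le (r^\star)^2 < \varepsilon$ trivially, since $r_S \le r^\star$). If any training point lands in $A_\varepsilon$, then $r_S \ge r_\varepsilon$ and hence $L_{\mathcal{D}}(h_S) = (r^\star)^2 - r_S^2 \le \varepsilon$. The only bad event is that all $m$ i.i.d. samples miss $A_\varepsilon$:
\[
\Pr\!\left[L_{\mathcal{D}}(h_S) > \varepsilon\right] \le (1 - \varepsilon)^m \le e^{-\varepsilon m} \le \delta
\quad \text{whenever } m \ge \frac{1}{\varepsilon} \log \frac{1}{\delta}.
\]
For part (b), the same argument goes through for an arbitrary distribution $\mathcal{D}$ by choosing the annulus implicitly: set $r_\varepsilon = \inf\{r \ge 0 : \mathcal{D}(\{x : r < \|x\|_2 \le r^\star\}) \le \varepsilon\}$. Continuity of measure gives $\mathcal{D}(A_\varepsilon) \ge \varepsilon$, while any sample falling in $A_\varepsilon$ again forces $L_{\mathcal{D}}(h_S) \le \varepsilon$, so the same exponential bound applies with the same sample complexity, up to constants.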
