Our first theorem formalizes the intuition given above that learning from statistical queries implies learning in the noise-free Valiant model. The proof of
this theorem is omitted for brevity, but employs standard Chernoff bound and
uniform convergence analyses [7]. The key idea in the simulation is to draw
a single large sample with which to estimate all probabilities requested by the
statistical query algorithm.
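The simulation idea can be sketched in code. This is a minimal illustration rather than the paper's construction: the target concept, the distribution, and the query below are invented for the example, and the sample-size expression uses one standard additive Chernoff bound plus a union bound over the query space.

```python
import math
import random

# Hypothetical toy setup (not from the paper): the target concept is a
# threshold function on [0, 1], and D is the uniform distribution.
def target(x):
    return 1 if x >= 0.3 else 0

def chernoff_sample_size(alpha, delta, num_queries):
    # m = O((1/alpha^2) log(|Q|/delta)) examples suffice so that, with
    # probability >= 1 - delta, every one of the |Q| empirical estimates
    # is within alpha of its true value (additive Chernoff + union bound).
    return math.ceil((1.0 / (2 * alpha ** 2)) * math.log(2 * num_queries / delta))

def simulate_stat_query(chi, sample, f):
    # Answer the statistical query chi from the single fixed sample:
    # return the empirical frequency of chi(x, f(x)) = 1.
    return sum(chi(x, f(x)) for x in sample) / len(sample)

random.seed(0)
alpha, delta = 0.05, 0.01
m = chernoff_sample_size(alpha, delta, num_queries=10)
sample = [random.random() for _ in range(m)]  # one large draw from Ex(f, D)

# Example query: "what fraction of examples are labeled positive?"
# (true answer under this toy setup is 0.7)
est = simulate_stat_query(lambda x, label: label, sample, target)
```

The point of drawing a single sample up front is that the union bound makes its size grow only logarithmically with the number of queries, so all queries can be answered from the same data.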
Theorem 1 Let F be a class of concepts over X, and let H be a class of representations of concepts over X. Suppose that F is efficiently learnable from statistical queries using H by algorithm L. Then F is efficiently learnable using H in the Valiant model, and furthermore:
(Finite Q case) If L uses a finite query space Q and α is a lower bound on the allowed approximation error for every query made by L, then the number of calls to Ex(f,D) required to learn in the Valiant model is O((1/α^2) log(|Q|/δ)).
(Finite VC dimension case) If L uses a query space Q of Vapnik-Chervonenkis dimension d and α is a lower bound on the allowed approximation error for every query made by L, then the number of calls to Ex(f,D) required to learn in the Valiant model is O((d/α^2) log(1/δ)).
Note that in the statement of Theorem 1, the sample size dependence on ε is hidden in the sense that we expect α (and possibly the query class) to depend on ε.