Question: Adaptive sampling

The sampling density required in time and space depends on the variability of the phenomenon, the noise, and the reconstruction method. Suppose linear interpolation is to be used and noise is negligible. The function $\sin x$ is to be sampled such that the maximum error between the interpolated values and the function itself is less than some maximum value $M$. The form of Taylor's Theorem that applies to this situation is

$$f(x) = \frac{(x - x_0)\,f(x_1) - (x - x_1)\,f(x_0)}{h} + \frac{f''(\xi(x))\,(x - x_0)(x - x_1)}{2},$$

where $h = x_1 - x_0$ and $\xi(x)$ is some point in the interval $[x_0, x_1]$.
The second term in this equation is the error if the linear approximation represented by the first term is used.
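Since $|(x - x_0)(x - x_1)|$ attains its maximum of $h^2/4$ at the midpoint of the interval, the error term obeys the standard worst-case bound

$$|E(x)| \le \frac{h^2}{8} \max_{\xi \in [x_0,\, x_1]} |f''(\xi)|,$$

which is the estimate the parts below rely on.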

(a) Compute the maximum error for the two situations $(x_0, x_1) = (-\pi/4, \pi/4)$ and $(x_0, x_1) = (\pi/4, 3\pi/4)$.
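As a quick numerical cross-check of part (a) (not part of the original problem; the helper `max_interp_error` is a hypothetical name), the following sketch densely samples each interval and compares $\sin x$ against its endpoint chord. It should report roughly 0.030 for the first interval and 0.293 for the second.

```python
import numpy as np

def max_interp_error(f, x0, x1, n=100001):
    """Maximum |f(x) - linear interpolant| over [x0, x1], via dense sampling."""
    x = np.linspace(x0, x1, n)
    # Chord (linear interpolant) through the interval endpoints
    p = f(x0) + (f(x1) - f(x0)) * (x - x0) / (x1 - x0)
    return np.max(np.abs(f(x) - p))

for x0, x1 in [(-np.pi / 4, np.pi / 4), (np.pi / 4, 3 * np.pi / 4)]:
    print(f"[{x0:+.4f}, {x1:+.4f}]: max error = {max_interp_error(np.sin, x0, x1):.4f}")
```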

(b) Clearly a much larger error results in the second case. Find the bound on the maximum value of the second derivative of $\sin x$, and thus, using Taylor's Theorem, determine the length of interval required to achieve an error similar to that of the first case.
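As a hint at the setup (a sketch under the bound above, not the locked solution): since $\frac{d^2}{dx^2}\sin x = -\sin x$, the second derivative satisfies $|f''(\xi)| \le 1$ everywhere, with the bound approached near $x = \pi/2$, whereas $|f''(\xi)| \le \sqrt{2}/2$ on the first interval $[-\pi/4, \pi/4]$. Taking the first-case error from part (a) as the target $M$, the bound gives

$$\frac{h^2}{8} \cdot \max|f''| \le M \quad\Longrightarrow\quad h \le \sqrt{8M};$$

for example, reading "similar to the first case" as $M \approx 0.03$ would require $h \approx 0.49$ near $\pi/2$, compared with the width $\pi/2 \approx 1.57$ used in part (a).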

(c) Devise an adaptive procedure that would automatically sample a function at variable density to meet a specified maximum interpolation error target for each subinterval, even when the function is unknown.
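One possible sketch of such a procedure, using recursive bisection with the midpoint's deviation from the chord as a heuristic error estimate (function and parameter names here are illustrative, not from the text):

```python
import math

def adaptive_sample(f, a, b, tol, max_depth=20):
    """Recursively bisect [a, b] until the sampled midpoint deviates from
    the chord through (a, f(a)) and (b, f(b)) by at most tol. The midpoint
    deviation estimates the linear-interpolation error (it approximates the
    f''(xi) * h^2 / 8 term), but it is a heuristic, not a guaranteed bound.
    Returns a sorted list of sample abscissas."""
    mid = 0.5 * (a + b)
    chord_mid = 0.5 * (f(a) + f(b))  # value of the linear interpolant at mid
    if max_depth == 0 or abs(f(mid) - chord_mid) <= tol:
        return [a, b]
    left = adaptive_sample(f, a, mid, tol, max_depth - 1)
    right = adaptive_sample(f, mid, b, tol, max_depth - 1)
    return left + right[1:]  # right[0] == mid, already at the end of left

# Example: sample sin on [0, pi] with a 0.01 per-subinterval error target;
# samples come out denser near pi/2, where |f''| = |sin x| is largest.
pts = adaptive_sample(math.sin, 0.0, math.pi, 0.01)
print(len(pts), "samples:", [round(p, 3) for p in pts])
```

Because the stopping test measures curvature directly from samples, the procedure needs no prior model of the function, which is what makes it applicable when the function is unknown. A production version would also guard against functions whose midpoint happens to fall exactly on the chord (for instance by testing two interior points per subinterval).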
