Question:

Suppose we have a sequential ordered file of 400,000 unspanned records, where each record
is 4000 bytes. Assume 1 record per block, average seek time = 9 ms, average rotational delay
= 3 ms, and block transfer time = 0.04 ms. Suppose we want to make X independent random
record reads from the file. This could be done using two different approaches:
1. read the entire file once and look for the X records of interest;
2. use a binary search to find a particular record, and repeat this for all X records of
interest.
The question is to decide when it would be more efficient to perform approach 1 versus
approach 2. That is, what is the value of X for which an exhaustive read of the file is more
efficient than X binary searches? Develop this as a function of X. (A graph would be
helpful.)

Step by Step Solution
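One way to set up the cost comparison is sketched below. It assumes the usual textbook disk model: the exhaustive read is a single sequential pass (one seek and one rotational delay, then 400,000 block transfers), while each of the ceil(log2 400,000) = 19 probes of a binary search is a random block access costing s + rd + btt. The variable names and the sequential-scan assumption are illustrative, not taken from the original solution.

```python
from math import ceil, log2

# Parameters from the question
b   = 400_000   # number of blocks (1 record per block, unspanned)
s   = 9.0       # average seek time, ms
rd  = 3.0       # average rotational delay, ms
btt = 0.04      # block transfer time, ms

# Approach 1: exhaustive sequential read of the whole file (cost independent of X).
# Assumption: one seek + one rotational delay, then b consecutive block transfers.
scan_cost = s + rd + b * btt                 # 9 + 3 + 400,000 * 0.04 = 16,012 ms

# Approach 2: one binary search touches about ceil(log2 b) blocks,
# each a random access costing s + rd + btt.
probes_per_search = ceil(log2(b))            # 19
search_cost = probes_per_search * (s + rd + btt)   # 19 * 12.04 = 228.76 ms

def cost_binary(x: int) -> float:
    """Total cost (ms) of X independent binary searches."""
    return x * search_cost

# Break-even point: the scan is cheaper once X * search_cost exceeds scan_cost.
x_break = scan_cost / search_cost            # ~ 70
print(f"scan = {scan_cost:.2f} ms, one search = {search_cost:.2f} ms, "
      f"break-even X ~ {x_break:.1f}")

# Optional: plot the two cost curves against X (requires matplotlib)
# import matplotlib.pyplot as plt
# xs = range(0, 201)
# plt.plot(xs, [scan_cost] * len(xs), label="exhaustive read")
# plt.plot(xs, [cost_binary(x) for x in xs], label="X binary searches")
# plt.xlabel("X (number of record reads)"); plt.ylabel("cost (ms)")
# plt.legend(); plt.show()
```

Under these assumptions the graph shows a horizontal line at about 16,012 ms for the full scan and a line through the origin with slope about 228.76 ms per search; they cross near X ≈ 70, so the exhaustive read becomes the cheaper option once roughly 70 or more records are wanted. If instead every block of the scan were charged a full random access (400,000 × 12.04 ms), the crossover would move out to roughly X ≈ 21,000.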
