Question: Part of developing computational tools is analyzing the computational time of algorithms. This can be done in a couple of different ways in R: you can use either `system.time()` or the `microbenchmark` package.
Let's look at the computational time of the `sort()` function, in particular the quicksort method. The quicksort algorithm was developed by Tony Hoare in 1959. It is a particularly fast sorting algorithm with an average computation time of n·log(n) for a list of size n.
First we create a random vector of size 1,000,000 with `x <- rnorm(1000000)`.
Running `system.time(sort(x, method = "quick"))` 10 times, we get the following table of elapsed times for the quicksort algorithm (a sketch of how such measurements could be collected follows the table):
| Measurement | Elapsed time (secs) |
|---|---|
| 1 | 0.088 |
| 2 | 0.109 |
| 3 | 0.093 |
| 4 | 0.083 |
| 5 | 0.086 |
| 6 | 0.088 |
| 7 | 0.088 |
| 8 | 0.110 |
| 9 | 0.085 |
| 10 | 0.104 |
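
The question does not show how the ten measurements above were gathered, so the following is only a sketch of one possible way: repeat the `system.time()` call with `replicate()` and keep the elapsed component.

```r
# Sketch only: the question does not show the collection loop, so this is
# an assumption about how the ten elapsed times could be gathered.
set.seed(1)                    # assumed seed, purely for reproducibility
x <- rnorm(1000000)            # random vector of size 1,000,000

# Repeat the timing ten times, keeping the "elapsed" component (seconds)
elapsed <- replicate(10, system.time(sort(x, method = "quick"))["elapsed"])
elapsed
```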
Problems:
- What is the population of the study? What is the sample?
- Give the five-number summary of the data.
- The `microbenchmark` library automatically runs a code snippet multiple times. Use the code `summary(microbenchmark(sort(x, method = "quick"), times = 10, unit = "s"))` to find the five-number summary of computational times using `microbenchmark`.
- Draw boxplots of both the `system.time()` and `microbenchmark` timings on a single axis (see the sketch after this list for one possible approach).
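
A minimal sketch of how the last two problems could be approached is below; the vector name `st_times` and the plotting choices are assumptions, not part of the question:

```r
# Minimal sketch, assuming the ten system.time() elapsed times from the
# table are typed in by hand as st_times (hypothetical name).
library(microbenchmark)

st_times <- c(0.088, 0.109, 0.093, 0.083, 0.086, 0.088,
              0.088, 0.110, 0.085, 0.104)
fivenum(st_times)              # five-number summary of the system.time() data

x <- rnorm(1000000)
mb <- microbenchmark(sort(x, method = "quick"), times = 10, unit = "s")
summary(mb)                    # min, lower quartile, median, upper quartile, max

# Boxplots of both sets of timings on a single axis, in seconds
mb_secs <- mb$time / 1e9       # microbenchmark records times in nanoseconds
boxplot(list(system.time = st_times, microbenchmark = mb_secs),
        ylab = "Elapsed time (s)")
```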