Question:
Answer the following questions about the computational properties of divide-and-conquer sorting algorithms, based on tight big-Oh characterizations of the asymptotic growth rates of the running-time or space functions, depending on the question. Assume that both the input sequence and the output sequence are given as lists. Also assume a general (out-of-place) implementation of the sorting algorithms, as opposed to an in-place implementation; that is, during any individual call, the divide-and-conquer algorithm creates new lists to hold the corresponding sub-lists of the input sequence during the “divide” part of the algorithm, before the “recursive” part (i.e., the constructed input for each sub-problem is recursively passed by value as a list), and returns its output as a list. The “conquer” part of the algorithm then operates on the output lists returned from the “recursive” part. Assume that insertions to and deletions from both the front and the back of a list, as well as concatenating two lists, take worst-case constant time and space, while traversing a list takes worst-case linear time and constant space.
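For concreteness, one way the list-based model above could look for merge sort is sketched below. This is an illustrative assumption, not part of the question: the function name `merge_sort` and the use of Python list slicing to build the new sub-lists are choices made here for readability.

```python
def merge_sort(seq):
    """Out-of-place merge sort in the list-based model described above."""
    n = len(seq)
    if n <= 1:
        return list(seq)
    # "Divide": create new lists for the two halves (passed by value).
    left = seq[:n // 2]
    right = seq[n // 2:]
    # "Recursive" part: sort each sub-list independently.
    left_sorted = merge_sort(left)
    right_sorted = merge_sort(right)
    # "Conquer": merge the two sorted output lists into a new list
    # by traversing both once (linear time in this model).
    merged = []
    i = j = 0
    while i < len(left_sorted) and j < len(right_sorted):
        if left_sorted[i] <= right_sorted[j]:
            merged.append(left_sorted[i])
            i += 1
        else:
            merged.append(right_sorted[j])
            j += 1
    merged.extend(left_sorted[i:])
    merged.extend(right_sorted[j:])
    return merged
```

Note that the sketch never mutates its input: each call builds fresh lists during the “divide” step and returns a fresh list, matching the pass-by-value assumption in the question.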
a) In terms of worst-case running time, what is the time complexity of the “divide” part (i.e., before the recursion) of quick-sort and merge sort? Why is this the case?
b) In terms of worst-case running time, what is the time complexity of the “conquer” part (i.e., after the recursion) of quick-sort and merge sort? Why is this the case?
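A companion sketch of quick-sort under the same out-of-place, list-based assumptions may help when reasoning about the two parts. The choice of the last element as the pivot is an assumption made here for illustration; the question does not fix a pivot rule.

```python
def quick_sort(seq):
    """Out-of-place quick-sort in the same list-based model."""
    if len(seq) <= 1:
        return list(seq)
    # "Divide": partition around a pivot (here, the last element),
    # building two new sub-lists in one linear traversal.
    pivot = seq[-1]
    smaller = [x for x in seq[:-1] if x <= pivot]
    larger = [x for x in seq[:-1] if x > pivot]
    # "Recursive" part: sort each partition independently.
    sorted_smaller = quick_sort(smaller)
    sorted_larger = quick_sort(larger)
    # "Conquer": concatenate the sorted partitions around the pivot
    # (constant time and space under the stated list assumptions).
    return sorted_smaller + [pivot] + sorted_larger
```

In this sketch the linear traversal happens in the “divide” step, while the “conquer” step is only a concatenation, the mirror image of merge sort, where the divide is a split and the conquer does the linear merge.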