Problem description:

E.g. given an unordered list of N elements, find the medians for sub-ranges 0..100, 25..200, 400..1000, 10..500, ...

I don't see any better way than going through each sub-range and running the standard median-finding algorithm.

A simple example: [5 3 6 2 4]

The median for 0..3 is 5. (Not 4, since we are asking for the median of the first three elements of the original list.)

**INTEGER ELEMENTS:**

If your elements are integers, then the best way is to have a bucket for each number that lies in any of your sub-ranges, where each bucket counts how many times its associated integer appears in your input (for example, `bucket[100]` stores how many `100`s there are in your input sequence). Basically you can achieve it in the following steps:

- Create a bucket for each number that lies in any of your sub-ranges.
- Iterate through all elements; for each number `n`, if `bucket[n]` exists, then `bucket[n]++`.
- Compute the medians based on the aggregated counts stored in your buckets.
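The steps above can be sketched as follows (a minimal Python sketch; for illustration the sub-range is taken as a value range `[lo, hi]`, and `bucket_median` is a name chosen here):

```python
from collections import Counter

def bucket_median(elements, lo, hi):
    """Median of the input values that lie in [lo, hi], via counting buckets."""
    # One pass over the input: count occurrences of each value in the sub-range.
    bucket = Counter(x for x in elements if lo <= x <= hi)
    n = sum(bucket.values())
    if n == 0:
        return None
    # Walk the buckets in value order until the cumulative count reaches the
    # rank of the (lower) median.
    target = (n + 1) // 2  # 1-based rank of the lower median
    seen = 0
    for v in range(lo, hi + 1):
        seen += bucket[v]  # Counter returns 0 for absent values
        if seen >= target:
            return v

print(bucket_median([5, 3, 6, 2, 4, 7, 9], 0, 6))  # → 4
```

The values falling in `[0, 6]` are 5, 3, 6, 2, 4, whose median is 4.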

Put another way, suppose you have a sub-range `[0, 10]` and you would like to compute its median. The bucket approach counts how many `0`s there are in your input, how many `1`s, and so on. Suppose there are `n` numbers lying in the range `[0, 10]`; then the median is the `n/2`-th smallest element, which can be identified by finding the `i` such that `bucket[0] + bucket[1] + ... + bucket[i]` is greater than or equal to `n/2` but `bucket[0] + ... + bucket[i - 1]` is less than `n/2`.

The nice thing about this is that even if your input elements are stored on multiple machines (i.e., the distributed case), each machine can maintain its own buckets, and only the aggregated counts need to be passed over the network.
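In Python, for instance, the aggregation step amounts to adding per-machine count tables (a sketch of the merge only; the per-machine counts below are hypothetical):

```python
from collections import Counter

# Hypothetical per-machine bucket counts for values in the sub-range [0, 10].
machine_a = Counter({2: 3, 5: 1})
machine_b = Counter({2: 1, 7: 4})

# Only these small count tables cross the network; the merge is a plain sum.
merged = machine_a + machine_b
print(merged[2])  # → 4 (occurrences of the value 2 across both machines)
```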

You can also use hierarchical buckets, which involves multiple passes. In each pass, `bucket[i]` counts the number of input elements that lie in a specific range (for example, `[i * 2^K, (i+1) * 2^K)`); then narrow down the problem space by identifying which bucket the median lies in after each pass, decrease `K` by `1` in the next pass, and repeat until you can exactly identify the median.
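The multi-pass narrowing can be sketched as below (an assumption-laden sketch: elements are non-negative integers, the requested rank is valid, and `median_by_passes` and the fixed bucket count are choices made here rather than part of any standard API):

```python
def median_by_passes(elements, max_value, target_rank, bucket_count=16):
    """Find the value with 1-based rank `target_rank` (e.g. n//2 + 1 for the
    upper median) using repeated coarse-to-fine counting passes over the data.
    Each pass needs only `bucket_count` counters, so memory stays tiny."""
    lo, hi = 0, max_value                      # current candidate value range
    while lo < hi:
        width = (hi - lo) // bucket_count + 1  # bucket width for this pass
        counts = [0] * bucket_count
        for x in elements:                     # one full pass over the input
            if lo <= x <= hi:
                counts[(x - lo) // width] += 1
        for i, c in enumerate(counts):         # which bucket holds the rank?
            if target_rank <= c:
                lo, hi = lo + i * width, min(hi, lo + (i + 1) * width - 1)
                break
            target_rank -= c                   # skip earlier buckets' counts
    return lo

print(median_by_passes([5, 3, 6, 2, 4], 10, 3))  # → 4 (median of 5 values)
```

Each pass shrinks the candidate range by roughly the bucket count, so the number of passes is logarithmic in the value range.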

**FLOATING-POINT ELEMENTS**

**The entire elements can fit into memory:**

If all of your elements can fit into memory, first sorting the N elements and then finding the median for each sub-range is the best option. The linear-time heap solution also works well in this case if the number of your sub-ranges is less than `log N`.

**The entire elements cannot fit into memory but stored in a single machine:**

Generally, an external sort requires about three disk scans. Therefore, if the number of your sub-ranges is greater than or equal to 3, first sorting the N elements and then finding the median of each sub-range by loading only the necessary elements from disk is the best choice. Otherwise, simply performing one scan per sub-range and picking out the elements in that sub-range is better.

**The entire elements are stored in multiple machines:**
Since finding the median is a holistic operation, meaning you cannot derive the final median of the entire input from the medians of several parts of the input, it is a hard problem whose solution cannot be described in a few sentences, but there has been research focused on this problem.

I think that as the number of sub-ranges increases you will very quickly find that it is quicker to sort and then retrieve the elements at the indices you want.

In practice, because there will be highly optimized sort routines you can call.

In theory, and perhaps in practice too, because since you are dealing with integers you need not pay O(n log n) for a sort - see http://en.wikipedia.org/wiki/Integer_sorting.

If your data are in fact floating point and contain no NaNs, then a little bit-twiddling will allow you to use integer sort on them - from http://en.wikipedia.org/wiki/IEEE_754-1985#Comparing_floating-point_numbers - The binary representation has the special property that, excluding NaNs, any two numbers can be compared like sign and magnitude integers (although with modern computer processors this is no longer directly applicable): if the sign bit is different, the negative number precedes the positive number (except that negative zero and positive zero should be considered equal), otherwise, relative order is the same as lexicographical order but inverted for two negative numbers; endianness issues apply.

So you could check for NaNs and other funnies, pretend the floating point numbers are sign + magnitude integers, subtract when negative to correct the ordering for negative numbers, and then treat as normal 2s complement signed integers, sort, and then reverse the process.
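That bit-twiddling can be sketched in Python (assuming finite 64-bit doubles and no NaNs; `float_key` is a name chosen here, and the trick shown — flip all bits of negatives, set the sign bit of non-negatives — is the common variant of what the paragraph describes):

```python
import struct

def float_key(f):
    """Map a finite float to an unsigned 64-bit int whose natural integer
    order matches the float order (sign/magnitude flip trick)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", f))
    if bits & (1 << 63):                   # negative: flip all bits so that
        return bits ^ 0xFFFFFFFFFFFFFFFF  # more-negative sorts first
    return bits | (1 << 63)               # non-negative: set the sign bit

xs = [3.5, -1.25, 0.0, 2.0, -7.5]
# Sorting by the integer key gives the same order as sorting the floats.
assert sorted(xs) == sorted(xs, key=float_key)
```

With keys like this you can run any integer sort (e.g. radix sort) on floating-point data and then map back.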

**My idea:**

- Sort the list into an array (using any appropriate sorting algorithm).
- For each range, find the indices of the start and end of the range using binary search.
- Find the median by simply averaging those indices (i.e., the median of range `[x, y]` is `arr[(x+y)/2]`).

Preprocessing time: O(n log n) for a generic sorting algorithm (like quick-sort) or the running time of the chosen sorting routine

Time per query: O(log n)
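A sketch of this sort-then-binary-search approach, interpreting each query as a value range `[lo, hi]` (the function names here are choices made for illustration; note that `bisect_right` returns an exclusive end index, hence the `- 1` in the median formula):

```python
from bisect import bisect_left, bisect_right

def preprocess(values):
    return sorted(values)                 # O(n log n), done once

def range_median(arr, lo, hi):
    """Median of the sorted values falling in [lo, hi]; O(log n) per query."""
    x = bisect_left(arr, lo)              # index of the first element >= lo
    y = bisect_right(arr, hi)             # one past the last element <= hi
    if x == y:
        return None                       # no elements in the range
    return arr[(x + y - 1) // 2]          # lower median of arr[x:y]

print(range_median(preprocess([5, 3, 6, 2, 4]), 0, 6))  # → 4
```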

**Dynamic list:**

The above assumes that the list is static. If elements can freely be added or removed between queries, a modified Binary Search Tree could work, with each node keeping a count of the number of descendants it has. This will allow the same running time as above with a dynamic list.
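A minimal sketch of such a count-augmented BST (kept unbalanced for brevity, so worst-case cost is O(n); a self-balancing variant such as a treap or AVL tree is needed for the O(log n) bound the paragraph claims):

```python
class Node:
    __slots__ = ("val", "left", "right", "size")
    def __init__(self, val):
        self.val, self.left, self.right, self.size = val, None, None, 1

def insert(node, val):
    if node is None:
        return Node(val)
    node.size += 1                        # maintain subtree count on the way down
    if val < node.val:
        node.left = insert(node.left, val)
    else:
        node.right = insert(node.right, val)
    return node

def select(node, k):
    """Return the k-th smallest value (1-based) using the subtree sizes."""
    left_size = node.left.size if node.left else 0
    if k <= left_size:
        return select(node.left, k)
    if k == left_size + 1:
        return node.val
    return select(node.right, k - left_size - 1)

root = None
for v in [5, 3, 6, 2, 4]:
    root = insert(root, v)
print(select(root, 3))  # → 4 (the median of the 5 values)
```

Deletion would decrement the counts along the search path in the same way.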

The answer is ultimately going to be "it depends". There are a variety of approaches, any one of which will probably be suitable for most of the cases you encounter. The problem is that each performs differently for different inputs: where one performs better for one class of inputs, another will perform better for a different class.

As an example, the approach of sorting and then performing a binary search on the extremes of your ranges and then directly computing the median will be useful when the number of ranges you have to test is greater than log(N). On the other hand, if the number of ranges is smaller than log(N) it may be better to move elements of a given range to the beginning of the array and use a linear time selection algorithm to find the median.
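For the few-ranges case, the linear-time selection can be sketched with a randomized quickselect over a copy of the range's elements (expected linear time; this sketch reuses the question's example of index ranges into the original list):

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (1-based) in expected linear time."""
    pivot = random.choice(xs)
    lt = [x for x in xs if x < pivot]     # three-way partition around the pivot
    eq = [x for x in xs if x == pivot]
    gt = [x for x in xs if x > pivot]
    if k <= len(lt):
        return quickselect(lt, k)         # answer lies left of the pivot
    if k <= len(lt) + len(eq):
        return pivot                      # the pivot itself is the answer
    return quickselect(gt, k - len(lt) - len(eq))

data = [5, 3, 6, 2, 4]
print(quickselect(data[0:3], 2))  # → 5, the median of the first three elements
```

This copies the slice rather than moving elements to the front of the array, but the expected cost is the same O(m) for a range of m elements.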

All of this boils down to profiling to avoid premature optimization. If the approach you implement turns out to not be a bottleneck for your system's performance, figuring out how to improve it isn't going to be a useful exercise relative to streamlining those portions of your program which are bottlenecks.