-8

Just a simple algorithm to sort small integers, but it must be O(n).

1
  • The simplest algorithm to sort small integers is a table containing 2**m elements, where m is the number of bits in a "small integer". Commented Oct 5, 2011 at 1:22

5 Answers

8

A radix sort is one approach that's O(n). Since you're dealing with small integers, it shouldn't be too hard to implement.
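A minimal sketch of what this answer suggests (not from the answer itself; Python, assuming non-negative integers, with base 256 as an arbitrary choice):

```python
def radix_sort(nums, base=256):
    """LSD radix sort for non-negative integers: O(k*n),
    where k is the number of base-`base` digits in the largest value."""
    if not nums:
        return nums
    shift = 1
    while max(nums) // shift > 0:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // shift) % base].append(x)  # stable pass on one digit
        nums = [x for b in buckets for x in b]      # concatenate buckets in order
        shift *= base
    return nums
```

Each pass is stable, so the ordering established by earlier (less significant) digits is preserved.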


6 Comments

Wrong! If you look at that reference you'll see that it's O(k·n). Which is to say that the algorithm is constructed such that for less than n it goes slower, and only achieves max performance at n. I could make a bubble sort be O(k·n), by slowing it appropriately for smaller problems.
Yes, there are indeed restrictions for O(n). But radix sort is still the closest thing I can think of.
@DanielRHicks You're right about the runtime, but I can't make any sense of your claim that "the algorithm is constructed such that for less n it goes slower" - that's not the case. n is the number of elements to sort, and k is the length of the elements in bits. If k is constant (e.g., the element size doesn't scale with the size of your dataset), the result is O(n).
He's saying that if n is smaller, chances are the k can also be decreased. So by fixing k at a large value, you are "slowing down" the smaller values of n by forcing them to use a larger than necessary k. This is a totally valid argument and as his answer mentions, is part of the "fine print" of the O(n) of the radix sort.
@Daniel R Hicks: Sorry, but you're the one who's wrong. The linear performance of radix sort comes not from omitting the unpleasant k as a constant, but from using logarithmic instead of unary cost measures. +Nick Johnson: If your element range doesn't increase with N, then you just use binning & counting, which also gives you a linear-time algorithm. A valid point of the discussion is, however, that asymptotic complexity is usually not very important for (small) applications. The well-implemented STL quicksort (C++) beats almost everything, even though its runtime could be as bad as O(N^2).
1

Of course the fine print in the definition of O(n) there gets you. The radix sort, e.g., is really n*log(n) when you figure that you must create a deeper tree as you accommodate more values -- they only manage to define it as O(n) by the trick of capping the number of values to be sorted. There's no way to really beat n*log(n) in the general sense.

E.g., for 8-bit values I can easily achieve O(n) by simply having a 256-entry array. But if I go to, say, even 32-bit values then I must have an array with 4G entries, and the address decoder for the memory chip for that array will have grown with log(n) of the size of the memory chip. Yes, I can say that the version with 4G entries is O(n), but at an electronic level the addressing is log(n) slower and more complex. Additionally, the buses inside the chip must drive more current, and it will take longer for a memory cell, once "read", to dump its contents onto the bus. And all those effects are log(n).
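The 256-entry-array idea mentioned above can be sketched as follows (my sketch, not the answerer's; Python, assuming every value fits in 8 bits):

```python
def sort_bytes(values):
    """Counting sort for 8-bit values: one fixed 256-entry count array,
    so a single O(n) tally pass plus a constant-size output pass."""
    counts = [0] * 256
    for v in values:
        counts[v] += 1            # tally each value
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)       # emit each value `count` times, in order
    return out
```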


0

Simply put:

  • If you have no prior information about the numbers you're sorting, you cannot do better than O(n log n) on average
  • If you have more information (like the fact that you're dealing with integers), you can have some O(n) algorithms

These Wikipedia tables are a great resource. Have a look at the second one.


0

To the best of my knowledge, comparison-based sorting algorithms share a lower bound of O(n log n). To achieve O(n), we probably can't use any comparison-based algorithm. Also, the input must have additional properties. In your example, "small integers", I guess, means that the integers fall within a specified range. If that is the case, you could try a bucket/radix sort algorithm, which does not require any comparisons. For a simple example, suppose you have n integers to be sorted, all of which belong to the interval [1, 1000]. You just make 1000 buckets and go over the n integers; if an integer is equal to 500, it goes into bucket 500, etc. Finally, you concatenate all the buckets to obtain the sorted list. This algorithm takes O(n).
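The [1, 1000] example from this answer, as a short sketch (mine, not the answerer's; Python, assuming all inputs really do lie in the stated range):

```python
def bucket_sort_range(nums, lo=1, hi=1000):
    """Bucket sort for integers known to lie in [lo, hi]:
    one bucket per possible value, no comparisons between elements."""
    buckets = [[] for _ in range(hi - lo + 1)]
    for x in nums:
        buckets[x - lo].append(x)           # e.g. 500 goes into bucket 500
    return [x for b in buckets for x in b]  # concatenate buckets in order
```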


0

The optimum for comparison-based sorting is O(n*log(n)); the proof is not very difficult. BUT you may use counting sort, which is enumeration-based, or the very similar bucket sort... You may also use radix sort, but it is not a sort by itself. Radix sort only iteratively calls some other stable sort...
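The point that radix sort "iteratively calls some other stable sort" can be illustrated like this (my sketch, not the answerer's; Python's built-in `sorted` is guaranteed stable, so it can serve as the inner stable sort, keyed on one digit per pass):

```python
def radix_via_stable_sort(nums, base=10):
    """Radix sort phrased as repeated application of any *stable* sort,
    keyed on one digit per pass, least significant digit first."""
    if not nums:
        return nums
    shift = 1
    while max(nums) // shift > 0:
        # sorted() is stable, so the order from earlier passes is preserved
        nums = sorted(nums, key=lambda x: (x // shift) % base)
        shift *= base
    return nums
```

In a real implementation the inner pass would be a counting sort on the digit, which keeps each pass O(n); using a comparison sort here is only for illustration.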

