August 12, 2014
 Rewatching lectures... And a thought came to me.

Most sorting algorithms handle one element at a time. Even if you sort with a binary tree and get N log N, the optimal number of compares, roughly lg(N!), is still out of reach because of all the extra compares done along the way.
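
For a sense of the gap: the information-theoretic lower bound for comparison sorting is ceil(lg(N!)) compares, and even merge sort's worst case sits above it. A quick illustrative check (standard formulas, nothing specific to the idea below):

    # Lower bound ceil(lg(N!)) vs. merge sort's worst-case compare count,
    # which for top-down merge sort is N*ceil(lg N) - 2^ceil(lg N) + 1.
    from math import factorial, log2, ceil

    for n in (10, 100, 1000):
        lower = ceil(log2(factorial(n)))
        merge_worst = n * ceil(log2(n)) - 2 ** ceil(log2(n)) + 1
        print(n, lower, merge_worst)   # e.g. 10 -> 22 vs 25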

However, to structure your comparing so you approach that minimal number of compares, you are effectively building a binary tree, consciously or not.

 So what if you merge trees instead of elements?

Consider: You have a balanced tree of the even numbers 2 4 6 8 10, so 6 is the root. Now you have a tree of the odd numbers 1 3 5 7 9, also evenly distributed, so 5 is the root.

If you are merging the odd tree into the even tree, the first compare is 5 vs 6, and 5 is less. So you break the odd tree in half: you already know everything on the left side of the odd tree is less than 6, so you don't need to compare any of it against 6. The half holding 1 3 5 follows the even tree's left branch down and gets slightly restructured so it's evenly distributed once more before being compared against 4. The remainder (7 and 9) might get re-compared against 6, or passed to the right subtree, depending on how duplicates are handled.
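
Here's a minimal sketch of that merge in Python, assuming plain unbalanced BST nodes and skipping the rebalancing step; Node, split, and merge_trees are my own names, not an existing library:

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def split(t, key):
        """Split t into (keys < key, keys >= key). Only the nodes along one
        root-to-leaf path are compared; whole subtrees move across untouched,
        which is where the compare savings come from."""
        if t is None:
            return None, None
        if t.key < key:
            lo, hi = split(t.right, key)   # t.key and all of t.left are < key
            t.right = lo
            return t, hi
        else:
            lo, hi = split(t.left, key)    # t.key and all of t.right are >= key
            t.left = hi
            return lo, t

    def merge_trees(a, b):
        """Merge b into a: split b around a's root, then recurse on the halves.
        Duplicates of a.key land in the >= half, i.e. go to the right side."""
        if a is None:
            return b
        if b is None:
            return a
        lo, hi = split(b, a.key)
        a.left = merge_trees(a.left, lo)
        a.right = merge_trees(a.right, hi)
        return a

With the trees from the example:

    def build(keys):
        """Balanced BST from a sorted list."""
        if not keys:
            return None
        mid = len(keys) // 2
        return Node(keys[mid], build(keys[:mid]), build(keys[mid + 1:]))

    def inorder(t):
        return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

    merged = merge_trees(build([2, 4, 6, 8, 10]), build([1, 3, 5, 7, 9]))
    print(inorder(merged))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Note this version mutates its inputs and never rebalances, so the result can end up lopsided; the "slightly restructured so it's evenly distributed" step above is exactly what a real implementation would add.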

I'm working on a prototype to see if this pans out... I just wonder if the overhead would cancel out the potential benefit; of course, that depends on how expensive the compare is. With ints you might not see much benefit, but with large string compares or big numbers you'd get a real performance boost.
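
One quick way to check that in a prototype (reusing the sketch above; Counted is just an illustrative wrapper) is to count every compare the merge performs:

    COMPARES = 0

    class Counted:
        """Key wrapper that counts each < comparison."""
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            global COMPARES
            COMPARES += 1
            return self.v < other.v

    evens = build([Counted(k) for k in (2, 4, 6, 8, 10)])
    odds = build([Counted(k) for k in (1, 3, 5, 7, 9)])
    merge_trees(evens, odds)
    print(COMPARES)   # compares spent by the merge alone

At this size the savings are tiny either way; the interesting question is how the count scales against inserting elements one at a time.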

It's probably possible to add this idea to existing tree structures like red-black trees, so going out of my way to build something new might be a waste except as a proof of concept.

 Thoughts?