Do you have any ideas for a new sorting algorithm?

Attached: stable_sort.jpg (400x188, 22K)

No.

No. I believe in The Sorter.
dangermouse.net/esoteric/intelligentdesignsort.html

Not really.

This

In fact you should demand in your hiring contract that all lists come to you pre-sorted, just to be extra sure.

Yeah, sure do. We take slices off the stack and invert the binary on the heap, then with a reverse inverse look-up table we dynamically allocate smart storage tech that is contextually aware of the datum, and in 1 cycle we arrange the allocation to meet spec.

Wtf, I tried this and it sorted with O(1) time and space complexity

int[] sort(int[] in) {
    int[] out = new int[in.length];
    for (int i = 0; i < in.length; i++)
        out[i] = in[i]; // copy only
    return out;
}

No point, as you cannot sort any faster than n * log(n), and algorithms that perform as such already exist.

Yes, I do.

Can't top sleep sort.

Wrong. I "invented" an algorithm a while ago with O(n) complexity.

You can't beat O(n*log(n)) for a general comparison sort, but if you have assumptions about your data, improvements are possible.
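For example, if you can assume the values are non-negative integers with a known bound k, counting sort does it in O(n + k). A minimal sketch (the bound k is the assumption doing all the work here):

# Counting sort: O(n + k), where k is the largest possible value.
# Only valid under the assumption that inputs are bounded integers.
def counting_sort(nums, k):
    counts = [0] * (k + 1)
    for x in nums:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([3, 1, 4, 1, 5], k=5))  # [1, 1, 3, 4, 5]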

What about sleep sort?

Slightly more correct, but my algorithm didn't make assumptions. It would work on the same datasets as the traditional algorithms.

Big O notation only describes asymptotic behavior for arbitrarily large values of n.

For embedded systems that need to sort a relatively small number of items and need to be very efficient about it, there are algorithms that perform better. An O(n^2) algo with a small constant term may beat an O(n log n) one with a large constant term for small n.
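Insertion sort is the classic example: quadratic in the worst case, but with minimal per-element overhead and good cache behavior. A minimal sketch:

# Insertion sort: O(n^2) worst case, but the constant factor is tiny,
# so it often wins over O(n log n) algorithms for small arrays.
def insertion_sort(a):
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = x
    return a

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]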

Makes assumptions about data

Even ignoring the ineffectiveness of sleep sort, from a purely theoretical point of view:
1) Sleep sort requires a mapping from your values to integers that preserves the order - an additional assumption.
2) Sleep sort's complexity is O(value of largest element); it's independent of the amount of data.

Not possible. Comparison sorting means the only operation you're allowed to perform on your data is comparison.

Well, it is possible, because I had somebody implement it. It wasn't particularly memory-efficient, but it worked in O(n) and didn't make any assumptions about the data.

That's the reason good sorting algorithms are usually hybrids, like qsort.

This (second) one. There are all kinds of factors that decide which algorithm is best in a given situation, starting with your data, which might already be mostly sorted, up to memory issues like cache misses and whatnot.
It shouldn't come as a surprise that modern sorting algorithms are usually some kind of hybrid that switches to insertion sort at some point. Wait, isn't that basically timsort?
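Pretty much: timsort is merge sort plus insertion sort on small runs. A toy hybrid in the same spirit, using quicksort instead of merge sort (the cutoff of 16 is an arbitrary choice for this sketch, not a tuned value):

CUTOFF = 16  # arbitrary small-array threshold for this sketch

def small_sort(a):
    # insertion sort: cheap on tiny arrays
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a):
    # quicksort on large inputs, insertion sort below the cutoff,
    # the same trick hybrids like introsort and timsort rely on
    if len(a) <= CUTOFF:
        return small_sort(a)
    pivot = a[len(a) // 2]
    return (hybrid_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + hybrid_sort([x for x in a if x > pivot]))

print(hybrid_sort([5, 3, 8, 1, 9, 2] * 5))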

The fuck am I reading?

sleep sort, histogram sort, and other fringe sorting algorithms can't "beat" traditional n log n runtimes because they aren't bounded by the same metric (data size)

>didn't make any assumptions about the data.
It assumed that the data could be readily converted or hashed to a number. A true "comparison sort" can work on data that has no numerical representation, only a "is this less than that" function.
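Python's sorted(), for instance, will order items through nothing but such a function; a minimal sketch where nothing is ever mapped to a number:

from functools import cmp_to_key

# A pure "is this less than that" comparison; the items are never
# converted to numbers, which is all a comparison sort may rely on.
def compare(a, b):
    if a < b:
        return -1
    if a > b:
        return 1
    return 0

words = ["pear", "apple", "banana"]
print(sorted(words, key=cmp_to_key(compare)))  # ['apple', 'banana', 'pear']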

No, and there is no need for a new one.

Focus on something useful, like cracking SHA-256.

>Focus on something useful, like cracking SHA-256.
This. And there's always boobie simulation, which hasn't been solved yet either.

>Well, it is possible, because I had somebody implement it.
No, it's not possible, and I call bullshit on your claims. Post algorithms or it didn't happen.

I just thought about it today but didn't find anything new. Still funny to me that you posted this.

Attached: 1521271784590.jpg (600x1147, 66K)

i understand everything perfectly now

but why even give a fuck about the sorting speed if your data set is small as fuck anyway

The individual data sets might be small, but you could have a lot of them.

i have a perfect sleepsort algorithm that works on any comparison-sortable data:

first, quicksort the entire set of data
then assign a hash to each item based on its index in the sorted list
then sleepsort based on that hash
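a sketch of that, for completeness (sorted() standing in for the quicksort; all the real work happens in step one):

import threading
import time

def rank_sleepsort(items):
    # step 1: a real comparison sort does all the actual work
    ranked = sorted(items)
    # steps 2-3: "hash" each item as its rank, then sleepsort on the rank;
    # the delays are just the indices, so the output order comes out
    # right (scheduler jitter permitting)
    def emit(rank, item):
        time.sleep(rank * 0.01)
        print(item)
    threads = [threading.Thread(target=emit, args=(i, x))
               for i, x in enumerate(ranked)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

rank_sleepsort(["pear", "apple", "banana"])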

Sure. Convert the numbers to strings, merge sort by length, and keep an array of the bounds where the length increases. Convert back to integers. Then run merge sort on each subarray.
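Roughly like this, assuming non-negative integers (the digit-count trick breaks on negatives), with sorted() standing in for both merge sorts:

def length_bucket_sort(nums):
    # fewer decimal digits always means a smaller non-negative integer
    strs = sorted((str(n) for n in nums), key=len)  # "merge sort" by length
    out, i = [], 0
    while i < len(strs):
        j = i
        # find the bound where the digit count increases
        while j < len(strs) and len(strs[j]) == len(strs[i]):
            j += 1
        # sort each equal-length subarray and convert back to integers
        out.extend(sorted(int(s) for s in strs[i:j]))
        i = j
    return out

print(length_bucket_sort([100, 5, 42, 7, 999, 13]))  # [5, 7, 13, 42, 100, 999]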

Pretty cool algo to look at on a larger scale.

Nice O(n^2) algorithm.

sleepsort on already-sorted data is only O(n), so it should still be O(n log n) total

import time
import threading

def sleep_sort(nums):
    # one thread per value: each sleeps for its value, then prints,
    # so smaller numbers wake up (and print) first
    def print_num(num):
        time.sleep(num)
        print(num)
    for num in nums:
        threading.Thread(target=print_num, args=(num,)).start()

Quicksort is an O(n^2) algorithm in the worst case. Also, why would you do any more sorting after already having sorted the data?

>trying to reinvent the wheel
>2001+17
this thread needs to be merged with SQT

Attached: ISHYGDDT.jpg (4688x4688, 575K)