What's the complexity of max(len(x) for x in y)?

Attached: 1515452482374.gif (540x304, 1.44M)

depends on how max is implemented. can be logarithmic if it uses binary search and y is presorted, or linear if it just walks the whole array

I don't see when it would be logarithmic because a binary search is never a valid strategy. If it's sorted by len(), then looking up the max would be getting the first or last element, which is constant-time. If it's not sorted by len(), it's linear because you need to scan them all.
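A quick sketch of that point, with hypothetical data: if y happens to be sorted by length, the maximum length is just the last element's length, so no binary search is ever needed.

```python
# Hypothetical example: y sorted by len(), shortest to longest.
y = [[1], [1, 2], [1, 2, 3, 4]]

# Linear scan: always works, O(len(y)).
scan_max = max(len(x) for x in y)

# If y is known to be sorted by length, the max is at the end: O(1).
end_max = len(y[-1])

assert scan_max == end_max == 4
```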

Attached: gyate_marisa_angry.png (420x248, 60K)

If y is an array of arrays, then it should be O(n)
But this is python so who the fuck knows what y could be

WTF??! That image is so racist, it's not funny!

>a binary search is never a valid strategy
(not true, by the way)

why

It's never a valid strategy for max(). The maximum will always be at one end of a sorted structure, never in the middle.

O(lenx*leny), or O(lenx^2) if lenx is equal to leny, which is commonly written as O(n^2)

Getting the length of a python list is O(1). Python's "lists" are arrays, not linked lists.
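To illustrate (a sketch with a hypothetical counting wrapper): len() on a Python list reads the stored size field, it never traverses the elements, which is why each len(x) is O(1).

```python
# Hypothetical helper: a list subclass that counts iteration attempts.
class CountingList(list):
    def __init__(self, *args):
        super().__init__(*args)
        self.iterations = 0

    def __iter__(self):
        self.iterations += 1
        return super().__iter__()

xs = CountingList(range(1_000_000))
n = len(xs)                 # reads the stored size, no traversal
assert n == 1_000_000
assert xs.iterations == 0   # the list was never iterated
```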

this is absolutely retarded but nice quads
if you take measurements, say an absorbance spectrum, then the response (absorbance) will be sorted in order of frequency, i.e. from low to high. Then the maximum is usually a peak somewhere in the middle of the data.

who the fuck knows, its python so its going to be O(slow)

In that case you cannot use binary search. Binary search only works on sorted lists, where the elements are sorted according to the value you want to maximize.

In the example you describe, you can't use a binary search to find the maximum, because the list is not sorted by length. You must examine every element, and if you try to get fancy you risk finding a local maximum instead of the global one.
I'm afraid you are the one who is retarded, user.

Attached: meteor.gif (560x315, 594K)

yes i was just commenting on you saying the maximum is always at the endpoints of a sorted structure. sorting by the independent variable is extremely common in experiments because that's how instruments work.

>a binary search is never a valid strategy
That was the original point though.

O(n), in 99% of x's and y's

lol how the fuck should i know?
sheeet white boys be crazy

max 2 genders

This looks like Python. Assuming the most common case, each x is a list and y is a list of lists. Each len(x) is O(1) time, the for loop over y is O(n) time, and max over the new implicitly created list of length n is O(n) time. So I would say overall it should be O(n) complexity.

However, it's python. It's going to be slow. Especially with the implicit creation of a second array coming from the for loop list comprehension.
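For anyone following along, here's a long-hand equivalent of the expression (hypothetical data), which makes the O(n) breakdown explicit:

```python
# Equivalent long-hand version of max(len(x) for x in y),
# assuming y is a list of lists (hypothetical data).
y = [[0] * 3, [0] * 7, [0] * 5]

best = None
for x in y:          # n iterations, where n == len(y)
    length = len(x)  # O(1) per element: Python lists store their size
    if best is None or length > best:
        best = length
# Overall: one pass, O(n) comparisons.
assert best == max(len(x) for x in y) == 7
```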

O(n), linear over the number of elements in y

cringe level stupidity here

You didn't specify a comparator for max() so how should we know
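For the record (small hypothetical example): no comparator is needed here, since max() compares the generated ints with plain <. A key function only matters if you want the longest list itself rather than its length:

```python
y = [[1, 2], [1, 2, 3], [1]]

# max() over the generator compares plain ints, no key needed.
assert max(len(x) for x in y) == 3

# Passing key=len instead returns the longest element itself.
assert max(y, key=len) == [1, 2, 3]
```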

>implicit creation of a second array coming from the for loop list comprehension
That's a generator expression (no square brackets). It evaluates directly to an iterator that produces the items in sequence, without building the intermediate array.
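A minimal demonstration of the difference (hypothetical data): the parenthesized form yields a lazy generator, while square brackets build a real list first.

```python
import types

y = [[1], [1, 2]]

gen = (len(x) for x in y)    # generator expression: lazy, no list built
lst = [len(x) for x in y]    # list comprehension: materializes a list

assert isinstance(gen, types.GeneratorType)
assert lst == [1, 2]
assert max(gen) == 2         # max() consumes the generator item by item
```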

You're a goddamn fucking idiot.

This thread just shows how high-level languages make it extremely hard to understand the complexity of even basic programs.

if a list is already sorted just check the two endpoints lol

Assuming:
>you're using Python
>x is of type List[T]
>y is of type List[List[T]]

The complexity is O(len(y)), since len() is an O(1) operation on Python lists.