I have a bachelor's degree in Computer Science and I still have no idea what O(n) means

I have a bachelor's degree in Computer Science and I still have no idea what O(n) means.
Can someone explain this shit?

Attached: help.jpg (200x313, 10K)

Other urls found in this thread:

youtube.com/watch?v=6LHpGmSSATI
rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
en.wikipedia.org/wiki/Limit_superior_and_limit_inferior

It means time complexity grows linearly with input size

It's a reference to the phrase, "if there's a problem, try turning it off and back O(n) again."

In layman's terms, it's a way of describing how the length of time it takes an algorithm to finish changes as the input grows larger. Some calculus concepts help a lot in understanding this.
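For instance, a toy step counter in C (my own example, not from anyone in the thread): the count comes out exactly n, so the work grows linearly with the input.

#include <stdio.h>

/* One unit of work per element: a linear scan takes exactly n steps. */
long steps_for_linear_scan(long n) {
    long steps = 0;
    for (long i = 0; i < n; i++)
        steps++;
    return steps;
}

int main(void) {
    /* Grow the input 10x and the step count grows 10x: that's O(n). */
    for (long n = 10; n <= 10000; n *= 10)
        printf("n = %5ld -> %5ld steps\n", n, steps_for_linear_scan(n));
    return 0;
}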

is yandex broken?

>I have a bachelor's degree in Computer Science
I am so sorry.
>and I still have no idea what O(n) means.
This is normal for CS grads, you're in good company.

f(n) is in O(g(n)) if there exist positive constants c and k such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k.
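To make those constants concrete (numbers mine): take f(n) = 3n + 5 and g(n) = n. For all n ≥ 5 we have 3n + 5 ≤ 3n + n = 4n, so the definition holds with c = 4 and k = 5, and f(n) is in O(n).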

it's a monoid in the category of endofunctors

Attached: bigo.png (651x482, 14K)

Make a plot of your algorithm's run time as you increase the size of the input. The x-axis is the size of the input, the y-axis is time. The shape of the function is the complexity.
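A rough sketch of how you could collect those points in C (my own toy code; clock() is coarse, so treat it as an illustration, not a real benchmark):

#include <stdio.h>
#include <time.h>

/* Some O(n) work: sum the integers below n.
   volatile keeps the compiler from deleting the loop. */
void do_work(long n) {
    volatile unsigned long sum = 0;
    for (long i = 0; i < n; i++)
        sum += (unsigned long)i;
}

int main(void) {
    /* Print (input size, seconds) pairs you can plot directly. */
    for (long n = 1000000; n <= 64000000; n *= 2) {
        clock_t t0 = clock();
        do_work(n);
        printf("%ld\t%.3f\n", n, (double)(clock() - t0) / CLOCKS_PER_SEC);
    }
    return 0;
}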

What kind of degree mill did you go to you pajeet nigger tier mild mental retardation kike bitch pussy? How fucking brainlet are you??

Attached: 1538697807559.png (960x960, 551K)

Shut the fuck up

This is what happens when you don't teach calculus to CS majors

lim sup f(n)/g(n) < ∞

Found the burnout math major

>t. Code bootcamp pajeet codemokey

youtube.com/watch?v=6LHpGmSSATI

How the fuck do you have a degree and not know big O?

Big O is always explained so vaguely. I don't know why, but definitions like the ones above just don't give me enough of what it truly is to chew on. I can't put words to the feeling. I've had it defined for me many times, many ways, and I have used it. I just do not feel comfortable at all with what it means or what it is fundamentally doing.

>Big O is always explained so vaguely.
This nigga here gave you the exact mathematical definition; it doesn't get any less vague than that.

It means you wasted 3 years of your life, plus possibly multiple thousands of dollars.

it gets as big as it gets big

O(1) is instantly accessing the answer, like doing dict[1] or something to get the value at that key.
O(log n) is splitting the problem in half at each step, like binary search, where you recursively jump to the middle of the remaining range each time (sketched below).
O(n) is when you have a list of size n and walk the whole thing: if it has 20 elements it takes 20 steps, if it has 50 it takes 50.
O(n log n) is the typical one you'll get for a sorting method like quicksort: a bit more than one full pass through the data, but about as good as comparison sorting gets.

O(n²) is nested for loops: a loop inside another loop over the same input.

Anything else is basically "oh shit, wtf is going on": usually recursive solutions that call two more recursive solutions inside themselves, or some other kind of thing that just grows out of control the more you add to it. It grows by like 4 times, then 4 times again for each of those, and you end up with thousands of calls.
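To make the O(log n) case concrete, here's a minimal binary search sketch in C (my own toy code, assuming a sorted array): each iteration halves the remaining range, so it takes about log2(n) steps.

#include <stdio.h>

/* Returns the index of key in sorted array a[0..n-1], or -1.
   Each iteration halves the search range: O(log n). */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(a, 6, 7));   /* prints 3 */
    return 0;
}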

It means that the function is bounded above by a constant multiple of the function inside the O, for all large enough n.

f(n) in O(n) means

f(n) ≤ C·n for all n > N, for some constant C > 0 and some N.

It is used by shills of crappy slow languages like Python, who are too dumb for C, to defend their slowness by arbitrarily deciding that the constants don't matter.

>I have a bachelor's degree in Computer Science and I still have no idea what O(n) means.
I was thinking about going to college, but now I'm reconsidering

Here's an attempt at an informal definition.

It's a measure of how runtime grows with increasing input. Suppose you have two algorithms, A in O(n) and B in O(n^2). If you double n, algorithm A performs 2n operations, twice as many. Algorithm B performs (2n)^2 = 4n^2, four times as many. As the input grows, the number of operations performed by B grows much faster than A's. A is the more efficient algorithm, because its runtime scales better with input size.

A more concrete example.

Consider a single for loop versus two nested for-loops, with some constant expression in the innermost loop:

for (i = 0; i < n; i++)
    count++;                       /* constant work, done n times: O(n) */

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        count++;                   /* constant work, done n*n times: O(n^2) */

The inner loop variable should be j, not i; reuse i there and it never terminates.

>CSlets in charge of understanding basic fucking notation.

O(n): order of n

Say I have a function f(x) that is approximated by some other function g(x), with an error that is ~x^2. Then f(x) = g(x) + O(x^2): f(x) is equal to g(x) with an error on the order of magnitude of x^2. If x = 10, then the error will be in the range 100-999.

Similarly, if you have a program that runs through a loop N times in ~10-99 seconds, and 10N times in ~100-999 seconds, then this program runs in O(N), i.e. the time it takes for the program to run grows linearly.

rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

HURRRR

DURRRRRRRR

>hurr-durr calculus
O-notation was taught to me in a calculus class. You don't need limits to define O-notation.

>lim sup f(n)/g(n) < ∞
>lim sup
you are retarded

If OP couldn't understand it with this, then he's a dumbass.

Fucking kek

There's still some hocus pocus as to why it's O(n) and not O(2n). Both satisfy the criteria but I guess one is more aesthetic.

nice bait

Basically "for sufficiently large values of n" it's approximated
E.g. if f(n)=10,000n and g(n)=n^2, then f(n) is in O(g(n)) because for "sufficiently large n" - in this case n>=10,000 actually - g(n) will always be greater than f(n).

Also O, Θ, Ω, and the others define sets. So saying "equal" is technically wrong, it should be "element of", but everyone still writes f(n)=O(meme) anyway.
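If you want to see that 10,000n vs n^2 crossover actually happen, here's a throwaway C snippet (mine, not from the thread):

#include <stdio.h>

int main(void) {
    /* f(n) = 10000n vs g(n) = n^2: g overtakes f once n >= 10000 */
    long long ns[] = {100, 1000, 10000, 100000};
    for (int i = 0; i < 4; i++) {
        long long n = ns[i];
        printf("n = %6lld   f(n) = %12lld   g(n) = %12lld\n",
               n, 10000 * n, n * n);
    }
    return 0;
}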

your degree should be revoked and your college should be closed down.

O(n) = O(2n). So yeah, you just choose the simpler one, like you write 1 instead of 15/15.

This

That's wrong. Saying something is equal to O(n) is an abuse of notation as pointed out. O(n) is really a set. O(2n) is in O(n) so we drop the constants. That's a hand-wavy explanation, but yeah.

CS brainlets can't handle basic math

But O(n)=O(2n), sets can be equal you know. Of all places to use equals when talking about O, this is the correct one.

Serious question, how do you manage to live your daily life while being this dumb? Do you need help tying your shoelaces in the morning or do you just use velcro?

Attached: brainlet.jpg (644x500, 39K)

I understand, but the definition still doesn't work for me. It still seems vague.

Like, these definitions are so vague. I apparently know this notation: I can define it in many ways and use it correctly. But I don't feel like I know it at all? What the fuck is actually going on with something like O(n)? I feel like whenever the notation is defined, we just talk around it.

It gives you an idea of how the number of operations scales with the number of input elements. For O(n), given n elements an algorithm performs on the order of n operations; with O(n^2) it's on the order of n^2 operations for n elements, etc. The constants in front of n are dropped, since we are looking at behavior as n → ∞. In practical terms it only tells you how a function scales as you change the amount of input; comparing different functions with it will often lead to nasty surprises at small inputs (an O(n) algorithm doing 1000n operations loses to an O(n^2) one doing n^2 operations for every n below 1000).

O defines a set of functions that satisfy a particular property. The notation O can itself be thought of as a mapping from a function to a set of functions: O(g(n)) = {all f(n) s.t. f(n) satisfies a certain property relative to g(n)}. Mathematically, that's what O is.
That "property" is what others have already defined rigorously; informally, it can be approximated as "g(n), when multiplied by some constant number, bounds f(n) from above, for sufficiently large values of n". But O itself is a mapping from functions to sets of functions, nothing fancier. So 2n + 7 is an element of O(n), while n^2 is not.

If you had seriously studied math during your undergrad (assuming you didn't go to some code monkey school), you would have no trouble understanding. Anyway, you don't need to know this to write JavaScript.

Math is primarily what I studied. I can't code at all.

I knew a kid with a 76 IQ when I was in HS. He was so fucking dumb we called him "chimp", because that's the IQ of a chimpanzee. He was pretty good at sports, though.

O(1) does not mean instantly, you fucking idiot. It means the time it takes to finish an operation doesn't change with how many inputs it has.

Big O show time for compute. Some math make number big, some math make number small. Math go in big O function. If when n get bigger math make it even bigger, algorithm is slow. If when n get bigger math make it not quite as big, algorithm is fast.

He's correct, you room temperature IQ retard.

I have a degree in English Literature and I know what big O notation is. Thanks for making me feel like I'm learning.

>t. Russian election influencer

>he doesn't know

kek, some shit tier calculus course you took

>en.wikipedia.org/wiki/Limit_superior_and_limit_inferior

t. seething samefag

Don't you learn this in your first-year intro to coding class?

How is that any better?
I can't find any text that explains it well, despite passing the class.

Install Gentoo

Wtf, this is something you learn first year in your first programming class. Or at least the basics. How did you pass your Data Structures and Algorithms class?

Fuck that shit.

I know big O. But I don't *know* it. You know? I can work with it just fine, but every time I hear any definition for it I'm left unsatisfied, thinking "that's shitty".

>doesn't know what lim sup is
>calls other people stupid

yikes

Ok /g/, LISTEN UP

Big O is easy but what about NP-Completeness?
I got a 10 in Data Structures and Algorithms and I still didn't understand that shit.

>the length of time it takes an algorithm to finish changes
Not exactly length of time, as that depends on your processing power. More like "how many instructions will be executed by a given algorithm".

I have a bachelor's degree in computer science and idk either LOL

>samefagging pajeet