How do I learn data structures? What even is big O notation

Attached: DZWF88DWkAAB2gw.jpg (577x1024, 101K)

Attached: bigo.jpg (1941x2817, 1.86M)

Big O is when your asshole is fully gaping. There's also a small o, which is the real killer, because small o stands for an anus that can never really close all the way. So by keeping to the Big O, you only gape when you want to. Watch out, though: too much Big O will eventually lead to small o. Small o is a silent killer.

In simple terms, it's the growth rate of your algorithm. It tells you how efficient your algorithm is.

>get asked to implement doubly linked list
>can't figure it out
>drop class
>hide in math classes because that's more intuitive

If someone could explain the implementation in clear pseudo code I'd really appreciate it

>the absolute state of american education
Meanwhile, on our half-term exam we had to write an implementation of a tango tree, on paper, in sepples.
Why live? Tango trees aren't even used in academia, let alone in industry. At this point uni seems to be just about throwing truckloads of sticks under your feet and seeing if you fall.

Attached: chern.png (456x480, 64K)

Imagine it like a building with escalators. Each floor has an escalator that leads up and down, except the first and the last one. To get to the top you take the first escalator (the root), then keep taking the next ones until there is no escalator that leads up.

Big O notation is the worst-case efficiency of an algorithm.
O(1) means the time taken to perform an algorithm on any given data set is always 1. This is impossible.
O(n) means that the time taken is linear and proportional to the amount of data given.
O(n^2) means that the time taken is the square of the amount of data given.
It goes on from there to use logs, exponents, etc. Basically, to find the time taken you substitute n with the amount of data given.

>O(1) is impossible
>what is a lookup table
>what are algorithms that don't read all the data
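
For instance, array indexing and hash lookups do a fixed amount of work no matter how big the data is. A minimal Java sketch (names made up):

import java.util.HashMap;
import java.util.Map;

class ConstantTime {
    public static void main(String[] args) {
        // Array indexing: one address computation, regardless of array size.
        int[] table = new int[1_000_000];
        int x = table[42];

        // Hash lookup: expected O(1) as well.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        System.out.println(x + ", " + ages.get("alice"));
    }
}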

Each element has a pointer/reference to the elements before and after it. You are seriously stupid if you dropped out because of that.

I dropped out into math over it.
My professor said the same shit you guys did. But what does it actually MEAN? How does that help me build it?

>O(1) means the time taken to perform an algorithm on any given data set is always 1. This is impossible.
just drop out

Attached: 1494126852835.jpg (1280x720, 75K)

So this is a practical example of a linked list in a game, Unreal Tournament. The inventory of each character (his guns) is stored as a linked list. It's pretty simple: the Inventory class (which implements a linked list) has a variable UTInventory* NextInventory.
Whenever a gun is added to a character, an iterator goes through each pointer. If the next inventory is a nullptr, then it's the last one in the list, and the new item gets attached to it.
Here's the source code for the iteration. They overload the ++ to find the next inventory item:

pastebin.com/JC9zvj51
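
Roughly the same idea as a Java-flavored sketch. This is NOT the actual UT source (that's C++ with an overloaded operator++); the names here are made up:

class Inventory {
    Inventory nextInventory; // null means this is the last item
}

class Character {
    Inventory firstInventory;

    // Walk the chain until nextInventory is null, then hang the new item there.
    void addInventory(Inventory item) {
        if (firstInventory == null) { firstInventory = item; return; }
        Inventory cur = firstInventory;
        while (cur.nextInventory != null) cur = cur.nextInventory;
        cur.nextInventory = item;
    }
}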

Linked lists are super simple

Also, I'm drunk, but I'll try to answer any questions if I can. Data structures are (in general) pretty simple, but they're taught in a very abstract way in college, so people don't get why they're useful. It's a failure of academia: detached from reality and unable to give applied examples. Most professors suck big time.

I want a job. What are the big data structures I need to learn? Please explain. Sorry.

Absolutely retarded

Attached: 98AE7A16-3805-45B8-966D-39A35B11140B.gif (500x281, 1.01M)

IT'S SHOWTIME

Data structures are only part of it for big data; make sure you know your databases well. SQL, NoSQL, you name it. There's been some recent work (by MIT, I believe) on categorical databases; might be the next thing for big data, but it's a fairly new field.

Big data is about indexing, querying, and loading data into memory, but also about writing to the databases. So yeah, DBs are the basic stuff you have to know.

As for data structures, well, first let's start with the algorithms. Cache replacement policies, also called block-replacement algorithms, are a must: they're how you maintain a cache of information in memory.
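
A minimal sketch of one such policy, LRU, leaning on Java's LinkedHashMap (the capacity and names are just for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache: LinkedHashMap in access-order mode evicts
// the least-recently-used entry once we go over capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entries to the back
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}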

Now for the data structures. Trees: learn your trees, learn how to traverse them, invert them, etc. Log-structured merge-trees are very important for big data. Also bloom filters. They are used to test whether an element is in a set, but way faster than traditional methods. The thing is, bloom filters are probabilistic data structures, meaning you don't get a 1 or 0; you get "maybe in your database" or "definitely not in your database". So you lose a bit of certainty, but you gain speed, which is what you want in big data.

I would start with those: master databases, trees, bloom filters, and cache replacement policies.
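
To make the bloom filter idea concrete, here's a toy sketch (the hashing is deliberately naive; real implementations use proper hashes like murmur):

import java.util.BitSet;

// Toy Bloom filter: k "hash functions" simulated by salting hashCode.
class BloomFilter {
    private final BitSet bits;
    private final int size;   // number of bits
    private final int hashes; // number of hash functions

    BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    private int hash(String item, int salt) {
        return Math.floorMod(item.hashCode() * 31 + salt * 0x9E3779B9, size);
    }

    void add(String item) {
        for (int i = 0; i < hashes; i++) bits.set(hash(item, i));
    }

    // false = definitely not in the set; true = maybe in the set
    boolean mightContain(String item) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(hash(item, i))) return false;
        return true;
    }
}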

It helps you build it because knowing the syntax and knowing how it works is all you need to know to build it. How much of a brainlet can you be?
class Node {
    Node next = null;
    Node previous = null;
}

class LinkedList {
    Node root = null; // first element, null when the list is empty
    Node last = null; // last element

    // Append n at the end, wiring pointers in both directions.
    public void add(Node n) {
        if (root == null) {    // empty list: n is both first and last
            root = n;
            last = n;
        } else {
            last.next = n;     // old tail points forward to n
            n.previous = last; // n points back to the old tail
            last = n;          // n is the new tail
        }
    }
}
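
And a quick usage sketch to show what the previous pointers buy you (class names from the snippet above):

class Demo {
    public static void main(String[] args) {
        LinkedList list = new LinkedList();
        list.add(new Node());
        list.add(new Node());
        list.add(new Node());
        // The previous pointers make it *doubly* linked:
        // you can walk the list tail-to-head just as easily.
        int count = 0;
        for (Node cur = list.last; cur != null; cur = cur.previous) count++;
        System.out.println(count); // prints 3
    }
}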

It feels insurmountable. I can't even program, man

Have you ever played snake?
Think of a linked list like the snake in the game.
The snake has a head that always moves; if it doesn't move onto a treat, the tail gets removed.
The body of the snake is just links between the head and the tail.

It's not like people were born knowing this. They put months to many years of effort into learning it, plus programming, plus sysadmin skills on Linux, plus basics like being able to touch type quickly, and more.

but how do i know what my algorithm's O is?

>O(1) means the time taken to perform an algorithm on any given data set is always 1. This is impossible.
No, this means the algorithm takes a *constant amount* of time/memory to run, no matter how much data you cram into it.

This doesn't mean that amount will be small, because of how big-O analysis works: O(20n) == O(n+10) == O(n). An O(1) algorithm may actually be *slower* than an O(n^2) solution for small enough n. Big O measures scalability, not abstract 'goodness'.

To make a very contrived example, you want to find a substring in a text file. You actually found the solution at home already and jotted down the position, but you forgot to take the note with you. You could grab a taxi and go get it, but it will take you two hours. So, this is an O(1) solution - no matter if it was an SMS or War and Peace you were searching in, it's 2hrs to get the result.

Or you could run a dumb O(n^2) string compare: for a 300-char SMS, the algorithm will find the match in, say, a minute, 120x faster than the O(1) solution. Once you try to run it on a full-length book, it'd take, say, 4 hours, and so would be worse than taking the taxi. But if you knew you'd only ever operate on SMSes, using the O(n^2) approach would be *more* efficient.
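
Same point in code, as a hedged sketch (the crossover size varies; measure before believing any of this):

import java.util.HashMap;
import java.util.Map;

class ConstantFactors {
    // O(n) linear scan: tiny constant factors, no hashing overhead.
    static int scan(int[] keys, int[] vals, int key) {
        for (int i = 0; i < keys.length; i++)
            if (keys[i] == key) return vals[i];
        return -1;
    }

    public static void main(String[] args) {
        int[] keys = {3, 1, 4}, vals = {30, 10, 40};
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < keys.length; i++) map.put(keys[i], vals[i]);

        // Same answer either way. Big O only tells you which wins as n grows;
        // at n = 3 the "worse" scan is often faster in practice.
        System.out.println(scan(keys, vals, 4) + " " + map.get(4));
    }
}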

Big O notation is just an asymptotic upper bound on the number of steps an algorithm will take. Stated mathematically: there exist positive constants c and n0 such that f(n) ≤ c * g(n) for all n ≥ n0, where f(n) is your real number of steps and g(n) is your asymptotic upper bound. Basically, it's just saying that there's a function that, within a constant multiple, is always greater than the actual number of steps your algorithm takes. So, the simplest case is two nested for loops that go from 1 to n. Clearly this will take n^2 loop iterations to complete, so the big O is O(n^2): you can find some constant c such that c * n^2 is always at least the actual iteration count.
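
The nested-loop case, as a runnable sketch:

class Quadratic {
    public static void main(String[] args) {
        int n = 1000;
        long steps = 0;
        // Two nested loops from 1 to n: the body runs n * n times,
        // so f(n) = n^2 and the algorithm is O(n^2).
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                steps++;
        System.out.println(steps); // prints 1000000
    }
}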

Big O is related to algorithms and is completely irrelevant to data structures, dumbass

People are blasting some dipshit answers.
Big O describes how the time an algorithm needs to complete its task grows with the size of its input. It's an efficiency measurement.
Data structures are really an umbrella term for logically arranging data such that the data becomes more usable.
Binary search trees are an example of this, where the data is made more usable for a searching algorithm (sketch below).
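
For instance, a minimal search over a binary search tree (assumes the tree is already built):

class Bst {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Each comparison discards half the remaining tree, so search is
    // O(log n) on a balanced tree (O(n) worst case if it degenerates).
    static boolean contains(Node root, int key) {
        Node cur = root;
        while (cur != null) {
            if (key == cur.key) return true;
            cur = (key < cur.key) ? cur.left : cur.right;
        }
        return false;
    }
}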

Start by doing FizzBuzz, my man

Yeah, I'm learning Smalltalk with Squeak; couldn't find a working GNU Smalltalk for Windows.

>woman against being nice to non-Chads

WHAT A SURPRISE

It's literally just a collection of nodes where each one points to the next, the last one pointing to nothing (or back to the start, if the list is circular).

Attached: 10090ccc4321cad9161f8aa1ac76616e0fa2fe8e71d532183021437ab3bc192a.jpg (579x300, 47K)

>What even is big O notation
Big O, in short, is a method of describing the worst-case complexity of an algorithm. Complexity here refers to how much time your algorithm will use as its input parameters grow in size.

To go more in depth, Big O describes a set of functions that describe the complexity of an algorithm. To put this into perspective, let's consider an actual use case.

Say I've got some algorithm that sorts an array of integers. I take a few measurements of how many milliseconds it takes on my computer for various input sizes, and create a function mapping the input size n to the amount of time it takes. Here, I get something along the lines of 2n^2 + 18n + 7. On my friend's computer, it's twice that, because his computer is slower. Clearly, it is not in my best interest to use either of these functions to describe the time cost of my program, because they don't work everywhere.

But perhaps we're less interested in the exact amount of milliseconds the program takes, and more interested in how its running time grows as the input grows. In that case, the only relevant factor is the n^2 part; everything else pales in comparison as the input gets really big. This is what Big O is doing. It describes a set that contains all of those exact time functions and puts them under a common name. In proper terminology, if we had the functions f(n) = 2n^2 + 18n + 7 and g(n) = n^2, we would say that f(n) is a function in the set O(g(n)).

More formally, we can say that any arbitrary function f(n) is in the set O(g(n)) if and only if there are positive constants c and k such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ k. It's a bit weird to wrap your head around, but the basic effect is that you can rip out all of the constants and all of the lower order terms.
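
To see the definition in action with the numbers from above (a quick worked check, constants chosen by hand): for all n ≥ 1, we have 2n^2 + 18n + 7 ≤ 2n^2 + 18n^2 + 7n^2 = 27n^2. So c = 27 and k = 1 satisfy the definition, and f(n) = 2n^2 + 18n + 7 really is in O(n^2).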