Why did neural networks/machine learning become so popular all of a sudden...

Why did neural networks/machine learning become so popular all of a sudden? Why did it take so long for scientists to figure out that ML is the solution to most AI-related problems?

Attached: machine-learnin-machine-learning-everywhere-emegenerator-net-29035157[1].png (500x300, 79K)


There was a demand for new buzzwords. Also notice the strong correlation with soy consumption in the western world

Scientists figured it out a long time ago. We just didn't have computers good enough for training back then.

Because it fucking works and is effortlessly pushing SOTA on shittons of tasks

>Scientists figured out a long time ago.
Even Alan Turing understood this: a program as complex as the human brain cannot be coded directly
>We just didn't have good enough computers for training back then.
It was mainly GPU processing that made it possible to train big neural networks on a lot of data. The more data you train on the more unlikely overfitting gets
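The overfitting claim is easy to demonstrate on a toy problem. A minimal sketch (made-up synthetic data; a deliberately over-flexible polynomial stands in for a big network): the gap between train and test error collapses as the sample count grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_test_gap(n_train, degree=9):
    """Fit a deliberately over-flexible polynomial to noisy data and
    return (test error - train error): the overfitting gap."""
    x = rng.uniform(-1, 1, n_train)
    y = np.sin(3 * x) + rng.normal(0, 0.2, n_train)
    coeffs = np.polyfit(x, y, degree)          # 10 free parameters
    x_test = rng.uniform(-1, 1, 1000)
    y_test = np.sin(3 * x_test) + rng.normal(0, 0.2, 1000)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return test_err - train_err

gap_small = train_test_gap(15)    # few samples: the fit memorises noise
gap_large = train_test_gap(2000)  # many samples: train and test converge
```

With 15 points the degree-9 fit memorises the noise and generalises badly; with 2000 points the same model family barely overfits at all.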

>muhsheen lurning
also known as... estimation
it's not 'learning' anything

We will have socialism in 30 years because machine learning will replace all jobs.

This.

It's randomly guessing at variables until a slightly accurate answer appears.

how is that different from how human babies learn?

Make an LLC.
Hire like 4 app developers that are familiar with Azure.
Put Machine Learning, AI, and Blockchain into your public media.
Bathe in investment capital.

it's like "data science", aka high-school statistics on a Mac

Machine Learning is AI.

>The more data you train on the more unlikely overfitting gets
vanishing gradient

Investors don't know that

> All of the sudden
You mean the last 6-7 years?
>Why did it take so long for scientists to figure out
All the ideas were already there long before. We just got some breakthroughs that made neural networks train more efficiently, combined with improvements in hardware (GPUs). Then NNs were rebranded as deep learning and the hype train took it from there. To be fair, we have got some very impressive results, especially in computer vision.

Attached: 1524968752715.png (1080x1920, 1.09M)

Breakthroughs in recurrent networks around 2013. Also with using digital filters/convolutional networks. The rest is just better data.

But! /g/ said it's a meme technology!

>The more data you train on the more unlikely overfitting gets

not if your test data is drawn from a different distribution to the data used in production

implying we don't already have aspects of socialism. What are public roads, schools, hospitals, police services, fire services...

So was Marx right?

>rebranded
There's a significant difference. The difference between using a DNN and a generic NN is like the difference between using a computer and using a cheapo calculator.

Sure, *technically* it's nearly the same shit under the hood except, well, more of it, but you can also achieve a fuckload more with one vs. the other. Porn, most famously.

>>The more data you train on the more unlikely overfitting gets
>vanishing gradient
Vanishing gradients are a problem that arises with deep networks (the deeper, the worse) and are unrelated to training-data size. They come from numerical instability, alongside the sibling problem of exploding gradients.
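The mechanism is just the chain rule. A minimal sketch (a chain of scalar sigmoid "layers" with made-up unit weights, not any real network): backprop multiplies one local derivative per layer, and a sigmoid's derivative never exceeds 0.25, so the product shrinks geometrically with depth no matter how much data you have.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_factor(depth, z=1.0, w=1.0):
    """Magnitude of the gradient reaching the first layer of a chain of
    `depth` scalar sigmoid layers (toy model; all weights set to 1)."""
    grad = 1.0
    for _ in range(depth):
        s = sigmoid(z)
        grad *= w * s * (1.0 - s)   # chain rule: one factor per layer
    return grad

shallow = backprop_factor(3)   # still trainable
deep = backprop_factor(50)     # astronomically small: "vanished"
```

Exploding gradients are the same product with factors above 1 (large weights), which is why both problems show up together in deep nets.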

>So was Marx right?
Marx has always been right, at least in terms of analysis, and is likely right on the rest
>>The more data you train on the more unlikely overfitting gets
>not if your test data is drawn from a different distribution to the data used in production
If you sample your test data from your data, it quickly gets unlikely; there is no reason for a part taken at random to differ significantly from the rest, and with large numbers the probable error becomes really low
>implying we don't already have aspects of socialism. What are public roads, schools, hospitals, police services, fire services...
Implying that it's a bad thing.
Automation with a good democratic system would free people from bad work. You'd have to keep inequality from skyrocketing through the excessive power of machine owners: socialists rather than capitalist Luddites

because it's easy for idiots to use.

Why don't you go build a chatbot that contributes to the discussion? You should find it easy to use.

>Even starting from Alan Turing: a program as complex as the human brain cannot be coded directly
he could never have predicted how far computers would come. It's like the "640K ought to be enough" quote. I have no idea why people take what he said as gospel

nobody has gotten close to what a human baby can do

how to identify a retard who read too many hype articles

>implying we don't already have aspects of socialism. What are public roads, schools, hospitals, police services, fire services...

That is something socialists will say in an attempt to fool people. If public services are an "aspect of socialism", then every society ever, even those preceding the idea of socialism, had aspects of socialism. They are no more aspects of socialism than hair is an aspect of my ass.

Social policies aren't socialism.
Helping the poor isn't socialism, redistribution of the ownership of the means of production is.

The former is beneficial if you can afford it. The latter fails every single time it's tried.

Socialism doesn't claim to have invented these things. It just describes them.

Then how is it an "aspect of socialism"?

Alan Turing predicted AI to be a lot more powerful in his lifetime than it actually is today.
He thought it would be relatively easy to make a computer translate natural language perfectly, for example.

... a couple of minutes ago it was claiming to be them, so ...

>hurr durr there is only ONE definition of socialism!!
marxists.org/archive/marx/works/1848/communist-manifesto/ch03.htm

>literally arguing that the marxist strawman of socdems makes them socialists when it was created to deny them the label
How many levels of hypocrisy are you on m8?

>join the conversation to drop a link
>>hurr durr hypocrite
see, this is the level of paranoia and retardation one can expect from anti-communists
it was obvious even to Marx himself more than a hundred years ago:
> A spectre is haunting Europe — the spectre of communism. All the powers of old Europe have entered into a holy alliance to exorcise this spectre [...]

ML is boring. Modern ML is basically an optimization trick discovered in the 80s for training multiple layers, which became much more feasible thanks to advances in GPU hardware. All it really is is building big matrices that can solve certain problems with high accuracy but suck at everything else. Neural nets are the antithesis of general intelligence.

Attached: gpus.jpg (474x355, 15K)

>thinks he knows a flying fuck about political theory
>most basic bitch quotes from the manifesto
>probably hasn't read Das Kapital
>claims soclibs are socialist
Every time.

Come back when you understand your own ideas. You don't even qualify to ride in the helicopter.

Literally GPUs.
NN/machine learning is nothing but brute-force computing your way through a statistical graph.
ML isn't a solution for AI because it's not AI. It's a dirty hack that can run very quickly on highly parallelized hardware, giving the appearance, with big data, that it's AI. It's not AI. If you're fooled by this hot garbage, get your head checked.

I've literally been shitposting here as a break from describing my DL algo for my thesis.

Which, incidentally, is quite possibly *ahead* of the current state of the art in medical genetics.

>>claims soclibs are socialist
I didn't even say anything about that
well done, retard

^this guy gets it.
So do these guys :
Now, let me address the brainlets...
Give me 360 billion and I'll give you the answer. Don't ever expect to get a public answer on this. Along with this thinking, you can pretty much expect anything that is public or in a white paper to be nothing near such capability.
Pretty much what a bunch of poorfag PhDs did.
AI Winter 2.0. Way too much dumb money chasing dumb ideas.
I'm waiting to go forward until the next lot of Aeron chairs goes on sale.
Hacks upon hacks upon hacks to wring ever-decreasing value out of sacks upon sacks of shit.
That's why crashes happen and things get cleared out, along with whoever was dumb enough to be left holding the bag.
It is. Too bad most people never took a statistics or advanced mathematics course and thus have no clue what mathematical optimization is capable of.
> mfw brainlet who went to community college and never used gradient descent on TSP
> Significant difference.
It's literally just more layers w/ a hack so your damn GPU doesn't choke on the exponential increase.
stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network-and-w

> vanishing gradient
Brainlet-tier statistical brute forcing has its disadvantages

>machine learning AI that functions on the blockchain

sounds like top tier normie investment bait

Attached: Glow_eye_patrick.png (170x153, 34K)

>I'm waiting to go forward until the next lot of Aeron chairs goes on sale.
my fucking sides, over here it's haworth

>mfw i actually bought a shitcoin that claimed to do this
Its ticker is MAN if anyone wants to help pump my bags.

>It's literally just more layers w/ a hack
That is the *entire fucking point*. Deeper nets have a chance of actually learning a general solution, and that's just plain feedforward crap.

Writing ML code as a discipline is frustrating bullshit too. There are so many parameters to pick that people actually have to write meta-learning algorithms to pick the parameters they feed into the nets. How many layers do I need? What activation function do I use? Is it a CNN or an RNN? Do I have a bunch of exotic layers for other shit? How much do I have to downscale my images because I cannot process 1920x1080 inputs? What loss function do I need? How do I clean my data? How do I cross-validate? If you are processing text, good fucking god, there is a lot of extra shit you have to do, like word2vec or some other ungodly hack. You can pick all these parameters and still get something with worse accuracy than some old-school technique.
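For what it's worth, the "meta-learning algorithms" that pick those knobs are often nothing fancier than random search. A minimal sketch, with a made-up validation-loss function standing in for an actual training run (every name and constant here is invented for illustration):

```python
import random

random.seed(0)

# Stand-in for "train a net with these hyperparameters and measure
# held-out error". The optimum (lr=0.1, depth=4) is made up.
def val_loss(lr, depth):
    return (lr - 0.1) ** 2 + 0.01 * (depth - 4) ** 2

best = None
for _ in range(200):
    lr = 10 ** random.uniform(-4, 0)     # log-uniform: the usual trick for rates
    depth = random.randint(1, 10)
    loss = val_loss(lr, depth)
    if best is None or loss < best[0]:
        best = (loss, lr, depth)

best_loss, best_lr, best_depth = best
```

Sampling the learning rate log-uniformly is the standard move, since plausible values span several orders of magnitude; everything else is just "try configs, keep the best".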

No wonder Americans are obsessed with cleaning their teeth and having good breath

You mean, if you brute-force a problem space enough in layers of differential plumbing you'll eventually get something? Holy shit... Imagine the grand thinking that went into this.
> I KNOW DAWG... ADD MORE LAYERS TO THE NN... OHHHH CHIT IT WORKS
> IT WOOOOORKS
Absolute state of you lot
> MUH laundry list of activation functions
Literally hacking their way through, cowboy-coder style.

Literally one big cowboy-coder-tier approach to something nobody wanted to do more research on to find a more refined answer. PhD group upon group literally started poaching the intellectual work of dead scientists who can't sue or claim credit and began putting their names on it. Because they weren't the original inventors, they still have no clue how any of this shit works or what a clean way forward looks like, so they just slap shit together, see after a billion iterations if it's 10% better than the previous attempt, and get wall-to-wall applause at NIPS.

Absolute shit tier status of modern day academics and tech industry.

If it works it works.

> If mathematical optimization works it works
Welcome to 1847
en.wikipedia.org/wiki/Gradient_descent
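For anyone who hasn't seen the 1847 method in question: gradient descent is literally just "step downhill". A sketch on a one-dimensional quadratic, f(x) = (x - 3)^2, whose gradient is f'(x) = 2(x - 3); the function and constants are made up for illustration.

```python
def gradient_descent(x0, lr=0.1, steps=100):
    """Minimise f(x) = (x - 3)^2 by repeatedly stepping against the
    gradient f'(x) = 2 * (x - 3)."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * (x - 3)   # x <- x - lr * f'(x)
    return x

x_min = gradient_descent(x0=-10.0)   # converges toward the minimum at x = 3
```

Every deep-learning optimizer in this thread (SGD, momentum, Adam, ...) is an elaboration of exactly this loop, just over millions of parameters with noisy gradients.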

It's funny because it's true. The AI winter is on its way; what worries me is what happens next. We all go back to Shit-as-a-Service, does Blockchain have a future? Everything goes to shit and ML engineers become cyber-criminals? God knows

blockchain ain't really a technology; anyone who took a single cryptography subject at university will be able to understand it in a short time.

The mathematics for ML is pretty old; it's just that GPUs and other hardware caught up. Also, the size of data sets has exploded with online retail, social networks, etc.

cryptography and distributed systems. That being said, it has stuff like smart contracts that may give some businesses an edge... or maybe not.

What does that have to do with anything you fucking racist nazi bigot?

The question is how well it scales. Deep learning needs massive amounts of data. I know we have NoSQL and whatnot, but the question remains. Which company has actually gone anywhere near teaching robots how to dance?
meaningness.com/metablog/robots-that-dance

Nobody in this thread knows anything about machine learning, do any of you know what stochastic gradient descent with Nesterov momentum is?
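Since nobody bit: Nesterov momentum is ordinary momentum SGD except the gradient is evaluated at the look-ahead point w + μv rather than at w. A minimal numpy sketch, minus the "stochastic" part (full gradient on a made-up toy quadratic; all constants are for illustration only):

```python
import numpy as np

def nesterov_sgd(grad, w0, lr=0.1, momentum=0.9, steps=200):
    """Gradient descent with Nesterov momentum: the gradient is
    evaluated at the look-ahead point w + momentum * v, not at w."""
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w + momentum * v)   # look-ahead gradient
        v = momentum * v - lr * g    # velocity update
        w = w + v                    # parameter update
    return w

# Toy objective: f(w) = ||w - target||^2, gradient 2 * (w - target)
target = np.array([1.0, -2.0])
w_star = nesterov_sgd(lambda w: 2 * (w - target), w0=[0.0, 0.0])
```

In real training the gradient comes from a random minibatch instead of the full objective, which is where the "stochastic" in SGD comes from.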

I made this, thanks for reposting my app

There is a lot of unemployment in Africa because nobody needs 3 billion pool cleaners and taxi drivers. Even an economy based on robots will suffer from scaling problems. At the moment the most common robot we engage with is the automated supermarket checkout till. Once people realise there is a better way, those things will be useless. If machines build machines, then the machines that build the automated checkout tills will also be redundant. Machine learning is not a necessary component of life. It doesn't matter how much a machine improves itself; there will always come a time when it becomes redundant. Robotics and machine learning is not a problem looking for a solution. It is a solution looking for a problem. The biggest problem any machine capable of learning will face is how to keep itself relevant, non-redundant and economically viable, just like all the pool cleaners and taxi drivers in Africa

>What is data augmentation

it's expensive as fuck

What are you talking about, you just set it running on the CPU while running the actual neural network on the GPU. Have you actually trained a neural network?
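For the ">What is data augmentation" crowd: it's cheap CPU-side transformations that multiply your effective data set, typically pipelined while the GPU trains. A sketch of the usual CIFAR-style recipe (random horizontal flip plus padded random crop); the 32x32x3 shapes and 4-pixel padding are the common convention, and the batch here is made-up random data:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Random horizontal flip + random 32x32 crop from a reflect-padded
    40x40 image. Input shape (H, W, C) = (32, 32, 3), CIFAR-style."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]                       # horizontal flip
    padded = np.pad(image, ((4, 4), (4, 4), (0, 0)), mode="reflect")
    top = rng.integers(0, 9)                            # offsets 0..8
    left = rng.integers(0, 9)
    return padded[top:top + 32, left:left + 32, :]

batch = rng.random((8, 32, 32, 3))                      # fake image batch
augmented = np.stack([augment(img) for img in batch])
```

Since each epoch sees a different flip/crop of every image, the network effectively never trains on the exact same input twice, which is why augmentation is a cheap regularizer.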

You have no idea what you are talking about do you?

>taking machine learning graduate course
>realize it's actually founded on tons of linear algebra, ODEs, statistics & real analysis
>but then there are retards who do
import MachineLearning4Tards
MachineLearning4Tards.sixFigureSalary()

Deeper neural networks run into loads of problems, and solving each is tricky. Vanishing gradients were solved with residual learning, but that's just one; it's all about trying to converge on something that works in the real world (nobody wants a neural network that works in theory but doesn't classify new, unseen data).

That image you posted:
If you care, I trained it on the CIFAR-10 data set, basing the architecture on the paper ImageNet Classification with Deep Convolutional Neural Networks (Alex Krizhevsky, Ilya Sutskever & Geoffrey E. Hinton, 2012). I managed to get marginally better accuracy (I used some newer data-augmentation stuff) on new (not trained-with) validation images; it was a pain in the arse to train even on a GTX 1070.
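The "residual learning" fix mentioned above is just an identity skip connection: the block computes F(x) + x, so the gradient always has a clean path back through the +x term (He et al., 2015). A minimal numpy sketch with made-up sizes and weights (dense layers instead of convolutions, shared across "layers" for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    """Toy residual block: returns x + F(x), where F is a small
    ReLU MLP. The identity term keeps the signal (and gradient)
    from vanishing through a deep stack."""
    h = np.maximum(0.0, x @ w1)     # ReLU
    return x + h @ w2               # skip connection adds the input back

d = 16
x = rng.normal(size=(4, d))
# Near-zero weights: F(x) ~ 0, so each block is nearly the identity.
w1 = rng.normal(scale=1e-3, size=(d, d))
w2 = rng.normal(scale=1e-3, size=(d, d))

out = x
for _ in range(100):                # a 100-"layer" deep stack
    out = residual_block(out, w1, w2)
```

Run the same 100-layer stack without the `x +` term and the near-zero weights annihilate the signal entirely; with the skip, the input passes through essentially intact, which is why residual nets can be trained at depths where plain nets can't.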

> AI winter comes and cleans out the shit being attempted by companies and startups faking AI
> Legit AI comes and kicks Cloud/SaaS in the ass, putting an end to this long-overdue wave
> HW companies see a boon in new demand for higher-end compute to run the software
> Blockchain gets blown the fuck out

> Everything goes to shit and ML Engineers become cyber-criminals?
They better have saved up their hype pennies.
Most of them are going to be jobless.

I'm skeptical of AI's future altogether, fake AI will fail miserably regardless of whether real AI is legit, and who knows what happens to the software industry in the next 10-15 years.

Same thing that's always happened: scientists discover shit, software engineers make memes using said shit.

Attached: learnin.png (1228x612, 1.11M)

> ML is the solution to most of AI-related problems
Ahahahaha

>Socialism is a range of economic and social systems characterised by social ownership and democratic control of the means of production.
>Social ownership may refer to forms of public, collective or cooperative ownership, or to citizen ownership of equity.
>There are many varieties of socialism and there is no single definition encapsulating all of them, though social ownership is the common element shared by its various forms.
"Socialism" is a shitty buzzword that applies to everything short of actual anarcho-capitalism, stop using it.

explain your reasoning.

He has none

> definition of a fucked algorithm
JUST ADD terabytes of data and brute-force the fuck out of it
> mfw these retards will have an exascale data center mining data to tell retards of the future that women order tampons in the third week of the month because they're about to have their period
The fucking dumb world has gone mad...
Melting polar ice caps to arrive at understandings that are common sense, or things you could learn just by talking to people.

> mfw the future

Attached: just_let_the_gpu_do_it.png (1223x596, 1.19M)

> he bought into the AI meme
> mfw you didn't listen to everyone telling you it's nothing more than brute-force statistics
> mfw you actually thought cloud computing/big data companies were furthering something other than big data and search algos

I have a graduate degree in this shit. I don't need a primer on the brainlet-tier add-ons clowns come up with every week to resolve inherent flaws in a limited algorithm. Literally making it up as you go
> Muh residual feedback normalized quotient tangent re-activation congruent genetic re-recurrent deep parsing differential transient convergent sum hybrid network
Cowboy-coder-level hacks
> But Nvidia now lets me crunch 10 TFLOPS so it's OK
> Still thinks a stop sign with shit on it is a 45 mph sign
spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

Attached: muh_autism.gif (240x138, 160K)
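The street-sign failure in that article is the adversarial-example effect, and the core idea fits in a few lines: nudge the input in the direction that increases the loss and the prediction flips even though the input barely changed. A toy sketch on a made-up 3-feature linear classifier (not the paper's actual attack, which targets deep nets; and in 3 dimensions the perturbation is proportionally larger than the imperceptible pixel-level budgets used on real images):

```python
import numpy as np

# Made-up "trained" linear classifier: predict class 1 iff w @ x > 0.
w = np.array([1.0, -2.0, 0.5])      # fixed weights, invented for the demo
x = np.array([0.3, -0.4, 0.9])      # an input the model gets right (class 1)

def predict(x):
    return 1 if w @ x > 0 else 0

# FGSM-style step: move each feature by eps in the sign of the weight,
# i.e. the direction that most decreases the class-1 score.
eps = 0.5
x_adv = x - eps * np.sign(w)

clean_pred = predict(x)             # correct prediction on the clean input
adv_pred = predict(x_adv)           # flipped prediction on the perturbed input
```

For a linear score the worst-case L-infinity perturbation shifts the output by eps times the L1 norm of the weights, so high-dimensional inputs (like images) can be flipped with per-pixel changes far too small to see.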

You only have to train it once, mate; machine learning is as much about practical knowledge as research.

> fake AI will fail miserably regardless of real AI being legit
For sure
> I'm skeptical of AI's future altogether
Don't be. The real shit is coming
> what happens to the software industry in the next 10-15 years.
Outside of hardware? Joblessness.
They disrupted other sectors...
Soon they get disrupted

Attached: 1529538657425.gif (528x555, 816K)

kek
It's self-explanatory, brainlet

Attached: 1528259502669.jpg (802x842, 87K)

Wide Residual Networks (Sergey Zagoruyko & Nikos Komodakis, 2016).

The world is dynamic, brainlet.
I know it will be a revelation for you when it changes and you don't have a data set to plumb.

Also, you should understand you're talking to someone who knows all about ML. It's shit. The only brainlets who disagree are the ones using it to get rich before anyone finds out it's largely a sham. Have some integrity, or get your back broken when the disruption comes

Call me when computers can learn the difference between humans and gorillas without 1000000000 pictures

fucking kek.
Everyone invested in this crap deserves to be bankrupted

Attached: dat_width.jpg (750x428, 62K)

I'm just doing this for a shitty college course, I needed a project to do so I picked something vaguely interesting. After a year and a half I have something that can classify objects in images with decent accuracy, as far as I know an apple will still look like an apple for long enough for me to get the fuck out of this field.

>still no argument

reddit pls

>far as I know an apple will still look like an apple for long enough for me to get the fuck out of this field.
kek'd

> mfw your car literally drives you into a 'corner case'
> mfw you're literally living on a statistical edge

Attached: Its_ok_anon_were_adjusting_yourNN_weights.jpg (780x439, 82K)

I'm aware, which is why you think it's legit.
> After a year and a half I have something that can classify objects in images with decent accuracy, as far as I know an apple will still look like an apple
Oh sure :
spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms
> long enough for me to get the fuck out of this field.
Winter is just around the corner my friend.
Good luck

what's with this "AI winter" meme?

can someone explain?

>Adversarial attacks fool machine learning!
Yeah, no shit, egghead

He means the AI buzzword bubble will burst. Cars will still be driving themselves with lower death rates than humans no matter how angry he gets.

> stop sign gets dirty from weathering
> shit tier statistical match gives closest approximation to a 45mph sign
> car accelerates
> gets you T-boned
> Battery pack bursts into flames
> incinerates you
Brainlet meme-learning engineer claims nature wasn't being fair and was being adversarial. These are the kinds of idiots you're entrusting your life to

Attached: 1522012099140.png (1440x1557, 738K)

>Cars will still be driving themselves with lower death rates than humans no matter how angry he gets.
Hasn't happened yet lmao.

Don't buy a fucking self-driving car until they iron out (or fail to iron out) the kinks; what's the problem?

en.wikipedia.org/wiki/AI_winter
Google is your friend.
Essentially, all of the techniques in use now were developed some time ago and a hype wave grew around them. They over-promised and under-delivered, as they are doing right now. It resulted in a crash with no survivors. It was rebooted when mathematicians hijacked it and called statistical optimization AI. Although the hardware can finally compute, the algorithm is the same dumb shit from that time period.

The same thing happened in the 80s with neural networks, the hype train doesn't have any effect on the actual algorithms. If you think it has any bearing on it (outside of getting a job) then I feel sorry for you.

The last time it was expert systems and logic programming, not neural nets.