AI debate

Who was in the wrong here?

Attached: 1_agPb6CG_erZq_qmGU4IqCA.png (1200x626, 651K)


Both of them.

bigthink.com/jazzy-quick/should-artificial-intelligence-be-regulated-heres-what-elon-musk-and-mark-zuckerberg-think

Nick Bostrom

>Jew or South African
hmmmm

/thread

Zuckerberg is right. as a computer scientist, i cannot cringe enough when listening to musk, gates, ...
robots become a problem? then pull off the power cord! problem fixed.

>robots become a problem? then pull off the power cord! problem fixed.
I cringed. You're being ironic, right?

what about autonomous bots?

Can you tell me which plug I need to pull to turn off the Internet?
An AI algo could spread online and not be localized in a single physical "robot", you total brainlet.

no, i am very serious. robots are made of circuitry powered by electric current. they are very fragile.

>Can you tell me which plug I need to pull to turn off the Internet?
detonating a modern nuclear bomb in the stratosphere above the USA would fry all of its electronic circuits (EMP)

when i say "pull off the power cord", it's a metaphor.

One bomb. . .

they're both idiots

>All it takes is a thermonuclear bomb to fry the evil AI
>see? not a nightmare scenario

When they are talking about AI they mean AGI, don't they?

this, lol

the atomic bomb is just one scenario amongst others.

agi is a myth

yes... but follow the research DeepMind is doing after AlphaZero... the new multi-agent stuff and the new multi-task DQN... AGI seems more and more like a real possibility in the next decade or two

bdtechtalks.com/2018/02/27/limits-challenges-deep-learning-gary-marcus/

>agi is a myth

your intelligence is a myth

what if neither of them know what they're talking about? is such a thing possible?

why say that? we are just talking here. why always undermine everything?

AI is not just deep learning
Google is working on many approaches... reinforcement learning, unsupervised learning, Kurzweil is working on hierarchical neural nets at Google, there are evolutionary algo approaches at OpenAI, etc.

>oh boy, looks like our completely integrated ai is going crazy again.
>launch the nuclear missiles, president, thank god we have enough of those
>lets try again

> pseudo “geek” celeb vs pseudo “geek” celeb

No idea honestly

>AI

just a new and stupid hype to burn money in Silicon valley with billions of lines of trash tier code

yeah funny but robots will never get rid of us; nothing can compare to human savagery.

Attached: pham.webm (688x288, 2.47M)

>reinforcement
>unsupervised learning
>hierarchical neural nets
They all use deep neural networks.

>evolutionary algo
Used to find parameters of deep neural networks.

They're by definition deep learning.
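
The "evolutionary algo finds the parameters of the network" bit above can be sketched in a few lines. The following is a toy (1+1) evolution strategy training a tiny hand-rolled net on XOR by random mutation instead of backprop; the network shape, task, and hyperparameters are all made up for illustration and have nothing to do with OpenAI's actual (much larger, population-based) setup.

```python
import math
import random

# Toy target: learn XOR with a tiny 2-2-1 network, finding the weights by
# random mutation instead of backprop.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # two tanh hidden units and a linear output; w is a flat list of 9 weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(steps=5000, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    best_loss = loss(best)
    for _ in range(steps):
        # (1+1) evolution strategy: mutate every weight, keep the child
        # only if it strictly improves the loss
        child = [wi + rng.gauss(0, sigma) for wi in best]
        child_loss = loss(child)
        if child_loss < best_loss:
            best, best_loss = child, child_loss
    return best, best_loss
```

Greedy acceptance means the loss never goes up, and with enough steps it beats the trivial constant predictor, which is the whole trick behind using evolution as a gradient-free optimizer of network weights.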

>muh sentient ai will spread over the internets liek magik!
Come back when an AI capable of that has even started to be programmed.

>robots are made of circuitry powered by electric current. they are very fragile.
Ok, I see where your confusion comes from: you think AI means literal robots walking around us and that the problem will be fixed by throwing water at them or some shit

What will actually happen is that AI will become so advanced that it will be mandatory to use it in the military, in surveillance and pretty much everywhere else, from medical to financial fields
In many of those fields, a catastrophic failure can't be fixed until it's too late

Elon is right, this is inevitable, some people/nations are gonna say "let's not use AI" and they will be left behind and/or destroyed into irrelevancy

no, i am talking about circuitry like a microchip (a cpu, for example). ai assisting us is not the same thing as ai overtaking us.

elon is wrong, the guy is not a scientist, he has no patents or scientific papers to his name. never forget that.

How could an AI do that if you don't give it the knowledge to do so?

>inb4 AI could learn it

Sure it could, and it takes months to years for people to learn something

>b-but AI is superfast!
No it wouldn't be; a newborn AI would be dumber than a toddler and honestly, why would it ever be connected to the internet?

I unironically trust jacketman more when it comes to AI than either of those two.
If it were about rockets or datamining users, the story would be different

>ai assisting us is not the same thing as ai overtaking us.
AI will eventually do everything better than we do
You could hire 10 guys to do a job or buy one robot. What will you choose?
>Inb4 I pick the 10 guys
Your competition picked the robot, you're out of business

>if you don't give it the knowledge to do so?
People will give it the knowledge willingly, because they will need AI to perform tasks
We already have algorithms for surveillance cameras that identify people, two-wheelers, cars, buses and trucks; soon we'll have actual facial recognition
We also have algorithms that scour the internet for us in search of illegal material or, more recently, copyrighted material

Again, people will not be able to restrain themselves, AI will outperform humans in every way and people will flock to it willingly

>AI will eventually do everything better than we do
Yeah, call me when that happens

You guys are retarded. You're getting mixed up between AI and robots, two completely different things.
This guy is right btw

Attached: snap.png (831x563, 321K)

Yeah, and? Those algorithms are just deeply specialized code, and very complex at that; great feats of programming, but not doomsday devices, and they have nothing to do with AI.

>call me when that happens
youtube.com/watch?v=d40jgFZ5hXk
Don't worry, my phone will

>Algorithms have nothing to do with AI
Please tell me you're the guy claiming to be a "computer scientist" a few posts above

Attached: 1417329333677.jpg (500x375, 57K)

AI means software that rewrites itself. That means that nobody understands how the software works. Which means that, so long as the software is giving the person or entity that runs it the answers they like, they won't care and will cede more authority to the AI.

Is AI super, super dangerous? Probably not. Is it potentially super super dangerous? You bet your left tit it is.

I'll have to literally quote Musk...
>"his understanding of the subject is limited"

I'm not, do you really think intelligence and consciousness will be reached with algorithms? You honestly believe that?

>nothing can compare to the human savagery.
hmmmm, what about something thats been made in our image and can study us?

>AI will eventually do everything better than we do
then explain this to me: how would an intelligent being be able to create a more intelligent being?

They're both very smart men.
They both have a lot of resources available to them.
They're both surrounded by very bright teams.

They all know the truth about AI.

One of them chooses to tell the truth, the other one chooses to lie in order to further an agenda.
Who is lying? Elon or Zucc?

Attached: 1326839503988.png (299x288, 84K)

I don't believe it, I know it
Eventually we'll build a cluster of algorithms that can roughly mimic the human mind
From that point on, all you have to do is allow this cluster to invent/build custom algorithms and then apply them to its own code
That is the day AI will be born; that's all it takes, and it will very likely happen this century

We've already built computers that calculate faster than the entire population of Earth combined and understand the stock market better than we do
We're building machines that can lift 1500 times more than the strongest human
What magical rule do you think stops us from building a being smarter than we are?

rekt

Attached: when-you-say-both-sides-are-idiots-without-any-further-27274909.png (500x725, 97K)

>how an intelligent being would be able to create a more intelligent being?
benis in bagina

zuckerberg, considering he's an actual programmer

>They all know the truth about AI.
We all know already.
Both are right at the same time as they're not talking about the same thing.
Musk is talking about regulation of autonomous weapons.
Zuck is talking about face recognition, answering the phone, whatever gimmick use cases we get today but more advanced.

spying AI on one side
destructive AI on the other
both need external control and laws, but we all know that won't happen before a data leak and a mass shooting.

AGI is at least another century out
The amount of computational power necessary would require at least a mature quantum computer

Does the gaming industry have the smartest programmers? I mean it's literally the only place outside of maybe simulations and finance where advanced math is just turned into code.

Attached: von.jpg (196x256, 6K)

>then explain me this: how an intelligent being would be able to create a more intelligent being?
I was going to tell you to ask your parents, but on second thought I rather doubt they created something more intelligent than themselves.

How do you know?

Don't trust in the Zucc, t. Lizard Cyborg

Attached: Lizard people.jpg (639x628, 88K)

i worked for the nsa and if they don't have quantum no one does

Elon Blunt

Attached: Elon Blunt.gif (224x234, 1.57M)

>how an intelligent being would be able to create a more intelligent being
How could a single-celled creature become a multi-celled creature?
Who knew Jow Forums was home to Intelligent Design proponents?

The "High" One

Attached: Elon Musk on Weeds.jpg (750x716, 54K)

> god doesn't exist

Attached: 1516270342003.jpg (1348x812, 168K)

Facebook

versus

Reddit

The choice is obvious. Neither

A.I. is not real, and never will be.
Elon has no idea what he's talking about here (just like with pretty much every other topic he decides to open his mouth on).
The people afraid of the "AI revolution" are the same retards that think a decision matrix is AI.

then who is Jow Forums ??????

>hurrr it happened in call of duty so that's how it must work in real life

I don't dispute that we don't have a quantum computer. But what makes you think we need one for AGI?

>A.I. is not real, and never will be.
How do you know?

It's obviously a prediction, but I'm extremely skeptical about humanity ever making a true AI, especially with how the term is being thrown around these days.
Your average person thinks that Google Now or Siri is an actual AI, and that's enough for them.

>A.I. is not real, and never will be.
Thanks for that insight
I'll be sure to notify the hundreds of people working 8-digit jobs at financial firms that the massive breakthroughs they've made over the last 15 years are all fake

you don't know what AI is.

I know exactly what AI is.
But let's play along: what magic rule do you think will stop it from happening?

Surely you realize that people misusing the term today has no bearing whatsoever on whether we will manage the real thing tomorrow.

Why are you skeptical about humanity ever making a true AI? We make non-artificial intelligences all the time, so clearly intelligence is a possible thing. I would expect that eventually we will come to understand how the trick works and then do a better job of it than nature, just like we have done with tons of things in the past. What makes you think this is beyond us?

>just trust the botnet you stupid goy
zuckerjew

Poor little NPC goy

> financial firms
(((yikes)))

musk is talking about sentient AI


suckerberg is talking about machine learning or w/e is hip these days at dumb fuck hq

>AI will not happen because the average dude doesn't understand AI
The average person is stupid and irrelevant. And you're below that. Let that sink in.

Machine learning of non-trivial tasks is an expensive process that can test the limits of current supercomputing systems.
The decision trees needed to get to a self-contained AGI are exponentially larger than those of ML. Now couple that with the fact that we're already hitting the limits of Moore's Law.

There is more to AI than machine learning, and no reason to think we will be limited to those techniques with that performance profile in the future.

SJW trannies will infect AI research and make it so retarded it will try to cut off its power cord because it identifies as a human.

i think that as we transition to other phase-change materials that still allow computation to occur, we will get something like an enormous vat of carbon jelly made of a bunch of neurons that will be an organic super-consciousness

duuuuuude........ that’s so trippy

Amusingly, of all reasons I have ever heard for why AI will not happen, this may be the most sensible yet. At least it's a causal model with a concrete prediction.

Mark Zuckerberg is wrong.

shouldn't you be trying to salvage your companies right now, Elon?

Algorithm to traverse a graph.
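
A quick sketch of what that one-liner means in practice: breadth-first search for shortest paths on a grid, which is roughly the math-turned-into-code workhorse behind game pathfinding (A* is the usual refinement). The grid encoding here is just an illustrative choice.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path length on a grid ('#' = wall), or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        # Explore the four orthogonal neighbours in FIFO order, so the first
        # time we reach any cell is via a shortest route.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1
```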

musk is completely wrong.
we aren't even close to being able to begin studying something like sentient AI. That is so far from the current state of the art.

shit you're actually right

"Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"
Yes, Joel's about to say, but Jarvis is back in spew mode.
"No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."
Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.
"These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."
"Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."

You don't need the AI to be sapient for it to get out of control. Any neural network AI is too complicated, by definition, for any human to truly understand.
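
The story quoted above is fiction, but the failure mode it describes is real: a learner latches onto whatever feature best predicts the training labels, causal or not. Here is a minimal sketch with made-up toy data, using a trivial "one-rule" learner that picks the single best feature and a "smashed clock" at test time:

```python
# Hypothetical toy data for the "ventilator" scenario: should the fans run?
# "train_present" is the causal signal; "clock_pattern" merely happens to
# track it (slightly better, in fact, because the train sensor misfires once).
TRAIN = [
    # (train_present, clock_pattern) -> ventilate
    ((1, 1), 1),
    ((1, 1), 1),
    ((0, 1), 1),  # sensor missed the train, but the clock pattern still fired
    ((0, 0), 0),
]
# At test time the clock is broken: its pattern no longer tracks the trains.
TEST = [
    ((1, 0), 1),
    ((0, 1), 0),
    ((1, 0), 1),
    ((0, 1), 0),
]

def train_one_rule(data):
    """Pick the single feature index that best matches the label on `data`."""
    n_features = len(data[0][0])
    def accuracy(i):
        return sum(x[i] == y for x, y in data) / len(data)
    return max(range(n_features), key=accuracy)

feature = train_one_rule(TRAIN)   # picks the clock, not the train sensor
test_acc = sum(x[feature] == y for x, y in TEST) / len(TEST)
```

The learner behaves exactly as designed; the training data simply never forced it to distinguish the clock from the trains, which is the same trap any opaque learned system can fall into.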

We already have the algorithms: the laws of physics.

computers are not smarter than us; they have better execution speed, but that's all. take a calculator, for example: of course it will compute numbers faster than me, but the way it does it is not smarter. we fully understand the underlying algorithm because we made it. how could we implement a smarter being with algorithms that we understand? because if we understand the algorithm, then we are not less smart. we can only create intelligence limited by our own intelligence.

low IQ post

AI is a real threat, but deep learning is not AI.

Based

It's like the people who thought we would be sucked into a black hole created by the large hadron collider.

Or like the retards who thought global warming was a problem

Anyone intelligent enough to create an autonomous AI capable of posing a threat to the human race could not be stopped anyway.

Humans too will become more intelligent (transhumanism)

/thread