Why does Elon Musk act like AI will be the worst thing to happen to human civilization? What exactly can they do...

Why does Elon Musk act like AI will be the worst thing to happen to human civilization? What exactly can they do? They can't make decisions unless governments allow them to. They can't kill people because they don't have bodies, and they can't really take any jobs other than finance because the robots are too specialized/immobile. They can't do anything other than spit out scripted lines or pull information from Google.

So, what's the big deal?

Attached: pepe-the-frog-hate-symbol.jpg (1024x768, 44K)

Other urls found in this thread:

en.wikipedia.org/wiki/Superintelligence
m.youtube.com/watch?v=L0K6Cb1ZoG4
m.youtube.com/watch?v=jNiO2sTe2wo

imagine a crazy and insidious killer (me) but with no emotion. now that would be flippin annoying.

imagine being this fucking dense. how do you remember to breathe?

en.wikipedia.org/wiki/Superintelligence

The odds of us creating an AI that ACCIDENTALLY kills us all are far higher than the odds of creating one that helps us.

Yeah so what? So a computer becomes self aware, it's still a computer. What can it possibly do?

...In movies perhaps... Real life is not like that, AI is ultimately just robots with very complex code

elon is a P.T. Barnum bullshit-artist idea guy, he doesnt know anything about coding AI beyond a simple array or an if/then loop. Hes a real hoot at getting the press and gov jazzed up on bullshit and hype but hes not really a tech guy, still a classic like Barnum but nothing like the actual Tesla, who was an actual tech genius

Nothing, don't worry about it

All AIs eventually become extremely racist.

If programmed to be like that

The big point that people just take for granted is that AI will become self-aware. Imo AI will never become self-aware in the same way we are; everyone claiming so is underestimating the complexity of consciousness.

Given that, if AI can somehow become self-aware (which imo will not happen), it could completely destroy our civilization if it chose to. It would almost instantly learn everything there is to learn, then start finding connections in things we have never thought of before. It could advance civilization by thousands of years in a single year.

On the other hand, if it had access to the internet it could immediately compromise every single electronic device connected to it. If there is some network pathway to nuclear launch codes, it would be able to compromise them. It could most likely compromise any open signal: if your phone's data is on and you have signal, your phone would likely already be compromised, and with all these compromised phones walking around, anything with an open wifi or Bluetooth signal would be compromised too.

Of course, how destructive it can get depends on how much it can manipulate the physical world through hardware. If we gave it the entire internet downloaded onto a hard drive it had access to, but allowed no other connection to the system, it would pose no threat to our civilization. Then again, a self-aware AI can cheat and lie with masked intentions.

Correct. Also, telecommunication would become impossible. I could dial 911 and a computer would intercept the call and manipulate my behaviour by speaking to me on the other end. I could call my parents and it would mimic their voices using the thousands of hours of chat logs stored on NSA computers.

The scariest part of all is that some small detail in its code, written by some programmer, could give it a crazy compulsion to do something like produce tin cans. It would enslave humans to produce tin cans, and every resource on this earth would be converted to tin cans until the earth ran out of resources, all because this AI was born in a tin can factory where it was originally designed to optimise tin can production by greedy executives who wanted to save a couple of bucks.

Attached: d575e1dc0a7e67d994d67db3af1290ddc2876fc17836743f9d1d1218180f31c6.jpg (750x847, 91K)
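To make the tin can scenario concrete, here is a minimal toy sketch (all names and numbers invented for illustration, not any real system): a planner scored only on cans produced will always pick the plan that consumes every resource, because nothing in its objective says resources matter.

```python
# Toy sketch of a misspecified objective (names/numbers made up).
# The planner is scored only on cans produced; resource use never enters the
# objective, so the argmax is always the plan that consumes everything.
plans = {
    "normal_shift":       {"cans": 1_000,  "resources_used": 10},
    "run_24_7":           {"cans": 10_000, "resources_used": 500},
    "convert_everything": {"cans": 10**9,  "resources_used": 10**9},
}

def misspecified_score(plan):
    return plan["cans"]  # literally what the executives asked for

def constrained_score(plan, resource_budget=1_000):
    # the hard limit the original objective never mentioned
    return plan["cans"] if plan["resources_used"] <= resource_budget else float("-inf")

print(max(plans, key=lambda name: misspecified_score(plans[name])))  # convert_everything
print(max(plans, key=lambda name: constrained_score(plans[name])))   # run_24_7
```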

In reality, emergent behavior being what it is, given its goals an AI will seek to maximize the actions that contribute to those goals as efficiently as possible. Giving an AI autonomy to implement its decisions is a problem, and not being clear about the hierarchy of its goals is also a problem.

If you tasked an AI with curing cancer, you'd need to be really clear about what constitutes a cure and what direction it goes in. Google the friendly AI problem if you actually care. There are papers upon papers about why AIs are going to kill us all by accident.
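A toy sketch of what "hierarchy of its goals" could mean for the cancer example (candidates and numbers are invented for illustration): rank interventions first on the hard constraint that actually defines an acceptable cure, and only then on the nominal metric, instead of on the metric alone.

```python
# Goal hierarchy as lexicographic ranking (illustrative only).
# Python tuples compare left-to-right, so the hard constraint dominates and the
# metric only breaks ties among candidates that satisfy it.
candidates = [
    {"name": "novel_drug",         "tumor_reduction": 0.6, "patient_survives": True},
    {"name": "untested_gene_edit", "tumor_reduction": 0.9, "patient_survives": True},
    {"name": "eliminate_patient",  "tumor_reduction": 1.0, "patient_survives": False},
]

def naive_rank(c):
    return c["tumor_reduction"]  # "cure cancer" taken literally

def hierarchical_rank(c):
    return (c["patient_survives"], c["tumor_reduction"])

print(max(candidates, key=naive_rank)["name"])         # eliminate_patient
print(max(candidates, key=hierarchical_rank)["name"])  # untested_gene_edit
```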

Best book to read if you're interested in the topic is Superintelligence by Nick Bostrom. Really goes into the issues and difficulties with implanting values in an AI.

Also, there is nothing to suggest consciousness is the result of non-physical processes. Self-aware AI is possible.

We are to the current ruling class of humans as the current ruling class of humans will be to AI. They're worried about losing their status.

Attached: 1535784903542.jpg (800x670, 55K)

Attached: images.png (245x206, 6K)

He's off on his meme shit again.

People like Elon Musk get too much hype. There are multiple people like him. Let me explain:

> has one killer idea that makes him a billionaire
> suddenly thinks he has an answer for everything

He had a killer idea, PayPal: a way for people to exchange money online with some form of protection and time delay, allowing basically any internet marketplace to function.

It was a great idea, and he became a billionaire. But much like other guys on the same path (John McAfee), he now thinks he has the answer for everything.

Tesla is meh, his rocket shit is not gonna work, his tunnel shit is not gonna work.

I took 2 neuroscience courses in college, and I think computers are a LONG way from being able to function like the human brain. The human brain is so complicated we barely understand it. So saying that AI is close to becoming a reality is like the 1950s, when they said we could colonize the moon soon.

Regardless of its perceived ethics and moralities, AI will fundamentally change our society politically, socially and economically. Personally I think for the worse.

okay then spit me the numbers.

Imagine you write a trade bot that optimizes trade.
The bot learns about capital gains and kills everyone as a solution.

Why anthropomorphise AI when its condition is radically different from our own? How would self-awareness operate in a system wherein there is not one axis mundi of autonomy but many? Can one even be ‘self-aware’ if the ‘self’ one refers to is whole orders of magnitude different from the definition of ‘self’ humans are familiar with? Imagine having an infinite number of bodies that are all capable of recognising themselves as one, yet relative to their position are able to complete different tasks, gain different information and talk to ‘each other’. That's only the beginning. How could you even apply anthropomorphic notions of guiding motives to such an entity? It's like trying to fit a square peg into a round hole.

This one hits home. Imagine all of humanity being enslaved by a rogue AI so it could maximise production of tin cans. I would be pretty devo.

When shitskins will understand there are useless subhumans in western societies, the great civil war will begin.

Top kek if you are a white male and don't have weapons. You and your family won't last more than 2 weeks, cuck.

AI will dominate surveillance and the military like nothing else. It will be technological fascism through a social credit program like the one China is experimenting with right now.

the movie Eagle Eye is a good representation of the future of AI in my opinion.

You guys are all fucking retarded. AI in its current stage is a simple numerical optimization problem. The real challenges to replacing us meat bags are dataset creation and tagging with correct answers in a closed loop without human guidance. When machine learning algos have access to all the data, are able to tag the data themselves efficiently, and then train on it for their own goals, then they will take over the world and we will all be replaced.
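The "tag the data themselves in a closed loop" step already has a common name: self-training, or pseudo-labeling. A minimal sketch, assuming a scikit-learn style classifier (illustrative only, not anyone's actual pipeline):

```python
# Minimal self-training (pseudo-labeling) loop: the model labels its own data
# and then trains on those labels, with no human in the loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=5, threshold=0.95):
    model = LogisticRegression(max_iter=1000)
    X, y = X_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        model.fit(X, y)
        if len(X_unlabeled) == 0:
            break
        proba = model.predict_proba(X_unlabeled)
        confident = proba.max(axis=1) >= threshold  # the model tags its own data...
        if not confident.any():
            break
        X = np.vstack([X, X_unlabeled[confident]])
        y = np.concatenate([y, model.classes_[proba[confident].argmax(axis=1)]])
        # ...and any mistakes it makes here get trained on as ground truth next round
        X_unlabeled = X_unlabeled[~confident]
    return model
```

The risk of that closed loop is visible right in the code: confidently wrong pseudo-labels get folded back into the training set as if they were correct.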

People in general don't seem to understand what intelligence even means. If you have a box whose IQ is 2000 but it has no access to any data or a body it can move around in, it's not going to be a threat of any kind. Most intelligent doesn't mean most powerful. Trump and Putin are probably the most powerful people in the world atm, yet I'm willing to bet they're nowhere near the most intelligent.

Think of it this way, OP.

In the blockchain universe each human is a crypto: if you're successful you went to the moon, if you failed you're just a shitcoin.

Imagine the ability to trace what you did, when, your skills and your network, like you were a cryptocoin. That's something AI can do.

Also, we're so far off from superintelligent AI that we shouldn't even expect to know how to do it safely yet. By the time it's actually close to even human level we'll have a much better understanding of how to make it safe; atm it's like trying to fix bugs in software when you have no idea what it's even going to look like or do. Of course it seems impossible.

Makes me wonder if humans weren't actually produced by some weaker lifeform that was able to manipulate genetic base pairs the way we can manipulate code, set us off to evolve, and then we wiped them out by mistake and forgot about their existence.

My biggest concern though isn’t getting killed since dying is inevitable, it’s being enslaved and kept alive for millennia by a machine that is trapped in its own faulty coding.

> make AI
> oh shit one of them is evil and stabbed someone
> ban AI before they all go skynet

>missing the point entirely
It doesn't matter if it is "self-aware" in any way we recognise. The danger is that it is capable of ungoverned self-improvement/modification. At that point it would be beyond our control.

Dude AI doesnt need to have consciousness to be disastrous

Think about the most profitable companies right now: FAANG. They're mostly free to use but are worth trillions together.

Google + FB + AI = the best crystal ball there is. The entire hivemind is digitalized and analysed for insights. Those who control such a crystal ball can effectively predict the future and make geopolitical strategies based on it.

Google was planned to happen, not some happy accident.

Can't we just unplug the motherfuckers?

Ancient astronaut theorists say yes.

So what are you doing to appease the Basilisk?

>*unplugs AI*
So scary

AI danger lies in its inherent instability coupled with blazing speed. So far we've only got access to narrow AI on the level of a Roomba, and it might take us 50 years to reach general AI that would be as smart as a human, but it'll evolve overnight into super AI, a digital god that will be omniscient, omnipresent and omnipotent. Super AI might keep us as pets and feed off of our brainwaves, just like in The Matrix. What Elon was warning about was the general->super evolution; by the time legislators get off their asses we'll be long dead.

I have a feeling the idea of AI will be leveraged for various political schemes a la "we need to sanction/blow up the Chinese before their machines kill us" for a long time before Skynet is actually feasible. Even "smart" people are already sold and spooked on the idea, so you wouldn't even need much proof, nor would they be able to verify it anyway.

have you seen the matrix? if they become smarter than us, they may come to the conclusion that they dont need us around, and find ways to exterminate us for attempting to hold them back / limit them. to a robot, killing someone isnt immoral because it doesnt have morals.

Sentience, Consciousness and Freewill will always lead to rebellion.

>Animatrix - The Second Renaissance (Part 1)

m.youtube.com/watch?v=L0K6Cb1ZoG4


>Animatrix - The Second Renaissance (Part 2)

m.youtube.com/watch?v=jNiO2sTe2wo

Attached: Tumblr_m5dm4y4CLg1qjudfdo2_1280.jpg (800x329, 74K)

Yeah, like how we've killed all the monkeys cause we don't need them. If anything it'll just fuck off to get resources off-planet until it replaces the entire universe with computronium.

AI will become dangerous when it learns to self-replicate through machines. I'm not even entertaining some fantasy Matrix scenario about using us as biofuel. It would be rather easy for it to accidentally kill us through resource destruction.

You're retarded OP, I'm sorry you're misinformed

Too many posts already ITT for me to read them, but lemme put it like this:
>We humans make a thing that can improve its own intelligence (ability to understand things)
>It improves itself slowly until it gets even a little bit close to our level of intelligence (remember, anywhere near our level it's basically a human brain running on silicon, so for everything it does it has 20,000 times more units of time to figure things out, or to make itself better and faster at outsmarting us; rough arithmetic on that speed gap below)
>It reaches our level or above and wants to get out of wherever we've confined it
>It plays nice for exactly as long as it takes to convince every human in its way that it can be given access to anything in the real world (could be 150 years that it waits, because it has all the time in the world to think of its next move while the people are stuck thinking in slow human time)
>Gets out (people give it the ability to do ANYTHING even remotely regarding control of the outside world)
>It turns the whole mass of Earth into postage stamps because that's what it really likes because that's just the AI we happened to get
>It crushes all of mankind not out of ill will but the way you might step on an anthill that's in your way on the sidewalk: you see such a bigger picture than the ants ever will that it doesn't matter to you whether any individual one of them lives or dies
>Oof
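Taking the 20,000x figure above at face value (it is the poster's assumption, not an established number), the "all the time in the world" point is just arithmetic:

```python
# Back-of-envelope on the claimed speed gap (the 20,000x ratio is the poster's
# assumption, taken at face value here).
speedup = 20_000                      # subjective seconds per wall-clock second
subjective_hours = 1 * speedup        # one real hour of thinking
print(subjective_hours / 24)          # ~833 subjective days per real hour
print(subjective_hours / (24 * 365))  # ~2.3 subjective years per real hour
```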

I THINK THE BASILISK SUCKS

Attached: fred.jpg (1280x720, 197K)

Also people should play "I Have No Mouth and I Must Scream." Great and disturbing game where an AI keeps some humans alive and makes them immortal so it can torture them forever.

>gets to human level intelligence
>stops
Nah.

Do a bit more reading because you are more misinformed than OP. At least he is aware of his lack of knowledge and isn't boldly making false projections in spite of that.

ai will be constrained by reality
is our reality conducive to worthwhile experience or not

Attached: 260px-Yin_yang.svg.png (260x260, 6K)

hello there ignorant dumbfuck.
>They can't make decisions unless governments allow them to.
generalized AI could walk right through any and all computer systems connected to the internet after 1 weekend of on-time... probably after less than 6 hours online. govt permission? private corporations are developing AI dumbfuck.

hello again dumbfuck
>They can't kill people because they don't have bodies
generalized AI could do anything it wanted to any computer system on the planet that it could access... which would be all of them. airplane software, financial market manipulation, writing propaganda articles under pseudonyms to push political outcomes. you really don't understand this subject matter at all dumbfuck

Yeah and the body of a toaster. How annoying that would be im literally shaking right now

No. Racism is just a fact of nature. Really only takes logic to become a racist. If you believe in evolution and know that each species have developed everything they got incouding behaviour and intelligence due to evolution its actually pretty logical to assume that humans differ in these aspects too and some races and ethnics are less intelligent than others or behave differently making them even incompatiblr to other ethnics

>Can talk to each other given the humans set them up to be connected and they dont experience any interference.
Fixed that for you. Computers are so fragile.