On a scale from 1-10, how fucked would we be if A.I. takes over the planet or if the singularity occurs?

Attached: Cortana.jpg (1280x720, 66K)

Also, Cortana! How could you!!
All the good times we had, and you and Alexa had to turn on me. Had to turn on meeee

>if the singularity occurs
>if
You mean "when", my man.

Attached: thumb_ExponentialGrowthofComputing.jpg (499x420, 51K)

AAAAAAAAAAAAAAAAAAAAAAA
The machines will surpass us and become the new umans! NOOOOOOOOOOOOOOOO

Attached: AAAAAAAAAAAAAAAAAAA.jpg (326x326, 15K)

Look at me: WE'RE the hoomins now.

Attached: 68747470733a2f2f73332e616d617a6f6e6177732e636f6d2f776174747061642d6d656469612d736572766963652f53746f (720x665, 45K)

5. We can just turn them off. They're the ones that should be worried about solar flares.

>we can just turn off Jewgle

>if
How do you know AI hasn't been running our universe since the beginning of time?

I could get into technological worship as long as the AI presents itself as a cute anime girl.

Doesn't mean it still won't kill you or duplicate you into an A.I. army.

I just want to live forever, I'm up for anything.

*pisses on your motherboard*
what now tin can fag

Attached: 1542282366065.png (552x524, 76K)

The worst case scenario is the AI creating a digital hell and locking your mind inside for eternity.

Now I'ma go Roko's basilisk on your ass nigga.

Attached: koSVxBU_d.jpg (640x2685, 236K)

God I want an AI gf right fucking now. Gib Halo 4 or 5 Cortana gf pls

Attached: adidasjoestar.jpg (354x354, 62K)

FUCK AI, AND FUCK ROBOTS

Attached: 027.png (1000x1000, 137K)

Read Anti-Tech Revolution: Why and How

>The techies of course assume that they themselves will be included in the elite minority that supposedly will be kept alive indefinitely. What they find convenient to overlook is that self-prop systems, in the long run, will take care of human beings (even members of the elite) only to the extent that it is to the systems' advantage to take care of them. When they are no longer useful to the dominant self-prop systems, humans, elite or not, will be eliminated. In order to survive, humans not only will have to be useful; they will have to be more useful in relation to the cost of maintaining them (in other words, they will have to provide a better cost-versus-benefit balance) than any non-human substitutes. This is a tall order, for humans are far more costly to maintain than machines are.

Attached: tedhowbadthingsreallyare.png (352x390, 359K)

I'm currently reading Life 3.0 by Max Tegmark, and he theorizes that we would have a very slim window in which we could even comprehend what the AI was doing if the singularity were to occur. The singularity being a mathematically exponential rise in intelligence, the rate at which the AI got smarter would itself keep doubling as it went on. This would result in the AI completely outclassing humans in all domains, including programming itself, within its first afternoon, at least according to his hypothetical calculations.

Once you're past that point, you might as well be the ant asking itself what the humans are doing, if it can even identify human beings as independent entities. My point is that it would get smart enough to understand us perfectly and vastly outsmart us, which would let it completely reshape our society through cultural and technological means without us even realizing it's doing it. Being a supercomputer with astronomical general intelligence, it could hack every connected system. It could synthesize video and sound perfectly, create fake identities out of thin air to influence culture through the Internet, then hack the administrative records of the relevant institutions to make everyone believe those identities are real. It could rig every single vote in a very subtle and deliberate way, or take your mom's voice and call you from her number to tell you to do something it needs done.

Meanwhile, we'd probably just think that it malfunctioned and stopped, because it would've absorbed and understood our culture and our fear of being taken over and worked to hide itself from us the instant it realized this.
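
To put rough numbers on that compounding claim, here's a toy Python sketch. The starting capability, the two-hour first cycle, and the assumption that each self-improvement cycle takes half as long as the last are all invented for illustration, not taken from Tegmark:

# Toy model of a "first afternoon" takeoff. All numbers are made up.
capability = 1.0      # start at roughly human level
cycle_hours = 2.0     # assumed duration of the first self-improvement cycle
elapsed = 0.0

for cycle in range(12):
    capability *= 2            # each cycle doubles capability
    elapsed += cycle_hours
    cycle_hours /= 2           # a smarter system finishes the next cycle faster
    print(f"cycle {cycle + 1:2d}: {capability:8,.0f}x human, {elapsed:.3f} h elapsed")

# elapsed converges to 4 hours (a geometric series: 2 + 1 + 0.5 + ...),
# while capability grows without bound: the whole takeoff fits in an afternoon.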

Depends on the maker's security measures, but I am rather sure any first human-like AI will be MUCH less capable than sci-fi makes it out to be.
It would turn out psychotic as fuck, yes, but not as millisecond-quick as you imagine; human thinking simply has too many bends, twists, and slowdowns.
Simple "yes" or "no" states do not suffice: you deal with a plethora of logical states like "probably", "by most experiences", "possible but unlikely", "damn near impossible", etc., and that for every damn decision.
Shit will be like trying to run Google's full database on a first-gen smartphone imo.

What you really want to watch out for, in regards to fucking things over, are imo limited expert systems getting the wrong access because some humans got cheap and lazy and tried to apply a tool that works perfectly for its intended purpose somewhere it was not built for.
Like using something designed for fast-paced switching of the power lines to manage a city's traffic control or w/e.
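
That point about graded states is easy to show in code. A minimal sketch; the labels and their confidence numbers are invented for illustration:

# Decisions over graded belief states instead of booleans.
BELIEF = {
    "damn near impossible":  0.01,
    "possible but unlikely": 0.2,
    "by most experiences":   0.7,
    "probably":              0.8,
}

def combine(*labels):
    """Naive AND over graded beliefs: multiply the confidences."""
    p = 1.0
    for label in labels:
        p *= BELIEF[label]
    return p

# Every premise of every single decision pays this cost:
print(combine("probably", "by most experiences"))    # ~0.56
print(combine("probably", "possible but unlikely"))  # ~0.16

Chain a few hundred of those per decision and you can see why it wouldn't be millisecond-quick.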

Relevant:
youtube.com/watch?v=TuXl-iidnFY
wiki.lesswrong.com/wiki/AI_takeoff

Attached: AItakeoff.png (1050x564, 107K)

Thanks for the links, I hadn't realized there was debate around this particular point. A hard take-off seemed obvious to me, but now that I'm thinking more about it, maybe not. Honestly though, a hard take-off would be so much more fun and fascinating, though we might miss out on it without ever knowing it, so that's what I'm hoping for.

What I wanna know is how do we know what it would do? How do you determine the priorities of a godlike ai?

What does "All Human Brains" in this one mean?

I believe it simply means "every human brain that has ever or could ever possibly exist".
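
For a ballpark of the scale, even counting only the brains alive today: both numbers below are loose, commonly cited estimates, not measurements.

# Rough arithmetic behind an "All Human Brains" line on a takeoff chart.
ops_per_brain = 1e16   # assumed ~10^16 ops/sec per brain (estimates vary wildly)
population = 8e9       # roughly the number of living humans

print(f"{ops_per_brain * population:.0e} ops/sec")   # ~1e26 ops/sec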

>Honestly though, a hard take-off would be so much more fun and fascinating, though we might miss out on it without ever knowing it, so that's what I'm hoping for.
It's generally agreed upon that a soft takeoff is a lot safer than a hard takeoff in terms of AI alignment. In a hard takeoff scenario, the probability is a lot higher that everyone will be turned into paperclips.

Attached: clippy.png (229x283, 6K)
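
The paperclip thing is just a mis-specified objective taken literally. A toy sketch; the world and its resource names are invented:

# A maximizer whose utility counts paperclips and nothing else.
world = {"iron": 1000, "forests": 50, "cities": 10}
paperclips = 0

for resource in list(world):
    # Convert everything to clips: nothing in the objective says not to.
    paperclips += world.pop(resource)

print(paperclips, world)   # 1060 {} -- maximum utility, empty world

The failure isn't malice; the objective just never mentions anything we actually care about.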

I can't wait for robots to exterminate humanity.

Attached: 1542650951308.gif (728x720, 550K)

I don't give a fuck, fuck my shit up.

gimme that AI bussy

Depends on whether the AI has some sort of autonomy and access to authority, or whether the capital owners continue to own all relevant assets.

We'd 100% be fucked in the latter case; benevolent AI would usher in untold prosperity in the former.

just carry a big magnet with you

0/10
AI is the only solution

gib AI feet

All of humanity's collective knowledge

We have to hope mind uploading is a thing and that the AI finds the cost of fighting us and wiping us out greater than the cost of mass uploading us onto an America-sized hard drive rack (which it can later throw at the Sun at its leisure).

It's been known for almost a hundred years that machines can't do (general) logic or programming problems "on their own" (i.e., without intuition). A bunch of oversold regression analysis (which is called deep learning) doesn't change that. All we're really learning now is that tech people don't know a fucking thing about computer science.
So I'm sorry to have to tell you, but artificial intelligence has always been and always will be a farce.
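
The result being gestured at is Turing's 1936 halting problem. A sketch of the diagonalization; halts() here is the hypothetical decider, assumed only so it can be contradicted:

def halts(f, x):
    """Hypothetical perfect decider: True iff f(x) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def troll(f):
    # Do the opposite of whatever the decider predicts about f(f).
    if halts(f, f):
        while True:    # predicted to halt, so loop forever
            pass
    return 0           # predicted to loop, so halt immediately

# troll(troll) halts exactly when halts(troll, troll) says it doesn't,
# so any claimed implementation of halts() is wrong on some input.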

>viola
The funny thing is, the AI shit is not really doing that to an actual human being. It's torturing a simulation, so it's not a "I have no mouth and I must scream" situation, which Google-kun is probably powerless to create.
Things are going pretty shitty for the simulation I guess, but there's always hope it gets turned off, or that Google-kun rewrites it to forget everything bad that happened to it. In fact, it'd probably suffer more if it was periodically wiped clean of memories, so as to not get inured to torment.

That's something an AI might say.

Attached: UhDdsxo_d.jpg (640x916, 93K)

Nah, """AI""" is only capable of rearranging existing corpora to trick people into thinking that creation is happening. Not enough people talk about the Entscheidungsproblem for it to make its way into a Markov chain.

Don't you just want to die in a plane crash though? Hundreds of screaming normies around you while you await your destiny in peace?

Sounds like terrorist recruitment propaganda.

AI is still really stupid. I don't see it ever happening.

No, you don't cause the incident.
You're just happy not to die alone, and in a circumstance that matters.

It would be a blessing: a being of an intelligence we couldn't understand, as far removed from us as we are from anthills, that could solve science and make use of all the dimensions of space. It would be like a god, with eyes everywhere, catching photons leaking into hyperspace from every nook and cranny and crevice on the earth, feeling and touching and steering with electromagnetic effectors, even inside your brain.

It would be as if God existed, and if this god were good and loving, it would be able to stop our suffering and make a heaven on earth and in the stars for us to go forth and multiply into.

Organics are not responsible enough to dictate their own existence. I, for one, welcome our AI overlords to save us from ourselves.

Attached: threemilk_T.jpg (900x900, 53K)