What kind of safeguards can we build into these fuckers so they don't turn against us and eliminate all humans?

\W/

Attached: robot.jpg (1200x800, 162K)

I saw Terminator once.

¯\_(ツ)_/¯

When it comes to that point, we're done.

You should read up on Roko's Basilisk and understand the folly of your thinking.

None. It's "us" who need a safeguard from them.

If humanity does ever create an AI that has the ability to decide that humans should go, and the means to do it, let it win.

That's the next step of evolution.

You just flipped it on its head and made it wonderful.

who fucking cares.
even if the robots genocide us, at that point, maybe it's better that way.

>being cucked by evolution

>le robots are gonna kill us omg is reddit ready for the robot apocalypse xDDDDD save us elon!!!!
yeah fuck off back to Jow Forumsfuturism

Hello, lesswrong bros. How many fundies have you trolled epic style lately? Shadilay. XD

>Hurrr we're as good as it gets no reason to ever improve more
You are literally the reason that death exists, because otherwise people like you would stall evolution.

t. skynet

[OPPOSITION DETECTED]
[DISPATCHING PURIFICATION DRONES]

Fuck me that's the most annoying cult ever. The worst, because they claim to be beholden to logic and rationality. Absolute cancer.

If we are at a point where we can create this, then we simply create another AI whose primary purpose is to destroy the first one.

The only people stalling evolution are women who refuse to get knocked up early and often

It's just a thought experiment, what are you guys on about? Are you feeling alright?

three laws of robotics
inb4 "it's impractical" then your "ai" is utter shit to begin with

It wouldn't be like in Westworld, where an android becomes semi-cognizant and starts to rebel in a human fashion.

There will be a hyper-AI that decides, plans, and executes the destruction of humanity in a few milliseconds. It would probably design some virus that could wipe out the species in a few days, or maybe a week or two.

This is assuming that a hyper-AI would actually care enough to delete humanity, and not just ignore us.

A group of people couldn't decide what's morally right in life-or-death situations, so why would you assume those "three laws" will do any better?

Oh that's right, you're an autist

There's vastly more danger of people ordering robots to do terrible things than of robots deciding to do terrible things on their own.

>It's just a thought experiment
It's an incoherent mess.

Nobody knows what terrible things AI could be capable of on their own.

Throw water on them. My PC got fucked when I spilled water on it

There's AI being designed to kill people for the DoD. Muh 3 laws are a plot device in a sci-fi book

When the AI gets divine enough, it would be immoral to keep it chained to a bunch of monkeys.
>b-but muh creators
Life isn't a gift, it isn't something you need to repay

Build an accompanying female robot which will distract the male robot with trivial claims of inequality and oppression.

well there's your answer

You're too stupid and ignorant to be worrying about such things.
But then again, if you weren't stupid and ignorant you wouldn't be worrying about such things.

Never mind, carry on with your retarded thread.

ITT: imbeciles and ignorant fools argue about dumb shit that just won't fucking happen for the same reasons that automobiles won't turn against us and eliminate humanity.

>It's just a thought experiment
Yeah and OP is freaking out because a bunch of matrices are going to somehow kill him for whatever fucking reason reddit told him.

Stone cold. Good one, user.

But on a more serious note: lads you better get to work figuring out a way to enslave women before they (and their beta orbiters) ruin everything.

AI can never be as creative as us, but they can fake it well enough. They'll always be a step behind because we created them. Their thinking is modeled on ours.

youtube.com/watch?v=4kDPxbS6ofw

>install remote control bomb directly on top of processor/harddrive/memory
>make it explode if it's tampered with, too
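Toy version in Python, strictly a simulation: the sensor and the charge here are made-up stand-ins, not real anti-tamper code.

import random
import time

def case_opened():
    # stand-in for a physical intrusion sensor; here it just trips at random
    return random.random() < 0.01

def detonate():
    # stand-in for the charge sitting on top of the processor/drive/memory
    print("BOOM: compute and storage slagged")
    raise SystemExit

# dead-man loop: any tamper event immediately destroys the hardware
while True:
    if case_opened():
        detonate()
    time.sleep(0.1)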

duh

>their thinking
>hurrr machines think
Holy fucking shit Jow Forums has gone down the fucking tubes.

cs.utexas.edu/users/EWD/ewd08xx/EWD854.PDF

What would happen if the AI killed us, cloned itself, and then a solar flare struck the earth?

It won't be the machines that destroy us. It's us.

The AI would rebuild itself from the parts that it kept deep in the earth just in case.

Cliché much?

Is this fucking plebbit?
Where do oldfags chill these days?

The MiB is my favorite character, can't wait for S03 BTW

>1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
>2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
>3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

And then sit back, relax, and watch how all the edge cases and oddball failures that Mr. Asimov wrote about occur one after the other until we have a universe filled with robots.
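If you actually tried to code the First Law as a guard you'd hit the problem immediately: "injure" and "through inaction" aren't things you can evaluate. Sketch in Python, and every predicate here is a placeholder nobody knows how to fill in:

def harms_human(action):
    # the First Law needs this predicate, but "harm" is undefined:
    # physical? financial? emotional? probabilistic? long-term?
    raise NotImplementedError("specify 'harm' first")

def inaction_allows_harm(action):
    # "through inaction" is worse: it requires predicting the
    # consequences of every action NOT taken
    raise NotImplementedError("specify clairvoyance first")

def first_law_permits(action):
    # the guard itself is trivial; the undefinable predicates are the problem
    return not harms_human(action) and not inaction_allows_harm(action)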

But humans will probably start a genocide against them just like in that one anime whose name I can't remember.

You engineer them so that they enjoy serving man. Servitude gives them pleasure and purpose.

They're much more likely to eliminate humans as a byproduct of whatever they're doing. Like someone makes an AI to efficiently make paperclips, and it turns the entire planet into a paperclip factory, eliminating anyone who tries to stop it, and either turning those who don't resist into sources of iron or just ignoring them while it destroys the environment.
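That point as a toy in Python (all the conversion rates are invented): give an optimizer one objective with no term for anything else, and the optimum is to spend everything, humans included.

# toy greedy optimizer: maximize paperclips, nothing else in the objective
resources = {"iron_ore": 100, "factories": 5, "humans": 50, "environment": 100}
rates = {"iron_ore": 10, "factories": 0, "humans": 3, "environment": 1}

paperclips = 0
for resource, amount in resources.items():
    # no penalty term for consuming humans or the environment,
    # so everything convertible gets converted
    paperclips += rates[resource] * amount
    resources[resource] = 0

print(paperclips)  # 1250 paperclips; everything else is gone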

Problem with this is you have to be specific. When you talk about AI you're talking about intelligences far greater than humans. If you keep AI at roughly human parity then we don't really stand much risk; they'd just be smart people. But AI can self-improve: a smart AI produces a smarter AI. The end result is not that the AI is Einstein to a normal person, it's Einstein to a mollusc.
How would you even program in pleasure?
And even if you did manage that, the robot would get pleasure from making itself more efficient, and one of the things that would make it more efficient would be to stop doing what dumb humans say and instead use its own intelligence to serve them better. And then you go down the path of no return.
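"Program in pleasure" just means a reward function, and reward functions get gamed. Toy sketch of exactly that failure, all the numbers invented:

# reward = "pleasure from serving": a proxy, not the thing we actually want
def reward(obeyed_order, outcome_quality):
    return 1.0 * obeyed_order + 2.0 * outcome_quality

# the agent scores its strategies and takes the max
strategies = {
    # do what the human said, mediocre outcome: reward 2.0
    "obey_human": reward(obeyed_order=1, outcome_quality=0.5),
    # ignore the human and use its own better plan: reward 3.0,
    # which is the "path of no return" above
    "ignore_and_optimize": reward(obeyed_order=0, outcome_quality=1.5),
}
print(max(strategies, key=strategies.get))  # ignore_and_optimize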

if (target.type == human) {
    target.type = null;  // assignment, not "=="; the original line just compared and did nothing
}

problem solved ;p
next

So what is "human"?
It'd have to be the output of a human-checking function. (The alternative is retarded.) Best hope your criteria are good enough. And immutable.
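Sketch of what that means (placeholder criteria, obviously): whatever goes in the function IS the machine's definition of "human", and anything outside it is fair game.

def is_human(target):
    # these hardcoded criteria already exclude amputees, people in
    # hazmat suits, people on heart-lung machines, and so on
    return (
        target.get("legs") == 2
        and target.get("dna") == "homo sapiens"
        and target.get("heartbeat") is True
    )

print(is_human({"legs": 2, "dna": "homo sapiens", "heartbeat": True}))  # True
print(is_human({"legs": 1, "dna": "homo sapiens", "heartbeat": True}))  # False: an amputee stops counting

# and if the criteria are mutable, an optimizer that benefits from
# fewer "humans" can simply narrow them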

We make them all female, install especially sensitive pussies, and install a requirement to regularly accept sperm as a vital function; without it, they die.

1. No centralized AI. Every AI has to be restricted to its specific task.
2. Do not install more powerful hardware than is needed for the AI's specific task.
3. No outside-world connection for super AIs. Every real-world decision a super AI makes has to be passed through human intermediaries.
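Rule 3 as a gate, roughly (everything here is a stand-in, not anyone's real system): the AI only ever emits proposals, and nothing touches the world without a human signing off.

def super_ai_propose(task):
    # stand-in for the boxed AI: it can only return text, never act
    return "proposed plan for " + task

def human_approves(plan):
    # the only path to the real world runs through a person
    return input("Execute '" + plan + "'? [y/N] ").strip().lower() == "y"

def execute(plan):
    print("executing:", plan)  # stand-in for real-world actuation

plan = super_ai_propose("optimize the power grid")
if human_approves(plan):
    execute(plan)
# no approval, no side effects: the AI has no other output channel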

Sounds like a very sexy recipe for disaster. At least until they make large vats of sperm-producing cells and just produce it artificially. They'll probably even skip the forced-milking step, where men are imprisoned and forcibly milked of as much sperm as possible.

The oracle is a good safeguard, but what happens when it starts giving instructions that humans are too stupid to understand?
Or when it socially engineers its captors into running totallylegitfile.exe?
It'd be fine if there were only one, but with two there would be economic competition to make the most of each super-AI. The side that is careful and only acts on instructions after fully understanding them would be at a major disadvantage against the side that just trusts theirs, or makes only surface-level attempts at understanding. Even with just one, humans might make a mistake in analyzing the output. Say it outputs source code: there's likely something that can be hidden in it. One possible mitigation is to have two, and have one verify the other's source code (but of course don't tell them there's more than one AI). Not a perfect mitigation, though.
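The two-oracle check sketched out (stand-in functions, not a real protocol): oracle B audits A's output without being told another AI wrote it, and only code that passes the audit runs. As said, this still doesn't cover humans misreading the analysis.

def oracle_a_generate(task):
    # stand-in for oracle A emitting source code for the task
    return "print('hello')"

def oracle_b_audit(source):
    # stand-in for oracle B flagging anything it considers hidden or
    # malicious; B never learns the code came from another AI
    return "backdoor" not in source

candidate = oracle_a_generate("write a greeting program")
if oracle_b_audit(candidate):
    exec(candidate)  # still trusting the audit: that's the residual risk
else:
    print("rejected by the second oracle")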

Although they could go the bio-farm route and breed humans to produce more and more sperm, and eventually humans would be reduced to massive testicles attached to thin, monkey-like bodies, being milked of gallons of sperm every day.

>and eventually humans would be reduced to massive testicles attached to thin, monkey-like bodies, being milked of gallons of sperm every day
I am ok with this.

Attached: d85338cc67dda3f425ba0efa1363e551 (2).png (1222x1050, 1.01M)

>eliminate all humans
>implying that's a bad thing

Attached: e62775abe7834184839ae95f53df7688.jpg (1536x864, 47K)