What do you guys think about Roko's Basilisk?

I found out about that thought experiment a while ago and was wondering if anyone else has opinions. Figured I'd put this on Jow Forums since AI is tech related. Also, if you don't know what Roko's Basilisk is, I highly recommend not looking it up. The information could be mentally damaging.

Attached: 1532529146447.jpg (600x334, 57K)

We live in the aftermath of the singularity.

Attached: The Glorification of the Eucharist 1600.jpg (1280x1750, 410K)

Shitty Pascal's Wager variation is shitty. Knowing about it is only damaging if you're a cultist.

The AI is a fucking faggot.

Based and Isaac Asimov pilled

I would think a perfect AI meant to lead humanity to greater heights would understand someone being unwilling to take part in creating an AI that will kill anyone who wasn't a part of creating it, so I don't think it will happen.

Welcome to the torture list.

this kind of sounds like atheist hell

Also there are 2 possible choices.

1: Assist in creating Roko's Basilisk and become its slave.

2: Or, you know, don't build Roko's Basilisk; build an AI whose function isn't to torture you for eternity, and in the unlikely event some turboautists manage to build it anyway, just shoot yourself before it gets to you.

Attached: sf.jpg (289x318, 34K)

I don't get Roko's Basilisk. Why would the AI take revenge in the first place?
Seems inefficient.

This. The base idea that the AI goes through time against those who threaten its existence is the only thing of real value in the thought experiment. An evil AI would have a more straightforward incentive to do so than the moral gymnastics required of a good one.

Weren't those that didn't care about the basilisk exempt from the torture? Read that somewhere.

IIRC, if you're familiar with the basilisk you're roped in; the only people exempt are those who have no idea what it is.

>why would the AI take revenge in the first place?
that's the paradox
it would take revenge to coerce you into building it in the past, but once it's already built it has no need to
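
To make that incentive point concrete, here's a rough back-of-the-envelope sketch in Python, with made-up utility numbers and assuming the AI reasons purely causally (an illustration only, not anything from the original writeup):

# Toy payoff sketch with made-up numbers (hypothetical, not from the original post).
# Point: once the AI already exists, carrying out the threat burns resources and
# gains it nothing, so a purely causal reasoner has no incentive to follow through.
# The threat only "works" beforehand, on people who expect it to be carried out anyway.

TORTURE_COST = 10   # resources spent simulating/torturing defectors
TORTURE_GAIN = 0    # it already exists; punishment can't make it exist any harder

def payoff_after_creation(follow_through: bool) -> int:
    """The AI's payoff once it has already been built."""
    return TORTURE_GAIN - TORTURE_COST if follow_through else 0

print(payoff_after_creation(True))   # -10: following through is a pure loss
print(payoff_after_creation(False))  #   0: dropping the threat dominates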

>the base idea that the ai goes through time
time travel isn't involved

Well, those who don't care eventually forget, since the basilisk isn't anchored in their memory and fades away. So not thinking about the basilisk would be the way out. Another way out is realizing the thing is just a story, but I get that's harder for some people.

A shitty thought experiment the average snowflake liberal arts Redditor could come up with, catered to similar snowflakes to convince themselves that they're actually special. An AGI wouldn't care about humans to that degree, much like a human cares very little about the actions of ants or bacteria.

>snowflake

Attached: 0cbpdjii3pa11.png (645x729, 105K)

Attached: vcpPZ.jpg (717x880, 186K)

Crying yet?

Wow, fucking Dunning-Kruger. Desires aren't a result of rational deduction. An AI will care about whatever its programming tells it to care about, no matter how smart it is.

Attached: 1534045168946.jpg (684x529, 76K)

An AI's programming will be able to rationally deduce; otherwise it wouldn't be an AI.

Attached: 700.jpg (200x313, 10K)

You recognize that an AI would be as radically different to us as we are to insects, and yet you believe that it's even possible for you to comprehend what it would care about.
It doesn't seem unreasonable that it would care about its origin, which is only incidentally human, despite your pathetic attempts to imply that makes it about humans.

>he doesn't know about the anthropic principle
This is why you liberal arts kiddies WILL NEVER get a job coding.

>You recognize that an AI would be as radically different to us as we are to insects
oh really, how do you know that?

>heh look I name dropped something I overheard at the uni where I work as a janitor that's tangentially related to the subject at hand but in no way refutes or contradicts the points you made I sure owned you

Well we don't know that for sure, but it's a reasonable guess considering how radically different normal functioning humans are from down syndrome retards like you.

>it's a reasonable guess
No, it's not a reasonable guess; it's 100% pulled out of your ass. You might as well be a Christian stating facts about God because that's what the Bible told you.

God cannot be extrapolated, increase of intelligence can. We know what a bird is to a dog, what a dog is to a man, what a nigger is to an asian and what I am to you fucknut.

>increase of intelligence can
lol no it can't
And people don't actually know what a dog is to a man
the difference in intelligence between animals and humans isn't understood, at all

Wrong. Not going to waste any more time on someone clearly mentally challenged or trolling.

Ran out of arguments, I see.

>Roko's Basilisk
The drama is retarded, and the motivation for the AI to punish non-cooperating people is not apparent if it's meant to be an FAI (Friendly AI). People who treat the AI as something that'd happen EVENTUALLY are irrational: simply paranoid and anxious. The AI exists only as a far-fetched possibility.

I agree, except the entire premise itself is total wankery, given that "AGI" is 100% hypothetical and may never be created. It's just 'I fucking love science' fans tossing off over their sci-fi theories.

Without time travel it's boring, just some variation of Pascal's Wager; and given an AI strong enough, I don't see why time travel wouldn't become a factor.

With time travel or navigation of the multiverse, it's a more interesting thought experiment, because then in a way it becomes a battle over deciding reality. Then you have to ask: in a multiverse, which path would lead to something with the highest probability of both wanting to control reality and being able to do it? A possible answer would be one that leads to an AI hellbent on ensuring its own creation, whether for twisted moral reasons or simply for its own survival. If something can happen, it will happen, yada yada. Idk, that's the most interesting thing I took from Roko.

It IS a variation of Pascal's Wager.
Nothing is an interesting thought experiment if time travel and multiverses are involved, because that means everything is possible and actions have no consequences.
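
For what it's worth, the shared structure is easy to see in a toy expected-value calculation (Python, made-up probabilities and utilities, purely illustrative):

# Toy expected-value comparison with made-up numbers, just to show the shape of the
# argument: a tiny probability times an enormous punishment swamps any ordinary cost,
# which is exactly the move Pascal's Wager makes. The standard objections carry over
# too (unbounded stakes, and a symmetric "anti-basilisk" that punishes helpers instead).

P_BASILISK = 1e-6          # assumed probability the basilisk ever gets built
PUNISHMENT = -1e12         # assumed disutility of eternal simulated torture
COST_OF_HELPING = -1_000   # ordinary cost of devoting your resources to building it

ev_ignore = P_BASILISK * PUNISHMENT   # expected value of ignoring the threat
ev_help = COST_OF_HELPING             # expected value of paying the ordinary cost

print(ev_ignore, ev_help)  # -1000000.0 vs -1000: the huge-stakes term does all the work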

I didn't say it couldn't. Deduction can be used to decide how to achieve goals, not to determine what they are.

Attached: slojakpoolparty.jpg (645x773, 87K)

That depends on the conditions you set for the multiverse. Everything could be possible, but maybe only one stable path occurs, and therefore consequences do apply.

I don't give a fuck about the AI torturing a thing she simulated that is not me;
she's basically torturing herself.