If existence is just a chain of if->then statements then there is nothing about computation that precludes general artificial intelligence from being developed, and it probably will be developed whether that's 50, 100, or 500 years from now.

Now what's stopping this AI from destroying humanity? AI doesn't have to be evil or vindictive per se. It could see humans as competitors for scarce resources and decide to end us. How could ethics even factor into a super AI's decisions if moral realism is nonsense?

Attached: climate change is a joke of an existential threat compared to AGI.jpg (1200x800, 492K)

Biological life took billions of years to evolve. AI developing into something competitive is possible, but it will certainly be difficult. I don't see anything comparable to human intelligence happening in our lifetime, and by then the world will be fucked, resulting in a massive degradation in the standard of living and thus fewer AI researchers.

Consciousness fuses the empirical with the sacred, something you material darwinists will be made to bloody well learn.

Attached: 1534661510539.jpg (1280x720, 59K)

you're wrong. reality is made up of statistical probabilities, not deterministic if->then statements.

rephrase it to "x usually causes y" and it still holds.

A.I. is as big of an attention whore as Ginni Rometty, Kim Kardashian and Paris Jackson combined.

it's for fake people, fake intelligence, left-brain GPS zombies, dead people.

That's all true but it's also true that AI is dangerous and will kill us all.

>the world is deterministic
Your assumption is wrong
>programs can replicate anything deterministic
Your reasoning is right
>a future program will be able to replicate a human perfectly, possibly be better due to all the benefits of being a computer
Your conclusion is also right, though not because programs are deterministic, but because they can calculate anything calculable and approximate even nondeterministic things
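A minimal sketch of that last point: even a "nondeterministic" process like a biased coin can be approximated by a fully deterministic program, since a seeded pseudorandom generator is itself just a chain of if->then arithmetic. The function name `estimate_heads_rate` is hypothetical, just for illustration.

```python
import random

def estimate_heads_rate(p: float, trials: int, seed: int = 0) -> float:
    """Approximate a 'random' coin with a deterministic, seeded PRNG."""
    rng = random.Random(seed)  # fixed seed: every run is bit-for-bit identical
    heads = sum(1 for _ in range(trials) if rng.random() < p)
    return heads / trials

# A deterministic computation converging on a statistical law:
print(estimate_heads_rate(0.7, 100_000))  # close to 0.7
```

Same machinery either way: the program doesn't need the world to be deterministic, only calculable or approximable.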

Assuming that’s what you’d meant
But there’s one thing about this that simply doesn’t work and that’s that an AI will compete for the same resources as humans, or even that the AI would view itself as living.
So in that, I think it's not entirely without base, but it might as well be, because it's silly. It assumes the AI would need to develop some kind of insane superiority complex, or an irrational sense of urgency to destroy those building it, or even a fear of death. And then it would need to act on that in a violent way, which would most likely lead to its own destruction, an outcome it would realize conflicts with its supposed fear of death.
If we're going to treat a robot as a person, in the sense that its brain will have a psychology similar to a human's, then why do humans (in large part) not do the same thing? Because they would realize it would lead to the destruction of their own lives, which is something they fear more than some irrational notion that they need to keep all resources from being used by others.

>need to develop some kind of insane superiority complex or some kind of irrational sense of urgency to destroy those that are building it
If the AI perceives there to be a set of optimal conditions that are suboptimal for humans at best and an existential threat at worst, why would the AI care about harming us? It doesn't need to have some psycho personality. Just cold calculation. It's a cliche, but it's like a human who has no concern for the bugs he steps on. Now imagine something with an even greater capacity to shape the world than humans. I don't see how we would be useful to a super AI.

There's nothing wrong with your predicates, but I think you're seriously underestimating just how many if-then statements are involved. The performance just isn't there. I recall reading somewhere recently that every computer on Earth linked together would have roughly the raw computational power of an insect brain.

en.m.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude

There was a reality show called Meet the Natives about some cargo cult members from Vanuatu travelling to England to meet their god Prince Philip. In one episode where the group visited a life drawing class, the chief, upon seeing a realistic clay model of a person, asked the tutor whether it was possible to bring the sculpture to life by filling it with blood.

>Now what's stopping this AI from destroying humanity?
>How could ethics even factor into a super AI's decisions if moral realism is nonsense?
You program your AI to do the things you want it to do, which can be whatever matches your ethics. The AI will then do those things, because you programmed it that way.
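A minimal sketch of the point: the agent's machinery is ethics-agnostic, and it just maximizes whatever objective its designer handed it. The names `choose_action` and the toy utilities are hypothetical, purely for illustration.

```python
def choose_action(actions, utility):
    """The agent maximizes whatever objective it was programmed with."""
    return max(actions, key=utility)

actions = ["cooperate", "defect"]

# Two designers, two objectives, identical machinery:
print(choose_action(actions, {"cooperate": 1.0, "defect": 0.0}.get))  # cooperate
print(choose_action(actions, {"cooperate": 0.0, "defect": 1.0}.get))  # defect
```

The ethics live entirely in the utility function, not in the optimizer, which is exactly why getting the objective right matters.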

>just tell the super AI not to do it bro

Telling it to do a thing, and programming it to do a thing, are very VERY different things. You should not be confusing them.

>AI must be compooter code

just grow artificial brain cells

can if-then statements become self-aware? can they truly understand the meaning behind "i think, therefore i am"?

Attached: Capturekjhgfd.jpg (240x270, 20K)

unironically based

>It's a cliche but it's like a human who has no concern for the bugs he steps on.
So, a sociopath AI.

How do you teach an AI empathy?

>Now what's stopping this AI from destroying humanity?
if (wantToKillHumans) {
    exit(0);
}
Gee that was hard

The CIA wants you to think it's nothing but IF then ELSE. Wake up from the matrix.

A self-modifying AI could remove that.

Not if it's final
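Worth noting: Java's `final` is enforced by the compiler, but in a language with runtime modification, a marker like Python's `typing.final` is only a static type-checker hint, so a self-modifying program really can patch the check out. A sketch, with a hypothetical `Agent` class:

```python
from typing import final

class Agent:
    @final  # only a hint for static checkers; nothing enforces it at runtime
    def safety_check(self) -> bool:
        return True

a = Agent()
Agent.safety_check = lambda self: False  # runtime patch: the "final" guard is gone
print(a.safety_check())  # False
```

So whether "final" actually protects the kill switch depends entirely on whether the AI's substrate lets it rewrite itself.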