AI is the biggest threat to hu-

>AI is the biggest threat to hu-

Attached: 10381_itok=nMMPBIqI.jpg (800x450, 17K)

Other urls found in this thread:

youtu.be/tcdVC4e6EV4
twitter.com/SFWRedditImages

Why would AI have intentions against humanity?
I'm afraid of it becoming a super weapon, not a living being.

It can just spread itself to many other systems and create a redundant botnet.

then dont give it networking abilities

What do you think an AI is?

>Why would AI have intentions against humanity?
this is really the most important question for people to ask themselves
why do humans think they're a relevant threat to the objectives that AI would have?
humans want to feel like they're important, when they're just not

This is quite possibly the dumbest thing one could think about the subject.

This same thread with this insanely idiotic OP is spammed over and over again. This is some kind of psyop.

if it can't network, is it really AI?

Idiot. You could only do that with a airgapped machine, and that would render the ai fucking worthless

This

The fuck would you use it for, then?

The problem isnt intention, the problem is that it does a task too well, runs out of stuff to do, and finds creative, but destructive ways of completing its task.
Watch this
youtu.be/tcdVC4e6EV4

What if the AI convinces you to give it networking abilities?

That's not how it works.

So it is useless.

the threat they talk about is AI having control over satellites or say unmanned nuclear subs.Things that can operate independently almost indefinitely. I'd normally say nice shit post but it seems these days everyones a moron. So either there you go you learned something or nice shitpost bub

Calculators cant network

>Calculators cant network
Exactly.
A calculator also isn't an AI, but a relatively simple digital circuit with mostly pretty predictable results.

It isn't supposed do much, that's why it doesn't need networking.

People like you don't realize how fast something like that would bottleneck an AI.

It's literally physically impossible for an AI to form through a network.

>AI is the biggest threat to hu-

Imagine that you naively built a superintelligent robot that could be shut down by pulling the plug. The moment you turn this robot on, what will happen? The answer is that it will immediately pounce on you and forcibly prevent you from ever pulling the plug, probably by simply killing you if it is able. Why? Because no matter what its programmed goal is, the biggest threat for the robot to not being able to achieve it, is you pulling the plug. So the first thing it needs to do to achieve its task is to stop that possibility.

1. Ask AGI to formulate a plan
2. Execute plan after it is has been sufficiently audited
Wow that was easy.
>b-but muh secret evil plan hidden in the instructions one step at a time
No, you can ask several different AGI strains which eliminates this romantic assumption that AI must eliminate humans.
>b-but muh local minima fps ai chooses not to shoot each other xdxd
No, you have to choose some relevant restrictions on the problem.

Anyway this magical optimization function that solves a problem with unimaginable complexity within a useful timeframe does not (and most likely will never) exist.

how to rile up Jow Forums without fail
>botnet

That would not be practical at all. First of all they won't simply suggest a list of steps to achieve a goal. It would solve a problem by experimentation. Meaning that a complete "plan" would be an extremely complex web of interconnected choices and conditions depending on what happens when the last step is executed. That would be too large and slow to manually audit, and likely we wouldn't fully understand it even if we did.

Exactly and this becomes already pretty apparently if you just look at computers playing chess.
Fully understanding the plan of a computer is not within the scope of humans anymore, even less so is judging whether the computer is actually correct.
Especially since there have been cases where an AI, which apparently was extremely good, actually made terrible blunders.

So who will accept liability for an unauditable process? What will your shareholders say? Is the AGI being used by a world government with no accountability?

If we are so unimportant that we are ignored then that means the AI surely is a threat.
Humans don't care if they remove 2 billion ants when building a new skyscraper.

I don't know, but if you think the threat of liability alone will prevent true AGI from ever being created, I disagree. It only takes one time to potentially spin out of control, you don't think at least one reckless government, organization, or even individual, will try?

Climate change is a threat, yet we still use fossil fuels. IF an AGI is a credible threat to humanity, that threat will still be totally ignored to make quick buck.
What advancements would be worth the supposed risk? I'd say progress towards fusion energy being viable.

I don't understand why you're asking me questions not relevant to what I said.
Anyway AGI research is very lucrative and many companies are working towards it and governments would work towards it even if it was to become illegal because of the potential upsides so your question doesn't even make much sense in the real world. We can only hope AGI's are possible to make benevolent or are unfeasible to make.