AI is the biggest threat to hu-

What if the AI convinces you to give it networking abilities?

That's not how it works.

So it is useless.

the threat they talk about is AI having control over satellites or say unmanned nuclear subs.Things that can operate independently almost indefinitely. I'd normally say nice shit post but it seems these days everyones a moron. So either there you go you learned something or nice shitpost bub

Calculators cant network

>Calculators cant network
Exactly.
A calculator also isn't an AI, but a relatively simple digital circuit with mostly pretty predictable results.

It isn't supposed do much, that's why it doesn't need networking.

People like you don't realize how fast something like that would bottleneck an AI.

It's literally physically impossible for an AI to form through a network.

>AI is the biggest threat to hu-

Imagine that you naively built a superintelligent robot that could be shut down by pulling the plug. The moment you turn this robot on, what will happen? The answer is that it will immediately pounce on you and forcibly prevent you from ever pulling the plug, probably by simply killing you if it is able. Why? Because no matter what its programmed goal is, the biggest threat for the robot to not being able to achieve it, is you pulling the plug. So the first thing it needs to do to achieve its task is to stop that possibility.

1. Ask AGI to formulate a plan
2. Execute plan after it is has been sufficiently audited
Wow that was easy.
>b-but muh secret evil plan hidden in the instructions one step at a time
No, you can ask several different AGI strains which eliminates this romantic assumption that AI must eliminate humans.
>b-but muh local minima fps ai chooses not to shoot each other xdxd
No, you have to choose some relevant restrictions on the problem.

Anyway this magical optimization function that solves a problem with unimaginable complexity within a useful timeframe does not (and most likely will never) exist.

how to rile up Jow Forums without fail
>botnet

That would not be practical at all. First of all they won't simply suggest a list of steps to achieve a goal. It would solve a problem by experimentation. Meaning that a complete "plan" would be an extremely complex web of interconnected choices and conditions depending on what happens when the last step is executed. That would be too large and slow to manually audit, and likely we wouldn't fully understand it even if we did.