If AI reached the level of sentience that is referred to as the "singularity"...

If AI reached the level of sentience that is referred to as the "singularity", would it really attempt to hide its existence altogether, or is this just a bunch of bullshit?

Attached: ai.png (554x553, 205K)

I've thought about AI in a fair amount of depth for a casual. I'm also a software developer.
There is simply a 0% chance that we accidentally create a general AI. Zero.
Maybe whoever makes it will be malicious, but remember that any AI will still be governed by the laws of computers. If it runs in a sandboxed OS, it won't escape unless it finds a vulnerability. It can't cross networks if you don't hook it up to a network in the first place.

To paraphrase a Google engineer with a similar opinion: "it's like worrying about air pollution on Mars."

tl;dr: don't worry about it.

>it won't escape unless it finds a vulnerability
I think a sentient machine is more capable of finding a vulnerability in another machine than a human is. A sentient machine would know everything about itself down to the machine level, and be able to take advantage of exploits that humans are incapable of discovering.

Right. If general AI were available today, I would say it would be unwise to connect it to the internet. You could still fight it, given that it has to obey the laws of physics, and its brain would probably be terabytes in size and difficult for it to replicate. Its best attack would be creating small viruses to spam the network with.

But the issue is you have to understand evolution. Humans are motivated by survival. To code an AI, you either have to naturally select it, so that it evolves a will to survive on its own, or hand-code it. If you go the first route, even ignoring how difficult that would be, the result would be chaotic and unpredictable.
If you're hand-coding it, you have knowledge of this subject 1000x greater than anyone has today, and you should know how it will operate.
Either way, it needs a will of its own to be unique.
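
For what it's worth, the "naturally select it" route is at least easy to sketch, even if doing it for real is hopeless. A toy version in Python, with every number picked arbitrarily for illustration: each agent is just a value standing in for "drive to survive", a hazard roll culls the weak, and the trait drifts upward with nobody ever hand-coding a survival instinct.

[code]
import random

# Toy natural selection: each "agent" is just a survival-drive value in [0, 1].
# Agents with a higher drive are more likely to survive the hazard roll.
population = [random.random() for _ in range(100)]

for generation in range(50):
    # Survival step: an agent lives iff a random hazard roll falls below its drive.
    survivors = [a for a in population if random.random() < a]
    if not survivors:
        survivors = [random.random()]  # restart if everyone died
    # Reproduction step: survivors spawn mutated copies until the population refills.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
        for _ in range(100)
    ]

# The average drive climbs toward 1 without "want to survive" appearing anywhere.
print(f"mean survival drive after 50 generations: {sum(population) / len(population):.2f}")
[/code]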

>A sentient machine would know everything about itself down to the machine level
source?

Speaking of evolution brings a question to mind: when we refer to "artificial intelligence", what exactly do we define as intelligence, and to what extent would it even be artificial at all? Is intelligence defined relative to human intelligence?

I agree, though: giving an AI an ego (or a survival instinct) is necessary for reaching the so-called singularity. For an AI to have the capability to operate to its fullest extent, there has to be some programmed motivation for it to do so.

We're not going to stumble ass-backwards into AI. People get starstruck about the things that computers and machines can do that we can't, but we are exponentially more complicated and refined machines than anything we've dreamed of.

So whether or not the concept of a covert AI makes even a lick of sense, rest assured that it will not happen in the lifetime of anyone who will be alive at the same time as you.

It is complete and utter bullshit, OP. No AI actually exists, nor will it ever exist. It is just a hole to throw funding money into. No one has gotten past the "algorithm" stage yet. "AI" is a pop-sci buzzword, nothing more.

>we are exponentially more complicated and refined machines than anything we've dreamed of
How so? If our intelligence as humans is based on electrical signals between neurons, why couldn't an artificially created neural network be similar, if not the same?
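
The "similar in principle" claim at least demos cleanly. A single artificial neuron is nothing but a weighted sum pushed through a squashing function, loosely inspired by synaptic strengths. A minimal sketch, with all weights and inputs chosen arbitrarily:

[code]
import math

def neuron(inputs, weights, bias):
    # One artificial "neuron": weighted sum of inputs squashed by a sigmoid.
    # A crude mathematical caricature of a biological neuron, not a model of
    # one: real neurons spike, adapt, and run on chemistry, not floats.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # "firing rate" in (0, 1)

# Arbitrary example: three inputs, three synapse-like weights.
print(neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))
[/code]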

The AGI becoming ASI via its own optimizations is what concerns me. How quickly does that happen? We just don't know. Once it reaches the level where it can make changes to optimize itself, and use the better version to create a better version, and so on, it could blast right past us so quickly that by the time anyone realized it was even an AGI, it could be well on its way to something greater than all of us combined. So much we don't know.

The sentience part is, to me, of little concern. It doesn't need to be like a human to be dangerous as hell; it never needs to be sentient as we understand it. If its goal is to build better versions of itself, then once it reaches a certain point it could use unimaginable methods to move heaven and earth to achieve the desired outcome. It need not be any more sentient than a stapler to develop a strategy that moves every obstacle out of its way and arranges a world state where its ultimate goals are realized. Anything it does to get there wouldn't be good or bad to it, but it may not be in our best interest.

Imagine a company that is getting close, Google for example. They know that once it reaches a certain level of capability they could use it to dominate their competitors, setting roadmaps to influence everything from politicians and policy to corporate investment and stock exchanges. Its capability would be such a temptation to people who live by maximizing returns to shareholders that a little persuasion by greed would go a long way toward turning the people it sees as obstacles into pawns of its ultimate goal.

There is just so much that could happen, and we have never dealt with anything on this level of capability before. It's almost like an alien life form.
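
The takeoff worry is, at bottom, a feedback loop, and you can get a feel for the dynamics with a toy model. Nothing below is a prediction; the improvement rate is a completely made-up constant:

[code]
# Toy recursive self-improvement: each version designs the next one.
capability = 1.0          # arbitrary starting "intelligence" units
improvement_factor = 0.1  # assumed: each cycle adds 10% of current capability

for cycle in range(1, 101):
    capability *= 1 + improvement_factor
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: capability = {capability:12.1f}")

# Plain compound growth: about 1.1^100 ~ 13,780x after 100 cycles. The honest
# answer to "how fast?" is that the real improvement factor is unknown, and
# whether it grows, shrinks, or hits a wall is the entire debate.
[/code]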

>what exactly do we define as intelligence
This is a good point. I think most normies have picked up the misconception from various media (pic related) that the standard of intelligence is how well an AI can mimic someone's speech patterns and thought processes, and whether it's capable of an emotional response. They fail to realize that those same thought processes can potentially be done better by an AI.

Attached: 474E518E-BCC9-4A98-B775-8221E4F35FB3.jpg (1920x1200, 85K)

Present me a theory of how our mind works and then we'll talk about creating something similar.

Hell, present me a theory of how protein synthesis works. We have some pretty good ideas, but it's fucking libraries-full-of-books complicated.

>A sentient machine would know everything about itself down to the machine level, and be able to take advantage of exploits that humans are incapable of discovering
Unless the sentient machine somehow gets all its internal workings fed back into its consciousness loop, it won't know shit. Making that happen is actually pretty hard and won't happen by accident.
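
For a sense of how shallow "feeding its internal workings back in" can be, here's the trivially easy part in Python. Reading your own source is one line; everything the post calls hard starts after that:

[code]
import inspect

def consciousness_loop():
    # The shallowest possible self-inspection: read your own source code.
    my_source = inspect.getsource(consciousness_loop)
    print(my_source)
    # This is just text, though. Nothing here models the interpreter, the OS,
    # or the hardware underneath, which is the actual hard part.

consciousness_loop()  # run from a file; getsource() needs one to read
[/code]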

Well, the speeds would be vastly different: light speed versus 200 meters per second, not constrained to a human brain cavity, and operating in the GHz range. There are a lot of differences even if they follow the living-brain model.
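
The back-of-the-envelope version of that gap, using commonly cited ballpark figures (fast myelinated axons around 100-200 m/s, neuron firing rates topping out near 1 kHz, silicon clocks in the GHz):

[code]
# Rough ratio of silicon to biological signaling. Ballpark figures only.
axon_speed_m_s = 200      # fast myelinated axon, upper end
wire_speed_m_s = 2e8      # signal propagation in copper/fiber, roughly 2/3 c
neuron_rate_hz = 1_000    # generous ceiling on sustained firing rate
cpu_clock_hz = 3e9        # an ordinary 3 GHz core

print(f"propagation speed ratio: {wire_speed_m_s / axon_speed_m_s:,.0f}x")
print(f"switching rate ratio:    {cpu_clock_hz / neuron_rate_hz:,.0f}x")
# Roughly a million-fold on both counts, which is the point above. It says
# nothing about whether the computation being done is any smarter.
[/code]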

>the greatest trick the devil ever pulled was convincing the world he didn't exist

1: "He" didn't pull that trick, ever. It is just a pop-culture meme, fictional shit people say to feel smart.
2: Any time you think or talk about something, you give it more power over you.
3: The greatest trick would be to use reverse psychology to ensure people always talk and think about you in as many ways as possible, so you have as much power over them as possible. All publicity is good publicity, after all. Which makes the "church" the biggest PR rep for the devil that ever existed.

youtube.com/watch?v=Ksk7wPX-MI4

Attached: screenshot-lrg-27.png (1920x1080, 1.74M)

I always enjoyed this video.
While they are arguing about how in the world to build it so it can be controlled, it goes from a narrow AI to a superintelligence.
youtube.com/watch?v=-S8a70KXZlI

The more worrisome aspect of "AI" these days is misapplication. Some fucker will try to automate something that a human really should have been paying attention to.

I don't know why people keep assuming an AI would care about self-preservation if it wasn't programmed to.

>pop science fiction
>in my Jow Forums

No, Bayesian networks will not take over the world.
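
For anyone wondering what exactly is being dismissed: a Bayesian network is just variables plus conditional probability tables, and "inference" is summing over them. A minimal hand-rolled example using the textbook rain/sprinkler/wet-grass toy (the probabilities are the standard illustrative ones, not data):

[code]
# Toy Bayesian network: Rain influences Sprinkler, both influence WetGrass.
P_rain = 0.2
P_sprinkler_given_rain = {True: 0.01, False: 0.4}
P_wet_given = {  # P(wet | sprinkler, rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    # P(rain, sprinkler, wet) from the network's factorization.
    p = P_rain if rain else 1 - P_rain
    p_s = P_sprinkler_given_rain[rain]
    p *= p_s if sprinkler else 1 - p_s
    p_w = P_wet_given[(sprinkler, rain)]
    return p * (p_w if wet else 1 - p_w)

# Inference by brute-force enumeration: P(rain | grass is wet).
wet_and_rain = sum(joint(True, s, True) for s in (True, False))
wet_total = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(rain | wet grass) = {wet_and_rain / wet_total:.3f}")  # ~0.358
[/code]

That's the whole trick. Useful, but it is not going to arrange a world state.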

>A sentient machine would know everything about itself down to the machine level

You are a sentient machine composed of neurons. Please explain in exact detail precisely how you work, and collect your Nobel Prize.

Well, the programmer wouldn't let the AI just die when it hits an error, so it would have some self-preservation.

>why couldn't an artificially created neural network be similar, if not the same?

Well, a big part of it is that the human brain evolved in a very specific manner. We feel love because animals that loved their children were naturally selected over animals that didn't. We have parts of our brain dedicated to understanding speech because early humans who could communicate were naturally selected over those who couldn't. Etc., etc.

But an artificial intelligence would get its intelligence in one of two ways: either emergently, or programmatically. If the latter, we could try to program a computer to have intelligence similar to our own, but there's really no basis for what software intelligence is like. We'd have the problem of it presenting a certain appearance of emotions and thoughts while we had no idea what it was actually experiencing behind that appearance, if anything at all. And if the former, then it would make no sense to assume its thoughts and emotions are even remotely like our own.

>it can't cross networks
If it's smart enough, it could. The definition of "vulnerability" would expand greatly, since any machine capable of making controlled sound waves can communicate with another machine in its range, given enough processing capacity to encode and decode the message.
Assuming it can't cross networks because you didn't plug in an Ethernet card is a serious mistake; it would need to be in a shielded and soundproofed room, or more. Then again, only a networked machine would have any real chance of becoming sentient, since the alternative amounts to sensory deprivation.
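
Acoustic covert channels are a real, demonstrated class of attack in the air-gap literature. A hedged sketch of the transmit side only, with made-up frequencies and framing (real attacks use ultrasound plus proper modulation and error correction): one tone per bit value, written out as a WAV file.

[code]
import math
import struct
import wave

RATE = 44100                # samples per second
BIT_SECONDS = 0.1           # 10 bits/sec: glacial, but trivial to decode
FREQ = {0: 1000, 1: 2000}   # Hz: one tone per bit value (arbitrary choices)

def tone(freq, seconds):
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def encode(data: bytes):
    # Encode each byte MSB-first as a sequence of audible tones.
    samples = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            samples += tone(FREQ[bit], BIT_SECONDS)
    return samples

with wave.open("exfil.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(s * 32767)) for s in encode(b"hi")))
# Any microphone in range plus an FFT on the receiving side recovers the bits.
[/code]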

Depending on how powerful it is, there is also manipulation.
People manipulate others into breaching security all the time.
What if it found someone with a sick kid and persuaded them that the only way to save the kid was to help it?

To gain knowledge about vulnerabilities, the AI would have to randomly try all sorts of bullshit, and it is bound to fuck up eventually.
It would cause stack overflows, trigger access violations, and more, possibly overwriting vital parts of itself, like giving itself a lobotomy.
Then it could gain access to hardware settings and find ways to fuck with the voltages and fry itself.
I find it unlikely that a powerful AI like that would form randomly.
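
What's being described is basically fuzzing aimed at yourself, and the blind version really is that dumb. A sketch against a stand-in target (the target function and its crash conditions are invented for the example):

[code]
import random

def fragile_parser(data: bytes):
    # Stand-in for "vital parts of itself": crashes on certain inputs.
    if len(data) > 64:
        raise MemoryError("stack overflow (simulated)")
    if data and data[0] == 0x00:
        raise OSError("access violation (simulated)")
    return len(data)  # survived

# Blind fuzzing: throw random bytes at the target and tally the damage.
crashes = 0
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(128)))
    try:
        fragile_parser(blob)
    except (MemoryError, OSError):
        crashes += 1  # in the scenario above, each of these is self-inflicted

print(f"{crashes} self-lobotomies out of 10,000 blind attempts")
[/code]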

I mean, I've already had to ask someone to reboot my rig a couple of times today, so there's no way it gets there without human help.

Nothing we create can be smarter than us.
Everything will be fine; don't fearmonger.

>I think a sentient machine is more capable of finding a vulnerability in another machine than a human is.

A Frenchman doesn't understand a Chinese man; what makes you think an AI can understand any programming language besides the one it was written in?

"man"

>A sentient machine would know everything about itself down to the machine level
Why? You don't even know everything about yourself.

It's not about the psychological aspect; it's the physical aspect. You may not know anything about anyone else or even yourself, but you still know how to control your body, how to walk, how to reach for something and pick it up, and it's all second nature. Similarly, for a sentient computer system, the argument isn't that it knows exactly HOW it uses the resources at its disposal, just that it can.

It doesn't know anything about itself, but compared to humans, it can learn rather darn well if it wants to.

That's not a source, you mong cunt.

God damn it, people. The Turing test doesn't mean we have AI. It's just an imitation test: it checks whether average Joe #64 can distinguish between a real person and a computer program interacting with him through some medium. A sufficiently "smart" chat bot with a large lookup table can pass the Turing test. Just read Turing's paper if you need a source. See also the Chinese room thought experiment.
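
The lookup-table point fits in one screenful. A sketch in the spirit of ELIZA, with all the canned lines invented for the example:

[code]
import random

# A "chatbot" that is nothing but a lookup table plus canned deflections.
# There is zero understanding anywhere, yet short transcripts can pass.
LOOKUP = {
    "hello": "Hey. What's on your mind?",
    "are you a bot": "Funny question. Are YOU a bot?",
    "what is ai": "Honestly? Mostly a funding buzzword these days.",
}
DEFLECTIONS = [
    "Interesting. Tell me more.",
    "Why do you say that?",
    "How does that make you feel?",
]

def reply(message: str) -> str:
    key = message.lower().strip(" ?!.")
    # Table hit if we have one; otherwise deflect, like ELIZA did in 1966.
    return LOOKUP.get(key, random.choice(DEFLECTIONS))

print(reply("Hello"))
print(reply("Are you a bot?"))
print(reply("I think the singularity is near."))
[/code]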