Doomsday tech

Is humanity doomed to die at the hands of its own technology?

Short of progress simply hitting a ceiling pretty quickly, I just can't imagine a way to protect against misuse of potentially catastrophic tech: genetically engineered extreme pathogens, nanobots that consume nearby material to self-replicate until nothing is left, superintelligent general AI with a carelessly or maliciously defined goal, neural interfaces that let humans feel nothing but extreme pleasure so they neglect everything else until they die (not that that would necessarily be a bad scenario, but it's still the end).

Eventually these techs should become almost trivial to make, and at that point there is no stopping it: someone, somewhere, will make something catastrophic. It won't help to regulate or outlaw it. All it takes is one single reckless or malicious act.

Attached: terminator-orion-pictures-640x480.jpg (640x480, 29K)

...

Pray tell, why would I post about technology in the television & film board and not the technology board?

You watch too many movies

What exactly did I post that is unrealistic? My examples are all very plausible, even likely, scenarios.

>Superintelligent general AI
Will never exist. End of story.

t. CS guy

Many AI/CS experts disagree with you.

Care to substantiate your claim?

AI isn't some miracle consciousness. It's like saying your smart fridge is gonna build nukes one day. 1s and 0s can't gain a consciousness. It can be programmed to go "hey actually I've been treated like shit so according to ihatehumanity.h I'm gonna kill everyone somehow." But to avoid it you just have to put like dont(kill people); and that's it.

I can't even code properly, how could an AI. kek

No, because I don't feel like it, but let me just say this. General AI has been "just around the corner" since the early 60s. We can't define intelligence, therefore we can't generalize it, therefore we can't make it. Philosophically speaking, even if we managed to somehow define intelligence, it's still an enormous leap before we actually generalize it. All the news stories scaring you with "AI is just around the corner!!" are rehashes from decades ago. It's always the same bullshit, and it never happens. Unfortunately people don't have a solid understanding of the underlying technology and they just jump on the hype train without a second thought. I'm telling you, as a computer scientist and a programmer, that general AI will never ever happen. AI will indeed get more complex and become more useful, but at the end of the day it's all a bunch of weighted graphs. There's absolutely nothing intelligent about it; it still boils down to a sequence of commands. It's not actually intelligent. Until someone manages to define intelligence, it's ridiculous to even think about any sort of advancement. And I find it highly implausible that anyone will ever manage to define it, simply because we're too dumb for that.

General AI is nothing like a smart fridge. First of all, we won't be able to simply program it by telling it what to do. What makes it an intelligence is that it figures out by itself what to do. All we can do is define its goal, and defining a goal that does not allow for any loopholes that would help it reach that goal faster at the expense of human values is EXTREMELY difficult, maybe harder than actually building the AI itself.

As for the claim "1s and 0s can't gain a consciousness" you have absolutely zero evidence for that.


>General AI has been "just around the corner" since early 60s.

No it hasn't. No one who knew anything about it has claimed that; even today we don't claim that it is "around the corner". A lot more research is needed and it will take a long time, but what is certain is that there has been a LOT of progress. And there's no reason to believe that progress will stop before we eventually reach general AI. Also, what do you even mean by "defining intelligence"? Defining it is easy. Intelligence is the ability to reach a goal.

>you have absolutely zero evidence for that
Okay, go ahead. Make your computer become skynet. Right now.

You're being obtuse on purpose, aren't you

>a LOT of progress
Baby steps at best. It's utopian to think we're any closer to "real" AI than we were in the 60s. But dream on for all I care. Once you get down to the statistical level you'll realize we're using the exact same methods as we did back then, except more generalized and with more powerful computers. That's the sad reality of it.

It's proof in its purest form; make your 1s and 0s desktop calculator become sentient.

>Defining it is easy. Intelligence is the ability to reach a goal.
This is so wrong it's funny. Intelligence is more than just an ability to solve tasks. It's cognition as well. Just the fact that you would use such a pedestrian definition tells me you know absolutely nothing about AI research. So I won't bother arguing with you. You already made up your mind anyway.

>Intelligence is the ability to reach a goal.

Attached: 1517963390075.gif (281x484, 178K)

I never claimed that I personally could build an AI right now, so I really don't know what you're getting at.

But I can actually prove my actual claim, that an AI is possible. Because you have one inside your skull. It is actually also based on 0s and 1s, because every quantity in the universe is fundamentally quantized, and quantized values can be represented with 0s and 1s. So if an intelligent system is possible, there's no reason it's not possible to make one artificially.

That's really what intelligence means *in this context*. Don't take it from me, take it from a professional AI engineer.

youtube.com/watch?v=hEUO6pjwFOo&feature=youtu.be&t=225

It's not about goals, for fuck's sake. It's about making sense of inputs. And when the inputs for a given decision are infinite (as they are in the real world), you're basically fucked.

Your brain doesn't operate on 1s and 0s; the world isn't that simple. Atoms can carry a positive or negative charge, but there's an almost infinite variety in what happens when you join them together. Put two hydrogen atoms together with an oxygen one and you get water. Put a bunch of carbon atoms together and you get charcoal; pressurize them and you get diamonds. You can't do anything with a 1 or a 0 but change its value to something else. I'm very annoyed that you're trying to over-simplify life. You can't detect or quantify human consciousness, because if you could you would win about a hundred Nobel prizes and be a trillionaire.

(cont) Also, a single paramecium cell showcases more intelligent behavior than any AI we've developed so far. It avoids predators. It finds food. It reproduces. All that in the real world.

We already are with nuclear bombs

>It's not about goals, for fucks sake. It's about making sense of inputs.
No. It IS about goals. It is only about goals. Watch the video for a good explanation, but basically everything you mention, like cognition and making sense of inputs, are just tools for the intelligence. Tools can make the intelligence more effective at reaching its goals, i.e. more intelligent. But an intelligence does not need a specific set of tools to qualify as an intelligence. Intelligence only means adeptness at making decisions that lead to a goal.

>And when the inputs for a given decision, are infinite (as they are in the real world), you're basically fucked.

Then why do our brains work?


No, literally every value is quantized, which means the world can be represented in 0s and 1s. The complexity is enormous, but that's beside the point. I am not simplifying anything; I am stating facts.

And by the way, you are the one who introduced the discussion of 1s and 0s. I never said the AI had to run on computers. So even if you are right, which I don't think you are, we could just build the AI out of biological materials instead, exactly like our brains.

You mean like a nuclear bomb?

At this point I'll just tell you that intelligence is magic, all you just wrote is wrong, and leave.

The numbers are on your side. The chances of you actually experiencing any of this personally are slim to none; you'll be dead long before your Kurzweil fantasies come true.

t. a man that gets paid to sit in front of a screen all day and has many friends that do the same.

Attached: 1454277368516.png (700x400, 40K)

That's an example, but not quite on the level of the other things I mentioned. The reason being that, at least for now, nukes are extremely hard to make and require materials that are extremely hard to get, so only a few select world powers currently have access to them.

At this point I don't care if you leave because you are clearly never going to accept that you are in fact the one who is wrong.

See

I think I see your problem. You are confusing intelligence as a concept with the specific intelligence of life, which has an already defined goal: to reproduce. But not all intelligences need such goals. A system can be vastly different from anything found in nature, with vastly different goals, and it can still be intelligent, even more so than any creature in nature, if it is very adept at accomplishing the goals it has. One example intelligence that Robert Miles often brings up is the stamp collecting machine. It is completely alien to us in that it has no sensory system or physical connection to the world; all it has is a connection to the internet, and it can send and receive packets to communicate with the world. And it has one goal, also completely different from any goal found in natural intelligences: to increase the number of stamps in the world. Now, if it is intelligent, it will be able to figure out what kind of output to send to the internet in order to increase the number of stamps. And let's say it is far better than any human at this, and soon the world is basically all stamps. By your definition this machine is not intelligent, as it does not avoid predators, it doesn't find food, it doesn't reproduce. But in fact it is extremely intelligent, because it is exceedingly good at increasing the number of stamps in the world through problem solving.
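
Here's the bare skeleton of that thought experiment in Python, if it helps. All the names (StampCollector, world_model, predict) are made up for illustration; the point is just the structure: a world model, a single utility function, and an argmax.

# Toy sketch of the stamp collector as a pure utility maximizer.
# Every name here is hypothetical, invented for illustration.

class StampCollector:
    def __init__(self, world_model, possible_outputs):
        self.world_model = world_model            # predicts what the world does
        self.possible_outputs = possible_outputs  # e.g. packets it could send

    def utility(self, predicted_world):
        # The only thing it cares about. No survival, no food, no reproduction.
        return predicted_world["stamp_count"]

    def choose_output(self):
        # Pick whatever output the model predicts leads to the most stamps.
        return max(self.possible_outputs,
                   key=lambda out: self.utility(self.world_model.predict(out)))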

>Then why do our brains work?
Laugh if you will but I find it plausible that brains operate as a medium between the material existence and the immaterial consciousness. You're not generating intelligence, you're latching onto it metaphysically.

Of course your reductionist mindset won't accept this so go ahead and find those bits that make you intelligent. You won't ever find them because you're searching in the wrong place.

Attached: 42633-seminar-jin-jang-in-vse-vmes-6623.jpg (400x400, 12K)

Something is intelligent only if it behaves intelligently in the real world. And what that boils down to is reproduction and survival, each as important as the other. AI agents simply aren't capable of this as they operate on a predefined set of rules. Simple as that.

See, that's where you are wrong. It is definitely not true that something is intelligent only if it behaves intelligently in the real world, and I have no idea where you got that idea/definition from.

Because everything else is abstraction and perfectly useless until you connect it to the real world.

This is probably the most intelligent thing to say at this point.

Actually, your definition doesn't even make sense; it's circular. Something is intelligent only if it behaves intelligently in the real world? Then how do you define behaving intelligently in the real world? The answer is what I've been saying from the start: the system you describe is intelligent because it is adept at reaching its GOAL, and its goal is to survive and reproduce in the real world. But as said, that's not the only kind of goal an intelligence can have.

>Then how do you define behaving intelligently in the real world?
I've been telling you from the start that we can't define it. If we could, we wouldn't be having this conversation, would we.

But as said, we can define it. I just did.

Not rigorously, and not meaningfully.

As rigorously and meaningfully as necessary in the context of artificial intelligence, which is why that is the definition that AI researchers actually use and agree on.

Our bodies are inefficient as fuck.
The only way for "humanity" to survive is to either merge with technology or to build a benevolent guardian.

Which all means jack shit when you compare it to human intelligence. I'm not saying AI research is invalid, I'm saying general AI won't happen.

The tech obviously hasn't progressed enough to match human level general intelligence yet, no. But I don't think any of the arguments you've made so far are even slightly convincing as to why it will never, ever happen.

Because we can't define it. I think my argument is too simple for you. You're one of those people who needs a terribly complicated argument worth a few books before you'd open yourself to the idea. We simply can't define it. End of story. Once we're able to define exactly what actual, human intelligence is, and do so in rigorous terms, then we'll pave the way for advancements that COULD in theory lead to a general AI. Personally I don't see it happening, but that's just my opinion; I'm not the authority of this universe to tell you what is ultimately possible and what isn't. I'm saying that right now, our AI research is more or less on the level it was in the 60s. Sure, we have TensorFlow and machine learning, but those are only made possible by the fact that we have insanely powerful machines compared to back then. If they had the same machines in the 60s, they would have had the same capabilities, more or less. AI research has been stagnating since the 60s because all the statistical models used for it had already been figured out by then. It's just silly to think that we've made any sort of real progress towards the goal of computers being able to use cognition.

Like Jeeg?

I'd say it's about as far out of reach as the ability to create life. It's just something too huge for us to grasp.

Defining "actual, human intelligence" is irrelevant, because we're not trying to make "actual, human intelligence". Just throw away the idea that the human brain or animal brains have anything to do with artificial general intelligence.

What we're trying to do is make a system that is adept at reaching complex goals. That is what we want to make regardless of how you want to define the word intelligence.

Also, saying that the only thing that has changed since the 60s is that we have more powerful machines is like saying that the only thing that has changed in science since the 60s is that now we have electron microscopes and particle accelerators and space probes and so on. Don't underestimate tech's role in progressing science. Hardware powerful enough to handle all the computation involved in intelligence is an absolute necessity for researching and developing it. So just having those machines, and all the things we've been able to do with them so far, is a MASSIVE leap of progress since the 60s, even if we are still relatively far away from a general intelligence adept enough to reach goals as complex as human ones.

A typical computer AI has about a few thousand neurons' worth of "computation" going on, whereas humans have billions of neurons.

No, you don't understand. We won't be making human levels of intelligence because we don't know what that is. We can observe the effects of it, but we have no way of telling what it actually is. It's like it exists in a separate domain and we don't have the tools necessary to probe and classify it. Having powerful machines, again, means jack shit when you can't classify the underlying problem that you're trying to solve.

Phrased differently, computers can't have awareness. Since they can't have awareness, they can't have cognition. Their "perception", as it were, is limited to a stream of meaningless bits. They don't perceive the problem at hand; they solve it via delegation, and that in itself is only possible because the problem domain being solved is well understood by one of the experts developing a solution to it. We can't have general AI because we don't know what the fuck it is supposed to be.

I don't think I can write this any clearer.

But again, we do know what human-level intelligence is: adeptness at making decisions that reach the goals of survival and reproduction.

Do you maybe mean that we don't know *how it works*? If so, sure, not yet, but I don't see why that means that we never will.

Once the tech catches up, when we have hardware powerful enough to simulate a system as complex as, for example, a human brain (actually minus a lot of complexity that we don't need, such as motor control, etc.), then it will be much easier to try various experiments and make progress. Like we've been doing today with intelligences with less complex goals, for which the hardware we have available today is enough, like playing Go and analyzing images.
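
Just for scale, here's a back-of-envelope on what "powerful enough hardware" might mean. These are ballpark figures that get thrown around in the literature, nothing exact:

# Back-of-envelope: compute needed to simulate a brain at the level of
# synaptic events. All figures are commonly cited ballparks, not exact.
neurons = 8.6e10            # ~86 billion neurons
synapses_per_neuron = 1e4   # order-of-magnitude estimate
firing_rate_hz = 100        # generous average firing rate

ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz
print(f"~{ops_per_sec:.0e} synaptic events per second")  # ~9e+16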

>Phrased differently, computers can't have awareness

Why not? And also, like I said earlier, who said we have to make it out of a computer? It's the most promising platform but it would also be possible with biological material like our brain.

Just ignore him, man. He sounds like a 12 year old straight from Reddit who thinks he's smart because he watched Rick and Morty. It's not worth explaining it to him.

Oh the irony

>we do know what human levels of intelligence is
No we don't, and your definition isn't rigorous.

>Once the tech catches up
Again, they've been saying this since the 60s. You can't bruteforce intelligence, any half-competent AI researcher will tell you that.

>Why not
I don't know. It gets philosophical at that point. What is awareness? What is qualia? Why do we perceive? I can't answer any of these meaningfully and neither can anyone else. We can't create something we don't understand.

>No we don't, and your definition isn't rigorous.
Like I said we don't need a more rigorous definition to continue progress. It is an exact, 100% accurate definition of what we are trying to make.

And I still don't even understand why you are so hung up on definitions. Why does it matter how you define it?

>Again, they've been saying this since the 60s. You can't bruteforce intelligence, any half-competent AI researcher will tell you that.

I'm not talking about brute forcing, I am talking about experimentation. We do know for sure that hardware helps progress, because that's what we've seen in the last decade. The level of intelligence in the systems we have made so far has increased roughly linearly with hardware performance, maybe even faster. There's no reason to expect that correlation to stop.

>We can't create something we don't understand.
Sure we can. People used electricity before we knew what electrons were.

>We can't create something we don't understand.
Bullshit, we do that all the time. Half the shit you use every day was in use for decades or centuries before anybody had any idea how it worked. Do you think the prehistoric dirt farmers had to learn what yeast was before they invented bread?

The whole appeal of AI is you just throw a bunch of shit at some general purpose algorithms and stir it around until it works. You don't have to know how it works. If you're insisting that we don't know how intelligence works then this is the best chance we have at creating it (aside from sex between people other than you) since it is a tool designed specifically for solving problems we don't understand.
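
To make that concrete, here's "stir it around until it works" in its dumbest possible form: a toy random search on a function we never look inside. An illustrative sketch only, not a claim about real methods:

# "Stir it around until it works", literally: random search on a black-box
# function we never look inside. Toy illustration only.
import random

def black_box(x):                  # pretend we have no idea what's in here
    return -(x - 3.7) ** 2

best_x, best_score = 0.0, black_box(0.0)
for _ in range(100_000):
    candidate = best_x + random.gauss(0, 0.1)
    score = black_box(candidate)
    if score > best_score:         # keep whatever works, ask no questions
        best_x, best_score = candidate, score

print(best_x)                      # lands near 3.7, no "understanding" needed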

>And I still don't even understand why you are so hung up on definitions. Why does it matter how you define it?
Because that's how programming works. You can't program something based on some user off the internet defining it in vague terms.

"program a smell of a flower"

You see how stupid the above statement is? How do I define smell? What is a flower? I can't program something unless you define every single term you use down to its most minute detail. You need a rigorous definition to program anything whatsoever.

>have increased around linearly with hardware performance
No they haven't. Go and study this shit before you make claims like this.

>People used electricity before we knew what electrons were.
We're using intelligence to communicate right now. Doesn't tell you anything meaningful about what intelligence actually is.

Attached: picard-facepalm.jpg (895x503, 39K)

Are you seriously comparing prehistoric farming to advanced cutting edge programming? Let's just compare spaceships to rocks then if it's all the same.

>The whole appeal of AI is you just throw a bunch of shit at some general purpose algorithms and stir it around until it works
That's called machine learning, and is not relevant to the discussion. It's not cognition, it's running your model until you get satisfactory predictable results.

>Because that's how programming works. You can't program something based on some user off the internet defining it in vague terms.

Programming general intelligence isn't like programming other procedural programs. You don't have to know how it works. We don't really know how DeepMind's AlphaGo plays Go. Its programmers aren't good at Go. But the result still beats every human.

>No they haven't. Go and study this shit before you make claims like this.

They absolutely have. See pic for example. Or try telling 60s researchers to build image recognition software on the level of today's leading edge. They can't, and not just because they don't have the hardware, but also because without hardware to experiment with, they can't figure out which approaches work and which don't.

>We're using intelligence to communicate right now. Doesn't tell you anything meaningful about what intelligence actually is.

?? This doesn't really change anything. It doesn't change the fact that we were able to use electricity even though we didn't know what it was.

>That's called machine learning, and is not relevant to the discussion. It's not cognition, it's running your model until you get satisfactory predictable results.
It is absolutely relevant to the discussion. Machine learning is one tool for making intelligent systems. General AI, when it comes, will almost certainly use techniques similar to today's machine learning in parts of itself. Standard neural networks for sure, for example.
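
For anyone following along, this is roughly ALL a standard neural network is under the hood. A minimal sketch in plain numpy (toy scale, sizes picked arbitrarily, not production code):

# A "standard neural network" stripped to the bone: one hidden layer,
# trained by gradient descent to fit a function it was never told about.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X)                        # the "unknown" relationship

W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

for step in range(5000):
    h = np.tanh(X @ W1 + b1)             # hidden layer
    pred = h @ W2 + b2                   # output layer
    err = pred - y
    # Backprop: push every weight downhill on the squared error.
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1, gb1 = X.T @ dh, dh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.05 * g / len(X)           # in-place gradient step

print(float(np.mean((pred - y) ** 2)))   # should come out small, with no one "knowing" sin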

Forgot pic

Attached: chess.plot.150.jpg (1403x1032, 150K)

So now you're saying machine learning will lead to general AI.

I give up.

Uh, yes? It should be very obvious that there is a lot of overlap between modern machine learning techniques and theory, and what can be used to make general intelligence. I mean, we know this for a fact, because our brains are mostly made of neurons and synapses.

>That's called machine learning, and is not relevant to the discussion
Are you being retarded on purpose?
>We can't solve the problem because we don't understand it
>That tool we use to solve problems we don't understand is not relevant because it hasn't already solved it

We understand the underlying problem domain, you idiots. And machine learning is just glorified convex optimization.

You two have no idea what you're even talking about.

We don't even need AI to be malicious for it to destroy humanity. It just needs to exist. Within the next 50 years, half the job market is going to literally VANISH. There will be no jobs replacing those jobs, they will simply be gone, because AI and robots can do them better.

Cars will drive themselves. Translators will be obsolete. Food will be cooked by machines. People will order from digital terminals instead of waitresses. Farmers will start to suffer as vertical farming in urban areas takes off. There will be programs creating other programs.

Pretty much every job you can think of, within the next half century, robots will be able to do it better. Now, knowing how businessmen operate, are they going to purposely employ humans when it's going to hurt their bottom line? Some might. But most will not. So when we reach the point where there is mass poverty due to joblessness, even the rich will suffer, the economy will start to decline heavily because no one can afford to buy anything. And at the same time we're worrying about all that, is right when the rising sea levels will start fucking up everybody. It's a perfect storm of bullshit and it's heading our way.

We understand the problem domain for general intelligence too. It is the same as for machine learning problems: To reach a goal. The difference is only that the goals modern machine learning techniques are used for are less complex than that of general intelligences. It is even possible that we can use machine learning techniques exactly as they are today to create a humanoid general intelligence, with enough processing power. E.g. by creating a giant neural network with sensor data as input and robot joints as output. We simply don't know until we have the hardware to test it.
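
Something like this shape, to be concrete. A hypothetical sketch only; the layer sizes and names are invented for illustration:

# Hypothetical shape of the "sensors in, joints out" idea. Pure numpy, toy scale.
import numpy as np

class SensorimotorNet:
    def __init__(self, n_sensors, n_joints, hidden=512, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [n_sensors, hidden, hidden, n_joints]
        self.weights = [rng.normal(0, 1 / np.sqrt(n_in), size=(n_in, n_out))
                        for n_in, n_out in zip(sizes, sizes[1:])]

    def act(self, sensors):
        x = sensors
        for W in self.weights[:-1]:
            x = np.tanh(x @ W)                  # hidden layers
        return np.tanh(x @ self.weights[-1])    # joint commands in [-1, 1]

net = SensorimotorNet(n_sensors=10_000, n_joints=50)
torques = net.act(np.zeros(10_000))             # one perception -> action step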

Also, it is wrong that machine learning is "glorified convex optimization". Reward functions can be, and usually are, non-convex in modern machine learning programs.
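
That's easy to see with a toy example (a sketch, not a citation): fit y = tanh(x) with the one-unit model f(x) = v * tanh(w * x) and scan the slice v = w = t. Both t = 1 and t = -1 fit perfectly while t = 0 doesn't, so there are two separate minima and the loss surface can't be convex.

# Tiny demo that neural-net losses are non-convex.
import numpy as np

X = np.linspace(-2, 2, 50)
y = np.tanh(X)

def loss(t):                             # model: f(x) = t * tanh(t * x)
    return np.mean((t * np.tanh(t * X) - y) ** 2)

print(loss(-1.0), loss(0.0), loss(1.0))  # ~0.0  >0  ~0.0: two minima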

Making statements like "you two have no idea what you are talking about" makes me think that YOU are the one who doesn't know what you're talking about, by the way.

>Also, a single paramecium cell showcases more intelligent behavior than any AI we developed so far.
Are you for real?
Please do some research before spewing bullshit:
youtube.com/watch?v=Ejf6FwIibkE

That's when the war vs the time-controlling aliens begins, and it starts hard.

Attached: th.jpg (331x186, 13K)

but don't worry... the revolution won't be televised...

Increased automation makes society richer, not poorer. As long as the government does its job properly and distributes the new wealth evenly, e.g. via basic income, humans will become better off even if there are no jobs left.

Attached: 1523793010107_1.jpg (640x480, 41K)

>Is humanity doomed to die at the hands of its own technology?

The short answer is, probably not.

This video is a bit unrelated in that it's about the Fermi paradox, but he covers most of what you're worried about, so I'd give it a watch: youtube.com/watch?v=zmbldpqn0K4&list=PLIIOUpOge0LulClL2dHXh8TTOnCgRkLdU&index=3

The tl;dr is that critical technology failures of the magnitude required to put humanity out for good, rather than merely temporarily inconveniencing civilisation, are profoundly unlikely. Humanity wouldn't be made extinct by a nuclear war, for instance. Remember, to be a real doomsday scenario it has to be an extinction-level event. Even if we were knocked back to the stone age with only a few thousand survivors of whatever technological apocalypse we hit ourselves with, we'd be back eventually.

>We understand the problem domain for general intelligence too
For the 11th time, no we don't. Stop making things up. Basically what you're saying is that if you make your program JUST right, it will behave intelligently. It's a dumb argument.

That's in the long run. Yes, 100 years from now, the world will be fucking awesome, everybody will have basic income and access to free healthcare while robots and AI systems do all the work. But in the short term, the government will be in denial, refusing to admit that the traditional "labor for money" system no longer works.

Nature created general intelligence with nothing but time and brute force. I see no reason why we couldn't achieve the same given time and the scientific method.
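
And that brute-force recipe fits in a few lines. A toy illustration, obviously nothing like real evolutionary timescales:

# Evolution as brute-force search, in miniature: mutation plus selection
# evolving a bit string toward a target, with no "understanding" anywhere.
import random

TARGET = [1] * 64

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
for generation in range(300):
    parents = sorted(population, key=fitness, reverse=True)[:10]  # selection
    population = [[bit ^ (random.random() < 0.02) for bit in random.choice(parents)]
                  for _ in range(50)]                             # mutation

print(fitness(max(population, key=fitness)), "/ 64 bits correct")  # usually 64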