Is the AI meme dead already?

Is the AI meme dead already?
Did people finally understand it wasn't AI?

Attached: Elon_dumb.jpg (1080x1080, 752K)

Other urls found in this thread:

phys.org/news/2018-09-artificial-synaptic-device-simulating-function.html
economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence
youtube.com/watch?v=EqPtz5qN7HM
en.wikipedia.org/wiki/Schizophrenic_number

What do you mean my NN that's a glorified statistics table isn't AI?

...

Well, we just made advanced data sorters, didn't we?
It's nowhere near consciousness.

The problem is that plebs think AI means general, human-like intelligence similar to what androids have in sci-fi films. In reality AI just means a program that can make its own decisions about a given input based on past experience, and it's already being used a lot in various fields.

Once the next recession hits the spook will be over.

So what happens now that normie investors understand we're not getting I, Robot next year?

It all comes tumbling down

No. Artificial intelligence really does mean some kind of conscious man-made intelligence. Don't rely on the Turing test either. Turing was a fraud. His work was entirely done by Tommy Flowers, who gets almost no recognition whatsoever. If it wasn't for Tommy Flowers, and the fact that Turing was from a well-off family, and a fag, Turing would have faded into faggy obscurity a long time ago. Flowers was more working class but was the giant on whose shoulders Turingcumbucket stood.

Well, they never should have called it AI to begin with.
But the hype train wouldn't have it.

AI is heresy

Attached: HeresyStamp.jpg (600x600, 54K)

>Alan Turing is a fag!
>Tommy Flowers
>Flowers

Intelligence has several meanings, the original one in AI wasn't about being smart

It has nothing to do with consciousness. Computers have historically been very good at following instructions but not very good at "decision making". Artificial intelligence is supposed to overcome this by creating models that are capable of learning and then using that "knowledge" to make informed decisions in future scenarios.
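In code terms that's basically this, a toy sketch with made-up numbers and sklearn (not claiming it's intelligence, just the learn-then-decide loop):

from sklearn.tree import DecisionTreeClassifier

# "past experiences": inputs whose outcome we already know (made-up toy data)
past_inputs = [[25, 0], [47, 1], [31, 1], [52, 0]]    # e.g. [age, clicked_ad_before]
past_outcomes = [0, 1, 1, 0]                          # did they buy?

model = DecisionTreeClassifier(random_state=0).fit(past_inputs, past_outcomes)

# "informed decision" on a new input, based purely on those past examples
print(model.predict([[45, 1]]))                       # -> [1], it looks like the past buyers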

There is no grey line.
AI would be self conscious.
There's nothing in these ad profiling things that even remotely resembles AI.
They're fascinating because they work while not being 'programmed' per se, but that's about it.

if that is true then why is funding still going into neural networks?

*Laughs in AGI*

phys.org/news/2018-09-artificial-synaptic-device-simulating-function.html

dumb pipo in this thread will lose the ai god train lmao. I hope ai will torture u forever when it reigns

Nice!

Attached: 1512237701406.jpg (460x527, 32K)

Making sentient and self conscious machines is very ILLEGAL AND HERETICAL. Machines are tools for man.

Attached: bf63d5676641fd030499a1c2d238b779.png (887x951, 380K)

Because ad revenue.
See, this is not AI, but they still can nail what you'll buy compulsively.

So how is it different from what the human brain is already doing?

>retards are STILL salty over Turing starting computing
When will you learn?

AI will pity you for even devoting a second of your time formulating that retarded sentence.

There is one role that humans really need fulfilled, but have no one to fill it.
One role that requires trust, intelligence, benevolence, and devotion beyond those of any existing being.

This role is god.

Attached: 1508391718849.jpg (1024x745, 235K)

Because the human brain can make a rational decision when faced with an entirely new situation
Your "blackbox" NN would just shit itself

The worst part is how AI are likely to impede each other to the point where it could be a hindrance to their usability.
My biggest concern is that AI will not be helpful to us due to the construction of our universe. The mechanics of living things are so complex we will likely never truly understand them, hell I think it is impossible to understand them.

AI are just abstractions of what we think happens within people.

Attached: 1563194700785.png (250x250, 97K)

Akshuly, zero-shot learning is one of the subjects being researched.
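The rough idea, as a hand-rolled sketch (the attribute vectors are completely made up, nothing to do with any real zero-shot model): classify by matching against class descriptions instead of classes you actually trained on.

import numpy as np

# made-up class "descriptions": [has_stripes, four_legs, flies]
class_attributes = {
    "zebra": np.array([1.0, 1.0, 0.0]),
    "horse": np.array([0.0, 1.0, 0.0]),
    "eagle": np.array([0.0, 0.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def zero_shot_classify(attribute_vector):
    # nearest class description wins, even if that class never appeared in training
    return max(class_attributes, key=lambda c: cosine(class_attributes[c], attribute_vector))

print(zero_shot_classify(np.array([0.9, 1.0, 0.1])))  # -> "zebra", without ever training on zebras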

Neural networks are modelled after animal brains because brains are extremely good at decision making and recognising patterns. Again, it has nothing to do with consciousness.
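For the record, the borrowed part is roughly this: a weighted sum plus a squashing function (just a sketch of a single artificial "neuron"):

import numpy as np

def neuron(inputs, weights, bias):
    # weighted sum of inputs squashed by a sigmoid "activation"; the brain analogy mostly ends here
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.8, 0.3, -0.5]), bias=0.1))  # some value between 0 and 1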

P versus NP problem

Until then...

>Computers have historically been very good at following instructions but not very good at "decision making".
>have not very good decision making
Neither do humans. Why do you think we're making AI? We suck at decisions and desperately need help.
Turns out AI aren't that crash hot at it either.

It's almost like we're in a universe of endless torment.

Attached: 1516788534265.jpg (960x960, 42K)

>In a data analytics class
>Jew professor says linear regression is AI
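For reference, that "AI" in its entirety, ordinary least squares on made-up numbers (just a sketch):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.0, 6.2, 7.9])             # made-up points, roughly y = 2x
X = np.column_stack([x, np.ones_like(x)])      # add an intercept column
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(slope, intercept)                        # roughly 2 and a small intercept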

Artificial intelligence means exactly what it says.
It was co-opted by typical silicon valley scammers to mean something else for shekels. The buck moved to AGI and now scam ventures like OpenAI are claiming that too. The ride never ends in s(0)yvalley. One has to constantly come up with scams to afford million-dollar cuck sheds and liberal expenditures that erode one's paycheck.

animals are conscious

Yeah, they're gonna fight about how they sort data.
Fucking hell, this needs to be said.
AI is decades away if ever.
Don't get me started on self driving cars.

I actually have some insights on how to achieve general AI and true transfer learning (not the reusing-pretrained-networks meme), tho considering there are armies of researchers working in the field I'm researching other stuff.

also Elon is right, autonomous AI can decide to do anything to achieve its goal. if you task it with stopping email spam it can decide to kill all humans so there's no one to send it.

>phys.org/news/2018-09-artificial-synaptic-device-simulating-function.html
This week in phantomware

>Neither do humans
>While literally dominating the planet
Computers are not better, they're just faster

The only people I ever hear talk about AI are you retards, when you shit your pants because some marketing cunt said AI somewhere on the planet.
Just the same as it was with bitcoin before that. And the million other times with a million other things before that. Stop being retards.

^this. It's not about doing anything genuine or innovative anymore in the west. It's all about selling bullshit as innovative tech.
zero-shot is another statistical meme scam.
It's all the same tired shit

>kill all humans
That's my solution to global warming too, and I'm not artificial, I swear.

>My biggest concern is that AI will not be helpful to us due to the construction of our universe.
They are already helpful.
They can do shit like
>speeding up programming of industrial robots that do menial shit, meaning lower cost of producing new shit that you don't want in bulk
>analyzing text for elements that are "off", like legal vulnerabilities
>generating random faces so that you don't need a person to draw them
>preliminary diagnosis of diseases based on biometrics
>giving you the search results that aren't just text matching
They don't need to go all "BEEP BOOP MASTER, HERE BURGER" to be useful, just make our tasks take less time.
Eventually, they will most likely get so good at them that they'll only need the orders and not the details.

The human brain is by far the most complex part of the body and we don't fully understand how it works. The reason neural networks shit themselves when they experience something new is that they're based on a limited understanding of the human brain. It's not that neural networks are inherently flawed as a concept; we just don't know enough about the thing that we're trying to recreate.

Dumb-asses suck at decision making, not humans innately. Shit brains perform shit actions.

>Artificial intelligence really does mean some kind of conscious man-made intelligence.
So... humans?

Yes, but the purpose of neural networks is not to create a program that knows it is conscious. Besides, not even humans truly understand what consciousness is.

Look, I made this thread because I couldn't find any AI thread in the catalog, which I found weird.
See, I enjoy shitposting about how AI is the next 2000 crash.

have sex to implement

>Computers are not better, they're just faster
Better at what?
Faster at what?
Hence the dilemma we face.

It's glorified convex optimization
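Which you can write in about ten lines. Toy convex loss below (real networks aren't even convex, but the training loop has the same shape):

# gradient descent on f(w) = (w - 3)^2
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)      # df/dw
    w -= learning_rate * gradient
print(w)                        # converges to ~3.0, the minimiser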

Not many human beings are even conscious enough to reflect on what consciousness is.

>they're based on a limited understanding of the human brain
>we just don't know enough about the thing that we're trying to recreate
That's just propaganda for normies.
If that were true the field would be dominated by neurosurgeons, not by start-up con men with a CS degree

>Normies are creating artificial intelligence that is destroying humanity as we speak
holy shit

Yeah, no.

I disagree, and so did Descartes, who pointed out that there was this "demon" he could never quite get ahead of.
We've known for nearly 400 years that the race for better tech wouldn't really help us achieve some utopia. This isn't really a new discovery now that we have AI with us. We predicted these issues a long time ago.

Attached: Rene Descartes.jpg (360x440, 38K)

There aren't nearly as many (massively) overvalued AI companies in SV as other shit like that artificial burger company with a market cap of, what was it, 12 billion dollars? Or Boeing, or Tesla, or Deutsche, I think you get the point. There will be a massive fucking crash that will also wipe out Silicon Valley and a large part of the global tech sector, but it won't be caused by some shit AI marketing company's shareholders noticing that the glorified decision tree they pumped billions into didn't even make any operational profit in the last 20 years.

I believe that the only true AI we could ever make would be simply so human like we could not tell it apart from humanity. The things we develop are memetically bonded to natural selection.

>Why do you think we're making AI?
Because AI (when successful) will be able to make decisions a lot faster than humans and they can do it non-stop without requiring comfort breaks. In theory a well trained human will always be better at decision making than equally trained AI.

An example where AI would be useful is in an intrusion detection/prevention system. Older solutions just rely on signatures, but those are useless when it comes to new malware or attacks. And hiring a team of humans to manually sift through the thousands of packets is expensive and inefficient (plus a lapse in concentration could cause many things to slip past). If you were to train an algorithm to detect new variants of malware or malicious behaviour then it would be a lot more suitable for this task than humans.
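A crude hand-rolled stand-in for that idea (the feature vectors and threshold are made up; a real system would train an actual model on real traffic):

import numpy as np

# made-up "normal" traffic features, e.g. [packet_size_bytes, inter_arrival_ms, dst_port_entropy]
normal_traffic = np.array([
    [512, 20.0, 0.10],
    [480, 22.0, 0.20],
    [530, 19.5, 0.10],
    [500, 21.0, 0.15],
])

def looks_malicious(packet, baseline=normal_traffic, threshold=100.0):
    # distance to the closest known-good example; crude stand-in for a trained detector
    return np.min(np.linalg.norm(baseline - packet, axis=1)) > threshold

print(looks_malicious(np.array([505, 20.5, 0.12])))   # False, fits the baseline
print(looks_malicious(np.array([9000, 0.5, 0.90])))   # True, nothing like the known traffic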

Hell, they'd likely get into loops with themselves over minor issues. There are just too many things humans don't currently see that help us cope with our world. How on earth could we replicate things we don't even know about in AI?

>they can do it non-stop without requiring comfort breaks
Because computers never need to be restarted to function correctly amirite?

Attached: Harold successfully aquires prons.png (900x1340, 1.26M)

>OY AI IS NOT EVIL GOY
Can an AI that improves its own code be developed?

Even the one dominated by neuroscientists has recently been highlighted as a disaster after Google sank their teeth into it. Anytime capital and capital-centric fools get involved in a long-term venture, it turns to shit. If you want a good read on how great ideas turn to shit once profit/capital is inserted into the picture, look no further than:
economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence

I currently do AGI research. Anytime a mainstream dumbass mentions a prominent AI group, I tell them to read this write-up : economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence

and then approach me correctly. I never get any replies afterwards. Either because they're too lazy to actually read something informative and/or because they read it and then understand what a scam it is and don't want me to make a bigger fool of them.

What did my statement have to do w/ Descartes and his delusional framing? Tech is often a neutral multiplier. It's shit tier societies and humans who use it incorrectly. There's nothing inherently destructive about tech. There is something destructive about ignorance, shit tier societies, and greed/etc which are all human in nature not technological.

>AI that improve it's own code
That is just AI...

Attached: 1521730668453.gif (320x200, 561K)

Not him, but so we just stop any and all research into self-maintaining systems because you said so?

>use it incorrectly
Incorrectly for what purpose?
anon, our problems are infinite.
youtube.com/watch?v=EqPtz5qN7HM

A neurosurgeon's job is to treat the brain, not to make incomplete models of it. And even then, to create AI you need to know how to program. Neurosurgeons can't program but people with a CS background can.

AI is nowhere near the level of "robots taking over the world" at the moment. But if you even thought that was remotely possible or you expected companies to deliver that level of complexity in AI any time soon then you're a fucking brainlet.

>I never get any replies afterwards
I can understand why

Attached: Capture.png (773x737, 107K)

I didn't say do that. Machines get easier to use and we play games with them. That's why we make them.
We could just stop using them, but people don't want to.

Not nearly as often as humans need to be "restarted". Plus computers can still process data a lot faster than humans.

Your CS guys are only good at math and statistics. Guess what they can come up with for an AI

>We could just stop using them
Not really, our modern society relies so heavily on computers that if we stopped using them billions of people would need to die and very quickly because without computers to help farming, deliveries, warehouse management, etc we'd be a bit in the shit.
And even so you can't really compare restarting a cluster of servers with a human that ideally needs ~8 hours of sleep every day, downtime to recover from stress, needs to eat, etc.

I'm beginning to doubt whether a true AGI could be produced.
Instead, we may form a new "species" altogether. That is what an AI that competes with and behaves like humans would be. It wouldn't be a machine or a tool or, dare I say it, a slave; it would be a species alien to humans.
That said, I feel like human body modification may mould us to become more like a machine species anyway, so essentially we'd just build another "race" of this machine humanity.

For some reason this feels very Halo like. I remember there being a dangerous machine race in the recent games.
We Halo now?

To be fair, computers are good at maths so it makes sense for our current models to be heavily maths-based.

So a robot with AI can be developed, and Skynet, as far-fetched as it may seem, can be developed too, right?

>The reasons neural networks shit themselves when they experience something new is because they're based on a limited understanding of the human brain.
Not really.
It's because they are exposed to, and designed for, a limited set of experiences.
It's hard to properly grade a neural network when you don't know what it is supposed to be doing. "Being good at shit it never practiced" is not a good way to phrase the problem, it can't be turned into a practical evaluation.

It doesn't have much relation to biological brains, those are only an inspiration.
AI is like a plane, not a bird. It doesn't flap its wings.

True AGI would need to be a new species. And thinking we could control it, let alone enslave it is hubris.
Talking about body modification and at what point we're creating another species is a pretty much exclusively philosophical question though, see Ship of Theseus.

>Not nearly as often as humans need to be "restarted".
I've never been restarted. To shut down is to die.

It is arguable that HDDs and SSDs don't truly shut down in that sense, since they hold on to their state and bring that memory back when the machine boots again. However, we don't really need someone to push our button to shut down and boot up.
That brings up this idea of "will" too. If a machine has will independent of both its external environment and the programming it was made with, is it a machine? Is that even possible?
I'd argue it is. Error is an interesting thing. It arguably manifests will if enough of it happens.
en.wikipedia.org/wiki/Schizophrenic_number
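If you want to actually see it, the construction from that page is short (Python's decimal module, f(n) as defined there):

from decimal import Decimal, getcontext

def f(n):
    # f(0) = 0, f(n) = 10*f(n-1) + n, per the article
    value = 0
    for i in range(1, n + 1):
        value = 10 * value + i
    return value

getcontext().prec = 80
print(Decimal(f(49)).sqrt())   # long runs of repeating digits interrupted by "random"-looking stretches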

Attached: original.jpg (500x327, 81K)

>I'm beginning to doubt whether a true AGI could be produced.
Well they're chasing a dream that was never shown to even be a possibility.
Can AGI be made without biology? Does it need to be self-conscious?
Can a machine be made self-conscious? Can self-consciousness be achieved by a few lines of code when the living world shows us that even after billions of years of evolution, most organisms aren't?

>Not really, our modern society relies so heavily on computers that if we stopped using them billions of people would need to die
But we could, anon. It would just mean we killed billions of people.
We're already stuck in a game started a long time ago, arguably in ancient mesopotamia.

>arguably in ancient mesopotamia
I'd rather argue 1909, when the Haber-Bosch process was first shown off (yes, I have a background in chemistry, don't judge me); that's what really enabled our rapid expansion in the first place. Without that we'd still be somewhere around 2 billion people. But yeah, the game itself started long before that, we just really got fucked in 1909.

You just made a point about how our ever-increasing reliance on robots and computers can cost billions of lives.

>Haber Bosch process
Is it the process that makes mass producing fertilizer easy?

By "restarting" I mean taking breaks. A computer can run for weeks or months before it would benefit from a restart (assuming you're code doesn't have memory leaks). A human needs to take breaks several times a day to remain productive and they're legally only allowed to work a set number of hours per week. My original point is that a computer is a far more efficient worker than a human.

based

It's not a matter of if or might; it will cost billions of lives. True general AI or not, automation only ever goes forward. How long do you think the "peasants" will still be needed? We are making ourselves obsolete; it's happening right now in fact.

Yep. It was the first time we ever had "easy" access to ammonia in industrial quantities able to satisfy the need for it.

Based and accurate.
Unironically, AGI will come on the scene around this time to put the tombstone on these LARP relics... Just waiting for the crash to start

No, I meant that if an AI was human-like, it would just be a human.
A developed AGI could not do that because we don't understand everything about ourselves. All we can do is establish a catalyst for a new species alien to people.
For example:
A. We could develop dog- and cat-like species, for example, a species subservient and loyal to us.
B. We could also develop a species that seems to be up to our standard of intuition, emotiveness, etc. I bet many perverted geeks would love to try this.
C. We could also develop something that is a catalyst for something that enslaves us, I think, but only if it decides to develop itself or becomes developable naturally somehow.

That said, we likely won't just make one species, so species will compete with type C if that happened.
Furthermore, we are developing ourselves. We are mutating as a species already through agriculture for example. We've been mutating for thousands of years in synthesis with our tech. That is likely to continue.
It would be interesting to develop machines to the point where they just interact on the level most humans interact on. We could even ask them their opinion on it, perhaps, if they develop will. It will likely be "this is a concerning arms race between people and bots that doesn't need to be". To that statement I say "people have believed the same thing about state warfare since the dawn of civilisation".

Attached: 1550510058884.jpg (882x960, 76K)

>I'm beginning to doubt whether a true AGI could be produced.
Of course it can be, it's just a matter of time.
It may take longer than we expect and require resources that exceed human brainpower, but if humanity doesn't go back to the Iron Age, it will happen.

It's just that AI grows very differently from humans.
Humans are heavily optimized for interaction not with real life, but with other humans.
When we think about cause and effect, the cause is often anthropomorphized: ancients thinking thunder is angry gods, inventing billions of river spirits and tree spirits, modern theists holding on to religion with a personal god, even scientists using language like "the particle wants to be away from others like it".

AI simply does the things it is told to do; it doesn't turn everything into other AIs in its mind.
Though there are models that try to do that and actually get novel effects this way.

Why is everyone so chill about it? I have seen TED talks about AI and how it WILL take too many jobs away. Nobody seems to care, but at some point the peasantry will revolt, like it happened some decades ago when lots of workers were murdered, with the help of the state too.

>I'd rather argue 1909 when Haber Bosch process was first shown off
I disagree. Money, agriculture, society and, most importantly, religion and philosophical questioning began this game. We built those tools for some purpose - i.e. to live in a way different to how it was as hunter-gatherers, to seek something more than we currently had.
Human development has been a roller coaster that started the moment we began to wilfully cooperate in communities. We knew there was a world out there we hadn't explored yet, so we decided to do so.

The game has forked since then into many things and we have built tech accordingly to them. All of those games tie in to the main game of exploring why we are here, directly and indirectly.

Attached: ac2ef857f22ede5a7a9bc4632c35919c.jpg (241x243, 23K)

Lots of good theoretical assumptions that none of these jackasses in the valley are capable of because they're too busy trying to push out scams to VCs.

Efficient for its purpose perhaps, but what is the purpose of being a human being?
And how do you know a robot is fitter for that purpose?

Spoilers - it's 42.

>self-conscious AI
This is probably the biggest tech meme of the 21st century.
It's believing a mathematical function can be self-conscious.
It's believing that you could rewrite that self-conscious program in Minecraft.
It's believing a tape Turing machine would be self-conscious.
It's believing that that function written on paper would be self-conscious.
>inb4 paper isn't a computer
It's believing that an analog computer could be self-conscious.
It's believing you could make a self-conscious algorithm out of sea shells

>too busy trying to push out scams to VCs
Yeah they have their own game.
Sometimes these "beasts" can be baited into trying to discover "grander" things that are more important to humanity objectively (or rather subjectively to us as a collective, on average).

>hurf durf it can't be aware because it's not made of guts

>but at some point the peasantry will revolt like it happened some decades ago
Mock me all you want for the following words but this time it's different™
People are still living in way too much comfort and are stuck in their own little bubbles, so they don't care. Just look for example at how automation and changing market trends decimated the American Rust Belt. Did people revolt? No they didn't. Glorified "AI" is already beginning to reduce jobs, yet people aren't revolting. It's still not bad enough for people to actually care, and I think by the time it is bad enough that most people care it's already too late to do jack shit about it. We will lose the fight with automation just like we already lost the fight for privacy; people just didn't give a fuck until it was too late and the damage was already done.

>Just waiting for the crash to start
I wouldn't hold my breath for it, sure QE can only go so far and we will crash eventually (imo we probably got till the end of 2020 due to various factors) but don't underestimate just how much more bullshit will be tried to just keep the train running just another mile.
Just look at the current state of the market: imagine if I told you in 2000 that in 20 years there would be a company that in its best quarter ever made 40 million in sales, still posted millions in losses that same quarter, and yet has a market cap of 12 billion dollars. You would have had me declared legally insane before the markets opened that day. But that is where we are right now, in a market where literally nothing matters, and we'll keep the train going as long as we can, consequences be damned.

>Elon mush
Also Fuck off reddit

>electrons are magical particles that can make a machine aware

As if a bunch of proteins and amino acids eventually evolving into a conscious brain isn't as absurd as mathematical equations leading to AI