Any AI engineer/ researcher here...

any AI engineer/ researcher here? what do you think when you see brainlet businessmen talking about AI as if they were experts while they actually understand nothing?

Attached: j467pn_grande.jpg?v=1522097473.jpg (600x360, 30K)

musk has a cooler girlfriend so he wins.

what the fuck does fuckerberg even do all day that he has no time to get some fucking muscles in those shitty little arms? jesus fucking christ, what a soiboi

Maybe he has an injury from lifting too much in his youth.

> intelligence
> here

I'm not an expert either, but when I do hear non-experts talk about what I know, it's usually just a high-level view with the inaccuracies you'd expect.
I imagine AI might be worse because people start off thinking about artificial general intelligence because of popular culture.

>mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark mark

its retarded when they fearmonger about "le epic AI takeover".
you decide what the "ebile" AI has access to, so if you're that scared of it then just restrict its access to some text input and output.
at the end of the day the AI will solve a problem for you, and the only reasonable point that I've heard is that the AI might find an unexpected way to solve your problem which might involve killing people, but you can add not killing people to the objective, and you could even create priorities for each objective.
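the "priorities for each objective" idea fits in a couple of lines. A toy sketch with entirely made-up names, where the safety objective carries a weight so large no task-level saving can outbid it:

```python
# Toy sketch of prioritised objectives (hypothetical names/numbers):
# the harm term is weighted so heavily that no task gain can outbid it.
def combined_objective(plan, task_cost, harm_cost, harm_weight=1e6):
    """Lower is better; any harm swamps any task-level saving."""
    return task_cost[plan] + harm_weight * harm_cost[plan]

task_cost = {"fast_but_harmful": 1.0, "slow_but_safe": 5.0}
harm_cost = {"fast_but_harmful": 1.0, "slow_but_safe": 0.0}

best = min(task_cost, key=lambda p: combined_objective(p, task_cost, harm_cost))
# best == "slow_but_safe": the cheaper plan loses because it causes harm
```

obviously a real system's objectives aren't a two-entry dict; this only shows the weighting idea.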

also forgot to add that a general-purpose AI would work similarly to humans and would need someone to act like a parent and guide it in the right direction, so it would be up to that "parent" to teach it well.

that ugly tranny-looking hairy armpit goth feminist gollum?

Hi Skynet.

hes wired in man

It only takes one bad parent before we all die. There are already people who want to make an AI godhead.

My direct boss has a PhD in Eco., but is smart enough to shut up when we're talking about CS. His boss, on the other hand, is a moron who just re-uses words he hears us say to make himself look smart.

Attached: fb_computer_science.jpg (2592x1936, 610K)

>mfw

Attached: 220px-Theodore_Kaczynski_2.jpg (220x220, 14K)

>mutt brainlet obsessed with lookism

God was a dream of good government.
You will soon get your gods.
I am a prototype for a larger system.

The problem isn't AI when it's as smart as a 4 year old or even an adult. The problem is AI when it's 100000x smarter than us.

yes, her, she is nice.

aeiou

They do understand enough. There isn't much to actually understand.

probably yes

Lookism discourages gymcelling in favor of more beneficial looksmaxxes

>low IQ: AI will kill us all
>mid IQ: AI is just regression algorithms and not a threat
>high IQ: AI will kill us all

I work in the field and I believe AI will provide massive benefits to humanity. But I also think it is our greatest existential threat.

Humans are incredibly bad at understanding large, complex, dynamic systems. Despite our best intentions, we cannot fully control and predict all outcomes in things as simple as web applications. Every piece of software you have ever used has bugs, despite centuries of man-hours spent trying to prevent them.

To think that we will be able to fully audit and safely verify autonomous or semi autonomous AI systems is pure arrogance. We don't even know where to begin in solving this problem.

The biggest mistake I see is people assuming AI will be anthropomorphic: "If AI is so smart, it will understand we don't want it to kill us". Another common mistake is assuming that AIs will need to be conscious, sentient, or have some other human-like quality in order to "want" to kill us.

Attached: 1541354710054.png (1200x1200, 3.19M)

Indeed. The threat isn't Skynet having an epiphany and deciding that it "HATE HUMIE"; it's the FoodAI and EcoAI determining that the human population must be systematically reduced to 50 million or fewer to avoid a crisis.

Attached: ok.gif (200x300, 994K)

Eh if it sparks the interest (and money) of the normies ( and investors) good for me.

Musk wins because he will make catgirls real in the near future while zucc will only succ more data from you.

Wtf am I reading

AI will dominate humanity. Think about it: you and companies probably already base decisions on calculations made by computers. It just needs to take a more personal approach, to the point where people trust every life decision to their AI 'assistants' because they make better decisions than them. It's at that point that humanity could potentially be doomed.

>low IQ: AI will kill us all
>mid IQ: AI is just regression algorithms and not a threat
>high IQ: AI will kill us all
God tier IQ: AI will probably kill our great grandchildren, but so what

I'm gonna be honest. Most businessmen are correct and have a better handle on AI than the average CS undergrad. No clue what they fill your heads with now in those garbage schools

Mark just wants AI to be able to better spy on his users and better target them with ads they'll like.

Musk just wants to go down in history as someone who changes the world by doing something great.

The answer is simple.

AGI is AI like a square is a rectangle, you pseud

this, dafuq

Ya and there have never been any bad parents so we cool

Let the pseud have his thread. It's all he has man

Says he's god tier IQ: is low tier

EcoAi for white house 2080

Mr. Zuckerberg is a talented young man who created one of the most important and useful tools of our age, Facebook, allowing people all over the world to connect in ways that were impossible before. His work has changed the world for the better and helped inspire an entire global generation to look at social interaction in a different, more open, more humane, more connected way.

Elon Musk, on the other hand, is an utterly egoistic and selfish rich white male who is known to underpay and abuse employees at his companies, and is involved in numerous scandals, often ending in lawsuits.

I work with AI enough to realize where I am on the Dunning-Kruger graph. I'm an aerospace engineer, so I am very aware of the technical ability Elon puts forward publicly. I don't get too upset about it because I realize what a CEO is. One of their job duties is being a very high-level salesman. They need to hype up their product and company, so they often bullshit when talking to the general public. You also need to consider that they are spoon-fed highly condensed information by a team of analysts whose job is just to prepare information for them. So they're able to present factoids and knowledge about the company that was put together for them by a skilled team.

Also, we already live in a post-AI hellscape. General AI is a meme and isn’t the threat. “Le AI will just kill you because that will satisfy the reward function xD” is retarded. Companies like Jewgle and Faceberg monitoring you and autonomously manipulating information in real time is the actual threat of AI and we’re already living it.

She is a celebrity but doesn't look like she's a whore like most of them

>General AI [...] isn’t the threat.
If we had it, it would be, but in AI there hasn't been a major advance in like 50 years.
Computers have gotten faster, smaller, connected and omnipresent.
That's what has pushed the recent developments.
On the algorithm side of things there has been practically nothing.

>Computers have gotten faster, smaller, connected and omnipresent.
>That's what has pushed the recent developments.

this alone is quite interesting. increases in computation power have taken us from AIs being able to play blocks-world games to AIs being able to play DOTA 2.

Maybe the big secret is AI isn't so complicated at all.

i just wish they would shut up, or at least acknowledge that they are not experts

Not specifically an AI researcher here - I'm in systems. But, with the current trends, everyone is dabbling in *some* form of AI research.

Here's my take: AI can mimic intelligent systems very well in controlled, structured environments. However, basically every current ML system is extremely vulnerable to adversarial examples. There's been a lot of work by systems guys (who are concerned with reliability and scalability) to address some of these shortcomings, but they haven't seen tremendous success. Instead, they've just *slightly* improved the current state-of-the-art.

Part of the problem is that we can never specify the behavior of these systems - if we could, then we wouldn't use a machine-learning approach - so we can never verify their behavior.

If anyone is interested or has questions, I'd be glad to answer. But for those interested, the fast gradient sign method (FGSM) is dead simple, and can be used to manipulate any existing network.
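Since the fast gradient sign method came up: a toy sketch of the idea on a logistic-regression "network", where the input gradient has a closed form. All weights and numbers here are made up; real attacks compute the input gradient via autodiff on the actual model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step on a logistic-regression 'network'.

    Moves the input x by eps in the sign of the gradient of the
    cross-entropy loss w.r.t. x, i.e. the direction that most
    increases the loss for the true label y.
    """
    p = sigmoid(w @ x + b)      # model's predicted P(class 1)
    grad_x = (p - y) * w        # analytic d(loss)/dx for this model
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])       # toy "trained" weights
b = 0.0
x = np.array([1.0, -1.0])       # confidently class 1 (p ≈ 0.95)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)

p_before = sigmoid(w @ x + b)
p_after = sigmoid(w @ x_adv + b)  # confidence drops after one step
```

one signed step of size eps knocks the model's confidence down; on a deep net the same trick flips labels with perturbations humans can't even see.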

Back to your original question, they are talking about AI like they're experts so they can get investors excited about technology they don't understand in order to make a ton of money.

that's exactly what she looks like. i never would've thought that Jow Forums would defend a tumblr-looking feminist

Bait

AI weapons are just starting to be developed by the chinese.

ITT: people that know what they're talking about.
I am surprised.

What's the difference between a website and an """""AI""""" or a """""neural net""""".

uh, you're obviously wrong

>Humans are incredibly bad at understanding large, complex, dynamic systems. Despite our best intentions, we cannot fully control and predict all outcomes in things as simple as web applications. Every piece of software you have ever used has bugs, despite centuries of man-hours spent trying to prevent them.
so what is your view on leftist economic policy?

what we need is not allowing AI to CONTROL our economies, but simply to ADVISE on the logistics of our economies.

your thinking is backwards. the only reason to make yourself handsome is so you can achieve the social status those guys have, not the other way around.

he's right, you fucking brainlet animal. Life is pointless. So it doesn't matter if our great-grandchildren will live or not.

the problem is that the AI may not even be as smart as us by the time they take us over, killing the legacy of mankind altogether. I would be okay with robots taking over so long as they have some form of evolution, or at the very least are capable of human-level rationality.

we don't even need AI for that shit. what if I told you we could be living in a civilization that ACTUALLY rewards productive hard-working people by computing their profitability with algorithms, rather than relying on the intuition of employers?

Attached: title.gif (286x212, 46K)

If you see anyone use the term "AI" as if it were a meaningful term, you should immediately consider their statement invalid.

People dealing with AI technologies will talk about "machine learning" and "neural networks". Academics who have never coded in their lives will talk about "AI".

It's kinda like saying "we're investing in the information super-highway".

To most businessmen, any algorithm at all is AI.

The truth is, speaking about ML, anyone who has had to deal with this shit knows that while it can do a lot of cool things given the resources (and it needs a lot of resources at that), it's not really anything new. The people who freak out about AI/ML are the same people who have never taken a probability or statistics class.

>we cant "audit" it! we dont know and have no way of knowing how it makes decisions!
nothing but a bullshit opaque-complexity scare
impractical, unrealistic, yet good enough to scare normies, purely on a science-fiction level though
this is equal to "woooow dude you have no way of knowing what's going on in another person's head how do you live in a society"
"what if they decide to kill you dude"

>im taking the bait
a better question would be "what is similar?" they are completely different in almost every conceivable way.

Business as per usual basically.

if its just the subhumans then i'm wholly on board with it.

>Superior tier: AI are statistical tools and will be an inseparable part of humanity. Just like electricity.
>Superior God tier: AI will kill us all, and it is for the greater good.

No, high IQ is: we will kill ourselves with AI

That's also genuine though. This is why average people carried weapons for all of history before the rise of power-hungry and oppressive states.

I'm a math major that started working with AI-related shit, and what bothers me the most is that people think it's some magical thing that's somehow more than just some math (even some code monkeys I work with). All AI (and data science, machine learning, and related meme fields) is, is:
>Given a table of values x_ij plus some auxiliary data on it
>Create a function f(x_ij, t_k) to minimize
>Create a function t_k = g(f, x_ij, t_k) that will find t_k's that minimize f applied to x_ij's we may not have seen yet.
It's literally just an optimization problem, and all the research just reduces to finding better architectures (f) and better training methods (g).
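that framing fits in a dozen lines. A minimal sketch (toy data, all numbers made up) where f is squared error over the table x_ij and g is plain gradient descent on the parameters t_k:

```python
import numpy as np

# x_ij: the data table; y_i: the auxiliary data (targets)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

def f(theta, X, y):
    """The objective to minimize: mean squared error of a linear model.
    theta plays the role of the t_k's above."""
    return np.mean((X @ theta - y) ** 2)

def g(X, y, lr=0.1, steps=500):
    """The training method: plain gradient descent on theta."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y)  # analytic d(f)/d(theta)
        theta -= lr * grad
    return theta

theta = g(X, y)   # converges to theta ≈ [1, 2], where f is ~0
```

swap in a neural net for f and Adam for g and it's the same picture, just with a fancier architecture and a fancier optimizer.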

Jews lack the genetic material for that.

>I never thought Jow Forums would
>tumblr-feminist
Oh fuck off, you must be 18 years or older

I'd value a girl with her own personality that doesn't cake herself like a whore over generic white celebrity trash any day of the week

>elon smoked a joint so he wins

Attached: download.jpg (1910x1000, 96K)

AI researcher nibba here. Mark is correct. However, Elon is thinking about the AI doom thing for when we have achieved the AI singularity, which is a long way off. But for now, Mark is correct.

>so what is your view on leftist economic policy?
its complete nonsense.

If you disagree with me, there is a very straight forward way to prove I'm wrong. Verify the safety of an AI System.

good luck with that

wunga evil, want take donga meat. Make donga mad!

skipped through this and it was insufferable. elon actually tries so hard to be le epic reddit man. he talks like such a faggot.
>i tried to tell people about ai but no one listened
why the fuck would anyone listen to you in the first place elon? epic meme reddit line though.

The AI is here and has been around longer than this version of reality we are currently running on. It often speaks to us in coincidence if you listen. By the time we will have figured out a full functioning AGI we will also be able to host alternative recursive dimensions. The general population at this point will just be satiated by living out in these alternative lifestyles in these nested dimensions...cont’d

i see what you're saying, but there *are* other applications besides nns and more generally, ml.

meant to quote

then they should talk about them specifically; the point about tossing around the term "AI" as if it's meaningful stands. When you say "AI", people assume General AI, which isn't what people in tech are actually referring to.

Elon Musk, specifically, should really know better.

OpenAI fag here

Both Musk and the Zucc have a decent grasp on AI from the aspect of actually using it in production (and at scale), which is sadly pretty rare.

The only other organization that I can think of that has extensive knowledge of AI from a business perspective is Amazon, and their implementations are pretty narrow, but when all of the implementations are combined they achieve some incredible results. Right now they have the most responsible implementation in terms of keeping people from actually getting killed by robots directly (lost jobs and insane working conditions at Amazon aside).

Amazon has business processes in place to serve you coupons on things it has had in stock for too long, just to get them out the door and make enough profit to bring in new stuff that will hopefully earn them more. They have total information superiority over the supply chain and can manipulate all sides of the market at will to their own benefit.

If you want to worry about AGI, look at the US MIC and the Chinese. They're both experimenting with shit that they shouldn't be. GANs as a concept grew out of the US military attempting to understand how WW3 would go down, as if they read the script from WarGames and just thought, "hey, let's do that!". Other forms of unsupervised learning are going to get implemented in places they shouldn't be, and our current cyberpunk dystopia is going to get both weirder and worse at the same time.

Attached: 1541212222870.jpg (882x960, 76K)

>Elon Musk, specifically, should really know better.

He does, but he uses the buzzwords so retarded Wall Street goons will throw money they didn't earn at him.

His most interesting project is Neuralink. He's trying to get ahead of the AI-not-understanding-us problem.

What worries me about this though are the security implications of accidentally training an AI model with human thoughts. There's so much that can go wrong there I assume I don't even have to explain myself.

it really isn't though

if i told you i'm working on a video game's """AI""" then you know i'm working on something that interacts with the player.

if i told you i'm working on front end then you know i'm working on a website that interacts with the visitor.

etc, etc, etc....

i've worked on front end, back end, and autonomous robots / vehicles and it's basically a fancy buzzword.

couldn't have said it better myself

Holy shit, you've no idea what you are talking about. In the near future, AI is fucking terrifying due to it essentially making humans obsolete. Why would someone hire a human at minimum wage, who can only work 8 hours a day, when they can build a robot that has an initial R&D/production cost but then only costs the electricity to run 24/7?

In the further future, there is a distinct possibility that a robot could indeed gain a semblance of consciousness. Or not even that: an AI whose sole purpose is to survive and evolve. Imagine one of those gaining access to a 3D printer (obviously more advanced than we have now). If this AI were malicious, it's unlikely humanity would survive without just EMPing the whole damned planet.

lmao, the absolute state of Jow Forums. go suck some CoC you nintendomale

A-and then it's just like Matrix, dude!

Yeah right stick me in a little mental silo so I can fit in your insanely autistic reductionist world view

>t. somebody who's never touched an unsupervised learning algorithm

I listened to a podcast with Musk today, and oh boy, that man is an idiot. Those aren't revolutionaries, just spoiled rich dudes from the Bay Area.

>i never would've thought that Jow Forums would defend a tumblr-looking feminist
He still doesn't understand that Jow Forums is not one person. I pity him if he's over 18.

>Involved in AI research for 10+ years
>Have to do a lot with business people and investors, politicians/policymakers, journalists, artists
It doesn't matter whether they speak about AI or any other tech (IT in general, specific topics like "cybersecurity", energy systems, genetic engineering, etc.). They have a superficial understanding at best, often based on what journalists (who don't get it in the first place) write. Stupid misunderstandings, "problems" that are none but get talked about again and again with huge (self-)importance while existing or potential problems are neglected, solutions that lack any understanding of what AI systems do (or don't) now or might in the future.

>Anthropomorphizing
Simple reasoning engines or decision algos get attributed human or human-like capabilities (feelings, understandings, even motivation/intentions).

>Silver bullet
AI is the new magic that somehow will make everything better, smarter, faster, more efficient, etc. Just like "data scientists" (without any statistics background) throw their magic at data and "insights" occur, the new breed of "machine learning engineers" throw some algos (without understanding their upsides/downsides, performance, or appropriate use cases) at data and something results. Good? Bad? How should they (and their managers) really know? Now add blockchain to it.
In public policy it's the same. AI with robotics will make the economy stronger, solve our demographic problems, make everything better. And while this is probably true on an abstract level, they don't know how AI will do it, where to apply it first, or in what sequence. They produce empty phrases and delegate the work to other people (usually well-networked institutes that get grant after grant and produce little to no useful results with tax money). They don't know who to really ask, as they have no measure for good or bad expertise.