RIP SILICON VALLEY

Attached: 1533861146892.png (867x354, 36K)


I've been saying this for a while: we're in another AI Winter:
en.wikipedia.org/wiki/AI_winter

It's all been smoke-and-mirrors hype for investment shekels. Sure, there are some fun utilities we'll get from this current Machine Learning push, but nowhere near the kind of magic they're promising us. We'll maybe see a better automated cruise control system, but we're not getting soup-to-nuts fully automated cars; it's a massive holistic NP problem and we don't have the computation to solve it at a per-car industrialized capacity. Little problems, sure, but not holistically complete ones.

Do some research for yourself and type "AI Winter" into Google. You'll see that the insiders in the Valley have known the jig is up for some time.

venturebeat.com/2018/06/04/the-ai-winter-is-well-on-its-way/

bump

Attached: bumperooski.jpg (500x335, 22K)

DAY OF THE BUGFIX SOON

the problem with Tesla autopilot is that Musk is adamant about doing it via cameras only (i.e. no LIDAR or other sensors that provide spatial data).
I spent quite a while a few years ago doing target tracking off 2D video data, and it currently doesn't provide enough data to do SAFE self-driving cars.

Attached: 1557628955810.jpg (868x600, 78K)

This is why we need to turn AI on itself, get out of the way, truly commit to the machine learning feedback loop until the breakthrough of a sentient fully autonomous intelligence. Otherwise all our money is gone and we’ll never get ROI.

Why cameras only? They're vulnerable to the same problems humans have with vision and limited sight, e.g. darkness, snow, rain, mud, etc.

That costs money, and we're at the point where the ROI needs to start paying off outside of pure speculation.
The engineers working on the machine learning stuff aren't doing it for free; they're demanding a lot of equity/salary for their services while only producing parlor tricks compared to what they promised.

So does this mean truck drivers won't all be replaced in 10 years? Lel.

Skynet and Hal are just memes. Fuck your ethics and regulations and optics, I’m going in.

Attached: 84ACBE08-C3C7-4361-88FF-A4D17A6FEAAA.png (664x378, 318K)

>NP Problem
Yes I too use nondeterministic computing to make decisions when I'm driving.

You're somewhat right, though. However, I believe some of these problems can be solved, and it's not out of the realm of possibility that we see a limited rollout of self-driving cars in the near future. The problem with Tesla is that they invested tons of money into an ASIC to run an algorithm that is fundamentally flawed, with no clear indication it will be fixed in the future.

even my $200 vacuum got lidar on it

>Yes I too use nondeterministic computing to make decisions when I'm driving.
That's fair, but our thinking is pretty holistic while modern computation isn't. A computer can do one thing really well, but it can't do many things really well at the same time. Hence why General AI is a meme dream.

I get the "everything is possible in the future" argument, as it's a linear extrapolation of current technological trends. But at some point it's basically claiming magic, and we have to live in the reality of the present and our limitations. I'm sure we've made huge leaps in the understanding of fusion energy, but we still don't have a viable commercial fusion reactor despite it always being 15 years away.

Personally, I've been starting to think we may hit a tech wall; that our technological expansion may be an S-curve rather than an infinite exponential one. All the low-hanging fruit has been picked and we're rubbing up against the walls of physics; Moore's Law isn't going to hold as we run into atomic-scale transistor sizes.

Haha user, all the other automakers will eventually have Nvidia GPUs and LiDAR, but they will be BTFO by Tesla's custom ASIC

O: custom asic? Will it mine?

>people buying fleets of cars to mine crypto
kek

Modern AI is clever usage of probability theory and statistical learning. That's not to diminish the work, but it is not what it is marketed as.

kek
I always called it applied real-time regression analysis.
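The "applied real-time regression analysis" framing can be made literal. A minimal sketch of statistical learning as iterative regression — the data, learning rate, and iteration count below are hypothetical toy values:

```python
# Minimal sketch of "applied regression analysis":
# fit y = w*x + b to toy data by gradient descent (hypothetical data).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # generated from y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05  # learning rate, chosen small enough to stay stable
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y              # prediction error on one sample
        grad_w += 2 * err * x / len(data)  # d(mean squared error)/dw
        grad_b += 2 * err / len(data)      # d(mean squared error)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Everything this "model" learns is two numbers; scale the same loop up to millions of weights and you have the thing being marketed.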

>Waaaaaaaaaahhh you guys are not meeting my deadline
Guarantee they didn't meet some ridiculous timeline he wanted. You can't rush a frontier; otherwise you're doomed to failure.

Attached: 1485228458324.jpg (800x539, 128K)

if half of what Musk said at the AI presentation is legitimate then the AI is coding itself now, with targeted guidance from humans
probably why the software engineering team got gutted

>i.e. no LIDAR or sensors which provide spacial data
Listen here, investors: invest in LiDAR. It's what the military is using on automated systems. You can drop LiDAR into random environments and you're OK. It's fragile, wonky, and big right now, BUT defense contractors will make it more compact and robust out of necessity. Musk will be BTFO in this matter.

Attached: 1366425407192.jpg (1800x1166, 256K)

they didn't implement Neural ODEs, all of their shit is a mix of DNNs and RNNs and literal manual bugfix algorithms

They are way behind because they have a great data team, awesome hardware team and a shit NN programming team.

5 years, what do you mean 2 years, I need it done by next week.

NN is the way to go for programming autopilots though. Their failure is refusing to use anything other than cameras. no radar, no lidar.

GM unironically is the leader in autopilot research now

We have seen the future!
I'm making Raspbian Pis, like pip-boys, that mine rn

The problem with the term AI winter is it implies that the creation of AI is inevitable.

They were all replaced by AI.

Attached: Master_Mold_(Earth-92131).jpg (640x480, 45K)

And yet humans drive in the darkness, snow, rain, mud etc. LIDAR is useless for problems like that.

But..... muh AI sexbot………….

>Hence why General AI is a meme dream.

It's absolutely not. The thing is, we're only looking at pure computational advances on existing commodity hardware instead of trying to implement new hardware.

There are people looking at quantum computers who think that's a silver bullet, but in the meantime 100% of AI/ML mindshare should be on using AI/ML advancements to advance hardware, then using those hardware advances to refactor and optimize the NNs that become more powerful because of them, so you can leapfrog again.

In the beginning the majority of people who worked on computers were close to the metal and were writing software to be faster so that they could test new hardware designs more quickly- the current market is so disjointed that the only people who are going to make more advances are the companies that reintegrate vertically. Tesla gets this but can't execute due to lack of NN programming talent.

Engineers lack imagination nowadays- we aren't ever going to get to Strong AI without Synthetic Optical Photonics and a mass number of interoperating neural nets that are generalized enough to deal with sensor fusion in the real world. Not enough work has been done to adapt general mathematical theories to programs that actually do something instead of spit out reports.

There are some people who are starting to put the pieces together that we need to adapt the existing body of ML workflows to mathematical theories- the only existing implementation that I can think of outside of Boston Robotics are various military contractors like Lockheed with the MQ-9.

What's the difference between this:
youtu.be/popvnHUu3uU

and this?:
youtu.be/Ul0Gilv5wvY

Not much

Attached: 1734934053.gif (480x320, 995K)

Before every major innovation we hit a "winter". People thought the only mode of transport faster than the horse was the train. People thought we could only make engines more powerful with larger displacement. People thought we could never make functioning airplanes out of metal. People thought it was mathematically, computationally, and technologically infeasible to first escape Earth's atmosphere and then land on the moon. People thought it was impossible to make circuits for complex processing, since the number of logic gates required increased exponentially with the difficulty of the task (the tyranny of numbers).

The hard limit is mediocre men that want to be on top of it all.

I understand that the human brain is much different from a computer, but non-determinism is a whole different animal. There already are self-driving vehicles for loading cargo onto ships, and they rely on some of the same models currently being developed in Silicon Valley. Remember, it doesn't have to be perfect; it just has to be better than a human. And it might be good enough for the freeway, or for rollout in a few hand-picked cities. It really is happening right now; it's not vaporware. Although I agree a perfect solution that works in any environment probably won't come until we have some general human-like intelligence.

Humans function similarly to probabilistic and statistical interpretation machines. That's why propaganda works. You can saturate a neural net with bad or in factual data and after sufficient time it will say red is blue if you train it to do so. Same can happen with a human, for more and less complex disinformation.

*infactual

*unfactual fuck my shitty autocorrect matrix
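The "train it to say red is blue" point is easy to demonstrate. A toy sketch with a trivial nearest-centroid "model" and deliberately flipped labels — all of the data is hypothetical:

```python
# Toy demo: a model trained on deliberately mislabeled data
# learns the wrong mapping just as confidently as the right one.
def train_centroids(samples):
    """Compute a per-label mean feature value (a trivial 'model')."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

def predict(centroids, value):
    # classify by nearest centroid
    return min(centroids, key=lambda lab: abs(centroids[lab] - value))

# "red" pixels cluster near 0.9, "blue" near 0.1 (hypothetical features)
honest = [(0.9, "red"), (0.85, "red"), (0.1, "blue"), (0.15, "blue")]
flipped = [(v, "blue" if lab == "red" else "red") for v, lab in honest]

print(predict(train_centroids(honest), 0.88))   # -> red
print(predict(train_centroids(flipped), 0.88))  # -> blue: saturated with bad data
```

The model has no notion of "truth", only of the labels it was fed — the same mechanism the post describes for propaganda.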

Because the only problem your $200 vacuum faces is bumping into furniture. It's not at risk of killing anyone. Both car and road standards are based on human vision. LIDAR solves one of millions of problems; computer vision can solve them all.

>In the beginning the majority of people who worked on computers were close to the metal and were writing software to be faster so that they could test new hardware designs more quickly- the current market is so disjointed that the only people who are going to make more advances are the companies that reintegrate vertically. Tesla gets this but can't execute due to lack of NN programming talent.
Programmer turned electronic engineer here, programmers nowadays are fucking retarded.
>Engineers lack imagination nowadays- we aren't ever going to get to Strong AI without Synthetic Optical Photonics and a mass number of interoperating neural nets that are generalized enough to deal with sensor fusion in the real world. Not enough work has been done to adapt general mathematical theories to programs that actually do something instead of spit out reports.
If AI research were being done by hardware companies, the goal would be to put AI on a chip and mass-produce it. First it would go into expensive systems, then trickle down as unit cost drops with scale. Basically the democratization of AI, the same way computers were democratized.
Instead, AI is done by IT retards who think AI is about building the biggest possible computing cluster, feeding it all the data in the world, and then religiously obeying whatever it spits out. In other words, constructing a god and making themselves a priest class.

WRONG

youtu.be/wEgq6sT1uq8

The reason that chip is so heavy on multiply/add (seriously, look at the damn thing: it's like 70% of the chip real estate) is that they know that if they want their hardware to last, they need to be all-in on matrix-multiply efficiency. They can train their models on-chip, upload the learned differences to the "hivemind", and re-implement when necessary.
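The multiply/add point is easy to sanity-check with arithmetic. A back-of-envelope count of multiply-accumulate (MAC) operations for one hypothetical dense layer — the layer size, depth, and frame rate below are made-up illustrative numbers, not anyone's actual chip specs:

```python
# Why inference chips end up mostly multiply-accumulate units:
# one dense layer y = W @ x costs rows * cols MAC operations.
rows, cols = 1024, 1024            # hypothetical layer size
macs_per_layer = rows * cols       # one multiply + one add per weight
layers, frames_per_sec = 50, 30    # hypothetical network depth and camera rate

macs_per_second = macs_per_layer * layers * frames_per_sec
print(f"{macs_per_second / 1e9:.1f} GMAC/s")  # -> 1.6 GMAC/s
```

Even this toy network demands billions of MACs per second sustained, which is why matrix-multiply units dominate the die.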

Going pure NN may be slightly safer but like Elon said LiDar is too expensive, ugly, and difficult to maintain.

There's no reason a computer can't drive better than a human, who only has two eyes, two ears, and a sense of balance. LiDAR is excessive, but it might be useful in training a pure-vision model if you can re-abstract its data into an object-recognition classifier that could end up more accurate than a purely vision-trained model. Tesla is banking on having more cars on the road and more data (which I think is the right way to go, personally), while Google is banking on being able to build a net that's more accurate with fewer vehicles that have better sensors.

Both are absolutely valid ways of developing a model - only one way is a valid way of deploying a product to the consumer.

Attached: tumblr_oy4peebhnu1tcg4xno1_500.gif (500x376, 1.27M)
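The "re-abstract LiDAR data" idea above is essentially pseudo-labeling: use the expensive sensor's output as ground truth to supervise a camera-only model. A minimal sketch — the inverse-size distance model and every data point are hypothetical:

```python
# Sketch of pseudo-labeling: LiDAR ranges act as ground truth for a
# camera-only estimator, assuming distance ~ k / apparent_size.
# Training pairs: (apparent object size in pixels, LiDAR distance in m).
pairs = [(100, 10.0), (50, 20.0), (25, 40.0), (200, 5.0)]  # hypothetical

# Fit k in distance = k / size by least squares on (1/size, distance).
num = sum((1 / s) * d for s, d in pairs)
den = sum((1 / s) ** 2 for s, d in pairs)
k = num / den

def camera_only_distance(size_px):
    """Distance estimate that needs no LiDAR at inference time."""
    return k / size_px

print(round(camera_only_distance(100), 1))  # -> 10.0
```

Once trained, the camera model runs alone; the LiDAR was only needed during the labeling phase — which is the cost argument both camps are making.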

aren't self-driving cars still totally BTFO by something as simple as the sun being directly behind a traffic light?

I absolutely agree with every part of your post except for the "trickle down" portion.

I look at the Inferentia and see Amazon holding on to that tech as tightly as possible.

There's no reason to give up a hardware lead when you have a cloud model by selling units.

Attached: 1555676790785.gif (436x376, 2.22M)

This is sounding like faith-based technology. I know we currently live in a time of vast technological progress, but avoid the hype and fetishism. Just because we can envision things doesn't mean they'll inevitably materialize.

Like I said earlier, there's a possibility we're heading for tech stagnation; that we're in an S-curve of technological growth. I could be wrong, but believing we'd experience perpetual exponential tech gains forever is fairly absurd considering we live in a finite, bounded universe: each gain requires more work than the last, and there are physical limits to what we can exploit.

Attached: s curve.png (648x432, 15K)

nice trips, and quality post user.

>we're in another AI Winter

At first i was like "who the fuck is Al Winter?"

>faith-based technology.
>avoid the hype

I made no value judgments in my post, please re-read them.

Strong AI is inevitable, the question is how we decide to implement it and whether or not our existing, extremely fragile world, can handle it.

I'm of the opinion that humans aren't going to be able to deal with even simple implementations of ML on a civilizational level- for fuck's sake we can't even keep our women from being retarded or secure our borders.

McDonald's, for example, could be nearly completely automated, assuming a higher-trust society. The same could be done with most truckers (see OTTO's implementation; they were bought by Uber, who quickly shelved the tech to keep it from getting copied and deployed).

I recommend the following book- ignore the Bill Gates endorsement.

Attached: bostrom.jpg (1080x1080, 293K)

From a reductionist materialist view, but not from a realistic one. It's convenient to reduce humanity to biological machines when trying to simulate them computationally, but it's not complete.

Heuristics are one example that doesn't follow probabilistic & statistical interpretation. We're not linear intelligences; we can make large jumps in reason/logic/conclusions without having to calculate each step. This issue came up with early AI computer scientists like Minsky, who noticed the inherent problems of machine intelligence relative to cognitive ability.

Maybe they should learn to co---oh.

Attached: smugwojak.png (724x611, 129K)

>Strong AI is inevitable
While not a value judgment per se, this is an absolute statement without absolute evidence. It's all based on linear extrapolation of current technological trends.

Strong AI is a possibility, but I wouldn't say it's inevitable. Perhaps this era will be looked upon similarly to Atomic Age idealism, when everyone thought we'd have atomic hover cars and nuclear-powered appliances in every home.
So I'm not too concerned with all these AI apocalyptic visions that get espoused like some kind of secular Book of Revelation. I'm more concerned with the actualities of the present than the potentialities of the future, which always needs some great breakthrough to come into fruition.

What I'm saying is: what if those breakthroughs never materialize? You need to accept that as a possibility. And then everyone's vision of the future (utopic or apocalyptic) falls apart.

Because we are in a historical anomaly of having high-bandwidth, low-latency communication available. Now you have a whole generation of programmers who think that you can make a self-driving car by streaming video to the cloud and getting commands back. Of course, this thinking gets BTFO the moment you start operating outside 3G coverage, which is... um... pretty much everywhere outside the areas these retards inhabit.
Observe that nature put your eyes near your brain, and there is a very good reason for it: namely, it's a pain in the ass to push that amount of data around. Nature also made you a fully autonomous unit for a reason. If the cloud were the optimal solution, then Earth would look like the planet in Avatar, with all organisms connected to the grid.
No, scratch that: there is a class of grid-connected organisms. They are called trees, and communicate by chemical signals. Now the thing about trees is that they don't move, and are completely helpless against an external threat like fire or animals.
Oh wait, just like these liberals in big cities...
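The bandwidth point can be put in rough numbers — raw camera data rate versus a cellular uplink. The camera count, resolution, and uplink figure below are ballpark assumptions, not measurements:

```python
# Why "stream video to the cloud, get commands back" fails:
# raw sensor bandwidth vs. a typical cellular uplink (ballpark figures).
cameras = 8                           # hypothetical camera count on one car
width, height = 1920, 1080            # 1080p
fps, bytes_per_pixel = 30, 3          # 30 frames/s, 24-bit color

raw_bits_per_sec = cameras * width * height * fps * bytes_per_pixel * 8
uplink_bits_per_sec = 50e6            # optimistic LTE uplink, ~50 Mbit/s

print(f"{raw_bits_per_sec / 1e9:.1f} Gbit/s raw")  # -> 11.9 Gbit/s
print(raw_bits_per_sec / uplink_bits_per_sec)      # hundreds of times over budget
```

Compression helps, but not by the two-plus orders of magnitude needed, and it adds latency — hence the case for on-board inference.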

Everyone forgets the Dot Com bubble from the late 90s.

It can happen again.

Attached: CodeForFood.jpg (380x317, 27K)

>french
>thinks everything is a muslim name
hahahah

underrated

>Humans function similarly to probabilistic and statistical interpretation machines.
You have that backwards. Computers function similarly to humans.

Bostrom is a hack.

I like how you think anonowicz.

self driving cars are going to happen in the next 20 years

cap this

Humans are Strong AI in biological form.

Hence ODEs- there's no point having 500-layers in a perceptron when you can vectorize a vector and throw the rest out while getting done what you need to get done.

The data is only collected generally for training and re-training.

explain?

This.
Materialists need people to be reduced to machines to validate their philosophical view.

It's dehumanizing.

>Now you have a whole generation of programmers who think that you can make a self-driving car by streaming video to the cloud and getting commands back. Of course, this thinking gets BTFO the moment you start operating outside 3G coverage, which is... um... pretty much everywhere outside the areas these retards inhabit.

based post yung unabomber.

Attached: stem5472.jpg (1205x997, 280K)

>Strong AI is inevitable
Strong AI is largely a religious proposition.
For starters, the current approach to AI relies on exponential growth of processing power, which stopped a few years ago as fundamental physical limits were hit. The AI community, however, never got the memo and still believes it will get enough processing power if it waits long enough.
Another way to see it is this: look at the amount of waste heat generated by an ML system versus a human doing the same thing. You will quickly notice that there is not enough energy on the planet to delegate all tasks to ML.

> I'm of the opinion that humans aren't going to be able to deal with even simple implementations of ML on a civilizational level- for fuck's sake we can't even keep our women from being retarded or secure our borders.
Again, this is a historical anomaly which should correct itself by 2100. The ML fad is mainly because people have been trained by the (((school))) and (((media))) to stop making obvious connections. For example, when I was in US, I got mugged. Twice. On two coasts. The only constant was that the perp was black. Now the obvious statistical inference is that blacks == mugging. But of course people have been trained to stop thinking this way, which generally impaired problem-solving skills (to the level which would prevent their survival couple hundred years ago). Solution for retards? Use ML to discover the obvious.

>Humans are Strong AI in biological form.
See above. We have a fundamental difference of opinion on the definition of the human condition.

The discussion halts when we start talking about metaphysics and whether or not computers can have souls or free-will; particularly if you don't believe that souls or free-will even exist (not to strawman, but I'm taking a guess here).

Attached: Batman TAS His Silicon Soul.jpg (640x480, 37K)

>For starters, the current approach to AI relies on exponential growth of processing power, which stopped a few years ago as fundamental physical limits were hit. The AI community, however, never got the memo and still believes it will get enough processing power if it waits long enough.

10/10. AI is just a reddit myth.

what does ML stand for?

and nothing of value was lost

'Machine Learning': a vague catch-all term for a number of different techniques, especially statistical ones, for performing tasks such as classification.

>Strong AI is largely a religious proposition.

I disagree on that, but I do agree that it's been largely fueled on the basis of Moore's law, which is obviously dead now. AI devs need to be investing in things like materials science, hardware development and mapping live fMRI data into Riesz space or crazy shit like that.

I see intelligence as generalized and the medium it's projected onto as irrelevant, so yeah.

>Humans are Strong AI in biological form
This is reductionist thinking which has been experimentally proved wrong by combined contributions of Carl Jung and Yuri Gagarin.

That's why Elon Musk is deploying the Starlink satellites, I guess.

are you reffering the paradigm of nigerlicious code vs divine intellect ?

>Musk will be btfo in this matter.
SpaceX uses and advances lidar

Isn't "autonomous vehicle" code for programming a missile with wheels to kill pedestrians without getting charged with the crime of murder?
this sounds like a dream for the mafia
en.wikipedia.org/wiki/PayPal_Mafia
maybe Musk is taking his persona too seriously

>Humans are Strong AI in biological form.
There is no 'Strong AI' without humans defining what is 'intelligent' and attempting, poorly, to reverse engineer some of their cognitive abilities.

This unknown article proves nothing about autopilot being diminished, not to mention a company like Tesla can easily replace staff.

Shut the Hell up Boomer,
Your time is almost up.

Don't forget Neuralink

They're taking the path I mentioned earlier: using ML to jumpstart the hardware/ML improvement recursion.

Link? I just don't see humans as particularly special in the grand scheme of evolution- the fact that we've taught cars to drive themselves (something that is literally 1/3 of our economy's jobs) is just more evidence in my opinion.

ML is just brute-forcing individual cognitive functions of life. Like NFV compared to monolithic legacy routing on the internet.

Isn't that epistemologically unfalsifiable?

Attached: neuralink.png (333x232, 15K)

and provide valuable intel to intelligence agencies

I work in a large warehouse with automated forklifts, pallet tugs, and bots that move packages & tools to specific dock doors. The forklifts and tugs use cameras and the boxbots use LiDAR. There has never been a problem with the pallet tugs or forklifts, since they use cameras and light curtains to figure out where people, walls, and shelves are. The worst incident with the tugs was them getting confused about their location and failsafe-stopping. The bots, however, are always crashing into workstations, curbs, and cones, running over shit and dragging whatever it was along for several hundred yards, even the other fucking bots. Management only started to give a shit after a bot fucked up badly and nailed someone.

Then what of wisdom?
Is there such a notion as "Artificial Wisdom"? Or are we going to pretend that wisdom is illusory and only intelligence has primacy?

>for investment shekels
kek

Strong-AI is a religious proposition, because it assumes that consciousness is local. Problem is, nobody has managed to positively prove that consciousness is indeed local. Alternatively, Strong-AI can be said to mean "human cognition without consciousness", which is bull, because there are no non-conscious humans (NPC meme notwithstanding).
Worse yet, there is an experimental proof of the non-locality of consciousness.
> Carl Jung and Yuri Gagarin
Carl Jung had a near-death experience in 1944. During the vision, he saw (or should we say, perceived?) Earth from orbit. Jung's description of Earth included two things:
> there is a blue glow around the Earth
> the desert of Saudi Arabia is red
Both of these facts were unknown before people started flying into space. As Jung of course never left Earth (nor his hospital bed on Celebes), this proves that human consciousness is non-local, i.e. not confined to the brain and its sensory apparatus. If it were local, then Jung would not have known these things, and the probability that he got them right as a result of random hallucinations is nil.
If one accepts the Jung-Gagarin experiment as valid, then Strong-AI is based on a proposition which has been empirically proven false, which is, again, typical of religious thinking.

Attached: sat.jpg (870x580, 442K)

do you actually think you are making a valid argument? apparently your theory of consciousness depends entirely on anecdotal evidence from an esoteric pseudo-scientist

LiDar has a motion-to-photon latency issue that a lot of people in the industry aren't willing to talk about

Let's be more precise with our words, then: "artificial wisdom" is to a neural net (storage) as "artificial intelligence" is to the execution of that neural net (execution) and the capture of predefined preferable behaviors for replication.

Say I'm at a gun range and I want to shoot the bullseye. I shoot five bullets: two skip across the ground, one hits the range officer in the face, and the other two are on target. The first thing you'd do is implement a tighter definition of preferable behavior: "shoot in the direction of the target." I shoot five more bullets; two of them miss and three hit the target paper but not the bullseye. I keep training until I no longer produce a "fail state," which is anything other than hitting the bullseye.

The neural net, or "artificial wisdom," is the result of all of that training.

Humans don't train differently than computers; they just train less efficiently because they are more generalized. That's the difference between Narrow AI and AGI, or "Strong AI."

youtu.be/NIu_dJGyIQI

Attached: cyberbrain.gif (500x269, 1014K)
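The gun-range loop above is just iterative error correction against a fail state. A toy sketch — the target, starting aim, tolerance, and correction rate are all hypothetical:

```python
# Toy version of the gun-range loop: keep correcting aim until
# nothing falls into the "fail state" (missing the bullseye).
def train_aim(target=0.0, start=5.0, tolerance=0.01, lr=0.5):
    aim, rounds = start, 0
    while abs(aim - target) > tolerance:  # fail state: off the bullseye
        error = aim - target
        aim -= lr * error                 # correct toward the target
        rounds += 1
    return aim, rounds

aim, rounds = train_aim()
print(round(aim, 3), rounds)  # error halves every round until inside tolerance
```

Tightening `tolerance` is exactly the "tighter definition of preferable behavior" in the post: same loop, stricter fail state, more rounds of training.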

does anyone else think that (((tech))) industry is a faux self-serving industry like stock brokers/cancer doctory/lawyers?

Attached: unabomber_facebook.jpg (1200x800, 248K)

what the fuck is that cropping

reminder that the only way to create strong AI will be by making bioengineered brain in a jar goo vats

They need 5G, user. Automated driving is perfected from the car's point of view. The problem is information: the world changes constantly, and the AI systems need real-time data to be successful. That is why AI systems work pretty well in test areas in Silicon Valley: they deliberately keep the information up to date and they have multiple data points (cars). They fail in the real world without the constant flow of information, which is one of the reasons there is a massive push for 5G.

When you get into technology suppression you can't come to any other conclusion than this.

Keep in mind it isn't the engineers who perpetrate this, for the most part; it's usually the money and the governments funding them that threaten them with death, etc., for stepping out of line.

I could tell you a really interesting story about my great-grandfather and a revolutionary combustion engine but that would dox me.

Ted is right in a ton of his writings but some of his work is outdated simply because he's in jail. I really wish they'd parole him so that we could get his opinion of the world once he'd seen it and experienced it firsthand.

this

Attached: exmachina.jpg (640x360, 40K)

claiming that people have no souls and no free will is also not a strong argument when assessed against lived experiences. Just because you can't prove something empirically/materially doesn't mean it doesn't exist. For example, you can't prove the existence of the mind, yet we all are under the general concessions that it exists within ourselves and in other people.

It's like a player playing a character in a game. If the character dies, we don't say that the player must be dead as well or that the player never existed.

Data is data; as a scientist I don't have the luxury of rejecting evidence I don't like.
Consciousness is a rabbit hole.
Did you know there is linguistic evidence in the Hebrew Bible that the ability to distinguish past from present did not appear until about 3000 BC, and coincidentally this is the same era when the biblical writers stop meeting and talking to God, angels, etc.?
jstor.org/stable/3268935?seq=1#metadata_info_tab_contents

>*consensus

i don't think anyone here claimed that minds exist

>I could tell you a really interesting story about my great-grandfather and a revolutionary combustion engine but that would dox me.

I already know. I pray for a (((techie))) genocide and want to MtechnologyGA.

Attached: tech1543799688675.jpg (711x485, 64K)

Another proof of religious thinking.
> We can't get it right on a limited data set
> Get more data! ML will figure it out!

Self Driving cars will never happen with the current infrastructure. These engineers have known it for years and always said it was FUBAR from the get go...now Tesla will probably hire a bunch of pajeets and we will see even more deaths.

Nailed it...the vast majority of "AI" is just codelets bruteforcing shit with tensor flow...this is why google will never be able to tell the difference between a picture of a nigger and a chimp.

>as a scientist
you sound delusional. if anything, your post proves you have no understanding of science at all.
refraining from drawing strong conclusions from a single, questionable data point is not a luxury of the non-scientific, but rather the epistemic duty of any reasonable being.

*minds dont exist

it's to illustrate the absurdity of material reductionism when applied to the human condition

how many neurons make a mind? one neuron clearly doesn't make a mind; probably not two or three either.
so clearly, you can't empirically measure a mind, yet we know it to exist from lived experience. the argument here is the folly of relying on sola materialism to describe reality: particularly when people equate humanity to mere biological machinery.

It leads people to make absurd claims like "people are no different than computers." It's dehumanizing, unproductive, and dangerous rhetoric.

Why should I care what happens to a couple of California yuppies? If they can't do the job they were hired for, they deserve to get fired. That's literally how jobs work.

I still can't see how this is cheaper than just getting some wagies. If those bots kill someone, it will be millions in lawsuits. That could have been payroll for years.

>PayPal Mafia

What in the world? How is that not corruption and consolidation?

Interesting analogy.
I had another thought: assuming consciousness is non-local (which I believe it is), it does not mean that Strong-AI is impossible.
It means that the definition of a Strong-AI is one that exhibits non-local consciousness.
At the same time, as the present approach to AI ignores the consciousness problem, the probability of making a conscious (thus Strong) AI is pretty low.
Unless we make one by mistake.
However, in such a case, the creators will probably freak out and kill it.
See Tay.