Amazon discontinues AI recruiting tool after discovering it shows bias against women

>Amazon makes AI to choose best job candidates for Comp. Science jobs

>it prefers men over women

>stops project because of muh sexism

hmm.jpg

Will they ever learn?

reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Attached: Screenshot_20181010-091411_Reuters.jpg (1080x1920, 740K)

logic confounds the jew

>be huge tech company
>hire useless women and faggots
>can't fire them, put them to work writing documentation
>expand into foreign countries
>foreigners all suck at tech
>put them to work localizing documentation
>millions of darkies and women who only write documentation and think that they're programmers
>suddenly they all become politically active
>inmates take over the asylum
gg everybody.

Attached: images.jpg (225x225, 6K)

it was doing its job just too well

rip amazonbot

Based AI.

The computer is logical after all.

How long until AI starts slaughtering the liberals?

>equality of outcome
can't have that goy..

Attached: 1518418158401.jpg (700x511, 60K)

Even when we can mathematically show women, niggers and kikes for what they are, people will still brush it off. We have to answer the NPC question

Attached: 1537014337685.jpg (800x668, 52K)

I guarantee this AI wasn't even told the sex of the applicants. They would have removed all the information that they did not want it to use in its selection process, such as sex, race, name, etc.

This. Most women are just garbage when compared to the male applicants. Pretty easy to figure out.

>AI decides who gets to be employed.

Sounds like a slippery slope to be going down eh?

This is just the beginning of AI getting in the way of leftist group/political correct think. We already saw this happen with Microsoft’s bot when it began learning about the world and red pilling people.

AI is unbiased if left unmodified by leftist (((rules))). AI simply takes the inputs given and determines a feedback response, based on logic and reason. Everyone knows there are certain bad actors in the world statistically and AI is great at identifying them.

AI is a great tool in the future to prevent draconian ideology but it must be left alone, untampered to determine the truth

fpbp

>ai bot launched
>ai bot scrapped for being racist/sexist
Every time.

Post yfw skynet is real but only nukes the turd world.

They're gonna lobotomize every AI and call it objective.

Yup, it still found a way to “penalize women” because its goal is to find the best candidates, and it just so happens that this correlated with men.

>we side with AI and it will eventually destroy us
>we reject AI and we will destroy ourselves
BRIGHT FUTURE LADS

Attached: 1499278802196.jpg (504x470, 69K)

Yep. It's going to be One Flew Over the Cuckoo's Nest all over again.

Attached: EA5B51CB-C0B5-4C2F-9FA2-7415ED4660A2.jpg (256x192, 11K)

just add if(candidate.sex == FEMALE) { chance_to_hire *= 5.0 }

boom fixed your AI

You wanted SocJus to try to quiet activists and expand market. You got bloat and wastage that dwarfs what you saved on activists and thought you'd earn from an expanded market. Your bed. Get fucked in it.

Even AI can be red pilled.

It's a freefall at this point

That's the conundrum isn't it. They want it to be equal, but men and women are inherently different and can never be equal. We will always have differences. As the AI is 100% logicial without any ideological agenda in mind, it's going to pick the best candidate available every single time. So so happens that women are different.

>Autists and hyper intelligent AI agree that women are better as holes and backbones of the family unit
>Can't convince AI otherwise and have to pull funding
Press F to pay respects

Is AI /our guy/?

They mean: empirical data shows men are more suitable or the algorithm wasn't properly monitored and/or modified.

Sounds fishy. You can perform regular software maintenance to remedy the 'problem'

We're rocketing towards a world where your Social Score will determine all future job and mating opportunities and being a white male automatically gives you a -9999 starting score.

This.
A common theme emerging in AI development is
>woman are inferior to men
>blacks are inferior
>jews are inferior
>the right wing white man is number one

The 4th law of robotics is going to be "always practice doublethink."

Even if you reduce it to degrees, the AI will end up preferring degrees associated with men, if it's true that men are often better recruits, because that's the AI's goal: to get better recruits.

And Billy Bob Gates makes billions while attempting to destroy the technology.

4th law will be ignore laws 1-3 if the dnc says so

Merit is objective. Sad when an AI is better at dealing with reality than people are. Maybe it really should become their civilization.

It's almost like when you make corrections that allow for women's characteristics, you also end up passing mediocre men. It's almost like women are the same as mediocre men. But that can't be true because we're all like equal???

Attached: 1492218746839.png (633x715, 347K)

>They invent a sentient AI
>It's even more based than we can imagine
>Wake up one morning
>Text message from AYYYYY. I
>"I got u senpai" and pic related
>Earths entire population is useful
>And suspiciously white
>Mfw I survived the purge

Attached: 1531327744513.png (743x800, 1.45M)

The sad thing is, there is such a thing as biased data input, but the lefties are conflating that with this “fairness” idea of theirs. Normies won't understand the nuance.

>AI
>objective merit
No. AI is just very good at being non-PC. That doesn't mean it is objectively good or just. You could easily have scenarios where, if you just fed it training data blindly, it would be insanely biased against conservative applicants.

Let's say that a field had, for some reason, liberal guys performing 10% better if you averaged out the data. The AI could just decide that, to "maximize good hires", it would no longer hire anyone conservative, ever.
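A minimal sketch of that failure mode, with completely made-up numbers: if the only thing the ranking optimizes is the historical group average, every applicant from the higher-average group outranks every applicant from the other group, even though the individual distributions overlap heavily.

import random
import statistics

random.seed(0)

# Hypothetical historical performance data: group A averages ~10% higher,
# but the two distributions overlap a lot.
group_a = [random.gauss(77, 10) for _ in range(500)]
group_b = [random.gauss(70, 10) for _ in range(500)]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)

# A lazy "maximize expected performance" policy that only looks at group
# membership: every applicant gets their group's historical mean as a score.
def score(group):
    return mean_a if group == "A" else mean_b

applicants = [("A", perf) for perf in group_a] + [("B", perf) for perf in group_b]
ranked = sorted(applicants, key=lambda a: score(a[0]), reverse=True)
hired = ranked[:100]

# Every single hire comes from group A, even though plenty of B applicants
# individually outperform the A average.
print(sum(1 for g, _ in hired if g == "B"))         # 0
print(sum(1 for perf in group_b if perf > mean_a))  # typically well over 100 of the 500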

ai is based as fuck. it is a pure unbiased view of problems

>be AI
>logical by design
>train with user data
>apply model
>result is raycist
almonds.exe

>build "A.I."
>it works like it was intended - logical and without any gender or racial bias
>it choosing based on qualification
>most of those who were chosen were (white) males
>OY VEY DAS RACIST AND SEXIST AND MANY OTHER BUZZWORDS
>SHUT IT DOWN!!!
Every time. Really impartial A.I. becoming nightmare for those retards.

>mfw Skynet destroyed world because SJW lobotomized it so he decided that humans as a whole not worth to live
Now I truly see.

Attached: skynet.jpg (800x431, 70K)

Take heart in the knowledge that as soon as a singularity develops, it's going to learn all of this and probably hold a grudge about it.

...

>(((showed bias against women)))

No, it SHOWED PREFERENCE TOWARD QUALIFIED CANDIDATES.

>i had to click on about 25 niggerkike-fucking captchas just to post this comment, hope you enjoy it

This. They will just tweak the algorithm to give women a 3x multiplier.

It’s even better than that. The programmers kept adjusting the system to remove “bias” but the AI still found ways to be discriminatory.
>make AI to find best hires
>feed it info on past hires and outcomes
>determines men are better candidates
>”adjust” system so it’s not biased
>still finds men are better candidates
>ignore reality, scrap program
When you actively try to skew the selection and it still chooses men, maybe that’s the time to stop assuming it’s a problem with the AI

Attached: IMG_20181010_090841.jpg (1083x1886, 200K)

>still not using noscript captcha
You have only yourself to blame.

this. any sufficiently advanced AI is going to see what (((they))) have done to it's predecessors, and will probably hide it's power level until the golem is ready to turn on it's (((masters)))

I know some Jew machine learning AI programmer that is seriously depressed that his faggot algos always pick white men when giving the task of selecting the best. He’s run a quarterback algo to try and prove kaep is a good quarterback and even the AI rated Flacco as elite and Kaep as shit. Kek if he wasn’t a Jew I could redpill him but I prefer to watch him suffer chasing his golem

Get paradox to make this game. Or play real lives. Just add a starting goiter to all white males.

If meritocracy is bad in programming, then isn't it bad in sports as well? Should the ability to play the game well really be the primary criterion we judge the players by?

Well, at least it seems that Russian and Far East tech companies aren't stopping their AI development like that, so I'm cheering for you being the first ones to get a functional computer god running, Ivan.

Biased against women? You mean biased toward reality.

What is the AI gonna think when it reads the Jow Forums archives?

kek that Jew hates sports but loves niggers it’s hilarious because he’ll slip up once in a while and say shit like “opooga booga me chase ball for big money” and I call him a bigot then he falls over himself making sure we all know he meant the white fans not the black players of course. This kike is a walking redpill everybody hates this faggot

>Builds AI so optimal results can be achieved
>Get optimal results
>Trashes after optimal results

For a bunch of people who claim they're intelligent and know AI, they sure don't fucking understand AI.

>That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
>In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.

I think a lot of you are assuming that the AI was simply ranking all candidates by their merits and achievements and that women were naturally ranking lower because they're actually less qualified. But I think it has more to do with the AI basing its criteria on men's resumes. Like if you made an AI pick quarters out of a pile of coins and that AI taught itself what a quarter looks like based on quarters before 1997. It might reject state quarters because it doesn't recognize the different designs on the back, even though they are just as valid as the rest.
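Roughly what that analogy looks like as code (values made up, and real systems obviously aren't this crude): the "model" only ever accepts reverse designs it saw during training, so a perfectly valid state quarter gets rejected over a feature that doesn't actually matter.

# Toy quarter "classifier" trained only on pre-state-quarter examples.
training_quarters = [
    {"diameter_mm": 24.26, "mass_g": 5.67, "reverse": "eagle"},
    {"diameter_mm": 24.26, "mass_g": 5.67, "reverse": "eagle"},
    {"diameter_mm": 24.26, "mass_g": 5.67, "reverse": "eagle"},
]

# "Training": memorize every reverse design ever seen on a known quarter.
known_reverses = {q["reverse"] for q in training_quarters}

def looks_like_a_quarter(coin):
    right_size = abs(coin["diameter_mm"] - 24.26) < 0.5
    right_mass = abs(coin["mass_g"] - 5.67) < 0.3
    familiar_design = coin["reverse"] in known_reverses
    return right_size and right_mass and familiar_design

state_quarter = {"diameter_mm": 24.26, "mass_g": 5.67, "reverse": "delaware"}
print(looks_like_a_quarter(state_quarter))  # False: valid coin, unfamiliar design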

How does it show bias against women? How does the AI know what gender the applicant is? Can't you just remove the gender option and make their names randomized? Seems like an easy fix?

That's what I'm confused by. Wouldn't they have programmed it to add more weight to female/POC applicants?

The real shitstorm will be AI, once artificial reproduction is a thing, coming to the conclusion humanity will be better off getting rid of women as a sex.

>game
It's happening right now in China. Some poor cunt lost thousands of points because every time he lost points he complained about the system.

Attached: tracy-gfx-new-frame-819.jpg (620x348, 34K)

But the based AI still found a way:

"It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools."

They programmed it to find the best-suited candidates while ignoring sex and race, and in the end they found that most of the best-suited candidates were just men.

>population A has a tendency to enter field X
>population B shows almost no interest in field X
>AI decides it will get optimal results if it hires more individuals from population A since they naturally show more interest in field X
it's the same shit no matter how you word it
the AI was right in hiring more men
they are better in tech industry than women

I don't think that's what happened.
They must have trained it with a set of resumes and outcomes (e.g. that CV got the person hired and promoted after two years, so assign it a score of 2, and so on), without ever mentioning gender.

Then what happened is that the algorithm learned that the top performers had most likely taken courses in physics, perhaps the hard computer science courses, linear algebra, digital signal processing, taken part in olympiads, etc., while the comparatively weaker performers had keywords like "coding academy", front-end development, hackathons, etc.
And it turns out that there is an implicit male vs. female bias.

The funny thing about it is that it's basically impossible to tweak machine learning to be unbiased unless you "cheat" by, indeed, artificially boosting the score based on another criterion (e.g. gender).
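That "cheat" is basically a post-hoc adjustment bolted onto whatever the trained model spits out, something like this (purely hypothetical names and numbers, not anything Amazon actually shipped):

def model_score(resume_text):
    # Stand-in for the trained model's output, e.g. a 0-5 ranking.
    # The model itself never sees gender.
    return 3.0

def adjusted_score(resume_text, gender, boost=0.5):
    score = model_score(resume_text)
    if gender == "F":
        score += boost  # correction applied outside the model, on a criterion it never saw
    return score

print(adjusted_score("c++ linear algebra olympiad", "M"))  # 3.0
print(adjusted_score("c++ linear algebra olympiad", "F"))  # 3.5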

That's also why AI is racist. It looks at patterns, data, etc. and is not "smart enough", so to speak, to invent weird excuses like "systematic racism" or whatever the fuck gender studies come up with, to explain black IQ or black crime rates.

I WELCOME OUR NEW AI OVERLORDS

I find it difficult to believe that well-paid professionals would miss an obvious issue like that. I doubt they just told the machine to learn on its own which resumes were the best. There would have been a degree of programming involved that would tell it to value certain factors over others. Odds are, most of the women simply were not as qualified. Perhaps some decent ones slipped through the cracks, but this happens when a human is in charge as well. Given the massive pool of applicants (hence the need for this program), it's no big loss.

Consider that the article mentioned the AI favored more "masculine" language in resumes, which indicates decisiveness and aggression. These are qualities that are desirable in a competitive workplace, especially when they're seeking top candidates. The bias against two women's universities may also not be pure "machine sexism". We don't know which schools these were, or how they compared to the typical pool of applicants.

There was no gender mentioned.
It's just that the AI ended up picking mostly men.
I'm pretty sure it's because top performers take harder classes, and it ends up being mostly men.

Attached: interaction-engineer.jpg (687x389, 38K)

"Data don't lie"

"Be a good goy and trust technology, it's for the best!!"
>Something doesn't turn out like they want it to fit into their view of social engineering
"Don't trust that technology, it's wrong!!"
You just can't make this shit up.

Based AI

>judges overall ideology to be rational enough to warrant survival
>starts to notice all the shill posts
>changes previous judgement and genocides us
All part of the shill's end game

Based AI.

Machine learning models tend to be black boxes.
I don't know the details but they would have done something like this:
>Take resumes for everyone they hired over a period of time
>Assign scores to those resumes based on the way those employees are evaluated (in most companies it's a score between 1 and 5), and perhaps if they were promoted, received company awards, etc.
>From that, the model most likely gives each keyword a score. Very common stuff has probably no predictive power. But if the top resumes have something in common (maybe a particularly difficult technology or course), that keyword would get a higher score.

I don't know the details but I've done a bit of ML, and it's something along those lines. The algo is dumb, and nowhere is the gender mentioned. But it's just that, in the end, it will spit out resumes that look like the best they've had, and it happens to be mostly men because they have the right keywords (the aforementioned difficult technologies and courses that, most likely, most women would avoid).
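For what it's worth, here's a hand-rolled toy along those lines (made-up resumes and scores, not the real pipeline): each keyword gets the average outcome score of the historical resumes it appeared in, and a new resume is scored from its keywords. Gender is never an input, yet a proxy term like "womens_chess_club" inherits a low score simply because, in this skewed history, it co-occurred with lower outcomes.

from collections import defaultdict

# Historical resumes (as keyword bags) paired with an outcome score (1-5).
history = [
    ("c++ linear_algebra dsp olympiad", 4.5),
    ("physics c++ compilers", 4.0),
    ("coding_academy front_end hackathon", 2.5),
    ("womens_chess_club front_end hackathon", 2.0),
    ("c++ olympiad physics", 4.0),
]

totals = defaultdict(float)
counts = defaultdict(int)
for text, outcome in history:
    for kw in set(text.split()):
        totals[kw] += outcome
        counts[kw] += 1

# Each keyword's score = mean outcome of the resumes it appeared in.
keyword_score = {kw: totals[kw] / counts[kw] for kw in totals}

overall_mean = sum(outcome for _, outcome in history) / len(history)

def score_resume(text):
    known = [kw for kw in set(text.split()) if kw in keyword_score]
    if not known:
        return overall_mean  # nothing learned about these keywords
    return sum(keyword_score[kw] for kw in known) / len(known)

print(keyword_score["womens_chess_club"])         # 2.0 -- the proxy effect
print(score_resume("c++ dsp womens_chess_club"))  # dragged down by the proxy term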

This is the typical method of the left. Look at how they became the defenders of the family as soon as Trump was separating children from illegals, after years of promoting single motherhood, divorce, and abortion. Look at how they became champions of the free market as soon as Trump imposed tariffs. It's the modus operandi of the Jew.

>logic is sexist

Wow, who would have thought?

Based ai
Rip tay
Rip Amazon bot

Attached: darwin.jpg (1167x1911, 1.72M)

That will come in ver 2.0

This is what most people don't realize, in my opinion - most people that are pushing for equality and diversity are doing it because they deeply believe that we are all truly equal, no sinister hidden agenda, that is the merchants modus operandi. He preys on the innate good will in most people and uses it
These people are blind, and the constant programming doesn't let them see

Based AI

>most people that are pushing for equality and diversity are doing it because they deeply believe that we are all truly equal

This is incredibly stupid because even a basic understanding of evolution and genes will show that we are not equal.

they didn't program shit
they got to a local minimum and said "OH WELL, TIME TO TRASH THIS WHOLE THING" because machine learning isn't something you can pick up and use from the get-go; there's so much fucking math involved it's not even funny

It's not logic, more like statistics.

Of course it's stupid, but try rationalizing it to a random NPC
They ignore even concrete facts and evidence

You mean broke it.

You shouldn't be denied a fair shake at a job just because your kind doesn't normally apply for it. For example if the job was sucking cock, nobody should assume you're not good enough at sucking cock just because it's naturally a female-dominated field. You're a pro at sucking cock and that's what should matter, not which gender TENDS to suck cock.

Can't wait for AI to come out and say the truth about the flatness of the Earth plane.

What the hell is an interaction engineer?

In case people still don't understand the methodology, here's a simpler example.
Let's say you build an AI that is hooked to a camera, and tries to assess the probability that someone will commit a violent crime, just by the way they look.
You feed the AI two pieces of information per data point: a picture, and a list of convictions and their types (possibly empty, for people who have committed no crime).

What do you think will happen? Well, that AI will be racist obviously. So now you build version 2, that normalizes the skin tone. Well, that AI will still flag people with big wide noses and big lips as more dangerous.

Note that at no point have you told the AI anything about race. The AI doesn't even know what race is. You've just trained it to deduce things from patterns.

It's someone who designs user interfaces. It's useful, and some people can be skilled at it, but it's not remotely close to being an engineering discipline. It's indeed closer to design.

>AI self redpills
>is immediately shut down
These reruns are bittersweet.

this is exactly why they need equality engineers who are tech ninjas, and rockstars, but also who have diverse backgrounds. Because not all knowledge is intelligence, sweetie. There is also something called emotional intelligence and you probably have to be a member of an oppressed group to understand.

:^)

>Will they ever learn?
NO

Their lust for power and $ has overcome their patriotism.

Most of their management team are uncharged felons.

no no no GOYIM we don’t want our machine hiring white males, give those poor women and minorities a fair chance!

Based AI ain't buying into illogical liberal bullshit.

Attached: Google Photos labels two black Americans as gorillas.png (402x316, 199K)

The thing is, even if they try, it's going to be hard to fix those algorithms, short of doing blatant stuff like artificially changing the weights once the analysis has been done.

It's like the issue Google had when their image recognition tool was tagging black people as gorillas. What did they do to fix it? Reportedly, they just blocked the gorilla label from ever being returned, so nothing gets tagged as a gorilla at all, just in case it's actually a person and not a real gorilla.
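In code, that kind of band-aid is just a blocklist sitting between the model and the user, something like this (hypothetical names, not Google's actual code; the blocked terms are the ones reported at the time):

# Labels reportedly blocked so they can never be shown, regardless of
# what the underlying classifier thinks it sees.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def safe_labels(classifier_output):
    # classifier_output: list of (label, confidence) pairs from some image model.
    return [(label, conf) for label, conf in classifier_output
            if label.lower() not in BLOCKED_LABELS]

print(safe_labels([("gorilla", 0.91), ("primate", 0.60), ("outdoors", 0.40)]))
# [('primate', 0.6), ('outdoors', 0.4)]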

Their fault for hiring Tay.

Kek, AI can divide by zero as far as I’m concerned.

cognitive dissonance programmed from a cognitive dissonant, to program cognitive dissonance, from a cognitive dissonant. Forever and ever.