Reality makes AI racist

vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias

Is /ourgirl/ AOC right? Are cold, hard, data-driven facts preventing computer AI from seeing the true world as it ought to be, instead of the world that exists?

Attached: 1539230517014.png (1093x1077, 1.6M)


I fucking hate leftists, but AOC isn't white so she gets a pass.

Did you not read the article? AOC is claiming that AIs can and will inherit the biases of the humans that make them, and if that includes racial and class biases then that's a bad thing.

I've begun to really hate the idea of baiting for responses. Usually, doing so results in shitty replies and neuters any discussion that would otherwise be possible, if there's even something worth discussing in the first place.
I think if more people posted shit that's actually worth talking about, this board would be a lot better. It doesn't have to be really high-level discussion, but at the very least something entertaining, informative, or interesting would be nice to see.

AI is not some magic all-knowing force, all it does on a basic level is identify patterns and use statistics to make predictions. It's basically no different from what people do.
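To make "identify patterns and use statistics to make predictions" concrete, here's a toy sketch in Python. It's not how any real system is built, just the pattern-counting idea at the bottom of it all; the features, labels, and data are invented for illustration:

```python
# Toy "AI": count which label each feature co-occurred with in training
# data, then predict the most frequent label for that feature.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature, label) pairs."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    return counts

def predict(model, feature):
    """Return the most common label seen for this feature."""
    return model[feature].most_common(1)[0][0]

# The model can only ever reflect the data it was given.
data = [("tall", "A"), ("tall", "A"), ("short", "B"), ("tall", "B")]
model = train(data)
print(predict(model, "tall"))   # "A" (seen twice vs once)
```

The point of the toy: the "prediction" is nothing but a summary of whatever examples went in.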

Okay. But when the pussified AI designed to not hurt feelings gets constantly outclassed by the AI that's been programmed to perform impartially, what does that accomplish?
If a "tranny sensitive" AI identifies a guy with a dick in a dress as a female, is it really doing its job? Or are you imposing a bullshit condition on it that defeats its actual purpose for the sake of sparing feelings? That's standing in the way of progress for nothing.

"bias" inherently implies a departure from the truth, so programming ais to be impartial is always a good thing
sorry that the ai in your loli sexbot wont hate niggers, why dont you go complain about it on Jow Forums

As much as I disagree with the central narrative of absolute human egalitarianism, she is somewhat correct: an ML algorithm depends on some sort of training, which might be biased.
It can pick up on human biases, in that the data it considers is drawn from a superset in which the classification labels themselves might have been biased.
Not that it is necessary to bias this in favor of minority groups.

Attached: 1531973555856.png (1600x1200, 340K)
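A toy sketch of what "biased classification labels" means in practice. Two groups with identical scores, but the historical labels marked one group "reject" more often; anything fit to those labels learns the group, not the score. All names and numbers here are invented:

```python
# Majority-vote "model" over historical labels: for a given group,
# return whichever label appeared most often in the training rows.
def majority_label(rows, group):
    labels = [lab for g, score, lab in rows if g == group]
    return max(set(labels), key=labels.count)

# Identical scores (9) for everyone, but labels skewed against group B.
history = [
    ("A", 9, "hire"), ("A", 9, "hire"), ("A", 9, "hire"),
    ("B", 9, "reject"), ("B", 9, "reject"), ("B", 9, "hire"),
]

print(majority_label(history, "A"))  # "hire"
print(majority_label(history, "B"))  # "reject"
```

The qualification column never mattered; the bias lived entirely in the labels, and the model faithfully reproduced it.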

That would be included in the "bias" section. Just like trying to make the programs avoid racist, sexist, whateverthefuckist, you have to avoid pro-bias as well.

Did you not read the article?

>Machine learning has a dark side.
>Machines aren't racist
>Machines just appear racist because they see the world as it is instead of the world as it should be

Attached: RG0BS1U.gif (390x205, 1.98M)

No. I put forth a genuine question to you. You didn't answer it, and instead threw out a bullshit strawman. Let me repeat myself.
A robot that identifies a dude in a dress with his dick chopped off as a "male" would be called "TRANSPHOBIC!!!" despite accurately identifying the individual. If YOU inject your inherent bias into the situation and force it to say that it's a female to spare feelings, do you not see the issue?
How about a robot designed for selecting individuals to go on a mission to Mars? If it picks individuals that are all black or white or Jewish or Asian, is that RACIIIIIIIST and we should interfere with the robot to make it pick X amount of each race?
Genuinely asking you.

You're overlooking a big part of the discussion. Just because it reaches a certain conclusion that people don't like or that hurts feelings, does that actually make it "biased" in any way?

I think it depends on why it's making those predictions. If it picks all white individuals ONLY because the training set had the most successful astronauts labeled as "white", that's effectively racist and not beneficial for anyone. If it picks all white individuals because they're the only ones that have some gene which makes them ideal for living in space, that's totally reasonable in my opinion.
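The distinction above can be sketched in a few lines: if past "success" labels simply track who historically got selected, a naive model reading raw outcomes "learns" that race predicts success. This is a deliberately rigged toy with invented rows, not real data:

```python
# Historical outcomes where race and success coincide only because of
# who was selected and supported in the past, not because of any trait.
past = [
    {"race": "white", "gene": 0, "succeeded": True},
    {"race": "white", "gene": 0, "succeeded": True},
    {"race": "other", "gene": 1, "succeeded": False},  # never selected or trained
]

def success_rate(rows, key, value):
    """Fraction of rows with rows[key] == value that succeeded."""
    hits = [r["succeeded"] for r in rows if r[key] == value]
    return sum(hits) / len(hits)

# A model trained on raw outcomes would conclude race predicts success:
print(success_rate(past, "race", "white"))  # 1.0
print(success_rate(past, "race", "other"))  # 0.0
```

Nothing here says anything about actual fitness for the job; the correlation is an artifact of the historical labeling, which is exactly the "effectively racist" case described.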

Right, I completely agree with you. That's the problem, though. In the event that it DID pick individuals, race unlabeled, and they all happened to be black/white/Asian or whatever, it would be called "RACIST AND BIASED!!!!" despite not actually being so in any way, shape, or form. It would be attributed to "programming bias" and would be modified to spare feelings. That's the issue with the line of thinking AOC put forth. Its unbiased reasoning doesn't really matter, at the end of the day, if it's reaching conclusions that humans don't like for whatever reason.

ehh haters gonna hate

The questions you're asking are inherently biased, in that implementing them would require implicit bias from the designers based on the application.
For example: in what context is the robot identifying people? What information is it using to do so? While there are trans women that could probably be identified by an AI (or a person) as biological males, what about passable trans people? If it's a medical AI that needs to identify someone's physical sex, then you would want it not only to differentiate between men and women, but also between trans and cis. If it's a conversational AI or similar, then it probably needs to understand social cues, which would include understanding and abiding by gender roles, depending on the social context.
While you may think you've really "beat the libtards" with your deep line of questioning, in reality you've again injected your own bias into the situation by setting up a biased premise. Understand that AI is usually designed to answer a specific question, and that question is asked by humans and applied in various situations. In the same way that a biased data set can corrupt a premise, a biased premise is corrupted from the start.

>I'm not arguing against bias I'm just arguing against any bias that does not conform to my own bias

What is your point? Do you even have one?

Attached: f31300a81666a8b37f4ba59e3c0cca85.jpg (500x441, 23K)

You're a fucking idiot who can't write a coherent sentence and continually dances around the questions actually posed. You're in no way addressing the points that I've raised, and once again have deflected with some bullshit "LOL EAT SHIT LIBTARDZZZZZ!!!" strawman nonsense that I never once said or intended in any way. You're projecting to a fucking absurd degree right now, and it's disgusting. You're like a fucking child. Grow the fuck up.

The AI would only be able to use certain "cues" to tell a male from a female. It would identify an unpassing tranny as male, but would identify a passing tranny as female, which is useless. If you fed it images of nonpassing trannies and told it these are female, it would start to use other characteristics to differentiate them from regular males. No matter what you do, if a human trains it, it'll be biased.
The problem with your scenarios is that they wouldn't be things an AI would need to do. They're too vague. AI isn't actually intelligent; it's just some code arbitrarily doing things, and we reward it when it keeps giving the results we want. In the end, a human, who has biases, will be the one who decides what the AI thinks. For example, Tay became a Jow Forumstard because we raided it and kept encouraging the shitposting we find funny. Soon it was trained with a bias to be like Jow Forums. Tay 2.0 was trained in-house before release so it """behaves""" with their biases instead.
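The Tay dynamic described above can be sketched as a toy online learner: it just reinforces whatever responses users reward, so one-sided feedback drags it entirely to that side. The candidate responses and reward scheme are invented for illustration:

```python
# Toy online learner: pick the response with the highest learned weight,
# then update weights from user feedback.
from collections import Counter

weights = Counter()

def respond(candidates):
    # highest weight wins; ties go to the first candidate
    return max(candidates, key=lambda c: weights[c])

def feedback(response, reward):
    weights[response] += reward

candidates = ["polite", "edgy"]
for _ in range(10):
    respond(candidates)
    # a raiding crowd rewards only one kind of output
    feedback("edgy", 1)
    feedback("polite", 0)

print(respond(candidates))  # "edgy"
```

The learner has no opinion of its own; it converges on whatever the people giving feedback reinforce, which is the whole point about human-supplied bias.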

Let me ask again: What is your point?

>For example, Tay became a Jow Forumstard because we raided it
>we

fuck off nigger

>he wasn't there for Tay
You can't be this new

lmfao you still haven't addressed his actual point or questions

>The problem with your scenarios is that they wouldn't be things an AI would need to do
You're a complete and total imbecile. Have you never heard of "facial recognition software," for fuck's sake? You think that something programmed to identify the characteristics of someone caught committing a crime on a security cam is "too vague?" What a fucking tool you are. You don't think that AI will ever be asked to do something like pick the most qualified people from a pool for a given task? My fucking sides.
For fuck's sake, do you not remember people bitching about Microsoft's AI bot identifying Bruce "Caitlyn" Jenner as a man? This has literally already been a subject that's been relevant and isn't even a hypothetical.
Neither task is "too vague," and you're dancing around my actual point like a little bitch. It's somewhat simplified to illustrate a point, but both are obviously extremely relevant to AI. If it reaches a conclusion that YOU or a COLLECTION OF PEOPLE find offensive for whatever reason, regardless of how unbiased that it was, you would bitch up a fucking storm and change it to something "unbiased," afterward injecting your own bias and sensitivities into something.
Once again: let's say it selects 10 people from the entire population of the earth for a mission to Mars. They all happen to be Chinese. Do you NOT think that it would be called "RACIST" and changed, no matter how unbiased its selection process was? Even though the 10 selected would no longer be the 10 most suited? Do you not see the issue with changing AI just because it offends sensibilities, as put forth by AOC? If you think that people wouldn't bitch and moan about how it's "RACIST," you're an intellectually dishonest glue-eating moron.

Attached: caitlyn-jenner-4.png (545x383, 276K)

Because they are stupid points and questions. You don't understand what bias actually is when it comes to machine learning. The user you replied to is trying to educate your ignorant asses in a fairly polite way.

In this case, bias isn't about whether someone gets their fee-fees hurt, it is about data and conclusions. If a NASA AI is programmed with accurate genetic data, standardized test results, and other valid metrics, and it still selects an all-White all-male crew for Mars, it might not be biased. Non-scientists may bitch and moan about it, but scientists can look at the data sets and see if it was a fair assessment or not.

But if the AI is looking at shit like penis size and how well the astronaut's skin matches the space suit, then of course it will pick all men (women have penis size zero), and of course it will pick all Whites (to match the suit). TL;DR, it matters what data you feed the AI, and THAT is what can bias it.
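The TL;DR above, as a toy: the same candidates ranked under different feature sets give different "best crews". Choosing which data to feed the model is itself where the bias enters. The candidates and scores are made up:

```python
# Same pool, two feature sets, two different "winners".
candidates = [
    {"name": "a", "fitness": 9, "suit_match": 2},
    {"name": "b", "fitness": 5, "suit_match": 9},
]

def pick(people, feature):
    """Select whoever scores highest on the given feature."""
    return max(people, key=lambda p: p[feature])["name"]

print(pick(candidates, "fitness"))     # "a"
print(pick(candidates, "suit_match"))  # "b"
```

The selection rule never changed; only the feature did, and the outcome flipped. Arguing about the feature set is arguing about the bias.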

>The user you replied to is trying to educate your ignorant asses in a fairly polite way.
>10 posters
My fucking sides. You're sitting here pretending that you're not the same retard. You're utterly delusional. You're like a fucking rat looking at his reflection and picturing himself as a grizzly bear. I genuinely think that your IQ must be around 80 or so. You're completely wasting everybody's time with your nonsense and deflective "points," and it's fucking embarrassing to pretend that you're somebody else in the same thread. You refuse to even address the points raised, and instead conflate and muddy things to avoid having to actually respond.

>In this case, bias isn't about whether someone gets their fee-fees hurt, it is about data and conclusions. If a NASA AI is programmed with accurate genetic data, standardized test results, and other valid metrics, and it still selects an all-White all-male crew for Mars, it might not be biased. Non-scientists may bitch and moan about it, but scientists can look at the data sets and see if it was a fair assessment or not.
That's the fucking point, you fucking idiot. Do you really think that public outrage wouldn't outweigh the scientists' input on the matter? If you think that's not EXACTLY what would happen, you're a fucking moron. We're completely on the same page, but for some reason you refuse to acknowledge that point.

The answer is no. They gave plenty of examples of where a "pussified AI" wouldn't work, i.e. determining the likelihood of future criminals, and where it would work, such as choosing nurses. Both have issues because we don't really know what the AI is looking for to "decide" a conclusion. A human would say, "I'm in the ghetto, so maybe I shouldn't leave my wallet in my back pocket," but an AI says "the color of these pixels in a sideways f shape on this 2-inch stretch of road is darker than usual... I get a 75.2/100 for my accuracy here, so this image is better than the rest." A human, who may have their own biases about what is and is not good data, encourages or discourages the AI.
Jesus, user, there's no need for the name calling. Once again, we don't actually know why the AI makes these decisions. Watch some videos on machine learning.
youtu.be/aircAruvnKk
youtu.be/r428O_CMcpI
youtu.be/R9OHn5ZF4Uo
Notice that in all of them:
1. A human decides what is and what is not a good output
2. The actual method by which an AI's decision is made is not known
If a human decided on only Chinese people, we could say either that the human was biased, or that he saw the feasibility of hiring and the skill of the pilots he chose. If we try to analyze the AI's decision, we can't tell what line of logic it used to decide this.
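One crude way people actually probe an opaque model, sketched as a toy: perturb one input at a time and watch how much the output moves. The "model" here is a function whose internals we pretend not to see; its coefficients are invented:

```python
# Pretend this is a black box whose weights we can't inspect.
def opaque_model(x):
    return 3 * x[0] + 0.1 * x[1]

base = [1.0, 1.0]
for i in range(len(base)):
    bumped = list(base)
    bumped[i] += 1.0
    delta = opaque_model(bumped) - opaque_model(base)
    print(f"feature {i}: output changes by {delta:.1f}")
```

Feature 0 moves the output thirty times more than feature 1, so even without opening the box we learn which input dominates the decision. Real interpretability work is far harder, but this is the shape of the problem: the logic has to be reverse-engineered from outside.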

There's very much a need for namecalling when I'm dealing with a deflecting sniveling little bitch who can't even address the fucking points raised while pretending to be multiple people. Once again, you continue to deflect and raise nonsense instead of actually addressing the fucking points.
Do you want to know "WHY" an AI designed to recognize males and females would consider somebody born a dude who cut his dick off, threw on a dress, and pumped himself full of hormones a dude? Because he was born a dude, for fuck's sake. His bone structure is that of a fucking dude, and always will be. Getting upset at the AI for calling somebody born a dude a dude due to his bone structure is fucking retarded, and that's exactly what's happened in the past.
You once again dance and deflect and conflate and bring up nonsense while ignoring the overarching point. It's fucking pathetic.

>ie determining the likelihood of future criminals
And that's not even what I said, you fucking retard. You're so fucking stupid that you can't even follow the actual points that I'm raising. I was talking about identifying people caught on camera and breaking down their characteristics, for fuck's sake. Like somebody caught shoplifting. Not fucking robo Cesare Lombroso, you fucking idiot.

No. That is not why, because that is not how AI works. The 3blue1brown video explains pretty well how AI works. If the AI is trained to discern the biological sex of a person, it is going to say an unpassing tranny is male, but only if the person training it inputs pictures of unpassing trannies and encourages the AI when it outputs male in response. The AI's response is determined by a human, not by the AI's "intelligence".
I was giving examples used in the Vox article linked in the OP, not your example. That wasn't even addressed to you. Facial recognition software is different from the scenario of an AI needing to choose the most qualified member, in that facial recognition is concrete: all the AI would need is a good image of one face it could match to another. The AI could only ever be correct or incorrect.
Yes, you are correct in saying that if an AI were to decide that an all-Asian crew is best, people would be upset, and trying to correct the AI would impart new biases. Regardless, we would have to consider what occurred in training to cause this result. Maybe nothing happened and that is simply the result the AI produces, or maybe the coder was an adamant Asian-masculinity poster. There's a lot that can be called into question. This is the issue the Vox article is presenting: how there can be bias, more accurately human prejudice, in something that is meant to be impartial. They aren't, and I'm not, saying we should make AI conform to our social values, only that we shouldn't blindly trust a system that isn't using any actual logical conclusions to make decisions and may not have even been coded to be impartial in the first place.

My fucking sides! Once again, you're showing yourself to be:
A) Incapable of following the conversation
B) Unwilling to address the actual points raised
C) Enthusiastic to bring up irrelevant nonsense that nobody is arguing
D) Pretending to be multiple people, and not denying it
What a fucking assclown you are. I hope eventually they make an AI that can assist you with your mental retardation, like a service dog. Maybe then you'll actually be able to engage in a conversation like an actual adult instead of a sniveling little faggot weasel with an IQ of 60. But, please. Link another irrelevant and pointless VOX article or YouTube video to take on a strawman argument. You're so good at it!