US lawmakers say AI deepfakes ‘have the potential to disrupt every facet of our society’

>fake videos can be spread to discredit political opponents
>deepfake porn can be made of celebrities
>you can make it look like someone said or did something they didn't

How real is this Black Mirror threat, Jow Forums?

theverge.com/2018/9/14/17859188/ai-deepfakes-national-security-threat-lawmakers-letter-intelligence-community

Attached: serveimage.png (1600x927, 1.42M)

Fakes are easily detectable via computer algorithms. They can only fool the human eye.
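For the curious, here's a toy sketch of one published detection idea: GAN-upsampled images tend to carry odd high-frequency energy that a frequency-domain check can flag. The filename and threshold below are made up, and this is nowhere near a real detector, just the flavor of one:

```python
# Toy spectral check, not a production detector. GAN upsampling often
# leaves high-frequency artifacts that the human eye misses.
import numpy as np
import cv2  # pip install opencv-python

def high_freq_ratio(path: str) -> float:
    """Fraction of spectral energy outside the low-frequency core."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    core = spectrum[cy - h // 8:cy + h // 8, cx - w // 8:cx + w // 8]
    return 1.0 - core.sum() / spectrum.sum()

# 'suspect_frame.png' and the 0.35 threshold are purely illustrative.
if high_freq_ratio("suspect_frame.png") > 0.35:
    print("spectral profile looks unusual -- inspect further")
```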

Bruh, they've been making deep fakes on porn sites for the past 10 years. Calm down.

They say that you can make it look like someone did something but I say that you can do something and say it was deepfaked.

DONT BE A RETARD AND KNOW YOUR SHIT AND YOU WONT GET FOOLED

what's the point of swapping two unfunny corporate shill lefties' faces? the effect is 0

Fearmongering by politicians. Once deepfaking is a fundamental part of society, the reliance on video/audio footage as a standard of trustworthy evidence will be destroyed; deepfaking's success will dismantle its own validity.
Also, what the other user said: it'll be a while before we'll be able to perfectly fake evidence.

The key part of the learning algorithm is one network trying to fool another network.
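A minimal toy of that adversarial loop in PyTorch, for anyone curious. Sizes and hyperparameters are arbitrary, and note the classic faceswap tools actually use shared-encoder autoencoders; this sketch shows the generator-vs-discriminator (GAN) idea the post describes:

```python
# One network (G) tries to fool the other (D); D tries not to be fooled.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())      # noise -> fake image
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())     # image -> real/fake score
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)  # stand-in for a batch of real face crops

for step in range(1000):
    # 1) Train D to separate real from fake.
    fake = G(torch.randn(32, 64)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G to fool D: G only improves by making fakes D can't reject.
    fake = G(torch.randn(32, 64))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```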

>make video of a person saying something they never said
>use it to incriminate them
>implicate them in crimes
>use audio clips of them speaking and run it through some ML until you have a complete model of their speech patterns
>make phone calls and pretend to be them
>fool people into acting as if they were under the direct orders of a given target


Not only could you potentially ruin someone's career in politics or the business world, you could also disrupt basic government and corporate function by pretending to be someone with authority and telling subordinates to do something damaging.
It is pretty significant, but this isn't something that can ever be controlled. We just have to learn to live with it.

uh, these new ones can't compare at all. you're missing out, user

>inb4 both are fake because politics ;^)

Attached: maxresdefault.jpg (1280x720, 87K)

>it'll be a while before we'll be able to perfectly fake evidence
yes, but it seems voice emulation is already a thing
qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/
gizmodo.com/this-artificially-intelligent-speech-generator-can-fake-1794839913
digitaltrends.com/cool-tech/baidu-ai-emulate-your-voice/

you should probably go re-read that article

You can already do this by planting cp like politicians and statists undoubtedly already do.

They probably said the same thing when Photoshop came out.

Thanks America.

The deepfake community consists of two types of people. Users who are interested in deep learning and users who want to fuck around with the tech to get a quick laugh/fap out of it.
>Why do I get OOM errors?
>Why are my fakes shit quality when I use the low-mem trainer?
>Am I supposed to remove blurry pics from my faceset?
>How do I get rid of the obvious border around the face?
>How do I get the right cuDNN version? Nvidia only provides the newest version.
>Why does my PC freeze when I try to train on 100K images?
>How can I update the software? What do you mean 'use git'? What is that?
>Should I delete those wrong alignments?
This is what the second group sounds like. Are you afraid yet, Jow Forums? 95% of the deepfake community are retards who think the concept sounds cool, but don't have the patience to learn about the theory behind it or experiment to get good results.
The remaining 5% are able to produce somewhat realistic deepfakes, but if somebody has the motivation to destroy someone else's life (be it revenge, political reasons or something else), then they can do it with or without deepfakes. It's just another tool.
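For what it's worth, half of those OOM questions have the same answer: the batch doesn't fit in VRAM, so shrink it and accumulate gradients. A generic PyTorch-style illustration, not specific to any deepfake tool (the tiny linear model and synthetic data are placeholders):

```python
# Generic out-of-memory triage sketch, not tied to any particular tool.
import torch
import torch.nn as nn

print("cuDNN:", torch.backends.cudnn.version())  # the "which cuDNN?" question

model = nn.Linear(784, 10)                 # placeholder for a face model
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Shrink the per-step batch, then accumulate gradients so the
# effective batch size stays the same.
BATCH, ACCUM = 4, 16                       # 4 * 16 = effective batch of 64
data = [(torch.randn(BATCH, 784), torch.randint(0, 10, (BATCH,)))
        for _ in range(ACCUM)]             # synthetic stand-in batches

opt.zero_grad()
for i, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y) / ACCUM    # scale so gradients average out
    loss.backward()
    if (i + 1) % ACCUM == 0:
        opt.step()
        opt.zero_grad()
```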

>if somebody has the motivation to destroy someone else's life (be it revenge, political reasons or something else), then they can do it with or without deepfakes. It's just another tool.
examples plz

Being able to detect them with algorithms is irrelevant; the question is whether the audience can detect them, or whether the eventual debunking will reach enough of the original audience.
For example, let's say the US wanted to kick the Syria shit into full gear: a simple deepfake would easily be able to show Assad as an evil dictator who totally needs to be taken down. Since there's video/audio evidence, people won't just brush it off, they'll believe it. It's irrelevant if down the line it's exposed as a deepfake on page 13 of some newspaper; the damage has been done and the public now believes it.

>a simple deepfake would easily be able to show Assad as an evil dictator
haha, yeah, because that's totally not the case :^)

It's really easy to tell when one is fake, at least for me. The head movements don't at all sync up with the mouth movements and it looks completely uncanny.

Acid attack

that's physical violence against somebody, we're talking about political and social "violence"

For now, you mean. If you have HQ video of some politician talking for hours and hours, that's thousands upon thousands of frames you can throw at your model.
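An hour of 30 fps footage is roughly 108,000 frames. A rough OpenCV sketch of harvesting them ("speech.mp4" is a placeholder path; sampling every Nth frame skips near-duplicate consecutive frames that add little new information):

```python
import cv2  # pip install opencv-python
import os

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("speech.mp4")   # placeholder: HQ speech footage
saved, idx, EVERY = 0, 0, 10           # keep 1 frame in 10
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % EVERY == 0:
        cv2.imwrite(f"frames/{saved:06d}.png", frame)
        saved += 1
    idx += 1
cap.release()
print(saved, "frames written")
```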

>disruptive
You mean people will no longer believe what politicians say?

Attached: [HorribleSubs] Zombieland Saga - 03 [720p]_00:21:09_66.jpg (1280x720, 153K)

With violence you can kill a single leader, with the right propaganda you can kill a whole country/party/movement.

And that's the other side of the coin, and it's just as dangerous.

Good thing white women are already degenerate because they are about to get blacked

This already happened to a guy running for office in New Hampshire:
unionleader.com/news/politics/deepfakes-the-next-threat-to-our-elections/article_048c1795-7e51-51ad-8ce2-8b8121eedf15.html

It's more likely it'll be the other way around. They'll get blacked for real and then just claim deepfake

Yes goyim, only the government authorities should be allowed to develop and use this technology

Or you know, build a system based on shared secrets or cryptography rather than fucking voice recognition.
Instead of faces everyone shares their PGP public key.
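Not PGP exactly, but here's the same principle in a few lines of Python with Ed25519 signatures from the "cryptography" package. A sketch of the idea, not a deployable system; the order text is made up:

```python
# Authenticate the message, not the voice: a deepfaked call can't
# produce a valid signature under the target's published public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Done once: the real person generates a keypair and publishes the public half.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

order = b"Wire the funds to account X"     # illustrative order
signature = private_key.sign(order)

# The subordinate verifies against the published key, not the caller's voice.
try:
    public_key.verify(signature, order)
    print("signature checks out -- the order is authentic")
except InvalidSignature:
    print("invalid signature -- treat the order as forged")
```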

This will be the turning point for media: there will be only one media source, which the gov tells us is the *real* news, and if you don't watch their news, then you're watching fake news.

I mean, holy shit. Every fucking thing can be "fake news, deepfake!" That's our surrender as a society, we will not know what's real, diving deep into matrix-like territory. Who the fuck decided creating this was a good idea? Why in the world would anyone fund this, besides for political gain, and control of the masses? This is it.

>Why in the world would anyone fund this, besides for political gain, and control of the masses?
Chaos

Oh yeah, it's going to be easier to start wars.
>NK nuked us, but it only hit the desert, killing no one.
>Here's the video of the incoming bomb and it landing
>We must attack them before they launch another attack
>'Mericans: let's fucking get them!

>Why in the world would anyone fund this, besides for political gain, and control of the masses?
Porn

No

Attached: 6524f3851b301b268a2fd8c0b948a3da27fb0c9e9a5b11713c179dc7ea569fb8-leftypol.jpg (600x425, 76K)

You could kill an entire party with violence.

Yeah, but then you need to individually kill everyone, and even then there's still the supporters.
Now let's say you deepfaked the heads of the party talking behind the scenes about how the supporters are all retarded: support will wane, the party and its ideas will die, and nobody will need to explain why everyone suddenly died.

Only if the party has no moshpit. #rekt #420noscope

So we finally can have a video of a Loli Natalie Portman fucking with Jean Reno?

You will only help to make better deep fakes.

>but what about x

What AI are they using to make these?

Real enough for Russia Today

Attached: assange deepfake.gif (960x540, 3.32M)

>Since there's video/audio evidence people won't just brush it off, they'll believe it.
Maybe this is a reason to stop letting the opinion of some bumfuck hicks decide the fate of another country, don't you think?

Here's an idea, what if like educated masses are a pillar of a functioning democracy?

>Using a still photo to "demonstrate" the power of deepfakes

Fuck off with that picture

Last thread discussed this likely being a transition cut between interview parts. Of course I don't have evidence on hand, but I'd take this picture with a grain of salt too.

what's the easiest way to create deepfakes?

Welcome to the future: a population so indoctrinated by a reality of confusion that they become utterly submissive, their will governed by a world of hyper-capitalism. You already see it now. Reality is what you see through a screen.

Attached: 1511271852162.gif (500x214, 980K)

They are preparing for videos of high-profile celebrities and politicians saying and doing nasty things coming to light, so they can put doubt in the minds of the public ("maybe it's fake"), like that little slip-up Hillary made in that recent interview when she said all blacks look alike. Imagine if they could just say "oh, that was a deepfake, she didn't really say that".

Maybe, but don't forget the magic and fairydust!

People will now have to do their homework instead of relying on what other people tell them. Good.

Look up faceswap on GitHub.
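The usual flow there is extract, then train, then convert. From memory it looks something like this; the flags drift between versions, so check "python faceswap.py -h" before trusting any of it:

```
python faceswap.py extract -i frames_a/ -o faces_a/   # face to paste in
python faceswap.py extract -i frames_b/ -o faces_b/   # face in the target video
python faceswap.py train -A faces_a/ -B faces_b/ -m model/
python faceswap.py convert -i frames_b/ -o swapped/ -m model/
```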

Yes goy, just let the government decide everything for you, we promise it'll be in your best interests.

Pay someone to do it for you.

This
see: tech.slashdot.org/story/16/10/28/1520233/googles-ai-created-its-own-form-of-encryption

>Googlers Martin Abadi and David G. Andersen have willingly allowed three test subjects -- neural networks named Alice, Bob and Eve -- to pass each other notes using an encryption method they created themselves. As the New Scientist reports, Abadi and Andersen assigned each AI a task: Alice had to send a secret message that only Bob could read, while Eve would try to figure out how to eavesdrop and decode the message herself. The experiment started with a plain-text message that Alice converted into unreadable gibberish, which Bob could decode using a cipher key. At first, Alice and Bob were apparently bad at hiding their secrets, but over the course of 15,000 attempts Alice worked out her own encryption strategy and Bob simultaneously figured out how to decrypt it. The message was only 16 bits long, with each bit being a 1 or a 0, so the fact that Eve was only able to guess half of the bits in the message means she was basically just flipping a coin or guessing at random.
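If anyone wants to poke at the idea, here's a stripped-down PyTorch toy of that Alice/Bob/Eve setup (the paper is "Learning to Protect Communications with Adversarial Neural Cryptography", Abadi and Andersen, 2016). The architectures and loss terms are simplified stand-ins, not the paper's exact ones:

```python
import torch
import torch.nn as nn

N = 16  # message and key length in bits, as in the article

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = net(2 * N, N)   # sees plaintext + key, emits "ciphertext"
bob   = net(2 * N, N)   # sees ciphertext + key, recovers plaintext
eve   = net(N, N)       # sees ciphertext only

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def bits(batch):  # random +/-1 bit vectors
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for step in range(5000):
    p, k = bits(256), bits(256)
    c = alice(torch.cat([p, k], dim=1))

    # Eve trains to decode the ciphertext without the key...
    loss_e = (eve(c.detach()) - p).abs().mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # ...while Alice and Bob train so Bob decodes correctly and Eve is
    # pushed toward chance (mean error 1.0 on +/-1 bits = coin flipping).
    loss_bob = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    loss_adv = (1 - (eve(c) - p).abs().mean()) ** 2
    opt_ab.zero_grad(); (loss_bob + loss_adv).backward(); opt_ab.step()
```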

>the verge
Boomers who use but don't understand technology.

I can't wait till deepfakes are real-time so I can fuck any celeb I want in VR/AR a decade from now.

>John Oliver
>Jimmy Fallon

NO ONE CARES. FUCK OFF Jow Forums

I work in video; it's definitely not a standard crossfade. It looks more like a morph transition, which is a real transition included with any video editing software (Vegas, Premiere, Final Cut, whatever). That would be a really weird transition to use in an interview.

Definitely possible that it is actually just a transition, but a really odd choice to use for something like an interview. Who knows.

This is no different from how today's (news) media already operates, I don't see why it's a big deal.

Attached: 1541153298094.jpg (750x934, 569K)

Didn’t this community die already when reddit took it down?

There's probably a darknet community for it.
I was interested in deepfake porn but couldn't find anything.
Either way, censorship isn't going to destroy this; we'll probably see fakes that are very difficult to detect within a few years.

they've been doing this shit for a decade

I wanna make a website called deepfake.me where you can upload your face at different angles and it gives you plausible deniability.

Well, I could easily tell Christine Ford was a liar, but apparently the average American can't detect a liar.

>That would be a really weird transition to use in an interview.
Interviews are the kind of thing that gets edited heavily anyway. With this kind of technology, "do-overs" are possible as long as they get the guy to sit in relatively the same position and just act natural.