Next generation of deepfakes;
you can literally use a neural net to become a girl over the internet: github.com
Source: github.com
This is insane. This stuff is crazily photorealistic. I feel like we're approaching some sort of breakthrough.
Can I use it to become Trump? This would be big if true.
Deepfakes are old GAN shit.
The new GANs are monsters, but the paper in the pic used 8 V100s training for over 10 days, plus a lot of cheap human judges for quality control.
And the dataset is just car driving footage and YouTube videos.
GANs never gained traction in the deepfaking community in the first place.
The deepfakes community is just dead...
It's obscure again, but not dead.
Nice, it's alive:
github.com
Computer science has gone too far. We have to go back.
It's actually easier because public figures have large datasets.
What is that edge-to-face shit? I don't get it.
Edge-to-face: a video of the edges of a face is taken and transformed into a realistic video. The women on the right do not exist.
For its self-driving program, Nvidia needs to take video from the car and transform it into a video that tags the objects and edges so the self-driving system can work.
In this paper the researchers show how to do the reverse: take the videos of edges and tags and generate realistic video. With this they could create styles like videos with rain, snow, or obstacles to retrain the AI on specific tests.
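If you want to see what that edge input looks like, here's a rough sketch with plain OpenCV (the filename is a placeholder, and this is my guess at the preprocessing, not necessarily the paper's exact pipeline):
[code]
# Turn a video into per-frame edge maps, i.e. the kind of input the
# edge-to-face model is trained on. "input.mp4" is a placeholder path.
import cv2

cap = cv2.VideoCapture("input.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # classic Canny edge detection
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
[/code]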
All I want is more Danielle Panabaker and Elizabeth Henstridge deepfakes.
>Edge-to-face: a video of the edges of a face is taken and transformed into a realistic video. The women on the right do not exist.
Yeah, but who made those edges? Because this one even has a detailed shelf.
It's like they did edge detection on a video and then shoved it into the machine learning algorithm.
Looks like edge detection, then they overlaid some shitty line drawing of a face on top.
So they already had the video of the background and drew a fake on top? This is fishy, because it's supposed to be a 100% man-made drawing that was turned into a video by AI, when it's obviously edge-detect output generated from an existing video.
You mean I can actually see nudes of my waiting??? (pic related)
I mean *waifu
This shit is really scary... How the fuck will you be able to tell if ANYTHING is real anymore? I'm getting paranoid already, fuck.
>this is fishy, because it's supposed to be a 100% man-made drawing
No. It's edge detection from actual video, which is then used to generate the fake video.
youtu.be
But, friend, consider the real redpill: nothing was ever real in the first place. Embrace hyperreality.
I'd rather kill myself.
you can already see her whore body all over the internet you fucking degenerates
Doubt that, she's pure.
More like pure coal burner
>it's supposed to be a 100% man-made drawing
What? Where did you get this idea from?
Aight, actual data scientist coming through.
First of all, it would be idiotic for a company like Nvidia to publish a faked paper, because re-creating the model from the paper and harvesting gigabytes of data of sluts from YouTube is not a difficult thing to do.
Second of all, your input and output to a neural network can be whatever you want them to be, as long as there's a pattern in the data.
It's true that many data sources are hand-made and hand-labeled, but the idea that inputs must be untouched "real" data is divorced from reality: it's common for researchers to think outside the box when it comes to their input. They'll do interesting and creative things to make it easier for their neural networks to separate signal from noise, to improve results with less data and less training.
I'll go through things one by one. In the first section, the output is a road in a certain "style", perhaps a city at a certain time of day. That output is segmented into labeled components, and the segmentation map becomes the input.
The model is fed a huge set of these data pairs during training, after which we purport that the model has "learned" the style of the road.
Now if you give it any *new* segmented inputs, it will output a *new* fake road in the same style.
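To make that concrete, here's a minimal sketch of the paired image-to-image setup in PyTorch. This is a generic pix2pix-style toy, not the paper's actual vid2vid code, and the model sizes and data are dummies:
[code]
# Toy paired image-to-image training: label/edge map in, real frame out.
# Real setups add a GAN discriminator loss; this only shows the pairing idea.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)
opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1 = nn.L1Loss()

def train_step(label_map, real_frame):
    # Push the generated frame toward the real frame paired with this map.
    fake_frame = generator(label_map)
    loss = l1(fake_frame, real_frame)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy pair: 1-channel segmentation/edge map and its matching RGB frame.
label_map = torch.randn(1, 1, 256, 256)
real_frame = torch.randn(1, 3, 256, 256)
print(train_step(label_map, real_frame))
[/code]
Once trained on enough of these pairs, you feed it a new map and it hallucinates a new frame in the learned style.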
cont.
In Edge-to-Face, the researchers have applied an outside-the-box transformation to derive the input from the output data.
Here, they've run an edge detection algorithm on some goytube slut's face. But that's not enough; we need to push the signal-to-noise ratio even higher. So they use a face detection algorithm--which may be another neural network--to draw an anonymoose mask over the edge-detected face. Except this simplified anonymoose mask uses very few lines to represent detailed facial expressions.
If you think about it, all you need for a face is three holes, a nose, and maybe lines for eyebrows. The eyes need to close, the mouth needs to jiggle a bit, and the eyebrows do whatever they want.
To explain this: neural networks work great with detail, but they're very finicky about *what* details. What if you trained on a dataset filled with oldfags, edge-detected their faces, and did nothing else? I suspect you would get a lot of artifacts and other trouble once you fed the network faces of young people.
The logic of this technique, then, is to unify *everybody's* faces so that they all look the same. The neural network doesn't need to give a shit about what you look like, just what you're going to look like at the end of this.
So logically, if we feed this minimized anonymoose face to a neural network, we should get better facial reproduction at lower cost, and fit a much larger variety of *new* input faces with less trouble.
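Rough sketch of what that simplified mask could look like in code, assuming the standard dlib 68-point landmark predictor (my speculation, not the paper's actual preprocessing):
[code]
# Draw a minimal "anonymoose" mask (eyebrows, eyes, nose, mouth only)
# from dlib's 68 landmarks. Requires shape_predictor_68_face_landmarks.dat;
# "face.jpg" is a placeholder path.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Landmark index ranges in the standard 68-point scheme.
FEATURES = [range(17, 22), range(22, 27),  # eyebrows
            range(36, 42), range(42, 48),  # eyes
            range(27, 36),                 # nose
            range(48, 60)]                 # outer mouth

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = np.zeros(gray.shape, dtype=np.uint8)

for face in detector(gray):
    pts = predictor(gray, face)
    for feature in FEATURES:
        idx = list(feature)
        for a, b in zip(idx, idx[1:]):
            cv2.line(mask,
                     (pts.part(a).x, pts.part(a).y),
                     (pts.part(b).x, pts.part(b).y),
                     255, 1)  # a handful of thin lines = the whole face

cv2.imwrite("mask.png", mask)
[/code]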
Pure slut.
In Pose-to-Body Results, they've done something real funny. I haven't read any of these papers so I could be wrong, but it appears they applied a motion-tracked skeleton to the body--which could possibly be another neural network--and then segmented and labeled the body using that skeleton. Or, if they used a neural network, the output may have just come out like that.
I could be wrong, but I don't see anything special about the gradients; they're probably there because the model needs them to detect overlapping features.
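If I had to guess, producing that skeleton input looks something like this (assumes you already have 2D keypoints from a pose estimator like OpenPose; the joint coordinates and limb pairs here are dummy values for illustration):
[code]
# Render a stick-figure "skeleton map" from 2D pose keypoints.
# The keypoints would come from a pose estimator; these are hardcoded dummies.
import cv2
import numpy as np

# (x, y) per joint: head, neck, shoulders, elbows, wrists, hips, knees, ankles.
keypoints = [(128, 30), (128, 60), (100, 65), (156, 65), (90, 110),
             (166, 110), (85, 150), (171, 150), (112, 130), (144, 130),
             (110, 190), (146, 190), (108, 245), (148, 245)]

# Which joints to connect (indices into keypoints).
LIMBS = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7),
         (1, 8), (1, 9), (8, 10), (9, 11), (10, 12), (11, 13)]

canvas = np.zeros((256, 256, 3), dtype=np.uint8)
for a, b in LIMBS:
    cv2.line(canvas, keypoints[a], keypoints[b], (0, 255, 0), 2)
for x, y in keypoints:
    cv2.circle(canvas, (x, y), 3, (0, 0, 255), -1)

cv2.imwrite("skeleton_map.png", canvas)  # this rendering becomes the model input
[/code]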
There are some weaknesses to the Edge-to-Face technique for deepfakes.
Because of the preprocessing needed to produce the anonymoose mask, this is actually a bit more difficult.
You want to apply a mask so that the model only has to deal with the face, but the anonymoose mask is literally copy-pasted on top of the face. This is going to cause artifacts: if the girl has front bangs, the model will think her hair is part of her face, and that will show up in your results.
In general, you will get unpredictable results if the girl obscures her face in any new input data.
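One way around the bangs problem would be to mask out everything except the landmark region before training, e.g. with a convex hull over the 68 points (again my speculation, not the paper's method; same dlib setup as above):
[code]
# Keep only the landmark convex hull so hair/bangs don't leak into what
# the model treats as "face". Speculative fix; paths are placeholders.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    pts = predictor(gray, face)
    points = np.array([(pts.part(i).x, pts.part(i).y) for i in range(68)],
                      dtype=np.int32)
    hull = cv2.convexHull(points)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    face_only = cv2.bitwise_and(img, img, mask=mask)  # everything else black
    cv2.imwrite("face_only.png", face_only)
[/code]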
Edge detection convolution filter? Are you retarded?