Welcome to the new uncensored JAV era, Jow Forums

Attached: czkz1vu0g0k31.png (1024x964, 970K)

>the new uncensored JAV era
FINALLY!

look up adversarial attacks
if JAV censoring starts using those, no DL model as of now would be able to un-censor it.

Moustache AI.

this proves humans are npcs

But then we'd need an AI with like a billion pictures of vaginas.

oh fuck my bepis is going to die

>get tons of uncensored jav porn
>censor it
>do some image manipulation to increase your training set
>train your GAN for quite a while

Attached: (77).jpg (736x733, 65K)
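
A minimal sketch of that recipe, assuming PyTorch; mosaic() and the pix2pix-style losses here are illustrative stand-ins, not any real decensoring codebase:

import torch
import torch.nn.functional as F

def mosaic(x, block=16):
    # fake the censor bar: downsample a region, then nearest-upsample it back
    h, w = x.shape[-2:]
    small = F.avg_pool2d(x, block)
    return F.interpolate(small, size=(h, w), mode="nearest")

# G: censored -> uncensored; D: judges candidates (simplified, unconditional D)
def train_step(G, D, opt_g, opt_d, real):
    cens = mosaic(real)                      # synthesize (censored, clean) pairs
    fake = G(cens)

    # discriminator: real images -> 1, generated images -> 0
    opt_d.zero_grad()
    d_real, d_fake = D(real), D(fake.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # generator: fool D while staying close to ground truth (pix2pix-style L1 term)
    opt_g.zero_grad()
    d_gen = D(fake)
    loss_g = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen)) \
           + 100.0 * F.l1_loss(fake, real)
    loss_g.backward()
    opt_g.step()

The trick is that censoring yourself gives you perfect paired data for free, which is exactly what this kind of supervised GAN setup wants.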

Anon, adversarial attacks are crafted against specific models.

Wow, it's like they used the exact same images for both training the network and showing that it works.
Is it really this easy to trick normies into believing neural network magic?

>if JAV censoring starts using those, no DL model as of now would be able to un-censor it
Why would they? They are in compliance with some shitty law, it's not their choice. They'd show you that pussy if they could!

Anon, read the CapsNet paper by Hinton, which explains why all current "state of the art" CNNs go down the drain. I repeat, "ALL".

What is wrong with everyone's eyes?

Wrong.
JAV is pixelated because in Japan, porn actors retain copyright of their genitals.

False.
t. JAV expert

The only input is the 16x16 versions

nobody tests on training data
are you 15?

I already saw this the other day in the Hitomi thread on /gif/. There's a webm posted there where the snatch is somewhat uncensored using this type of technique. It's a little blurry, but it works.

Hitomi Tanaka kept herself censored even when she moved to US porn production

>It outputs diabolical versions from bizarro world

What does jav even stand for
Japanese Anal/Vaginal?

Those predicted faces don't look anything like normal AI-generated ones.
My guess is they used those very same faces from FFHQ to train the algorithm, but since it's a very shitty one, it can't even reconstruct them properly.

Japanese adult video

nihonese adult vidya

Those attacks are heavily targeted and whitebox. They don't transfer all that well between architectures.

You're probably talking about arxiv.org/pdf/1906.03612.pdf
The only "strong" example of transferred attacks is the "Universal" attack, and even then it often doesn't get more than a 50% fooling rate.
The Universal attack is also notable for causing perturbations large enough to be visible to the naked eye.

And this is all before any "hardening" was done. Those are all naive networks, trained without the assumption that someone would try to fool them with attacks crafted against other models.
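
For context on why transfer is hard: a white-box attack needs the victim model's own gradients. A minimal FGSM sketch (Goodfellow et al. 2014), assuming PyTorch; model and epsilon are placeholders:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    # white-box: we differentiate through THIS model to craft the perturbation
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # step in the direction that increases this model's loss
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

Against a different architecture the gradient direction no longer lines up, which is exactly the transfer problem being argued about here.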

iforcedabot.com/photo-realistic-emojis-and-emotes-with-progressive-face-super-resolution/

Sauce: arxiv.org/abs/1908.08239

THIS
No idea what their angle is, but this is clearly a scam

Attached: ai1.jpg (900x1200, 289K)

those predicted faces only look the same if you are literally autistic

They don't look the same at all

It's almost like... if you resize an image to 16x16... you lose information. Woah.
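
You can see how underdetermined this is yourself. A minimal sketch, assuming Pillow is installed; the file name is just an example:

from PIL import Image

img = Image.open("face.png").convert("RGB")   # e.g. a 1024x1024 FFHQ face
tiny = img.resize((16, 16), Image.LANCZOS)    # the network's only input
back = tiny.resize(img.size, Image.BICUBIC)   # naive reconstruction for comparison

# 16*16*3 = 768 values of input vs ~3M in the original: countless different
# faces collapse to the same 16x16 image, so any "reconstruction" is the model
# hallucinating a plausible face, not recovering yours.
print(f"input holds {16*16*3} values; original holds {img.size[0]*img.size[1]*3}")
tiny.save("tiny.png")
back.save("naive_upscale.png")
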

Not the guy you're arguing with, but are capsule networks useful in practice?
I recall reading a paper on them and found the idea interesting, but wouldn't the inner loop required to train each capsule make training take so long it would be impractical?

Attached: 1557384757804.png (1920x1080, 1.35M)

The "other guy" (arguing against attacks being feasible) here.
Capsule networks are used in practice. Bixby uses, or at least used, one.
I heard that almost every dev who worked with Bixby hated it, but that might be related to the weird capsule routing system rather than the capsule network itself.

I was under the impression that capsule networks are even more inefficient than current neural network architectures. Seems like every "breakthrough" requires even more insane computing power than before.

CapsNet is more or less state of the art, but usually not used alone. It can be stuck on one side of a GAN, with an RNN on the other side. The whole point is to dispense with manually guessing your feature detectors/autoencoder.
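
To the routing question above: the inner loop is Hinton's routing-by-agreement, and it runs on every forward pass, not just during training, which is where the cost comes from. A minimal NumPy sketch; shapes and iteration count are illustrative, taken loosely from the 2017 paper:

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # shrink vector length into (0, 1) while keeping its direction
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route(u_hat, iters=3):
    # u_hat: (num_in, num_out, dim_out) -- lower capsules' predictions
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))               # routing logits
    for _ in range(iters):                        # the inner loop in question
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over output capsules
        s = np.einsum("ij,ijd->jd", c, u_hat)     # weighted sum of predictions
        v = squash(s)                             # output capsule vectors
        b += np.einsum("ijd,jd->ij", u_hat, v)    # agreement raises the logits
    return v

v = route(np.random.randn(1152, 10, 16))          # PrimaryCaps -> DigitCaps sizes from the paper
print(v.shape)                                    # (10, 16)

Three extra matrix passes per layer per forward pass is why people call it inefficient compared to a plain CNN.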