Adversarial Imaging and Machine Learning Vulnerabilities

>blog.openai.com/adversarial-example-research/
This breaks the botnet's AI. This is the easy way to BTFO them. What do you anons think about these adversarial examples?
youtube.com/watch?v=r2jm0nRJZdI

Attached: pandatogibbonml.png (470x178, 111K)

Other urls found in this thread:

infinityplus.co.uk/stories/blit.htm
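
for the newfags: the panda/gibbon pic is the fast gradient sign method from the paper that OpenAI post covers. rough PyTorch sketch of the idea, not the exact code from the paper (the model and eps are placeholders):

# fast gradient sign method (FGSM): nudge every pixel by eps in the direction
# that increases the classifier's loss. model and inputs are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, eps=0.007):
    # image: 1x3xHxW tensor scaled to [0, 1]; label: true class index
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # step each pixel by eps in the sign of the gradient, clamp back to valid range
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()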

I see you're still stuck in 2016

are you going to make a thread about covering parts of an image next?
get with the program, grandpah

so what's the fix?

nothing. ML is btfo forever now. We will be saved from the botnet

Panda sexually identifying as "gibbon".

kek

Attached: 1496259921960.jpg (236x318, 15K)

Could this be used to, for example, create porn images that pass "worksafe" filters and appear in innocent searches with safe search on?

Only if the filters run on ML. Since you still have to preserve enough of the original quality that the image stays recognizable, other algorithms like pHash can still recognize what the image actually is. This technique specifically targets ML classifiers. There's actually a whole project dedicated to ML exploits called cleverhans.

However, maybe you could use this to mess with, say, whatever AI Google is training with their stupid captcha system
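
the pHash point is easy to check yourself: a perturbation small enough to keep the picture recognizable barely moves a perceptual hash. quick sketch with the imagehash library (filenames are made up):

# perceptual hashes compare coarse image structure, so an adversarial
# perturbation that keeps the picture recognizable barely changes them.
from PIL import Image
import imagehash

clean = imagehash.phash(Image.open("panda.png"))
adv = imagehash.phash(Image.open("panda_adv.png"))

# subtracting two hashes gives the Hamming distance between the 64-bit hashes;
# a handful of differing bits still counts as "the same image"
print(clean - adv)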

The captcha is 100% about training a self-driving vehicle AI, so messing with it probably qualifies as a terrorist act in the year 2080 or something

Lol. Imagine getting vanned for messing with an AI's training. The future sounds dim.

there isn't one. Not a real, permanent one, anyway.
It's a statistical model, not a formula. It fits a boundary to the data it's given, and it has to stop somewhere so it doesn't overfit, which leaves blind spots an attacker can aim for.

Even humans see faces in wood grain, and sometimes you have to stare at something and wait for it to resolve before you know what it is--at which point you can't "unsee" it.

Attached: dogs-in-wood-grain.jpg (672x372, 47K)
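
closest thing to a fix is adversarial training: generate perturbed copies of your training data and train on those too. it only hardens the model against the attacks you actually generate, which is why it isn't permanent. rough PyTorch sketch, with model/loader/optimizer left as placeholders:

# adversarial training sketch: mix FGSM-perturbed copies of each batch into
# the loss. model, loader and optimizer are placeholders, not a real setup.
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, eps=0.03):
    model.train()
    for x, y in loader:
        # craft the adversarial batch with one FGSM step
        x_adv = x.detach().clone().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

        # train on clean and perturbed examples together
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()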

The real fun is going to start when self-modifying neural network systems that interact with customers start getting exploited. Tay was a good demonstration.

>sometimes you have to stare at something and wait for it to resolve before you know what it is--and at which point you can't "unsee" it
infinityplus.co.uk/stories/blit.htm

>tfw you train a botnet of AI cars

Nazi AI cars.

>deface public signage
>get in trouble
yeah man. who would've fucking thought.

how can i adversarial learning my way into being an attack helicopter? asking for a friend.

With hookers and blackjack.