Anyone come up with potential workarounds for Article 13?

I've been reading up on some of the implications of the article, and it seems the only feasible implementations would be AI scanning images and media for similarity to registered works, or just banning eurobros outright.

Aside from using a VPN, has anyone come up with a workaround? I wrote a simple python script that adds random bytes and can also remove them, if that helps.

megaupload.us/2GM4/memefreedom.py

Attached: article13.png (2538x1795, 2.86M)


CUT THE FIBER

Attached: shutterstock_265235672.jpg (1000x664, 396K)

You realize if Europe passes this shit, Big Tech might just decide to implement the policies in the US as well? We could all be fucked

>megaupload.us/2GM4/memefreedom.py
no one is going to download this shady shit. use gitlab/hub.

Alright

I'll do you one better

def scramble(data):
    # interleave a padding byte (0x61, 'a') after every real byte
    res = bytearray()
    for d in data:
        res += bytes([d, 97])
    return bytes(res)

def unscramble(data):
    # keep only the even-indexed bytes, dropping the padding
    res = bytearray()
    for c, d in enumerate(data):
        if not c % 2:
            res.append(d)
    return bytes(res)

pastebin.com/XJD3nrJ7
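For anyone wondering whether the round trip is actually lossless, here's a quick self-contained check (functions reproduced so it runs standalone; the sample bytes are just a stand-in for real image data):

```python
def scramble(data):
    # interleave a padding byte (0x61, 'a') after every real byte
    res = bytearray()
    for d in data:
        res += bytes([d, 97])
    return bytes(res)

def unscramble(data):
    # keep only the even-indexed bytes, dropping the padding
    return bytes(data[::2])

original = b"\x89PNG\r\n\x1a\n some image bytes"
mangled = scramble(original)
assert len(mangled) == 2 * len(original)   # output doubles in size
assert unscramble(mangled) == original     # round trip is lossless
print("round trip ok")
```

Note the size cost: scrambled uploads are exactly twice as big as the original.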

better.

and what are you trying to achieve with this script?

No. Because I will cut the fiber. Do you not understand how simple it is? The communication systems are vulnerable. If these faggots want to play tyrants then we will show them exactly how impotent they are. CUT THE FIBER.

Just a simple proof of concept, hopefully more competent techbros implement this type of code in PHP so you can upload scrambled images and have the server translate them automatically. Idk, I'm lazy.

>scrambled images
?

do you know what media fingerprinting is?

blog.openai.com/adversarial-example-research/

Allow zero posts, play it safe mate.

Can you still view the pics after you added the random bytes?

You have to run the unscrambler over the data. The script is basically a minimalist encryption system.

Censor and block ALL European content everywhere so as not to accidentally violate the law. Bleed them until they beg and grovel, like Google did to kill that Spanish hyperlink tax.

ya it'd work: you could scramble the image and send the data as binary along with a decryption string, but idk man, that's not feasible at scale. you could make a site that does it for you, but the hosting would cost a bit if the traffic picks up, and idk how to convince the average internet user to use it

Technology will ultimately solve this problem. What we'll see start popping up are things like browser add-ons that detect when images are being included in an HTTP POST and then just modify a few random pixels imperceptibly.
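A minimal sketch of that idea, flipping the low bit of a few randomly chosen channel bytes in a decoded pixel buffer (the buffer here is a stand-in; a real add-on would operate on the actual image before upload):

```python
import random

def perturb(pixels, n=8, seed=None):
    """Flip the least significant bit of n randomly chosen channel
    bytes. Each value changes by at most 1, so the edit is invisible,
    but any exact hash/fingerprint of the data changes."""
    rng = random.Random(seed)
    buf = bytearray(pixels)
    for i in rng.sample(range(len(buf)), min(n, len(buf))):
        buf[i] ^= 1  # LSB flip
    return bytes(buf)

img = bytes(range(256)) * 12   # stand-in for raw RGB channel data
out = perturb(img, n=8, seed=1)
changed = sum(a != b for a, b in zip(img, out))
assert out != img and 1 <= changed <= 8
```

Whether this survives server-side re-encoding or perceptual hashing is another question; it only defeats exact-match fingerprints.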

Odds are the best we'll get in terms of "AI" scanning is checksumming uploads and blocking known copyrighted checksums; anything beyond that means chasing every derivative work. The sheer degrees of freedom here are huge: you can change a lot of pixels without any perceptible change to the image.
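Easy to demonstrate: flip a single bit anywhere in the file and the checksum is completely different, so an exact-hash blocklist is trivially defeated (stand-in bytes here, not a real image):

```python
import hashlib

data = bytes(3000)            # stand-in for the bytes of an image file
tweak = bytearray(data)
tweak[1500] ^= 1              # flip one bit in one "pixel"

h1 = hashlib.sha256(data).hexdigest()
h2 = hashlib.sha256(bytes(tweak)).hexdigest()
assert h1 != h2               # avalanche: the digests share nothing useful
print(h1[:16], "vs", h2[:16])
```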

Off the back of that, there will likely be ways to frustrate image scanning; I'd be willing to bet that systems scanning these images would be super easy to fuck with.

The larger the protected database of images gets, the longer it will take to compare uploaded images, so I could see large groups of coordinated people (via apps or browser add-ons) deliberately spamming these databases with images full of noise.
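The flooding idea in miniature: every random-noise blob registers as a distinct "work", so the database only ever grows (`os.urandom` standing in for generated noise images):

```python
import hashlib
import os

db = set()  # stand-in for the platform's registry of protected hashes
for _ in range(1000):
    noise = os.urandom(640 * 480 * 3 // 100)  # small stand-in noise "image"
    db.add(hashlib.sha256(noise).hexdigest())

# collisions among random blobs are astronomically unlikely,
# so each upload bloats the lookup set by one entry
assert len(db) == 1000
```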

Pattern recognition software could be spammed with images generated specifically to match many copyrighted patterns at once, driving up the cost of any real AI solution that has to scan every upload for those patterns.

There are so many ways people will fuck with this in the name of freedom that the fallout will be glorious. If we can push Facebook/Google server time up by just a few milliseconds per image, it could cost them millions in increased server costs.

You have to stop thinking like that. There won't be any filter. They know the political influence memes have on social networks, so they will force platforms to delete it all or get fined. Just like with the hate speech shit.

>Anyone come up with potential workarounds for Article 13?
Yes, but it would be an indirect ripple effect.

The US Constitution explicitly says, in Article 1, Section 8, Clause 8: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;"

Now, Congress is not allowed anything in excess of the Constitution; notice that the power granted here only extends to securing exclusive rights to the author/inventor? Keep that in mind.

Current copyright "law" law.cornell.edu/uscode/text/17/201 says that these can be transferred; that's not a part of A1S8C8. Moreover a lot of the creative employment contracts contain "anything you ever create is ours"-style clauses.

A hardline reading of A1S8C8 that invalidated ALL patents/copyrights held by corporations would be a severe blow, killing them unless they could buy the courts or ram an amendment through with their bought politicians.

Since so many companies are based here in the US, they would lose all their IP. That in turn would free it into the public domain, which would flood over into things usable in Europe.

I think we're way past just check-summing the images. Google's machine learning capabilities are probably way beyond what we can imagine at this point.

You don't have to cut the fiber, numbnuts. That's too visible a fix. What you do is get into the infrastructure, walk about a hundred feet (30-35 meters) past the bypass junction, then take the cable and bend it until it cracks. It'll take them hours, maybe days, to find the source.

Yeah, they do deep learning stuff, but that's massively expensive to run. First of all, you can't really do it on high-resolution images, so you have to downsample them before all the other processing, and that gets super expensive, especially when it has to be fast enough that their services don't appear slow.

I already mentioned a few things, and someone else posted similar ideas here. Machine-learning-based AI applications are very easy to break for people who know how they work; not just break, but really mess with. You can fuck with the output, frustrate attempts to detect certain things, or, what I'd target personally, increase the cost to process: giants like Facebook and Google process so many images that any small increase in cost per image is a huge overall cost.

Maybe I am just a barbarian, but my logic told me to dig hole, cut fiber, fill hole, leave no trace and rinse and repeat at a faster rate than they can maintain. The point is that either way you look at it, the censor fag NWO loses.

I like your idea. A botnet uploading pics with these hacks would make the costs astronomical