/p2p/ - Decentralization General

At the moment, the best candidate for replacing your web experience with a normie-friendly P2P alternative is ZeroNet.

>Why?

We believe in an open, free, and uncensored network and communication.
No single point of failure: The site remains online as long as at least one peer is serving it.
No hosting costs: Sites are served by visitors.
Impossible to shut down: It's nowhere because it's everywhere.
Fast and works offline: You can access the site even if the Internet is unavailable.

>Features

Real-time updated sites
Namecoin .bit domains support
Easy to set up: unpack & run
Clone websites in one click
Password-less BIP32 based authorization: Your account is protected by the same cryptography as your Bitcoin wallet
Built-in SQL server with P2P data synchronization: Allows easier site development and faster page load times
Anonymity: Full Tor network support with .onion hidden services instead of IPv4 addresses
TLS encrypted connections
Automatic uPnP port opening
Plugin for multi-user (open proxy) support
Works with any browser

>How does it work?

When you visit a new ZeroNet site, your client tries to find peers using the BitTorrent network so it can download the site files (HTML, CSS, JS...) from them.
Each visited site is also served by you.
Every site contains a content.json file, which lists all of the site's other files with their sha512 hashes, plus a signature generated with the site's private key.
If the site owner (who holds the private key for the site address) modifies the site, he/she signs the new content.json and publishes it to the peers. The peers then verify the content.json's integrity (using the signature), download the modified files, and publish the new content to other peers.
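
Roughly, a peer's check on an update boils down to this. A minimal Python sketch of the idea, not ZeroNet's actual code: the manifest field names and the verify_signature helper (standing in for the Bitcoin-style ECDSA check against the site address) are assumptions for illustration.

import hashlib
import json

def file_sha512(path):
    # Hash a downloaded file so it can be checked against the manifest.
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

def verify_site(content_json_path, verify_signature):
    # content.json lists every file with its expected sha512 hash and
    # carries a signature made with the site owner's private key.
    with open(content_json_path) as f:
        manifest = json.load(f)
    sig = manifest.pop("sign")  # the signature covers the rest of the manifest
    signed_payload = json.dumps(manifest, sort_keys=True).encode()
    if not verify_signature(signed_payload, sig, manifest["address"]):
        return False  # not signed by the key behind the site address, reject the update
    return all(
        file_sha512(name) == info["sha512"]
        for name, info in manifest["files"].items()
    )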

ZeroNet - zeronet.io/

>Related technologies:

IPFS - ipfs.io/
Freenet - freenetproject.org/
Beaker Browser - beakerbrowser.com/
GNUnet - gnunet.org/en/

>Anonymization:

i2p - geti2p.net/en/
Tor - torproject.org/

Attached: 1200px-P2P-network.svg.png (1200x1240, 122K)

what kind of content is distributed over zeronet? html webpages just like clearnet?

It's not about being "clearnet" vs "darknet" (Tor, i2p); it's about how you distribute it.

bump

Bump

cp

FUD

For me, it's libp2p

Attached: 41525062dc516766738e5bf5e5bf80f8952648c661bb06c1a5e1716338e04f42.jpg (800x450, 57K)

What makes it better than ZN?

Yes. Just webpages and files.

The only difference is that it's p2p; everyone gets a copy. So yes, you can just save that cosplay gallery as a whole and rehost it in another form if you want.

What do you mean "just"? What else is there to deliver?

It's not that good with actual real-time stream transports like Mumble or WhatsApp or such. It's also not good at low-bandwidth text and status updates, and other things the internet may do.

That said, they'll not take down the hydrus board section over some infinity Jow Forums website assassination drama on that platform.

They're not comparable. ZN is a particular use of BitTorrent; libp2p is a set of network interfaces and standards, which include BitTorrent. ZN is more comparable to IPFS, which is a particular use of libp2p and other standards.
IMO it will trump anything because it's modular. If you built ZN with it, you could use whatever transport you wanted rather than being stuck with a specific p2p protocol that you yourself are wrapping and maintaining: you just work with multihashes, say "whatever" to the question of how to get them to other peers, and let each peer's client decide what it wants to use for which component. That's built in by default and not up to you to assemble. All you do is write the application with content-addressable data and peer IDs and that's all; clients will always be using the specific settings they want, like which networks to talk to, which protocols they support, etc. Just like how you shouldn't have to fuck around with feature support, client settings, encryption settings, etc when using an HTTP library, you just care about urls and the data at them.
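
To make that concrete, here's a toy Python sketch of the shape of it (an illustration of the idea, not libp2p's actual API): the application only ever asks for content by hash, and whatever transports the client has enabled sit behind one interface.

from abc import ABC, abstractmethod
from typing import List, Optional

class Transport(ABC):
    # Anything that can move bytes between peers implements this one interface.
    @abstractmethod
    def fetch(self, content_hash: str) -> Optional[bytes]: ...

class TcpTransport(Transport):
    def fetch(self, content_hash: str) -> Optional[bytes]:
        return None  # placeholder: would ask peers reachable over TCP/IP

class BluetoothTransport(Transport):
    def fetch(self, content_hash: str) -> Optional[bytes]:
        return None  # placeholder: would ask nearby devices, no internet needed

def get_block(content_hash: str, transports: List[Transport]) -> bytes:
    # The application never names a protocol; it just tries whatever the
    # client has enabled, in the client's preferred order.
    for t in transports:
        data = t.fetch(content_hash)
        if data is not None:
            return data
    raise LookupError("no enabled transport could find " + content_hash)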

Attached: mdag.waist.png (4000x2250, 3.51M)

>Just like how you shouldn't have to fuck around with feature support, client settings, encryption settings, etc when using an HTTP library, you just care about urls and the data at them.
but distributed instead of centralized since it's a generic p2p library.

Attached: centralized-decentralized-distributed.gif (640x360, 3.17M)

>IMO it will trump anything because it's modular.
Why will? Why doesn't it already if it's this easy and useful?

So, to make it normie friendly.
ZN is a car ready to drive and libp2p is a convoluted disassembled steam engine?

reminder that if you use this shit and one asshole posts cp, you all go to jail

FUD

youre a retard who cant even read
I never said people are currently using it for that purpose, but if a single user does any illegal shit then you'll all be fucked
it takes one troll

FUD

>Why will?
I'm not sure what you're asking. I think libp2p will be the baseline people use to build applications because it's convenient for developers, unlike writing almost all of the P2P and distributed networking yourself. Things like libtorrent alone are nice, but that really only covers your transport, and whatever you're making isn't going to be compatible with anything else, so you're stuck maintaining it for life.
With libp2p, if something better comes out, you just do nothing and get more modularity for free. There are demos of this thing working over Bluetooth, so when that's done you just do nothing and now your p2p application works offline via Bluetooth because the clients support that. The interface remains the same.
Look at how many good and useful features and clients have come out over the history of P2P software that just did not take off because they're hyper-specific: if you don't use client X you don't get features Y and Z, client A can't talk to client B, etc.

Pic related is just 1 popular protocol extended. None of the advances in the network technology ever make it into other programs; they just become incompatible.

>Why doesn't it already if it's this easy and useful?
Why doesn't it what already? I'm saying in my opinion it trumps the alternatives. Nothing else feels as useful in the same way. It's like comparing a socket library to a full-on HTTP library that wraps the semantics of HTTP, not the semantics of a particular transport. It's simply more convenient for distributed development in a maintainable way than creating something that might be slightly better for a few years before you drop it and move on to a new protocol that you need to develop around yet again.

Attached: Dcpp_clients.png (1775x717, 121K)

Who gives a fuck about normies, you think my distributed anime trackers are going to be targeting normies or the general public?

If you don't make it normie-friendly it will never take off, and if it never takes off it will never have the peers necessary to make it a viable replacement for mainstream services.
Get out of the "muh sekrit clubhouse" mentality from gnutards.

Hm yea, I get the idea.

But if it's so good, why ISN'T it used to make a nice all-protocol client?

I don't know how long you've been using the internet, but this same exact thing is said every generation. P2P is the one area where the better technology wins with the masses, since it depends on the peers themselves. Bittorrent isn't normie friendly and it's the majority of traffic on the entire net. Things like Tor, i2p, and Freenet are hardly friendly but they succeed in their area.
It's moot regardless. You have to make it easier for developers BEFORE you can make it friendlier for users. It's not feasible to maintain a friendly client and the protocol and the server and the extensions all at once. You should only have to care about building the client application, not the entire network stack. The same mistake shows up in other decentralized systems, like XMPP: it's technically extensible, but it depends more on human collaboration than on spec requirements that every implementation has to adhere to.

>Get out of the "muh sekrit clubhouse" mentality from gnutards.
I feel like you're missing the point on purpose just to argue. The fact that you don't have to appeal to a specific technology, just a specific interface, is a massive advantage because you have the ability to create your own application that's automatically compatible with other people using those interfaces, but without having to use the exact same implementations. You let the clients negotiate support rather than forcing it on them. That's huge.

>But if it's so good, why ISN'T it used to make a nice all-protocol client?
I don't know what specifically you mean by a "nice all-protocol client".
You can write a client that does what you want it to do, and all the protocol handling just comes with it.
You shouldn't have to do shit other than write the application with this in mind:
>write the application with content-addressable data and peer IDs and that's all
The point being you don't have to care about that.

If libp2p adds support for a new network protocol, you just update the dep and now your application can fetch data from the new network with no other changes. If the client doesn't trust that network, they just disable it and libp2p will figure out how to satisfy the requests through other means that are supported and enabled.

>bittorrent isn't normie friendly
All you have to do is download a client and click a button; if you can install an app on your phone you can use BitTorrent. You are a retard.

>need users to install new software just to send data to them
You're not helping your case at all my dude.

>I don't know what specifically you mean by a "nice all-protocol client"
That useful, practical implementation that actually demonstrates this lib is useful and not buggy.

> If libp2p adds support for a new network protocol, you just update the dep and now your application can fetch data from the new network with no other changes.
Good idea I guess.

>downloading torrents isn't normie friendly
I hope you never reproduce.

(cont'd)
BTW libp2p says it's "modular". So is it really the case that you can just drop the whole package in, in place of a modular part, and then suddenly have support for, IDK, that new BT protocol fork running on top of a new i2p fork? I so far do not think it works like that at all.

Seems more like a pile of code that everyone "should" be able to use in the eyes of the authors, but that nobody actually uses to run anything, even across just the whole set of lib modules.

>That useful, practical implementation that actually demonstrates this lib is useful and not buggy.
You haven't said what you're practically trying to use it for, so I don't know what metric you're going by. What are the practical goals of the application you're trying to build? If you want to find out whether it bugs out in that case, you're probably going to have to try it. What alternative even is there? It's like asking how you know the standard HTTP library will work: it either conforms to the spec or it doesn't.

I'd rather you make a point. Can't you humour us /tech/ refugees by being serious for a little bit?
You have 2 video files you want to send to someone who is a know-nothing normie. Are you going to
A) send them a magnet link and a link to a BitTorrent client for their machine, whatever it may be
or
B) send them a link that just has the data they want, coming off your machine without restrictions

Check it out, this is coming from my machine and you didn't have to install anything.
ipfs.io/ipfs/QmW9unbUioai2w5Q5xFEhBAMAkwWFjZDv5ek8fXknb9mme/webmalpha.html

Firstly, this network can be taken down by ISPs. Secondly, if it gets too big, each peer has to have lots of storage. This could only work for a few people, not for taking over the entire world. How do you want to fight a 51% attack when government servers take over the network? Anons, don't get fooled by these new ideas (the tech is not new). Better to build your own wifi networking scheme so you won't get fucked by ISPs.

The intention is to take care of p2p networking for you; you would replace anything related to the routing and network layers here. If you use IPFS, Ethereum, or some other data/block format, you replace the exchange and naming layers with those.
At which point you're just building decentralized applications that use all of these other modules in the other layers when appropriate.

In your example, if someone implements BitTorrent as a libp2p transport, and then someone else implements the new BT protocol as a transport as well, you could just use either or both in your own application, since each has to conform to the transport interfaces. You'd want to just use the defaults and try everything, using whatever is ranked best after the clients negotiate: if both peers support new BT and it's ranked high for both, that's what gets used.
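
The negotiation itself is basically just set intersection plus a preference ranking. A toy Python sketch of the idea, not libp2p's real negotiation protocol:

def pick_transport(mine, theirs, ranking):
    # Both sides advertise what they support; use the highest-ranked
    # transport that shows up in both lists.
    common = set(mine) & set(theirs)
    if not common:
        return None  # no shared transport, can't talk directly
    return max(common, key=lambda t: ranking.get(t, 0))

# Both peers support the hypothetical "newbt" and rank it highest, so it wins;
# otherwise they would fall back to whatever else they share.
ranking = {"newbt": 3, "bittorrent": 2, "tcp": 1}
print(pick_transport(["tcp", "newbt"], ["newbt", "bittorrent", "tcp"], ranking))  # -> newbt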

>I so far do not think it works like that at all.
I'm not really sure what you mean.
I think libp2p and most of the applications being built on it are pre-alpha, which to me is very exciting, since that means they're already as good as or better than the current solutions, without optimizations and without mature client implementations.
So the only thing you really have to do is wait for all the implementations to be finished and tested. Even if you built something today and decided it's not good enough, you just rebuild when everything is done, and then the program would be trying the latest p2p technologies regardless of what they are.

As libp2p modules get implemented, everyone can just use them like that. You can imagine the practical implications there. Your application would go from likely depending on TCP/IP and some form of LAN/WAN to being able to have two mobile phones talk to each other over Bluetooth even if they're in the middle of nowhere, just because someone implemented a Bluetooth transport.
When they implement i2p routing your application is just going to automatically get anonymous routing support.

>Check it out, this is coming from my machine and you didn't have to install anything.
A link which can be taken down by glowies, completely defeating the purpose of a decentralized alternative.

Bittorrent can be taken down by ISP's too.

>So the only thing you really have to do is wait for all the implementations to be finished and tested.
>JUST WAIT™
I bet you believe next year is the year of linux too.

>A link which can be taken down by glowies, completely defeating the purpose of a decentralized alternative.
You obviously don't know how this works, why are you pretending like you do and/or lying like that?
If you want me to explain it just ask, Jow Forums should be better than this.

In that URL is a multihash of the content. It's being proxied by the gateway there, ipfs.io, which is hosted by the project.
If someone decides to take them down, I can simply point people to another gateway, like one hosted on my own machine in whatever way I want; the only thing that changes there is the domain.
pastebin.com/raw/ZjBnqRRx

But let's pretend that all of DNS goes down: you still have the hash, so you can just use any IPFS-aware application with the hash alone to get the content over whatever libp2p supports, which is a growing list of technologies.
Meaning it's even more annoying to censor than BitTorrent already is, since it has all the same traits but it's also made available over multiple protocols.
The IPFS project has a reference CLI program that makes it as simple as `ipfs add $FILES` and `ipfs get $HASH`.
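
And because the name is the hash, moving between gateways is literally just changing the domain. A quick Python sketch; the gateway list is an example, and 127.0.0.1:8080 assumes a local node running with its default gateway port.

import urllib.request

CID = "QmW9unbUioai2w5Q5xFEhBAMAkwWFjZDv5ek8fXknb9mme"  # the hash from the link above
GATEWAYS = ["https://ipfs.io", "http://127.0.0.1:8080"]  # public gateway, then a local node

def fetch(cid, path=""):
    # Same hash, different front door: if one gateway is blocked or down,
    # try the next; the content address itself never changes.
    for gw in GATEWAYS:
        try:
            with urllib.request.urlopen(f"{gw}/ipfs/{cid}/{path}") as resp:
                return resp.read()
        except OSError:
            continue
    raise LookupError("no reachable gateway; use `ipfs get <hash>` against a node instead")

page = fetch(CID, "webmalpha.html")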

Unless you take down every node hosting it on every protocol you're not going to be able to revoke access to it.

What are you waiting for? I already asked specifically what you're trying to build that this doesn't support. If it does everything you need now, you can just use it today; if not, implement the thing you were going to have to implement anyway, except conforming to a standard instead of conforming to nothing. I don't see the point in building something yourself in a non-standard way when this is an alternative. It makes very little sense.
Even though I think Linux is awful, you can't deny it has the same advantage by being modular and stealing parts of good standards from real Unix systems.
Are you going to write your own OS or are you going to write a Linux kernel module to do what you need while getting all the modules other people have written?

>look it's so easy to use!
>shares standard https link from a centralized website with a centralized domain
You are retarded if you think that is any different than sharing a google drive link.

While I support decentralization as a concept, what actual use case is there for this right now? My understanding is it doesn't enhance anonymity (if anything it weakens it because p2p), and sites still have an owner and thus can be taken down by going after said owner (otherwise what happens to administration). End users are still vulnerable to being targeted as well.

I suppose from a service management perspective, you can reduce costs. In terms of freedom, however, I don't see much significance (not that I am knowledgeable, so please inform me).

Did you read the post at all?
>In that URL is a multihash of the content
>If someone decides to take them down, I can simply point people to another gateway, like one hosted on my own machine in whatever way I want; the only thing that changes there is the domain.
>But let's pretend that all of DNS goes down: you still have the hash, so you can just use any IPFS-aware application with the hash alone to get the content over whatever libp2p supports, which is a growing list of technologies
How is that the same thing as a Google Drive link, which is tied to Google's domain, running Google's service, on Google hardware?
This is a content hash: a URI, not a URL. If you've ever used DC, ed2k, or BitTorrent, they all use the same concept.
The most popular example today is magnet links. Explain to me how well those are being censored today.

Centralized is inefficient and fragile. The fact that most of these programs and protocols allow you to host data and services out of your house that still scale massively is a big deal imo.
Hosting a webserver out of your house is going to get harder and more expensive as you grow. The opposite is the case for most decentralized systems.

8ch would still be online.