IPFS

Can someone tell me what's the catch and what's it really good for?

Attached: ipfs.png (1024x1024, 70K)

Other urls found in this thread:

filecoin.io/
github.com/ipfs/roadmap/blob/master/README.md
gateway.ipfs.io/ipfs/
hackernoon.com/ten-terrible-attempts-to-make-the-inter-planetary-file-system-human-friendly-e4e95df0c6fa
github.com/ipfs/notes/issues/296#issuecomment-406884233
github.com/ipfs/notes/issues/212
github.com/ipfs/go-ipfs/issues/6342
github.com/ipfs/go-ipfs/issues/3131

...also, what about sharing "secret" static websites? What about server-side languages?

I can't wrap my head around this.

Doesn't BitTorrent work the same way?

It's BitTorrent + git + more. The main differences include global, network-wide deduplication (as opposed to per-torrent deduplication) and versioning.
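To make the deduplication point concrete, here's a rough sketch using the go-ipfs-api client library (my choice of client, not something from this thread). It assumes a local IPFS daemon with its API on the default localhost:5001, and the strings are just illustrative:

```go
// Rough sketch: content addressing means identical bytes always get the same
// hash, no matter the filename or who adds them. Assumes a local IPFS daemon
// with its API on the default port and the go-ipfs-api client library.
package main

import (
	"fmt"
	"log"
	"strings"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	sh := shell.NewShell("localhost:5001")

	// "Adding" chunks the data, hashes the chunks, and returns the root hash.
	// Nothing is uploaded anywhere; your node just starts announcing it.
	a, err := sh.Add(strings.NewReader("the same bytes"))
	if err != nil {
		log.Fatal(err)
	}
	b, err := sh.Add(strings.NewReader("the same bytes"))
	if err != nil {
		log.Fatal(err)
	}

	// a == b: one piece of content, one address. Any node holding these
	// bytes can serve them under that address.
	fmt.Println(a, b, a == b)
}
```

Same bytes in, same hash out, no matter which node does the adding; that's the "global" part, versus BitTorrent where identical files in two torrents form two unrelated swarms.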

So is it good for anything?

>Can someone tell me what's the catch
it should go without saying, but if nobody mirrors your shit then there will be no mirrors of your shit
>what's it really good for?
good content and popular articles will be mirrored a lot; it will serve as a good intra-planetary file system where latencies might be hours or days

>what's the catch
The infrastructure for actually storing all those files hasn't yet been finished. filecoin.io/

>actually storing all those files
Permanently, I mean--not temporarily.

It's still pretty niche and not that normie-friendly. I think there are some databases and archives on it.

That's the question. In practice, it doesn't see much use.

There might be less data on it than, say, on Zeronet.

The IPFS developers are ambitious as shit. They want to integrate IPFS into package managers this year (they have builds of pacman, Portage, npm, apt, etc. that do this).

Read their road map.
github.com/ipfs/roadmap/blob/master/README.md

Here is what the internet will look like in the future: you will resolve ENS or Handshake domains to ipfs:// addresses. The first applications will be Ethereum dapps (Augur, Dharma, Maker, UniSwap).

It's a weird feeling to see the future...

Attached: future.jpg (251x201, 6K)

>The IPFS developers are ambitious as shit
Might be more helpful if they were good developers with time for the project.

So far it hasn't exactly been amazing.

they formed a literal company around IPFS so I think they have time for it. Can you share your experience?

so I "added" some files and it loads endlessly if I try to look up the link?

Attached: cover.jpg (360x360, 34K)

> they formed a literal company around IPFS so I think they have time for it.
Good luck to them. I'm not currently a believer.

> Can you share your experience?
It wasn't even capable of mirroring Wikipedia (or other websites) with current updates, nor could it handle Gentoo's distfiles last time I checked.

Also, I recall nyaa.pantsu not managing to do anything useful with IPFS despite people trying. And that's just a torrent index site.

Meanwhile the devs are usually chasing the blockchain meme of the day rather than fixing the core of IPFS until it can at least mirror Flickr or YouTube in a sensible fashion. And then, of course, nothing even "medium-sized", never mind large, ever works.

I've been trying to figure out how to develop small applications using it but have come up pretty empty

gateway.ipfs.io/ipfs/ + folder hash
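"Gateway + folder hash" just means an HTTP request to gateway.ipfs.io/ipfs/<hash>. A rough sketch in Go with a placeholder hash (substitute whatever `ipfs add` printed); the gateway can only return the content if some reachable node that has it is online and announcing it, which is usually why it loads forever:

```go
// Rough sketch: fetching content through the public HTTP gateway.
// The hash below is a placeholder for the root hash `ipfs add` printed.
// The request only succeeds if at least one reachable node is online and
// announcing the content (e.g. the node that added it); otherwise the
// gateway keeps searching, which looks like endless loading.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	cid := "Qm..." // placeholder: the hash returned by `ipfs add`
	url := "https://gateway.ipfs.io/ipfs/" + cid

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s\n", resp.Status, body)
}
```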

doesn't do anything anons :-:

hackernoon.com/ten-terrible-attempts-to-make-the-inter-planetary-file-system-human-friendly-e4e95df0c6fa

(cont'd)
I actually find it pretty crazy that you can set out with the ambition to host the whole internet, then take BT+Kad and so on and introduce bottlenecks everywhere that make even just Portage's distfiles hard to host. E.g.:
github.com/ipfs/notes/issues/296#issuecomment-406884233

Causes and issues include:
github.com/ipfs/notes/issues/212
github.com/ipfs/go-ipfs/issues/6342

When you see that, you too should probably doubt that they really "designed" or tested anything for the scale they say they're working at.

>what's it really good for?
Distributed content hosting
>what's the catch
No one uses it yet

>400GB
yeah that's pretty small compared to the TLMC and the Batoto backup database

>No one uses it yet
So far it was more like:
Everyone I know who wanted to use it basically hit serious issues that stopped them from using it, and as far as I can see that matches the situation in various projects' source repos and on IPFS's bug tracker.

I think it's more that the distribution model is so different from HTTP that people don't know how to use it effectively. They try to replicate HTTP in IPFS, sort of like a Java programmer trying to write Java-style OOP code in Python/JS/C/Haskell/Go/etc.

There are a ton of assumptions in the HTTP model that IPFS gets rid of, for example that the content at a given location can be changed.
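A quick sketch of that point, under the same assumptions as the earlier example (local daemon, go-ipfs-api client, illustrative strings): there is no "updating the content at an address"; a new version is just a new address, and a stable name has to come from a separate layer like IPNS or DNSLink.

```go
// Rough sketch: change one byte and you get a completely different address.
// Assumes a local daemon and the go-ipfs-api client, as in the earlier example.
package main

import (
	"fmt"
	"log"
	"strings"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	sh := shell.NewShell("localhost:5001")

	v1, err := sh.Add(strings.NewReader("index.html version 1"))
	if err != nil {
		log.Fatal(err)
	}
	v2, err := sh.Add(strings.NewReader("index.html version 2"))
	if err != nil {
		log.Fatal(err)
	}

	// Two different documents, two different addresses. The old version
	// stays addressable for as long as anyone keeps it around.
	fmt.Println(v1)
	fmt.Println(v2)
}
```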

I don't even know how large these are.

But it's not a challenge for plain BT and the vast majority of actual BT clients, so how did the project with the ambition to put the whole internet on its P2P "CDN" fail at this?

It's not like it was good at small files either (and presumably still isn't):
github.com/ipfs/go-ipfs/issues/3131

Isn't ZeroNet built on top of IPFS?

>I think it's more that the distribution model is so different from HTTP that people don't know how to use it effectively.
I'm rather convinced IPFS isn't doing much to actually make it effective if even the main client can't do it.

Besides, most projects I recall hit walls on multiple ends: core IPFS, IPNS, everything else. I figure you could check the Jow Forums archives and read how many issues even just nyaa.pantsu collided with.

No. It's just also using BitTorrent.

While it's nowhere near as ambitious as IPFS and likely won't ever solve all our CDN issues, at least it actually works for publishing websites.

It's an awesome idea which I hope will see more use.

The basic premise is that it switches the current model of "client connects to a specific server to get content known by that server" to "client requests specific content from anyone who has it".
Sounds like a small change, but this has huge benefits, the most obvious being that anybody with the content can be a host for it.
Another one, imo, is deduplication: since addresses address the content itself, there can only be one address for a particular piece of content, so even if a file is named differently or is part of many different sets, it still has the same address and can be served by anybody who has it.

it's kind of like if bittorrent was just one torrent, with one swarm, with everything in it
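The flipside of that one-big-swarm picture is that becoming a mirror is just "fetch it and pin it". A rough sketch under the same assumptions as above (local daemon, go-ipfs-api client, placeholder hash):

```go
// Rough sketch of "anybody with the content can be a host for it": fetch a
// piece of content by hash, then pin it so your node keeps the blocks and
// keeps serving them to the swarm. The hash is a placeholder.
package main

import (
	"fmt"
	"io"
	"log"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	sh := shell.NewShell("localhost:5001")
	cid := "Qm..." // placeholder: any hash you want to help mirror

	// Fetching the content pulls its blocks from whoever has them.
	r, err := sh.Cat(cid)
	if err != nil {
		log.Fatal(err)
	}
	data, err := io.ReadAll(r)
	r.Close()
	if err != nil {
		log.Fatal(err)
	}

	// Pinning tells your node not to garbage-collect those blocks, so you
	// stay a mirror for as long as the pin (and your node) is around.
	if err := sh.Pin(cid); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pinned %s (%d bytes), now serving it to the swarm\n", cid, len(data))
}
```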

Will IPFS be more private than current protocols?

Reminder that IPFS is just a ploy by the feds to get you to unknowingly host pizzas on your PC so they can come kick down your door at any time they please.

You're probably thinking of Freenet; IPFS doesn't put stuff you don't know about on your computer.

> 2019
> Still thinking that package managers are a good way to distribute software

So, they're as retarded as the average Loonix distro devs.