Is it possible to download the entire internet so you have to have an internet provider?

Attached: 이달의 소녀 1_3 (LOONA 1_3) 'New Zealand Story' #1-PGcyzBtyEkU-[01.25.460-01.28.922]. (1920x1080, 2.78M)

so you don't have an internet provider*

What about updates?

yes

httrack.com/

anyone use this?
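
For anyone who hasn't tried it: HTTrack is a website copier, it crawls from a start page and saves what it finds to disk so you can browse it offline. Roughly like this if you drive it from Python (just a sketch, assuming httrack is installed and on your PATH; example.com, the +filter, and ./mirror are placeholders, check the docs for the real options):

import subprocess

# Mirror one site into ./mirror, keeping the crawl on that domain.
# -O sets the output directory; the "+" filter restricts which URLs get followed.
subprocess.run([
    "httrack", "https://www.example.com/",
    "-O", "./mirror",
    "+*.example.com/*",
    "-v",  # verbose progress
], check=True)

Same thing the GUI does, just scripted.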

>downloading every bit of trash on the internet to a huge fucking server room full of hard drives that would use up so much electricity that you're better off paying your internet bills.
For what purpose?

spbp

OP is retarded, delete this thread you moron. Before it's too late.

>so you don't have an internet provider*

Attached: 1524775247439.png (191x173, 82K)

You need internet to download the internet

>I'M GONNA HAVE MY OWN INTERNET! WITH BLACKJACK, AND HOOKERS!

Attached: pimp-bender.jpg (1024x768, 45K)

In theory yes, in practice no

Obviously sites like Jow Forums, Netflix, and eBay would be pointless and wouldn't work

Some people do this with large chunks of Wikipedia and other similar sites if they're going to be somewhere with no Internet for a long time and just want things to read.

Epic lol, take my upvote

Thank you, le kind sir!

Attached: (.jpg (480x640, 42K)

>Some people do this with large chunks of Wikipedia
Not just chunks, but the entire site. It's not even that big, either. Last time I downloaded Wikipedia's text-only database it was about 20 GB, but that was a couple of years ago.

Me on the left

You can't download everything; it's simply not possible. You would have to hack every cloud company and download all the user files and data of everyone in the world to be able to say you have the whole internet stored somewhere.

Can you do that with a traditional crawler?
Is every page on Wikipedia linked to from some other page at some point?
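
In principle a plain breadth-first crawler gets you most of it, since nearly every article links to other articles, though orphan pages that nothing links to would be missed. Very rough sketch of the idea using only the Python standard library (the seed URL, the /wiki/ filter, and the toy User-Agent are my own placeholders; a real mirror would also need rate limiting, robots.txt handling, and retries):

import urllib.request
from urllib.parse import urljoin
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    # Collect href values from <a> tags.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=50):
    # Breadth-first: fetch a page, queue the article links it points to.
    seen, queue = {seed}, deque([seed])
    while queue:
        url = queue.popleft()
        req = urllib.request.Request(url, headers={"User-Agent": "toy-mirror-sketch/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            full = urljoin(url, href)
            # Stay on article pages (assumption: links under /wiki/, skipping
            # Special:, File:, etc. by rejecting ':' in the last path segment).
            if ("/wiki/" in full and ":" not in full.rsplit("/", 1)[-1]
                    and full not in seen and len(seen) < limit):
                seen.add(full)
                queue.append(full)
    return seen

# Hypothetical seed page:
# pages = crawl("https://en.wikipedia.org/wiki/Internet", limit=50)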

No need to even use a crawler. Wikipedia themselves provide compressed archives of their database: en.m.wikipedia.org/wiki/Wikipedia:Database_download
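
For reference, the dumps live on dumps.wikimedia.org, and the full English text-only dump is usually named enwiki-latest-pages-articles.xml.bz2 (that filename is an assumption on my part, check the Database_download page for the current name and size; it's tens of GB compressed). Minimal sketch of streaming it to disk with the standard library; in practice you'd want a downloader with resume support:

import urllib.request

# Latest English Wikipedia articles dump (text only, bz2-compressed).
# Assumed filename; see the Database_download page for the current one.
URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

# Stream to disk in chunks so the multi-GB file never sits in memory.
with urllib.request.urlopen(URL) as resp, open("enwiki-latest-pages-articles.xml.bz2", "wb") as out:
    while True:
        chunk = resp.read(1 << 20)  # 1 MiB at a time
        if not chunk:
            break
        out.write(chunk)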

If we assume you can store all that data, and even access it, then you could get a snapshot of the internet from when you started downloading, sure.

Attached: chameleon.webm (640x360, 1.58M)

no

yeah

It's so weird how Asian girls look super hot and ugly at the exact same time.

And you'll click post on Jow Forums and he won't have your post stored, so he would have to crawl again.
And this will go on forever.

And he can't really download what's on the backend, he can only download the resulting pages, so a lot of it would be half-working.

What's the point of your idea, OP?

/thread
Google kinda tries to index every part of the internet, but even they can't manage all of it.