Question about TOR

I know that Tor's .onion domain names consist of 16-character strings containing the letters a-z and the numbers 2-7. Would it then be possible to use a random string generator, set to those parameters, to generate random .onion addresses in an attempt to find unlinked or unlisted sites? Just wondering. And I know that even if this did work, it would be rather tedious, perhaps taking thousands of tries to find active pages.
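For what it's worth, here's a minimal sketch of the kind of generator I mean (Python; aside from the .onion suffix it's just 16 random base32 characters):

```python
import random
import string

# v2 onion addresses: 16 base32 characters (letters a-z, digits 2-7).
ALPHABET = string.ascii_lowercase + "234567"

def random_v2_onion() -> str:
    """One random candidate v2 .onion address."""
    return "".join(random.choice(ALPHABET) for _ in range(16)) + ".onion"

print(random_v2_onion())  # almost certainly points at nothing
```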

Attached: TOR.png (225x225, 7K)

Why would you do it randomly instead of in order?
Also the new v3 addresses try to prevent this.

>Also the new v3 addresses try to prevent this.
What format do those take?

I know the new ones are going up to like 56 characters now. And what do you mean by "in order"? Sorry, I don't know much about how they are generated.

56 characters long instead.
Anyone using Tor should really be keeping up with it.

Attached: file.png (712x519, 57K)

I see. I wasn't sure whether or not this had been rolled out yet. I haven't seen any that big.

I haven't used Tor in ages. Couldn't find any content that appeals to me.

>And what do you mean by "in order"?
Say you have a list of websites you need to visit.
They are 1.com, 2.com, 3.com, etc.
Why the fuck would you generate random addresses (numbers, in this case) instead of tracking where you are in the list? With random guesses you'd have to keep track of every single address you had already visited.
You can generate strings in order the same way you can numbers.
And this way you can even split up the sections you are testing and run multiple processes, possibly on multiple machines, like in the sketch below. Dramatic speed increase.
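Hypothetical sketch of what I mean (Python): map an integer counter to the i-th address, then hand each worker its own slice of the counter space. No bookkeeping of visited addresses needed, and resuming is just remembering one number.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"  # base32, as in v2 onions

def index_to_onion(i: int) -> str:
    """Map an integer in [0, 32**16) to the i-th 16-char address in order."""
    chars = []
    for _ in range(16):
        i, r = divmod(i, 32)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars)) + ".onion"

total = 32 ** 16
workers, me = 4, 0                # this machine is worker `me` of `workers`
lo = me * total // workers        # my slice of the keyspace: [lo, hi)
hi = (me + 1) * total // workers
for i in range(lo, lo + 3):       # first few candidates in my slice
    print(index_to_onion(i))
```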

Ok. That makes sense. Thanks.

Wow, that is quite an address space. Apparently 256-bit. IPv6 is only 128-bit, and that is still considered too big to scan.
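Rough numbers for scale (Python as a calculator; 16 base32 characters is 80 bits, and a v3 address embeds a 256-bit key):

```python
v2_space = 32 ** 16      # 16 base32 chars = 2**80 addresses
ipv6_space = 2 ** 128
v3_space = 2 ** 256      # v3 embeds a 256-bit ed25519 public key

print(f"v2:   {v2_space:.2e}")    # ~1.21e+24
print(f"IPv6: {ipv6_space:.2e}")  # ~3.40e+38
print(f"v3:   {v3_space:.2e}")    # ~1.16e+77
```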

Something of additional note: it looks like if the Tor Project caught you attempting this, they would shut you down: blog.torproject.org/ethical-tor-research-guidelines

Pedos and druggies ruined it for everyone else. Everyone who wasn't a degenerate stopped using Tor after the first big crackdown that shut down half the Tor sites, and now there's literally nothing of interest to find.

What did it used to have before then?

It's more than just a longer address. Sure, that's useful to prevent potential hash collisions. However, the more significant aspect is the transition from 1024-bit RSA key pairs to elliptic-curve keys (ed25519), which is a major boon for security.
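For the curious: as far as I can tell from rend-spec-v3, the 56-character address is just base32 over the raw public key plus a checksum and a version byte. A rough Python sketch, using random bytes as a stand-in for a real ed25519 key (double-check against the spec before relying on this):

```python
import base64
import hashlib
import os

pubkey = os.urandom(32)    # stand-in for a real 32-byte ed25519 public key
version = b"\x03"          # v3
checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]

# 32 + 2 + 1 = 35 bytes -> exactly 56 base32 characters, no padding
address = base64.b32encode(pubkey + checksum + version).decode().lower()
print(address + ".onion")
```

Since the address is literally the key, there is no hash left to collide with, which is the point being made above.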

>It's more than just a longer address.
That's the only part relevant to OP's question.

Any interesting .onion websites? I have yet to find anything that interesting besides markets and cp....

Information mostly, valuable information. There are still plenty of legit sites that aren't criminal; however, it's not as collaborative anymore unless you're an identity thief.

Almost all of the CP is gone, and that's a good thing. Tor needs legitimate traffic, not ass-petter bullshit. Any CP remaining is most likely a honeypot, given that most of the exit nodes are owned by the authorities.

>Question about TOR
>Would it then be possible to use a random string generator, set to those parameters, to generate random .onion addresses in an attempt to find unlinked or unlisted sites?
No.
>Just wondering. And I know that even if this did work, it would be rather tedious, perhaps taking thousands of tries to find active pages.
Billions of times, at the very least (numbers below). It's way easier to just set up a node and get the HSDir flag.
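Back-of-envelope on why random guessing is hopeless. The number of live services here is an assumption (call it 100,000; nobody knows the real figure), but the order of magnitude barely moves either way:

```python
space = 32 ** 16          # 2**80 possible v2 addresses
live_services = 100_000   # assumed, not a measured figure

print(f"{space / live_services:.2e}")  # ~1.2e+19 guesses per hit
```

That's on the order of ten quintillion tries per live service, which is why harvesting addresses as an HSDir was the practical attack, and why v3 was designed to stop it.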

So where did the podos move? Freenet? i2p? The clearnet?

All 3 you listed.
Well, if you know where to look.

I wonder if there are any camwhore onion sites so I can know I am only fapping alongside sophisticated internet users like myself instead of the common folk.

Would it even be fast enough for that to work?

>the clearnet?
I do see a lot of questionable search suggestions when looking for porn on bing images.

This question is loaded. I'm an IT professional who came from the MSP sector. I have been studying and practicing forensics in hopes of becoming a specialist. I can "presume" that there are multiple avenues which pedophiles utilize in order to congregate and distribute their smut. I will not confirm or deny the services you've listed, for many reasons. Just know that there is a force of up-and-coming forensic specialists who desire nothing more than to assist in the successful prosecution of identity thieves, child pornographers, murderers, rapists, white-collar offenders, etc. In the end, it's a cat and mouse game. There is a lot more than encrypted packets that "podos" should be worried about.

Attached: mybesemeow.png (240x160, 5K)

I'll take "What does Winnie the Pooh eat?" for 1000, Alex...

It's estimated that at least 10% of males have clear podo tendencies, toward anyone ranging from teens down to even toddlers. Actually, if we count only teens, I'd say it's at least a good 50%.

The market is just too big to crack down on. You can prosecute the computer-illiterate idiots who share the stuff in facebook chats and groups, follow youtube videos of little girls doing gymnastics, or lurk on periscope (the absurd thing is that kids themselves have created petabytes of material in the last few years, thanks to the smartphones given to them by their parents at a young age), and these idiots are many, but how can you crack down on Tor? The "exit nodes in the hands of the government" thing is bullshit, and it would hardly make a difference.
You can hijack the servers hosting the onion sites distributing the material, but even then you can't get the users unless they fuck themselves in the ass by using javascript, and anyway, they are just too many.

What's more, merely viewing the material isn't an offence, because that would be a thought crime. Only possession, distribution, and creation are.

Even a lot of security experts at the Pentagon were jacking off to the same cp they were investigating.

>But even then, you can't get the users unless they fuck themselves in the ass by using javascript?

The FBI never officially released their methods of de-anonymizing those "two high profile" Tor users a couple of years ago. As a result, the case against the defendants was thrown out. That may seem like a slam dunk for the defense. Bigger picture: there is a very good reason the FBI never released their methodology. The JavaScript vulnerability was utilized in Operation Torpedo, and that's not the vulnerability I'm referring to. Carnegie Mellon University assisted with the case.

tl;dr The feds know how to de-anonymize Tor users. They'd rather let the low-hanging fruit off than reveal how they, with the assistance of Carnegie Mellon University, performed the de-anonymization.

>the "exit nodes in the hands of the government" thing is bullshit, and it would hardly make a difference.
This is highly incorrect.

>tl;dr The feds know how to de-anonymize Tor users.

It's more likely they don't. Which is the more likely hypothesis: that the FBI snooped the fuckers with a simple malicious javascript payload after gaining access to the server thanks to an informant, or that they cracked the Tor network itself?

>They'd rather let the low-hanging fruit off than reveal how they, with the assistance of Carnegie Mellon University, performed the de-anonymization.

I don't see how the two things are mutually exclusive.

But anyway, let's say Tor isn't secure anymore, or never has been, since its hidden services run on ordinary servers and it relies on a browser with all the vulnerabilities of browser technologies. How do you crack down on Freenet?

As with Tor, honeypot nodes have been operated for some time now and have resulted in multiple arrests. Being decentralized poses certain challenges, but it doesn't eliminate the capacity to perform surveillance and/or monitoring.

>honeypot nodes have been operated for some time now and have resulted in multiple arrests.

On Freenet? Source?

>plenty of legit sites
Care to share?

[email protected]

I dunno. It's only really with certain searches. But then, I suppose it is searches you would expect a pedo to be interested in, for the most part.

Freenet. The nodes run heavily modified freenet software in order to perform the specialized monitoring.

Can you link me any source about this?

>en.wikipedia.org/wiki/Freenet#Vulnerabilities

>deepdotweb.com/2015/11/27/police-log-ips-making-arrest-by-planting-own-nodes-in-freenet/

>cs.tufts.edu/comp/116/archive/fall2016/cjacoby.pdf

Anyone can find out what data is in their cache by decrypting it. If one applies the correct Content Hash Key (CHK), the data will be revealed. But because it's encrypted, one can avoid knowing what's in their cache simply by neglecting to run a list of CHKs against it - hence deniability in case a forensic examiner should locate illegal files in one's Freenet cache. It is, or rather, ought to be, impossible to determine whether the owner of a particular machine requested the files in his cache, or if his node merely proxied and cached them for others. Obviously, this works only so long as cached data that the node's owner has requested, and cached data that his node has proxied, are indistinguishable. The Register has discovered that this is not the case for large files.
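To make the "run a list of CHKs against it" idea concrete, here's a hypothetical Python sketch. Real Freenet CHKs encode a hash of the encrypted block plus its decryption key, so this is only the shape of the check, not the actual format; the digest list and cache path are made up:

```python
import hashlib
from pathlib import Path

# Hypothetical list of digests of forbidden chunks, supplied by the examiner.
KNOWN_BAD = {"<hex digest of a forbidden chunk>"}

def scan_cache(cache_dir: str):
    """Fingerprint every cached chunk and yield the ones on the list."""
    for chunk in Path(cache_dir).expanduser().glob("*"):
        if chunk.is_file():
            digest = hashlib.sha256(chunk.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                yield chunk

for hit in scan_cache("~/freenet/datastore"):   # made-up path
    print("match:", hit)
```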

Don't forget about clickjacking. That has proven to be effective as well. Although the particular clickjacking vulnerability I'm referring to was patched, i2p is still vulnerable to a slew of web-based attacks. Any web-based distributed network will be.

>The Register has discovered that this is not the case for large files.

I'm not finding any info on this.


So, theoretically, which of the darknet software is the most secure? Is i2p compromised in the same way as Freenet, since they both work roughly the same way?

Let there be a file of, say, 700 MB - maybe a movie, maybe warez, and possibly illegal, that you wish to have. Your node will download portions of this "splitfile" from numerous other nodes, where they are distributed. To enable you to recover quickly from interruptions during the download, your node will cache all of the chunks it receives. Thus when you re-start the download after an interruption, you will download only those portions of the file that you haven't already received. When the download is complete, the various chunks will be decrypted and assembled, and the file will be saved in your ~/freenet-downloads directory.

If you destroy the file but leave your cache intact, you can request it again, and the file will appear almost instantly. And there's the problem.
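The mechanism being described is just cache-or-fetch per chunk. A minimal Python sketch of that behavior (the cache layout and fetch_chunk are stand-ins, not Freenet's real internals):

```python
from pathlib import Path

CACHE = Path("~/.freenet-cache").expanduser()   # made-up layout

def get_chunk(key: str, fetch_chunk) -> bytes:
    """Return one chunk, touching the network only if it isn't cached."""
    cached = CACHE / key
    if cached.exists():                  # relayed or downloaded before
        return cached.read_bytes()       # instant, even with no internet
    data = fetch_chunk(key)              # the slow network path
    CACHE.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(data)             # cache it for next time
    return data

def assemble(chunk_keys, fetch_chunk) -> bytes:
    """Rebuild the splitfile; after one full download, this is all local."""
    return b"".join(get_chunk(k, fetch_chunk) for k in chunk_keys)
```

That "all local after one full download" property is exactly the problem flagged below.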

>If you destroy the file but leave your cache intact, you can request it again, and the file will appear almost instantly. And there's the problem.

Sorry, but I still don't understand. What's the difference here? He is just getting the file again without having to rediscover the same nodes from which to download all the file chunks.

Can you answer me on this too?
>So, theoretically, which of the darknet software is the most secure? Is i2p compromised in the same way as Freenet, since they both work roughly the same way?

Freenet distributes files in a way that tends to select for frequently-requested, or "popular" data. This is partly because the other nodes that one's requests pass through will also cache parts of any files one requests. The more often a file is requested, the more often it will be cached, and the more nodes it will appear on.

I tested this, and found that a 50 MB file took six hours to download the first time I tried. After I eliminated the contents of my own local cache, I requested the file again, and it took only two hours and 20 minutes. Clearly, my "neighborhood" nodes had been caching a good deal of it while I downloaded it the first time. That behavior is by design, and it's nothing to be concerned about. The difference in download times between files never downloaded before and ones cached nearby is not revealing, because anyone else nearby might have initiated the request.

However, it is quite easy to distinguish between a large file cached in nearby nodes and one cached locally. And that is a very big deal.

As I noted earlier, a large splitfile will be cached locally to enable quick recovery from download interruptions. The problem is, the entire file will be cached. This means that, when a file is downloaded once, so long as the local cache remains intact, it can be reconstructed wholly from the local cache in minutes, even when the computer is disconnected from the internet. And this holds even when the browser cache is eliminated as a factor.

I tested this by downloading the same 50 MB file and removing it from my ~/freenet-downloads directory, while leaving the local Freenet cache intact. On my second attempt, it "downloaded" in one minute, nine seconds.

I ran the test again after disconnecting my computer from the internet, with the Freenet application still running, and it "downloaded" in one minute, fifty seconds.

So: it took six hours initially; two hours, twenty minutes with neighboring nodes caching it thanks to my request; and less than two minutes with my local cache intact, even when disconnected from the net.

The difference in download time between a splitfile cached locally (seconds) and one cached nearby (hours) is so great that we can safely dismiss the possibility that any part of it is coming from nearby nodes, even under the best possible network conditions. It's absolutely clear that the entire file is being rebuilt from the local cache. Forensically speaking, that information is golden.

Exploiting that information would be trivial. Only a bit of statistical data, of the sort that any government agency in the world could easily afford to obtain, would be needed.

Here's what you need to know: how many chunks of a splitfile will appear on a node that only relays file requests after x amount of uptime. That's it. I already know that for nodes requesting a splitfile, the answer is 100% of the chunks in the amount of uptime needed to fetch them. By running several nodes and observing them, I can easily determine how long it will take to cache, by relaying alone, an entire file of x size.

Since Freenet logs uptime, a forensic examiner can easily learn how long your node has been alive, even if there have been interruptions. So it is quite possible to estimate how many intact files, and of what size, your node ought to have cached without your participation. If the examiner finds many more files, or many larger files, than predicted, and they are illegal, you are in trouble.

Using a tool called FUQID, which queues Freenet file requests, one could easily run a list of forbidden CHKs against a disk image. If the number/size of whole files containing naughty stuff is significantly higher than predicted by your node's uptime, you are in trouble.

A forensic attack can be made more damning if the examiner has statistical information about the density of certain types of files on the network overall, which, again, running several test nodes will reveal. If the density of intact naughty files in your cache doesn't mimic within reasonable tolerances the density of such files on the network, you are in trouble.
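Pulling those checks together as a hypothetical sketch (every rate and threshold here is a made-up placeholder; in practice they'd come from running observation nodes, as described above):

```python
# Assumed calibration from observation nodes (placeholder values).
RELAY_ONLY_FILES_PER_DAY = 0.4   # intact large files a pure relay caches per day
TOLERANCE = 3.0                  # examiner's fudge factor

def suspicious(uptime_days: float, intact_bad_files: int) -> bool:
    """Does the cache hold more intact forbidden files than relaying
    alone would predict for this much logged uptime?"""
    expected = RELAY_ONLY_FILES_PER_DAY * uptime_days
    return intact_bad_files > TOLERANCE * expected

# e.g. 30 days of logged uptime, 212 whole forbidden files reassembled
# from the disk image by a CHK scan:
print(suspicious(30, 212))   # True -> "you are in trouble"
```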

You might be smart enough to disable your browser cache and its downloads history, and smart enough to wipe properly or encrypt dangerous files you've downloaded, but your Freenet cache, over which you have little control, will still tell on you.

>but your Freenet cache, over which you have little control, will still tell on you.

Yes, but the cache putting me in trouble would still be a file on my PC, so why not just delete it? I mean, one could say that the true reason one gets shafted by the police is that he is keeping illegal files on his HD.

Knowing that the cache can behave like a normal naughty file stored in the clear on your machine (and this is obviously a flaw of Freenet in this sense; I don't know whether the developers can't patch it, or could but don't because they ultimately don't care if pedos get snooped on their network, since the only files the police want to snoop for are cp), why don't you just delete your cache? Or encrypt the cache like any other naughty file you want to keep, and store it on some usb drives as hidden files stashed elsewhere?

I mean, this problem wouldn't be any different from the simple decision of whether to keep cp on your machine, be it in the clear or encrypted (but still with clear hints attached that it's cp), or not to store it at all.

Because at the end of the day, the criminal charge is always the same: possession, not downloading, not viewing. If you don't store it, you don't possess it.

Ayo, why not just get a burner phone, top it up with cash, download the illegal shit in some crowded place, then throw away the sim and never connect the phone again?

That's an idea, but how do you get a sim with an internet plan without it getting associated with your documents?

Depends on the country. In most of the EU, regardless of the sim registration laws or lack thereof, you can still walk up to a counter and just get one without any hassle, or by simply saying you're John Smooth, Buckingham Palace, London.

Probably not unless the bitrate was abysmal

this

>if you know where to look
>freenet

It's so easy to find it on there that many indexes have to advertise themselves as cp-free.

>It's more likely they don't. Which is the more likely hypothesis: that the FBI snooped the fuckers with a simple malicious javascript payload after gaining access to the server thanks to an informant, or that they cracked the Tor network itself?

No, it's very likely they do. However, they'd never use it against you until other evidence is brought up. And they wouldn't say that's how they got it. They'd find another way they could have gotten the info, and use that as their explanation. (Note: this is just my hypothesis; it seems to me like the most likely case, but I can't guarantee it's true, obviously.)

>since the only files the police want to snoop for are cp
Maybe now. But maybe down the line you've looked at a file that disrespects the current establishment, and they aren't too happy about that. You don't have cp, but you have something they consider just as bad.

This doesn't have much to do with the rest of your post, but it's dangerous to get yourself into thinking "it's only the four horsemen that they care about."