Network throttling

This might be a dumb question, but it's something I've always wondered.

How does network speed scale downwards? May not be the correct terminology, but what I mean is if I compare a 100MB/s and a 1MB/s connection, does it also mean that the time to transfer a few kilobytes is 100 times faster? What about a single byte or a bit? Since the connections are artificially throttled most of the time and it's actually a limitation of the technology, does a 100MB/s connection transfer a single bit 100 times faster than a 1MB/s connection? Assuming they're using identical infrastructure with only the throttling handled by the ISP set up differently.

Or does the bit still go just as fast, but the limit on how many are allowed to go each second (or tenth of a second or millisecond) is limited?

Please no bully I'm serious.

Attached: qc0yfk4t7t811.jpg (750x750, 84K)

not* a limitation of the technology

they're both inferior to small ass small titties desu

It depends on packet velocidensity and the transfer protocol involved. Seriously though, you answered your own question. Delete the thread now before you embarrass yourself further.

Next time google this stuff.
P.S. both tiddy and booty need moderation.

I don't know the terminology to google it myself, and if I did answer my own question, I don't have the necessary knowledge to realize it.

Just using common sense and elementary school math, I figured there could be several options:

A. the packets move slower on a 1MB/s connection than on a 100MB/s one
B. the packets move at the same speed, but only for 1/100th of a second per second
C. the packets move at the same speed, but they're spread out with pauses in between
D. the packets move at the same speed until the maximum amount allowed for a certain time interval (like one second) is reached, and the network stops accepting new packets for a short while
E. something completely different

So which one is it closest to?

small tiddy small ass small benis = best gf
but to answer your question, you're basically right, but latency, packet loss and packet size are important factors too. Latency is the time it takes for a packet to reach the destination and return an ack(nowledgement); packet size is how much data is sent at once (and therefore how much needs to be resent if it gets lost due to a shitty connection, badly implemented throttling, underspec'd ISP equipment, etc.).
Use iperf3 to test throughput accurately.
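
To put rough numbers on the latency point: with a windowed protocol like TCP, the sender can only have so much unacknowledged data in flight, so round-trip time caps effective throughput no matter what the line rate is. A minimal back-of-the-envelope sketch; the window size and RTT values are made-up examples, not measurements:

```python
# Rough illustration: effective TCP throughput is capped by window / RTT,
# independent of the advertised line rate. All numbers are made up.

WINDOW_BYTES = 64 * 1024  # assume a 64 KiB receive window

def max_throughput_mbit(rtt_ms: float) -> float:
    """Upper bound on throughput (Mbit/s) for a given round-trip time."""
    rtt_s = rtt_ms / 1000.0
    return WINDOW_BYTES * 8 / rtt_s / 1e6

for rtt in (5, 20, 100):  # example round-trip times in milliseconds
    print(f"RTT {rtt:3d} ms -> at most ~{max_throughput_mbit(rtt):6.1f} Mbit/s")
```

So even a fat pipe can crawl if latency is high and the window is small; a tool like iperf3 (which can run several parallel streams) gives a better picture of the raw line rate.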

Ok let me spoonfeed you.

Speed depends on the frequency range of the signal: a wider range = more data per second. Fundamentally, the signal propagates at a good fraction of the speed of light no matter what plan you're on; the frequency range dictates how many bits can be sent per second.
Total bandwidth available on a single cable/phone line depends on the physical attributes of the connection. Length, cable diameter, and interference all play a role in how many frequency bands the distribution box can send to your home router. Google "signal multiplexing" to learn more about that. It's a bit more complex than elementary school math though.

Attached: Spoonfeed.webm (640x360, 2.77M)
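
For the curious, the usual back-of-the-envelope formula tying usable frequency range and noise to a maximum data rate is the Shannon-Hartley capacity, C = B * log2(1 + SNR). A quick sketch; the bandwidth and SNR figures below are illustrative assumptions, not real line measurements:

```python
import math

def shannon_capacity_mbit(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + SNR), returned in Mbit/s."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a plain power ratio
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Made-up examples: a narrow band vs. a much wider, cleaner one.
print(shannon_capacity_mbit(1e6, 20))    # ~6.7 Mbit/s  over 1 MHz at 20 dB SNR
print(shannon_capacity_mbit(10e6, 30))   # ~99.7 Mbit/s over 10 MHz at 30 dB SNR
```

More usable spectrum and a cleaner signal raise the ceiling; the per-bit travel time doesn't change.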

That's interesting and makes sense, but it doesn't really answer my question
>Assuming they're using identical infrastructure with only the throttling handled by the ISP
So the physical side would be identical. Imagine two neighbours who receive their internet from the same provider and connect to the same physical cable, but one is paying for 100MB/s and the other for 1MB/s

Do they have the same latency?

If one is paying for 100 Mbit he is getting a wider range of carrier bands "unlocked" to him, while the 1 Mbit guy gets a narrower one.
Pic related, it's my 50/10 Mbit DSL line spectrum as displayed in my router. Blue bands are for receiving, green bands are for sending data. If I had been paying for 1 Mbit internet, the range would be within the marked yellow box.

The router/modem takes the multiplexed signal, processes it back into TCP/IP packets, then sends them along to your computer.

Attached: spectrum.jpg (1413x212, 36K)
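
To see how "unlocking more bands" turns into a line rate: DSL (DMT) splits the line into many narrow subcarriers, each carrying a few bits per symbol at a fixed symbol rate. A toy estimate with ADSL-style assumptions; the tone counts and bit loadings are illustrative, not read off the screenshot:

```python
# Toy DMT line-rate estimate: rate ~ tones * bits_per_tone * symbols_per_second.
SYMBOLS_PER_SEC = 4000  # ADSL-style symbol rate per subcarrier (assumption)

def line_rate_mbit(num_tones: int, avg_bits_per_tone: float) -> float:
    """Approximate rate in Mbit/s for a given set of unlocked subcarriers."""
    return num_tones * avg_bits_per_tone * SYMBOLS_PER_SEC / 1e6

# A wide allocation vs. a narrow one on the same physical line (made-up values):
print(line_rate_mbit(num_tones=1250, avg_bits_per_tone=10))  # ~50 Mbit/s
print(line_rate_mbit(num_tones=50,   avg_bits_per_tone=5))   # ~1 Mbit/s
```

Fewer tones for the cheaper plan means a lower rate; the signal on the wire behaves exactly the same either way.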

What about with a fiber connection?

My ISP for example has 1 Gbps fiber; it's limited by the 1GbE NIC in the ONT and the router.

But someone with 100 Mbps from the same ISP uses the same fiber cable, the same ONT, same router, same 1GbE NICs.

As far as I know fiber is not tied to carrier bands like DSL is, so how is the speed difference enforced? Does the ONT artificially limit the outgoing connection to 100 Mbps? Or does it get transferred at the full 1 Gbps to the ISP, and then the OLT at the ISP's colocation point limits you to 100 Mbps?

On fiber, the signal is still being multiplexed, just using light instead of electricity. Same idea, just much faster.

What sort of magic is happening at the ISP datacenters I can't tell you, haven't looked into that at all. Probably insanely fast ASICs everywhere.
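
On fiber, where both plans share identical physical gear, the speed tier is typically enforced by a rate limiter (policer/shaper) somewhere between the ONT and the ISP's core, which is basically OP's option D with a small burst allowance. A common scheme is a token bucket; below is a minimal sketch of the idea, not any particular vendor's implementation, and the rate/burst numbers are made up:

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: packets pass while tokens are available;
    tokens refill at the contracted rate, excess traffic waits or gets dropped."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s   # long-term allowed rate (the plan)
        self.capacity = burst_bytes    # short bursts above the rate are allowed
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # packet goes out at full line speed
        return False       # over the tier: queue it or drop it

# Example: a "100 Mbit/s" tier (12.5 MB/s) with a 64 KiB burst allowance.
limiter = TokenBucket(rate_bytes_per_s=12.5e6, burst_bytes=64 * 1024)
print(limiter.allow(1500))  # a single full-size packet sails through
```

Each individual packet still crosses the fiber at the same speed; the limiter only controls how many of them get through per interval.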

Thanks sir I learned something new.

Thanks, that explains a lot. So in the case of a single bit the advertised bandwidth shouldn't make a difference then because it fits even in the smallest range, right?

The difference can occur due to varying signal quality, interference, line damage, network congestion, and more (probably). Latency also depends on the signal-to-noise ratio and the length of the line.
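
Ignoring those real-world effects, a rough model is: transfer time ~ fixed latency + size / rate. A tiny worked example; the 10 ms latency figure and the Mbit/s rates are illustrative assumptions, keeping the 1-vs-100 ratio from the OP:

```python
# Rough model: time = latency + size / rate. The latency value is assumed.
LATENCY_S = 0.010  # pretend both plans see ~10 ms of fixed latency

def transfer_time_s(size_bits: float, rate_bits_per_s: float) -> float:
    return LATENCY_S + size_bits / rate_bits_per_s

for size_bits, label in ((1, "1 bit"), (8e3, "1 KB"), (8e8, "100 MB")):
    t_slow = transfer_time_s(size_bits, 1e6)    # 1 Mbit/s plan
    t_fast = transfer_time_s(size_bits, 100e6)  # 100 Mbit/s plan
    print(f"{label:>6}: {t_slow:.4f} s vs {t_fast:.4f} s")
```

For a single bit the two plans are indistinguishable (latency dominates); the ~100x gap only shows up once the transfer is big enough that the size/rate term takes over.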

You know you could just have googled "How internet works" right?

And browse through dozens if not hundreds of pages of information in hopes of finding the one relevant bit when I can just have someone on Jow Forums spoonfeed it to me? Why on earth would I do that?

>Why on earth would I do that?
Sorry I thought you were genuinely interested in the topic beyond the surface level descriptive shit. You're welcome, I guess.

As almost always, Crowder is right.

Nah, quite the opposite, I already dropped out of computer science and switched to humanities. I was just curious about that one thing.

>dropped CS to switch for humanities

Attached: 1494782303526.jpg (750x864, 178K)

and all 3 are inferior to big booty big tiddies desu

Pretty cool stuff! Thanks for the info.