Would it be a good idea for the network card to have its own cache and/or cpu?

Attached: d930d8026cfc78bf3dd3b38999731b3c.jpg (400x370, 26K)

what does the c in cpu stand for

we'll make a new NPU.
network processing unit.

what will it process

websites

Cunny.

It already does. Heck, when the NE2000 came out decades ago, it had its own cache and CPU.

that's cool I'm all in for that

So basically a hardware-integrated JS interpreter?

We've had them for decades. But consumer cards don't really need that.

this is already a thing
usually advanced firewall rules, VXLANs, VPNs and other fancy processing to offload the CPU

This. Plus even earlier enterprise cards.

Attached: file.png (355x355, 134K)

TCP/UDP packets, for example.
Even on a modern CPU, a 10GbE link can eat as much as 10% of CPU time while transferring at full speed.
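For illustration, here's a rough untested sketch (assumes Linux; "eth0" is a placeholder interface name) of asking the kernel whether the card is doing RX checksumming and TCP segmentation offload for you, via the legacy SIOCETHTOOL ioctl:

/* Rough sketch, untested: query RX checksum and TSO offload state
 * for a NIC via the legacy SIOCETHTOOL ioctl. "eth0" is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_offload(int fd, const char *ifname, __u32 cmd, __u32 *out)
{
    struct ethtool_value ev = { .cmd = cmd };   /* GET commands fill ev.data */
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&ev;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        return -1;
    *out = ev.data;
    return 0;
}

int main(void)
{
    const char *ifname = "eth0";                /* placeholder, pick your NIC */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);    /* any socket works as an ioctl handle */
    __u32 rxcsum = 0, tso = 0;

    if (fd < 0) { perror("socket"); return 1; }
    if (get_offload(fd, ifname, ETHTOOL_GRXCSUM, &rxcsum) == 0)
        printf("rx-checksumming: %s\n", rxcsum ? "on" : "off");
    if (get_offload(fd, ifname, ETHTOOL_GTSO, &tso) == 0)
        printf("tcp-segmentation-offload: %s\n", tso ? "on" : "off");
    close(fd);
    return 0;
}

Same info ethtool -k eth0 gives you, just from C.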

Wasn't this some kind of 2006 network card that promised to remove lag in online games?

The Killer NIC? Yeah, it even worked. The lag reduction was minuscule, but by removing CPU overhead you could gain a noticeable FPS improvement in multiplayer games.
It's one example of a consumer card with a CPU and RAM to offload processing. Pretty sure it was supposed to do VoIP and such too.

Some already do, dumbass. You've never seen a NIC that has hardware acceleration?

Are there any modern NICs that do this?

Bullshit, the only thing that reduces lag is smart interrupt moderation. ARP offloading is nice because you don't have to wake up the host. It costs close to nothing to handle 200 pkts/s, so prove it or stfu.

Many NICs, even consumer ones, have RISC cores to offload packet processing, etc. They increase latency compared to the CPU but reduce CPU utilization. There are higher end enterprise cards that do this too. All PCIe these days.

As said, the lag reduction was minuscule, but that was compared to hardware of the time, especially onboard and worse. You're free to check the benchmarks yourself; that hardware is almost 15 years old now.
Compared to modern cards it's highly outdated in its capabilities, and its main benefit wasn't latency reduction but lag reduction: offloading processing from the CPU to free it up for other tasks, at a time when multiple cores were still uncommon in consumer machines.

There's a difference between lag and latency too when it comes to gaming; lag isn't strictly network latency.

so if i buy a network card will websites load faster?

i think you should try it for a NAS first. i did mine once, got an ASUS P5E with 2 gigabit LAN ports. It's kinda slow and stuttery though. CPU and RAM usage jump like crazy when someone is backing up files.

No, workloads like that are noticeable.

>network processing unit.
These already exist

>They increase latency compared to the CPU but reduce CPU utilization.
You mean they reduce CPU load, reducing utilization is not a good thing. Also, they only reduce latency if you have available CPU power, but if you need to context switch out of a VM in order to deal with an interrupt for example, then you actually increase latency.

The Killer NIC is from 2006, price tag $280.
Pentium D was Jan 2006
Athlon x2 was May 2006
even on a single core, CPU usage from network traffic doesn't matter for gamers.

parallel pci thread ok this is ebic :^)

What?

don't all these javascript heavy websites require processing though?
i can't even browse aliexpress without it getting stuck sometimes, and my internet speed is 50 Mbps.
not that that's much, but it should be enough.

Obsolete in 2 months.

>You mean they reduce CPU load, reducing utilization is not a good thing. Also, they only reduce latency if you have available CPU power, but if you need to context switch out of a VM in order to deal with an interrupt for example, then you actually increase latency.
They reduce CPU utilization from the network stack. The CPU is utilized less for networking so it can be utilized more for other things.
This increases latency though, since processing on the CPU itself is still faster today.

>CPU utilization is the sum of work handled by a Central Processing Unit.
Aka load.

>would it be a good idea for the network card to have its own cache and/or cpu?
You see those black sort of boxes with those silver things that are glued to the board or whatever? I wonder what they do?

capacitors?

>This increases latency though, since processing on the CPU itself is still faster today.
See interrupt thrashing, particularly with regard to VM loads. There is a trade-off between spending CPU time on dealing with the network versus letting CPUs run VMs and offloading packet processing to hw.

Can you explain to me why a network card needs VM tech?

Read the post very carefully again.
Also yes, it did matter. As I said, you're free to check benchmarks yourself, made by unbiased 3rd parties. Be sure to include widespread hardware of the time, not modern comparisons.

Even sound decoding and processing mattered back then; network and sound overhead in games could make a noticeable difference in framerate. Hardware accelerated sound is dead today too, for the exact same reason sophisticated NPUs are: the CPU can handle it fine these days.

>The Killer NIC is from 2006, price tag $280.
>Pentium D was Jan 2006
>Athlon x2 was May 2006
Yeah, and both those CPUs cost twice that, not even including a new board. This is irrelevant though.

I was talking about latency critical applications. For the consumer market this has no effect, and barely does for games. As I said, you get better latency without offloading.

Attached: file.png (616x418, 35K)

CIA-backdoor

You're not reading what I'm saying. The NIC doesn't have anything VM related. I'm saying that there is a trade-off between aggregating packets and offloading their processing to hardware (which may be a bit slower compared to a CPU) if you have a server running hundreds of VMs. Having a NIC that fires off an interrupt, causing a CPU to context switch out and process packets, will lead to poor performance for whatever you are running in your VM. In these cases (though not the average Jow Forums use case, I'll admit that) it is beneficial to aggregate processing in bursts of packets and/or offload packet processing to hardware. While it increases the latency of processing each individual packet, it may actually increase the throughput of data and reduce time spent dropping out of VM context.

>I was talking about latency critical applications
You need to be more accurate than that. What's the latency critical part? Processing network packets, then yeah, sure, doing it on the CPU makes sense. Doing calculations and not being interrupted by I/O and interrupts? Then offloading it makes sense.

>What's the latency critical part? Processing network packets, then yeah, sure, doing it on the CPU makes sense.
Since I was talking about consumer NICs and the Killer NIC, the only even remotely "latency critical" application consumers use NICs for is multiplayer gaming. In which case, yes, exactly.

>the only consumer application that matters is gaming
Sure is /v/ in here.

>implying audio recording where every microsecond of system upkeep adds to the accumulated latency isn't a consumer workload
On audio forums, people literally recommend that people shut off their wifi and pull out network cables because it might have an effect on recording timing.

CPU offloading is cancer, hardware is always good if it's cheap enough

>own cache
all network cards have a cache
>and/or cpu?
they already do, but they don't run an OS, they run firmware
>usually advanced firewall rules, VXLANs, VPNs and other fancy processing to offload the CPU
so what you really describe is a pci-e router/switch
>the lag reduction was minuscule, but by removing CPU overhead you could gain a noticeable FPS improvement in multiplayer games.
bullshit. there's no difference between the gay-men network chips and regular network chips, apart from the polling frequency of the driver to fetch data. changing the polling frequency reduces the latency at which you read the packets, but creates more overhead for the cpu.
it's just a s/w gimmick that prioritizes certain workloads for the CPU and certain packets on the card to process.
nothing less, nothing more... and ofc with a premium price.

Genuine question: I was under the impression that the majority of consumer NICs are interrupt driven rather than polling. At least the Intel NIC drivers in the Linux kernel seem to prefer interrupts by default.

>made by unbiased 3rd parties
testing notoriously developed games like WoW, FEAR and CS shows that it's the game's fault for implementing networking wrong.
again, the id tech game shows how the games should be developed.
My second computer was from 2006, pentium D, ATi 1900 XTX 512MB, Foxconn motherboard, 2GB ram... it's been 13 years and I can't remember the specs exactly. never fell for the soundblaster, killer, physx meme and it paid off.
just for the record, eliminating latency is a protocol problem, but I don't see anyone running infiniband in their home network for gay-men.

interrupts are just signals and the CPU is checking them within some predefined intervals.
the only interrupts right now that truly stop the CPU to process their data are the ones from the PS/2 port.
every other interrupt, especially those from pcie, gets pushed to a queue and the CPU checks every now and then if there are any.
That's how every modern OS works. I don't know how MS handles interrupts and if there's a way in their kernel to check a device ID more frequently for interrupts, but the default behavior in Linux is to increase the interval once a device sends more interrupts than a certain limit. It happens very frequently with GPU drivers on Linux.
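If you want to watch that happening, here's a rough untested sketch that just dumps the /proc/interrupts lines belonging to a NIC; run it twice a second apart and diff the counts to see the interrupt rate the card is actually generating. "eth0" is a placeholder interface name.

/* Rough sketch, untested: print the /proc/interrupts lines that belong
 * to a NIC, to see its IRQ numbers and per-CPU interrupt counts.
 * "eth0" is a placeholder interface name. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *nic = "eth0";   /* placeholder; per-queue IRQs show up as e.g. eth0-TxRx-0 */
    char line[1024];
    FILE *f = fopen("/proc/interrupts", "r");

    if (!f) { perror("/proc/interrupts"); return 1; }
    while (fgets(line, sizeof(line), f))
        if (strstr(line, nic))
            fputs(line, stdout);
    fclose(f);
    return 0;
}

Or just run watch -n1 'grep eth0 /proc/interrupts' from a shell.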

>interrupts are just signals and the CPU is checking them within some predefined intervals.
They're more than that, because they cause a CPU to context switch from running your application to running an interrupt handler.

Even worse, with MSI interrupts a device can really only target a single CPU, so if the network card receives a packet and DMAs it into a memory address closest to NUMA node 2, and the CPU where the interrupt routine runs is on NUMA node 1, that CPU needs to cross a QPI/UPI link to retrieve the data the device just wrote to memory, causing significant latency overhead. (MSI-X, of course, alleviates this by being able to target specific CPUs, but I'm not sure what the current state of NICs is wrt interrupts; it seems MSI is the most common.)
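For what it's worth, pinning a queue's IRQ to a core on the right node is just a write to procfs. Rough untested sketch; IRQ 45 and CPU 2 are made-up numbers, look up the real IRQ for your NIC's RX queue in /proc/interrupts.

/* Rough sketch, untested: pin one IRQ to one CPU by writing to procfs,
 * so the interrupt handler runs next to the memory the NIC DMAs into.
 * IRQ 45 and CPU 2 are made-up numbers. Needs root. */
#include <stdio.h>

int main(void)
{
    const int irq = 45;   /* hypothetical IRQ number */
    const int cpu = 2;    /* hypothetical CPU on the right NUMA node */
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
    f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "%d\n", cpu);   /* same effect as: echo 2 > /proc/irq/45/smp_affinity_list */
    if (fclose(f) != 0) { perror("fclose"); return 1; }
    return 0;
}

Roughly what irqbalance automates, just done by hand for one queue.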

>every other interrupt, especially those from pcie, gets pushed to a queue and the CPU checks every now and then if there are any.
I wasn't aware that IRQs were pushed to a work queue, are you sure about this? I will admit that it's not my strongest suit.

>but the default behavior in Linux is to increase the interval once a device sends more interrupts than a certain limit.
Interrupt aggregation leads to increased latency though. If you increase the aggregation limits too high, you might as well start relying on offloading instead, to be honest.
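Those aggregation limits are the NIC's coalescing parameters. A minimal untested sketch ("eth0" is a placeholder) that reads them through the ETHTOOL_GCOALESCE ioctl, same values ethtool -c eth0 shows:

/* Minimal sketch, untested: read a NIC's interrupt coalescing settings,
 * i.e. how long / how many frames it waits before raising an interrupt.
 * "eth0" is a placeholder interface name. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) { perror("socket"); return 1; }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder NIC */
    ifr.ifr_data = (void *)&ec;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_GCOALESCE"); return 1; }

    printf("rx-usecs:    %u\n", ec.rx_coalesce_usecs);        /* wait time before an IRQ */
    printf("rx-frames:   %u\n", ec.rx_max_coalesced_frames);  /* or this many frames */
    printf("adaptive-rx: %u\n", ec.use_adaptive_rx_coalesce); /* driver auto-tunes it */
    close(fd);
    return 0;
}

ETHTOOL_SCOALESCE writes the same struct back if you want fewer interrupts at the cost of latency.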

The alternative to interrupt driven packet processing is of course for a CPU to poll the RX ring buffers (or use hw offloading), which is what SPDK for example does.
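On the socket side you can get a poor man's version of that with SO_BUSY_POLL, which makes a blocking read spin on the device's receive queue for a bit before sleeping. Rough untested sketch; the 50 microseconds is an arbitrary example value.

/* Rough sketch, untested: opt one UDP socket into busy-polling so a
 * blocking recv() spins on the device's receive queue for up to 50 us
 * before sleeping, instead of waiting for an interrupt. Lower latency,
 * more CPU burned. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46   /* asm-generic value, in case the libc headers are old */
#endif

int main(void)
{
    int busy_usecs = 50;   /* microseconds recv() may busy-poll; arbitrary example */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) { perror("socket"); return 1; }
    if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                   &busy_usecs, sizeof(busy_usecs)) < 0)
        perror("SO_BUSY_POLL");   /* raising it may need CAP_NET_ADMIN */
    /* ...bind() and recv() as usual; reads now busy-poll first... */
    close(fd);
    return 0;
}

There are also the net.core.busy_read / net.core.busy_poll sysctls if you want it system-wide.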

>Sure is /v/ in here.
Consumer, yes. Not prosumer. That's pretty much facebook and /v/, you're correct.
The Killer NIC itself was "gamer hardware" to begin with.

Well, it's a fair point regarding the Killer NIC; I was talking more generally. But audio recording is frequently cited on Jow Forums as one of the reasons people refuse to migrate from Windows, for example, although I suspect that this is LARPing in 90% of the cases.

>bullshit. there's no difference between the gay-men network chips and regular network chips
The Killer NIC specifically used a PowerPC CPU with 64MB of RAM on it. It was literally a computer on a board, specifically for offloading network activity; at the time it was far ahead of the puny RISC cores on higher end networking cards.
hothardware.com/news/amiga-enthusiast-gets-quake-running-on-killer-nic-powerpc-processor

You're probably young and think we are talking about the modern NICs made by Killer Networking that are literally rebrands of other NICs with their own software.
See: >for the exact same reason sophisticated NPUs are: the CPU can handle it fine these days.

Yeah, I can tell that was your second computer by how clueless you are about it all.

>never fell for the soundblaster, killer, physx meme and it paid off.
Hardware accelerated sound was huge at one point: A3D, EAX. Having hardware that supported it was hugely beneficial for your gameplay experience.
PhysX was once widely supported by games; these days it's all integrated into GameWorks.
There was a time almost every game supported one of these.

>just for the record, eliminating latency is a protocol problem, but I don't see anyone running infiniband in their home network for gay-men.
As I already said, the Killer NIC made sense at one point. I've stressed enough that things are different these days, and because you lack experience from that time you can't comprehend that. I already told you that these days you're going to have a better time letting the CPU do the processing, for latency reasons, when gaming.
When your CPU was more limited, it wasn't just a software protocol problem.

>testing notoriously developed games like WoW, FEAR and CS shows that it's the game's fault for implementing networking wrong.
>again, the id tech game shows how the games should be developed.
Or those games just utilize networking more heavily? WoW and CS have far more sophisticated netcode, I'd bet.

See:

user is obviously a zoomer, ignore them

Yeah I'm not saying otherwise, you're right.

If that firmware is free and open source software then sure.

cock penis uganda

Attached: 1277833035029.jpg (310x310, 20K)