Mind = Blown Architecture

How the fuck do interrupts work? How can they even be implemented? I mean, how can something ever know that something has changed without regularly checking it on each clock cycle? There has to be a loop somewhere down there inside the processor. This completely fucks with my mind.


DMA

I had the same question. The answer is that the processor does actually check regularly for interrupts. It looks shitty and inefficient from a software point of view, but I suppose the hardware guys have pulled some tricks there to speed it up.
Or it can just be shitty and inefficient.

In theory you'd only have to check once for all interrupts, though. That would still be much better than polling each device, and you wouldn't need to check on every single cycle.
I suppose if you timed it right you could just overwrite the IP and stack registers like a true cowboy. I wonder if any hardware has done something like that.

t. knows nothing about hardware

A message queue for all interrupt devices that's regularly polled?

Wouldn't the CPU having dedicated circuitry for polling be even faster than a software implementation, which has to use a jmp instruction every time it loops? I would assume having access to the actual circuitry of the processor allows for a far more efficient implementation of polling, thus making interrupts faster than software polling. Why do some people insist that interrupts are slower?

>software calls interrupt
>instruction pointer gets changed via the interrupt descriptor table
>a jump is performed to the handler address the IDT entry for that interrupt points to
>handler code gets executed
>iret instruction gets encountered, instruction pointer returns to the software that called it
In protected mode it's the same thing - except due to Intel's retardation, you need to toggle into real mode, execute the interrupt, then go back into protected mode.

>How the fuck do interrupts work?
Hardware interrupts? I personally haven't looked much into it, but the CPU is just another controller on the system bus. Other hardware (e.g. the keyboard controller) should be able to signal it, causing it to jump to a kernel/driver-defined interrupt handler.

>how can something ever know that something has been changed without regularly checking it with each clock cycle?
State change notifications. The most basic example is a callback function: client code sets a handler function, tells the back end to do its work, and the back end calls the handler if/when needed. Another example: you request a read from a socket -> the system records the request, perhaps even satisfies it partially -> when data comes in from the network, it sees a pending read request on the associated socket -> it notifies the process that made the request through whatever OS-specific mechanism.

Look into event-driven programming.

>except due to Intel's retardation, you need to toggle real mode then execute the interrupt then go back into protected mode.
Real mode interrupts are provided by the BIOS and are 16-bit code, segmentation and all. You can't just call that from 32-bit Protected Mode (and especially 64-bit Long Mode, since it doesn't support segmentation AND requires paging). You don't need any toggling if you have actual drivers. It hasn't been done by any serious kernel since like Windows 3.0.

>Why do some people insist that interrupts are slower?

Because interrupts cause context switches, as they have to be dealt with in kernel space, and context switching is expensive.

The Atari VCS has no raster interrupt at all (the TIA only offers the WSYNC strobe to halt the CPU until the next line), so adept programmers counted cycles between lines to modify the display in sync and grab some spare cycles for logic.

The Commodore 64 and the Atari 8-bits had nice raster timing features, but the VIC-20 has no raster interrupt, so demo effects need to be calculated via timer interrupts and cycle counting. Fortunately this is comparatively simple on the 6502 and was leveraged on the C64 for even more impressive display tricks.

"Event driven" programming and "callbacks" are ultimately implemented under the hood with an event polling loop in the background that uses up cycles. All event driven languages work like this, this is why this style of programming (atleast for single threaded applications) is often waaaay slower than programs written in C (that and the fact that event driven languages usually almost always tend to be scripting languages which run on top of an interpreter which in itself is another polling loop that adds its own overhead.).


Interrupts on the other-hand supposedly require zero background overhead.

It's just a logic circuit, there is no "loop" checking for the interrupt flags.

>are ultimately implemented under the hood with an event polling loop in the background that uses up cycles
No. Or at least not if you use something more sophisticated than select/poll.

>All event driven languages work like this, this is why this style of programming (atleast for single threaded applications) is often waaaay slower than programs written in C
/facepalm
nginx is written in C, can serve thousands of requests/second on a single worker and is completely event-driven. Nobody has ever described it as slow, quite the contrary.

It has nothing to do with the language, though I can see how you might think that if you've only ever used shitty languages with cancerous "frameworks" that do everything for you, poorly.

>No. Or at least not if you use something more sophisticated than select/poll.

I have trouble believing this is true. I cannot imagine a callback or any other form of reactive event system occurring without an asynchronous event loop taking care of the messages being passed. This is the only way it is possible for a computer to realize that something has occurred, other than interrupts. There is definitely a loop happening somewhere that is checking for updated states continuously. Unless you are simply talking about recursion and the frame being stored on the stack.

>I mean how can something ever know that something has been changed without regularly checking it with each clock cycle?
tl;dr: The processor does check for interrupts every clock cycle. It has dedicated circuitry for it.

Going into the details, a computer generally has an Interrupt Controller chip on it. Devices that can generate interrupts -- storage controllers, PCI devices, shit like that -- can communicate with that interrupt controller directly, and the interrupt controller keeps track of which devices currently have pending interrupts. At least one wire runs from the interrupt controller to the processor, which reads 1 if there is an interrupt pending and 0 if not (the details of which can be configured by the processor). The processor checks this wire every clock tick. Whenever it becomes 1, the processor asks the interrupt controller for details -- which interrupt? what device? -- and enters an interrupt handler using the interrupt handling circuitry. This includes telling the interrupt controller "got it, go clear interrupt 17".

>There is definitely a loop happening somewhere that is checking for updated states continuously.
There is, but it's a blocking call. It calls something like select(), which puts the process to sleep until there is input to process, at which point the operating system wakes it up. The program does not spend any processor time at all watching for input when there isn't any; instead, the event loop body ends with "wake me whenever something happens".

There is no "code loop" running anywhere.
It's a physical, edge-triggered pin (and an internal instruction) that overrides the next operation with a call and some pushes.
The int instruction is pretty much an alias for "push all regs, call (interruptvec)", while the pin redirects the instruction queue to that call.

They're logic circuits, you philistine. Everything happens at the same time in a clock cycle. It costs them literally nothing to look at an interrupt flag.

>without an asynchronous event loop taking care of the messages being passed. [...]
>There is definitely a loop happening somewhere that is checking for updated states continuously.
Asynchronous event loops should never poll, they should wait indefinitely. If they poll they are not really asynchronous.

When a thread is put to sleep waiting for "something to occur" it is excluded from kernel scheduling; the kernel simply ignores it. If "something occurs" that the thread has associated itself with (such as data arriving on a socket associated with an epoll set or I/O completion port), the system can figure out the association and make the thread schedulable again, "unblocking" it and allowing it to receive some form of notification that "something has occurred". The thread never needs to poll, it just sleeps until the kernel allows it to run again. The kernel itself may not need to poll either, though chances are it will need to deal with pending notification queues. The CPU might poll, but at that level it can hardly be considered polling as understood at a higher level.

>This is the only way it is possible for a computer realize that somethings has occurred other than interrupts.
The computer doesn't "realize" anything, nothing changes in a computer unless software instructs it to. Be it microcode, firmware, the kernel, a driver, some API or your program, the software always knows what it's changing and can notify something else, if by no other means than returning an error code.

The epitome of software developer fantasy about how hardware works

Nope, wrong. Asynchronous means that the processes happen independently in two separate threads (ideally on two separate clocks), hence the name: non-synchronous. Polling can definitely happen asynchronously.

>Polling can definitely happen asynchronously.
If you are waiting asynchronously for a synchronous operation (polling) you are, by extension, synchronous.

Are servers higher performance than games and HFT programs?