Why don't we have co-processors anymore?

Attached: 80386with387.jpg (3648x2048, 3.86M)

They are included on the CPU die now.
GPUs are basically co-processors though.

da joos

>They are included on the CPU die now.
And this is done because a) it makes things a lot faster, and b) unlike in the mid-80s, we have transistors to burn. Splitting out the FPU back then made sense because it reduced the size of both chips to something that could be manufactured with the lithography of the time. Now the limits are basically all thermal, rather than the cost of making all those transistors.

North Bridge/South Bridge

We still do, it's called IME.

A few reasons
>1) A coprocessor is an extra, sometimes weaker, computer processor to run system tasks
This is basically like a dual-core setup, but with a weaker core. Today, it is easier to just make one chip with multiple cores on it. Each core is its own processor. Even the Raspberry Pi has 4 cores, which shows it is pretty cheap to just make one multi-core CPU instead of two single-core chips.
>2) You actually can buy a coprocessor still
You can buy many different modules that plug into a PCIe slot on your computer to speed up computation or handle system tasks. Examples include:
>A GPU (graphics processing unit) is a type of processor which handles video so the CPU doesn't have to. Some motherboards have a cheap GPU built in. A GPU has a fundamentally different computing structure from a traditional CPU. Look up "CUDA" for more info (rough sketch after this list)
>A sound card is the same idea as the graphics card, but for sound
>You can actually still buy an Intel coprocessor. They look like graphics cards, but their chip design is more similar to a regular CPU than to the GPU described above
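Not from the thread, but to make the CUDA point above concrete, here's a minimal sketch of what "fundamentally different computing structure" means (everything here is illustrative, just a stock vector add): instead of one core looping over the array, the GPU launches one lightweight thread per element and runs them in huge batches.

```
// Minimal, illustrative CUDA sketch (not from the thread): vector addition
// offloaded to the GPU. One thread per element instead of one core looping.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void add_vectors(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread handles one index
    if (i < n)
        out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *out;
    // Unified memory keeps the sketch short; real code often does explicit copies.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    add_vectors<<<blocks, threads>>>(a, b, out, n);  // hand the work to the "coprocessor"
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

That grid-of-threads model is why a GPU behaves like a coprocessor: the CPU just sets up the data and the launch, and the device grinds through the bulk arithmetic.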

you do have them

Attached: intel-me.png (617x793, 187K)

it's been built into the CPU since the Pentium

the 486 had a with-coprocessor version (486DX) and a without-coprocessor version (486SX)

Attached: s-l1600.jpg (1600x1067, 116K)

> What is the GPU for math computations?

Coprocessors were nothing like extra cores. They had dedicated tasks, unlike SMP-capable machines with several processors (or cores).

While the post is perhaps not technically correct, it still gives a reason why we do not currently have discrete coprocessors.

Why would you need a coprocessor if you have extra cores and can do all the work in software?

That's not the point at all; extra cores already have discrete floating-point units inside them. GPUs already do DSP functions.
We also don't use extra cores to do work in software that GPUs, for example, are better at.

I don't think there's a use case for not having floating-point instructions nowadays, so there's no point in having two separate units.

Most (maybe all? someone feel free to correct me) GPU systems don't do double precision floating point operations, only single.

>Xeon Phi, Apple M7, Tensor processing unit
They still exist, user.

They actually do; it's just not utilized in consumer applications. But this has nothing to do with CPU floating-point math or what user corrected you about.

If you mean "what the GPU is capable of", they're all capable of double precision these days. They're slower at it than they are at single precision, of course, by a factor of at least two. Nvidia intentionally gimps DP performance on consumer cards, to like 1/32 of SP speed or something, basically only to force people who really need it to pay up for the pro cards. (This is also why consumer cards always have their PCI-E power connectors on the top of the card and not the back - to interfere with them fitting in rackmount cases.)

If you're talking about "what people generally do with GPU compute", that varies. Some stuff strongly requires DP, other stuff is fine with SP.
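To make the SP/DP point a bit more concrete, a rough sketch (purely illustrative, kernel name and launch setup are made up): the same CUDA kernel instantiated for float and for double. The double launch is where consumer GeForce cards fall off a cliff (the ~1/32-rate gimping mentioned above), while pro/datacenter parts stay near the ~2x gap you'd expect from operand width alone.

```
// Illustrative only: one templated kernel, launched once per precision.
#include <cuda_runtime.h>

template <typename T>
__global__ void axpy(T a, const T *x, T *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // one fused multiply-add per element
}

void run_both(const float *xf, float *yf, const double *xd, double *yd, int n) {
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    axpy<float><<<blocks, threads>>>(2.0f, xf, yf, n);   // SP: full rate on any card
    axpy<double><<<blocks, threads>>>(2.0, xd, yd, n);   // DP: heavily throttled on GeForce
    cudaDeviceSynchronize();
}
```

Time the two launches on a big enough buffer and the ratio tells you whether you're on a gimped consumer card or a pro part.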

Thanks for clarifying about the floating point operations comment.

I actually work in parallel computing, but not with GPU devices. I work with the types of CPU systems compatible with OpenMP and MPI. On the occasions when I find myself using a GPU, I'm usually doing some CUDA signal-processing work. This stuff is for physics simulation, so in honesty I don't actually know much about how sound or video processing is handled. My whole intent was to show OP why some systems have only a single chip and also why other systems have multiple types of processing units.

You mean like a GPU or an FPGA accelerator? We still have those, gramps.

Attached: XpressGXSXS10.jpg (2500x1142, 1.7M)

delet

>what is GPU
>what are hardware decoders, encoders

>This is basically like a dual-core setup, but with a weaker core.
No, it really isn't at all, not even in the same ballpark. Co-processors are not general-purpose processors and only exist to accelerate certain tasks and computations, which they often take over entirely from the host processor. They also can't be compared on performance in that way; to take OP's example, an 80387 is certainly "weaker" than an 80386 at general computing tasks because it can't perform them at all, but it's very, very fast at floating-point math. And that's all it exists to do.

see

I don't know what this has to do with anything I said.

based and redpilled

>Why don't we have co-processors anymore?

GPUs.

If you had read the replies to the post you commented on in the thread, you would have noticed that your corrections had already been addressed.

>(this is also why consumer cards always have their PCI-E power connectors on the top of the card and not the back - to interfere with them fitting in rackmount cases)

Ah, I was actually wondering about that. The last card I remember with the power connector on the back was the 5850; I remember specifically buying that because I wanted to save its cooler for another project and wanted it to be symmetrical, i.e. no cut-outs on one side.

Oh, yeah, I looked through that a little bit but the comparison to cores was just a little too silly for me.

It's not like it really matters in the end, though. Just a bunch of pedantic shit. Carry on.

whacha doin

kek

BASED

toppest of keks

In the 1980s, processors were slow and communication between chips was not a bottleneck. Now the cost of sending work off-chip is quite high relative to what can be done on-chip. So unless you have a lot of work prepared for some other, more specialized processor to do, you are better off not sending the work elsewhere.
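A hand-wavy sketch of that tradeoff (illustrative, not from the thread): time the round trip of copying a buffer over PCIe, running a trivial kernel, and copying it back. For small buffers the fixed transfer cost swamps the compute, which is exactly why you only ship work to a coprocessor in big batches.

```
// Illustrative CUDA timing sketch: the off-chip round trip vs the actual work.
#include <cuda_runtime.h>
#include <cstdlib>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;   // the "work" is deliberately trivial
}

float roundtrip_ms(int n) {
    const size_t bytes = n * sizeof(float);
    float *h = (float *)calloc(n, sizeof(float));
    float *d;
    cudaMalloc(&d, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // cost of going off chip
    scale<<<(n + 255) / 256, 256>>>(d, n);              // the part the GPU is good at
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // cost of coming back
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(d);
    free(h);
    return ms;   // compare small n vs large n: the copies dominate small jobs
}
```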

so?