What does Jow Forums expect Intel's first dGPU to be and perform like?

Note: they have some AMD guys

Attached: 1505938088802.jpg (800x800, 107K)

wat

Intel is working on a dedicated GPU

Why don't they just make integrated graphics not shitty

Making a good ultra-small, ultra-low-voltage GPU that fits in a CPU is harder than making a dGPU. Plus, the market for dGPUs is very lucrative right now.

>implying someone is really using integrated graphics outside of the office.

Laptops though. Most people don't need a dedicated GPU, even if they play games. Modern AMD APUs are more than enough for 1080p gaming.

god, i want to fuck tesselation

in the near future, multiple small ARM cores will run Crysis at 60 FPS

All that integer perf is a waste for graphics unless we go back to full software rendering with crazy stuff like splines, ray tracing and voxels at the same time.

I am a brainlet but can you explain why multiple cores can't make everything faster than a single powerful core?

Some things can't be done in parallel; you have to finish the first step before starting the second. Like making a cake: you can't mix the ingredients and bake them at the same time.
[citation needed]

can't we make parallelism standard in software development?

currently the C++ standard library can't touch the integrated graphics processor on the CPU to accelerate common STL functions. Having all that silicon on the chip sit unused is a waste.

>The Intel740, or i740 (codenamed Auburn), is a 350 nm graphics processing unit using an AGP interface released by Intel in 1998. Intel was hoping to use the i740 to popularize the AGP port, while most graphics vendors were still using PCI. Released with enormous fanfare, the i740 proved to have disappointing real-world performance[1], and sank from view after only a few months on the market. Some of its technology lived on in the form of Intel GMA, and the concept of an Intel produced graphics processor lives on in the form of Intel HD Graphics and Intel Iris Pro.

Attached: KL_Intel_i740_AGP.jpg (1912x1316, 1.02M)

No, because some computations need the results of other computations. Suppose you had one program, A, that simulates the chemical reactions of the cake ingredients being mixed together, and another, B, that simulates the chemical reactions of the baking. Running B before or while A runs is completely useless: B will just sit there waiting for A.

As for the natural follow-up of "can't we make each core compute one pair of molecules?": yes, but we have to put the results together at the end, and that combining step has to be synchronized. Coordinate it badly across many processors and you get races or deadlocks.

thanks, that was informative.

The last time they tried this, the Xeon Phi was born.

Attached: Xeon Phi.jpg (620x465, 39K)

Going by past experience, it's going to be nothing like anyone else's GPU. Development will go in 5 different directions, led by as many different teams. Somehow they will produce a working project despite that. Then just as it's reaching completion, management will walk in and cancel the project or turn it into something completely different.

Human brains aren't big enough and programmers aren't paid to deal with that complexity.

Attached: 1525398855949.jpg (3264x2448, 2.46M)