What kind of mental gymnastics would be required to successfully defend this?

Attached: less is more profits.png (534x688, 30K)

Other urls found in this thread:

caly-technologies.com/en/die-yield-calculator/
semi.org/en/node/50856

something something ayymd housefires

I love Nvidia. I want an x86 CPU made by them

better than ayylmaoD

better jewish practices?

dunno, ask nvidrones

sage

bump :>

AMD's incompetence.

>1 player drops the ball and the rest of the world gets jew'd
that shit ain't right

It would be better if you expanded on your point. As far as I can see you're just comparing die sizes, which is a pretty one-dimensional way of comparing GPUs.

what other dimensions are there? R&D expenditures have increased, but they've also tapped into other, still rapidly growing markets
I would go as far as to say their R&D spending relative to revenue has gone down

>what other dimensions are there?
CUDA cores, texture units, memory bandwidth. Just focusing on die size seems a little dumb.

these have different value to customers, not higher cost to nvidia

>not higher cost to nvidia
Not necessarily. For instance, if you look at a Titan X vs a Titan Xp, the Xp has a greater number of CUDA cores on a smaller die. It could be the case that the process needed to put them on a smaller die costs more. That cost would obviously be passed on to consumers.

>these have different value to customers
Well yeah, Nvidia will sell them at the price the market will bear. Or in other words, if Nvidia thinks that they can sell a GPU at a higher price, they will.

the PC industry literally cannot survive unless you give Nvidia a thousand dollars

yields n sheit

>Nvidia can't afford to make or sell full chips at a reasonable price anymore

Attached: image004-1-e1518538751669.png (500x298, 67K)

Transistor count > die size

Attached: 1532148461633.gif (480x360, 971K)

>CUDA cores
Unicorns aren't real, user

I miss 2010-2012's cheap GPUs.

More like AMD's lack of budget, caused by nvidiots funding the inferior company

>HD 5870 destroyed the Nvidia cards.
>most gaymers bought the GTX 480 housefire anyway.

Do you unironically think that the cost of the silicon in the die is actually meaningful to the final cost?

It's not the cost of the silicon but the cost of manufacturing a die of a certain size. A small die means high yields and is therefore cheap.

The geometric yield increase from smaller dies is not significant compared to the costs of the later process steps; wafers aren't free, but they're not the decisive cost.

a full 12nm wafer is around $10,000, so do the math, imbecile

>yield increase from smaller dies is not significant
it is, your argument is invalid

okay, now measure the size of the transistors used.
the costs of smaller processes are generally much higher.

What the fuck
Is this image saying that their income is through the roof while expenses barely grew at all? Making profit margins explode? Or am I a brainlet?

>Gaymers got buttfucked by AMD over and over with housefires and no drivers
>Expect them to be loyal drones as soon as AMD gets better

>things that never happened

Consumers don't care about die size. Consumers care about price/performance.

Using this site caly-technologies.com/en/die-yield-calculator/
A 70mm^2 die on a 300mm wafer at 0.1 defects/cm^2 yields 789 good dies per wafer; a 471mm^2 die yields 70, roughly a tenfold decrease. Sounds terrible, but then again a 300mm wafer costs about $400 according to semi.org/en/node/50856 (a 4-year-old article by now), which means the 471mm^2 die ends up costing about $5 of wafer and the 70mm^2 die about 50 cents, so not particularly significant.
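
If you want to sanity-check that yield math, here's a rough sketch in Python using the standard dies-per-wafer approximation and a simple Poisson yield model. The linked calculator uses a more detailed model (rectangular dies, scribe lines, edge exclusion), so the counts won't line up exactly, and the $400 figure the post takes from semi.org looks like a raw-silicon wafer price rather than a processed-wafer price (other posts in this thread put processed wafers in the thousands).

```python
import math

def good_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0, defect_density_per_cm2=0.1):
    """Rough estimate of usable dies per wafer.

    Gross dies use the common approximation
        DPW = pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
    and yield uses a simple Poisson model, Y = exp(-D0 * A), with A in cm^2.
    """
    radius = wafer_diameter_mm / 2.0
    gross = (math.pi * radius ** 2 / die_area_mm2
             - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))
    yield_fraction = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)  # mm^2 -> cm^2
    return gross * yield_fraction

WAFER_COST_USD = 400  # raw 300 mm wafer price from the semi.org article above

for area_mm2 in (70, 471):
    dies = good_dies_per_wafer(area_mm2)
    print(f"{area_mm2:>3} mm^2: ~{dies:.0f} good dies/wafer, "
          f"~${WAFER_COST_USD / dies:.2f} of wafer per die")
```

With these formulas you get roughly 870 and 74 good dies, i.e. about 50 cents and a bit over $5 of wafer per die, the same ballpark as the calculator's numbers.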

Even ignoring the fact that the GTX 580 is a bigger chip than the Titan Xp, which isn't even the full chip (GF110 vs GP102):
~65% increase in node cost from 40nm to 16nm, but a ~300% increase in price for the flagship GPU?

Attached: Node costs.jpg (929x593, 82K)
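
To put rough numbers on that comparison, here's a quick back-of-the-envelope in Python. The 65% node cost increase is read off the chart above; the launch MSRPs (GTX 580 at $499, Titan Xp at $1,200) are from memory, so treat them as assumptions and swap in whichever flagship you think is the fair comparison.

```python
# Back-of-the-envelope: node cost scaling vs flagship price scaling.
# The ~65% node cost increase (40nm -> 16nm) is taken from the chart above.
# Launch MSRPs are assumptions from memory: GTX 580 (GF110, 40nm) at $499,
# Titan Xp (GP102, 16nm) at $1,200. Swap in other cards as you see fit.
node_cost_increase = 0.65

flagship_msrp = {
    "GTX 580 (40nm)": 499,
    "Titan Xp (16nm)": 1200,
}

price_increase = flagship_msrp["Titan Xp (16nm)"] / flagship_msrp["GTX 580 (40nm)"] - 1

print(f"node cost:      +{node_cost_increase:.0%}")
print(f"flagship price: +{price_increase:.0%}")  # ~+140% with these MSRPs
```

With these two cards the price increase comes out closer to +140% than +300%; the exact multiple depends on which SKUs and launch prices you pick, but either way the flagship price has grown much faster than the quoted per-node cost increase.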

Still beats AMD's largest dies and expensive memories.

t. I owned an HD 5850 and the driver regularly crashed while I played games. Never had this problem with GeForce drivers.

Not a single 12nm GPU is listed by the OP.
14/16nm, and especially 28nm, were somewhat cheaper.

Exactly, fucking Jow Forums only cares about bus width and RAM; no one in the real world does. I don't care if it has a 32-bit bus and 64 megs of RAM; if I can play AAA titles at 100fps, do whatever works.

Wafer costs for 14nm were about $4,000, and 7nm was expected to be somewhere between $6k and $10k. If I remember correctly, that is.
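
Taking those wafer prices at face value, here's what they work out to per mm^2 of wafer area. The prices are the rough recollections from the post above, not confirmed figures, and this ignores yield, edge exclusion and scribe lines, so real per-die costs would be higher.

```python
import math

WAFER_DIAMETER_MM = 300
wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

# Assumed processed-wafer prices, as recalled in the post above.
assumed_wafer_prices = {"14nm": 4000, "7nm (low)": 6000, "7nm (high)": 10000}

for node, price in assumed_wafer_prices.items():
    per_mm2 = price / wafer_area_mm2
    # What a 471 mm^2 die (GP102-sized) would cost in raw wafer area alone.
    print(f"{node:>10}: ~${per_mm2:.3f}/mm^2, ~${per_mm2 * 471:.0f} of wafer for a 471 mm^2 die")
```

Even before yield losses on a die that big, that's roughly $27 to $67 of wafer per die, an order of magnitude more than the ~$5 you get from the $400 raw-wafer figure earlier in the thread.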

People with jobs (in some fields) do actually care. That said there's no such person on Jow Forums.

>largest dies
471mm^2 (GP102) vs 484mm^2 (Vega 10)
>expensive memories.
$1,200/$699 (Titan Xp / 1080 Ti) vs $499 (Vega 64)

>Still beats AMD
You'd hope so.

1080Ti > Vega 64

That's represented by the numbers, yes.

A tip: the average Jow Forums user isn't that bright, so be careful and label everything as specifically as possible so they have fewer chances to fuck up. Thanks.

Hey guys, remember when a high-end GPU was less than $500?

remember when high-end luxury cars were less than $400,000?

All cars today are totally a ripoff

capitalism is never right.

>All cars today are totally a ripoff
This is, in fact, a true statement. There have been no functional improvements in cars since the mid-2000s, but prices have increased every year regardless.
The car bubble is going to pop eventually, like the housing bubble did a few years ago.

Attached: 1490042646578.png (639x349, 158K)