$379

>$379
It's fucking dead.

Attached: 1543702538578.png (1024x576, 462K)

Other urls found in this thread:

97da0272-a-62cb3a1a-s-sites.googlegroups.com/site/raytracingcourse/Hardware Architecture for Incoherent Ray Tracing.pdf?attachauth=ANoY7colXV61WeV6ToF_8oY3bJMDgw7I7WzwTbMZMf_LW7K9B_27EK31KKW6C9te_d2vAWMdXvEqRYn-DVVm7T55lda48Wxpi1bC4fBast5HeA3hD5xQQkPRy_9meyw9yg1W1ULiuzhlU4OtiVyvLD1oDmGhi0K-EBGcYEBVl5O_bPCaUA27ZcB5KL6ucnxRTBIVXhpS0Xs9NK3tflWKWdWPnFZol2lt82E3hylFfHlFB7nmdpnGDUvAEd7kk9g4KggDvhIxYbkfCSxk3uc0qSgh3_ffKboyRA==&attredirects=0
sites.google.com/site/raytracingcourse/
techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_Founders_Edition/33.html
tomshardware.com/reviews/nvidia-geforce-rtx-2080-ti-founders-edition,5805-11.html
tomshardware.com/reviews/nvidia-geforce-gtx-1080-ti,4972-6.html
techspot.com/article/1793-metro-exodus-ray-tracing-benchmark/
techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
pcworld.com/article/3400866/microsoft-next-gen-xbox-project-scarlett-console-reveal.html
youtube.com/watch?v=1nqhkDm2_Tw
tomshardware.com/reviews/nvidia-turing-gpu-architecture-explored,5801-10.html
twitter.com/SebAaltonen/status/1041631227168600064

They needed something for the $180-300 price bracket to replace the 580 and 590

not even. the 5700 at $300 would be a much better value than at $379. throw in the 5700 XT for $400 and they'd see way more sales.

Really though, they could have owned the market if they'd been mad enough lads and done $250/$350. Shame.

What's the point? Just beat Nvidia by 10%

Radeon needs a 5800, 5800 XT, and 5900 to compete with the GTX 1080, 1080 Ti, and Titan

>they could have owned the market by undercutting
KEK What do you think AMD have been doing the past 10 years?

2060/2070/2080.

>Most popular games
I don't see Counter-Strike there. All I see is deadwatch.

Anyone else have driver problems with Radeon on Linux? I put an RX 550 in my HTPC just to play emulators and simple shit and it sucks major ass; I get barely 2-3x the performance of the GT 210 I had in there before. OS is Xubuntu 18

Beating nvidia by 10% is cool but that won't last for 6 months when Nvidia refreshes their lineup.

The point is they could do better. They need to push harder and try to corner the market. PC enthusiasts already feel more inclined to buy GeForce products based on Nvidia's heavy market presence and apparent technology leadership.

Almost anyone who knows the recent history of the gpu market knows AMD is trying to make up for lost time. People don't trust them to make an impactful product so they aren't going to look at an AMD card when their 5 friends all run RTX cards and the refresh is coming soon.

Now if they pulled off some Ryzen 3000 shit with their GPUs and undeniably beat the competition at an obviously lower cost, and with confidence, that would make a shift. But this is not it.

For 1080p 144Hz, should I get an RX 590 now or a 5700 later?

Let's see the Fortnite performance, since normies actually play that.

Prediction: It's going to be a "Meh, OK, not incredible" purchase for all of the time it takes for NVIDIA to drop SUPER, at which point it's going to become irrelevant, like Vega.

My understanding is that production for Ryzen 3000 is something of a miracle for AMD. Their graphics division probably just doesn't have the R&D to come out with a Ryzen-like answer to Nvidia. If APUs are able to do just about everything ten years from now, AMD will probably be better positioned.

1660 is a better buy than the 590 all around, so I'd scratch that off even as a potentiality.

Counter-Strike runs on potato batteries, dude. You don't need to see a Counter-Strike benchmark.

>What do you think AMD have been doing the past 10 years?
Having their price competitiveness negated by literally illegal activity on Intel's part.

>playing on maxed out cs go

faggot spotted

The CPU market with Intel and the graphics card market with NVIDIA aren't really the same. Up until Ryzen, Intel had essentially stagnated on purpose and then repeatedly failed to get their 10nm process ready. AMD had a lot of time on the CPU side of things to catch up and then to jump ahead of Intel due to their catastrophic 10nm issues. Going up against NVIDIA isn't really the same thing. Despite being in the lead for a pretty long time now, NVIDIA's products have never really stagnated. They've constantly improved performance with each generation; Pascal was a huge leap, and AMD was unable to match it in terms of efficiency and performance at the high end when it released.

NVIDIA did not release (the equivalent of) 5% improved quad cores each year like Intel did, so AMD can't really just come up with a good design and leapfrog ahead like they did with Ryzen, because NVIDIA has been constantly raising the bar instead of sitting on their ass for half a decade like Intel. NVIDIA is a shit company and greedy as fuck, even more so with RTX/Turing, but you can generally tell they're ready and able to keep their position intact by how they always have something prepared to respond to anything AMD does, like the way they've had this Super shit ready to go in order to cut off any momentum AMD may have had.

Used Vega 56.

I'm starting to wonder whether this is a deliberate move by AMD just to have something on the shelves without actually affecting the schedule of other products. Think about it: Zen has been massively successful, and companies as well as consumers are lining up to get their hands on them. Add to that the new batch of hardware security flaws affecting Intel CPUs causing a sharp increase in demand for server parts. At the same time both Microsoft and Sony are releasing new consoles, both of which AMD have committed to delivering millions of custom chips for. Virtually all of this is on the 7nm node, which the major fabs are still getting up to speed with. In what universe does it make sense to use that limited capacity on shitty margins when people end up buying nvidia anyway? Just look at the tragic story of the RX 570 and how it was massively outsold by the vastly inferior 1050 and 1050 Ti.

Pretty much this.
AMD already laid out the architecture for the consoles and its APUs. It was trivial to adapt it to a desktop graphics card. Why would they sell GPUs in a price war with Nvidia when they could be fabbing more Rome CPUs with HUGE margins with that same capacity?

AMD GPUs are $100 cheaper and slightly more powerful than the green team's competition, but they don't have the barely used real-time ray tracing meme or a card that costs well over $1000, so AMD has fucked up as far as you're concerned. Hey, if that's your opinion that's your opinion, but you might as well buy a Mac if you're looking to buy a computer based on marketing.

I thought that was like your budget for going to Starbucks for one week?
t. third world user.

Should I get a Sapphire Vega 56 for $300 or a ??? 5700 for $70 more?

I mean, isn't the price due to them using 7nm? EUV was postponed to 7nm+ and I read production on 7nm is 3x the price of 14nm. I would guess it will come down significantly in price over time, especially since it's only like 250mm².

This. AMD *wants* to fail. It's all part of the plan. It has to go down to go up. Its defeat is its strength. By surrendering the market, they ultimately win the market. To lose is to succeed. White is black, and yes is no. Have faith comrades.

It can only be a deliberate move in the sense that they focused their limited resources into more successful areas, as in CPUs, enterprise and console hardware. Then they take the hardware which has mostly been designed for something else (enterprise compute in the case of Radeon VII, consoles in the case of Navi) and spit out a "gaming" variant as well so they don't lose their presence in the market entirely.

Counter-Strike doesn't need more than 1080p because then character models get too small to see.
Most crazies still play below 1080p so that they get bigger character models to see and shoot.

wait for benchmarks or get a used Vega for around $200

>$379
im sorry but that is going to stay the price for the next 2 years. ayymd will sell only 50 of them to jack up the prices

Nah, its more like lunch budget for 5 or 6 weeks

I just want a high refresh rate 1080p monitor. A 1440p monitor would have to be a really big boy just to be worth getting, otherwise you're just scaling your shit up and defeating the point of it.

Attached: 1490886379353.jpg (461x426, 46K)

that doesn't make sense.

boomers use the same settings they've used for years and literally make shit up to convince people that it's the best way basically

it's my food and drink budget for 7 weeks.
t. poor US native

Try amdgpu instead of radeon. A 550 is pretty low end though, I wouldn't expect wonders

i've played cs on different resolutions and i wouldn't say it's easier or harder to play at 1080p. there might be a few people out there who think and say certain things, who knows.

Almost nobody plays CS:GO at 1080p, let alone 1440p

Is it significantly more powerful than my 1070?

Yes.

So then it's worth the money seeing as 1070s still go for about the same price

Never spend more than $150 on a video card

Attached: guybrush-and-elane-never-pay-more-than-20-bucks[1].jpg (810x617, 205K)

they undercut intel with zen, but navi is the same price as nvidia offerings with only slightly more performance.

Enjoy paying 150 dollars a year instead of about 500 every 5 or 6 years

I paid $300 for my 970 and it's lasted me 5 years
paying more than $500 for a card is fucking stupid though

Yes, AMD is going for low-volume, high-margin GPUs this time. Plus, with Turding having low margins cuz of their die size, AMD might pull ngreedia into a price war that nvidia can't win. Plus nvidia's retarded antics have made most of the tech companies, if not all, dislike them (with TSMC being the latest victim to put up with nvidia's manchild behavior), which has allowed AMD's tentacles to spread everywhere. There's a joke in certain tech circles that nvidia might end up with only the goymer GPU market.

>he pays out the ass for the most overpriced hardware out there hoping that it wouldn't die in the next half a decade as it's being surpassed in performance by budget cards

Attached: 1463847264219.jpg (451x359, 20K)

The 1070 I bought forever ago is just now being surpassed, and not even by budget cards; the 20 series wasn't even enough to make me consider swapping. And I get very lucky with hardware, never got a DOA part

>plus Turding having low margins cuz of their die size
I somehow doubt it considering how astronomical RTX prices are.

idk, maybe they're trying to have shitty pricing on purpose? IDK who's actually going to buy these cards. Steam has more users on fucking Sandy Bridge integrated graphics than people who bought a Vega GPU, while on the other hand the 570 and 580 are the only cards in the top 20 made by AMD. They need some way to improve market share because nvidia is going to overrun them at some point.

idk, or they're just okay with having a low-volume GPU sold at a higher profit margin just for the few sales it makes. I don't see them being stupid enough to actually be losing money, but at the same time how much money are they really getting back from all the time spent taping out and designing a desktop GPU, running simulations, etc.? Like they could just leave the dedicated GPU market and focus on custom designs and APUs. And it's not like Navi is ever going to be used for compute cards.

I just want a 5600 non-XT with low TDP that brings 1660 Ti performance to use as an eGPU. I'd pay $250 for that.

I bought a 1080Ti back early last year and I expect to keep it for the next 9 years.

I assume they made Navi for consoles and the PC cards are more like a token effort to keep their seat warm in the GPU market, so to speak.

Even if you bought it at launch, it's not even 3 years old and it's been totally demolished in performance/price ratio. Kill yourself mongoloid.

By what? I have never, in the time I've owned it, felt unhappy with it, and I still don't. I could easily go another two years with it if I want, probably longer. Nothing that has come out has a big enough performance boost to justify spending the money to replace this card.

He's retarded. He mixed up aspect ratios and resolutions; a lot of pro players use 4:3 stretched to 16:9 because everyone looks wider. You don't have to use a "low" resolution to get this, you can just set it to 1440x1080 and stretch it to 1920x1080.
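
Rough math on what stretching does, just a sketch with the two resolutions named above (nothing else assumed):

# 1440x1080 is a 4:3 resolution; stretching it over a 16:9 panel
# widens everything horizontally without losing vertical detail.
panel = (1920, 1080)    # 16:9 monitor
render = (1440, 1080)   # 4:3 render resolution from the post above

print(f"render aspect {render[0] / render[1]:.2f}, panel aspect {panel[0] / panel[1]:.2f}")
print(f"models appear {panel[0] / render[0]:.2f}x wider when stretched")  # ~1.33x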

From what we know so far, unlikely. It sounds like the Xbox will get a customized variant of navi that will probably contain goodies going into the next gen, e.g. hardware raytracing. So in 2020 it'll be a sort of Navi 1.5 or Navi 1b. Meanwhile PC will probably get a full blown Navi 2 with a few more changes.

AMD can't just give up on PC gaming or developing new hardware. Vega in the end worked out for them because they shipped so many mining cards. And there's a chance they may crawl something back with Navi.

That said, I'm pretty sure AMD didn't move off GCN for a long time because they didn't want to maintain two different GPU uarchs. The PS4 Pro GPU turned out to have proto-Vega features, meaning Vega's changes were probably propagated from changes made for the PS4. That tells me they were using changes from consoles for their main releases, which suggests they didn't want to diverge from the console designs, probably because of cost. But at the same time that kind of locked them down. It's probably not a coincidence that RDNA is coming out as consoles loom. Hopefully, since the CPU side is in better shape now, they can take on the risk of maintaining a slower-moving branch of GPUs for consoles and a faster-moving one for PC.

But this talk of a 64-wide wavefront compatibility mode doesn't sound good to me, because it sounds like they want RDNA to have instruction-level backwards compatibility even though they indicated that the ISA has changed radically for the new RDNA 32-wide wavefront mode. It sounds like they did it so consoles, which are programmed at a lower level, can maintain backwards compatibility. Good for consoles, not so sure about the future of AMD GPUs.

They mentioned RDNA 1 was a hybrid of GCN and a new architecture. We'll have to see what they do with RDNA2, if it's completely free of GCN legacy then that's a good sign that AMD is ready to be competitive with Nvidia again. Because the yearly GCN changes were a joke

lmfao they compare it to a 2060 and not a 1070 nice joke AMD.

It's just a stopgap to please the shareholders. Real Navi is coming next year.

So it's even better than the 1070 ti

Attached: relative-performance_1920-1080.png (500x1010, 52K)

>They mentioned RDNA 1 was a hybrid of GCN and a new architecture. We'll have to see what they do with RDNA2, if it's completely free of GCN legacy then that's a good sign that AMD is ready to be competitive with Nvidia again. Because the yearly GCN changes were a joke

continued. The fact that they said RDNA 1 is a hybrid is probably a good sign. It makes it sound like they're ready to hop off GCN for good on future PC GPUs, whereas consoles would probably demand GCN compatibility. Some games like The Tomorrow Children were so low level the devs invented async compute using PS4's "barebones API" that was very close to metal (as described by the Metro Redux devs). So the less we see of GCN the brighter AMD's future looks. Hopefully Raja didn't fucking sabotage them to prop up Intel and their roadmap will look good. Because I'm pretty sure David Wang joined way too late to change the roadmap for the upcoming 2 years

Why don't they just play on TVs? FPSs are a lot more playable now because everyone is so big on my screen.

Isn't the biggest change in RDNA the compute unit?

>my 580 has 4 CUs reserved for a feature I'll never use
wtf I hate amd now

Attached: RX580.png (904x561, 157K)

I doubt consoles will have hardware raytracing like what NVIDIA did with Turing. That would mean dedicated die space and power allotment for hardware which will not even be used for all games and even when it's used the visual difference may not be very large since it will only be used for certain effects, as RTX has already showed us. Having dedicated hardware like that in a console sounds very inefficient. I find it much more likely that there will be no special hardware and if any raytracing is done it will be done in similar ways to what Crytek have demoed and what AMD themselves have already claimed, namely that their GPUs can do it on their shader cores (which NVIDIA GPUs can do as well, of course).

This way they won't have potentially useless hardware inside the console and the devs who really want to get extra fancy should probably have enough performance available to implement some RT effects if they target 30FPS and don't go too crazy with the rendering resolution. Maybe we'll see dedicated hardware in a next-gen refresh, but I doubt it for the initial release.

KILL ALL TRIPFAGS REEEEEEEE

Aren't the CEOs of AMD and NVIDIA related? So how long until we get a whistleblower for their price fixing scandal.

SWITCH TACTICS NOW!
BLUE VS RED TEAM! AMDRONES NVIDIOTS MOMMY SU

I'm pretty sure the 4-cycle latency was a direct consequence of the GCN ISA calling for 64-wide wavefronts. So it seems like the ISA has changed, which AMD themselves seems to corroborate by calling this a new era. The last being GCN and the previous being VLIW. So while they may carry over some design elements like the 4 shader engine thing everything is virtually new on the inside. I think we'll find out more with RDNA2.

Like mentioned above AMD called RDNA 1 a hybrid, and to me they would only use that kind of language if they literally mean there are two distinct architectures that are being blended here. And I think they partially did it to use as a waypoint to the next gen consoles as well as to the next true PC architecture. And also because they have to release SOMETHING right now. They can't just let Nvidia recover from RTX which wasn't doing so hot.

Reddit analyzed TU106 and TU116 (one has RT cores, the other doesn't). IIRC the RT cores only added like 8% more area to the SMs. Tensor cores added like 10-12%, or about 20% altogether. But that's just the core part of the GPU; overall it only amounted to like 10% of the total die. So I think RT cores are actually pretty lean and Nvidia was probably smart to get the drop on AMD. The tensor cores, not so sure about. They don't even use them in games except for DLSS. The raytrace denoising is done using temporal denoising on the shaders in every game that has RTX, IIRC.
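
Back-of-the-envelope version of those numbers, a sketch that just takes the percentages in this post at face value; the 50% SM-to-die ratio is my own assumption, not from the Reddit analysis:

# Per-SM area overhead quoted above (TU106 vs TU116 comparison).
rt_per_sm = 0.08        # RT cores: ~8% extra SM area
tensor_per_sm = 0.11    # tensor cores: ~10-12%, take the middle
combined = rt_per_sm + tensor_per_sm

# Assumption: the SM array is roughly half the die, the rest being
# memory controllers, display/video blocks, I/O, etc.
sm_fraction = 0.5
print(f"combined SM overhead: ~{combined:.0%}")                 # ~19%
print(f"whole-die overhead:   ~{combined * sm_fraction:.0%}")   # ~10%, matching the post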

>namely that their GPUs can do it on their shader cores
That would be cheap. And the leak said raytracing hardware.

>NVIDIA GPUs can do as well
I would hope so. They invented the basis of the technique Crytek was demoing.

I think consoles would want to have it, even if it's just for the 1st party titles. Think about how the PS3 had the Cell's SPEs; now that was a waste of space. IIRC next to no one used them, so they just sat there even though 1st party devs could do great things with them. If AMD can provide a dev kit for their hardware raytracing like Nvidia did, and like DXR and Vulkan do, then I'm sure more devs would be happy to pick it up, since nvidia has been touting "it just works"

NVIDIA's CEO Jensen Huang is a relative of AMD's CEO Lisa Su. They are related by blood. Jensen also worked for AMD in the early 90s as a microprocessor designer. It's pretty interesting that the biggest GPU manufacturers on the market are led by members of the same Taiwanese family.

>ipic conquring domination graphcis card with epic FOUR TEEN FOURTY gaymin
who the fuck is bragging about and getting hyped for sub 2k pleb garbage in 2020-0.5?
This shit is more cringe than laptops trying to brag about a 1050.

Ayymd has gone jewry

Attached: 1384116785181.jpg (720x720, 171K)

97da0272-a-62cb3a1a-s-sites.googlegroups.com/site/raytracingcourse/Hardware Architecture for Incoherent Ray Tracing.pdf?attachauth=ANoY7colXV61WeV6ToF_8oY3bJMDgw7I7WzwTbMZMf_LW7K9B_27EK31KKW6C9te_d2vAWMdXvEqRYn-DVVm7T55lda48Wxpi1bC4fBast5HeA3hD5xQQkPRy_9meyw9yg1W1ULiuzhlU4OtiVyvLD1oDmGhi0K-EBGcYEBVl5O_bPCaUA27ZcB5KL6ucnxRTBIVXhpS0Xs9NK3tflWKWdWPnFZol2lt82E3hylFfHlFB7nmdpnGDUvAEd7kk9g4KggDvhIxYbkfCSxk3uc0qSgh3_ffKboyRA==&attredirects=0

Samsung built raytracing hardware and presented the future of raytracing at SIGGRAPH 2013 alongside nvidia and other companies, and AMD and Samsung have signed an agreement for Samsung to use RDNA in mobile.

sites.google.com/site/raytracingcourse/

Which leak said hardware raytracing? All I remember was that there would be raytracing support.

>Tensor cores added like 10-12%, or about 20% altogether.
Are you sure? 1080 Ti has a 471mm^2 die on 16nm for ~3500 shaders. 2080 Ti has a 754mm^2 die on 12nm for ~4300 shaders. 2080 Ti is 60% bigger by area, on a more advanced process, for only ~800 extra shaders. There's a very large difference in die size for only a few extra shaders, so what is taking up all the space if not the RTX hardware? Are Turing shaders so much bigger than Pascal?
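
Worked out explicitly with the published specs (GP102: 3584 shaders, 471mm²; TU102 as cut for the 2080 Ti: 4352 shaders, 754mm²):

# GTX 1080 Ti (GP102, 16nm) vs RTX 2080 Ti (TU102, 12nm)
pascal_area, pascal_shaders = 471, 3584
turing_area, turing_shaders = 754, 4352

print(f"die area:  +{turing_area / pascal_area - 1:.0%}")        # ~60%
print(f"shaders:   +{turing_shaders / pascal_shaders - 1:.0%}")  # ~21%

per_shader = (turing_area / turing_shaders) / (pascal_area / pascal_shaders)
print(f"area per shader grew ~{per_shader:.2f}x")  # ~1.32x, the gap being asked about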

Also, let's not forget, the mammoth 2080Ti GPU still offers only fairly poor RT performance, despite being hardware accelerated. It also comes in at a 250W TDP in order to offer that performance. I absolutely do not see a console GPU being able to integrate sufficient RT hardware to offer decent performance, especially not in the much lower power target it will certainly have. I can't see how AMD will fit hardware RT in a GPU which would have to be both fairly small in order to keep yields up as well as work in a very tight power budget.

The currently fastest console, the Xbone X uses what, like 170W at the wall? I do not expect next-gen consoles to really go way above 200W anyway, as it becomes a problem to cool in a sleek, presumably quiet box. It will probably have 8 Zen 2 cores to contend with as well, though those probably will be quite efficient and won't run at very high clocks. Maybe they'll have ~150W available for the GPU, I can't see how that will power sufficient hardware for any decent amount of hardware raytracing, especially not at the resolutions (4K, 8K) and FPS (60FPS, up to 120) numbers being thrown around, or even why they'd waste anything on such a niche feature.

Hardware acceleration may come later, at a point where it can be had in a small GPU, with a low TDP. I can't see it right now unless AMD's implementation is somehow vastly superior to NVIDIA's.

>There's a very large difference in die size for only a few extra shaders, so what is taking up all the space if not the RTX hardware? Are Turing shaders so much bigger than Pascal?
They expanded the cache. Turing is almost 40% faster than the 1080TI in the best case despite only about 21% more shaders so it's faster per core.

techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_Founders_Edition/33.html

They're about the same clockspeed on paper as well. So probably no gains on frequency.

tomshardware.com/reviews/nvidia-geforce-rtx-2080-ti-founders-edition,5805-11.html

tomshardware.com/reviews/nvidia-geforce-gtx-1080-ti,4972-6.html

They look pretty similar in practice too even with the automatic OC. So there were definitely changes to the core. That's like 18% over Pascal, not small stuff. It's only a little less than the RDNA 1 gain over GCN.
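
Rough math on that, a sketch using just the two figures quoted in this post; the exact per-shader number depends on what "almost 40%" really is, so try a small range:

# How much of the 2080 Ti's gain over the 1080 Ti is per-shader rather than
# just from having more shaders (clocks are about the same on paper).
shader_ratio = 4352 / 3584              # ~1.21x more shaders

for speedup in (0.40, 0.43):            # "almost 40% faster" in the best case
    per_shader = (1 + speedup) / shader_ratio - 1
    print(f"+{speedup:.0%} overall -> ~+{per_shader:.0%} per shader")
# lands around 15-18% per shader over Pascal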

>mammoth 2080Ti GPU still offers only fairly poor RT performance
Good enough for Metro's GI, which looks fantastic and is probably the biggest display of RTX except for Quake II.

techspot.com/article/1793-metro-exodus-ray-tracing-benchmark/

The benchmark, which techspot calls the "worst case scenario" is still about 30fps on 4k. Consoles are okay with 60fps and I doubt they'll do 4k anyway.

techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html

TPU tested it with DLSS and even at "4k" (we know it's not but whatever) it runs great. Not sure if it's the benchmark "worst case" or just random samples of the game.

And the 2080TI is just a big fat GPU. The 1080TI consumes almost as much even without RTX while running significantly slower. So I don't think the RT cores are eating nearly as much power as you're suggesting.

>I can't see how AMD will fit hardware RT in a GPU which would have to be both fairly small in order to keep yields up as well as work in a very tight power budget.
Just have more raytracing cores.

>I can't see how AMD will fit hardware RT in a GPU which would have to be both fairly small in order to keep yields up as well as work in a very tight power budget.

I don't think nvidia is all in on it yet. They're waiting for AMD to do their part before they go on a wild goose chase, although Nvidia has a tendency to see those through anyway.

>especially not at the resolutions (4K, 8K) and FPS (60FPS, up to 120) numbers being thrown around
They've been saying this stuff for ages. It's just for show. MS was probably talking about HDMI 2.1 support when talking about 8k 120hz. Even a 1080TI can't reliably play AAA games at 60fps at 4k so there's no chance a console can. Most likely they'll target 1080p or 1440p which is already enough for the 2080TI. They just need beefier raytracing hardware. Look at the 2070 in the two reviews for Metro. The 175W navi 5700 should be within earshot of the 2070. It's almost at 60fps with rtx high at 1080p. Half a gen of customizations for Navi, bigger raytracing cores, and lower level console programming magic and it might hit 60fps. Lower settings like consoles do and it's probably a sure thing, on average.

I think it's possible

>Which leak said hardware raytracing?
pcworld.com/article/3400866/microsoft-next-gen-xbox-project-scarlett-console-reveal.html
>And yes, it will support adaptive sync variable refresh rate technology as well as hardware-accelerated ray tracing

>Good enough for Metro's GI
On a 250W GPU, compared to a console which is probably going to be ~150-200W at the wall
Also on a GPU which is vastly larger than anything which will likely go in a console nowadays
And also on a GPU which comes on a card which alone costs more than double the typical console price

>I doubt they'll do 4k anyway.
They probably will, since 4K TVs are common now and with the ~10TFLOPS these are supposed to have it's plausible for them to hit 4K with good graphics, at 30FPS at the very least.

>TPU tested it with DLSS and even at "4k" (we know it's not but whatever) it runs great
"4K" DLSS is pretty much equivalent to 2560x1440, so it's actually very far away from actual 4K.

>Just have more raytracing cores.
Those take up space and use power. Navi does not seem to be more efficient than Turing, despite being 7nm. There is absolutely no fucking way AMD can squeeze 2080Ti-tier performance into a console.

Jesus fucking christ, dude.

Attached: Chad_coming_through.webm (640x360, 2.58M)

I would give this theory more weight if it wasn't theoretically shaping up to be a decent card, just at an absurd price according to my limited budget, which covered a used RX 580 last year.
I want to bust out of the bottom tier of used PC gaymen parts, I really do.
I would love to buy a mid-range card at launch but they've gone and priced these where the high end was before the Bitcoin explosion. My budget has increased slightly since then but my experience remains as grounded as it was back then.

They should do 144hz 1080p on next gen consoles

>Having dedicated hardware like that in a console sounds very inefficient.
That sort of dedicated hardware is what makes the limited ray tracing known as RTX actually work. I don't doubt something vaguely similar could be accomplished with conventional hardware, but it's not going to be nearly as fast.


The next gen consoles aren't going to be targeting 1080p at 30FPS and they're not going to put $500 of graphics hardware in them, so I wouldn't anticipate them having tons of unused power. I'd be incredibly surprised if fewer than 95% of the AAA titles take advantage of the hardware if it is available on the next gen of consoles. Similarly I doubt much more than 5% of games will use it if it's software only because they'd have to take the processing power from texture detail or FPS.

I'm not sure if they'll do this, but I'd love it if AMD put real time ray tracing on a separate die that shares the VRAM and RAMDAC with the GPU. They've had a lot of success cobbling together CPUs out of multiple chiplets. I don't see why they can't do something similar with a ray tracing pipeline. It's certainly a lot of work to design the hardware even if it's on it's own die, and then they've got to write the drivers, but I can't think of a reason they couldn't pull it off. I don't know if they'd risk it without a console deal bankrolling it, but that basically guarantees they're going to move tons of them, so it's worth the investment. Once they've gotten it working for their console deal they'll be able to add it to their current graphics cards without much further expense.

it's not dead at all

according to the slide, AMD is 7% faster on SotTR
which means that somehow they managed to find a fucking 83% jump from the 580 to the 5700 XT

even if it's expensive, such a jump for what basically is a stripped-down GCN is incredible

Attached: images_t2_mujhk_0llpysgm7k431.png (766x656, 23K)

>hardware raytracing
Absolutely not. The hardware for the 2020 consoles is already finalized and set in stone. Probably has been for about a year.
I think the point of RT being technically capable is that the Xbox GPU is (apparently) a 4000-shader part, and a sizeable chunk of those cores will be utilized for "software" or shader effects. Navi-based GPUs can more effectively process 16-bit ops, which I'm almost certain is going to be the target precision for whatever raytrace process they can implement.
So lets assume that a game is going to use half the gpu for rendering and the other half for shader effects and raytracing, if not running at 4k. The other 2k cores should be more than enough for at least a passable use of raytracing.

Let's not forget that consoles have customized APIs and other hardware-specific "hacks" and optimizations - it typically takes twice the GPU power for a PC to reach the same quality as a console.

Looking forward, you will need whatever the equivalent of an 8000-shader Navi at 1.3GHz is for your PC to be as good as the 2020 Xbox at the same settings.
/v/ is going to shit on the pc master race for a long time simply comparing the price of graphics cards to consoles.

maybe AMD has RT cores in development and they will be on a separate chip as an RT accelerator? Who bloody knows, too many possibilities here.

Tripfags' website.

>Absolutely not. The hardware for the 2020 consoles is already finalized and set in stone. Probably has been for about a year.

DXR is only ONE way of doing ray tracing

don't forget that Crytek did this youtube.com/watch?v=1nqhkDm2_Tw with a Vega 56

B A S E D
A
S
E
D

Please, more.

>So lets assume that a game is going to use half the gpu for rendering and the other half for shader effects and raytracing, if not running at 4k. The other 2k cores should be more than enough for at least a passable use of raytracing.
They're not, not even remotely close. The throughput needed for raytracing is an order of magnitude different from conventional shading. The RT cores on the 2080ti are collectively capable of about 100 TFlops. Half of something isn't even worth considering.

tomshardware.com/reviews/nvidia-turing-gpu-architecture-explored,5801-10.html

2080ti is capable of 10 gigarays/s
10 teraflops per 1 gigaray/s
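
Putting those two marketing figures together, a sketch of where the "~100 TFlops" equivalence above comes from, plus what half of a ~10 TFLOPS console GPU would buy by the same yardstick (the 10 TFLOPS console figure is just the number floated earlier in the thread):

# Nvidia's Turing launch figures: ~10 Gigarays/s on the 2080 Ti, and roughly
# 10 TFLOPS of conventional shader compute needed per Gigaray/s in software.
gigarays_2080ti = 10.0
tflops_per_gigaray = 10.0
print(f"2080 Ti RT cores ~= {gigarays_2080ti * tflops_per_gigaray:.0f} TFLOPS of shader raytracing")

# Hypothetical console: ~10 TFLOPS total, half of it given to raytracing
# (the 50/50 split assumed above).
console_rt_tflops = 10.0 / 2
print(f"half a console GPU ~= {console_rt_tflops / tflops_per_gigaray:.1f} Gigarays/s by the same metric")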

Voxel cone tracing, which is what they're doing there, is an nvidia technique from 2011. That's not even worth calling raytracing anymore when nvidia's doing per-vertex raytracing

>no real tests yet
>no in-store prices yet

yawn, call me when it's actually released

Attached: 31766420_p0.jpg (600x769, 60K)

Three weeks to go

>What do you think AMD have been doing the past 10 years?
doing the same shit they're doing with Navi: 5-10% below on price, 5-10% ahead in select titles for performance
their tactic is shit. they went from 45% market share to 15% in 5 years with decent hardware.
the "how to kill your $5B division purchase in 10 years" plan
Nvidia murdered them with the 970, that was the true breaking point: by UNDERCUTTING with superior performance. What did AMD do after the 970? The R9 390, 10% on price / 5% on perf, and they lost even more market share.

You want a 5600 non XT with low TDP that brings 2080ti performance for $250, fuck off nigger.

why do people who cant read still post

An RT core is, basically, a 16bit matrix calculator array (plus some other goodies for pre- and post-processing ray data). So while the RT core can push a very high amount of 16bit ops, it's not any different (though more efficient) from an equivalent number of cores pushing 16bit ops. Hence, we see the 16xx series with a shit ton of dedicated 16-bit units outperform the 1000 series at DXR.
So the rumored (is it still a rumor? I don't follow console rumors anymore) 4096-shader XBOX GPU could push a decent custom raytracing solution across half the cores, if the first 2048 are only used for scene rendering. But definitely not at 4K.

In any case let's compare those numbers. We do have to assume that my assumptions hold water so this is just an exercise in hypotheticals, I'm not saying I'm right, I'm not saying you're wrong.

A GCN CU is 64 shaders wide, basically, and each Navi shader can effectively push four 16-bit ops per clock.
I'm assuming that the XBOX GPU will hit ~1,300MHz.
I'm assuming that the XBOX GPU will only use 2048 shaders for 1080p rendering on a raytrace-capable game.
I'm assuming that the entire other 2048 shaders will be used for RT.

That's 4 (16-bit ops) * 2048 (shaders) * 1.3GHz.
At a hypothetical 100% efficient workload that's about 10.6 trillion 16-bit operations per second.
What we don't know is how much extra processing non-hardware RT solutions need for the finished result, but it's safe to say that a Navi-based RT implementation won't be anywhere near Nvidia's.
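
The same hypothetical in one place, a sketch under exactly the assumptions listed above (2048 shaders for RT, four 16-bit ops per shader per clock, 1.3GHz), nothing more:

# Hypothetical Xbox GPU slice dedicated to raytracing, per the assumptions above.
rt_shaders = 2048         # half of the rumored ~4096 shaders
ops_per_clock = 4         # packed FP16 FMA: ~4 16-bit ops per shader per clock
clock_hz = 1.3e9          # ~1300 MHz

throughput = rt_shaders * ops_per_clock * clock_hz
print(f"~{throughput / 1e12:.1f} TFLOPS of FP16 available for raytracing")   # ~10.6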

I absolutely understand that efficient "software" raytracing exists, my reply tried to be open-ended about methods/techniques.
In any case we should assume that console "Navi-hybrid" has no special GPU hardware sauce and instead has tons of programming secret sauce, kind of like how the XB1X has dedicated draw-call processors or how the PS4 has special software commands in the Sony API.

posting nvidia bullshit PR while not knowing that nvidia only measures cast rays with no shading whatsoever is literally comedic

4K at 60 fps = 0.5 Gigarays/s, assuming one ray/pixel. Thus RTX 2080 Ti allows you to cast 20 rays per pixel in scenes like this at 60 fps (4K). Assuming of course that you use 100% of your frame time for TraceRays calls, or overlap TraceRays calls (async compute) with other work
twitter.com/SebAaltonen/status/1041631227168600064
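
The arithmetic behind that tweet, spelled out; nothing assumed beyond one ray per pixel and the advertised 10 Gigarays/s:

# Primary rays needed for 1 ray/pixel at 4K60 vs the advertised 2080 Ti figure.
width, height, fps = 3840, 2160, 60
rays_needed = width * height * fps
print(f"{rays_needed / 1e9:.2f} Gigarays/s for 1 ray per pixel")            # ~0.50

advertised = 10e9
print(f"budget of ~{advertised / rays_needed:.0f} rays per pixel at 4K60")  # ~20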

Rather what I meant to say is I assume Nvidia's RT uses the Tensor core 16-bit matrix hardware to process the raw math, and the RT cores themselves handle all of the other raytrace data.
So effectively, that huge bunch of 16-bit cores in the 16xx series does the same job as the 544 Tensor units in a 2080Ti

IF, if that is the case, the numbers match up fairly well in a hypothetical perfect-world scenario comparing 2,000 shaders with a 2080Ti, disregarding the extra work that the RT cores themselves handle.

Fuck off. Why gpu so expensive now.

This was discussed to death last year. 10 GigaRays is a farce, a total meme.
A 2080Ti at best processes, I think, 2-3 rays per pixel per frame at 1080p, for a total of
>1920*1080 * ~2.5 rays * 60 (fps)
>~0.3 billion rays per second
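
Same pixel-counting exercise for this post, taking the ~2.5 rays per pixel at 1080p60 claim at face value and comparing it with the advertised figure:

# Implied ray throughput from the claim above vs the 10 Gigaray marketing number.
width, height, fps = 1920, 1080, 60
rays_per_pixel = 2.5                     # the "2-3 rays per pixel" claimed above

achieved = width * height * rays_per_pixel * fps
print(f"~{achieved / 1e9:.2f} Gigarays/s actually traced")                  # ~0.31

advertised = 10e9
print(f"the advertised rate would allow ~{advertised / (width * height * fps):.0f} rays/pixel at 1080p60")  # ~80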