/hg/ - Housefire General

You can't be serious.

Attached: fire.png (828x220, 22K)

Other urls found in this thread:

community.amd.com/community/gaming/blog/2019/08/12/amd-radeon-community-update-more-control-over-gpu-power-and-performance-enhanced-thermal-monitoring-maximized-performance

>Temperature in °C (Higher is better)

Attached: 1553115864329.jpg (675x1200, 124K)

why would you give a fuck? An oven's normal operating temperature is like 350 to 450 degrees Fahrenheit and nobody cares

Tripfag genocide when?

Are you retarded?

noope

LOL look at them go

Attached: chaika.jpg (1280x720, 122K)

it's okay when AMD does it

We told you.
We told you AMD products run hot to keep away the jews

I want to see the absolute madman who will pair this with a 9900K, both GPU and CPU overclocked.

reported for being tripnigs

>tripfag
>junction temperature
>edge temperature
>not knowing the difference
so you really are all retarded

Attached: deathtotripfags.jpg (1000x563, 315K)

AMD is just monitoring and reporting temps differently

Attached: power-gaming-average.png (500x970, 54K)

>AMD is just monitoring and reporting temps differently
so this is like the first launch of Ryzen CPUs, where ridiculous temps were recorded and you had to use some mathematical formula to get the actual temp?

Kind of similar - the Ryzen temp issue was merely a matter of monitoring programs needing to know the correct offset, same as the old issue with FX CPUs.
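
For context: the first-gen Ryzen X parts (1700X/1800X) reported tCtl with a fixed +20°C offset over the real die temperature, per AMD's own community update at the time, so monitoring tools just subtract it per SKU. A minimal Python sketch of that correction (the table and function names are mine, purely illustrative):

TCTL_OFFSETS_C = {
    "Ryzen 7 1800X": 20.0,  # X parts reported tCtl = tDie + 20 C
    "Ryzen 7 1700X": 20.0,
    "Ryzen 7 1700": 0.0,    # non-X parts had no offset
}

def actual_die_temp(cpu_model, reported_tctl_c):
    # Subtract the known per-SKU offset to recover tDie.
    return reported_tctl_c - TCTL_OFFSETS_C.get(cpu_model, 0.0)

print(actual_die_temp("Ryzen 7 1800X", 85.0))  # -> 65.0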

This hardware is showing the real temperature of the hottest part of the chip, picked from 60+ sensors, while Nvidia is on the "old standard" of showing the reading from a single centrally located probe.

>In the past, the GPU core temperature was read by a single sensor that was placed in the vicinity of the legacy thermal diode. This singular sensor was used to make all power-performance optimization decisions across the entire GPU. However, depending on the game being run, the type of GPU cooling and other related metrics, different parts of the GPU might have been at different levels of utilization. As a result, ramping up or throttling down the entire GPU based on this single measurement was inefficient, often leaving significant thermal headroom – and resulting performance – on the table.

>With the AMD Radeon™ VII GPU we introduced enhanced thermal monitoring to further optimize GPU performance. We built upon that foundation with the Radeon™ RX 5700 series GPUs, and now utilize an extensive network of thermal sensors distributed across the entire GPU die to intelligently monitor and tune performance in response to granular GPU activity in real time.

community.amd.com/community/gaming/blog/2019/08/12/amd-radeon-community-update-more-control-over-gpu-power-and-performance-enhanced-thermal-monitoring-maximized-performance
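
To make the difference concrete: junction temp is the max over the entire sensor grid, while the legacy scheme reads one point. A toy Python sketch (all sensor values made up) of how the exact same die produces two very different numbers:

sensor_grid_c = [
    [78, 82, 85, 80],
    [81, 84, 86, 110],  # hotspot out at the edge of a loaded shader array
    [79, 88, 92, 83],
]

legacy_probe_c = sensor_grid_c[1][1]  # one diode near the die center -> 84
junction_temp_c = max(max(row) for row in sensor_grid_c)  # hottest of all sensors -> 110

print(legacy_probe_c, junction_temp_c)  # 84 110: same silicon, two readouts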

I'm certain AMD is only letting these GPUs operate in a way that won't harm them. You can change the temp limit if it bothers you.

We have to remember that the 110°C reading is from Gamers Nexus artificially restricting the stock cooler to 40 dB, not the 51 dB it is designed to run at.

In reality, going by GN's own test at stock settings, the chip is designed not to cross 100°C at any spot out of the box.

yes it's mentally challenged in some aspects, but to call it retarded may be an insult to actual retards. it's on a whole different level of mental instability.

Attached: baron-von-faggot.gif (480x640, 3.36M)

Top-tier capacitors will fail in less than 250 days at those temperatures; anything above 80°C can deform a capacitor and cause a significant drop in lifespan.

Most of the stuff used on reference-PCB GPUs isn't even good and usually dies within 3 years if you use the card extensively. That's why you should always buy a custom PCB with a beefed-up VRM if you're a power user.
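
The usual back-of-envelope for cap lifespan is the Arrhenius 10-degree rule of thumb: electrolytic capacitor life roughly halves for every 10°C over its rated temperature. A quick Python sketch with illustrative datasheet numbers (5000 h at 105°C is a common rating; check your actual caps):

def cap_life_hours(rated_life_h, rated_temp_c, actual_temp_c):
    # Rule-of-thumb Arrhenius derating: life halves per 10 C over rating.
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10.0)

print(cap_life_hours(5000, 105, 85))   # ~20000 h, about 2.3 years continuous
print(cap_life_hours(5000, 105, 105))  # 5000 h, about 7 months continuous
print(cap_life_hours(5000, 105, 110))  # ~3536 h, under 150 days continuous

That's roughly where "under 250 days" figures come from, assuming the caps themselves actually sit at those temperatures around the clock.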

But the 110°C measurement is at the GPU die itself, not at any of the caps.

Remember when, for years until Fury X, AMD shills would never shut up about 92°C benchmarks on Fermi? The entire AMD shill zeitgeist was spamming pictures of fires.

Now with Ryzen 3000 and Navi, temperatures just do not matter. Until you enter an Intel thread then they do again.

AMDrones still spam Intel housefire pics, but somehow with AMD GPUs power consumption never matters, kek. Brand-loyalist fanboys are cancer.

Attached: totally_normal.jpg (810x595, 268K)

>AMD uses a new, better, more accurate temperature-sensing system that reads peak thermal values across hundreds of zones
>This is somehow equal to Nvidia getting similar readings from the older system that measures temperature at a single arbitrary point, implying that significantly hotter portions of the chip go unreported

>AMD trades off thermal efficiency for significantly faster performance than Nvidia at every price point, across the board
>This is somehow equal to Intel using double the power to eke out a 3% lead with a far more expensive chip in a handful of video games, while getting completely BTFOed in productivity tasks.

>literal Steam Machine
Wow, so this is what Gabe Newell and Valve Tech have been working on...

Attached: shutterstock_678675256-compressor.jpg (780x408, 20K)

>dude 200MHz to 4.75GHz for free lmao

underrated

>More accurate hotspot temperature reading shows higher number
Literally who cares; the same hotspots existed before, the sensors just didn't show them.

Attached: 1549563021691.jpg (433x419, 97K)