Sup Jow Forums

sup Jow Forums.

I have some spare time, and I was thinking I'd do an AMA around my one year anniversary in the company, so here goes:

I'm a DV Engineer at AMD working in RTG (Radeon Technology Group, the GPU part of the company). Ask me anything.

Also note that I am under NDAs which mean you won't be getting insider trading information or trade secrets or IP from me. Please save yourself and me time, and don't bother asking me.

I'll be on to answer questions until about 4PM EST

Attached: file.png (1526x1600, 27K)

Proof.

Saged.

I've posted my work badge in previous threads.

Here's the organization from skype for business.

Attached: skype organization.png (468x716, 32K)

Tell your boss that your pajeet shills have the opposite effect and actually manage to distance potential buyers from your shit products.

I think AMD will be a 200 billion company by eating half of Intel and half of Nvidia. But it will take some time for market share to be grabbed. What do you think about $50/share by year end?

Attached: lisa su dominating.jpg (1260x709, 182K)

I'll be sure to tell Lisa that the next time she shows up at the office.

So any plans on competing with rtx?

I wouldn't bet my life savings on a 2.5x rise in share price in a single quarter.

should i sell my AMD stocks now or later?

We've already released information about our ray tracing plans, though that was before we knew Nvidia had their "real time" ray tracing card.

Look at the old releases, then cross-compare with when we began developing our Vega successor. That'll give you an idea of where we are with that. Can't say anything more on that matter.

Oh keep buying those stocks. Buy buy buy until it's $100 and I can retire rich.

2 questions:

1) how many engineers does it take to design a new graphics card?
2) after 5nm, are there any plans? you can't just keep shrinking down the transistors since you have quantum tunnelling and shit

It will be 30 soon. Over Oct+Nov+Dec, it will go up to 50 easily, but you're right about not betting life savings on it.
You should be smart and wait several years if you can afford to.

Attached: redsnow.jpg (1024x640, 71K)

>I work for AMD
>but I won't answer anything fun
Ya already fucked up, pal. There's no point unless you're going to divulge some secrets.

CU count of future 7nm APU?
Is 7nm VEGA a major rework, or just a higher clocked shrink with more memory/PHY?
Is the 7nm VEGA part really clocked north of 2400mhz? You should be able to answer this one at least.
Tangibly, is RTG still aiming to compete in the high performance consumer GPU segment against Nvidia's latest?
Are you working on any ray tracing accelerators in house, plans on licensing IP from ImaginationTech?
Primitive shaders when?

"quantum tunneling" is fucking current leakage. Current leakage is not some magical thing that only shows up beyond 5nm. It has always existed in every process ever. Different gate structures exist to mitigate the short channel effect in devices, and that will hold true forever.

1) You have to realize that we buy some IP from other companies like Synopsys, and a lot of our internal IP gets recycled between projects. Add to that all the engineers who don't strictly write our RTL code. I'd say about 2000 people total. But that's a very rough estimate.

2) AMD doesn't dabble in the material science end of the business. Whatever Intel, TSMC, GlobalFoundries, etc. do, we'll just follow suit. Though I've heard we might be able to squeeze out a 3nm before moving to other methods of improving processing power.

>Ya already fucked up, pal. Theres no point unless you're going to divulge some secrets.
I get that hearing super secret insider knowledge is cool, but I don't know you fags, and for all I know a coworker has already seen this thread and is waiting to get me fired.

>CU count of future 7nm APU?
NDA my dude.

>Is 7nm VEGA a major rework, or just a higher clocked shrink with more memory/PHY?
I didn't work on Vega so I can't say. Either way, it's probably not going to be much better than current offerings. Hopefully better power consumption, though.

>Is the 7nm VEGA part really clocked north of 2400mhz? You should be able to answer this one at least.
Didn't work on Vega. And for that matter I don't do post-silicon work so I have no idea how well the chip will be able to perform on the new process.

>Tangibly, is RTG still aiming to compete in the high performance consumer GPU segment against Nvidia's latest?
I keep hearing that from the higher-ups, but we're not too optimistic about it around here.

>Are you working on any ray tracing accelerators in house, plans on licensing IP from ImaginationTech?
See above

>Primitive shaders when?
Again, I don't do post silicon, and I don't really talk to the losers that do driver development all that much.

Current leakage yes, but from one wire to another. Basically we can't put the wires that close or we'll get crosstalk. If we can't put the wires that close, then we have to space out the transistors. If we have to space out the transistors, then we're basically paying more for effectively the same transistor count. That's why it's a big issue.
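
To put rough numbers on it (back-of-envelope, treating two adjacent wires as parallel plates; this is my simplification, not anything from an actual flow):

$$C_{couple} \approx \frac{\varepsilon\, t\, L}{s} \qquad\qquad \Delta V_{victim} \approx \frac{C_{couple}}{C_{couple} + C_{gnd}}\,\Delta V_{aggressor}$$

where t is the wire thickness, L the length the wires run side by side, and s the spacing. Halve s and you roughly double the coupling cap, so more of the aggressor's swing shows up as noise on its neighbour. That's the crosstalk wall, separate from transistor leakage.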

is GPU design one big file with HDL code in it, or much more?

Is MS going to use an AMD APU in the next Xbox??

Wire capacitances, parasitics, and the short channel effect, these have always been a factor. It isn't new. Materials and gate topology aren't just arbitrary, they're chosen to deal with these effects at each given node.

Do you miss Raja?

>"quantum tunneling" is fucking current leakage.
Came here to see this post. Thank you.

do you play vidya?

No one likes Dr Poo

What HDL simulators/environments or other EDA software do you use? How long does it take to simulate something interesting?

>Rebrandeon Technology Group.
How's it feel to not be working on a better product (Vega), but instead the cash grab rebrands?

BTW, you better have something that can compete with nvidia at the high end next gen, because RTX is overpriced.

>BTW, you better have something that can compete with nvidia at the high end next gen, because RTX is overpriced.

I'm not OP, but I'll answer.

Nvidia is at the ceiling of the perf/watt they can get out of their current process. The 2080 is so expensive because it is x-box hueg. AMD has time until 7nm before novidya is going to be a problem in the mainstream.

Part of the magic of Verilog is a structure called a module, which can be put in a separate file.

I used about 20 files for a VGA pong game on an FPGA in university. Our designs probably have tens of thousands of Verilog files.
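
If you want a toy picture of it (made-up module and names, obviously nothing from a real design), something like this, with each module living in its own file:

// blink.v - one module per file
module blink (
    input  wire clk,
    input  wire rst_n,
    output reg  led
);
    reg [23:0] count;
    // free-running counter; toggle the LED each time it wraps
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            count <= 24'd0;
            led   <= 1'b0;
        end else begin
            count <= count + 24'd1;
            if (count == 24'hFFFFFF)
                led <= ~led;
        end
    end
endmodule

// top.v - a separate file that just instantiates it
module top (
    input  wire clk,
    input  wire rst_n,
    output wire led
);
    blink u_blink (.clk(clk), .rst_n(rst_n), .led(led));
endmodule

Scale that up to thousands of modules, each with its own testbench wrapped around it, and that's roughly what a GPU design tree looks like.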

NDA.

That's very nice. Now try and do materials engineering for two wires that are so close the electron clouds overlap. Hint: the material you use won't change the electron cloud.

Not really, but he left well before I could see whether he was a good leader or not. His job was filled by two replacements, so that should tell you something about him.

I bought a switch to play mario kart at work and home, and I play a bit of CS and GTA and [spoiler]HOTS when my gf wants me to play with her[/spoiler]

We have the whole Synopsys line of products. Personally I make a lot of use of VCS and Verdi. We also use Cadence. I don't generally snoop on LVS or design engineers so I can't say much about the rest of the company. Sims can take anywhere from an hour to (in my collection of tests) 160 hours.

>How's it feel to not be working on a better product (Vega), but instead the cash grab rebrands?
I don't do anything related to branded products. If you mean refreshes, I'm too smart to be working on those.

>BTW, you better have something that can compete with nvidia at the high end next gen, because RTX is overpriced.
duly noted

FUCK DNA

>and for all I know a coworker has already seen this thread and is waiting to get me fired.
you should get some dirt on him

Did you know "poor Volta" was a hoax? way to troll the internet

>fuck dna
Did you get a little too much?

>verilog
Altera or Xilinx

If you're asking about my uni project, that was on a Xilinx board. Can't remember the model, but I think it was Eagle 3 or something along those lines.

I'll give you a "hint"
The BEOL contains wires for power delivery, that ultimately carry 100W+ of power, literally right next to IO. This carries through the die, through the package, through the socket, and right through the motherboard.

Companies that foundries contract and work with like Applied Materials know all of this and take it into account. It has always been a consideration. None of this is a new hurdle.

>We have the whole Synopsys line of products. Personally I make a lot of use of VCS and Verdi. We also use Cadence. I don't generally snoop on LVS or design engineers so I can't say much about the rest of the company. Sims can take anywhere from an hour to (in my collection of tests) 160 hours.

Are those things parallelized? I work at an EDA company and we are considering a proof of concept of a simulator that breaks many of the guarantees of the HDL languages but offers quicker run times, so larger designs can at least be gauged somewhat quickly to see whether a given design makes sense.

At least - that is what I understood from the concept. I mostly make GUI and graphical tools.

>our vega successor
yeah about that
you guys kept saying it would have magical drivers, new arch, and be mcm
but now it's just a refined vega?

I mean the one that AMD uses

did you see today's stock gains?

Wrong post

The successor to the Vega design is part of the Navi family.
This 7nm Vega part coming out isn't ever going to see release for the general consumer market. It's an enterprise part.

As of right now AMD has nothing on the horizon for the consumer market by way of new GPUs. Maybe they'll release some refreshes of existing designs on 12nm, but a ~10% freq uplift is pretty much worthless compared to Nvidia's high end now.

underrated

Quantum tunneling is NOT analogous to electrical arcing. It CAN NOT be mitigated through materials engineering. A tera-ampere, or a single electron, through the same wire will run THE SAME risk of quantum tunneling regardless of the materials used. You're not getting your 1nm products, and I'm not going to argue what literally every materials design engineer has been saying since the 1980s.

>Are those things parallelized?
Nope.

Navi and Vega20 are two separate things. I work on Navi. I never touched Vega.

I dunno. Never seen the things. I think we use Altera. Funny when you think about it.

I did now. really mad I didn't opt in for employee stock purchasing.

>This 7nm Vega part coming out isn't ever going to see release for the general consumer market. Its an enterprise part.
I wish we were rich enough to have dedicated 'enterprise' parts.

is it a nice company to work for, do you like your placement and managers?

obligatory

Attached: 7687876.png (653x726, 42K)

Hey dude, what does it take to work for AMD?
I sent an application to one of your partners in Europe which is responsible mainly for your verification shit, the HDL design house.
They are literally a sweat shop for engineers, salary is 900 jewros and your responsibilities are worth a million times more.
Do you have an actual partner in Europe that's worth working for?
Are you going to buy back your packaging business which was sold in the big debt days?
Is AMD gonna ditch the pajeet outsourcing and Eastern Europe outsourcing to hire the brains themselves?

>Hey dude, what does it take to work for AMD?
In all honesty - that question. I might want to switch jobs after 12 years.

>>Are those things parallelized?
>Nope.
Does it make sense in your job to parallelize that? Would it make sense to cut that 160 hours to, say 20?

Guy you were talking to wasn't me (OP)

Manager is cool. Really pushes you but doesn't get mad when you can't meet unrealistic expectations. Coworkers are alright, but many of them are Mainlanders who prefer to talk in Cantonese during lunch etc. Don't hate them for it, but not being able to easily chat with the lads really bums you out. Overall I like it though.

cheat your way through university, and when it comes time for an interview, don't call an ADC an ACDC.

In all seriousness, if you can rattle off the information in ASIC World and have a degree to back you up, and the hiring manager likes the cut of your jib, then you can get in. Though it may be a little late for the hiring spree that I got in on.

>Do you have an actual partner in Europe that's worth working for?
no idea
>Are you going to buy back your packaging business which was sold in the big debt days?
no idea
>Is AMD gonna ditch the pajeet outsourcing and Eastern Europe outsourcing to hire the brains themselves?
Probably. (Thanks Drumphhghgfffhfff)

>Is AMD gonna ditch the pajeet outsourcing and Eastern Europe outsourcing to hire the brains themselves?
If AMD spent as much money on AI as Nvidia has, there would be no pajeet shilling necessary. It would be done automatically.

It would be nice, but parallelizing the simulator is Synopsys' job. Not ours. If they updated VCS to be parallelized, then we'd start using that.

I get that. But if my employer would make a parallel simulator, would that be an impetus to switch?

I was calling navi a refined vega
how new and improved will navi really be, since mcm and magical drivers have gone out the window?

Do you even understand the principle behind a GAA or GW-FET?
Hint: Insulation isn't magic, and you can make a perfectly insulated wire with passives.
The real difficulty in continuing CMOS scaling has been finding the right isotopes and methods of using them at industrial scale. Aberrant leakage is still just leakage regardless of whether it's FEOL or BEOL. As resistances fall, new methods are employed to keep leakage at reasonable levels so the device still functions. There's plenty of room left at the bottom.
Applied Materials showed a workable pathway, and discussed BEOL at length, for 3nm GAA half a decade ago at SEMICON West. Funny how none of them were shitting their pants and crying about how it was an impossible feat when they were discussing the design flow of such a process.

I think you're a bullshit LARPer

>hiring spree
Had this called before Zen's launch: they'd reinvest in the GPU side after stripping funding to get Zen out the door.

BTW, why are the APUs delayed so long after the CPU core launches?
If they had managed to map out a 7nm APU for 2019 release instead of 2020 they could've seized the valuable end of year laptop market almost wholly. ICL cheapo dual cores are the only ones out by that time frame.

Do you think RTX is a true meme?

I mean, Synopsys' other tools are so far ahead of the curve that we'll probably be relying on them forever. I don't know how I'd live without Verdi's click-to-source functionality for a wire/reg.

That said, if you could lower our run times by 80%, yeah give us a call.
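
Quick Amdahl's-law sanity check on that 80% number (my own back-of-envelope, not anything anyone has benchmarked): with a fraction p of the sim work parallelizable across N machines,

$$S = \frac{1}{(1 - p) + p/N}$$

so even with N huge you need p ≥ 0.8 just to hit a 5x speedup (160 h down to 32 h), and cutting 160 hours to 20 (8x) needs p of at least 0.875 with unlimited machines. Event-driven RTL simulation is notoriously hard to parallelize, which is part of why those single-machine runs drag on.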

NDA

3nm MIGHT happen, but I wouldn't hold your breath for it or 1nm.

>I think you're a bullshit LARPer
You're free to think whatever.

There have been delays.

What was the reason VEGA didn't have all the promised features, and did Raja really quit or did he get the boot?

>NDA
how convenient
yeah I'm done with you

Nvidia tried to make their card sound better than it is, but it's still a good card.

Anyways lads, that's all for now. Sorry to anybody who got here late. I may be on later today or this week. Look forward to it.

Navi still using GCN?

Will you ever untuck your portable GPUs for notebooks so anyone besides Apple can get them?

>delays
CPU, GPU, Integration, or Process?

from what I heard the hardware just didn't work

Attached: z7gub14d03k01.jpg (600x580, 71K)

thanks for all the info.

Not OP, but that is obviously information that can be used by Nvidia. It makes sense that it's under an NDA.

i can see jensen in his cosy leather jacket shitposting on here

Yes

What's your opinion about free software movement and freetards?

The die size is fucking humongous compared to its capabilities; manufacturing-wise it's a turd. Killebrew must be shitting himself at the direction the industry is taking.

Navi is part of the GCN family; it's going to be long-lived.
They'll probably implement varied CU size for better energy efficiency and add a second scalar unit per CU before they abandon GCN.
I doubt AMD actually has the resources on hand to start from scratch and compete with Nvidia at all.

If Lisa Su wore Jen-Hsun Huang's leather jacket, would you recognize her?

Verification (not mine mind you)

No problem

We use boatloads of FOSS stuff at work and I main Manjaro at home. I'm not religious about the benefits of FOSS though.

This. Navi is part of GCN, and I don't think we're abandoning GCN for a long time

I'd recognize mommy anywhere. :^)

I had heard that some issues Raven Ridge has with switching to the dGPU in laptops snuck by, so that makes sense.

For God's sake, AMD needs a new GPU architecture right now!

You gonna provide the budget for that? Maybe after Vega 7nm and Navi they can think about it, but they've already started on their deadlines, and a new architecture would derail the entire thing.

GCN isn't bad in any way apart from the ROPs having limited throughput. If they could actually push more pixels per clock they'd be monstrous.
That'd be a substantial investment though, so it's probably the last thing they'll ever change.

Is Navi 20 coming out relatively soon, so we at least have an alternative to those fucked Nvidia prices?

They'll have something in 1H 2019 if we're lucky. I'm not expecting anything great from AMD on the GPU front for the foreseeable future.

Worry about ray tracing from Nvidia and the new Quadro line in rendering/video processing.

Really, the Sony GPU for the PS5 is the big thing in RTG right now.

You miss raja?

Is Lisa sexy irl? In all seriousness, what can you tell us about the hardware used in the PS5 and Xbox Two? Are they similar? Which uses what?

They're probably going to be similar to what you see in the PS4 and Xboner now. Some of the exact same IP, different CU count, slightly different accelerators, maybe slight differences to memory hierarchy.
They're consoles. AMD showed everyone they could make a super quick-to-market, low-power, "good enough" console APU. That's what they'll be offering again, except this time their CPU cores won't be a major bottleneck as with Jaguar.

OP probably knows exactly zero about any of AMD's semi-custom projects.

What happened to Ruby the Radeon mascot? I used to fap to the tech demos.

Attached: 65593.jpg (480x583, 35K)

not OP but does Radeon have any mascot now

>GCN isn't bad in any way
>apart from the ROPs having limited throughput
>shit power efficiency
>neither AMD itself can extract everything from GCN with the current software
So yeah, GCN is bad these days.

>not OP but does Radeon have any mascot now

Five years ago they had this..

Attached: AMD-Next-Gen-Ruby.jpg (1296x864, 480K)

Only the best!

Attached: BbP8CozCEAAOlVZ.jpg large.jpg (1024x1365, 252K)

Vega64 with a slight undervolt goes head to head with the GTX 1080, the 1080ti being about 15 to 30% faster.
The arch isn't bad, it needs some more tweaks. All Nvidia has been doing for the last few generations is revising one base design.

Would you bang Lisa su for a promotion?
Also will Navi be any good? Can we expect you guys to have a product that's at least worth considering? Considering Nvidia is leaving you in the dust. You closing the gap?

I'm not OP but I'll take a guess.

SJWs infected every walk of life and every tech company like cancer. The mentally ill contingent would complain until Ruby is replaced by Tubby, a round-headed trans dyke with septum rings fighting against "white privilege" with the red Anarchy (A). Ironically, Dr. Lisa Su is a God Tier role model for men and women, but none of them have an IQ approaching hers so they get agitated. AMD could show they don't to SJW scum and bring back Ruby in true form. Until then, Ruby stays waiting and lives on in fan art.

Attached: Ruby redpill.jpg (768x432, 43K)

That shit was tacky AF. Ruby looks like a greasy truckstop whore. Her whole design makes me gag. I'm glad she was replaced with a superior mascot.

Attached: 1470313152722.jpg (3840x2160, 2.44M)

I have insider information regarding your company's future product pipeline, and none of the news I hear is positive:
Why is AMD not capitalizing on high-speed interconnects like OmniPath?
Why is support for Apache Pass not being implemented in ANY EPYC platforms?
Why can AMD not release a competent mobile processor that can compete with Intel's Skylake-Us and Rs? Your company's upcoming mobile/desktop processors are underwhelming compared to what Intel will offer in late 2019.
Why does RTG have ZERO consumer products that can compete with the RTX 2000 series products until 2020?
What will it take for AMD to abandon the cost-inefficient strategy of packing more cores per socket versus quad and eight-socket platforms? EPYC has very little market traction in key cloud and HPC markets because of its two-socket/high-core-count limitations.
Why did AMD waste our time with a 12nm refresh whose sales were ultimately cannibalized by the 14nm Zens? Same goes for the chipsets. Why was this approved?
Why is AMD lying about how losing GloFo as a 7nm supplier is a net positive? AMD has lost well over a billion with GF pulling out of the leading edge race. This is terrible news for AMD in the long run.
What will it take for AMD to restructure its GPU division or sell it off entirely? RTG is burning cash and can never compete with Nvidia or Intel's new "GPU". AMD cannot take back more than 30% of the total GPU market.

Thanks in advance, and start hunting for another job after Q3 2019.

Crysis 3 is unplayable on the Vega
I emailed AMD last year but still no answer

Attached: Crysis_1080p.png (1295x1392, 48K)

And here's a fun one: why couldn't Zen2 fit more than 4 cores per CCX? It's smaller than Zen1 by about 40%, so surely they could have at least increased the core count per CCX...

Was Raja involved in making Navi? If so, how much?

>Why is AMD not capitalizing on high-speed interconnects like OmniPath?
Invest in a competitor's ecosystem and technology? Lol. Not with Gen-Z.
>Why is support for Apache Pass not being implemented in ANY EPYC platforms?
See above. And apparently your inside info has not conveyed how much of a clusterfuck Apache Pass is for OEMs and support
>Why can AMD not release a competent mobile processor that can compete with Intel's Skylake-Us and Rs? Your company's upcoming mobile/desktop processors are underwhelming compared to what Intel will offer in late 2019.
Limited headcount to do stuff like this for now. Keep in mind AMD is 1/10th the size of INTC
>Why does RTG have ZERO consumer products that can compete with the RTX 2000 series products until 2020?
Enterprise/AI/Console market, not in gayming where there is no money
>What will it take for AMD to abandon the cost-inefficient strategy of packing more cores per socket versus quad and eight-socket platforms? EPYC has very little market traction in key cloud and HPC markets because of its two-socket/high-core-count limitations.
Why have more sockets when you can have less
>Why did AMD waste our time with a 12nm refresh whose sales were ultimately cannibalized by the 14nm Zens? Same goes for the chipsets. Why was this approved?
Sometimes you need cash and to shake out all the issues
>Why is AMD lying about how losing GloFo as a 7nm supplier is a net positive? AMD has lost well over a billion with GF pulling out of the leading edge race. This is terrible news for AMD in the long run.
Who cares? 7nm will still beat intel to market
>What will it take for AMD to restructure its GPU division or sell it off entirely? RTG is burning cash and can never compete with Nvidia or Intel's new "GPU". AMD cannot take back more than 30% of the total GPU market.
Why abandon a division of the company that kept it afloat in the dark times? Also console market and e-gpu

>I have insider information guys
>proceeds to shit the bed on only the second question
fucking laughing my ass off right now

Attached: 1475074868843.jpg (499x460, 78K)

>I'm a DV Engineer at AMD working in RTG (Radeon Technology Group, the GPU part of the company). Ask me anything.

1) Where does the seeming limit of 4 SEs/GPU and 1 geometry engine/SE in GCN come from? Is it a deeply ingrained design limit like a xbar that won't scale wider, or is it just somebody high up with a hard-on for cutting fixed function units to dump as much die space as possible into CUs? Vega felt like a repeat of Fiji's mistake in this regard.

2) Are chiplet-based GPU designs even close to being on the horizon?

Material engineers have NOT been saying that since the 1980s; you WILL get tunneling even if your barrier is as thick as the sun, but it becomes exponentially less consequential. Materials are DESIGNED to provide a higher tunneling barrier (e.g. BN, heterostructures with low-k dielectrics like HSQ) and help circumvent the issue of quantum tunneling. If materials couldn't be manipulated to do this, technologies like QW lasers and VCSELs would never have existed. Anyone with a remote high school primer on QM can tell you that, not just a materials scientist.
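
For reference, the textbook rectangular-barrier estimate (standard QM, nothing process-specific) is

$$T \approx e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2 m^{*} (\Phi_B - E)}}{\hbar}$$

so the transmission probability falls off exponentially in both the barrier width d and the barrier height \Phi_B above the carrier energy E. It's also a per-electron probability, so it doesn't care whether you push one electron or a tera-ampere. Raising \Phi_B with a better dielectric is exactly the knob materials engineering turns.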

Silicon can be scaled down to 3nm, perhaps even smaller. We probably won't see the end of silicon for a while; the problem is process engineering, since the "planar" process has only been outdated for so long and there are many challenges researchers have to circumvent. GAA is designed to prevent short channel effects by screening external charge (although its performance benefits are also notable, including interface mobility and increased maximum current load). Most other materials, including III-V (GaN, GaAs, InP...) and other compound semiconductors, still need to be developed for process improvements and are very hard to scale to the level Si is, but will likely be ready by the time we need them. Beyond that we still have TMDCs and band-gap-engineered graphene/SWCNTs; there are plenty of materials we can go through until the very end, at least in theory.

What's "the very end" defined as here?

underrated as fuck

Other user here. OP is a faggot.
Unless CMOS scaling can physically no longer continue. That could be below what would be called a "1nm" node in industry terms.
We could use 1nm wide carbon nanotube gates, use some 2D material like tungsten disulfide, or some other exotic material, but we still hit a point where some structures can't get any smaller because atoms don't get smaller.
There is of course the potential to still reduce die size by improving back end scaling, but front end scaling would be at the absolute wall for CMOS. The only way to go smaller would be something like quantum junctions, maybe photonics, valleytronics is also a thing.

>Unless
Until* That would be the very end he was referring to