Question Speculation: RDNA3 + CDNA2 Architectures Thread

Page 58

uzzi38

Platinum Member
Oct 16, 2019
2,702
6,405
146

TESKATLIPOKA

Platinum Member
May 1, 2020
2,428
2,914
136
N31 is only 10% smaller than AD102.
Only? Those MCD chiplets are 6nm and the main GCD is ~308mm^2. If you remove the same things from AD102, then I am pretty sure it would be larger than 350mm^2. That's a lot of missed WGPs (performance) if you ask me, but of course we need to know the respective transistor densities.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
The die area for N33 is believable if the claim of a 50% reduction in CU area holds true.

N23 = 128b bus + 2048 shaders + 32MB IF = 237mm^2 on 7nm

N33 = 128b bus + 4096 shaders + 32MB IF = (237 × 0.85)mm^2 ≈ 201mm^2 on 6nm
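
For reference, here is that estimate as a quick back-of-envelope sketch. Every input is one of the rumoured/assumed figures above (the ~15% N7-to-N6 shrink and the "doubled shaders are area-free" claim), not a confirmed die measurement:

```python
# Rough sketch of the N33 area estimate above; all inputs are rumours/assumptions.
n23_area_mm2 = 237        # Navi 23: 128-bit bus, 2048 shaders, 32 MB IF$, on N7
n7_to_n6_shrink = 0.85    # assumed ~15% whole-die area reduction from N7 to N6

# The claimed 50% CU-area reduction is what lets the shader count double without
# growing the die, so only the process shrink changes the total.
n33_area_mm2 = n23_area_mm2 * n7_to_n6_shrink
print(f"Estimated N33 die area: ~{n33_area_mm2:.0f} mm^2")  # ~201 mm^2
```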
 

Timorous

Golden Member
Oct 27, 2008
1,727
3,152
136
The die area for N33 is believable if the claim of a 50% reduction in CU area holds true.

N23 = 128b bus + 2048 shaders + 32MB IF = 237mm^2 on 7nm

N33 = 128b bus + 4096 shaders + 32MB IF = (237 × 0.85)mm^2 ≈ 201mm^2 on 6nm

It also works if you scale N24 up. 2x everything, with RDNA3 WGPs that have 2x the shader density, is 217mm². A small amount of area optimisation and you can hit 203.
 
Reactions: Tlh97 and maddie

HurleyBird

Platinum Member
Apr 22, 2003
2,725
1,342
136
N31 is only 10% smaller than AD102.

And the GCD itself is half the size of AD102. One of the main points of chiplets is that you can throw more silicon at the problem without breaking the bank, e.g. Epyc. If a ~308mm2 N5 GCD (plus 6 * ~37.5mm2 N6 MCDs) ends up trading blows with a ~611mm2 N5 AD102, then a 450+mm2 GCD would crush it. A 550+mm2 GCD might make for the most lopsided generational gap in the history of the two vendors.
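
For a rough sense of the silicon budgets being compared, here is the tally under those rumoured numbers (none of these die sizes were confirmed at the time):

```python
# Tally of the rumoured silicon budgets; every figure is a rumour, not a measurement.
gcd_n5_mm2 = 308      # Navi 31 GCD on N5
mcd_n6_mm2 = 37.5     # each Navi 31 MCD on N6
mcd_count = 6
ad102_n5_mm2 = 611    # monolithic AD102 on an N5-class node

n31_total_mm2 = gcd_n5_mm2 + mcd_count * mcd_n6_mm2
print(f"N31 total silicon: ~{n31_total_mm2:.0f} mm^2, "
      f"of which only {gcd_n5_mm2} mm^2 is leading-edge N5 (AD102: {ad102_n5_mm2} mm^2)")
```

So the package carries a comparable amount of total silicon (~533mm^2), but roughly half as much of it on the expensive leading-edge node, which is the cost argument being made here.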

After the fall of 3dfx, we've had two extremely powerful moments in the history of the graphics industry: R300 and G80. The first transformed ATI from an afterthought into an equal part of a duopoly, and briefly the market leader. The second relegated ATI/AMD back to afterthought. The biggest blunder AMD ever made wasn't Bulldozer--it was the sweet spot strategy. TeraScale (the RV770 gen) was so architecturally superior that it could have created another R300/G80-type moment if AMD hadn't decided to become the budget brand.

It's possible that a modern R300/G80 moment wouldn't have quite the same industry-upturning significance today as those did traditionally--but there's no evidence for that, and we won't know for sure until we again see something analogous. At worst, you might now need two such generations. Whenever AMD has the capability to create such a moment, though, they fumble the ball.

Now, don't get me wrong, RV770 helped AMD's competitive position even if Terrascale could have done so much more. If Navi 31 is competitive with AD102 while consuming less power and being more area efficient, it will also greatly improve AMD's competitive position. But trading blows while being more efficient is a far cry from enjoying a top-to-bottom dominant generation. If AMD won't ever go for the kill, their only chance of being market leader is if Jensen gets replaced with someone similarly unambitious.

A part of me still hopes that AMD has a larger GCD that they've been very good at hiding. Or plans for a product with multiple GCDs. Or, at the very least, that the reason for the lack of either is that they planned for multiple GCDs and failed to make it work, because that would at least signify an AMD that tried to go for the kill and failed, rather than one that failed to go for the kill when they easily could have.
 
Last edited:

Saylick

Diamond Member
Sep 10, 2012
3,385
7,151
136
And the GCD itself is half the size of AD102. One of the main points of chiplets is that you can throw more silicon at the problem without breaking the bank, e.g. Epyc. If a ~308mm2 N5 GCD (plus 6 * ~37.5mm2 N6 MCDs) ends up trading blows with a ~611mm2 N5 AD102, then a 450+mm2 GCD would crush it. A 550+mm2 GCD might make for the most lopsided generational gap in the history of the two vendors.

After the fall of 3dfx, we've had two extremely powerful moments in the history of the graphics industry: R300 and G80. The first transformed ATI from an afterthought into an equal part of a duopoly, and briefly the market leader. The second relegated ATI/AMD back to afterthought. The biggest blunder AMD ever made wasn't Bulldozer--it was the sweet spot strategy. TeraScale (the RV770 gen) was so architecturally superior that it could have created another R300/G80-type moment if AMD hadn't decided to become the budget brand.

It's possible that a modern R300/G80 moment wouldn't have quite the same industry-upturning significance today as those did traditionally--but there's no evidence for that, and we won't know for sure until we again see something analogous. At worst, you might now need two such generations. Whenever AMD has the capability to create such a moment, though, they fumble the ball.

Now, don't get me wrong, RV770 helped AMD's competitive position even if Terrascale could have done so much more. If Navi 31 is competitive with AD102 while consuming less power and being more area efficient, it will also greatly improve AMD's competitive position. But trading blows while being more efficient is a far cry from enjoying a top-to-bottom dominant generation. If AMD won't ever go for the kill, their only chance of being market leader is if Jensen gets replaced with someone similarly unambitious.

A part of me still hopes that AMD has a larger GCD that they've been very good at hiding. Or plans for a product with multiple GCDs. Or, at the very least, that the reason for the lack of either is that they planned for multiple GCDs and failed to make it work, because that would at least signify an AMD that tried to go for the kill and failed, rather than one that failed to go for the kill when they easily could have.
As much as we want AMD to directly compete with Nvidia at the upper echelons of the desktop GPU market, I think the issue is that the sales volume for the >450W GPU market is quite small such that the cost-benefit ratio doesn't pan out. It's literally the same thinking that went through ATI/AMD's business minds with RV770. You know what they say: history doesn't repeat itself, but it often rhymes.

I think the similarities are actually kind of striking. Heck, even some of the former ATI people are back at AMD to relive this strategy, and VLIW-esque execution is rumored to make a comeback in the form of VOPD instructions (i.e. extracting some ILP by dual-issuing instructions)!

RV770:
- Lean, gaming focused product that used a new memory technology to make up for a smaller memory bus (GDDR5)
- Smaller die size allowed pricing to target the meat of the discrete GPU market without sacrificing too much performance or using too much power
- Very high compute:texture ratio

Navi 31:
- Lean, gaming focused product that uses a new memory technology to make up for a smaller memory bus (Infinity Cache)
- Even though new nodes are getting exponentially more expensive, uses MCM approach to keep IP in optimal node to lower costs and keep power in check
- Significantly increased shader performance without the same scaling upwards for texture performance (1.2x WGP, but 2.4x FP32; see the quick sketch below)
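
A quick sketch of that last ratio, assuming the rumoured WGP counts (Navi 21: 40, Navi 31: 48) and assuming the dual-issue path really does double FP32 throughput per WGP at the same clocks:

```python
# Back-of-envelope for "1.2x WGP, but 2.4x FP32". WGP counts are rumoured figures,
# and the 2x per-WGP factor assumes dual-issue (VOPD) can actually be fed with
# independent instruction pairs, which is a best case rather than a guarantee.
n21_wgps, n31_wgps = 40, 48
fp32_per_wgp_factor = 2.0                     # assumed effect of dual-issue FP32 ALUs

wgp_ratio = n31_wgps / n21_wgps               # 1.2x
fp32_ratio = wgp_ratio * fp32_per_wgp_factor  # 2.4x peak, at equal clocks
print(f"WGP count: {wgp_ratio:.1f}x, peak FP32: {fp32_ratio:.1f}x")
```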

Meanwhile, Intel was struggling with Larrabee and, lo and behold, we have Intel struggling with Arc today.





When Carrell and crew were specing the RV770 the prediction was that not only would it be good against similarly sized chips, but it would be competitive because NVIDIA would still be in overshoot mode after G80. Carrell believed that whatever followed G80 would be huge and that RV770 would have an advantage because NVIDIA would have to charge a lot for this chip.
Source: https://www.anandtech.com/show/2679/5
AD102 is a big, big die on a brand spankin' new node. It's going to be expensive to make, and they sure as heck will ask a lot for it.

One thing I wondered was how well ATI knew NVIDIA’s plans and vice versa, so I asked the obvious: where do you guys get your information from? The answer was pretty much as expected: Taiwan. All of the board makers know one another and are generally open with sharing information, once information hits Taiwan it’s up for grabs. Then there’s a bit of guesswork that’s done.

ATI planned to put its best foot forward, looking at the roadmaps it seemed like NVIDIA wasn’t going to do much in the G92 space in the time period that ATI would launch RV770. NVIDIA had its sights set on another G80-esque launch with GT200, it would introduce this honkin new chip, price it out of the reach of most and not worry about the peasants until sometime in 2010. The existing product line would be relied on to keep the masses at bay.
Source: https://www.anandtech.com/show/2679/10
Doesn't that sound familiar? AD102 launching first at some exorbitant price point while leaving Ampere to cover the mid and lower range of the market.
 
Last edited:

Kepler_L2

Senior member
Sep 6, 2020
463
1,903
106
And the GCD itself is half the size of AD102. One of the main points of chiplets is that you can throw more silicon at the problem without breaking the bank, e.g. Epyc. If a ~308mm2 N5 GCD (plus 6 * ~37.5mm2 N6 MCDs) ends up trading blows with a ~611mm2 N5 AD102, then a 450+mm2 GCD would crush it. A 550+mm2 GCD might make for the most lopsided generational gap in the history of the two vendors.

After the fall of 3dfx, we've had two extremely powerful moments in the history of the graphics industry: R300 and G80. The first transformed ATI from an afterthought into an equal part of a duopoly, and briefly the market leader. The second relegated ATI/AMD back to afterthought. The biggest blunder AMD ever made wasn't Bulldozer--it was the sweet spot strategy. TeraScale (the RV770 gen) was so architecturally superior that it could have created another R300/G80-type moment if AMD hadn't decided to become the budget brand.

It's possible that a modern R300/G80 moment wouldn't have quite the same industry-upturning significance today as those did traditionally--but there's no evidence for that, and we won't know for sure until we again see something analogous. At worst, you might now need two such generations. Whenever AMD has the capability to create such a moment, though, they fumble the ball.

Now, don't get me wrong, RV770 helped AMD's competitive position even if Terrascale could have done so much more. If Navi 31 is competitive with AD102 while consuming less power and being more area efficient, it will also greatly improve AMD's competitive position. But trading blows while being more efficient is a far cry from enjoying a top-to-bottom dominant generation. If AMD won't ever go for the kill, their only chance of being market leader is if Jensen gets replaced with someone similarly unambitious.

A part of me still hopes that AMD has a larger GCD that they've been very good at hiding. Or plans for a product with multiple GCDs. Or, at the very least, that the reason for the lack of either is that they planned for multiple GCDs and failed to make it work, because that would at least signify an AMD that tried to go for the kill and failed, rather than one that failed to go for the kill when they easily could have.
I'm not sure if people have realized it yet, but AMD's graphics roadmap is far more aggressive than NVIDIA's. They have achieved >50% perf/watt gains and 2x performance two gens in a row, and they will do it again with RDNA4. This is far beyond what NVIDIA has done over the last 10 years. However, they started from a huge deficit with RDNA1, and it will take a while for them to overtake NVIDIA at everything (especially ML/RT).
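
To put that cadence in compounding terms (treating the >50% perf/watt and ~2x performance per generation claims above as given, and assuming RDNA4 repeats them):

```python
# Toy compounding of the claimed cadence; the per-generation factors are the
# claims above, not measured data.
perf_per_watt_per_gen = 1.5   # ">50% perf/watt" lower bound per generation
perf_per_gen = 2.0            # "~2x performance" per generation

for gens in (1, 2, 3):        # RDNA1->2, ->3, and a hypothetical repeat with RDNA4
    print(f"After {gens} gen(s): >= {perf_per_watt_per_gen ** gens:.2f}x perf/W, "
          f"~{perf_per_gen ** gens:.0f}x performance")
```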

At the same time NVIDIA is responding to this aggressive roadmap with whatever they can: jumping from the super cheap Samsung 8nm node with Ampere to bleeding-edge N4 with Lovelace (and N3P with Blackwell), almost doubling TDPs in their flagship GPUs (remember all those years of 250W **80 Tis?) and pushing their software advantage with things like DLSS.

Zen didn't beat Intel overnight and neither will RDNA, but their aggressive roadmap and (so far) flawless execution should have Jensen very, very afraid.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,725
1,342
136
I'm not sure if people have realized it yet, but AMD's graphics roadmap is far more aggressive than NVIDIA's. They have achieved >50% perf/watt gains and 2x performance two gens in a row, and they will do it again with RDNA4. This is far beyond what NVIDIA has done over the last 10 years. However, they started from a huge deficit with RDNA1, and it will take a while for them to overtake NVIDIA at everything (especially ML/RT).

Architectural aggressiveness gets you half of the way. Aggressive implementation gets you the rest. RV770 is the story of what happens when you knock the former out of the park while neglecting the latter; you still lose the generation. You're still seen as the budget afterthought alternative even though you were more or less an equal player just two generations ago. History has proven that the performance crown matters much more than perf/area.

If AMD could have made a 450mm2+ GCD with the ability to spank AD102 like a red-headed stepchild and the bean counters decided not to, while also expecting the generation after that to have a similarly historic architectural gain, then they've found the stupid pills left behind from the TeraScale years and ODed on them. If AMD even got a whiff of a notion that they could make Nvidia's best look like Vega for two generations in a row, it would be absolutely bonkers to decide not to pursue that course. There's no way they wouldn't be the market leader in consumer graphics if they actually pulled that off.

Enterprise is a different story of course. CDNA 3 looks totally revolutionary, in terms of both architecture and implementation, and Nvidia will probably need to follow suit sooner rather than later by making enterprise APUs, but CUDA gives them a lot of entrenchment.
 

biostud

Lifer
Feb 27, 2003
18,397
4,963
136
Architectural aggressiveness gets you half of the way. Aggressive implementation gets you the rest. RV770 is the story of what happens when you knock the former out of the park while neglecting the latter; you still lose the generation. You're still seen as the budget afterthought alternative even though you were more or less an equal player just two generations ago. History has proven that the performance crown matters much more than perf/area.

If AMD could have made a 450mm2+ GCD with the ability to spank AD102 like a red-headed stepchild and the bean counters decided not to, while also expecting the generation after that to have a similarly historic architectural gain, then they've found the stupid pills left behind from the TeraScale years and ODed on them. If AMD even got a whiff of a notion that they could make Nvidia's best look like Vega for two generations in a row, it would be absolutely bonkers to decide not to pursue that course. There's no way they wouldn't be the market leader in consumer graphics if they actually pulled that off.

Enterprise is a different story of course. CDNA 3 looks totally revolutionary, in terms of both architecture and implementation, and Nvidia will probably need to follow suit sooner rather than later by making enterprise APUs, but CUDA gives them a lot of entrenchment.
There are rumors of a dual GCD for later launch...
 

Timorous

Golden Member
Oct 27, 2008
1,727
3,152
136
Architectural aggressiveness gets you half of the way. Aggressive implementation gets you the rest. RV770 is the story of what happens when you knock the former out of the park while neglecting the latter; you still lose the generation. You're still seen as the budget afterthought alternative even though you were more or less an equal player just two generations ago. History has proven that the performance crown matters much more than perf/area.

If AMD could have made a 450mm2+ GCD with the ability to spank AD102 like a red-headed stepchild and the bean counters decided not to, while also expecting the generation after that to have a similarly historic architectural gain, then they've found the stupid pills left behind from the TeraScale years and ODed on them. If AMD even got a whiff of a notion that they could make Nvidia's best look like Vega for two generations in a row, it would be absolutely bonkers to decide not to pursue that course. There's no way they wouldn't be the market leader in consumer graphics if they actually pulled that off.

Enterprise is a different story of course. CDNA 3 looks totally revolutionary, in terms of both architecture and implementation, and Nvidia will probably need to follow suit sooner rather than later by making enterprise APUs, but CUDA gives them a lot of entrenchment.

The cost would be reducing the market share gain they are seeing from EPYC in the data centre space. A larger GCD with even more performance (provided AMD are not hitting a scaling wall of course) would take up N5 capacity that is going to be used for Zen 4.

Winning the desktop GPU battle vs NV to lose the long term data centre war vs Intel is a losing play.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
The cost would be reducing the market share gain they are seeing from EPYC in the data centre space. A larger GCD with even more performance (provided AMD are not hitting a scaling wall of course) would take up N5 capacity that is going to be used for Zen 4.

Winning the desktop GPU battle vs NV to lose the long term data centre war vs Intel is a losing play.
That's a guess. None of us know the wafer supply & demand numbers. None of us know the Zen4 adoption rate. Enthusiasts like us are irrelevant to a large degree.

We do know that a recession is here and all these sales/production plans are about to be trashed. What's different is that this appears to be worldwide.
 

JujuFish

Lifer
Feb 3, 2005
11,031
752
136
Zen didn't beat Intel overnight and neither will RDNA, but their aggressive roadmap and (so far) flawless execution should have Jensen very, very afraid.
The big difference here is that Intel, whether by choice or ineptitude, was fine with mediocre improvements between generations. JHH, despite a lack of competition at the high end from AMD, constantly pushed the envelope and didn't rest on his laurels. For NV, lack of competition meant poor monetary value (relatively speaking), not stagnation.

Furthermore, NV currently has several advantages (CUDA, beliefs about drivers, mindshare from advertising, etc) that are seemingly so strong that a significant portion of people don't consider AMD an option at all for their personal buying decisions and only want a competitive AMD to keep NV's prices down.

I don't think JHH needs to be "very, very afraid" at all.
 

Timorous

Golden Member
Oct 27, 2008
1,727
3,152
136
That's a guess. None of us know the wafer supply & demand numbers. None of us know the Zen4 adoption rate. Enthusiasts like us are irrelevant to a large degree.

We do know that a recession is here and all these sales/production plans are about to be trashed. What's different is that this appears to be worldwide.

True.

It is also maths to a degree. AMD have X wafers, and if they are manufacturing another GPU SKU it needs to come from somewhere. Sure, it might mean some people who would have gone with N31 get upsold, but that won't cover all the stock, so some needs to come from other N5 lines. If AMD want to keep their N32 and N31 supply numbers about the same, the only place left is Zen 4.

I think the recession is another good point. I don't think the appetite for an expensive 450mm^2 512bit 32GB monster card will be there like it was last gen, where some people just got what they could even if it meant stepping up a tier or two into 3090 range because it was somewhat available. This does make the PPA play pretty perfect, because BOM cost is going to matter.

I don't expect AMD to price cheaply at all, because they won't want to be seen as the budget option, and I don't think AMD will be able to influence NV's pricing much anyway.

Besides, 2x 6900 XT perf is pretty close to rumoured Ada performance @450W, and if N31 hits 2.5x then I expect it will still be the fastest part you can buy, and it will consume less power if 2x 8-pin is correct.
 
Reactions: Tlh97 and Saylick

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
True.

It is also maths to a degree. AMD have X wafers, and if they are manufacturing another GPU SKU it needs to come from somewhere. Sure, it might mean some people who would have gone with N31 get upsold, but that won't cover all the stock, so some needs to come from other N5 lines. If AMD want to keep their N32 and N31 supply numbers about the same, the only place left is Zen 4.

I think the recession is another good point. I don't think the appetite for an expensive 450mm^2 512bit 32GB monster card will be there like it was last gen, where some people just got what they could even if it meant stepping up a tier or two into 3090 range because it was somewhat available. This does make the PPA play pretty perfect, because BOM cost is going to matter.

I don't expect AMD to price cheaply at all, because they won't want to be seen as the budget option, and I don't think AMD will be able to influence NV's pricing much anyway.

Besides, 2x 6900 XT perf is pretty close to rumoured Ada performance @450W, and if N31 hits 2.5x then I expect it will still be the fastest part you can buy, and it will consume less power if 2x 8-pin is correct.
What if 5nm demand falls? I think there's a high possibility that by Q1-Q2 2023, we might see availability of 5nm wafers increase. From a DigiTimes Asia article dated Wednesday, 23 March 2022, a one-quarter (25%) increase:

"TSMC is expected to scale up its 5nm chip shipments to 150,000 wafers monthly in the third quarter of 2022 from about 120,000 units at present, according to sources at semiconductor equipment suppliers."

If demand falls and supply increases, throw out all prior assumptions and plans. I see AMD as perfectly positioned to best benefit from present chaos.

They might not have to choose, but could be able, to quote a phrase, to do everything, everywhere, all at once, with no production compromises.

This is a distinct possibility and we should consider it in our expectations.
 

Timorous

Golden Member
Oct 27, 2008
1,727
3,152
136
What if 5nm demand falls? I think there's a high possibility that by Q1-Q2 2023, we might see availability of 5nm wafers increase. From a DigiTimes Asia article dated Wednesday, 23 March 2022, a one-quarter (25%) increase:

"TSMC is expected to scale up its 5nm chip shipments to 150,000 wafers monthly in the third quarter of 2022 from about 120,000 units at present, according to sources at semiconductor equipment suppliers."

If demand falls and supply increases, throw out all prior assumptions and plans. I see AMD as perfectly positioned to best benefit from present chaos.

They might not have to choose, but could be able, to quote a phrase, to do everything, everywhere, all at once, with no production compromises.

This is a distinct possibility and we should consider it in our expectations.

If 5nm demand does fall, I expect super high-end consumer GPUs to be one of the first markets to see a hit.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
If 5nm demand does fall, I expect super high-end consumer GPUs to be one of the first markets to see a hit.
A hit does not mean zero. The super high end of anything will always be a small minority of demand.

If you have a chance to at least be competitive in all markets, and can produce the numbers needed, why would you ignore any market niche?

There is the argument that AMD is all about ASP now. Consider this: what do you do after satisfying those markets? Do you cap revenue? No, you go for the lower segments.

My thinking anyway.
 
Reactions: Tlh97 and Joe NYC

Joe NYC

Platinum Member
Jun 26, 2021
2,331
2,942
106
What if 5nm demand falls? I think there's a high possibility that by Q1-Q2 2023, we might see availability of 5nm wafers increase. From a DigiTimes Asia article dated Wednesday, 23 March 2022, a one-quarter (25%) increase.

There was a Tweet from Greymon saying that AMD increased its purchase (allocation) for Q4 from 50k wafers to 60k N5 wafers.

I wonder if some of these extra wafers are the same ones that NVidia is trying to reduce...
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
There was a Tweet from Greymon saying that AMD increased its purchase (allocation) for Q4 from 50k wafers to 60k N5 wafers.

I wonder if some of these extra wafers are the same ones that NVidia is trying to reduce...
That is interesting. I also read an older report that they had asked to reduce their 7/6nm wafers by 20K for 2 quarters.

Seeing that nearly all of their new products use both 6nm and 5nm chiplets, this seems strange. One would expect increases/decreases in both 6nm & 5nm to occur in tandem, although I don't know the Navi2 allocation. It could be mainly those dies.

The irony here is that Nvidia and/or Intel might be selling these to their main competitor at discounts to TSMC prices.
 
Reactions: Leeea

Saylick

Diamond Member
Sep 10, 2012
3,385
7,151
136
If 5nm demand does fall, I expect super high-end consumer GPUs to be one of the first markets to see a hit.

I think the recession is another good point. I don't think the appetite for an expensive 450mm^2 512bit 32GB monster card will be there like it was last gen, where some people just got what they could even if it meant stepping up a tier or two into 3090 range because it was somewhat available. This does make the PPA play pretty perfect, because BOM cost is going to matter.

If demand falls and supply increases, throw out all prior assumptions and plans. I see AMD as perfectly positioned to best benefit from present chaos.

I think consumer demand dropping over the next few quarters falls perfectly into the hands of AMD's PPA strategy. When your Average Joe is strapped for cash, I believe they are going to be more perf/$ conscious than normal. That means either buying Ampere at a steep discount or buying next gen GPUs only if the perf/$ makes sense, i.e. it compares favorably to last gen stock at discounted pricing. Rumor mill has been saying that AMD is trying to polish up their drivers so there are no major launch hiccups in an attempt at equalizing the perceived brand weakness vs. Nvidia. If consumers see two equally reliable GPUs with similar performance and one of them is noticeably cheaper, that might be enough to sway them to the one with better perf/$. In this case, AMD appears to be lining up as much volume as possible to take advantage of that situation should it occur.
 

Joe NYC

Platinum Member
Jun 26, 2021
2,331
2,942
106
That is interesting. I also read an older report that they had asked to reduce their 7/6nm wafers by 20K for 2 quarters.

Seeing that nearly all of their new products use both 6nm and 5nm chiplets, this seems strange. One would expect increases/decreases in both 6nm & 5nm to occur in tandem, although I don't know the Navi2 allocation. It could be mainly those dies.

The irony here is that Nvidia and/or Intel might be selling these to their main competitor at discounts to TSMC prices.

I was also surprised about the rumored reduction of N6/N7 wafers by AMD. If both are true (increased N5 and reduced N7), it would indicate a faster transition to newer-generation products, but not higher output overall (vs. the previous plan).

Re: prices Intel and NVidia may be paying TSMC

Yeah it looks like their margins will suffer, especially in the client space, for TSMC sourced products.
 
Reactions: Tlh97 and Leeea

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
I think that AMD's weakness in the enterprise market is a problem for them on the high end. Nvidia are going to sell a lot of 4090 cards just due to CUDA, even if the cards are slower than AMD's top offering.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
I think that AMD's weakness in the enterprise market is a problem for them on the high end. Nvidia are going to sell a lot of 4090 cards just due to CUDA, even if the cards are slower than AMD's top offering.
CUDA is Nvidia's cash cow after they nurtured it for well over a decade, so now it's payback time. Everyone else (not only AMD but also Intel, as well as the institutional buyers of supercomputers, etc.) is pushing alternative solutions to soften the CUDA lock-in effect, but that will take time to make a difference.
 

Frenetic Pony

Senior member
May 1, 2012
218
179
116
As much as we want AMD to directly compete with Nvidia at the upper echelons of the desktop GPU market, I think the issue is that the sales volume for the >450W GPU market is quite small such that the cost-benefit ratio doesn't pan out. It's literally the same thinking that went through ATI/AMD's business minds with RV770. You know what they say: history doesn't repeat itself, but it often rhymes.

I think the similarities are actually kind of striking. Heck, even some of the former ATI people are back at AMD to relive this strategy, and VLIW-esque execution is rumored to make a comeback in the form of VOPD instructions (i.e. extracting some ILP by dual-issuing instructions)!

This is why I'm excited for chiplet GPUs. The design costs are lower, and so are the production risks. If you, say, split the highest-end GPU in half, then you can just allocate chiplets based on sales. If the big, headline-worthy product doesn't sell a lot, you just use the chiplets for the much more popular, half-sized product.

It's why we should be seeing a big, ultra high performance product from AMD. Not only can they sell that for the biggest profit margin in the lineup, it's also relatively low risk compared to previous generations.
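
As a toy illustration of that flexibility (purely hypothetical numbers and product split; Navi 31 itself is rumoured to use one GCD plus MCDs, not two compute chiplets):

```python
# Hypothetical sketch: a flagship built from two identical compute chiplets and a
# volume part built from one. If halo demand disappoints, the same chiplet supply
# simply shifts to the volume product instead of being stranded on unsold halo SKUs.
def allocate(chiplet_supply: int, flagship_demand: int) -> dict:
    flagships = min(flagship_demand, chiplet_supply // 2)
    volume_parts = chiplet_supply - 2 * flagships
    return {"flagship_units": flagships, "volume_units": volume_parts}

print(allocate(chiplet_supply=100_000, flagship_demand=10_000))
# {'flagship_units': 10000, 'volume_units': 80000}
print(allocate(chiplet_supply=100_000, flagship_demand=2_000))
# {'flagship_units': 2000, 'volume_units': 96000}  -> same wafers, different mix
```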

I think that AMD's weakness in the enterprise market is a problem for them on the high end. Nvidia are going to sell a lot of 4090 cards just due to CUDA, even if the cards are slower than AMD's top offering.

AMD's CDNA cards seem popular for HPC stuff. And the HPC market has gotten huge all of a sudden; AMD seems to be profiting a lot there. Even if CUDA is easier to work with, it's not some magic requirement for supercomputers, which is where AMD is getting a lot of its money from. Their "bigger numbers than the competition" products might be relatively specialized, but they seem to be selling, and CDNA3 seems set to continue the trend.
 

Saylick

Diamond Member
Sep 10, 2012
3,385
7,151
136
This is why I'm excited for chiplet GPUs. The design costs are lower, and so are the production risks. If you, say, split the highest-end GPU in half, then you can just allocate chiplets based on sales. If the big, headline-worthy product doesn't sell a lot, you just use the chiplets for the much more popular, half-sized product.

It's why we should be seeing a big, ultra high performance product from AMD. Not only can they sell that for the biggest profit margin in the lineup, it's also relatively low risk compared to previous generations.
That makes sense, but it's all predicated on whether or not AMD have solved the issue of getting multiple dies to behave as one, especially for gaming workloads. I won't believe it until I see it either.
 