Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
Mar 3, 2017
1,754
6,631
136
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Lately, since RDNA2, the window in which they push support for new devices has been much reduced to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe that is because the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up situation like Frontier, for example).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before getting merged to github), but I am not going to link AMD employees.
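(For anyone who wants to kick the tires once that backend support lands: compiling for the new target should look roughly like the usual HIP flow, just with the new arch name. A minimal sketch, assuming a ROCm toolchain that already knows about gfx940 - the kernel and the exact flag are illustrative, not taken from the patches.)

Code:
// Minimal HIP sketch, assuming a ROCm install whose compiler backend supports gfx940.
// Build (arch name per the LLVM patches, flag usage is the standard HIP flow):
//   hipcc --offload-arch=gfx940 saxpy.cpp -o saxpy
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    hipMalloc(&dx, n * sizeof(float));
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}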

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of not having a host CPU capable of PCIe 5 in the very near future, so it might have gotten pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it.

This is nuts, MI100/200/300 cadence is impressive.



Previous thread on CDNA2 and RDNA3 here

 
Last edited:

marees

Senior member
Apr 28, 2024
743
898
96
I don't know, I hear from a guy named George on twitter with his twitter bros that AMD instead should focus on supporting a model that prioritizes running on consumer hardware that is completely unused by the industry at large. The twitter bros say that AMD is missing out on millions and millions of dollars by not supporting him.
In theory AMD could still satisfy George Hotz with UDNA

But I think the RDNA 4 architecture is not ready for what he is asking.
 

insertcarehere

Senior member
Jan 17, 2013
703
694
136
I read that as quite a biased article. I'm not saying that AMD doesn't have issues, but CUDA has had 20 years of investment, and we know that ROCm only got really serious this year, now that AMD have R&D monies. But the article keeps banging on and on about stuff they simply can't know. And then they think themselves high and mighty because THEY have told AMD how to do software dev. Sure. A lot like LinusTech trying to say he told Intel how to market the B580. Absolutely laughable.

The article actually publishes a telling graph, but rather than be optimistic, they STILL go hard on the negative.

View attachment 114111

According to the article, AMD "perform worse than H100/H200 on GPT 1.5B", which is true. Then it barely recognises that someone somewhere is actually making progress with AMD's software, giving up to a 2x lift. But the point of the article is that AMD is still slower, not that they are making progress after less than a year of real investment. Also, it glosses over the fact that AMD hardware is in the biggest machines in the world, so it can't all be bad.

Again, I'm not saying AMD is fantastic in this area, but that article is a hit piece. It's all "AMD is way better than a year ago, BUT THAT DOESN'T MATTER!"

SemiAnalysis does not do hit pieces, nor, judging by its previous track record, does it have a bias against AMD. The article is documenting how MI300X fares against Nvidia's current products in 2024 and seems fairly thorough in documenting how far behind the software stack is versus Nvidia. It should be noted that it took a whole lot of wrangling, with AMD support, to build a custom environment that could even produce results for that chart comparable with something off-the-shelf from Nvidia in the first place.

MS and Meta using AMD is nice and all, but that's because they're the sort of companies with the scale to go custom and take advantage of whatever value AMD can provide on the hardware side. AMD has no pricing power here, given that MS/Meta would have no problem benchmarking TCO against an Nvidia system and have the scale to switch should that become beneficial. It also doesn't help companies/institutions that don't operate at that sort of scale (i.e. all the rest), which would natively use Nvidia CUDA because of the massive gap in the software stack.

I don't know, I hear from a guy named George on twitter with his twitter bros that AMD instead should focus on supporting a model that prioritizes running on consumer hardware that is completely unused by the industry at large. The twitter bros say that AMD is missing out on millions and millions of dollars by not supporting him.

Don't know what rock you've been hiding under but 'consumer hardware' is used plenty by the industry lmao. There's a cottage industry smuggling RTX 4090s to China and swapping PCBs/fans for use in server racks, and even besides that Nvidia has no problems flipping AD102/AD104 chips into AI accelerators.
 
Last edited:

Jan Olšan

Senior member
Jan 12, 2017
467
874
136
His point was probably that catering to those ghetto solutions has opportunity costs*. Given the significant (billions of $) revenue from MI300X, while it may not be wrong to support running on Radeons too, it is absolutely the right choice to *prioritise* support for MI300X. If AMD wants to capture part of the expanding market, big customers using MI300X are the way, not these fringe markets.
Expanding makes sense, of course. Going after the best opportunities first and foremost makes even more sense.

* And who knows, maybe not just opportunity costs - what if it would actually cannibalize the demand for Instinct GPUs?
 

Hitman928

Diamond Member
Apr 15, 2012
6,524
11,805
136
Don't know what rock you've been hiding under but 'consumer hardware' is used plenty by the industry lmao. There's a cottage industry smuggling RTX 4090s to China and swapping PCBs/fans for use in server racks, and even besides that Nvidia has no problems flipping AD102/AD104 chips into AI accelerators.

My comment was aimed very pointedly at a particular solution, so how many of those consumer cards are running tinygrad?

In a more general take, how long would it take AMD to work out support for general-purpose AI on consumer hardware that would make it attractive versus people buying NV? Even NV, with their off-the-shelf-ready package, has far higher data center revenue than consumer revenue. How much of that consumer revenue could AMD capture, assuming they could get a solution out quickly? AMD has been able to capture $5B in data center revenue within 1 year of release of their data center AI product, largely limited by how much product they could supply as they try to ramp volume. What is the opportunity cost of delaying data center support to work out consumer-level support? Do you think they could capture anywhere near the same amount of revenue by making that trade-off?
 
Reactions: Tlh97 and KompuKare

Hitman928

Diamond Member
Apr 15, 2012
6,524
11,805
136
His point was probably that catering to those ghetto solutions has opportunity costs*. Given the significant (billions of $) revenue from MI300X, while it may not be wrong to support running on Radeons too, it is absolutely the right choice to *prioritise* support for MI300X. If AMD wants to capture part of the expanding market, big customers using MI300X are the way, not these fringe markets.
Expanding makes sense, of course. Going after the best opportunities first and foremost makes even more sense.

* And who knows, maybe not just opportunity costs - what if it would actually cannibalize the demand for Instinct GPUs?

You were faster than me, but yes, this is it. It also ignores that AMD hasn't completely abandoned consumer support. It's not great and still has a long way to go, but it is improving, and people who really want to are getting their projects to work on RDNA3 cards. The reality, though, is that AMD is focusing far more on satisfying some really big customers that have billions to spend. I find it strange that people think they should instead focus on trying to capture a much more difficult market to compete in that currently has a much smaller TAM. Eventually I'm sure they hope to address everyone, but you can only do so much so quickly, and you have to pick your battles, especially when your competitor has a large head start.
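(As a concrete example of what "getting projects to work on RDNA3" means at the most basic level: a quick device-enumeration check like the sketch below tells you whether the ROCm runtime even sees the card. The HSA_OVERRIDE_GFX_VERSION bit in the comments is the workaround people commonly report for officially unsupported consumer cards, so treat that as an assumption rather than official guidance.)

Code:
// Rough sketch: list the GPUs the ROCm/HIP runtime can see and their gfx arch.
// Build: hipcc list_devices.cpp -o list_devices
// For some consumer RDNA3 cards, people reportedly export
//   HSA_OVERRIDE_GFX_VERSION=11.0.0
// before running, so the runtime treats them as a supported gfx11 target
// (community workaround, not official guidance).
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        printf("No HIP devices visible to the runtime.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t props;
        hipGetDeviceProperties(&props, i);
        printf("Device %d: %s (arch %s, %.1f GB VRAM)\n",
               i, props.name, props.gcnArchName,
               props.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}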
 

insertcarehere

Senior member
Jan 17, 2013
703
694
136
His point was probably that catering to those ghetto solutions has opportunity costs*. Given the significant (billions of $) revenue from MI300X, while it may not be wrong to support running on Radeons too, it is absolutely the right choice to *prioritise* support for MI300X. If AMD wants to capture part of the expanding market, big customers using MI300X are the way, not these fringe markets.
Expanding makes sense, of course. Going after the best opportunities first and foremost makes even more sense.

* And who knows, maybe not just opportunity costs - what if it would actually cannibalize the demand for Instinct GPUs?

Well, in that case, as long as there's a divide between RDNA for consumer and CDNA for professional, AMD will be continually handicapped in trying to build an ecosystem around their hardware, which only leaves them able to compete by cutting margins and pitching users on a cost basis to compensate. In contrast, Nvidia's support for architectures like Ada and Blackwell applies to both their professional and consumer products.

P.S. Ghetto solutions on consumer hardware are probably how 99% of CS majors get exposed to ML; most universities cannot afford H100s for undergrads to play with.
 

bearmoo

Junior Member
May 8, 2018
12
13
81
You were faster than me, but yes, this is it. It also ignores that AMD hasn't completely abandoned consumer support. It's not great and still has a long way to go, but it is improving, and people who really want to are getting their projects to work on RDNA3 cards. The reality, though, is that AMD is focusing far more on satisfying some really big customers that have billions to spend. I find it strange that people think they should instead focus on trying to capture a much more difficult market to compete in that currently has a much smaller TAM. Eventually I'm sure they hope to address everyone, but you can only do so much so quickly, and you have to pick your battles, especially when your competitor has a large head start.
I think I read back in this thread something about day-1 ROCm support for Strix Halo? If true, and with the possibility of something like 128 GB of RAM, that should make the ghetto customers happy?
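(The 128 GB is the interesting part. Rough napkin math on what fits, with the reserve factor and quantization widths being my own assumptions:)

Code:
// Back-of-the-envelope: roughly how many parameters fit in a given memory
// budget at different weight precisions, ignoring KV cache and runtime
// overhead except for a crude reserve factor (all numbers are assumptions).
// Build: g++ -std=c++11 fit.cpp -o fit
#include <cstdio>

int main() {
    const double budget_gb = 128.0;              // e.g. a 128 GB unified-memory APU
    const double usable_gb = budget_gb * 0.8;    // assume ~20% reserved for OS/runtime
    const double bytes_per_gb = 1024.0 * 1024.0 * 1024.0;

    const double bits_per_weight[] = {16.0, 8.0, 4.0};  // fp16, int8, ~4-bit quant
    for (double bits : bits_per_weight) {
        double params_billions = usable_gb * bytes_per_gb / (bits / 8.0) / 1e9;
        printf("%2.0f-bit weights: ~%.0fB parameters fit in %.0f GB usable\n",
               bits, params_billions, usable_gb);
    }
    return 0;
}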
 

Jan Olšan

Senior member
Jan 12, 2017
467
874
136
P.S. Ghetto solutions on consumer hardware are probably how 99% of CS majors get exposed to ML; most universities cannot afford H100s for undergrads to play with.
Yeah, but when everybody buys GeForce by default anyway, you will spend years and a lot of effort trying to make Radeon good for those people, and in the end you find that you sold 10 cards a quarter to them.
 

poke01

Diamond Member
Mar 8, 2022
3,036
4,018
106
Yeah, but when everybody buys GeForce by default anyway, you will spend years and a lot of effort trying to make Radeon good for those people, and in the end you find that you sold 10 cards a quarter to them.
It also doesn't help that AMD doesn't have halo/very-high-end cards, so with each passing gen more consumer mindshare goes to NVIDIA.
 
Reactions: Tlh97

JustViewing

Senior member
Aug 17, 2022
265
465
106
On the consumer AI side, AMD support is increasing. Apps like Amuse and LM Studio are easy to use, with no config needed. Many other open-source tools have AMD support. Still not at the same level as NVIDIA, but it is improving.
 