Why have AMD APUs failed on the market?


VirtualLarry

No Lifer
Aug 25, 2001
56,543
10,169
126
There's no mistake on my part: the 11.5W Haswell consumes twice its rated TDP, and the system doesn't use much more than the idle power consumption plus the 2.2W I accounted for when it is loaded.

Why is exploiting thermal / power headroom for greater performance bad? Is it just because Intel is better at it than AMD? (They both have their "Turbo" functionality, but from what I have seen, Intel's is far more sophisticated, and much better at exploiting "headroom".)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
So even with interposers allowing for two discrete dies, one for the CPU and one for the GPU, it still might be more efficient to have the two separated (the CPU by itself in the socket and the GPU on the video card).
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
My point is that the socket's power rating is going to limit the maximum amount of iGPU performance we are going to see from these types of chips. So even if interposers and HBM come along to help with other problems, I still see a limit to how much desktop APUs (in OEM boxes) will be able to replace dGPUs for gaming.

Doubtful. Look at how little power the PS4 and XbOne use, and then consider how many shaders they have on their custom chips. Hell the PS4 has 1152 shaders, and total system power usage never goes over, what, ~150W measured at the wall? Part of that is due to the ultra-low power of the cat cores serving as host, but still, the iGPU is not a power hog in that scenario.

We already know how many shaders AMD will use in its desktop APUs through 2016 anyway, and that is 512 maximum. They are more constrained by die area vs. expected yield than anything else. Also, they are moving to socket FM3, so any socket limits that need to be changed, can be changed.

So even with interposers allowing for two discrete dies, one for the CPU and one for the GPU, it still might be more efficient to have the two separated (the CPU by itself in the socket and the GPU on the video card).

Not necessarily. If you are doing a lot of translation of small/short loops into GPGPU instructions (think HSA here) occurring in multiple separate threads, turnaround time and latency become important. AMD does not want to push an HSA paradigm in which only highly-parallelized workloads with no turnaround constraints can be moved to the GPU.
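To put rough numbers on that turnaround argument, here's a quick back-of-envelope sketch (all figures are illustrative assumptions, not measurements) of when shipping a short loop to the GPU actually pays off:

Code:
# Hypothetical break-even check for offloading a short loop (Python; numbers are made up)
def offload_pays_off(cpu_time_us, gpu_speedup, round_trip_us):
    # Offloading wins only if GPU compute time plus dispatch/return latency beats the CPU
    return cpu_time_us / gpu_speedup + round_trip_us < cpu_time_us

print(offload_pays_off(cpu_time_us=50, gpu_speedup=5, round_trip_us=2))    # True: on-die iGPU, shared memory
print(offload_pays_off(cpu_time_us=50, gpu_speedup=5, round_trip_us=100))  # False: dGPU over PCIe with copies

With microsecond-scale dispatch on shared memory, even small loops can win; push the same loop across PCIe with copies and it is better left on the CPU.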
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,516
4,303
136
Why is exploiting thermal / power headroom for greater performance bad? Is it just because Intel is better at it than AMD? (They both have their "Turbo" functionality, but from what I have seen, Intel's is far more sophisticated, and much better at exploiting "headroom".)

LOL, I like how you're rebranding an obvious flaw as a feature. Nvidia, anyone?


AMD could do the same; all that is needed is to crank up voltage and frequency, and of course TDP. The result will be a fan that spins harder if the cooling solution is not improved.

While we're at it, I'll apply the same logic to Beema:

Idle: 4W, load: 23.8W, delta: 19.8W

Total losses computed as in the previous case are thus 16W, from which I subtract 2.2W for the system, leaving 13.8W at the SoC level. This is exactly the power drawn by the AM1 Athlon 5350 APUs, which are cut from the same wafers as Beema.

If we were to extend the TDP to 24W like the "11.5W" Haswell, we would get roughly 30% better performance from this chip.

http://www.notebookcheck.net/HP-Pavilion-13-a093na-x360-Convertible-Review-Update.130928.0.html
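For clarity, the arithmetic above can be laid out as a quick sketch (the conversion-loss factor is an assumption chosen to match the ~16W figure computed the same way as in the earlier Haswell case):

Code:
# Reproducing the Beema numbers quoted above (Python)
idle_w, load_w = 4.0, 23.8
delta_w = load_w - idle_w            # 19.8 W measured at the wall
losses_w = round(delta_w * 0.81, 1)  # ~16 W once PSU/VRM conversion losses are removed (assumed factor)
soc_w = round(losses_w - 2.2, 1)     # ~13.8 W left at the SoC level
print(delta_w, losses_w, soc_w)      # 19.8 16.0 13.8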
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Doubtful. Look at how little power the PS4 and XbOne use, and then consider how many shaders they have on their custom chips. Hell the PS4 has 1152 shaders, and total system power usage never goes over, what, ~150W measured at the wall? Part of that is due to the ultra-low power of the cat cores serving as host, but still, the iGPU is not a power hog in that scenario.

The GPU in the PS4 must be clocked pretty low then, because an R9 270 with 1280 stream processors @ 900 MHz is 150 watts TDP by itself ---> http://www.anandtech.com/show/7503/the-amd-radeon-r9-270x-270-review-feat-asus-his

Then there is the number of folks who want more powerful GPUs than that. For example, in my recent poll of current AMD Radeon owners here I was surprised to see that slightly over 75% reported having a R9 290 or greater.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
We already know how many shaders AMD will use in its desktop APUs through 2016 anyway, and that is 512 maximum. They are more constrained by die area vs. expected yield than anything else.

If AMD is using 512 stream processors and paying the premium for that via a larger die that reduces yields (as well as increases "edge of wafer" loss), then I would prefer to see desktop iGPU clocks of at least 1000 MHz (and ideally higher) to maximize the investment.

So I am not sure AMD is constrained by die area (in the current OEM 95 watt situation), but rather by thermals and bandwidth (for gaming).

Also, they are moving to socket FM3, so any socket limits that need to be changed, can be changed.

Yes, they could increase TDP.

But I have been wondering if there is a cheaper, better way of doing this than the rumored Bristol Ridge reported here.
 

III-V

Senior member
Oct 12, 2014
678
1
41
My point is that the socket's power rating is going to limit the maximum amount of iGPU performance we are going to see from these types of chips. So even if interposers and HBM come along to help with other problems, I still see a limit to how much desktop APUs (in OEM boxes) will be able to replace dGPUs for gaming.
Performance is basically irrelevant though -- cost is all that counts.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding a possible Bristol Ridge (quad-core Excavator with 512 SPs on the FM3 socket), I am concerned about how high the speeds of 4GB DDR4 DIMMs will scale.

Certainly, I can see 4GB at DDR4-3200 being relatively affordable* (so 2 x 4GB DDR4-3200 @ 51.2 GB/s is realistic), but I wonder if going much higher than that will require 8GB DIMMs.

Of course, if Bristol Ridge APU users need 2 x 8GB kits to maximize bandwidth in order to fully utilize a fast-clocked 512 SP iGPU, that is going to throw a serious monkey wrench into gaming cost effectiveness. (In contrast, an R7 250X with 640 SPs at 1000 MHz has 72 GB/s of dedicated bandwidth.)


*although not as affordable as the single 4GB RAM stick some entry level gamers with dGPUs will still be using.
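As a sanity check on those bandwidth figures, a quick sketch (peak theoretical numbers; the 4.5 GT/s effective GDDR5 rate for the 250X is an assumption that reproduces the 72 GB/s figure):

Code:
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes) x channel count
def peak_bw_gb_s(mt_per_s, bus_bytes, channels=1):
    return mt_per_s * bus_bytes * channels / 1000

print(peak_bw_gb_s(3200, 8, channels=2))  # 51.2 GB/s for 2 x DDR4-3200 (64-bit per channel)
print(peak_bw_gb_s(4500, 16))             # 72.0 GB/s for a 128-bit GDDR5 card at 4.5 GT/s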
 

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
The GPU in the PS4 must be clocked pretty low then, because an R9 270 with 1280 stream processors @ 900 MHz is 150 watts TDP by itself ---> http://www.anandtech.com/show/7503/the-amd-radeon-r9-270x-270-review-feat-asus-his

The PS4's iGPU is clocked at 800 MHz.
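A crude first-order scaling from the R9 270 figure shows why the lower clock (and the lower voltage it allows) matters; this is a naive linear model, not a real power estimate:

Code:
# Naive scaling of the R9 270's 150 W TDP by shader count and clock only (Python)
r9_270_w, r9_270_sp, r9_270_mhz = 150, 1280, 900
ps4_sp, ps4_mhz = 1152, 800
print(round(r9_270_w * (ps4_sp / r9_270_sp) * (ps4_mhz / r9_270_mhz)))  # ~120 W
# Real consumption lands well below this, since the lower clock allows a lower
# voltage (power scales roughly with V^2 * f), plus console-specific binning.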

If AMD is using 512 stream processors and paying the premium for that via a larger die that reduces yields (as well as increases "edge of wafer" loss), then I would prefer to see desktop iGPU clocks of at least 1000 MHz (and ideally higher) to maximize the investment.

So I am not sure AMD is constrained by die area (in the current OEM 95 watt situation), but rather by thermals and bandwidth (for gaming).

As you yourself said, they are already paying a premium from having a large die that reduces yields, etc. Increasing die size further reduces yields, unless they go MCM . . . and I'm not sure Kaveri or Godavari support that. Bandwidth is another problem, naturally. But they are not constrained by thermals or power delivery, nor will they be on FM3.



Yes, they could increase TDP.

But I have been wondering if there is a cheaper, better way of doing this than the rumored Bristol Ridge reported here.

There might be, who knows? AMD has already cast its lot. They don't have the kind of budgetary discretion to be "agile" and throw out too many changes in their plans. It looks like their lineup between now and Q3 2016 (or so) is set in stone, and that had better be good enough.

Certainly, I can see 4GB at DDR4-3200 being relatively affordable* (so 2 x 4GB DDR4-3200 @ 51.2 GB/s is realistic), but I wonder if going much higher than that will require 8GB DIMMs.

That's going to be up to the market to decide what DIMM sizes are supported within any given market segment. Today, it's nearly impossible to get new (or even recent) 2x2GB kits at anything like DDR3-2400, since the market basically demanded 8GB as the new minimum memory configuration for a wide variety of systems. Are people between now and then going to start demanding 16GB memory capacity as a minimum in the enthusiast/DIY sector? How about OEM buyers? Will they?

So we may yet see DDR4-3400 (or higher) in 2x4GB kits. Personally, I think anyone who goes single-channel on a dual-channel (or higher) capable system is insane, but that's just me. Don't skimp on the memory, people!

Of course, if Bristol Ridge APU users need 2 x 8GB kits to maximize bandwidth in order to fully utilize a fast-clocked 512 SP iGPU, that is going to throw a serious monkey wrench into gaming cost effectiveness. (In contrast, an R7 250X with 640 SPs at 1000 MHz has 72 GB/s of dedicated bandwidth.)

One major point that you and many other APU downers seem to be forgetting is that the iGPU/dGPU ecosystem is moving towards cooperation rather than exclusivity. Many APU buyers will also buy a dGPU and switch the iGPU towards compute functions thanks to DX12 (or Mantle, if it ever becomes more widely-accepted). It won't be dead silicon for games when you throw in a 290x or whatever. Offloading physics to the iGPU is just the beginning.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That's going to be up to the market to decide what DIMM sizes are supported within any given market segment. Today, it's nearly impossible to get new (or even recent) 2x2GB kits at anything like DDR3-2400, since the market basically demanded 8GB as the new minimum memory configuration for a wide variety of systems.

I was thinking there is a different reason why we see the fastest memory speeds only on the larger-capacity sticks: the fastest ICs (built on the smallest nodes) have a higher capacity than slower memory ICs built on larger nodes.
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
The whole review is an example of a trolled review; it's no secret that Anton Shilov was massacring AMD as much as he could after they stopped sending him gear. Indeed, he was rewarded some time later by being offered the whole Devil's Canyon line, so you're right to ask if it's serious. Did you even read what this shill(ov) wrote? It's a rant from start to finish and a caricature of a review.

That is your argument?

APUs are a failure because they try to be an all rounder but fail both ways - dGPU + Intel is what you want for gaming, and for a basic box Intel's iGPU is sufficient.

And the whole co-operation thing, buying an APU, then chucking in a 290X is pointless when the CPU will hold you back.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Are people between now and then going to start demanding 16GB memory capacity as a minimum in the enthusiast/DIY sector? How about OEM buyers? Will they?

I doubt it, but I wonder if we'll end up seeing the fastest DDR4 memory speeds being exclusive to 8GB sticks, for the reason I mentioned in the previous post.

Personally, if the day ever came when a single 8GB stick was much cheaper than a 2 x 4GB kit, I would be inclined to run just one stick and rely on my video card's VRAM for GPU bandwidth.
 

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
I was thinking there is a different reason why we see the fastest memory speeds only on the larger-capacity sticks: the fastest ICs (built on the smallest nodes) have a higher capacity than slower memory ICs built on larger nodes.

Right, but they could have opted to put fewer ICs per DIMM if they had really wanted to. In the end, it's driven by demand for a particular product. If 2x2GB configurations were wildly popular right now, you'd still see kits like that being manufactured and sold at a variety of different speeds.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
One way I can see AMD APUs succeeding in the market is if, once we get to 16nm/14nm, Microsoft releases a higher-end Xbox that is also a Windows computer.

Windows 10 will allow you to use "universal apps" that will also work on the Xbox One. Universal apps will not be limited to x86 (specifically allowing ARM so they work on Windows Phone), so it wouldn't be much of a stretch for them to throw in some small-die-area, high-performing CPUs, such as the Cortex-A57 if they can get the clock speed high enough, or whatever succeeds the A57.

The die size of the Xbox One chip on 28nm is already 363mm2. That means any die shrink is going to land in the 150 to 200mm2 territory, depending on the process and foundry. What's another 20mm2 or so if you can convince people to spend another $100 on the console, since then they'd have a decent computer as well?

Of course, the Xbox will not be a full Windows computer and will be locked down to prevent piracy.
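For what it's worth, a rough area-scaling sketch (the shrink factors are assumptions; real shrinks rarely scale perfectly) lands in the same range:

Code:
# Rough shrink estimate for the 363 mm2 Xbox One chip mentioned above (Python)
xbox_one_mm2 = 363
for label, factor in [("optimistic 28nm -> 14/16nm", 0.45),
                      ("conservative 28nm -> 14/16nm", 0.55)]:
    print(label, round(xbox_one_mm2 * factor), "mm2")  # ~163 and ~200 mm2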
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
Given the number of posters defending AMD APUs in the AMD Q414 results thread, I'd like to open a new thread to explore why the APUs have failed on the market. Regarding the consumer market, AMD lost share in every single market bracket where it fielded APUs, and the bleeding has yet to stop.

I'm rather curious to see the opinions of people here on what the APU value proposition's strong points are, and why it failed on the market despite those strong points.

There's no market for them; who would be best served by an APU?

Hardcore gamers? Nah, Intel + discrete GPU is what they are after.

Casual gamers? Not really, same story again.

Facebook gamers? An Intel iGPU is enough for this and cheaper than an APU.

It doesn't really fit many niches, tbh: not strong enough for the gaming crowd, not cheap enough for the average Joe (or the average OEM). If APUs could challenge Intel on the CPU front and challenge the ~$150-200 discrete cards, then the gaming crowd might start to show interest, but as it stands they aren't a good fit for anyone.
 

greatnoob

Senior member
Jan 6, 2014
968
395
136
APUs are a failure because they try to be an all rounder but fail both ways - dGPU + Intel is what you want for gaming, and for a basic box Intel's iGPU is sufficient.

And the whole co-operation thing, buying an APU, then chucking in a 290X is pointless when the CPU will hold you back.
How are APUs a failure, and what does a dGPU + Intel have to do with their failure? You do know that most of Intel's offerings, and all of the mobile SoCs, are APUs, right?

Only recently have we started seeing games offload cloth, water, and hair simulation onto one or more compute devices. This type of compute offloading can be done on one, two, or even thousands of pieces of 'compute' hardware (yes, that could mean any mix of graphics cards + APUs/SoCs).

So in fact, you'd be better off with an APU and a dedicated graphics card and that is what the majority of us have now. Any Intel, AMD or ARM chip with an iGPU IS an APU.
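As a minimal illustration of that "any mix of compute devices" point, a sketch using the third-party pyopencl package (assuming working OpenCL drivers) lists every device the runtime exposes; on an APU + dGPU box, both GPUs show up side by side:

Code:
import pyopencl as cl

# Enumerate every OpenCL platform and device the drivers expose --
# on an APU plus a discrete card, both GPUs appear as usable compute devices.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(platform.name, "|", dev.name, "|",
              cl.device_type.to_string(dev.type), "|",
              dev.max_compute_units, "compute units")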
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
As you yourself said, they are already paying a premium from having a large die that reduces yields, etc. Increasing die size further reduces yields, unless they go MCM . . . and I'm not sure Kaveri or Godavari support that. Bandwidth is another problem, naturally. But they are not constrained by thermals or power delivery, nor will they be on FM3.

Yes, I definitely think the die should not be any larger; I am all for a smaller die.

P.S. I'm not sure why you think Kaveri with the stock heatsink isn't constrained by thermals when the iGPU clock is only 720 MHz. (Yes, lack of memory bandwidth is one reason not to run iGPU clocks higher for gaming, but lack of bandwidth should not affect OpenCL apps, which could benefit from a faster iGPU clock speed.)

In contrast, AMD clocks the R7 250 @ 1000/1050 MHz, the R7 250X @ 1000 MHz, and the R7 260X @ 1100 MHz.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
There's no market for them; who would be best served by an APU?

Hardcore gamers? Nah, Intel + discrete GPU is what they are after.

Casual gamers? Not really, same story again.

Facebook gamers? An Intel iGPU is enough for this and cheaper than an APU.

It doesn't really fit many niches, tbh: not strong enough for the gaming crowd, not cheap enough for the average Joe (or the average OEM). If APUs could challenge Intel on the CPU front and challenge the ~$150-200 discrete cards, then the gaming crowd might start to show interest, but as it stands they aren't a good fit for anyone.

http://forums.anandtech.com/showpost.php?p=37132956&postcount=621
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81

It's an emotional response. I'm not sure I can explain it, but it is and it's from my technician side, which isn't immune to such. I google'd "GPU compute" to be sure it was what I thought it was, and it is.

I think it might be because it feels like cheating? Not sure that's accurate.
Will think on it. I had no idea the plan was for that to come to the desktop other than in a few very specific apps/software.

It reminds me of math coprocessors if you older folks recall them. I wasn't crazy about those either, but that was years ago and a bit more forgivable.
 
Last edited:

greatnoob

Senior member
Jan 6, 2014
968
395
136
It's an emotional response. I'm not sure I can explain it, but it is and it's from my technician side, which isn't immune to such. I google'd "GPU compute" to be sure it was what I thought it was, and it is.

I think it might be because it feels like cheating? Not sure that's accurate.
Will think on it. I had no idea the plan was for that to come to the desktop other than in a few very specific apps/software.

It reminds me of math coprocessors if you older folks recall them. I wasn't crazy about those either, but that was years ago and a bit more forgivable.

How is it "cheating"? It's a trend that Intel, AMD, Qualcomm, Apple, [insert name here] have picked up on and have incorporated since around 2010 and onwards and it is the next leap in "CPU*" performance. All the aforementioned companies are now in the APU market, but are still producing the soon-to-be niche products, called CPUs.

The whole market has been a mutation: mobile parts going desktop, then desktop parts going mobile, then mobile parts going desktop again, repeated for however many years it takes before the next performance brick wall arrives.

* compute
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
It's an emotional response. I'm not sure I can explain it, but it is and it's from my technician side, which isn't immune to such. I google'd "GPU compute" to be sure it was what I thought it was, and it is.

I think it might be because it feels like cheating? Not sure that's accurate.
Will think on it. I had no idea the plan was for that to come to the desktop other than in a few very specific apps/software.

It reminds me of math coprocessors if you older folks recall them. I wasn't crazy about those either, but that was years ago and a bit more forgivable.

You will still have the choice on the desktop to go for a CPU + GPU, like Socket 2011, and perhaps from AMD in 2016 with Zen, so don't worry.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
Cheating as in, using a GPU to do what they can't get out of a CPU. At the very least they could have called it something else: CPU accelerator, compute unit, something. I realize and am comfortable with being an oddball and having matching oddball opinions and thought processes, but I can't be the only one who thinks it's at least a little silly. It's really like the old math coprocessor or FPU units: we used them because there wasn't anything better, but it was a relief when CPUs caught up. And here we are again, with a more modern twist.

Is PCIe not fast enough to use dGPUs for compute? If so, that's a bit of a can of worms, no?
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
You will still have the choice on the desktop to go for a CPU + GPU, like Socket 2011, and perhaps from AMD in 2016 with Zen, so don't worry.

It's funny, that was one of the big reasons I bought AMD a year or two ago. I came back to PCs and suddenly everything had a damn video card IN the CPU. EVERY Intel board had a bunch of video plugs cluttering up the back. And they had so many sockets, CPUs, and chipsets that I got sick of reading trying to catch up, and just bought an FX based on how well my old 955BE ran and how simple the lineup was to decipher. All that stuff from Intel was a huge turnoff.
 