Why have AMD APUs failed on the market?


AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Cheating as in, using a GPU to do what they can't get out of a CPU.
At the very least they could have called it something else: CPU accelerator,
Compute Unit, something. I realize and am comfortable being an oddball with
matching oddball opinions and thought processes, but I can't be the only one who thinks
it's at least a little silly. It's really like the old math coprocessors or FPU units: we used them since there wasn't anything better, but it was a relief when CPUs caught up.
And here we are again with a more modern twist.

Is PCIe not fast enough to use dGPUs for compute?
If so, that's a bit of a can of worms, no?

Co-processors are integrated into current CPUs (FPU units).
Also, you can have GPGPU with dGPUs, and they will be much faster than any APU for a long, long time.

Personally, for desktop use I can only see viable APUs up to 30-45W TDP. Higher than that is getting into CPU + dGPU territory, and that's not what the consumer wants.

For example, a 30W TDP Carrizo and a 28W TDP Broadwell are perfect for All-in-Ones, even for SFF desktop PCs. The vast majority of consumers will not need more performance than what those APUs will offer in 2015.
 

greatnoob

Senior member
Jan 6, 2014
968
395
136
At the very least they could have called it something else: CPU accelerator,
Compute Unit, something.

They are called compute units/shaders/clusters/devices/hardware (or CUDA cores) when they are being used for that purpose, and all those units combined are called an iGPU when they're being used for rendering. It's just a new part of the SoC. All sorts of things are being integrated into SoCs now (hence, "system on a chip"), yet we're not at the stage of calling desktop CPUs SoCs; soon, maybe.


It's really like the old math coprocessors or FPU units: we used them since there wasn't anything better, but it was a relief when CPUs caught up.
And here we are again with a more modern twist.

"Compute" is synonymous to doing many things ALL at once. A CPU does things in serial, so there will be no way for it to catch up, hence why they exist now and until quantum computers are thing.

Is PCIe not fast enough to use dGPUs for compute?
If so, that's a bit of a can of worms, no?

Basically, when doing compute work the load is shared across as many units as possible. Each unit does its work, then goes to rest. So in this case, having a dGPU + an APU (note: the majority of Intel's and AMD's 'CPUs' and all mobile SoCs are APUs) would help in compute. More units or compute devices mean faster compute, a bigger workload handled, or both. PCIe or not, it shouldn't matter: the more compute hardware, the better (that is, if you can split the workload across all those units in the first place, which we certainly can at the moment).
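To make "more compute devices" concrete, here's a minimal sketch using the standard OpenCL C API (my own illustration, error handling trimmed for brevity). On an APU + dGPU box, the CPU, the iGPU and the dGPU each show up as a separate device you can queue kernels to:

/* List every OpenCL compute device visible to the host. On an APU +
   dGPU system you would typically see the CPU, the iGPU and the dGPU,
   each usable as an independent compute device.
   Build (Linux): gcc list_devices.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256];
            cl_uint cus = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof cus, &cus, NULL);
            printf("platform %u, device %u: %s (%u compute units)\n",
                   p, d, name, cus);
        }
    }
    return 0;
}

Splitting a workload is then just a matter of creating a queue per device and handing each one a slice of the data, as long as the algorithm actually splits.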
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
So, when software starts using the iGPU for compute, a dGPU, which is generally dramatically faster, could be used for that just as well?
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
Co-processors are integrated into current CPUs (FPU units).

We got that with the 486, I think. I've forgotten a lot, but not everything. :sneaky:
I never looked into it, but I assumed a lot of that stuff ended up offloaded to the GPU when that got popular.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
That's correct, at least from what I've read of the OpenCL and CUDA documentation.

Huh.
Seems like they are setting themselves up for everyone wanting a dGPU not only for gaming, but for accelerating other computing tasks.
Wonder if an enterprising mobo manufacturer might not pick up on that and add an "onboard compute unit" to the board.

Which is history repeating itself yet again, in a way: onboard video from way back.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Seems like they are setting themselves up for everyone wanting a dGPU not only for gaming, but for accelerating other computing tasks.

That makes sense to me (with the exception of really small chips like Carrizo-L, which I think has a 128-SP iGPU).
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Give me a 100W TDP GPU with HBM and a 65W TDP CPU in the form of an APU at a reasonable price point, say $200-250. I would buy it in an instant.

But we would never hear the end of it...
For that price, you could have a proper 4-core i5 CPU with an unlocked multiplier. Not the entry-level two weak modules!!1!oneoneeleven!1
200 bucks for integrated graphics?! Give me a break; for $200 you can have a proper GPU, like an R9 280 3GB.

(Some) people will always judge AMD APUs by comparing their weakest spots to dedicated parts.

Just like the A8-7600 was compared to a slightly higher-priced i3 on the CPU front, and a slightly higher-priced GPU on the GPU front. Obviously it loses on both. But very few noticed that it does everything at once.

It all comes down to market perception. Hell, some will never buy AMD no matter what; does that make every possible AMD product a failure?

I think APUs will be the best thing since sliced bread when Intel has competitive iGPU performance, or an Nvidia Android APU gets into the desktop somehow.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
It all comes down to execution. Hell, some will never buy AMD no matter what; does that make every possible AMD product a failure?

I think APUs will be the best thing since sliced bread when Intel has competitive iGPU performance, or an Nvidia Android APU gets into the desktop somehow.

FTFY. Nobody argues that we are heading towards SoC integration, or that the graphics part is becoming more and more important each generation. What people argue is that AMD's execution is a failure: they are throwing more hardware at the problem and getting less performance than a dedicated solution, basically throwing away the lower-cost benefits a strategy like this is supposed to yield.

So basically, when Apple, Intel or Nvidia get the APU right, people will praise it and it will become a sales success in the PC market, as it did in the mobile market. AMD's execution is just not good enough for praise or sales.

Huh.
Seems like they are setting themselves up for everyone wanting a dGPU not only for gaming, but for accelerating other computing tasks.
Wonder if an enterprising mobo manufacturer might not pick up on that and add an "onboard compute unit" to the board.

GPU acceleration improves performance at the expense of making programming more complex and expensive. It's not every programmer's cup of tea, and there are certain portions of the code that you can't make parallel. That's why Apple, Samsung and others, despite having access to robust GPU architectures that can be used for compute, still devote sizable resources to CPU development.
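To put a number on why the serial portion matters, that's just Amdahl's law (the 90% figure below is invented purely for illustration). If a fraction p of a program parallelizes across N compute units, the overall speedup is:

S(N) = 1 / ((1 - p) + p/N)

Take a generous p = 0.9 and N = 512 shaders: S = 1 / (0.1 + 0.9/512) ≈ 9.8. Under 10x from 512 shaders, because the serial 10% dominates. That's exactly why those companies keep pouring money into fat, fast CPU cores.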
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
Seems like they are setting themselves up for everyone wanting a dGPU not only for gaming, but for accelerating other computing tasks.

There are "GPU Servers", APU's are incredible lil things, on an AMD Kaveri there is 4 x86-64 "cores", an ARM chip(For security), a DSP chip(for audio) and a GCN iGPU, that's 4 different architectures all in one chip. Most SoC's now a days have 3-4 different architectures side by side.


Sooner or later everything gets integrated; being able to run the program at all is the advantage, but "high" to "extreme" performance will always go to dedicated chips first.

But in the world of gaming there is a limit, and one day it will be reached. And then it will run on an APU.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So even with interposers allowing for two discrete dies, one for the CPU and one for the GPU, it still might be more efficient to have the two separated (the CPU by itself in the socket and the GPU on the video card).

Depends how you define efficient.

It could be cheaper due to yield. You can also mix two dies that match better together. However, in the long run it's worse than fully integrated.

Multi-chip packages aren't new.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So, when software starts using the iGPU for compute, a dGPU, which is generally dramatically faster, could be used for that just as well?

We already have software using it. And the problem is, it hasn't really expanded in ages in terms of what it can be used for. It's a hunt for fool's gold for the end user. Seven years and counting. And the most useful parts we've got are PhysX and transcoding, and the latter isn't even dependent on GPGPU.
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
All this talk about integration and I just wondered... Microsoft HoloLens has a custom "Microsoft chip"... what if that's the "secret" semi-custom customer of AMD??

ShintaiDK's "And the most useful parts we've got are PhysX and transcoding." inspired that thought. Haha
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
What people argue is that AMD's execution is a failure: they are throwing more hardware at the problem and getting less performance than a dedicated solution, basically throwing away the lower-cost benefits a strategy like this is supposed to yield.

Seems to me you're directly describing Intel Broadwell/Skylake: Intel throwing more hardware at the problem and getting less performance than AMD APUs and dedicated GPUs. :whistle:

I wonder for how much more they will be able to sell inferior products against the competition.
 

coercitiv

Diamond Member
Jan 24, 2014
6,599
13,953
136
Seems to me you're directly describing Intel Broadwell/Skylake: Intel throwing more hardware at the problem and getting less performance than AMD APUs and dedicated GPUs.
Both Intel and AMD have the same problem: they need to find an efficient way of delivering more bandwidth to their iGPUs. They can add more compute resources, increase clocks, or even build architectures with somewhat lower bandwidth requirements... but as long as the dGPU has access to faster memory, it will always come out on top.

Make no mistake: the day a GPU like the one inside Kaveri gets to play with as much bandwidth as its "standalone" brethren and finds its way in the performance charts close to something like the R7 250X, that day the title of this thread will change.
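The gap is easy to put rough numbers on (peak bandwidth = channels × bus width in bytes × transfer rate; figures assume reference clocks and are my own back-of-the-envelope, so treat them as illustrative):

Kaveri, dual-channel DDR3-2133: 2 × 8 B × 2133 MT/s ≈ 34 GB/s, shared with the CPU cores.
R7 250X, 128-bit GDDR5 at 4.5 GT/s effective: 16 B × 4500 MT/s = 72 GB/s, all to itself.

More than double the bandwidth, with none of it contended, which is why the discrete card comes out on top even at similar shader counts.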

Also, Intel may have a lot to gain if they pull this trick out of the hat first.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Also, Intel may have a lot to gain if they pull this trick out of the hat first.

Nah... everyone is waiting for AMD to burn their dollars and clear the path.

There are a lot of reasons why a good product fails on the market. AMD APUs are not the first time we've seen a superior product not being successful.

I have a Kaveri system in my house. My sister uses it. She has played The Sims 4, Dragon Age: Origins, 2 and now Inquisition, TERA Online and some other girly games. It runs pretty well at 1280x1024 with medium-high settings (depending on the game).

The system wasn't the cheapest, but I wanted an unlocked multiplier to have some fun with it as well.

I guess too many people listen to advice from people who have never seen an AMD APU system IRL, let alone used one.
 

III-V

Senior member
Oct 12, 2014
678
1
41
I guess too many people listen to advice from people who have never seen an AMD APU system IRL, let alone used one.
Anything you need to judge a piece of hardware is available online. Relying on real-world usage is absolutely stupid -- others have already done the work for you.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Anything you need to judge a piece of hardware is available online. Relying on real-world usage is absolutely stupid -- others have already done the work for you.


YACA (yet another car analogy): so according to your logic, because people review cars, you don't need to test drive them...

Or even better, no need to try a Rift because people subjectively say it makes them sick / it's the best thing since sliced bread.

It is always better to form your own opinion rather than relying on others. At the very least it isn't stupid.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
My opinion is that the current APUs are too much of an "in-between" product.

For gamers, APUs are too slow to recommend with a straight face. If a friend asks me what build he should get for gaming, I would insist that an R9 270 or GTX 750 Ti should be the minimum GPU for playing 2015 games. Even a dual-core CPU is an acceptable compromise if you otherwise wouldn't have the budget to get off integrated graphics.

On the other hand, for YouTube, office work, Facebook, web browsing, music, etc., there really is no impact from that extra graphics grunt. My office PC has a Sandy Bridge Core i5 with GT2 graphics. For this use I doubt I could tell the difference between this "slow" GPU and an APU with hundreds of GCN cores.

So where does that leave the high-end APUs? Who is the ideal user who would find a chip with 384 GCN cores useful?
 
Aug 11, 2008
10,451
642
126
Pretty hard to test drive a computer unless you want to go the somewhat devious route of "buying" several different models and returning them all. Even then, that seems like a waste of time when benchmarks for pretty much every metric are available in reviews.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
So basically, when Apple, Intel or Nvidia get the APU right, people will praise it and it will become a sales success in the PC market, as it did in the mobile market. AMD's execution is just not good enough for praise or sales.

The APU will be right when it can dynamically send tasks to the GPU compute units based on the ISA alone, without specialized software. It needs to work as seamlessly as FPU and SIMD tasks on the CPU already do.
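For contrast, here is roughly what "seamless" means today (my own sketch, nothing official). A plain C loop like this gets auto-vectorized to SSE/AVX SIMD instructions at -O3, with no API in sight:

/* Compilers auto-vectorize this to SIMD (e.g. gcc -O3 -march=native):
   the FPU/SIMD units are used transparently, driven by the ISA alone. */
void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

Getting the same loop onto the GPU's compute units, by contrast, still means explicit, specialized plumbing: create a context and a queue, copy x and y into device buffers, build and launch an OpenCL/CUDA kernel, then copy the result back. Nothing in the x86 ISA does that for you, and that's the gap.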

As for the die-stacked APUs people have been talking about, it would be interesting to make such an implementation possible.

The base die would be your x86 cores + marginal iGP + audio + chipset features

The secondary die would be a full on GPU

On top of the GPU would be HBM. The stack would still have main system memory available too via DDR4 buses through the CPU.

But this sort of system would make it possible to sell APUs with a minimal iGP or with a "dedicated-like" GPU, and perhaps even to sell the GPU separately as a graphics card. For platforms where size is a factor, the full-stack method would make it possible to fit a lot of capability into a single point of thermal and electrical connectivity. For larger platforms, consumers could get the CPU without having to pay for the GPU and add the graphics they want, or even the aforementioned GPU as a graphics card. This setup would be more appropriate for maxing out the clock, voltage and thermal headroom of the CPU and GPU.
 

III-V

Senior member
Oct 12, 2014
678
1
41
YACA (yet another car analogy): so according to your logic, because people review cars, you don't need to test drive them...

Or even better, no need to try a Rift because people subjectively say it makes them sick / it's the best thing since sliced bread.

It is always better to form your own opinion rather than relying on others. At the very least it isn't stupid.
Cars aren't even remotely relevant. You don't test drive a computer yourself -- it's the software that interfaces with your hardware. (As an aside, if you wanted to go crazy with measurements, you could very well build an accurate simulation of a car and know exactly how it handles without setting foot in one.)

Forming your own opinion of hardware without paying any attention to the numbers your software is spitting out, and comparing them to other hardware's numbers, is stupid (excluding extraneous stuff like reliability or customer service). Computer science is an objective discipline -- it's not art.

The only opinion that matters, in this context, is that of the software.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
So where does that leave the high-end APUs? Who is the ideal user who would find a chip with 384 GCN cores useful?

Mobile platforms. But most consumers don't know shit, and those that do will buy an Intel + dGPU system if they really need it, unless they are monetarily constrained. Depending on their prospective workload, though, they might still opt for Intel because of the superior CPU performance.

The only other thing is if someone like Apple were to use an AMD APU, because they could fully customize their OS and software to use its capabilities in a manner that benefits a large portion of the Mac user base (Photoshop, etc.). Had AMD managed to get that Llano win with Apple, things might have turned around much more quickly for AMD.
 

redzo

Senior member
Nov 21, 2007
547
5
81
Given the number of posters defending AMD APUs in the AMD Q4 2014 results thread, I'd like to open a new thread to explore why the APUs failed on the market. In the consumer market, AMD lost share in every single bracket where it fielded APUs, and the bleeding is yet to stop.

I'm rather curious to see the opinions of the people here on what the APU value proposition's strong points are, and why it failed on the market despite those strong points.

It's damn hard to review a smartphone or laptop CPU objectively. The overall product dictates the success of the CPU, not the CPU itself.
The laptop and smartphone buyer cares more about the overall product (looks, battery life, dimensions, weight, price) than about the hardware platform inside it.

Mobile AMD CPUs failed and will fail because there are not enough mobile products using them.

AMD's mobile quad-cores are low-power enough that they should have competed with Intel's low-power i3s. If you check the products available on the market, they don't. That's because Intel can afford to sell lower-spec alternatives at lower prices. The result is that manufacturers choose Intel's parts in the quest to maximize profit. It is hard, as Lenovo, Dell or HP, to ditch an offer like that when you profit most from high-volume sales.

AMD has and had no chance at all. Their only salvation is to precisely predict the market and penetrate future emerging markets before Intel does. Of course, if AMD had Intel's volume and technical production capabilities, as it did in the past, we wouldn't be having this discussion.
 