AMD Q414 results


beginner99

Diamond Member
Jun 2, 2009
5,231
1,605
136
What they paid for ATI has little to do with recent events - AMD acquired ATI 9 years ago, not 9 months ago. What has ruined AMD these past 5 years is the WSA with Global Foundries.

And this WSA was a consequence of buying ATI. Just remember: after buying ATI, AMD delayed and then had issues with new process nodes. Costs exploded, hence the roughly $500 million losses per quarter in 2007. Falling behind Intel on process node was the first step in making their CPUs uncompetitive, and the losses from process node development probably forced them to cut R&D as well. All in all they were in a terrible state and had to spin off manufacturing (i.e., GF) on completely unreasonable terms (the WSA).

Had they not bought ATI, or bought it at a much lower price, they might not have run into these issues and their CPUs could have stayed competitive. We don't know, but the WSA is a pretty direct consequence of the ATI acquisition.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
What can be said of all these fateful or not-fateful decisions which have come to pass for AMD is that they came to pass because of just one decision - the decision to put Hector Ruiz in as CEO.
 

beginner99

Diamond Member
Jun 2, 2009
5,231
1,605
136
Did that acquisition allow AMD to survive a few more years? Absolutely. Without a GPU, AMD would have ceased to exist a few years ago: no APU and no console design wins. Edit: they probably knew this when they bought it.

Wrong. How many CPUs today have no GPU tech?

Server CPUs don't need an iGPU and carry high margins. In fact AMD was very strong in the server market back when they had the better uArch. So instead of high-volume, low-margin console SoCs they could be selling lower-volume, high-margin server CPUs and high-end desktop parts. The $5 billion spent on ATI could have been spent on process node development instead. Whether they could have kept up with Intel in process is questionable, but if they had, they might also be producing Apple SoCs right now.

With AMD's "take or pay" requirements at GloFo, a reduction in die size won't actually reduce AMD's expenses. It will just mean they need fewer wafers to make the chips they can sell, and will have to beg/plead/etc with GF to give them another break on reducing the WSA.

Exactly. That's why the Radeon 380X is expected to be made on GF 28nm, as a really huge die (around 600 mm²) with a 300 W TDP and hybrid cooling.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,980
595
126
The $5 billion spent on ATI could have been spent on process node development instead. Whether they could have kept up with Intel in process is questionable, but if they had, they might also be producing Apple SoCs right now.
I can't think of any scenario where AMD could have kept pace with Intel in process. $5 billion is nothing to Intel; quite honestly, they paid $7.7 billion for McAfee without batting an eye.

Now, if AMD had planned in the early 2000s to go all-in on the high-end processor side of things, then maybe they would have had a chance. Maybe. The problem is that the volume is not there to support their foundry; they would need other high-volume products to fuel the fire, so to speak. Such a venture would have been very high risk. Not even IBM was able to keep up with Intel; I don't see how AMD would have had any chance at all.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
With AMD's "take or pay" requirements at GloFo, a reduction in die size won't actually reduce AMD's expenses. It will just mean they need fewer wafers to make the chips they can sell, and will have to beg/plead/etc with GF to give them another break on reducing the WSA.

Yes, that would be true if AMD sold the same quantity of product with the only difference being a smaller die size.

However, my belief is that Kaveri is a relatively uncompetitive product (to put things mildly) and AMD could do much better by reducing the iGPU and increasing the CPU, even with the WSA factored in. (That is: the die is smaller, but AMD sells more processors at a better ASP relative to the size of the die.)

However, with that mentioned, I believe AMD could do even better toward fulfilling the WSA by bringing dGPUs over to GF as well (even if it is just the lower/midrange dGPUs in the beginning). There have been some rumblings of that happening in these recent posts:

http://www.techpowerup.com/208431/amd-to-switch-to-globalfoundries-28-nm-shp-node-in-2015.html

http://www.hardocp.com/news/2015/01/02/amd_moving_28nm_gpu_process_to_globalfoundries#.VMC2sy6GX4p

Lastly, one more thing to think about is the cost to manufacture one 245 mm² 28nm die (Kaveri) vs. two smaller dies (one CPU die and one dGPU die manufactured at GF). I would think that two smaller dies would be cheaper to manufacture even if the total silicon area of the two dies combined ended up being greater than the single 245 mm² Kaveri die. With that mentioned, I am sure someone will bring up the idea that an APU product needs fewer total components (due to sharing between CPU and iGPU*)...but realize that because the dGPU has a separate cooler (and is not limited by a 100-watt socket), it can be clocked higher as well (e.g., Kaveri's iGPU is 720 MHz, but many discrete Radeons are 1000 MHz or greater). So it turns out overall performance and value still end up favoring a CPU and GPU that are separate from each other.

*With this mentioned, I still question whether using dual-channel memory (of appropriate speeds for an APU) is really a cost saver, particularly compared to other budget desktops where the user (who also runs a dGPU) typically runs only one stick in single channel.
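To make the one-big-die vs. two-smaller-dies argument concrete, here is a back-of-the-envelope sketch using a simple Poisson yield model. All the numbers (wafer cost, defect density, the 130 mm² split) are illustrative assumptions, not AMD or GF figures:

```python
# Back-of-the-envelope die cost comparison with a Poisson yield model.
# WAFER_COST, D0, and the die areas are assumptions for illustration only.
import math

WAFER_COST = 3000.0     # assumed $ per 300 mm wafer
WAFER_DIAMETER = 300.0  # mm
D0 = 0.002              # assumed defect density, defects per mm^2

def dies_per_wafer(area_mm2):
    """Common approximation: gross dies minus edge loss."""
    d = WAFER_DIAMETER
    return int(math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2))

def yield_rate(area_mm2):
    """Poisson model: Y = exp(-A * D0); larger dies yield worse."""
    return math.exp(-area_mm2 * D0)

def cost_per_good_die(area_mm2):
    good_dies = dies_per_wafer(area_mm2) * yield_rate(area_mm2)
    return WAFER_COST / good_dies

single = cost_per_good_die(245)                          # one Kaveri-sized die
split = cost_per_good_die(130) + cost_per_good_die(130)  # two smaller dies, 260 mm^2 total
print(f"one 245 mm^2 die:  ${single:.2f} per good die")
print(f"two 130 mm^2 dies: ${split:.2f} combined")
```

Because yield falls off roughly exponentially with die area, two half-size dies can come out cheaper per good part even when their combined area exceeds the single die, which is the intuition in the post above.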
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
AMD's first goal with the APUs is not gaming, it's compute (GPGPU). It takes a little longer, but eventually more and more applications will use GPUs, and that's what both AMD and Intel are aiming for. You don't believe Intel increasing the iGPU area and performance with every new generation is about gaming, do you??
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
With AMD's "take or pay" requirements at GloFo, a reduction in die size won't actually reduce AMD's expenses. It will just mean they need fewer wafers to make the chips they can sell, and will have to

The WSA commitment only matters when AMD can't fulfill the wafer volumes contracted with GlobalFoundries. In the past, when AMD's business was around $2.5-3 billion per year, it would have made sense to reduce die size. As of now, the CPU business is way too small for the WSA, putting AMD in the situation you described.

Maybe if AMD can move all the console business to GF it can fulfill the WSA quota, and then the relationship can resume in a more normal fashion.
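The take-or-pay mechanics being discussed can be sketched in a few lines; the commitment volume and wafer price below are made-up numbers, not the actual WSA terms:

```python
# Toy take-or-pay model: AMD pays for the committed volume whether or not
# it needs that many wafers. Numbers are hypothetical, not the real WSA.
COMMITTED_WAFERS = 100_000
PRICE_PER_WAFER = 3_000.0

def wafer_bill(wafers_needed):
    """Bill and shortfall under take-or-pay: pay for max(needed, committed)."""
    paid_for = max(wafers_needed, COMMITTED_WAFERS)
    shortfall = max(COMMITTED_WAFERS - wafers_needed, 0)
    return paid_for * PRICE_PER_WAFER, shortfall

# Shrinking dies halves the wafers needed, but the bill does not move:
bill_a, short_a = wafer_bill(80_000)
bill_b, short_b = wafer_bill(40_000)
print(bill_a == bill_b, short_a, short_b)  # bill unchanged; shortfall grows
```

This is why a smaller die only increases the shortfall AMD must renegotiate rather than cutting expenses, while moving more volume (e.g., consoles) to GF attacks the problem from the other side.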
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You don't believe Intel increasing the iGPU area and performance with every new generation is about gaming, do you??

But why does AMD need to pursue such a large iGPU for desktop when they have a rather large node disadvantage and already sell dGPUs?

Also..... GPGPU can be done with discrete GPUs, so it's not as if having the CPU and GPU separate prevents it.

Also, please take note of what I wrote in the second part of my post above (#255) with regard to the cost of one large die vs. two smaller dies:

cbn said:
Lastly, one more thing to think about is the cost to manufacture one 245 mm² 28nm die (Kaveri) vs. two smaller dies (one CPU die and one dGPU die manufactured at GF). I would think that two smaller dies would be cheaper to manufacture even if the total silicon area of the two dies combined ended up being greater than the single 245 mm² Kaveri die. With that mentioned, I am sure someone will bring up the idea that an APU product needs fewer total components (due to sharing between CPU and iGPU*)...but realize that because the dGPU has a separate cooler (and is not limited by a 100-watt socket), it can be clocked higher as well (e.g., Kaveri's iGPU is 720 MHz, but many discrete Radeons are 1000 MHz or greater). So it turns out overall performance and value still end up favoring a CPU and GPU that are separate from each other.

*With this mentioned, I still question whether using dual-channel memory (of appropriate speeds for an APU) is really a cost saver, particularly compared to other budget desktops where the user (who also runs a dGPU) typically runs only one stick in single channel.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Server CPUs don't need an iGPU and carry high margins. In fact AMD was very strong in the server market back when they had the better uArch. So instead of high-volume, low-margin console SoCs they could be selling lower-volume, high-margin server CPUs and high-end desktop parts. The $5 billion spent on ATI could have been spent on process node development instead. Whether they could have kept up with Intel in process is questionable, but if they had, they might also be producing Apple SoCs right now.



Exactly. That's why the Radeon 380X is expected to be made on GF 28nm, as a really huge die (around 600 mm²) with a 300 W TDP and hybrid cooling.

This.

Although this graph is a bit dated, AMD's server market share has not really budged since BD launched (also called flat-lining). AMD's uArch with the initial Opterons (A64) pushed their market share from <5% to >25% in a few years. That was incredible. AMD had amazing innovations like HyperTransport vs. Intel's FSB, plus other advantages for server applications. They were efficient, fast, and competitively priced. More importantly, server margins are HIGH and continue to generate revenue after the initial sale (parts, support, etc.), plus you have an 'in' with the company and can continue to sell them more solutions and so forth.

All that came crashing down with BD. Intel's Core obviously made a difference here, but Intel was not up to par with AMD on the server side until closer to 2008/2009, when the FSB was retired.

This is obviously armchair CEOing, but AMD should have doubled down on the server market and done everything in their power to (1) keep that business intact and continue selling/replacing servers, and (2) make Bulldozer a server-focused solution: efficient, scalable, with the best IPC they could get. Unfortunately, BD was inefficient and tried pushing MT processing down to markets that didn't need it yet. IPC (not MT) should have been the focus.

http://notablecalls.blogspot.com/2011/02/advanced-micro-devices-nyseamd-upgrade.html

On a related note, I agree with the other poster here that maybe AMD sells off their CPU division/patents and essentially morphs back into ATI again, a purely graphics company. They have a solid GPU business and amazing graphics IP, and they could go head-to-head with NV in more areas without being dragged down by their horribly managed CPU division.

The caveat here is that they would need to completely clean house with management and executives and start with a competent team. Also, do whatever they can to rid themselves of that terrible WSA and become competitive again, be that with TSMC, Samsung, GF, etc.
 

DrMrLordX

Lifer
Apr 27, 2000
22,035
11,620
136
Throttling is relatively minor, and likely exists so that the chips can fit in their power target.

Bing bing bing, we have winnar. The throttling happens irrespective of CPU temperature/thermal margin/socket temperature for that reason. That's why it's so easy to stop the behavior, no matter what HSF you're using.

When FurMark was used in this review, the CPU cores on the 65-watt A8-7600 dropped down to 2.4 GHz while running Prime95.

Granted, that is FurMark...but that is a lot of throttling.

The A10 Kaveri APUs are also known to throttle considerably.

Duh, the A8-7600 is going to throttle to 2.4 GHz under any iGPU load. It's programmed to do that. The A10-7700K throttles to 2.8 GHz, and the A10-7850K throttles to 3 GHz. Just look at the P5 states and you'll know what happens when you run FurMark, any Unigine bench, 3DMark, etc.

However, my belief of Kaveri is that it is a relatively uncompetitive product (to put things mildly) and AMD could do much better by reducing iGPU and increasing CPU even with the WSA factored in.

The 3-CU chips are actually pretty nice and well balanced. The 7650K is going to launch at ~$100, which is a great deal for 2M Steamroller + 3 CU GCN. Mantle/DX12 should be great on those chips. For a sneak preview of what I mean, overclock the heck out of one, put in a 290X, and then run the GPGPU benchmarks under AIDA64. Watch it use the iGPU and the 290X at the same time . . .
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
This.

This is obviously arm-chair CEOing, but AMD should have doubled-down on the server market and done everything in their power to (1) keep their business intact and continue selling/replacing servers and (2) Bulldozer should have been a solution focused for servers. That means efficient, scalable, and the best IPC they can get. Unfortunately, BD was inefficient and tried pushing MT processing down to markets that didn't need it yet. IPC (not MT) should have been the focus.

It was a server-focused design, just a poor one. Dirk said, "let's build a better Pentium 4 with IBM POWER features." If they were focusing on desktops they wouldn't have been worried about multi-threading at all.

The 3-CU chips are actually pretty nice and well balanced. The 7650K is going to launch at ~$100, which is a great deal for 2M Steamroller + 3 CU GCN. Mantle/DX12 should be great on those chips. For a sneak preview of what I mean, overclock the heck out of one, put in a 290X, and then run the GPGPU benchmarks under AIDA64. Watch it use the iGPU and the 290X at the same time . . .

Something like the 7650K should have been a launch SKU. It's as if they have yet to come to terms with APUs being mainly a $100-or-less product.
 
Last edited:

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
What can be said of all these fateful or not-fateful decisions which have come to pass for AMD is that they came to pass because of just one decision - the decision to put Hector Ruiz in as CEO.

Exactly! If your CEO requests work in a specific area of the business, it's the team's responsibility to follow through on the request (as long as it's ethical). That is why the CEO is paid the big $$$.

I still think a strong CIO should be able to convince the CEO of whether a strategic move is good or bad for the business.
 
Last edited:

GreenChile

Member
Sep 4, 2007
190
0
0
What can be said of all these fateful or not-fateful decisions which have come to pass for AMD is that they came to pass because of just one decision - the decision to put Hector Ruiz in as CEO.
I'm not so sure I completely agree with that statement. The company had a long history of waste during Jerry Sanders's time. I believe decisions made long ago prevented AMD from becoming the company it should have been. Ruiz made things much worse, though.

A very interesting read on this subject:
http://arstechnica.com/business/2013/04/the-rise-and-fall-of-amd-how-an-underdog-stuck-it-to-intel/
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
It was a server-focused design, just a poor one. Dirk said, "let's build a better Pentium 4 with IBM POWER features." If they were focusing on desktops they wouldn't have been worried about multi-threading at all.


Something like the 7650K should have been a launch SKU. It's as if they have yet to come to terms with APUs being mainly a $100-or-less product.

I'm not so sure about that. On the server side, it's more about scalability at the platform level rather than a single CPU. The uArch design should support bandwidth between nodes/racks/sockets/etc. and scale to the moon. The MT capability of a single CPU matters less than the sum of the parts. I still think better IPC, while leveraging the strength of their advantage at the time (HyperTransport), would have been better, and it would have scaled down better to desktops too, IMHO. MT is great, but IPC is still so important across many different server use cases.

I agree on what you say they set out to build. Unfortunately, they got exactly that (a P4-type CPU with poor IPC) and missed on the power delivery entirely.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
I'm not so sure about that. On the server side, it's more about scalability at the platform level rather than a single CPU. The uArch design should support bandwidth between nodes/racks/sockets/etc. and scale to the moon. The MT capability of a single CPU matters less than the sum of the parts. I still think better IPC, while leveraging the strength of their advantage at the time (HyperTransport), would have been better, and it would have scaled down better to desktops too, IMHO. MT is great, but IPC is still so important across many different server use cases.

A big seller for servers is throughput; think of the timeframe in which Bulldozer was being designed. IMO, Dirk thought Intel would fix the P4's issues over time; he didn't see the whole Core 2 thing coming. Less faith in his own company's choices, influenced by IBM POWER and Sun SPARC pursuing similar multi-threading schemes. Bulldozer was firmly focused on servers, but how Dirk and his team thought the server landscape would look 5-10 years from the mid-2000s did not play out well at all. Not sure where the thought that they'd have better foundry success than Intel (i.e., high clock targets) came from: faith in Abu Dhabi backing, combined with continuing to trust IBM as lead of their foundry group?

It may even explain why AMD was so slow to get the APU ball rolling, with funds directed at their 'server dominating' chip. A bit of their own Itanium, but one using up a bigger chunk of company resources than Intel's misadventure did. Is it ironic or just funny that x86-64 secured Intel's profits better than AMD's?

John Byrne's description, shortly before he was let go, of how VP-heavy AMD was all the way into 2014 speaks volumes about why they've had such a tough time being nimble.
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
A big seller for servers is throughput; think of the timeframe in which Bulldozer was being designed. IMO, Dirk thought Intel would fix the P4's issues over time; he didn't see the whole Core 2 thing coming. Less faith in his own company's choices, influenced by IBM POWER and Sun SPARC pursuing similar multi-threading schemes. Bulldozer was firmly focused on servers, but how Dirk and his team thought the server landscape would look 5-10 years from the mid-2000s did not play out well at all. Not sure why they thought they'd have better foundry success than Intel; definitely some hubris involved.

It may even explain why AMD was so slow to get the APU ball rolling, with funds directed at their 'server dominating' chip. A bit of their own Itanium, but one using up a bigger chunk of company resources than Intel's misadventure did. Is it ironic or just funny that x86-64 secured Intel's profits better than AMD's?

Good points all-around, especially with the last one.

The server market share is what disappoints me the most about AMD. They obviously designed the 'Hammer' CPUs to be amazing server performers, including the IMC and HyperTransport. Intel didn't have the same types of technologies across the board until the i7 came out. AMD did reap a LOT of market share (25% server), which is even more impressive when you consider they never really had any server market to speak of before that... Looking back at the server benches AT used to do in 2002-2006, AMD just pwnd Intel in a good 70-80% of the tests. Too bad they took their foot off the gas and switched to a new (slower) car. :/
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Pure conjecture here.....but could anyone see Intel throwing AMD a lifeline here, aka Microsoft doing the same for Apple in the 90s?

'The enemy you know is better than the enemy you can imagine?'
 

pablo87

Senior member
Nov 5, 2012
374
0
0
And this WSA was a consequence of buying ATI. Just remember. After buying ATI, AMD delayed and then had issues with new process nodes. Cost exploded and hence the 500 mio loses per quarter in 2007. Being behind in process node compared to intel was first step in making their CPUs uncompetitive. Plus the loses from process node development probably forced them to lower R&D costs as well. All in all they were in a terrible state and had to split off manufacturing aka GF at completely unreasonable conditions (WSA).

Had they not bought ATI or at a much lower price they might have not run into these issues and their CPUs could have stayed competitive. We don't know but the WSA is a pretty direct consequence of the ATI acquisition.

Only to the extent that the same management team and board of directors were behind both decisions.

Otherwise, completely unrelated. As explained in a prior post, ATI on its own, with competent management, would have generated billions in cash flow over the last 9 years. In fact, if the BoD and CEO had any sense at all, they would have set up AMD as a holding company with two entities, processors and ATI, and that WSA would have been solely under processors, since ATI had no need for it.
 

positivedoppler

Golden Member
Apr 30, 2012
1,137
226
106
Pure conjecture here.....but could anyone see Intel throwing AMD a lifeline here, aka Microsoft doing the same for Apple in the 90s?

'The enemy you know is better than the enemy you can imagine?'

Not a chance. Intel's assault on the low end over the past few years is what has AMD gasping for air. The strategy will backfire on Intel. AMD has in turn decided to focus on the mid-range while consumers are being conditioned into realizing that low-end, low-margin is good-enough computing. Intel is resting on a house of cards that will collapse in the next two years.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Pure conjecture here.....but could anyone see Intel throwing AMD a lifeline here, aka Microsoft doing the same for Apple in the 90s?

'The enemy you know is better than the enemy you can imagine?'

They already do this by keeping their prices higher (they could add contra-revenue to entry-level desktop CPUs) and by not releasing an unlocked i3.
 

pablo87

Senior member
Nov 5, 2012
374
0
0
I want to make sure I understand the last sentence right, as I am not into the stock option market. Please explain?

A put is the option, i.e. the right, to sell a stock at a set price. So if you buy a June 2015 AMD put with a strike price of $2, you could sell AMD shares at $2 up until June 22 (there is a specific expiry date, can't remember exactly), even if by then AMD shares were selling for 10 cents. Of course, if AMD's stock price rises to $2.50, you'll sell at market and let your put expire, as it is now out of the money and thus worthless.

Speculators who believe a stock price is on its way down do this. Shareholders who want to limit their losses do this; and if you think the company is going bankrupt by June, you would do this. I think it's also used as a meme: instead of saying the company is going bankrupt, someone will say lots of June puts are being bought, which, if a stock is at $2, can only mean one thing.

The reason options are popular is leverage. You can make a lot of money on a small investment, and in the case of puts, it's safer than selling short, which requires borrowing shares.
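The mechanics described above can be written out as a tiny payoff function; the strike and stock prices are the post's examples, while the $0.20 premium is a made-up number for illustration:

```python
# Per-share profit/loss of a bought put held to expiry.
# Strike and stock prices follow the post's example; the premium is assumed.
def put_payoff_per_share(strike, stock_price, premium):
    intrinsic = max(strike - stock_price, 0.0)  # right to sell at the strike
    return intrinsic - premium

# June $2 put bought for a hypothetical $0.20 premium:
print(put_payoff_per_share(2.00, 0.10, 0.20))  # stock at $0.10: ~$1.70 profit per share
print(put_payoff_per_share(2.00, 2.50, 0.20))  # stock at $2.50: expires worthless, lose the premium
```

Note the asymmetry that makes puts "safer" than a short sale: the most you can lose is the premium paid, whereas a short position's loss is unbounded as the stock rises.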

The $200MM WSA forgiveness is a red flag: if on December 3rd Kumar is saying they're taking the entire $1.2B, and six weeks later it's $1B and the 2014 WSA is now closed, what happened to the difference? I mean, that's a lot of wafers, no? At $3,000 each, isn't that something like 60,000+ wafers? Or did AMD just not take delivery, and it's coming in Q1?
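As a sanity check on the wafer math, taking the ~$3,000-per-wafer figure in the post at face value:

```python
# $200MM of WSA forgiveness expressed in wafers at an assumed $3,000 each.
forgiveness_usd = 200_000_000  # the $1.2B promised vs. $1.0B taken
cost_per_wafer = 3_000
print(forgiveness_usd // cost_per_wafer)  # 66666 wafers, consistent with "60,000+"
```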
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Duh, the A8-7600 is going to throttle to 2.4 ghz under any iGPU load. It's programmed to do that. The A10-7700k throttles to 2.8 ghz, and the A10-7850k throttles to 3 ghz. Just look at the P5 states and you'll know what happens when you run Furmark, any Unigine bench, 3dMark, etc.

I'm not aware of other chips throttling the CPU so far below stated specifications (A8-7600 @ 65 watts: 3.1 GHz base clock, 3.8 GHz turbo) when using a stock cooler.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Throttling is relatively minor, and likely exists so that the chips can fit in their power target.

Bing bing bing, we have winnar. The throttling happens irrespective of CPU temperature/thermal margin/socket temperature for that reason. That's why it's so easy to stop the behavior, no matter what HSF you're using.

Maybe AMD should have changed the spec on the A8-7600 to read a 2.4 GHz base clock, then.

It would seem to me that a chip should be able to maintain its base clock under iGPU load when using a factory-supplied cooler matched to the chip.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Maybe AMD should have changed the spec on the A8-7600 to read a 2.4 GHz base clock, then.

It would seem to me that a chip should be able to maintain its base clock under iGPU load when using a factory-supplied cooler matched to the chip.

Isn't maximizing GPU performance the desired outcome when stressing the GPU? Don't Intel's mobile chips behave similarly when the GPU is stressed?
 