Best possible scenario for post-Bulldozer AMD x86 CPUs?


raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Maybe. I remember the K8 days and AMD was a very impressive, vibrant company. Were it not for a poor experience that I had with the VIA KT-300 chipset with my Athlon XP 2200+ (remember those PR ratings?), I probably would have moved to K8 based systems -- but I was too cautious (and too broke) to risk another system that had issues.

Now, to your point, I would say that AMD's challenge is more difficult today because its R&D budget is smaller relative to Intel's than it ever has been, it's reliant on the foundries for manufacturing (and they will be charging more for their latest nodes), and the costs to design chips on these latest nodes are higher than ever.

It's just not as simple as "oh, we know the architectures we've put out over the last 7 years haven't been competitive, but we know what we need to do now -- honest". At least in my humble opinion.

It's the same reason that Intel can't just "catch up" to Qualcomm in cellular modems with just a generation or two of investment, and why Intel can't just "catch up" with AMD and NVIDIA on graphics IP quality in just a generation -- and Intel is a company that's spending money on this stuff like mad!

It's not as if AMD does not know how to make good processor architectures. The AMD K7 and K8 were both excellent CPU architectures. In recent times, AMD's Cat cores have all been good designs. Mullins and Beema are good, competitive products. Unfortunately, the high end has been the real problem. AMD's competitiveness went from bad with K10 (Barcelona) to horribly worse with Bulldozer. AMD is paying the price now for the failures of 2007-2011. The lost market share and revenue in servers are a clear indicator of the enormity of the failure. The efforts being put in now on the two high-performance cores (K12 ARMv8 and x86-64) will show results in 2016 and later.
 
Mar 10, 2006
11,715
2,012
126
I see that I didn't make this sentence clear enough. I mean that TSMC knows just as much about Intel's 14nm process as we do, which isn't too much.

So if they say their performance will be equal to Intel's 14nm, that isn't based on any facts.

Here's TSMC's claim (the one that raghu78 appears to be referencing):

Michael Chou - Deutsche Bank AG, Research Division
I actually have a follow-up question on 16-nanometers because you mentioned you think the reasons 16-nanometers [indiscernible] regarding the competition. So what's your [indiscernible] be versus [indiscernible] 14-nanometer FinFET? Would you be [indiscernible] versus competitors 14-nanometer FinFET devices [indiscernible]?

Mark Liu - Co-Chief Executive Officer and Co-President
In the transistor design, the speed and power are -- is convertible. So from our intelligent -- our 16-FinFET plus technology, with 15% improvement on top of 16-FinFET, is about the same as Intel's transistors. So that is what we are targeting at, and customer can convert that speed to power consumption, so -- okay.

http://seekingalpha.com/article/194...sses-q4-2013-results-earnings-call-transcript
 
Mar 10, 2006
11,715
2,012
126
It's not as if AMD does not know how to make good processor architectures. The AMD K7 and K8 were both excellent CPU architectures. In recent times, AMD's Cat cores have all been good designs. Mullins and Beema are good, competitive products. Unfortunately, the high end has been the real problem. AMD's competitiveness went from bad with K10 (Barcelona) to horribly worse with Bulldozer. AMD is paying the price now for the failures of 2007-2011. The lost market share and revenue in servers are a clear indicator of the enormity of the failure. The efforts being put in now on the two high-performance cores (K12 ARMv8 and x86-64) will show results in 2016 and later.

K7 was more or less bought from NexGen, and K8 built on top of that. Cat cores have been OK, but Mullins and Beema really haven't been that competitive, particularly with respect to power consumption.

AMD had led people to believe that Temash would find its way into tablets, but I am struggling to find them on the shelves. Today's AMD mobile play is Mullins, but the only designs available are pretty awful cheap notebooks.

If just one OEM puts out a design that is comparable in battery life, performance, and portability to something like the ASUS T100, then we'd have a good starting point for that discussion. However, it seems to me that these designs are good for winning PC performance benchmarks, but that's where those platforms' competitiveness comes to an end.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,692
136
Maybe. I remember the K8 days and AMD was a very impressive, vibrant company. Were it not for a poor experience that I had with the VIA KT-300 chipset with my Athlon XP 2200+ (remember those PR ratings?), I probably would have moved to K8 based systems -- but I was too cautious (and too broke) to risk another system that had issues.

I take it you mean the KT333, which was known as a problem child. The general rule back then was not to touch a VIA chipset with a ten-foot pole until the A revision was out. Those were pretty stable, though suffering from poor drivers.

Another problem was board quality, VIA chipsets were mostly used on low-cost boards. I almost feel sorry you didn't experience the superb nForce2 in all its glory...
 
Mar 10, 2006
11,715
2,012
126
I take it you mean the KT333, which was known as a problem child. The general rule back then was not to touch a VIA chipset with a ten-foot pole until the A revision was out. Those were pretty stable, though suffering from poor drivers.

Another problem was board quality, VIA chipsets were mostly used on low-cost boards. I almost feel sorry you didn't experience the superb nForce2 in all its glory...

You are correct! The KT333! It's been a long time.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
It's not as if AMD does not know how to make good processor architectures. The AMD K7 and K8 were both excellent CPU architectures.

Those were not native AMD architectures. Both leaned heavily on DEC IP and design teams. The only good CPU architecture designed top to bottom inside AMD was Bobcat; the rest were either crap (K6, Bulldozer, Jaguar) or IP cloned or acquired from other players (386, 486, K6-II, K7, K8...).

AMD designing something competitive when building upon native IP and technical expertise is the exception; that's why I expect the post-CMT architecture to have a much narrower scope than previous projects, in order to minimize scope risk and delays.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
K7 was more or less bought from NexGen, and K8 built on top of that. Cat cores have been OK, but Mullins and Beema really haven't been that competitive, particularly with respect to power consumption.

Do you have a power consumption or battery life test from a reputable site to support this conclusion? If not, there is nothing yet that confirms it either way. The rated TDP for the A6-6310 and A8-6410 is 15W, and both these chips are 20% faster in CPU performance than the Bay Trail N3510 and close to 1.7x - 2x the GPU performance. Mullins is on par with Baytrail Z3770 CPU performance and close to twice the GPU performance for a rated 4.5W TDP.
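For what it's worth, the only arithmetic those rated figures support is perf per rated watt, and rated TDP is a design ceiling rather than measured platform power, which is exactly why battery life tests are needed. A minimal sketch, assuming the performance ratios claimed above and the 7.5W rated TDP of the Bay Trail-M Pentiums:

Code:
# Perf per rated watt, computed only from the claimed figures above.
# Rated TDP is a design ceiling, not measured power draw, so treat this as a rough bound.
beema_tdp_w, baytrail_m_tdp_w = 15.0, 7.5
beema_cpu, baytrail_cpu = 1.2, 1.0      # the "20% faster" CPU claim
beema_gpu, baytrail_gpu = 1.85, 1.0     # midpoint of the "1.7x - 2x" GPU claim

print("CPU perf per rated watt:", beema_cpu / beema_tdp_w, "vs", baytrail_cpu / baytrail_m_tdp_w)
print("GPU perf per rated watt:", beema_gpu / beema_tdp_w, "vs", baytrail_gpu / baytrail_m_tdp_w)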


AMD had led people to believe that Temash would find its way into tablets, but I am struggling to find them on the shelves. Today's AMD mobile play is Mullins, but the only designs available are pretty awful cheap notebooks.

If just one OEM puts out a design that is comparable in battery life, performance, and portability to something like the ASUS T100, then we'd have a good starting point for that discussion. However, it seems to me that these designs are good for winning PC performance benchmarks, but that's where those platforms' competitiveness comes to an end.
Awful OEM designs for AMD products have been a problem for many years. That has less to do with AMD's product competitiveness and more to do with PC OEMs fearing Intel's clout and market power. It's not a perfect world. AMD's best option to get out of that rut is to design a good product (like Nvidia does with the Shield K1 tablet) and sell it through their partner Sapphire (who already do so for FirePro GPUs).
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
If AMD is able to make a 14nm product in 2016, then they will be very competitive. It will be the first time in years that they will have a process node very close to Intel's in electrical parameters for low-power IC designs.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Those were not native AMD architectures. Both leaned heavily on DEC IP and design teams. The only good CPU architecture designed top to bottom inside AMD was Bobcat; the rest were either crap (K6, Bulldozer, Jaguar) or IP cloned or acquired from other players (386, 486, K6-II, K7, K8...).

AMD designing something competitive when building upon native IP and technical expertise is the exception; that's why I expect the post-CMT architecture to have a much narrower scope than previous projects, in order to minimize scope risk and delays.

Crap relative to what... Intel Nehalem? So because Intel has a better uarch, that means AMD's is crap?
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Mullins is on par with Baytrail Z3770 CPU performance and close to twice the GPU performance for a rated 4.5W TDP.

There's a lot of unknown stuff about this chip, so I wouldn't rush into Bay Trail-T comparisons just yet. Why were there no power consumption tests back when AMD revealed Mullins? Why was the reference design a bulky 11.6'' tablet that looks a lot more like a Surface Pro competitor (running 11.5-15W Haswell chips) than a cheap thin'n'light 7-10'' Bay Trail-T competitor? Also, even if AMD's relationship with OEMs isn't the best, I find it strange that MSI and other partners still haven't launched or announced a single tablet based on it, three months after it was revealed.

PS: The Pentium N3510 is old news. The Pentium N3530 (launched in Q1) runs at up to 2.58GHz with a ~20% higher iGPU clock than the older model, at the same 7.5W TDP. I'm hopeful that Braswell will improve iGPU performance a lot on Intel's low-end desktop/mobile chips, but we're half a year away from it.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Crap relative to what... Intel Nehalem? So because Intel has a better uarch, that means AMD's is crap?
Crap relative to other products on the market that could make money for their companies. The only victory that matters is the commercial victory, and CPU designs from AMD, with the exception of Bobcat, were really underwhelming from the commercial POV.
 

carop

Member
Jul 9, 2012
91
7
71
However, there was a post from liahos1 on this very forum that you may want to read on this topic. He got ahold of what seems to have been a very interesting research note from a sell-side shop that did an investor tour with Intel's Bill Holt.

Thanks for the heads up. I tend to look at major conferences for process/device specs of new nodes. I rarely, if at all, look at IDF for new node specs. Hopefully, there will be a paper at IEDM 2014 explaining process/device specs.

As always there is a fair amount of spinning in that post from "liahos1":

So as to give credit where it is due, I will only point out that Inverse Lithography Technology (ILT) was first defined in the following paper 8 years ago:

http://spie.org/Publications/Proceedings/Paper/10.1117/12.658876

The authors later started up a company providing computational lithography solutions. They have been acquired by KLA-Tencor:

http://www.luminescenttechnologies.com/index.php

Mentor Graphics markets an ILT tool called pxOPC. Synopsys has Inverse Explorer.

Thanks again.
 
Mar 10, 2006
11,715
2,012
126
Thanks for the heads up. I tend to look at major conferences for process/device specs of new nodes. I rarely, if at all, look at IDF for new node specs. Hopefully, there will be a paper at IEDM 2014 explaining process/device specs.

As always there is a fair amount of spinning in that post from "liahos1":

So as to give credit where it is due, I will only point out that Inverse Lithography Technology (ILT) was first defined in the following paper 8 years ago:

http://spie.org/Publications/Proceedings/Paper/10.1117/12.658876

The authors later started up a company providing computational lithography solutions. They have been acquired by KLA-Tencor:

http://www.luminescenttechnologies.com/index.php

Mentor Graphics markets an ILT tool called pxOPC. Synopsys has Inverse Explorer.

Thanks again.

You're very welcome
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
If AMD is able to make a 14nm product in 2016, then they will be very competitive. It will be the first time in years that they will have a process node very close to Intel's in electrical parameters for low-power IC designs.

Will they?

Even if we somehow imagine that the nodes are similar, then the next problem is that AMD doesn't get reduced transistor cost compared to 28nm.

So that would mean that AMD can only implement a third to a quarter of the transistors for the same cost as Intel.
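A rough back-of-envelope illustration of that claim; every figure below is an assumption for the sake of the sketch, not a published wafer price or density number:

Code:
# Illustrative cost-per-transistor sketch; all numbers are assumptions.
wafer_cost_28nm = 1.0        # normalized 28nm foundry wafer cost
wafer_cost_14nm = 1.9        # assumed FinFET wafer premium over 28nm
density_gain_14nm = 1.9      # assumed transistors/mm^2 gain over 28nm

# Foundry customer: if the wafer premium roughly cancels the density gain,
# cost per transistor stays about where it was at 28nm.
foundry_cost_per_transistor = wafer_cost_14nm / density_gain_14nm    # ~1.0x of 28nm

# Intel claims its cost per transistor keeps falling each node; assume ~35%
# per node over two nodes (32nm -> 22nm -> 14nm).
intel_cost_per_transistor = 1.0 * 0.65 * 0.65                        # ~0.42x

print(foundry_cost_per_transistor / intel_cost_per_transistor)       # ~2.4x gap

Whether that works out to roughly 2x or to the 3x-4x claimed above depends entirely on the assumed wafer premium, density gain, and cost curve.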
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Getting back to the original question, for some segments of the market, there's no reason I can see why not.

Assuming the new design is more efficient and gets stacked memory is very reasonable, and that definitely improves things an awful lot. It likely won't be the absolute best CPU, but it doesn't really need to be -- just competitive.

I really can't see why they shouldn't be able to do a 'fast enough' ~45W CPU by then -- the target for this really has not been moving fast! -- and with unrestricted iGPU performance they could carve out a few nice niches. They could round up an awfully large chunk of the current low/mid-end dGPU market for starters.

I could certainly imagine using one, when I can't really now. Heck, if we're into best-possible-dreaming territory, you could have Apple switching over to get custom designs with a really big/fast iGPU. They're definitely into that sort of thing, so it's not an entirely outlandish thought for some of their desktop designs at least.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,938
408
126
Then the next problem is that AMD doesn't get reduced transistor cost compared to 28nm.

You have not shown any evidence that this is definitely the case yet, or that things will be different for AMD than for everyone else. Answer the questions and provide the info requested in this post to start with; then we might make some judgment based on that, because so far it's just speculation from your side.
 

Hans de Vries

Senior member
May 2, 2008
321
1,018
136
www.chip-architect.com
It isn't a bold statement.

We have Intel 32 and TSMC/Samsung 28.

Intel increases density and adds FinFET and calls it 22.
TSMC/Samsung increases density and then adds FinFETs afterwards, calls it 20 and then 16/14.

See what they did? 20nm should have been called 20LP and the FinFET version should have been called 20HP, just like they did with 28nm.

No, what TSMC/IBM/Samsung/GF said is:

We call these processes 14nm/16nm because Intel deviated from the official ITRS process naming rules, and Intel marketing/shareholders always use this against us to claim they are many, many years ahead.

Indeed, based on BEOL density, Intel's processes should have been called differently according to the official ITRS naming standards:

Intel's 65nm should have been called 80nm instead.
Intel's 45nm should have been called 55nm instead.
Intel's 32nm should have been called 39nm instead.

You can see this in the graph below:

- The blue squares show the official metal 1 half-pitches according to the ITRS.
- The purple circles and squares are the actual BEOL metal 1 half-pitches of Intel's processes.

You can see that, going from 90nm to 65nm, Intel improved the BEOL metal 1 half-pitch by only 5%. They did exactly the same thing then as TSMC/IBM/Samsung/GF are doing now.

It is due to the ignorance of the general public, which has no knowledge of the official process naming rules, that TSMC/IBM/Samsung/GF are forced to deviate from the official naming rules in the same way Intel did.

They have no other choice, whether they like it or not; the ignorance will never go away, as is demonstrated time after time on this forum, for instance.
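A quick consistency check on those renamed figures, using only the numbers given above:

Code:
# The "should have been" names claimed above versus Intel's marketing names.
intel_names      = [65, 45, 32]   # nm, Intel's node names
itrs_style_names = [80, 55, 39]   # nm, the ITRS-style names claimed above

for marketing, itrs in zip(intel_names, itrs_style_names):
    print(f"{marketing}nm vs {itrs}nm -> offset {itrs / marketing:.2f}x")
# Every pair comes out to roughly a 1.2x offset, i.e. the gap between the
# marketing name and the ITRS-style name is about the same at each node.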


 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
You have not shown any evidence that this is definitely the case yet, or that things will be different for AMD than for everyone else. Answer the questions and provide the info requested in this post to start with; then we might make some judgment based on that, because so far it's just speculation from your side.

You are always asking for evidence, but where is your evidence that the transistor cost will be lower at 20nm and below? Everything points in the other direction, so show me your evidence.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,938
408
126
22nm has up to 2x lower power consumption than 32nm; Broadwell-Y consumes 0.7x the power of Haswell-Y.

We're discussing AMD's next-gen desktop APU. So did Intel really provide 4x lower power consumption on desktop CPUs going from 32 to 14 nm? Not even close.

Also, Intel has said Broadwell reduces power consumption by "up to 30%". I.e. that's under optimal conditions, so you should expect less in most cases, and certainly on desktop.

But since you said "up to 4x" lower consumption and thought that was OK, why not say "up to 10x" instead and hope for the best?
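To spell out the compounding here (the 0.5x figure is the "up to 2x" best case being quoted; the 0.85x figure is purely an assumed, more typical desktop improvement):

Code:
# Why "up to" per-node figures don't compound into a guaranteed desktop result.
best_case_per_node = 0.5          # the "up to 2x lower power" best case
assumed_typical_per_node = 0.85   # assumed typical desktop-frequency improvement

print("Two full nodes, best case:", best_case_per_node ** 2)              # 0.25x -> the "4x" headline
print("Two full nodes, assumed typical:", assumed_typical_per_node ** 2)  # ~0.72x, nowhere near 4x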
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Will they?

Even if we somehow imagine that the nodes are similar, then the next problem is that AMD doesn't get reduced transistor cost compared to 28nm.

So that would mean that AMD can only implement a third to a quarter of the transistors for the same cost as Intel.

You have been talking about transistor cost for the last two days; OK, give us a manufacturing cost analysis of an Intel 14nm 100mm^2 die against a Samsung 14nm 100mm^2 die.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,938
408
126
Density does matter. 20FF will not be less expensive because it does not have a higher density. 14nm will be ~2.2x more dense according to Intel, while wafer cost increases less rapidly.

What matters is absolute transistor density (transistors/mm^2), not how much it has improved compared to some previous node. IIRC, Intel's 22 nm wasn't that dense for its node, though GF's 28 nm was. So Intel has to improve density more going from 22 -> 14 nm than Samsung/GF has to going from 28 -> 14 nm to catch up at 14 nm.

I.e. in the end what matters is how the absolute transistor density for Intel's 14 nm compares to Samsung/GF's 14 nm. Can you provide numbers showing that Intel's 14 nm is more dense, in such absolute numbers? Because that is what you have claimed.
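For reference, a minimal sketch of what an absolute density comparison looks like; the pitch values are made-up placeholders, not actual process figures from either company:

Code:
# Transistor density scales roughly with 1 / (contacted gate pitch * minimum metal pitch).
# Both processes below are hypothetical; the pitches are placeholders.
def relative_density(gate_pitch_nm, metal_pitch_nm):
    return 1.0 / (gate_pitch_nm * metal_pitch_nm)

process_a = relative_density(70.0, 52.0)   # hypothetical "14nm" process A
process_b = relative_density(78.0, 64.0)   # hypothetical "14nm" process B
print(f"A is {process_a / process_b:.2f}x denser than B")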
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
No, what TSMC/IBM/Samsung/GF said is:

We call these processes 14nm/16nm because Intel deviated from the official ITRS process naming rules, and Intel marketing/shareholders always use this against us to claim they are many, many years ahead.

Indeed, based on BEOL density, Intel's processes should have been called differently according to the official ITRS naming standards:

Intel's 65nm should have been called 80nm instead.
Intel's 45nm should have been called 55nm instead.
Intel's 32nm should have been called 39nm instead.

It is due to the ignorance of the general public, which has no knowledge of the official process naming rules, that TSMC/IBM/Samsung/GF are forced to deviate from the official naming rules in the same way Intel did.

They have no other choice, whether they like it or not; the ignorance will never go away, as is demonstrated time after time on this forum, for instance.
So you're saying it's Intel's fault.

But it's not.

It’s a state of affairs that has been nearly two decades in the making. Once upon a time, the node name told you practically everything you needed to know about a chip’s underlying technology. [...] In the mid-1990s, when such chips were the state of the art, 0.35 µm was an accurate measure of the finest features that could be drawn on the chip. [...]

But around that same time, the link between performance and node name began to break down. In pursuit of ever-higher clock speeds, chipmakers expanded their tool kit. They continued to use lithography to pattern circuit components and wires on the chip, as they always had. But they also began etching away the ends of the transistor gate to make the devices shorter, and thus faster.

Gates were made shorter than the node suggests, while the half-pitch declined less.



It wasn't Intel's fault. Intel was just designing its transistors to be best suited for competitive products.

It wasn't only Intel; everyone abandoned traditional node shrinking according to the ITRS names. If you think they didn't, show me TSMC's or Samsung's 28nm transistor with a 28nm half-pitch and gate.

After a while, “there was no one design rule that people could point to and say, ‘That defines the node name,’” says Mark Bohr, a senior fellow at Intel. The company’s 0.13-µm chips, which debuted in 2001, had transistor gates that were actually just 70 nm long. Nevertheless, Intel called them 0.13-µm chips because they were the next in line. [...]

That is, of course, the view from a chip manufacturer’s side. For his part, Paolo Gargini, the chairman of the International Technology Roadmap for Semiconductors, says the node is and always has been defined by the proximity of wires on the first metal layer on the back of the chip, a dimension that was reflected well in DRAM and, later, flash memory, but not in logic.

20nm first metal layer pitch: 64nm.

Not even close to 20nm. So I'd call your post, which blames Intel for derailing node names and claims TSMC only abandoned them in 2015 with its FinFETs, FUD. If a node doesn't have a higher density, in line with Moore's law, it shouldn't be called a new node. So what TSMC is now doing, introducing FinFETs into its transistors and calling that a new node, is taking the node-derailing thing to the extreme.
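The arithmetic behind "not even close", under the old rule discussed above that the node name tracks the first metal layer half-pitch:

Code:
# Using only the figure quoted above for TSMC's 20nm metal 1 pitch.
m1_pitch_nm = 64
m1_half_pitch_nm = m1_pitch_nm / 2
print(m1_half_pitch_nm)   # 32nm -> a "32nm-class" BEOL by the old half-pitch rule, not 20nm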
 
Mar 10, 2006
11,715
2,012
126
Not even close to 20nm. So I'd call your post, which blames Intel for derailing node names and claims TSMC only abandoned them in 2015 with its FinFETs, FUD. If a node doesn't have a higher density, in line with Moore's law, it shouldn't be called a new node. So what TSMC is now doing, introducing FinFETs into its transistors and calling that a new node, is taking the node-derailing thing to the extreme.

Eh, 16 FinFET has smaller SRAM cell sizes, so it is denser.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
We're discussing AMD's next-gen desktop APU. So did Intel really provide 4x lower power consumption on desktop CPUs going from 32 to 14 nm? Not even close.
Going to a new node is, and certainly always has been, the best way to improve efficiency. 28nm vs 14nm is a big deal and would put AMD in a very uncompetitive position.

Also, Intel has said Broadwell reduces power consumption by "up to 30%". I.e. that's under optimal conditions, so you should expect less in most cases, and certainly on desktop.
Wrong. Intel didn't say anything. Intel showed Broadwell-Y vs Haswell-Y at the same performance, and Broadwell-Y happened to consume 30% less. Intel did say, however, that the silicon was not optimized, just a straight shrink, so I assume it will be more, and also a bit more with Atom. But that was not the point. I was just saying how big a difference a newer node can make, and I gave a worst-case example.

Remember your original comment: "AMD might actually be smarter than Intel, by staying on 28 nm instead of 14 nm for now, since the cost is lower." You also said 14nm didn't have much benefit.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
What matters is absolute transistor density (transistors/mm^2), not how much it has improved compared to some previous node. IIRC, Intel's 22 nm wasn't that dense for its node, though GF's 28 nm was. So Intel has to improve density more going from 22 -> 14 nm than Samsung/GF has to going from 28 -> 14 nm to catch up at 14 nm.

I.e. in the end what matters is how the absolute transistor density for Intel's 14 nm compares to Samsung/GF's 14 nm. Can you provide numbers showing that Intel's 14 nm is more dense, in such absolute numbers? Because that is what you have claimed.



(Compared to FF+, Intel would be about 25% ahead.)

You'll have to make do with that for now, unless you can disprove those numbers.
 