AMD on 58XX supply issues

Page 5 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
How do you suppose that would be possible? A significant increase in TSMC 40nm yield + AMD ordering a ton more wafers? Although, I assume even when the yields were low, AMD must have been ordering as many wafers as possible?

Yeah I'm just impressed they intend to unleash a 10x supply bump on the markets in so little time.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Yeah I'm just impressed they intend to unleash a 10x supply bump on the markets in so little time.
Ah, so your surprise was more about their decision to do so, instead of disbelief due to the sheer impracticality or impossibility of the task? So a 10x supply bump is not impossible to do at all?
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Ah, so your surprise was more about their decision to do so, instead of disbelief due to the sheer impracticality or impossibility of the task? So a 10x supply bump is not impossible to do at all?

That statement had a lot of wiggle room. Maybe he meant they would increase supply over the next 10 months by 10x what they did last month. :biggrin:
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
That statement had a lot of wiggle room. Maybe he meant they would increase supply over the next 10 months by 10x what they did last month.

I certainly wouldn't put that beneath marketing people; they have been known to do a lot worse. I do hope, though, that they meant "Christmas" or "January" when they said "soon".
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Ah, so your surprise was more about their decision to do so, instead of disbelief due to the sheer impracticality or impossibility of the task? So a 10x supply bump is not impossible to do at all?

Yeah not impossible, small numbers can be multiplied by 10 and still remain below the threshold of becoming large numbers.

Looking at how little of TSMC's revenue is derived from 40nm, there is a lot of room for expansion in supplies of 40nm GPUs as TSMC ramps capacity over the next 6-12 months:

 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Looking at how little of TSMC's revenue is derived from 40nm, there is a lot of room for expansion in supplies of 40nm GPUs as TSMC ramps capacity over the next 6-12 months:
But doesn't that chart show all revenues for all products for all nodes, not just GPU products? I mean, if that's the case, then 45/40nm might very well show up as a very minor part of their revenue stream simply because TSMC also produces products outside of GPUs, and those other products use nodes that are less advanced than 45/40 or even 65/55nm? So even if all 40nm GPU demand is finally met, that portion of their revenue can very well remain very small as long as only GPU's are produced on that node?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
180
106
Yeah not impossible, small numbers can be multiplied by 10 and still remain below the threshold of becoming large numbers.

I've read something, probably in one of those rumor sites "a la inq", that the problems were caused by some bad calibration on some of the equipment.

Can something like this happen (I mean, calibrations can go wrong, but can they go undetected for a month or so?), and is it a feasible explanation for how it will be possible to increase supply 10x in such a time span?
 
Last edited:

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I've read something, probably in one of those rumor sites "a la inq", that the problems were caused by some bad calibration on some of the equipment.

Can something like this happen (I mean, calibrations can go wrong, but can they go undetected for a month or so?), and is it a feasible explanation for how it will be possible to increase supply 10x in such a time span?

You probably mean this article from semiaccurate:
Officially, the problem is a 'chamber mismatch'. This is where a tool, likely a plasma etch chamber, is out of calibration. Basically, if you set it to 5, and it works like a 3 or a 7, it is 'off'. This can happen for a number of reasons, but semiconductor process engineers spoken with by SemiAccurate say that this is a bring up error. It doesn't just happen in the middle of a run.

Charlie doesn't seem to buy that whole "chamber mismatch" problem as the "true" source of the troubles. Let's see what IDC can input.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
But doesn't that chart show all revenues for all products for all nodes, not just GPU products? I mean, if that's the case, then 45/40nm might very well show up as a very minor part of their revenue stream simply because TSMC also produces products outside of GPUs, and those other products use nodes that are less advanced than 45/40 or even 65/55nm? So even if all 40nm GPU demand is finally met, that portion of their revenue can very well remain very small as long as only GPU's are produced on that node?

Of course that is what it shows, but the relevance of the data is that it speaks to the existing installed capacity of 40nm at this point in time.

Regardless whether 40nm yields are 99% or 50%, if installed capacity is a mere 1000 wspm (wafer starts per month) then AMD is going to have supply issues as 1000 wspm is not enough to satiate demand.

Looking at the revenue by node gives you some indication of just how capacity constrained 40nm is at the moment.
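The capacity-vs-yield arithmetic above can be sketched in a few lines. All numbers below are hypothetical, chosen only to illustrate the point (they are not actual TSMC/AMD figures, and the 160 gross dies per wafer is an assumed value for a large die on a 300mm wafer):

```python
# Hypothetical numbers for illustration only -- not actual TSMC/AMD figures.
def good_chips_per_month(wafer_starts, gross_dies_per_wafer, yield_fraction):
    """Sellable chips per month for a given installed capacity and yield."""
    return round(wafer_starts * gross_dies_per_wafer * yield_fraction)

# Even a heroic 99% yield can't overcome tiny installed capacity:
print(good_chips_per_month(1_000, 160, 0.99))   # -> 158400

# ...while a mediocre 50% yield on 10x the wafer starts dwarfs it:
print(good_chips_per_month(10_000, 160, 0.50))  # -> 800000
```

The takeaway is simply that installed capacity multiplies straight through the supply equation regardless of yield, which is why 1,000 wspm would constrain supply even at near-perfect yields.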

You probably mean this article from semiaccurate:

Charlie doesn't seem to buy that whole "chamber mismatch" problem as the "true" source of the troubles. Let's see what IDC can input.

The chamber mismatch situation is a capacity growth issue. To increase capacity you install new equipment, qualify them, and release them. Once released into production the existing wip (work in progress) in the fab can now go thru any number of tools at a given step in the process.

Now no two tools are the same, and for multi-chamber tools no two chambers are the same. So every wafer sees a subtly different processing environment. Sometimes these subtle differences impact metrics in non-linear ways: a 1% delta at the tool results in a 50% delta in some critical electrical metric like leakage or reliability.

So during the course of a fab ramping their capacity there are always these sorts of "teething" problems experienced as every new node brings with it its own set of non-linear dependencies that can only be learned by experience. The spirit of what I am stating here is what drove the creation of Intel's "copy exact" policy.

It's also something that can't be cost-effectively explored by the R&D team prior to a node being ramped in production. It's a cost of doing business, and absolutely every company that owns a fab and has ramped (or is ramping) a new node has experienced this type of yield/capacity tool-matching issue.

The only reason we are being made aware of this specific instance of a completely common phenomenon in the industry is because it happens to be the foundry producing the leading-edge parts of our favorite GPU maker(s) AMD and Nvidia. Were it to be Qualcomm (or TI) that was suffering capacity and yield issues at the hands of TSMC's 40nm production ramp nuances we wouldn't hear the first thing about it.

I don't get why Charlie is so eager to assume the worst as being the most probable possibility, creating mountains out of mole-hills, conspiracy and intrigue out of the mundane and routine, except to say he's not running a non-profit business over there so the conflict of interest is self-evident.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
180
106
I don't get why Charlie is so eager to assume the worst as being the most probable possibility, creating mountains out of mole-hills, conspiracy and intrigue out of the mundane and routine, except to say he's not running a non-profit business over there so the conflict of interest is self-evident.

Cause impact and capture audience.

A much more dire example of such stratagems, with possible direct consequences to our pockets, is how global warming's possible causes/possible consequences are presented to the public - you hear no doubt, and if we don't change our way of life to reduce possible causes YESTERDAY!, we are ALL GOING TO DIE TODAY!

Unfortunately, stuff like this works, as it is pretty much impossible to understand different fields, since they keep growing more and more specific/detailed, without education/experience directed specifically at that field.

Returning to the question at hand, and thank you for the information, I notice you don't really say one way or the other.
 
Last edited:

dreddfunk

Senior member
Jun 30, 2005
358
0
0
IDC - Looking at the curves for the last three process techs (65/55, 90/80, 0.13/0.11), it looks as if the revenue % (capacity growth) increases by a factor of 2x-3x over quarters 4-6 of implementation. E.g., the revenue produced at quarter 6 is roughly 2-3 times what was produced at quarter 3. Is that typical of what we should expect at 45/40?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Returning to the question at hand, and thank you for the information, I notice you don't really say one way or the other.

Sorry if I missed a question or did not answer it, I am not trying to be obtuse, what is the question?

IDC - Looking at the curves for the last three process techs (65/55, 90/80, 0.13/0.11), it looks as if the revenue % (capacity growth) increases by a factor of 2x-3x over quarters 4-6 of implementation. E.g., the revenue produced at quarter 6 is roughly 2-3 times what was produced at quarter 3. Is that typical of what we should expect at 45/40?

Yeah, using revenue as a proxy for capacity is valid because, if anything, it underestimates the changes in capacity that occur over time: wafer contract pricing (typically) declines over time, meaning capacity has to increase just for revenue to remain the same, so an ever-increasing revenue percentage requires an even faster-increasing capacity.

I've got the graph on my other computer, I'll post this and then I'll come back and edit to add the graph to show what you are speaking to more clearly. The number is actually a 6x increase in %-revenue between the third quarter and the sixth quarter of a node's introduction. I'll post the graph shortly.
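As a toy example of why revenue understates capacity growth when wafer prices decline over a node's life (all prices and volumes below are invented purely for illustration):

```python
# Invented prices/volumes showing why revenue is a conservative capacity proxy.
q3_wafers, q3_price = 10_000, 5_000   # third quarter of the ramp: few wafers, high price
q6_wafers, q6_price = 80_000, 4_000   # sixth quarter: 8x the wafers, price down 20%

q3_revenue = q3_wafers * q3_price     # 50,000,000
q6_revenue = q6_wafers * q6_price     # 320,000,000

print(q6_revenue / q3_revenue)        # -> 6.4  (revenue grew "only" 6.4x...)
print(q6_wafers / q3_wafers)          # -> 8.0  (...while capacity actually grew 8x)
```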

edit: here's that graph:
 
Last edited:

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
IDC, GaiaHunter must have been referring to this question:

Can something like this happen (I mean, calibrations can go wrong, but can they go undetected for a month or so?), and is it a feasible explanation for how it will be possible to increase supply 10x in such a time span?

Thanks for the information. So do you believe, in fact, that these teething problems can very well last for a month or more before being detected? Charlie was particularly adamant in his article about how these "problems" should only take about 1 week, tops.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
180
106
Sorry if I missed a question or did not answer it, I am not trying to be obtuse, what is the question?

I guess it isn't so much a question as your opinion - you certainly didn't bash to the ground the possibility of a calibration causing delays in production/diminishing yields.

Let's assume they end up having 10x more available 58xx GPUs in 1-2 months. What kind of problems do you envision that could have reduced the yields of the 58xx series (I believe there is/was talk that the RV870 had decent yields that then went down; correct me if I'm wrong) but still allow them to increase production 10x (or even double, triple, whatever) in such a short span? And forgive me if I'm asking questions you can't really answer/speculate on for whatever reason, even if it is only because it could be so many things you could throw a dart at them to decide which, and you wish not to do so.

Thank you.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
IDC, GaiaHunter must have been referring to this question:

Thanks for the information. So do you believe, in fact, that these teething problems can very well last for a month or more before being detected? Charlie was particularly adamant in his article about how these "problems" should only take about 1 week, tops.

As best I can gather from Charlie's innuendo-laden article, he's promulgating a rather embellished strawman argument.

Whether the strawman is of his own invention or if he is simply the mouthpiece of some disgruntled industry veterans I can't really determine nor do I care to bother investing the time to discern the difference.

There are a gamut of calibration issues that can and do occur (not sure if you know my past but I was a process development engineer at Texas Instruments, I have first-hand experience in the subject matter) but what I get out of the shadowy stories so far is that someone is implying that the story here is that the most innocuous and simplistic of calibration issues transpired or are claimed to have transpired and the "meat" of the story is that it took "too long" per someone else's expectations to resolve the calibration issue.

Do calibration issues like the one semi-described in Charlie's article occur? Yes they do. Do they take a week to resolve? Yes in some instances. Can they take a month to detect and resolve? Yes in some instances.

Does it mean something sinister is afoot if it takes a month to detect and resolve? No, not at all.

Does it mean Q/A and metrology issues abound if it takes a month to detect and resolve? Entirely depends on the specific process metric that has been thrown off by the mis-calibration. Only the most simplistic of process metrics are readily detected at a metrology step, like film thickness measurement (called a "tool qual") or particle count measurement.

There are process metrics which require much more time-intensive analysis and characterization to quantify and determine if the process variation is a problem. Subtle shifts in a deposited film's density or refractive index can result in orders of magnitude shifts in the lifetime reliability of the film, or its leakage characteristics and etch-rates, etc.

Even further, process integration deals with the effects of sequences of process events, a shift in process A might not affect the outcome of process A in a meaningful way but it may have a knock-on effect down the line and cause process D to go awry...isolating the problem to that of process D takes time, determining that process D is going awry because of a subtle shift in process A takes even more time.

Resolving those kinds of cause-and-effect (called "root-cause determination" in industry-speak) can take months, can take days. When it takes months that doesn't mean something sinister is afoot, it speaks to the complexity of the underlying issue. 40nm is obviously a complex thing to master, count on one hand how many other foundries have figured out enough of the issues with 40nm such that they have 40nm healthy enough to put it into production today. (you should arrive at the answer of "1")

I guess it isn't so much a question as your opinion - you certainly didn't bash to the ground the possibility of a calibration causing delays in production/diminishing yields.

Absolutely, I wouldn't. That is the routine and mundane part, this happens all the time, during development as well as during production ramp. No two tools are the same, no two chambers on the same tool are the same.

It takes time to figure out what the critical metrics are for a new process and a new tool; you can't characterize everything under the sun because resources are limited. So you make judicious choices about what is most critical for tool-matching and chamber-matching during new tool installs and releases. When something like this occurs you call it a "lesson learned", and the checklist of things to make sure are calibrated identically, chamber after chamber and tool after tool, grows by one.

Everything that was already on that checklist came from prior lessons learned. Node development is cumulative: knowledge of how to make node N-1 production-worthy is applied to making node N (a more complex node) equally production-worthy.

What is interesting is that everything I write here is self-evident and known to everyone in the industry, I'm not bestowing upon you some great super secret of the industrial world that makes the difference between great fabs and crappy ones. Everyone knows of this stuff. So when Charlie goes off saying he's got engineering sources telling him differently it just makes me sigh, he's either making up fictitious engineering sources or his sources are so far out of the loop they shouldn't be passing themselves off as authorities on the subject in the first place (something Charlie should be smart enough about that he vets them in the first place so he doesn't get led astray).

At any rate I don't doubt that Charlie believes something sinister is afoot, his opinion is expressed genuinely. And unless you've been involved first-hand in process development and tool releases for capacity ramps, I'm sure his "stories" seem to have some merit. But from where I'm sitting it is all mountains out of molehills. Take a routine, mundane aspect of life in the fab and work it up to be unique and rare, and as such explainable only by invoking cloak-and-dagger ne'er-do-wells.

I'm sure it helps that sensationalism generates more hits than simply reporting the mundane.

Let's assume they end up having 10x more available 58xx GPUs in 1-2 months. What kind of problems do you envision that could have reduced the yields of the 58xx series (I believe there is/was talk that the RV870 had decent yields that then went down; correct me if I'm wrong) but still allow them to increase production 10x (or even double, triple, whatever) in such a short span? And forgive me if I'm asking questions you can't really answer/speculate on for whatever reason, even if it is only because it could be so many things you could throw a dart at them to decide which, and you wish not to do so.

Well I think first we need to make a distinction between yields and capacity. TSMC said the yields did not decrease, they simply failed to improve.

The chamber mismatch issue (assuming that is not fictitious as well) is a capacity issue: until you resolve the disconnect with the tool you simply "blacklist" it from the available tools for production at that point in the flow. This reduces capacity, or if it was a new tool release then it simply means capacity did not increase above what it had been prior to the tool being brought into the fab.

My take on the situation is that there are two things here, unrelated at the process level, causing supply issues. One is that yields did not improve...yield improvements are like "free" capacity increases. Yield going from 30% to 60% means you get 2x more chips, the same effect as yields staying at 30% but doubling capacity and wafer starts.

Obviously during the early stages of a new node's ramp both are happening in parallel. Yield limiting issues are being root-caused and fixes are implemented, in parallel new tools are being installed and capacity is increasing. If you hit a snag and yields don't improve it will cause supply issues, if capacity doesn't increase (tools aren't released to production because there seems to be some kind of issue with them) then that will cause a supply issue.

If both happen at the same time you don't just double your issues, it is a quadrupling (figuratively) of a problem.

So yes, absolutely a sizable discrepancy between planned supply and actual supply can be created through the combination of some rather simplistic yield improvement delays and new tool release delays. It is not common but at the same time it is not rare.

Where yield gets conflated with capacity in the most recent debacle is that if you are releasing new tools to increase capacity those new tools can cause yields to decline if the tools are not fully matched, so you must make a choice as a fab planner - (1) release the tool and increase capacity but take the added hit in yields, or (2) keep the tools in engineering release phase (not released to production) and take the hit in capacity while keeping your yields from being impacted by the new tool's mismatch.

The choice between (1) and (2) above is actually made all the time during capacity expansions, it changes dynamically on a day by day basis because sometimes the yield hit is minor enough that the added capacity is worth the yield hit for the time being.
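A minimal sketch of the trade-off between options (1) and (2) above, again with made-up numbers chosen only to illustrate the decision:

```python
# Made-up numbers illustrating the fab planner's choice described above.
def good_chips(wafer_starts, dies_per_wafer, yield_fraction):
    """Good chips out, given wafer starts, gross dies per wafer, and yield."""
    return wafer_starts * dies_per_wafer * yield_fraction

# Option 2: keep the mismatched tool out of production -- capacity and yield unchanged.
hold_back = good_chips(3_000, 160, 0.60)   # 288,000 good chips

# Option 1: release the tool -- 50% more wafer starts, but yield takes a hit.
release = good_chips(4_500, 160, 0.50)     # 360,000 good chips

print(release > hold_back)  # -> True
```

With these numbers releasing the tool wins; had the yield hit been larger (say down to 35%), holding the tool back would have produced more good chips, which is why the call gets revisited day by day.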

I know this is a lengthy post, and I am trying to answer your question by empowering you with more background information than you probably wanted to be exposed to. If you want a more succinct answer let me know and I will try to distill it down to a Cliffs Notes version.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
IDC - Many thanks for the additional graph. Your explanations make a great deal of sense.

While many of these issues are transparent to those in the industry, they are far more opaque to those of us who follow -- even with great interest -- from the sidelines. You've done a fine job distilling them into a sensible form.

What is impressive --given your discussion of the process issues involved -- is just how consistent TSMC's revenue increases (yield/capacity growth) are, across the last several process implementations.

Am I reading that wrong, or is TSMC doing a really good job in general implementing new processes? There have been hiccups, but overall they seem to ramp production in a remarkably consistent fashion. That suggests to me that their process folks are doing their jobs well.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
IDC - Many thanks for the additional graph. Your explanations make a great deal of sense.

While many of these issues are transparent to those in the industry, they are far more opaque to those of us who follow -- even with great interest -- from the sidelines. You've done a fine job distilling them into a sensible form.

What is impressive --given your discussion of the process issues involved -- is just how consistent TSMC's revenue increases (yield/capacity growth) are, across the last several process implementations.

Am I reading that wrong, or is TSMC doing a really good job in general implementing new processes? There have been hiccups, but overall they seem to ramp production in a remarkably consistent fashion. That suggests to me that their process folks are doing their jobs well.

Thanks for the feedback. I know my posts can be lengthy and verbose at times, but I'd rather talk too much and risk boredom in my readers than be terse and talk too little, with the risk of imparting ignorance or easy misinterpretation as a result.

My employment capacities at Texas Instruments afforded me a somewhat unique co-working experience with the foundries (UMC/TSMC/SMIC/Chartered) as we operated a dual-production model of developing our own nodes but ramping production of the same IC across both internal fabs as well as across multiple foundries simultaneously.

What you are surmising regarding TSMC's process technology efficacy is perhaps best proven by the fact that after having worked alongside TSMC in the trenches of node development and capacity ramps, TI elected to discontinue their own internal CMOS node development (aborting their own 45nm node that was well into development at the time) in spring 2007 and transitioned to a foundry-only model for CMOS for 45nm and beyond.

At the time TI was 3rd ranked in the industry for revenue, behind only Intel and Samsung.

What does that say about the confidence TSMC had instilled in key decision makers at TI?

That's not to say TSMC is flawless or that their execution is the envy of the world, it is to say they are good enough to warrant TI ditching their own well-adept CMOS development team (we were second only to Intel for node release timelines and second to none for sram density) in favor of relying entirely on the foundries to keep their multi-billion dollar CMOS products moving forward on Moore's law.

(note TI retained their analog development team, hence the distinction I am making that all this applies to CMOS)

Now the bumps in the road being seen at 45/40nm can be attributed to three things in my estimation.

The first is that Dr. Morris Chang (founder of TSMC, coincidentally a Texas Instruments VP for 25yrs prior to that) ceded his position at TSMC to Rick Tsai in 2005, right around the time that 45/40nm would have been getting started. So we had a change in the leadership that had been responsible for the repeated node cadence you see above.

(Morris has come back to TSMC as of this past summer, too late to impact 32nm, maybe even too late to impact 28nm, but certainly in time to impact 22nm)

Second was that Dr. S.Y. Chiang left TSMC in 2006, he was the head of R&D process development for many years and was also responsible for the sustained node cadence. His absence and the subsequent change in who directed R&D no doubt had its impact on 45/40nm as well as 32nm.

(Chiang has also come back to TSMC as of this past autumn, as with Morris it will be too late to materially impact 32nm development timeline but will make a difference to 22nm)

R&D guru returns to TSMC

TSMC has named Chiang as its senior vice president of research and development (R&D). He will report directly to Morris Chang, TSMC's chairman and chief executive.

"Dr. Shang-Yi Chiang first joined TSMC as vice president of R&D in 1997, and successfully led the R&D team through the 0.25-micron, 0.18-micron, 0.13-micron, 90-nm, and 65-nm generations," said Chang in a statement.

http://www.eetimes.com/showArticle.jhtml?articleID=220300287

Coincidence? I think not.

The third difference for TSMC at 45/40nm is that this is their first node in a long time that did not have the co-assistance of TI's 45nm process development team helping in the background alongside the TI high-volume DSP qual device, as had been the case for many nodes before. That isn't to say TSMC was critically dependent on those resources to hit their node cadence before, but I do happen to know of more than one occasion where things became so stuck that without some collaboration the TSMC node cadence would have slipped.

Fast forward to 40nm and TSMC is without Chang, without Chiang, and without TI pushing them to their own internal standards. Clearly they did OK, it hasn't been a total trainwreck as the other foundries have done even less well, but the return of Morris and Chiang are definite indicators that things weren't rosy inside TSMC, and it's likely to get worse before it gets better, as it will take 2-3 yrs for those two to get things thru the pipeline that will end up impacting customers. 28nm at the absolute earliest, 22nm the more likely.

TSMC has done a great job, no other foundry has mastered 40nm well enough to offer it in a production environment and they managed to convince the third-highest revenue company (at the time, TI has slipped to 4th now) to abandon their own internal node development and pursue reliance on the foundries for 45nm and beyond. That speaks volumes IMHO.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
That suggests to me that their process folks are doing their jobs well.
Yeah, I agree, I'm pretty sure they are. All this "outrage" about 40nm at TSMC is probably just due to having to manufacture a very publicly popular product and some hiccups affected their availability. Like IDC noted:
The only reason we are being made aware of this specific instance of a completely common phenomenon in the industry is because it happens to be the foundry producing the leading-edge parts of our favorite GPU maker(s) AMD and Nvidia. Were it to be Qualcomm (or TI) that was suffering capacity and yield issues at the hands of TSMC's 40nm production ramp nuances we wouldn't hear the first thing about it.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Yeah, I agree, I'm pretty sure they are. All this "outrage" about 40nm at TSMC is probably just due to having to manufacture a very publicly popular product and some hiccups affected their availability. Like IDC noted:

You pay too close attention. I can tell I am going to have to double down and stick to telling the truth now more than ever before.

Years ago my advisor used to quip to me that the questions in life never changed but if you paid close enough attention you'd notice the answers did.

It took me five years of experiencing his ever changing answers to my questions before I finally realized what manner of a kernel of knowledge my Sensei had attempted to bestow upon me all those years prior.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,634
180
106
Thanks for the info IDC.

Just to make it clear, I wasn't so much asking for an opinion about Charlie's article and his theories and/or stories on why the 5xxx aren't being seen in numbers.

Just trying to understand the cause of the 5xxx series availability problems (and I believe it is a malfunction in the production process somewhere and not some other event) and how it may or may not be related to Fermi's problems/delays.

From what I gather this situation was more on TSMC's side and less on AMD's side (already discounting the intrinsic problems TSMC has with this node) - as in, AMD and TSMC were already able to get a certain % of usable chips per wafer, but then something went wrong when increasing the number of wafers produced and the targeted GPU numbers weren't met.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Ah, understood Gaia, yes from everything I can ascertain AMD executed flawlessly on their Evergreen plans to date. They couldn't push TSMC any faster or harder, TSMC very much is/was their rate-limiting step in the supply chain. Clearly TSMC has over-promised and under-delivered on 40nm and that has been the progenitor of a lot of angst throughout the supply chain. Had they set expectations a little more appropriately then I think everyone from AMD to Nvidia to us customers would have experienced a little less excitement this past year.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Ah, understood Gaia, yes from everything I can ascertain AMD executed flawlessly on their Evergreen plans to date. They couldn't push TSMC any faster or harder, TSMC very much is/was their rate-limiting step in the supply chain. Clearly TSMC has over-promised and under-delivered on 40nm and that has been the progenitor of a lot of angst throughout the supply chain. Had they set expectations a little more appropriately then I think everyone from AMD to Nvidia to us customers would have experienced a little less excitement this past year.

Nice thanks! (Lets forget charlie)
 

scooterlibby

Senior member
Feb 28, 2009
752
0
0
TSMC has done a great job, no other foundry has mastered 40nm well enough to offer it in a production environment and they managed to convince the third-highest revenue company (at the time, TI has slipped to 4th now) to abandon their own internal node development and pursue reliance on the foundries for 45nm and beyond. That speaks volumes IMHO.

I thought the node or half-node names like "40nm" were just that - names of processes that could completely vary between foundries (forgive my ignorance if this interpretation is wrong, I believe you wrote something similar to this a while back and I might not be remembering it correctly). If another foundry mastered something they called 40nm, would it necessarily be the same thing?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
I thought the node or half-node names like "40nm" were just that - names of processes that could completely vary between foundries (forgive my ignorance if this interpretation is wrong, I believe you wrote something similar to this a while back and I might not be remembering it correctly). If another foundry mastered something they called 40nm, would it necessarily be the same thing?

Oh you are absolutely right, but at this time regardless what anyone else might call their 40nm-equivalent node there are no foundry alternatives that deliver the xtor parametrics and dimensions that TSMC's 40nm does.

If you want 2B xtors operating at 850MHz while consuming less than 188W and fitting onto a 334mm^2 chip you've got few choices.

The other foundries do have competing nodes, they just aren't anywhere close enough to the production yields necessary for it to be commercially viable to use their node if your competitors are using TSMC's.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Well then, what use is it for AMD or nVidia to specify "40nm" or "32nm" or whatever node in their products if such labelling is arbitrary for foundries?
 