EETimes: ST plans for Dresden FDSOI production


Abwx

Lifer
Apr 2, 2011
11,172
3,868
136
It doesn't matter since they talk percentages.
If they had stated, for instance, 30 uops more per cycle it would be different, since we would have to know their microcode to extract the number of macro ops per cycle, and this is where ShintaiDK promptly misled himself by confusing ratios and absolute values in a mathematical estimation.
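A quick illustration of that ratios-vs-absolutes point (a sketch with made-up numbers, not measured figures for any AMD core):

Code:
# Made-up illustration of the ratios vs. absolutes point; the decode ratio
# below is invented, not a measured Bulldozer/Steamroller number.
uops_per_macro_op = 1.3                 # assumed average decode expansion
base_macro_ops = 2.0                    # macro-ops per cycle, baseline
base_uops = base_macro_ops * uops_per_macro_op

# A relative claim ("+25% uops per cycle") carries straight over to macro-ops,
# whatever the decode ratio happens to be:
new_macro_ops = (base_uops * 1.25) / uops_per_macro_op
print(new_macro_ops / base_macro_ops)          # -> 1.25

# An absolute claim ("+0.5 uops per cycle") cannot be converted to macro-ops
# without knowing uops_per_macro_op:
print((base_uops + 0.5) / uops_per_macro_op)   # depends on the assumed ratio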
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
BD can issue 3 macro-uops from microcode per cycle (directly to dispatch).
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,172
3,868
136
Actually, a single macro op can be translated into as many as 3 micro ops; otherwise the decoder can do four macro ops per cycle since it must feed two cores, which works out to two macro ops per cycle per core in a multithreaded, theoretical case, and this peak decoding bandwidth is unlikely to be sustained over more than a few percent of the code. http://www.realworldtech.com/bulldozer/5/
 

SocketF

Senior member
Jun 2, 2006
236
0
71
Rather, in R&D the question is "can it be done within the needed timeline, within the allocated development budget, and will it meet the desired production metrics for yield, cycle time, production cost, etc.?"

I believe the gate-last simulated results. But is it a pathway to sub 0.7V operation that is superior in both development and production costs when compared to other alternative integration schemes? That is the real question.

Very good question ... if it were 20nm, I would say yes, they could do it; double patterning has already been tested several times at Intel, so GF should be able to do it too, and gate-last is (afaik) easier to do than gate-first.

But now that STM has canceled the 20nm FD-SOI node and will go straight to 14nm, we have to discuss 14nm FD-SOI. There they will probably need triple patterning. No clue how that will work out in the end or how well GF can handle it.

On a side note:
I had previously subscribed to the "FD-SOI promoting woman" on Twitter; she is everywhere at ASN and all the other SOI sites. I thought it was a good idea, but today I wondered why the old FDSOI news from Dec. 19th etc. was still at the top of my Twitter feed, with no messages from my other subscriptions, even though it is already February.

It seems Twitter gave the FD-SOI feeds some extra priority, for whatever reason. After scrolling down 2-3 screens I found new, fresh tweets. Seemed fishy .. I wonder if Twitter got some sponsoring money from the SOI people. In any case it really was annoying, and the fact that she re-tweeted every message from anybody about the very same topic (STE's FD-SOI SoC) was really stupid ...
Now I have unfollowed her and have no more Twitter troubles ;-)
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Very good question ... if it were 20nm, I would say yes, they could do it; double patterning has already been tested several times at Intel, so GF should be able to do it too, and gate-last is (afaik) easier to do than gate-first.

But now that STM has canceled the 20nm FD-SOI node and will go straight to 14nm, we have to discuss 14nm FD-SOI. There they will probably need triple patterning. No clue how that will work out in the end or how well GF can handle it.

STM is going to foot the bill for 14nm FD-SOI development? I find that difficult to believe. (meaning I don't believe they will actually do it, but I have no doubt they are telling people they plan to if for no other reason than to keep 28nm FD-SOI appearing relevant)

It may be a press release, a plan that exists in PowerPoint only, but it is downright tomfoolery of STM management to be throwing that kind of R&D resources at a project that doesn't stand a chance of delivering an ROI.

If STM joined the fab club and convinced IBM to subsidize the development of 14nm FD-SOI using everyone else's money, then I could see it happening and being cost-justified. But literally everyone else in the industry has adopted a FinFET development path; STM is shouldering the R&D burden solely on its own by pushing headstrong down a non-standard xtor path.

That just isn't sound money management. STM's shareholders deserve more, and I'll be surprised if they don't start to demand more.

Do you remember when Motorola's management was high on drugs and decided to spend $7B creating and launching satellites into space (Iridium) to build a global satellite mobile communications network? Talk about losing focus on your core competencies.

STM is in danger of spending billions developing the next Iridium network if they are serious about 14nm FD-SOI IMO.
 

carop

Member
Jul 9, 2012
91
7
71
But now that STM has canceled the 20nm FD-SOI node and will go straight to 14nm, we have to discuss 14nm FD-SOI. There they will probably need triple patterning. No clue how that will work out in the end or how well GF can handle it.

The following Metal 1 pitch overview is from Hans de Vries at the Chip Architect:

http://www.chip-architect.com/news/2010_09_04_AMDs_Bobcat_versus_Intels_Atom.html

[Metal 1 pitch comparison table from the linked Chip Architect article]
The "metal 1" pitch defines the pitch of the first and densest metal layers, and is 64-nm at both the 20-nm and 14-nm nodes. This is because the foundries are trying to avoid triple patterning.

The parties interested in the FD-SOI roadmap are renaming what was the 20nm node so they can be seen to be at a node equivalent to Intel's 14nm FinFET process and the FinFET processes from foundries such as TSMC and GF:

http://www.eetimes.com/electronics-news/4403224/FDSOI-roadmap-renames-20-nm-node-as-14-nm
 
Last edited:

SocketF

Senior member
Jun 2, 2006
236
0
71
If STM joined the fab club and convinced IBM to subsidize the development of 14nm FD-SOI by using everyone else's money then I could see it happening and being cost-justified. But literally everyone else in the industry has adopted a development path that is Finfet, STM is shouldering the R&D burden solely on their own by pushing headstrong into a non-standard xtor path.
No, IBM is still in the FD-SOI camp too; no convincing needed. IBM just calls it differently, "ET-SOI", maybe because it is, of course (as it is IBM), a high-performance type of process and not a low-power type like everybody else's. They demonstrated it at IEDM:

At the IEDM, a team led by IBM will report on the world’s first high-performance hybrid-channel ETSOI CMOS device. They integrated a PFET having a thin, uniform strained SiGe channel, with an NFET having a Si channel, at 22nm geometries. A novel STI-last (isolation-last) process makes the hybrid architecture possible
That hybrid channel seems to be totally new; some sites say it looks very promising, but I have to admit I have no clue how it differs from "normal" FD-SOI ;-)
There is a small preview doc available here:
http://www.btbmarketing.com/iedm/images/images_wCaptions/18.1_Hybrid-Channel_22nm_SOI_CMOS_IBM.doc

Seems to me like an FD-SOI process + IBM's "special sauce" ;-)

@carop:
Thanks, but I am not 100% confident that it is really just a full rename. Yes, GF mentioned for their 14XM process that they will reuse the BEOL from 20nm, but they didn't say they would reuse anything else. Being in the same IBM fab club, I would think that STM will do more or less the same. That said, I am also aware that the marketing guys are very active. Some site measured Intel's 22nm gate length as 26nm in reality, whereas GF's gate length for their 28nm node could be 25nm. Furthermore, if I remember correctly, Intel previously called their 14nm node 16nm.

So obviously ... there is some foul play going on; maybe you are right and it is only some rebranding. We'll see.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Process node labels are just that, labels. The fact that they have numbers in them means absolutely nothing. You can buy a 2014 Toyota Sienna minivan in 2013, think about it.

20nm, 16nm, 14nm...nothing in the integrated circuit is actually tied to the node's label. The last time that convention held true was with 0.5um. Trust me, I know, I personally worked on every node from 0.5um to 32nm as a process development engineer.

We pick a node label first, one that makes sense with respect to our historical nomenclature (i.e. meets marketing expectations), and then we spend a year iterating on various geometries to actually hit the targeted electrical and production cost figures.

Why do it like that? Because for some reason it (the marketing aspect) actually works; people are very naive and willing to believe that just because a number is in the label it means something. See the "GHz wars" for another example.
 

SocketF

Senior member
Jun 2, 2006
236
0
71
Well, that approach sounds OK: first you set your goal, then you look for the best and cheapest way to reach it.

But why would you change it mid-way again? Like Intel from 16 -> 14, and according to EETimes the SOI consortium from 20 to 14? New marketing guys?

But GF still has the "normal" 20nm process, so I now wonder whether it really is just a rename by the SOI people or not. Let's wait and see.
 

SocketF

Senior member
Jun 2, 2006
236
0
71
ARM would help with FD-SOI:
LONDON – Warren East, CEO of processor IP licensor ARM, says the company stands ready to help STMicroelectronics make a success of its fully-depleted silicon-on-insulator (FDSOI) chip manufacturing process, but that it is up to ST to make the process more widely available. Speaking to EE Times in a discussion of ARM's 4Q12 and full-year financial results, East said of FDSOI: "We think it is pretty good technology and we would encourage ST to proliferate it. The physical IP we need to create is essentially the same [as for bulk CMOS]."
"At the moment it is effectively a proprietary technology. We can help ST if they can proliferate the technology," said East.
http://www.eetimes.com/electronics-news/4406366/ARM-rates-FDSOI-process-as-good-technology
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Well, that approach sounds OK: first you set your goal, then you look for the best and cheapest way to reach it.

But why would you change it mid-way again? Like Intel from 16 -> 14, and according to EETimes the SOI consortium from 20 to 14? New marketing guys?

But GF still has the "normal" 20nm process, so I now wonder whether it really is just a rename by the SOI people or not. Let's wait and see.

They likely found they had set their initial targets too conservatively. I lived through that happening myself once.

For 65nm at TI we initially set our shrink targets such that we anticipated needing a metal 1 pitch of 210nm, representing a 77.78% linear shrink over our 90nm process (which had a 270nm M1 pitch)...only we discovered about a year into development that 210nm was way too easy to achieve with yield entitlement. We hadn't set our goals aggressively enough, and as such we were meeting all of our R&D milestones well ahead of schedule in every way.

So we went back and shrunk everything by another 5% linearly, setting the M1 pitch to 200nm. It was still too easy to reduce to practice (if you can call it "easy"); yields were silly high and xtors were coming into target way too easily given our production schedule.

So we went back and gave ourselves a third stretch goal, with an additional 10% linear shrink, netting an M1 pitch of just 180nm.

At 180nm our 65nm process was coming in with a 66.7% linear shrink factor compared to the existing 90nm process, a rather unprecedented 44.4% areal footprint for logic (well ahead of the typical 60% areal shrink for logic).
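For anyone who wants to check that arithmetic, a quick sketch (only the pitches are from the post; the script itself is just illustration):

Code:
# Linear and areal shrink factors from M1 pitches; area scales as the square
# of the linear shrink. Pitches are the ones quoted above (90nm node: 270nm M1).
old_pitch_nm = 270.0
for new_pitch_nm in (210.0, 200.0, 180.0):
    linear = new_pitch_nm / old_pitch_nm
    areal = linear ** 2
    print(f"M1 {new_pitch_nm:.0f} nm: linear {linear:.1%}, areal {areal:.1%}")

# M1 210 nm: linear 77.8%, areal 60.5%
# M1 200 nm: linear 74.1%, areal 54.9%
# M1 180 nm: linear 66.7%, areal 44.4%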

Technically our 65nm should have been relabeled something along the lines of 60nm or 55nm, or perhaps our 90nm should never have been named 90nm (maybe it should have been 95 or 100nm).

Regardless of the naming justification, we opted to keep the existing node label even though every single design rule and electrical parametric was being shifted around by the ensuing changes.

I suspect Intel experienced a similar situation, in which their initial targets for the post-22nm shrink had them feeling that a 16nm node label was appropriate, but they found the development was actually easier than anticipated, with breakthroughs coming faster and better, well ahead of schedule.

So they went back and, instead of bringing out 16nm sooner, ahead of schedule, decided to make the shrink goals even more challenging (with a higher reward when it goes to production), and they likewise decided that a numerically smaller node label was called for.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
What is the most important part for switching speed or lower power? If you don't mind my asking.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
What is the most important part for switching speed or lower power? If you don't mind my asking.
Most important part? I don't mind answering but I don't understand the question as written. Can you elaborate on what you mean?
 

Haserath

Senior member
Sep 12, 2010
793
1
81
There are many metal layers and there are the actual transistors, right?

The metal layers connect everything together.

The low nm measurement, I assume, is the transistor.

It's a culmination of everything, but does feature size or geometry help more than the gate pitch?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
There are many metal layers and there are the actual transistors, right?

The metal layers connect everything together.

The low nm measurement, I assume, is the transistor.

It's a culmination of everything, but does feature size or geometry help more than the gate pitch?

They all tend to scale together, but there are no strict requirements, nor are there any generalized rules of thumb that span across companies.

For logic it used to relate to the transistor, but that ceased to be true back in 1995 with 0.35um.

There are too many trade-offs and parameters to really nail it down to any one thing that matters the most in terms of dimensions. But the die size is going to be predominantly determined by the minimum gate pitch and minimum metal pitch.
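A rough way to see why those two pitches dominate (a sketch with assumed, illustrative pitches, not any foundry's real numbers): a standard logic cell is roughly some number of routing tracks tall (tracks x metal pitch) and some number of gate pitches wide.

Code:
# Illustrative only: standard-cell footprint from metal pitch and gate pitch.
def cell_area_um2(m1_pitch_nm, gate_pitch_nm, tracks=9, gate_pitches=4):
    height_nm = tracks * m1_pitch_nm          # cell height = routing tracks * M1 pitch
    width_nm = gate_pitches * gate_pitch_nm   # cell width = gate pitches spanned
    return height_nm * width_nm * 1e-6        # nm^2 -> um^2

# Assumed, ballpark pitches (not official figures):
print(f"{cell_area_um2(m1_pitch_nm=90, gate_pitch_nm=114):.3f} um^2")  # "28nm-class" guess
print(f"{cell_area_um2(m1_pitch_nm=64, gate_pitch_nm=90):.3f} um^2")   # "20nm-class" guess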
 

Abwx

Lifer
Apr 2, 2011
11,172
3,868
136
What is the most important part for switching speed or lower power? If you don't mind my asking.

The most important parameters are the gate threshold voltage and the transistor's transconductance.

The threshold voltage, Vth, should be low to increase switching speed, but at the same time it must be high enough to reduce consumption when the device is switched off, so there is a trade-off for this parameter.

Transconductance is the slope of the device's output current as a function of the gate voltage; it is expressed in amperes/volt, or rather in microamperes/volt for sub-micron transistors.

The higher the transconductance, the faster the circuit will switch on once Vth is reached.

Of course, the switching speed also depends on the gate capacitance: the higher this parasitic capacitance, the higher the losses during switch on/off times.
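To make those knobs concrete, here is a minimal sketch using the textbook long-channel square-law model (my own illustration with invented numbers; real sub-micron devices are modeled with far more elaborate compact models):

Code:
# Textbook square-law MOSFET, illustration only; every value below is made up.
def id_sat(vgs, vth, k):
    """Saturation drain current Id = (k/2)*(Vgs - Vth)^2 for Vgs > Vth."""
    vov = max(vgs - vth, 0.0)               # overdrive voltage
    return 0.5 * k * vov ** 2

def gm(vgs, vth, k):
    """Transconductance gm = dId/dVgs = k*(Vgs - Vth), in A/V."""
    return k * max(vgs - vth, 0.0)

k, vth, vdd, cg = 500e-6, 0.35, 0.9, 1e-15   # A/V^2, V, V, F (all assumed)

i_on = id_sat(vdd, vth, k)
print(f"Ion ~ {i_on * 1e6:.1f} uA, gm ~ {gm(vdd, vth, k) * 1e6:.1f} uA/V")

# Crude switching-speed figure of merit: delay ~ C*V/Ion. A lower Vth gives
# more overdrive and more Ion (faster), but also more leakage when "off".
print(f"delay ~ {cg * vdd / i_on * 1e12:.2f} ps")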
 

Abwx

Lifer
Apr 2, 2011
11,172
3,868
136
More extraordinary claims from GloFo...
  • At constant power, the frequency achieved with 14nm-XM technology based implementation (using 9-track libraries) is expected to be 61% faster than the frequency achieved with 28nm-SLP technology based implementation (using 12-track libraries).
  • At constant frequency, the power consumed by 14nm-XM technology based implementation is expected to be 62% lower than the power consumed by 28nm-SLP technology based implementation.
  • The performance-power efficiency of 14nm-XM technology based implementation (expressed as DMIPs/milliwatts) is anticipated to be more than twice that of the 28nm-SLP technology based implementation, while using half the silicon area.



http://www.xbitlabs.com/news/other/...Two_Times_More_Power_Efficient_Than_28nm.html
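For scale, a quick back-of-envelope conversion of those headline numbers into perf/W ratios (my own arithmetic, not GloFo's):

Code:
# Reading the two iso-corner claims as perf/W ratios (my arithmetic, not GloFo's).
perf_per_watt_iso_power = 1.61 / 1.0          # 61% more frequency at the same power
perf_per_watt_iso_freq  = 1.0 / (1.0 - 0.62)  # same frequency at 38% of the power

print(f"iso-power:     {perf_per_watt_iso_power:.2f}x perf/W")   # 1.61x
print(f"iso-frequency: {perf_per_watt_iso_freq:.2f}x perf/W")    # ~2.63x

# A ">2x DMIPS/mW" claim lands between these two corner cases, so the three
# bullets are at least internally consistent.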
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91

Given that we have yet to see how horrifically bad GloFo's 28nm is to begin with (there is a reason it has been delayed nearly 2yrs already), it probably should come as no surprise that it can be markedly and quickly improved upon to such a degree.

It is easier to improve on Bulldozer's performance than Sandy Bridge's. So I am inclined to believe GloFo's PR, but it convinces me their 28nm must really be poor and that it won't be fixed and made competitive until 14nm-XM comes out.
 

MisterMac

Senior member
Sep 16, 2011
777
0
0
Given that we have yet to see how horrifically bad GloFo's 28nm is to begin with (there is a reason it has been delayed nearly 2yrs already), it probably should come as no surprise that it can be markedly and quickly improved upon to such a degree.

It is easier to improve on Bulldozer's performance than Sandy Bridge's. So I am inclined to believe GloFo's PR, but it convinces me their 28nm must really be poor and that it won't be fixed and made competitive until 14nm-XM comes out.

But what will this do to Intel's advantage on nodes?

If GloFo can deliver volume 14XM in 2014, is the process as good as Intel's? (I know it's not, but ...)

You said yourself it's just labels - so while most of us still think "OOMG 14 NM TRIGATE XTORS WAUUUW", it's clearly only half the truth.

Even though it may be publicly marketed as 14nm XM, will it be anywhere near Intel's?

If we ignore time and just compare the two when each is released - are the processes relatively the same or extremely far apart?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The 14XM uses 20nm interconnects. Plus, let's be honest, 14XM products won't hit before 2015 at the earliest, more likely 2016. Don't confuse foundry roadmaps with actual products shipping.
 

MisterMac

Senior member
Sep 16, 2011
777
0
0
And what if we ignore the timeframe and just compare the processes themselves in terms of raw "performance"?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
If we ignore time and just compare the two when each is released - are the processes relatively the same or extremely far apart?

Extremely far apart. Just as AMD and IBM were completely caught off-guard by Intel's aggressive development and adoption of HKMG into production at 45nm, they were even more caught off-guard by the development and adoption of FinFETs into production at 22nm.

So what you saw, and continue to see, is IBM and GloFo operating in crisis mode, rushing under-developed process technologies through the R&D pipeline and making ill-advised tradeoffs along the way (bad 32nm dielectric decisions, the gate-first decision, the 28nm disaster, etc.).

And they are continuing that tradition with FinFETs and 14nm...rushing an underperforming FinFET to market for 20nm (it can only manage enough Idrive to power mobile devices without burning itself up; if they try to drive it with enough current and voltage to hit the GHz speeds needed for CPUs and GPUs, it dies very quickly) but re-labeling it a 14nm-XM product because they can't figure out how to rush the 14nm BEOL (metal wiring) to market at the same time.

The gap between Intel and GloFo continues to grow; we see it in their limited release of FinFETs for 14nm (mobile only, not high performance) and the lack of scaling in the BEOL. GloFo's 14nm-XM customers will be ill-equipped to field cost- or performance-competitive parts if those customers are competing with Intel or with high-performance customers of TSMC.

Even though TSMC is doing the same shenanigans with the BEOL not shrinking for 16nm, at least they do intend to field FinFET transistors that are robust enough to function (and survive) in the higher-voltage, higher-current environment that comes with the MPU version of their 16nm node.

It is difficult to see a silver lining in GloFo's looming dark clouds, TBH. Their technology roadmap is not competitive even if they manage to pull it off without delays the likes of which 32nm and 28nm have experienced.
 