TSMC 7nm info


Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Conceivably possible - if Apple is really, really committed to something, they've got insane resources these days.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Do you have a source for this? Seems like a non-trivial insertion. Wish IDontCare was still posting here.
They have already said this many times...

http://semiengineering.com/deeper-inside-intel/

CTRL+F EUV

or

Bohr: We are developing our 7nm technology to be compatible with either all immersion or EUV at selected layers. Obviously, we would rather use EUV if it could deliver the manufacturability goals in terms of the uptime and wafers per hour. The EUV tools today are not at that point, so it would be risky to commit a technology to EUV. And it would just delay us if we had to hold off on developing 7nm to wait for EUV tools. Obviously, if we had EUV, we could do it with fewer masks, and thus, a lower total wafer cost.

SE: Can you do 7nm without EUV?

Bohr: We are well down the path of developing our 7nm technology today on an all-immersion process. We are closely monitoring the health progress of EUV tools. But again, they are not yet at that maturity level that we could say we’ll be committing them for 7nm.

Bohr: Regarding some of those process details, we’ll keep them close to our vest a little while longer. We did disclose some relevant data, the first of which is gate pitch. It’s a very key factor in scaling for both logic and memory. We showed another metric of gate pitch times logic cell height. So we disclosed enough to make the point that our 10nm technology is a big step forward, better than the usual 0.56x area scaling.
Bohr: There are always a few changes, but it’s too early to disclose exactly what those changes are. For gate pitch, we’re scaling about 0.76x per generation. But the other design rules are scaling at a faster rate. The result in that area, certainly on 10nm, is much better than the traditional 0.56x area scaling.
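To make the arithmetic in that last answer explicit: if gate pitch scales 0.76x per generation, the remaining design rules have to scale faster for the area to beat 0.56x. A quick sketch in Python (the 0.76x and 0.56x figures are Bohr's; the rest is just algebra):

```python
gate_pitch_scaling = 0.76   # per-generation gate pitch scaling Bohr cites
traditional_area = 0.56     # the "usual" per-generation area scaling

# If every dimension scaled at 0.76x, area would only shrink to:
isotropic_area = gate_pitch_scaling ** 2
print(f"0.76^2 = {isotropic_area:.3f}")  # ~0.578, i.e. worse than 0.56

# To beat the traditional 0.56x area scaling, the other pitch (cell height)
# must scale faster than gate pitch does:
required_height_scaling = traditional_area / gate_pitch_scaling
print(f"cell height must scale below {required_height_scaling:.3f}x")  # ~0.737
```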

So that basically confirms that Intel 10nm > TSMC 7nm, 'cause Intel needs 0.54x to match TSMC 7nm.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Thanks witeken, for the link (and especially the CTRL-F advice). I thought the comment was about TSMC - my bad. In either case, I wish I had some info on the challenges of inserting EUV into the line - hence my comment about wishing IDC was still posting here.
 
Reactions: witeken

jpiniero

Lifer
Oct 1, 2010
14,839
5,456
136
So that basically confirms that Intel 10nm > TSMC 7nm, 'cause Intel needs 0.54x to match TSMC 7nm.

I looked at this, and .54x isn't going to be enough to be denser (if you are comparing SRAM). It'd only be ~0.031 µm² or somewhere around that. It'd have to be closer to .47x. At least if I am doing the math right.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I looked at this, and .54x isn't going to be enough to be denser (if you are comparing SRAM). It'd only be ~0.031 µm² or somewhere around that. It'd have to be closer to .47x. At least if I am doing the math right.
Intel 14nm: 0.0499µm² (high density)
TSMC 7nm: 0.027µm² (high density)

0.027/0.0499 = 0.54.

You are using the regular-density SRAM, which is indeed what Intel uses, but not the densest they can build at 14nm.
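Putting the numbers from this exchange side by side (a quick Python sketch; 0.0499µm² and 0.027µm² are the high-density cells quoted above, and 0.0588µm² is Intel's regular-density cell that comes up later in the thread):

```python
# SRAM cell sizes quoted in this thread (um^2)
intel_14nm_hd = 0.0499   # Intel 14nm high-density cell
intel_14nm_reg = 0.0588  # Intel 14nm regular cell (the one Intel usually quotes)
tsmc_7nm_hd = 0.027      # TSMC 7nm high-density cell

# Scaling Intel needs from 14nm to match TSMC 7nm:
print(f"HD vs HD:      {tsmc_7nm_hd / intel_14nm_hd:.2f}x")   # ~0.54x
print(f"HD vs regular: {tsmc_7nm_hd / intel_14nm_reg:.2f}x")  # ~0.46x, jpiniero's ~0.47x

# jpiniero's check: applying 0.54x to the regular cell falls short of 0.027
print(f"0.0588 * 0.54 = {intel_14nm_reg * 0.54:.4f} um^2")    # ~0.0318
```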
 

jpiniero

Lifer
Oct 1, 2010
14,839
5,456
136
Intel 14nm: 0.0499µm² (high density)
TSMC 7nm: 0.027µm² (high density)

0.027/0.0499 = 0.54.

You are using the regular-density SRAM, which is indeed what Intel uses, but not the densest they can build at 14nm.

Okay, but AFAIK Intel doesn't actually ship any CPUs using HD cells. Maybe they will with Cannon Lake, and that's why they did Coffee Lake.
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
Okay, but AFAIK Intel doesn't actually ship any CPUs using HD cells. Maybe they will with Cannon Lake, and that's why they did Coffee Lake.

Almost no one uses HD cells because they are so compromised. They basically aren't able to scale voltage, they are super slow, etc. 1:1:1 SRAM cells just have a LOT of issues.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Almost no one uses HD cells because they are so compromised. They basically aren't able to scale voltage, they are super slow, etc. 1:1:1 SRAM cells just have a LOT of issues.
But TSMC, for instance, doesn't report normal density. You would expect that at technical conferences the marketing aspect would go away a bit, but not for TSMC. Intel, on the other hand, has only reported the 0.0588µm² number in high-profile presentations.
 

oak8292

Member
Sep 14, 2016
87
69
91
But TSMC, for instance, doesn't report normal density. You would expect that at technical conferences the marketing aspect would go away a bit, but not for TSMC. Intel, on the other hand, has only reported the 0.0588µm² number in high-profile presentations.

I am interested in thoughts on the purpose of technical presentations on pitch density and SRAM size. Who is being marketed to in these presentations? It seems like Intel would have less need to publish accurate SRAM sizes than TSMC: Intel is selling finished goods to consumers and TSMC is selling a process to fabless companies, and my guess is that the fabless companies care more about SRAM cell sizes. Why are Intel's SRAM sizes more 'accurate' than TSMC's?

As a 'geek' interested in technology, I was under the impression that SRAMs were becoming an increasingly important part of an SoC and increasingly difficult to implement. Here is a quote from a marketing blog by Monolithic 3D:

'The percent of the die area used for embedded SRAM is growing with scaling and already exceeds 50%.'

http://www.monolithic3d.com/blog/the-most-expensive-sram-in-the-world-20

To my surprise, in Bill Holt's Intel presentation, a 2+2 Skylake only had 9% of the die area for SRAM and another 11% for register files, meaning only 20% of the die area is memory. Bill also reported that an actual SoC from Apple had only 30% SRAM. Something is wrong with either the blog or what Intel is reporting. I am guessing the blog, but I could be wrong.

The Monolithic 3D blog has a lot of additional observations about the difficulty of scaling and the effects of variation in lithography with scaling. It is an interesting read. It would be nice if they would update some of the information now that the 14nm node has been in production and the industry is moving to the 10nm node with triple and quad patterning.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I am interested in thoughts on the purpose of technical presentations on pitch density and SRAM size. Who is being marketed to in these presentations? It seems like Intel would have less need to publish accurate SRAM sizes than TSMC: Intel is selling finished goods to consumers and TSMC is selling a process to fabless companies, and my guess is that the fabless companies care more about SRAM cell sizes. Why are Intel's SRAM sizes more 'accurate' than TSMC's?

As a 'geek' interested in technology, I was under the impression that SRAMs were becoming an increasingly important part of an SoC and increasingly difficult to implement. Here is a quote from a marketing blog by Monolithic 3D:

'The percent of the die area used for embedded SRAM is growing with scaling and already exceeds 50%.'

http://www.monolithic3d.com/blog/the-most-expensive-sram-in-the-world-20

To my surprise, in Bill Holt's Intel presentation, a 2+2 Skylake only had 9% of the die area for SRAM and another 11% for register files, meaning only 20% of the die area is memory. Bill also reported that an actual SoC from Apple had only 30% SRAM. Something is wrong with either the blog or what Intel is reporting. I am guessing the blog, but I could be wrong.

The Monolithic 3D blog has a lot of additional observations about the difficulty of scaling and the effects of variation in lithography with scaling. It is an interesting read. It would be nice if they would update some of the information now that the 14nm node has been in production and the industry is moving to the 10nm node with triple and quad patterning.
I would indeed be interested in more context about the following graph, its accuracy, and how this plays out in Apple's situation.

[graph attachment missing]

Further, it indeed seems the 50% number is wrong, but SRAM is still an important component of a chip, of course. It could be that he only has numbers for certain kinds of chips from other companies.

And about the SRAM number disclosures: I would say it's just the culture of technology and science. When you have achieved something, you want to share it; that's what those conferences are for. But maybe someone who works in the semiconductor industry could tell us more about that.
 

oak8292

Member
Sep 14, 2016
87
69
91
When you have achieved something, you want to share it; that's what those conferences are for.

That seems reasonable, but it does raise the question of what has been achieved. I get the impression that SRAM is the first cell produced with the process of record, to get information on transistor characterization and variability for validating EDA tools. A very small SRAM area does come with some bragging rights, but does it really indicate much about suitability for design? In the last four to five years TSMC and Intel have had very different target markets. It will be interesting to see how the Intel process adapts as they start to market toward foundry services. Will the cell libraries and design rules allow third parties to gain the advantage of the very tight pitches and small SRAM that Intel reports at conferences? In practice, most of Intel's important designs are high-performance designs with lower interconnect densities.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
SRAM is just one of those standard things about a node that companies disclose every time.

And BTW, I don't think those conferences are for their customers. I guess that TSMC's customers get information about the process node in a different way and with all the design rules and stuff.
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
I am interested in thoughts on the purpose of technical presentations on pitch density and SRAM size. Who is being marketed to in these presentations? It seems like Intel would have less need to publish accurate SRAM sizes than TSMC: Intel is selling finished goods to consumers and TSMC is selling a process to fabless companies, and my guess is that the fabless companies care more about SRAM cell sizes. Why are Intel's SRAM sizes more 'accurate' than TSMC's?

As a 'geek' interested in technology, I was under the impression that SRAMs were becoming an increasingly important part of an SoC and increasingly difficult to implement. Here is a quote from a marketing blog by Monolithic 3D:

'The percent of the die area used for embedded SRAM is growing with scaling and already exceeds 50%.'

http://www.monolithic3d.com/blog/the-most-expensive-sram-in-the-world-20

To my surprise, in Bill Holt's Intel presentation, a 2+2 Skylake only had 9% of the die area for SRAM and another 11% for register files, meaning only 20% of the die area is memory. Bill also reported that an actual SoC from Apple had only 30% SRAM. Something is wrong with either the blog or what Intel is reporting. I am guessing the blog, but I could be wrong.

The Monolithic 3D blog has a lot of additional observations about the difficulty of scaling and the effects of variation in lithography with scaling. It is an interesting read. It would be nice if they would update some of the information now that the 14nm node has been in production and the industry is moving to the 10nm node with triple and quad patterning.

Blog seems overly focused on high end network switch chips as far as memory area is concerned.
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
I would indeed be interested in more context about the following graph, its accuracy, and how this plays out in Apple's situation.


Basically, as you up performance, you end up having to change not only SRAM cells from 1:1:1 -> 1:2:1 -> 1:2:2 but also the SRAM organization. At low speed you can get away with long, slow WL/BL because you have the time for them to settle. As you ramp up the speed, you end up having to use shorter and shorter WL/BL, which means more banks, drivers, decoders, sense amps, etc. Which means more area. And then it feeds back on itself.
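A toy model of that feedback loop, purely for illustration (the cell and periphery numbers below are made up, not from any real design):

```python
# Toy illustration: shorter word/bit lines -> more banks -> more peripheral
# circuitry (decoders, drivers, sense amps) -> more area.

def array_area(total_bits, bits_per_bank, cell_area=1.0, periphery_per_bank=4096):
    """Relative area of an SRAM macro split into banks (arbitrary units)."""
    banks = total_bits // bits_per_bank
    return total_bits * cell_area + banks * periphery_per_bank

total = 1 << 20  # a 1 Mbit macro
for bits_per_bank in (1 << 16, 1 << 14, 1 << 12):  # shorter WL/BL -> smaller banks
    a = array_area(total, bits_per_bank)
    print(f"{total // bits_per_bank:4d} banks -> {a / total:.2f}x the raw cell area")
# 16 banks -> 1.06x, 64 banks -> 1.25x, 256 banks -> 2.00x
```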
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
SRAM is just one of those standard things about a node that companies disclose every time.

And BTW, I don't think those conferences are for their customers. I guess that TSMC's customers get information about the process node in a different way and with all the design rules and stuff.

Conferences are almost always bragging and marketing.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Conferences are almost always bragging and marketing.
Well, ya - would you want to stand up there and say, "our products suck compared to our competition because..."?
It's a chance for the industry to present its best and generate an optimistic viewpoint for consumption by investors and tech media.
 

oak8292

Member
Sep 14, 2016
87
69
91
But even if you ignore the table that is divided into years, the thing that is wrong most of all with it is the formula.

Standard Node = 0.14 x (CPHP x MMHP)^0.67

I will not go into all the mathematical details, but the formula is really, really, really wrong in a very big way. The thing that messes it all up is the ^0.67 power.

I think I found the source of your pain. His name is Andrew Kahng and he is a professor at UC San Diego. He 'calibrated' the 2011 ITRS roadmap for SoCs in 2013 with data from Chipworks and actual FinFET products on the market. With the addition of a lot of 'uncore' overhead and design-equivalent scaling, or 'realizable density growth', in transistor designs, the node scaling factor was revised to 1.6x transistors per node. They wrote this up in CSE Dept. Technical Report #CSE2013-1002.

An interesting article on how we got here on naming is in IEEE Spectrum, with the following quotes from Prof. Kahng:

"But for Kahng, the steady progression of node names masks deeper problems. There is a difference, he says, between “available density” (how closely you can pack circuits and wires on a chip) and “realizable density” (what you can actually put into a competitive commercial product).

The sheer density and power levels on a state-of-the-art chip have forced designers to compensate by adding error-correction circuitry, redundancy, read- and write-boosting circuitry for failing static RAM cells, circuits to track and adapt to performance variations, and complicated memory hierarchies to handle multicore architectures. The problem, Kahng says, is that “all of those extra circuits add area.” His group has been scouring company specs and deconstructing images of chips for years, and they’ve come to an unsettling conclusion: When you factor those circuits in, chips are no longer twice as dense from generation to generation. In fact, Kahng’s analysis suggests, the density improvement over the past three generations, from 2007 on, has been closer to 1.6 than 2."

http://spectrum.ieee.org/semiconductors/devices/the-status-of-moores-law-its-complicated

Here is a link to the conference paper by Kahng:

http://vlsicad.ucsd.edu/Publications/Conferences/320/c320.pdf

Or a PowerPoint (it may download):

http://vlsicad.ucsd.edu/Publications/Conferences/320/c320_slide.pptx
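For concreteness, here is how the quoted formula evaluates (a sketch assuming CPHP and MMHP are the contacted-poly and minimum-metal half-pitches in nm, with Intel's published 14nm pitches of 70nm poly and 52nm metal as the example):

```python
# Kahng's "standard node" formula: 0.14 * (CPHP * MMHP)^0.67
# Assumption: CPHP/MMHP are half the contacted poly / minimum metal pitch, in nm.

def standard_node(poly_pitch_nm, metal_pitch_nm):
    cphp, mmhp = poly_pitch_nm / 2, metal_pitch_nm / 2
    return 0.14 * (cphp * mmhp) ** 0.67

# Intel 14nm publishes a 70nm contacted poly pitch and a 52nm metal pitch
print(f"{standard_node(70, 52):.1f} nm")  # ~13.4, close to the marketed "14nm"
# Note: a node name that scaled exactly with sqrt(area) would use a 0.5
# exponent; the 0.67 is Kahng's calibration to the observed ~1.6x density
# growth per node, which is what witeken is objecting to.
```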
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
ASML has reached 2 milestones in Q3:

* 90% EUV availability on an old tool for 4 weeks
* 1,500 wafers per day over 3 days

* In their earnings call, they're clear that they are targeting a 2018-2019 ramp of EUV for production -- they expect to produce 12 EUV systems in 2017, double that in 2018, and double again in 2019. This means that actual EUV production should start in 2019-2020. It seems Samsung is going to have to play the waiting game with their 7nm if they insist on not using immersion anymore at 7nm.
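The arithmetic behind those numbers (a quick sketch; it assumes the 1,500 wafers/day run was on a single tool):

```python
# EUV system output implied by the call: 12 in 2017, doubling twice
shipments = {2017: 12}
for year in (2018, 2019):
    shipments[year] = 2 * shipments[year - 1]
print(shipments)               # {2017: 12, 2018: 24, 2019: 48}
print(sum(shipments.values())) # 84 systems cumulative by end of 2019

# The throughput milestone, expressed as wafers per hour
print(f"{1500 / 24:.1f} wph")  # ~62.5 wph, assuming one tool running 24h/day
```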

http://seekingalpha.com/article/401...-results-earnings-call-transcript?part=single

Andrew Gardiner

Yeah, I understood. So Peter, you're suggesting that we can anticipate perhaps a mix of sort of ones and twos of orders, but also for some of the bigger customers you anticipate a similar type of announcement to that which we had from Intel in the second quarter last year?

Peter Wennink

Yeah, I think that's normally how we work, which is not atypical for EUV; that's typical for the Deep UV and other businesses also. I mean, we actually have these volume purchase agreements over a longer period of time; could be a year, could be a node, could be 18 months. It depends on the customer, where basically we say: at this particular volume, these are the terms and conditions under which we will ship. And that's also true for EUV. For 1Cs and 2Cs [ph], you don't need volume purchase agreements, just to be clear.

So for volume we need volume purchase agreements, because that has an impact on pricing and customers would like to see that impact. So it is also no surprise that we are in deep discussions as we speak on those volume purchase agreements, which will be, you could say, the trigger and the umbrella under which the individual deals will be issued.
 
Last edited:
Reactions: Lodix

wingman04

Senior member
May 12, 2016
393
12
51
ASML has reached 2 milestones in Q3:

* 90% EUV availability on an old tool for 4 weeks
* 1,500 wafers per day over 3 days

* In their earnings call, they're clear that they are targeting a 2018-2019 ramp of EUV for production -- they expect to produce 12 EUV systems in 2017, double that in 2018, and double again in 2019. This means that actual EUV production should start in 2019-2020. It seems Samsung is going to have to play the waiting game with their 7nm if they insist on not using immersion anymore at 7nm.

http://seekingalpha.com/article/401...-results-earnings-call-transcript?part=single
What are EUV and immersion?
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
IBM's high performance processes have traditionally been plagued by the fact that they are extremely expensive and yield very poorly.

FDSOI and SOI FinFETs don't have "yield problems," especially if the foundry has done a bulk version of that node.

Samsung has given us a slide for this:

[slide attachment missing]

Because GlobalFoundries has done 20LPM & 14LPP. Thus, we should see the above with 22FDX & 14HP.

Stuff I have found:
22FDX volume starts March 2017. [1.0 PDK and production*]
14HP volume starts July 2017. [1.0 PDK and production*]
*For the past six months.

Ask yourself why GloFo didn't use IBM's 22nm SOI process. Why didn't it license IBM's 14nm SOI high-performance process (which hasn't even been used to build anything that anyone can buy yet)?
AMD may have decided to target CPUs that benefit more from being made on LPP. AMD may not be trying for the high-performance node design target at this point.

It seems clear enough that AMD had the means to replace Piledriver with at least a minor — but significant on the desktop — update to that architecture in the 8-core chips, and chose not to for whatever reasons. The Stilt said the static leakage of 32nm SOI SHP has dropped by 45% since the process was released, so it's not clear that moving to IBM's process would even have been necessary to put out something more competitive than Piledriver. Adding a micro-op cache alone would probably have helped quite a bit. With no iGPU, and with the improvements to the leakage, yields, and consistency of that process, it's not even clear that it's too late, even now, to release a high-performance CPU on it for gamers (since power consumption, despite all the talk about it, is not all that central to their needs).

But since AMD chose not to change anything since 2012, it suggests that desktop high performance isn't the company's focus. There's a lot of talk about Zen's Summit Ridge targeting this area, but it might just be hype, with the real focus elsewhere.
 
Last edited:
Reactions: SunburstLP

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
What are EUV and immersion?
In my other post I just posted a video of a talk (in 3 parts) from someone at Google, which is actually a great talk; maybe that will help a bit.

http://forums.anandtech.com/threads/...and-interconnect-pitch.2489528/#post-38534325

For the rest, this is still one of my favorite semiconductor videos, since it has great depth but is easily understandable: https://www.youtube.com/watch?v=NGFhc8R_uO4.

But in short: lithography is one of the most important and costliest steps needed to make a chip. Lithography uses light to pattern features on the chip, basically. However, light has a certain wavelength, so you can't use it to print arbitrarily small features. As you can see in one of the graphs in the first video (part 1), currently 193nm light is used. However, the size of the feature you can print is also inversely proportional to the numerical aperture. By using water, you can increase the aperture to 1.35, thus improving your resolution by about 1.35x. This is called immersion lithography and has been used since 32nm.

But because you want to continue shrinking features (22nm, 14nm, etc.), you have to invent something to keep making things smaller. The best idea people could come up with (that could be done in time and at reasonable cost) is to just expose the wafer twice (or even more times) to double (or triple, etc.) the resolution. This is called double or, in general, multiple patterning.

However, what you really want, of course, is to use a smaller wavelength. This apparently was not possible anymore in an evolutionary way, so the next feasible option was EUV, which is at the far end of the ultraviolet spectrum (hence the name: extreme ultraviolet), with a 13.5nm wavelength. However, developing EUV has been an enormous pain, and it is still in development by ASML. The first reason is that a lens is not transparent to EUV. So you have to use mirrors, which lose a lot of light (only ~0.7x is kept with every reflection), so you need a really strong light source to get about 250W onto the wafer. Developing this source to make it bright enough (right now only about 120W) and reliable enough (to get >85% uptime) is really difficult.

So that's lithography in a nutshell. There are also other methods being looked into, like electron-beam litho or directed self-assembly, but those have so far not become compelling, primarily for throughput reasons.
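The resolution relation described here is the Rayleigh criterion, CD = k1 × λ / NA. A quick sketch (the 193nm/13.5nm wavelengths and the 1.35 immersion NA are from the post above; the k1 of 0.4, the 0.93 dry NA, the 0.33 EUV NA, and the 10-mirror count are typical values I am assuming):

```python
# Rayleigh criterion: minimum printable feature ~ k1 * wavelength / NA
def min_feature(wavelength_nm, na, k1=0.4):
    return k1 * wavelength_nm / na

print(f"193nm dry (NA 0.93):       {min_feature(193, 0.93):.0f} nm")
print(f"193nm immersion (NA 1.35): {min_feature(193, 1.35):.0f} nm")  # ~1.35x better
print(f"EUV 13.5nm (NA 0.33):      {min_feature(13.5, 0.33):.1f} nm")

# Why the source must be so bright: only ~0.7x of the light survives each mirror
mirrors = 10  # roughly the number of mirrors in an EUV scanner's optical path
print(f"light surviving {mirrors} mirrors: {0.7 ** mirrors:.1%}")  # ~2.8%
```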
 
Reactions: wingman04

wingman04

Senior member
May 12, 2016
393
12
51
In my other post I just posted a video of a talk (in 3 parts) from someone at Google, which is actually a great talk; maybe that will help a bit.

http://forums.anandtech.com/threads/...and-interconnect-pitch.2489528/#post-38534325

For the rest, this is still one of my favorite semiconductor videos, since it has great depth but is easily understandable: https://www.youtube.com/watch?v=NGFhc8R_uO4.

But in short: lithography is one of the most important and costliest steps needed to make a chip. Lithography uses light to pattern features on the chip, basically. However, light has a certain wavelength, so you can't use it to print arbitrarily small features. As you can see in one of the graphs in the first video (part 1), currently 193nm light is used. However, the size of the feature you can print is also inversely proportional to the numerical aperture. By using water, you can increase the aperture to 1.35, thus improving your resolution by about 1.35x. This is called immersion lithography and has been used since 32nm.

But because you want to continue shrinking features (22nm, 14nm, etc.), you have to invent something to keep making things smaller. The best idea people could come up with (that could be done in time and at reasonable cost) is to just expose the wafer twice (or even more times) to double (or triple, etc.) the resolution. This is called double or, in general, multiple patterning.

However, what you really want, of course, is to use a smaller wavelength. This apparently was not possible anymore in an evolutionary way, so the next feasible option was EUV, which is at the far end of the ultraviolet spectrum (hence the name: extreme ultraviolet), with a 13.5nm wavelength. However, developing EUV has been an enormous pain, and it is still in development by ASML. The first reason is that a lens is not transparent to EUV. So you have to use mirrors, which lose a lot of light (only ~0.7x is kept with every reflection), so you need a really strong light source to get about 250W onto the wafer. Developing this source to make it bright enough (right now only about 120W) and reliable enough (to get >85% uptime) is really difficult.

So that's lithography in a nutshell. There are also other methods being looked into, like electron-beam litho or directed self-assembly, but those have so far not become compelling, primarily for throughput reasons.
Thanks for the information and links. I knew a little about lithography and Intel using water to focus even smaller features; now I know a lot more, thanks.
 
Reactions: witeken