Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]

Page 87 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

DisEnchantment

Golden Member
Mar 3, 2017
1,687
6,243
136
TSMC's N7 EUV is now in its second year of production and N5 is contributing to revenue for TSMC this quarter. N3 is scheduled for 2022 and I believe they have a good chance to reach that target.


N7 performance is more or less understood.


This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung the nodes are basically the same from 7LPP to 4LPE: they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2. Hopefully they fix any remaining issues.
They have two more intermediate nodes before 3GAE: 5LPE will most likely ship next year, while 4LPE will probably arrive back to back with 3GAA, since 3GAA is being developed in parallel with the 7LPP-derived enhancements.




Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistor will be fabricated due to the GAA structure, but on density Samsung will certainly be behind N3.
But there might be advantages for Samsung with regards to power and performance, so it may be better suited for some applications.
But for now we don't know how much of this is true and we can only rely on the marketing material.

This year there should be a lot more available wafers due to lack of demand from Smartphone vendors and increased capacity from TSMC and Samsung.
Lots of SoCs which don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be lots of wafers around.

Most of the current 7nm designs are far from the advertised density from TSMC and Samsung. There is still potential for density increase compared to currently shipping products.
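As a rough illustration of that gap (the figures below are public estimates I'm pulling in, not numbers from this thread): TSMC quotes a peak of roughly 91 MTr/mm² for N7 high-density libraries, while a shipping N7 design like AMD's Navi 10 (10.3 billion transistors on 251 mm²) lands around 41 MTr/mm²:

```python
# Back-of-envelope: compare a node's advertised peak density with what a
# shipping design achieves. All figures are rough public estimates.
def density_mtr_per_mm2(transistors_billion: float, die_mm2: float) -> float:
    """Effective transistor density in millions of transistors per mm^2."""
    return transistors_billion * 1000 / die_mm2

ADVERTISED_N7 = 91.2  # TSMC N7 HD-library peak, MTr/mm^2 (marketing figure)

navi10 = density_mtr_per_mm2(10.3, 251)  # AMD Navi 10, an N7 GPU
print(f"Navi 10: {navi10:.1f} MTr/mm2, "
      f"{navi10 / ADVERTISED_N7:.0%} of advertised peak")
# -> Navi 10: 41.0 MTr/mm2, 45% of advertised peak
```

Real SoCs mix logic, SRAM, analog, and I/O, none of which scale like the HD logic cells the marketing number is based on, so landing at roughly half the advertised peak is normal rather than a sign of a bad design.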
N5 is going to be the leading foundry node for the next couple of years.

For a lot of fabless companies out there, the processes and capacity available are quite good.
 
Jul 27, 2020
17,986
11,726
116
It's good that they offer the 5600 option. It should result in lower latency and be better for general use; 7467 would mainly benefit highly parallel apps or games using the tGPU.
 

Doug S

Platinum Member
Feb 8, 2020
2,507
4,104
136
So what components of a modern SoC are Analog?


Obviously there is the modem. What else?

The LNA and ADC on the input side and the DAC and VGA on the output side would be analog; everything else is digital. Those may be on a separate chip (or chips, if there is one for input and one for output) though; if so, the modem is 100% digital.
 
Reactions: SpudLobby

Khato

Golden Member
Jul 15, 2001
1,225
281
136
No experience in the area to say how much of the 'Snapdragon X70 Modem-RF System' would be 'digital' versus 'analog'. But the majority of the chip periphery used for I/O would typically be considered 'analog', even if it's transmitting a 'digital' signal.
 

Khato

Golden Member
Jul 15, 2001
1,225
281
136
Interesting. On first thought 20% analog seems a bit high... Makes me curious as to whether the clock tree would be counted as part of the analog design? Would make sense for it to be. Power gating would also fall into that category.
 

FlameTail

Diamond Member
Dec 15, 2021
3,194
1,833
106
TSMC 5nm = 136 mm²
Samsung 4nm = 137 mm²
TSMC 4nm = 141 mm²

Source: https://www.angstronomics.com/p/the-truth-of-tsmc-5nm

So in the 5nm/4nm generation, Samsung barely kept up with TSMC.



But it gets better with 3nm, right?

NO!


Samsung 3GAP (SF3) only brings a 21% logic area improvement. Contrast that with TSMC's 1.6x logic scaling with N3E (1.7x with N3B).

And for Samsung it only gets worse in 2nm.

Only a 5% decrease!?

Meanwhile TSMC claimed a >15% density improvement in their N2 (vs N3E).

So Samsung 3nm is going to have worse density than TSMC 3nm.

And then in the 2nm generation, the gap is only going to widen. From these numbers, it seems SF2 will have worse density than even N3E!
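A "1.6x scaling" claim and a "21% area reduction" claim are the same kind of statement in different units, which makes the vendors hard to compare at a glance. A quick conversion of both into one metric (my arithmetic, using only the percentages quoted above) makes the gap clearer:

```python
# Convert an "N% smaller logic area" claim into the equivalent
# "X-times density" claim, so vendor numbers can be compared directly.
def shrink_to_scaling(pct_area_reduction: float) -> float:
    """Density gain factor implied by an area-reduction percentage."""
    return 1 / (1 - pct_area_reduction / 100)

print(f"Samsung SF3, 21% smaller logic -> {shrink_to_scaling(21):.2f}x")   # 1.27x
print(f"Samsung SF2,  5% smaller logic -> {shrink_to_scaling(5):.2f}x")    # 1.05x
print(f"TSMC N3E 1.6x claim as a shrink -> {100 * (1 - 1 / 1.6):.1f}%")    # 37.5%
```

So Samsung's 21% shrink is a 1.27x density gain against TSMC's claimed 1.6x, and SF2's 5% shrink is barely 1.05x: the same story either way you express it.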

I am now worried about Samsung Foundry's future prospects.
 

FlameTail

Diamond Member
Dec 15, 2021
3,194
1,833
106
About IMEC
You might not be familiar with the Interuniversity Microelectronics Centre (imec), but it ranks among the most important companies in the world. Think of imec as a silicon Switzerland, of sorts. Imec serves as a quiet cornerstone of the industry, bringing fierce rivals like AMD, Intel, Nvidia, TSMC, and Samsung together with chip toolmakers such as ASML and Applied Materials, not to mention the critical semiconductor software design companies (EDA) like Cadence and Synopsys, among others, in a non-competitive environment.

This collaboration allows the companies to work together to define the roadmap of the next generation of tools and software they will use to design and manufacture the chips that power the world. A standardized approach is paramount in the face of the profoundly increasing cost and complexity of the chipmaking process. The leading-edge chipmakers use much of the same equipment sourced from a few critical toolmakers, so some level of standardization is necessary, and circumventing the laws of physics requires R&D efforts that can begin a decade in advance, so imec’s roadmaps give us a broad view of the upcoming advances in the semiconductor industry.
Wow. I didn't know IMEC was so significant. It's cool how such an organisation exists.
 
Reactions: Elfear
Jul 27, 2020
17,986
11,726
116
I am now worried about Samsung Foundry's future prospects.
They deserve whatever fate awaits them. It's their own fault. Just like it is Intel's own fault for losing their leadership position. No need to worry about any of them. The passage of time applies whatever course correction is needed automatically, even if it means killing one organization and giving rise to a completely new player.
 

FlameTail

Diamond Member
Dec 15, 2021
3,194
1,833
106
They deserve whatever fate awaits them. It's their own fault. Just like it is Intel's own fault for losing their leadership position. No need to worry about any of them. The passage of time applies whatever course correction is needed automatically, even if it means killing one organization and giving rise to a completely new player.
They have been fudging about for the past few years.

Samsung 5nm wasn't even a "real 5nm", as they instead lazily opted to make a derivative of their 7LPP node, creating 5LPE, which was significantly inferior to TSMC N5.

Then came their ground-up new node 4LPE, with its terrible yields.

Together, 5LPE, 4LPE and 4LPX ruined two whole generations of Android phones.

In the meanwhile, Samsung had been hyping up their 3nm for years, with proud boasts of being the first node with GAAFET.

That hypetrain has now been wrecked too.

They do have 3GAE working, but it isn't a proper node, as it apparently lacks SRAM cells, and the only thing it has produced so far is cryptomining chips.

3GAP is coming, but it seems we won't see a chip on it until January 2025: the Exynos 2500.

So we will see a mass volume product with Samsung 3nm only in 2025. TSMC is already producing mass volume N3B right now, and by Jan 2025, they will have mass volume N3E too.

And in the meantime Intel will have shipped Arrow Lake with 20A - with GAAFET, beating Samsung by being the first to ship mass volume GAAFET.

Samsung fumbled 5nm and 4nm. Their big bet on 3nm has also collapsed.

I have no more words to say.
 
Reactions: Executor_
Jul 27, 2020
17,986
11,726
116
Well I suppose that could help stuff living in datacenters, or gamers who are willing to spare no expense. Not really gonna help with smartphones and laptops though!
Yeah, mobile devices can't use this. But this "extreme low temperature" tolerance could lead to the cloud migrating to space data centers where you don't need any cooling at all. Imagine a long chain of servers floating in space, going all the way to the moon and beyond, with enough compute capacity to serve entire humanity! I bet Amazon AWS will pave the way since Bezos wants to move every source of industrial heat and toxic chemical fumes out into space to make Earth pristine and clean.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Yeah, mobile devices can't use this. But this "extreme low temperature" tolerance could lead to the cloud migrating to space data centers where you don't need any cooling at all. Imagine a long chain of servers floating in space, going all the way to the moon and beyond, with enough compute capacity to serve entire humanity! I bet Amazon AWS will pave the way since Bezos wants to move every source of industrial heat and toxic chemical fumes out into space to make Earth pristine and clean.
c (the speed of light) is too slow.
 

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
Yeah, mobile devices can't use this. But this "extreme low temperature" tolerance could lead to the cloud migrating to space data centers where you don't need any cooling at all.
It is actually a misconception that because space is 'cold,' it is easy to cool things. The lack of air in space means that you don't get convection cooling and need to move heat to radiators or the like over metal connections that can't have any gaps. Then those radiators work merely by emitting IR radiation.

This is a lot more complicated than on Earth, where you can simply slap a fan on a hot surface and cut some holes to blow out the air.
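To put a number on how weak radiative cooling is, here's a back-of-envelope sketch using the Stefan-Boltzmann law, P = εσAT⁴. The scenario is my own illustration, not from this thread: 1 MW of server heat, a radiator held at 300 K with emissivity 0.9, one radiating face, and absorbed sunlight ignored.

```python
# How much radiator area does it take to dump datacenter-scale heat in
# space, where the only exit path is IR radiation (Stefan-Boltzmann law)?
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """One-sided ideal radiator area, ignoring absorbed sunlight."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

area = radiator_area_m2(1e6, 300)
print(f"~{area:,.0f} m^2 of radiator per MW")  # roughly 2,400 m^2
```

That's on the order of a third of a football pitch of radiator per megawatt, at a temperature warmer than you'd actually want the servers to run, which is why "space is cold" doesn't translate into easy cooling.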
 
Last edited:

dr1337

Senior member
May 25, 2020
385
639
136
It is actually a misconception that because space is 'cold,' it is easy to cool things. The lack of air in space means that you don't get convection cooling and need to move heat to radiators or the like over metal connections that can't have any gaps. Then those radiators work merely by emitting IR radiation.

This is a lot more complicated than on earth where you can simply slap a fan on a hot surface and make some holes to blow out the air.
You can rest assured that doing anything on Earth is less complicated than rocket science, but things in space do still get cold, even if it's hard to radiate heat. Apollo needed heaters on both the service module and the lander because the craft inherently radiated heat away so fast. Also, I'm pretty sure the massive oversized radiators on the ISS are liquid-cooled anyway.

It's a good idea, but it could be better: we should just put a giant datacenter at the Moon's poles and use the very cold rock as one massive heatsink. Looking at the cost trend of advanced nodes, this obviously becomes cost-effective sooner rather than later.
 

Doug S

Platinum Member
Feb 8, 2020
2,507
4,104
136
You can rest assured that doing anything on Earth is less complicated than rocket science, but things in space do still get cold, even if it's hard to radiate heat. Apollo needed heaters on both the service module and the lander because the craft inherently radiated heat away so fast. Also, I'm pretty sure the massive oversized radiators on the ISS are liquid-cooled anyway.

It's a good idea, but it could be better: we should just put a giant datacenter at the Moon's poles and use the very cold rock as one massive heatsink. Looking at the cost trend of advanced nodes, this obviously becomes cost-effective sooner rather than later.

It is a silly idea to put anything in space, let alone on the Moon, to keep it cool, even if that made cooling easier, which it really doesn't. The cost/benefit isn't there (and don't forget the other rather large downsides, such as network latency and the difficulty of installation/maintenance/upgrades).

Cooling to nitrogen-liquefaction temperatures is pretty easy to do; otherwise you wouldn't see standardized LN2 PC cooling kits that normal people can buy at a pretty reasonable cost, at least considering how niche it is. Yes, cooling to that level costs more energy because you're moving more heat, but for DOUBLE the performance that's an easy economic case to make.

You wouldn't even need any sort of special data center for it. You'd probably have insulated racks designed for it, with integrated LN2 plumbing connecting to piping hung from the ceiling or run under the floor, feeding a big industrial chiller. Other rows could still hold standard air-cooled servers. The main problem with trying to use existing datacenters is probably power density: I'm assuming that if you're doubling the performance at that temperature, you are AT LEAST doubling the power per rack. You might need to rewire to deliver that much power, and you'd need to massively upgrade the UPS, not only for the servers' power draw but for the increased cooling load. That means upgrading the power feeds and perhaps the substation feeding them. Oh, and you'd need a lot more water too, due to all that extra heat.

I wonder what the characteristics of these transistors are if they can double their performance at those temperatures. Is it a smooth drop-off in performance as temperature increases, or a cliff? If you lose cooling while running at that performance level, how quickly do you have to cut the clock rates? Hopefully it isn't like a quench in a superconducting magnet, lol.
 
Reactions: Executor_

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
Apollo needed heaters on both the service module and lander because the crafts inherently radiated heat away so fast. Also I'm pretty sure the massive oversized radiators they have on the ISS are liquid cooled anyways.
There actually was a cooling system installed on both the CSM and LM of Apollo. The issue was more that at some points most of the electronics were turned off to conserve battery power.

Despite most of the electronics being powered off, the remainder could still have been used to heat the crew compartment, but it was deemed too complicated to build something to conduct that heat to the crew for the rare moments when most of the electronics were off, so they used heaters instead. During normal operations there was excess heat in the crew compartment and the heat transfer only went in the other direction, so they didn't want to add a complicated system that could break for the rare moments when it would be useful.

Anyway, once you start putting server farms in space, you are very far from a "most electronics are turned off" situation.
 
Last edited:
Reactions: adamge