Discussion Intel current and future Lakes & Rapids thread


eek2121

Diamond Member
Aug 2, 2005
3,051
4,275
136
I found this tidbit interesting:

Recently Intel has disclosed the PL1 value used as the TDP, but the PL2 value that applies while the boost function (Intel® Turbo Boost Technology) is sustained is not described in the data sheet. PL1 × 1.25 is said to be the hardware default, which is far from the actual settings, as you can see from the coreboot values described above.

Any thoughts?
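If I'm reading that right, the interesting part is the gap between the hardware fallback (PL1 × 1.25) and what firmware actually programs. A quick back-of-envelope, where the 28 W PL1 and 60 W firmware PL2 are just illustrative numbers, not from the article:

```python
# Minimal sketch of the article's claim: if firmware never programs PL2,
# the hardware default is PL1 * 1.25, which is far below typical
# firmware-programmed values. Both wattages below are assumptions.
pl1_watts = 28.0                    # assumed PL1 / TDP for a U-class part
default_pl2 = pl1_watts * 1.25      # hardware fallback per the quoted article
firmware_pl2 = 60.0                 # assumed coreboot-style PL2 setting

print(f"PL1 (TDP)    : {pl1_watts:.0f} W")
print(f"Default PL2  : {default_pl2:.0f} W  (PL1 x 1.25)")
print(f"Firmware PL2 : {firmware_pl2:.0f} W")
```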
 

jpiniero

Lifer
Oct 1, 2010
14,836
5,454
136
I found this tidbit interesting:

Recently Intel has disclosed the PL1 value used as the TDP, but the PL2 value that applies while the boost function (Intel® Turbo Boost Technology) is sustained is not described in the data sheet. PL1 × 1.25 is said to be the hardware default, which is far from the actual settings, as you can see from the coreboot values described above.

Any thoughts?

OEMs have wide latitude in what they set the PL1 and PL2 to.
 

Asterox

Golden Member
May 15, 2012
1,028
1,786
136
Those power limits seem high but aren't really out of the ordinary now for Intel.

Yes, if this is 4 big Cores+8 small Cores.

4 + 8 + GT2 is listed as PL1: 45W / PL2: 115W

What can we expect from PL2 for 8+8? 200W+ for the top model, no doubt.
 

eek2121

Diamond Member
Aug 2, 2005
3,051
4,275
136
I just think it's weird seeing the 11980HK with a peak power draw higher than the 5600X. And on that note, the 5980HS seems a little toasty as well - it's not a desktop part! But the 11980HK is just off the chain.

I'm waiting on 11800H or 11900H reviews to drop. Seems like Intel took the 11980HK too far.

Yes, if this is 4 big Cores+8 small Cores.

4 + 8 + GT2 is listed as PL1: 45W / PL2: 115W

What can we expect from PL2 for 8+8? 200W+ for the top model, no doubt.

For ADL-P:

4+8+2 config is this:
PL1 = 28 Watts
PL2 = 64 Watts

Coreboot lists TGL-UP3 with:
PL1 = 15W
PL2 = 60W

So yes, PL1 is a bit higher, but PL2 hasn't changed much at all (so far). That is for 12 cores vs. 4. The 2+8+2 part is actually closer to TGL-UP3; however, it is important to note that the 2+8+2 values are specific to Google Brya, which is a Chromebook.

Nothing has been added regarding ADL-S yet.

EDIT: Perhaps more interesting is that the 6+8 "P" variant will edge out the 5800X in single- and multi-core performance. EDIT: And the 4+8 variant should absolutely beat a 5600X; it may even beat the 5800X...
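Putting the numbers quoted in this post (plus the 4+8+GT2 figure from earlier in the thread) side by side; the headroom column is just PL2/PL1 derived from those values, nothing official:

```python
# Side-by-side of the PL values quoted in the thread; the "boost headroom"
# column is simply PL2 / PL1 computed from those figures, not an Intel spec.
configs = {
    "ADL-P 4+8+2 (coreboot)": (28, 64),
    "TGL-UP3 (coreboot)":     (15, 60),
    "ADL 4+8+GT2 (rumoured)": (45, 115),
}

for name, (pl1, pl2) in configs.items():
    print(f"{name:26s} PL1 {pl1:3d} W   PL2 {pl2:3d} W   boost headroom {pl2 / pl1:.2f}x")
```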
 
Last edited:

itsmydamnation

Platinum Member
Feb 6, 2011
2,864
3,418
136
The upgrade cycle in enterprise will blow your mind. Servers run until they die or a major infrastructure project is planned. Most companies I have worked with replace servers every 6-8 years.

You can expect EPYC to become more popular in a few years.

Which do you think costs more in the long run, 2X 64 core EPYC servers or 1X 128 core EPYC server? What if you have 2 racks in a tiny (well cooled) closet as one of my clients does? Building a new closet is a significant capital investment. Adding a 2S motherboard or 4S motherboard for a few hundred or thousand more + additional CPU does not.
Depends on workload, but assuming each of the 2x 64-core servers has the same uncore as the 128-core server (like EPYC right now), I'm picking 2x64 every day of the week. At scale, memory bandwidth is king.

Also, licensing plays a big part: when an ESX + vSAN 1P license (where 1P = max 32 cores) costs as much as the server, I'm optimising for throughput at 32 cores a socket.

Edit: this is also why Ice Lake server isn't so bad; they will be able to sell plenty of 32-core models to enterprise, and those have about the same performance as Milan EPYC.
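To put some made-up numbers on the licensing point: under the 32-cores-per-license cap, a 64-core socket needs two packs and halves the memory bandwidth available to each core, which is the whole "optimise throughput at 32 cores a socket" argument. A minimal sketch, where the license price and bandwidth figure are assumptions:

```python
# Back-of-envelope for throughput per license dollar. The license price and
# per-socket memory bandwidth are invented placeholders, not real figures.
LICENSE_PER_32_CORES = 10_000     # assumed cost of one ESX + vSAN 1P pack
MEM_BW_PER_SOCKET_GBS = 200       # assumed per-socket memory bandwidth

def per_socket(cores: int) -> tuple[int, float]:
    packs = -(-cores // 32)                      # license packs: ceil(cores / 32)
    bw_per_core = MEM_BW_PER_SOCKET_GBS / cores  # fixed bandwidth shared by more cores
    return packs * LICENSE_PER_32_CORES, bw_per_core

for cores in (32, 64):
    license_cost, bw = per_socket(cores)
    print(f"{cores}-core socket: license ${license_cost:,}, {bw:.1f} GB/s per core")
```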
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Depends on workload, but assuming each of the 2x 64-core servers has the same uncore as the 128-core server (like EPYC right now), I'm picking 2x64 every day of the week. At scale, memory bandwidth is king.

Also, licensing plays a big part: when an ESX + vSAN 1P license (where 1P = max 32 cores) costs as much as the server, I'm optimising for throughput at 32 cores a socket.

Edit: this is also why Ice Lake server isn't so bad; they will be able to sell plenty of 32-core models to enterprise, and those have about the same performance as Milan EPYC.
What is your workload, if you can say?
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,864
3,418
136
What is your workload, if you can say?
All sorts of stuff:
general enterprise compute (DCs, terminal servers, apps, DBs, etc.)
software-based storage
network function virtualisation (NFV)
security / analytics / event correlation

All the good things any org of any size needs.
One thing I was looking forward to trying, but hardware validation timelines got in the way, was Milan SEV-SNP for NFV functions.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
TGL-U was PL1 42W/PL2 64W in the most aggressive setup, right? So 115W is kinda high compared to that. Maybe the Golden Coves are really fast but hungry.

However, it got moar corez, potentially affecting the balance.

The figures are no higher than Tigerlake. The M part is actually 5-10W lower than UP4 Tigerlake.

The 45W Alderlake-P is a replacement for Tigerlake-H, which can have PL2 set as high as 135W.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Depends on workload, but assuming each of the 2x 64-core servers has the same uncore as the 128-core server (like EPYC right now), I'm picking 2x64 every day of the week. At scale, memory bandwidth is king.

Yeah, 2P is where the action is. 4P starts having nasty effects and is only viable if the workload fits it (like big-data type workloads, NUMA-aware apps and so on).
Even architecturally, things like UPI links between sockets can get overloaded by inter-socket traffic. I think Golds have 2x UPI and Platinums 3x UPI links, and in a 4S configuration you only get 1 UPI link to a neighbour (or even worse, have to go an additional hop if 2x UPI CPUs were used).
So while in theory you have a ton of DIMMs, inter-socket traffic will kill you if your app is not NUMA aware (or interleaving was enabled for whatever reason).

2P is really the sweet spot for memory channels, inter-socket bandwidth and the "throughput/licensing" metric.
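A crude way to see the "not NUMA aware" penalty: once a fraction of accesses land on the remote socket, effective bandwidth becomes a weighted harmonic mean of local memory and the UPI link. A minimal sketch with placeholder bandwidth numbers, not measurements for any particular SKU:

```python
# Crude model of effective memory bandwidth when some accesses are remote.
# 200 GB/s local and 40 GB/s usable UPI are placeholders, not measured values.
LOCAL_BW_GBS = 200.0
UPI_BW_GBS = 40.0

def effective_bw(remote_fraction: float) -> float:
    # Weighted harmonic mean: time to move 1 GB is the time spent serving
    # the local portion plus the time for the (slower) remote portion.
    local_time = (1.0 - remote_fraction) / LOCAL_BW_GBS
    remote_time = remote_fraction / UPI_BW_GBS
    return 1.0 / (local_time + remote_time)

for frac in (0.0, 0.1, 0.5):   # 0.5 ~= memory interleaved across two sockets
    print(f"{frac:>4.0%} remote accesses -> ~{effective_bw(frac):.0f} GB/s effective")
```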
 

repoman27

Senior member
Dec 17, 2018
378
535
136
If the manufacturing process and tools are ready, they can just tape out the product to them. But when the manufacturing process isn't complete yet, there is no way to tape out the design. So they tape in the design, developing the design and process design rules until it is actually manufacturable, and when they are ready the design is taped out.

Those Intel links estimate that tape-out for a product family should happen within three years of the tape-in of the first product...
That document is specifically a customer agreement with Altera where Intel will manufacture and sell all products with a "Product Attachment" agreement which can also include similar or successor products that are part of the same product family. The three year clause is a limitation on the original Product Attachment for a given product family. Intel expects all products in the family to tape out within 3 years of the lead product tape in, otherwise a new or updated Product Attachment would be required.

Nah, tape-in is after verification and PD sign-off, you can't tape in if you have not verified your interfaces.

Historically tape-in just meant before the GDS generation and tape-out meant handing off GDS to the fab. But considering the likely state of Intel 7nm it could just mean that design has signed off and moved on while the fab figures out how to make the thing.
Intel shifted to "tape in" in their public communications with 10nm / Ice Lake. Given the heavy reliance on multi-patterning and the increasing number of steps between GDS and actual mask data production, the period between design sign-off and having a usable set of masks was probably way longer than for previous nodes. Their general struggles with 10nm viability and concurrent manufacturing capacity constraints would have further exacerbated that.

Intel uses certain engineering milestones for accounting purposes so what they say publicly is often carefully worded as those milestones often have a material financial impact. For instance, costs incurred in the design life cycle up to the point of producing the first full set of masks are recognized as R&D expenses, and thereafter as cost of sales. Thus "tape out" would generally signal that transition occurring within the quarter, whereas "tape in" could indicate a delay of one or more quarters prior to mask production.

From what you are saying, though, it seems pretty clear that "tape in" means that physical design has been completed and the final layout for the entire SoC or tile is ready to be released to manufacturing.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
Intel shifted to "tape in" in their public communications with 10nm / Ice Lake. Given the heavy reliance on multi-patterning and the increasing number of steps between GDS and actual mask data production, the period between design sign-off and having a usable set of masks was probably way longer than for previous nodes. Their general struggles with 10nm viability and concurrent manufacturing capacity constraints would have further exacerbated that.

You are correct. There are dates for “base layer tape-out” followed by each of the metal layers.

Announcing a 7nm “tape-in” says absolutely nothing about 7nm process health which is far more important to Intel’s financial future than the Oregon design team signing off on some IP targeted for 2023 release. In fact, the time gap between this tape-in and release date says far more about Intel’s situation than the actual sign-off.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Yeah, all direct NVMe slots are PCIe 4 on ADL-S/P/M


And on top of that, these client platforms do not (usually) support 4x4 bifurcation of these slots. Otherwise one could get one of those adapters and put proper PCIe 5 drives in the primary x16 slot.
On the other hand, I think even PCIe 3 bandwidth is plenty; I'd take a PCIe 3 Optane 905P any day over those 10 GB/s-read client drives that are looking for a problem to "solve".
 
Reactions: scineram

LightningZ71

Golden Member
Mar 10, 2017
1,661
1,945
136
Given the conductor constraints on PCIe 5.0, I'm scared to see how much even the passive quad x4 M.2 NVMe cards will cost.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Announcing a 7nm “tape-in” says absolutely nothing about 7nm process health which is far more important to Intel’s financial future than the Oregon design team signing off on some IP targeted for 2023 release. In fact, the time gap between this tape-in and release date says far more about Intel’s situation than the actual sign-off.
1.5-2 years to get product into the hands of OEMs and (hopefully) consumers. Intel 7nm is looking just peachy. Have all the really talented engineers and scientists at Intel process R&D taken early retirements??? It's like the B team has taken over. Or, maybe the $8.5B stock buyback was paid for out of R&D's coffers.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,101
136
1.5-2 years to get product into the hands of OEMs and (hopefully) consumers. Intel 7nm is looking just peachy. Have all the really talented engineers and scientists at Intel process R&D taken early retirements??? It's like the B team has taken over. Or, maybe the $8.5B stock buyback was paid for out of R&D's coffers.

1.5 years is a pretty normal gap between first TO and shipping. Or in other words, their schedule doesn't seem to account for significant 7nm issues.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
1.5 years is a pretty normal gap between first TO and shipping. Or in other words, their schedule doesn't seem to account for significant 7nm issues.

1.5 years if it is released on New Year's Day 2023. LOL. Also, 4 to 6 quarters is the normal qualification period at Intel, not the 6 to 8 that you say.

Intel 7nm is looking just peachy. Have all the really talented engineers and scientists at Intel process R&D taken early retirements?

Laid off, fired, quit. Take your pick.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Yeah, all direct NVMe slots are PCIe 4 on ADL-S/P/M
For consumer-grade motherboards, the current ATX layout spec is getting outdated really fast as PCIe data rates (clocks) go up. PCIe connectors for 5.0 and 6.0 will need to be closer to the CPU. Running 4 high-speed lanes down to the chipset (which will need to be closer as well) and then fanning out at a lower speed should be fine for all secondary peripherals. The alternative is adding active circuitry to fix clock skew and other signalling issues, and that gets expensive as well. So I don't see how faster implementations can be done without increasing the number of board layers (for routing and cross-talk issues).
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,101
136
Also, 4 to 6 quarters is the normal qualification period at Intel, not the 6 to 8 that you say.

No, it's not. Five quarters is considered very good in the industry for a complex SoC. Intel's historically been slower.
 