It will be a success based on how much effort they put into it. Intel's 14nm would be a massive improvement for many designs. The majority of TSMC's revenue comes from 16nm and older processes; I think Intel's soon-to-be-spare 14nm could compete well in that space if they help potential customers get their designs working on the process.
You want a bet or something? Tremont is already at Ivy Bridge levels. Another 30% gets us to Skylake.

Intel's Atom Architecture: The Journey Begins - www.anandtech.com
Do you know what makes that possible? Because ARM cores can do it.
Nope. Let's not even get that far. Their own "little" core team is owning them.
Lakefield had problems, yes. But complaining about rendering performance on a 7 W device is probably the worst possible argument you could make against Lakefield. Honestly, who buys a laptop with low performance and long battery life with the intention of fast image rendering? That would be like a high-end restaurant buying its ingredients from the McDonald's down the street, doing poorly, and then complaining that McDonald's food therefore could not possibly be successful.
See, now there is a great argument about Lakefield. Lakefield was supposed to be used in laptops that can run a full 24-hour day. It can't; it only gets 17 hours max, while competing devices' batteries last significantly longer even while outperforming Lakefield (e.g. under load), which points to a much higher power efficiency.
Lakefield is a terrible comparison point given the node it's on completely blows, amongst other issues with the design.

You are delusional if you honestly believe you can get the same efficiency from a core design using the x86-64 ISA compared to the ARM ISA, everything else being the same. Just look at Lakefield to understand where Tremont stands compared to generic synthesizable ARM cores from even a few years ago.
My expectation is that Gracemont might roughly match Cortex A76 IPC, if lucky - at worse power and size.
Lakefield is a terrible comparison point given the node it's on completely blows, amongst other issues with the design.
Let me just link a research paper on the topic, which you should read through yourself: hpca13-isa-power-struggles.pdf (wisc.edu). The TL;DR is that the ISA does not have a significant effect on efficiency; the uArch is what has by far the largest effect on energy efficiency (when normalising the designs to the same node).
I am almost certain that the best first step for all desktop Alder Lake buyers will be going into the BIOS and disabling the Atom cores. Performance will be way more consistent and not at the mercy of the Windows scheduler.
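For illustration, the same experiment can be done without a reboot on Linux, where cores can be taken offline through sysfs. A minimal sketch, assuming root, kernel CPU-hotplug support, and hypothetical core IDs (on a real hybrid part you would first check which IDs map to the small cores):

```python
# Sketch: take a set of cores offline at runtime via sysfs, the Linux
# analogue of disabling them in the BIOS. Assumes root and CPU-hotplug
# support; the core IDs below are hypothetical.

ATOM_CORES = [8, 9, 10, 11]  # hypothetical small-core IDs

for cpu in ATOM_CORES:
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("0")  # "0" = offline, "1" = back online
    print(f"cpu{cpu} taken offline")
```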
I'm not following? How would reducing the amount of available compute in a CPU result in better performance? It seems as though this would be a massive mistake for Intel if they spent tens of millions of dollars on engineering, design, and production, trying to optimize every square millimeter of die space and then produce a product that would perform better if some of that die area was turned off.
I'm 100% confident Intel can do a working big.LITTLE design. There's no reason they should be limited by Windows (or any OS) in any meaningful sense.

The same as disabling power saving features like downclocking improves performance and performance consistency. If it takes the CPU and OS 10-15 ms to realize it is under load heavy enough to ramp from 800 MHz to 5 GHz, that is billions of clock cycles missed.
Same applies to the OS wrongly scheduling a task on the weaker core(s). It has to move threads from one CPU to another, and that means a different L2 and cache misses. And in the stock configuration, the big core also needs to wake from power-save modes, ramp clocks and so on. Some non-deterministic things can happen too, like a critical GPU driver thread being stuck on a small core and the OS deciding to keep it there because it has a history of being idle. Too bad a GPU-heavy game is running now and your FPS is somehow halved, and you take off to Reddit and the forums to blame AMD.
All that is avoided by not having to choose at all.
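As one concrete illustration of "not having to choose": a latency-critical process can simply be pinned to the big cores so the scheduler can never park its threads on a small one. A minimal Linux-only sketch with hypothetical core IDs:

```python
import os

# Sketch: pin the current process to the big cores (Linux-only API).
# The core IDs are hypothetical; a real tool would discover the
# big/little topology first.

BIG_CORES = {0, 1, 2, 3, 4, 5, 6, 7}  # hypothetical big-core IDs

os.sched_setaffinity(0, BIG_CORES)  # 0 = the calling process
print("allowed CPUs:", sorted(os.sched_getaffinity(0)))
```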
All that is avoided by not having to choose at all.

I posted ten ways that the decision does not need to be made: http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...ure-lakes-rapids-thread.2509080/post-40462511
I could probably list dozens more ways. I don't know if any of those are the route that will be taken, but there are plenty of options available to eliminate any possibility of a penalty.
I'm not following? How would reducing the amount of available compute in a CPU result in better performance?

@JoeRambo is more concerned about performance consistency than he is about the extra MT performance brought by the little cores. His example with frequency and sleep state control shows how the same system can be configured to get the same results in classic throughput benchmarks and yet feel more or less responsive.
Many Intel Skylake DIY systems aren't properly configured for the best blend of responsiveness and efficiency:
- Option 1: stock behavior usually ends up with most sleep states disabled and frequency controlled by the OS (Balanced profile). The problem here is that sleep states save more power than lower clocks, and the OS takes a while to ramp the cores up to the optimal frequency - think tens of milliseconds. Some users learn to move to the more aggressive High Performance profile, which kills idle efficiency instead.
- Option 2: enabling sleep states in the BIOS usually puts the system in a more favorable position, since idle power consumption is way lower... and this also potentially enables SpeedShift - a quicker, hardware-based CPU frequency control mechanism from Intel. Frequency is still variable, but ramping the CPU up is faster by an order of magnitude, to around 1 ms, as the OS no longer (fully) controls P-states.
- Option 3: further customization can be done by keeping sleep states but making sure SpeedShift is disabled and the OS does not scale CPU clocks. This ensures the CPU is as responsive as possible while still enjoying most of the energy-saving benefits.

I went with option 2, while JoeRambo probably went with option 3. Regular users may get served with option 1, meaning less responsiveness and less efficiency combined (but more stability for auto-overclocks by means of enabling MCE). A quick way to inspect these knobs on Linux is sketched below.
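For anyone curious where their own system landed, roughly the same knobs can be inspected from userspace on Linux. A small sketch, assuming the intel_pstate driver; file names vary by kernel, so anything missing is reported rather than crashing:

```python
from pathlib import Path

# Sketch: dump the cpufreq/SpeedShift-related knobs for cpu0.
# Assumes Linux with the intel_pstate driver.

base = Path("/sys/devices/system/cpu/cpu0/cpufreq")
for name in ("scaling_driver",                 # e.g. intel_pstate
             "scaling_governor",               # e.g. powersave / performance
             "energy_performance_preference"): # HWP (SpeedShift) hint, if exposed
    f = base / name
    value = f.read_text().strip() if f.exists() else "(not exposed on this kernel)"
    print(f"{name}: {value}")
```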
It is up to Intel to put up or shut up on this topic when Alder Lake launches. Intel claims: "Alder Lake will involve Intel's next generation hardware scheduler, which we are told will be able to leverage all cores for performance and make it seamless to any software package." However, many people on Anandtech feel that claim is not possible, or at least will not be met.

I understand that frequency and sleep state controls are designed to reduce performance when the system deems it not needed in order to save energy. I'm not sure I totally follow the analogy to the small cores also actually being a performance detriment when active.
Just to add: SpeedShift is adjustable. You can set it to 1 for the fastest response or 255 for the slowest. You can disable SpeedStep but keep SpeedShift for optimal battery life on laptops, and a number around the middle (80-100) gives a balance of performance and power usage.
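On Linux the same SpeedShift (HWP) hint is exposed via sysfs where supported. A hedged sketch of setting it, assuming the intel_pstate driver with HWP active and root privileges:

```python
from pathlib import Path

# Sketch: set the SpeedShift (HWP) energy/performance preference for all
# CPUs via sysfs. Assumes Linux, intel_pstate with HWP, and root.
# Low values prefer performance, high values prefer efficiency.

EPP = "100"  # the middle-of-the-road value suggested above

for f in Path("/sys/devices/system/cpu").glob(
        "cpu[0-9]*/cpufreq/energy_performance_preference"):
    f.write_text(EPP)
    print(f"{f.parent.parent.name}: EPP -> {EPP}")
```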
Your "power efficient" CPU is using 1.5x the entire idle desktop power budget of modern laptops. A single extra watt of power on a mobile CPU can tank battery life by 2-3 hours.P.S. I don't destroy power efficiency just unlike those who disable C1E or deeper package states: my CPU is still quite efficient, despite having static voltage OC:
View attachment 41743
Just my CPU takes hundreds of uSecs instead of a dozen miliseconds to spring to action.
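For anyone wanting to verify their own idle package power, here is a rough sketch using the RAPL energy counter Linux exposes. It assumes the intel-rapl powercap interface (root may be required on recent kernels), and ignores counter wraparound:

```python
import time

# Sketch: estimate average CPU package power from the RAPL energy
# counter on Linux (intel-rapl powercap interface assumed).

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(5)  # sample window
e1, t1 = read_uj(), time.time()

watts = (e1 - e0) / 1e6 / (t1 - t0)  # uJ -> J, then J/s = W
print(f"average package power: {watts:.2f} W")
```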
If it takes the CPU and OS 10-15 ms to realize it is under load heavy enough to ramp from 800 MHz to 5 GHz, that is billions of clock cycles missed.

10-15 ms is 1 frame at 60 FPS. A 10-15 ms delay from 800 MHz to 5 GHz is also millions of clock cycles, not billions.
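A quick sanity check of the arithmetic, taking the worst case of the full 5 GHz across the whole 15 ms window:

```python
# Sanity check: cycles elapsed during a 10-15 ms ramp delay, assuming
# the worst case of the full 5 GHz target clock for the entire window.

freq_hz = 5e9    # 5 GHz
delay_s = 15e-3  # 15 ms

cycles = freq_hz * delay_s
print(f"{cycles/1e6:.0f} million cycles")  # 75 million: millions, not billions
```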
Your "power efficient" CPU is using 1.5x the entire idle desktop power budget of modern laptops. A single extra watt of power on a mobile CPU can tank battery life by 2-3 hours.
By the way, since he has a 10900K, he'll be hard pressed to get it under 7.5W for CPU package power. 7.5W idle and 200W max load is an awesome dynamic range.
Closing messenger programs and SSH tunnels resulted in even better usage: I've seen down to 3.1W actually, so the dynamic range is even more awesome - 3W to 250W under stress tests.
I'm not sure I totally follow the analogy to the small cores also actually being a performance detriment when active.

It's quite possible the small cores will be a performance detriment for some workloads and a performance enhancement for others.
It has been my experience that unless you get really aggressive with all of the power saving settings, most of the time they operate behind the scenes and are not felt by the user.

They are felt by the user; that's why both Intel and AMD invested R&D into making frequency and sleep state transitions faster by migrating the decision process from software to hardware. Humans can sense a 10-30 ms delay, especially as it adds up on top of the actual computation time to get the expected result on the screen, and especially if it is linked to motion in the UI.
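A crude way to actually watch this ramp-up: let the machine idle, then time a fixed busy-loop repeatedly; the first iterations run at idle clocks. A rough, machine-dependent sketch (loop size and timings are illustrative only):

```python
import time

# Crude sketch: observe frequency ramp-up after an idle period. The
# sleep lets the core drop to idle clocks/sleep states; then a fixed
# busy-loop is timed repeatedly, and the earliest iterations should run
# slower until the core reaches full clocks.

def spin() -> int:
    x = 0
    for i in range(200_000):
        x += i
    return x

time.sleep(2)  # let the core go idle

for n in range(10):
    t0 = time.perf_counter()
    spin()
    dt_ms = (time.perf_counter() - t0) * 1e3
    print(f"iteration {n}: {dt_ms:.2f} ms")  # should shrink as clocks ramp
```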