TahoeDust
Senior member
"You get the 1800X to ensure a 4GHz OC, while you get the 1700 to do 24x7 tasks efficiently. For that you don't need 4GHz."

Performance always comes at the expense of efficiency. You couldn't get ANY Ryzen chip to run at 4.7GHz on air with any amount of voltage, let alone complete a successful run of Cinebench R15. So yeah, Intel's 14nm+ process is looking very good. If the saving grace for AMD is to say, "Oh look, our chips are slow and efficient," then so be it. But let's not be fooled: the existence of the 1700X and 1800X, which are clocked near their limits from the factory, tells me AMD is rightly not after "efficiency"; they've been forced into those lowish clocks by their process, their design, or both.
There has to be a compromise when you chase the best performance; in this case, it's efficiency.
Why should "better efficiency" be a forced choice instead of a decision that was set in stone right from the start? No one who cares about efficiency and wants to do video encoding would run the 7900X at 4.7GHz on a 24x7 basis.
A 7900X @ 4.6GHz losing to a 6950X @ 4.4GHz in half the tests is not a good sign for Skylake-X IPC. Skylake-X / Skylake-SP seems to have been designed for server workloads.
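To see why that's damning, normalize by clock: score divided by GHz gives a crude per-clock (IPC proxy) figure, and core counts cancel since both are 10-core chips. A quick sketch in C with made-up scores, just to show the arithmetic:

```c
#include <stdio.h>

/* Rough per-clock comparison: score / GHz as an IPC proxy.
 * Both chips are 10-core, so core count cancels out.
 * The scores below are HYPOTHETICAL placeholders, not measurements. */
int main(void) {
    double score_7900x = 2200.0, ghz_7900x = 4.6;  /* assumed score */
    double score_6950x = 2250.0, ghz_6950x = 4.4;  /* assumed score */

    printf("7900X: %.1f points/GHz\n", score_7900x / ghz_7900x);
    printf("6950X: %.1f points/GHz\n", score_6950x / ghz_6950x);
    /* If the 6950X wins at a LOWER clock, its per-clock throughput
     * (IPC proxy) is higher by an even larger margin. */
    return 0;
}
```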
But wasn't Broadwell-E also designed for server workloads?
"Performance always comes at the expense of efficiency."

Efficiency is the ratio between performance and power usage, and how performance scales with power is a property of the process node. The 14LPP process Ryzen uses has its best ratio up to around 3.3GHz; beyond that the ratio drops, and at around 4.0-4.1GHz it is in free fall.
Do you guys think Intel used the TIM to limit performance? I think they might have used TIM so they can release the same or similar chips later with solder and market them as overclocking friendly and offer more performance simply due to solder. Probably charge more for them as well. Solder is a feature, folks.
Doesn't that apply to the LLC = 10c people as well?
Memory latency took a hit, which IMO is a more likely cause of the gaming issues than the L3.
Skylake-X now sits between Zen and Skylake in terms of memory latency.
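If anyone wants to sanity-check the latency numbers on their own box, the classic trick is a pointer-chasing loop: every load depends on the previous one, so once the working set blows past the caches, the time per iteration approximates memory latency. A rough sketch, not a rigorous benchmark (no huge pages, no core pinning):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pointer-chasing latency sketch: a random cyclic permutation defeats
 * the prefetchers, and each load depends on the previous one. */
#define N (64 * 1024 * 1024 / sizeof(size_t))   /* 64 MiB working set */
#define STEPS (1L << 24)

int main(void) {
    size_t *chain = malloc(N * sizeof *chain);
    if (!chain) return 1;

    /* Build one big random cycle (Sattolo's algorithm; rand() isn't
     * perfectly uniform here, which is fine for this purpose). */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j < i keeps it one cycle */
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long s = 0; s < STEPS; s++) p = chain[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* print p so the loop can't be optimized away */
    printf("~%.1f ns per load (p=%zu)\n", ns / STEPS, p);
    free(chain);
    return 0;
}
```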
"Do you guys think Intel used the TIM to limit performance? […]"

Cost is the primary motivation, and far less sinister as well.
"100C doing just Cinebench at 1.3V? […] de-lidding will be mandatory with these."

Drama... I don't think it will influence decisions.
Haven't you heard? Intel is coming out with their own delid tool to increase OC performance. $99 if purchased separately, but only $79 if you get one bundled with a $100 Intel RAID key.
"I don't think it's sinister. Maybe they figure performance is good enough with the thermal wall they have now. Using solder later will give them a new marketing spin and WAY better temps for overclockers. They did something similar with Devil's Canyon by using at least a better TIM or something, right? So why not do it again? Maybe with solder this time? Maybe they used TIM because the parts were rushed and it simply takes longer to solder the chips? Much faster to slap some grease under the hood and call it 'good enough'.

EDIT: Seriously now, has ANYONE asked Intel WHY they used TIM on their chips? Was cracking dies the reason? Have they ever given an honest answer to the question? Or is all hope lost there?"

The initial research that influenced the decision to scrap solder in favor of polymer TIM was motivated by cost.
"In some of these benchmarks IPC is way below Broadwell […] I would like to hear some statements from Intel."

This would be really weird, considering what we were told about Intel's new mesh interconnect and cache structure:
PcPer said:
Starting with the HEDT and Xeon products released this year, Intel will be using a new on-chip design called a mesh that Intel promises will offer higher bandwidth, lower latency, and improved power efficiency.
PcPer said:
There is a lot to dissect when it comes to this new mesh architecture for Xeon Scalable and Core i9 processors, including its overall effect on the LLC cache performance and how it might affect system memory or PCI Express performance. In theory, the integration of a mesh network-style interface could drastically improve the average latency in all cases and increase maximum memory bandwidth by giving more cores access to the memory bus sooner. But it is also possible this increases maximum latency in some fringe cases.

And this is straight from Intel:
Intel said:
Negligible latency differences in accessing different cache banks allow software to treat the distributed cache banks as one large unified last-level cache. As a result, application developers do not have to worry about variable latency in accessing different cache banks, nor do they need to optimize or recompile code to get a significant performance boost out of their applications.
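The theoretical upside is easy to see from a back-of-the-envelope hop count: on a bidirectional ring, the average distance between stops grows linearly with stop count, while on a 2D mesh with XY routing it grows roughly with the square root. A toy comparison in C (pure geometry; it ignores the real Skylake-SP floorplan, buffering, and the mesh's lower clock):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy hop-count comparison: bidirectional ring vs 2D mesh (XY routing).
 * Purely geometric - real interconnects add queuing, clock domains, etc. */
static double avg_ring_hops(int n) {
    long total = 0, pairs = 0;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            if (a == b) continue;
            int d = abs(a - b);
            total += (d < n - d) ? d : n - d;   /* shorter way around */
            pairs++;
        }
    return (double)total / pairs;
}

static double avg_mesh_hops(int rows, int cols) {
    long total = 0, pairs = 0;
    int n = rows * cols;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            if (a == b) continue;
            /* Manhattan distance between grid positions */
            total += abs(a / cols - b / cols) + abs(a % cols - b % cols);
            pairs++;
        }
    return (double)total / pairs;
}

int main(void) {
    /* Hypothetical stop counts, just to show the scaling trend. */
    printf("ring(18):  %.2f avg hops\n", avg_ring_hops(18));
    printf("mesh(3x6): %.2f avg hops\n", avg_mesh_hops(3, 6));
    printf("ring(28):  %.2f avg hops\n", avg_ring_hops(28));
    printf("mesh(4x7): %.2f avg hops\n", avg_mesh_hops(4, 7));
    return 0;
}
```

Which also hints at why the mesh can still lose at these core counts: fewer average hops don't help if each hop is slower.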
"Why should 'better efficiency' be a forced choice instead of a decision that was set in stone right from the start?"

It's not forced, it's a compromise. AMD's silicon tops out at around 4GHz; Intel's has been shown to do 4.8GHz.
"Efficiency is the ratio between performance and power usage. […] At around 4.0-4.1GHz the ratio is in free fall."

Context. But yeah, all processes have their sweet spots.
GloFo announced 14LPP ("Low Power Plus") as being optimized for 3GHz, while they claim the upcoming 7LP ("Leading Performance") targets 5GHz.
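That "free fall" drops straight out of the usual dynamic-power relation, P ≈ C·V²·f: performance scales with f, but power scales with V²·f, so perf/W is just 1/V², and V has to climb steeply past the sweet spot. A toy model in C (the voltage/frequency pairs are invented for illustration, not measured 14LPP data):

```c
#include <stdio.h>

/* Toy perf/watt model: dynamic power ~ C * V^2 * f.
 * The V(f) points below are INVENTED for illustration; real 14LPP
 * curves differ. Performance is assumed proportional to f. */
int main(void) {
    /* frequency (GHz) and a made-up voltage needed to hold it */
    double f[] = {3.0, 3.3, 3.6, 3.9, 4.1};
    double v[] = {0.90, 0.95, 1.10, 1.30, 1.45};
    int n = sizeof f / sizeof f[0];

    for (int i = 0; i < n; i++) {
        double power = v[i] * v[i] * f[i];   /* arbitrary units, C folded in */
        double perf_per_watt = f[i] / power; /* simplifies to 1 / V^2 */
        printf("%.1f GHz @ %.2f V -> perf/W = %.2f (rel.)\n",
               f[i], v[i], perf_per_watt);
    }
    /* perf/W collapses as soon as V has to climb steeply - the
     * "free fall" above ~4.0 GHz described upthread. */
    return 0;
}
```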
Memory latency could be a result of the changed cache hierarchy as well. If these benchmarks are legit, the new cache structure is worse than the old one: Skylake should hold a constant IPC advantage over Broadwell, even if a tiny one, yet in some of these benchmarks IPC is way below Broadwell, especially in gaming, since games tend to be more cache/latency sensitive. I would like to hear some statements from Intel; I wonder what they have to say about this.
Fortunately, Intel isn't going to use this structure in the mainstream parts (definitely not in Coffee Lake, and Ice Lake doesn't use it either, judging by early Geekbench entries).
"It's not forced, it's a compromise. […]"

It is a choice, because the same basic unit has to go into everything from an ultrabook-class laptop to a 2P server.
100C doing just Cinebench at 1.3V? The clocks are great, but that TIM is devastating and will force de-lidding. The go-to chip will be the 6-core, because few will want to ditch the warranty on a more expensive chip, and de-lidding will be mandatory with these.
"The millions saved on solder sure are showing their worth here."

Easy there. Let's wait for reviews, shall we?
Still, the results overall look weird. I'd give the platform a few weeks of BIOS updates and tweaks and then see what it's capable of. Seems rushed, just like Ryzen at launch.
We can thank AMD for that, for a great 2017 and even more interesting 2018 and beyond. Nice to see these two compete again.
Meh, I am still getting a 7820X and turning off HT to save power and heat. Running it with my Corsair H100i, I hope to get a 4.5GHz OC, and I will be happy. Anything else is gravy.
"Unless I'm missing something, I'm not sure you'll save much on power / heat by turning off HT. No material benefit, at least."

It definitely helps. It'll be the first 'tweak' I apply, just because I won't be doing heavy encoding where every bit of juice helps.
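For what it's worth, on Linux you can check (and, as root, toggle) HT without a BIOS trip and measure wall power both ways. A minimal sketch, assuming a kernel new enough to expose the standard SMT sysfs files (/sys/devices/system/cpu/smt/...):

```c
#include <stdio.h>

/* Read the kernel's SMT state; writing "off"/"on" to the sibling file
 * /sys/devices/system/cpu/smt/control (as root) toggles HT at runtime. */
int main(void) {
    char buf[32];
    FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
    if (!f) { perror("smt/active (kernel too old?)"); return 1; }
    if (fgets(buf, sizeof buf, f))
        printf("SMT active: %s", buf);   /* "1" = HT on, "0" = off */
    fclose(f);
    return 0;
}
```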