There is no reason to do McIVR over FIVR unless you are using multiple dies.
I think there is a reason for doing so.
They were having some problems during the Haswell era which prompted them to abandon it on Skylake. There were also theories that for high-end client it doesn't make sense because it increases thermals. Having it on a separate die would mitigate that, either by not having it at all on desktop or just having more room to dissipate heat.
Who knows? Maybe in practice it'll turn out to be better.
I think if Cannonlake's yields aren't so atrocious that they miss targets, the IVR would improve battery life significantly. When Skylake first came to laptops the gains were erratic; at idle it sometimes used significantly more power than Broadwell. Skylake does have new power management features, but I just felt they mitigated the loss of FIVR.
Considering they credit FIVR with enabling a 50% battery life gain on Haswell laptops, I'm expecting good things (in practice) for Cannonlake/Icelake on the mobile front. Not going to be 50%, but better than average.
The patent I quoted describes what is essentially a stacked die with multiple IVRs integrated into the package substrate. The reasoning they state for using this configuration is that when integrating multiple dies (such as with EMIB or MCM) you can have different optimal voltage regulation parameters required for each die, so a single FIVR won't provide optimal voltage regulation. What benefit would McIVR provide over FIVR on a single die package?
They were having some problems during the Haswell era which prompted them to abandon it on Skylake.
I think more important than testing methodology is that the 3770K and 7700K are different designs, made to function at specific frequency points. What I mean is, they need to test from a stock 3770K to a stock 7700K in 100 MHz increments. That will show how efficiency varies across the range. Testing only at 3.4 GHz is nonsense, as no one keeps the chips at that speed. They need a better way of measuring "efficiency" and power consumption than the one in that article. The best approach is to adopt the metric used to measure battery life on laptops. The way they do it on desktops saves time, but it's not accurate.
Modern CPUs and GPUs often have power management techniques that save power in intermediate states between peak and idle.
Reviewers need to do this in addition to current techniques of measuring instantaneous idle and load power:
Measure total power consumed at the outlet. Using special equipment to measure CPU-only power consumption is unrealistic and useless to consumers. As a buyer you care about power consumption for two reasons: cooling and electricity (hydro) bills. You use a computer, not a CPU or a video card. Measuring total system-level power represents a realistic usage scenario and fully accounts for whatever the CPU manufacturer did to address power as a platform.
You can further separate the power consumption into low load, medium load, and high load. You can even isolate the motherboard by comparing different boards. Ideally a review site would have its own motherboard review section, which could be combined with CPU/GPU reviews. A desktop motherboard can show a 30W difference in power depending on the model: http://www.anandtech.com/show/6989/...aswell-gigabyte-msi-asrock-and-asus-at-200/18
Measuring a CPU or a video card alone is only interesting in theory. As a product, platform power usage has to be considered. If a new CPU is said to use 20W less but the rest of the platform uses 30W more, then total power is 10W higher. The 30W motherboard difference shown above can easily swing one CPU against another on power usage.
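The platform-level accounting above can be sketched in a few lines. The baseline split between CPU and rest-of-system is an illustrative assumption, not a measurement; only the 20W/30W deltas come from the example:

```python
# Sketch: why wall (platform) power matters more than component power.
# Baseline figures are illustrative assumptions; the deltas are from the example above.

old_cpu_w, old_rest_w = 80, 120   # assumed baseline split for the old system (hypothetical)
new_cpu_w  = old_cpu_w - 20       # new CPU is quoted as using 20W less
new_rest_w = old_rest_w + 30      # but the rest of the platform uses 30W more

delta_total = (new_cpu_w + new_rest_w) - (old_cpu_w + old_rest_w)
print(delta_total)  # +10 W at the wall despite the "more efficient" CPU
```

The component-level number (-20W) and the platform-level number (+10W) point in opposite directions, which is exactly why outlet measurement is the realistic metric for a buyer.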
That's what happens when all the tech money is going into cellphones. Review sites can only pretend for so long to care about desktop/HEDT.
Oh, I totally agree, and frankly it's kind of a bummer. FWIW, the restriction of CFL to 300-series appears to be a purely artificial marketing decision (and a nod to the motherboard makers who are probably investing a lot of $$ in new Z370 boards so soon after Z270).
Yet it makes sense. Your cell phone will only last so long due to its battery and lack of software updates. Contrast that with a desktop, which nowadays can easily last you 5+ years, so the market is much, much smaller.
What I mean is, they need to test from a stock 3770K to a stock 7700K in 100 MHz increments. That will show how efficiency varies across the range. Testing only at 3.4 GHz is nonsense, as no one keeps the chips at that speed.
We had just over 100 million PCs shipped in 1999. 2016 saw over 200 million shipped. Yet we arguably had better, more-engaged reviews back in '99 than we do today. The PC market is bigger, and yet . . . ?
Have 24 PCIe lanes been confirmed for Z370/90? If so, then HEDT is dead for 90% of current HEDT customers. I say that because the only reason for an enthusiast gamer like myself to get HEDT is those extra couple of cores for additional headroom and more lanes for SLI, and SLI is now dead. So...
You really like making these extraordinary claims don't you? Either a company is dead, or a product line is dead, or something to get people's attention that has zero facts to back up the claim (your opinion).
Even if the 8700K gets 24 PCIe lanes (which I doubt), how does that possibly impact anyone other than a 7800X owner?
8700K will only have 6 cores.
8700K still does not have AVX-512.
8700K would still have fewer PCIe lanes than all the X299 CPUs (28/44).
8700K will still only have dual-channel RAM. And I believe feeding 6 Skylake cores on dual-channel RAM will be a bigger bottleneck than some may expect.
So how exactly does this kill HEDT for 90% of users? Are you trying to tell me that 90% of the people who bought an X299 system only went with a 7800X? Because an 8700K is not going to beat a 7820X (except in a small number of games), never mind anything in the i9 series.
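The dual- vs. quad-channel point above can be sanity-checked with a rough back-of-the-envelope calculation. DDR4-2400 is an assumption here, and these are theoretical peaks; real sustained bandwidth is lower:

```python
# Rough sketch: theoretical peak memory bandwidth per core, dual vs. quad channel.
# Assumes DDR4-2400 with an 8-byte bus per channel (an assumption, not a benchmark).

def peak_bw_gbs(channels, mt_per_s=2400, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: channels * transfers/s * bytes/transfer."""
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

dual = peak_bw_gbs(2)   # mainstream platform, e.g. 8700K: ~38.4 GB/s
quad = peak_bw_gbs(4)   # HEDT platform, e.g. 7820X: ~76.8 GB/s

# Per-core share if all cores stream at once (6 mainstream vs. 8 HEDT cores)
print(round(dual / 6, 1), round(quad / 8, 1))
```

Whether that per-core share ever bottlenecks depends on the workload; most desktop workloads are nowhere near peak bandwidth, which is consistent with the small gains usually measured from quad channel.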
You seem quite upset. I find this oddly strange, especially considering I have never been wrong around here and people know this. 8700K will WRECK any reason to get HEDT for 90% of people out there. You are among the leet 10% cause 2 more cores for reasons.
Can someone show me the benefits of quad channel over dual? Last I heard it made 0.1% difference. Let alone it bottlenecking the CPU. You guys are way more knowledgeable than I.
There's likely little or no upgrade path from the 8700K, though. The 8700K will make the 7800X irrelevant, and it will eat a little into 7820X sales too. But I don't think the higher core count 7900X, 7920X, 7940X, 7960X, 7980XE will be affected at all. People who do serious work on their PCs and run applications that can use lots of cores (video editing, video encoding, 3D rendering, raytracing, compilation, audio compression/decompression) are still going to buy the high core count CPUs. HEDT never made sense for the mainstream consumer. For those for whom HEDT made sense, the 8700K will not change anything except push them toward the 10-18 core parts. For enthusiasts who run multiple GPUs, lots of hard drives and SSDs, and do serious work and multitasking/megatasking, the HEDT platform will always make sense.
I think they believe mobile has more mindshare than desktops, which wasn't true back in '99. Don't we see reports saying mobile internet usage has surpassed PC? There are perpetual reports about the death of the PC. People still use PCs, but for "boring" reasons, and the attention is on mobile. It then makes sense for a reviewer to focus on mobile testing.
The mindset probably even affects things on a technical level, indirectly, because the top engineers start gravitating toward the companies that make the top mobile products (like engineers moving to Apple).
Most of the PCs sold today are laptops. And within desktops, about half are sold to corporate customers and the other half are sold to consumers. Corporate customers don't give a hoot about enthusiast reviews, and the bulk of avg Joes who buy PCs don't care about enthusiast reviews either.
The enthusiast PC crowd is a very small portion of the overall PC market, but it is vocal and has a very inflated sense of self-importance.
Since the load is sustained, this is the best way to measure intrinsic efficiency. FTR they use the CPU ATX rail and test up to AVX2, which should give SKL an advantage. Here the 7700K has 53% better performance than a 3770K but it also consumes 85% more power, hence the lower efficiency at stock.
Since efficiency decreases with increasing frequency, a fair comparison is to run at the same frequency. So if the 7700K is downclocked accordingly to 3.5 GHz, it will consume (1.85)(3.5/4.2)^2 = 1.28 times more power for (1.53)(3.5/4.2) = 1.275 times the performance, hence these are CPUs with the same intrinsic efficiency.
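The downclocking argument can be written out numerically, using the post's simplified scaling model (power ∝ f², performance ∝ f) and the 85%/53% stock deltas from the cited test:

```python
# Sketch of the frequency-matched comparison from the post:
# scale the 7700K's stock power and performance ratios down to 3770K clocks,
# assuming power ~ f^2 and performance ~ f (the post's simplified model).

f_3770k = 3.5   # GHz, 3770K base clock
f_7700k = 4.2   # GHz, 7700K base clock

power_ratio_stock = 1.85   # 7700K draws 85% more power at stock (per the cited test)
perf_ratio_stock  = 1.53   # and delivers 53% more performance

scale = f_3770k / f_7700k  # downclock factor, 3.5/4.2

power_ratio_matched = power_ratio_stock * scale**2   # ~1.28x power
perf_ratio_matched  = perf_ratio_stock * scale       # ~1.275x performance

print(round(power_ratio_matched, 3), round(perf_ratio_matched, 3))
```

Power and performance ratios come out nearly equal at matched frequency, which is the post's point: the two designs have essentially the same intrinsic efficiency, and the 7700K's worse stock efficiency comes from running further up the frequency/voltage curve.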
For mobile there is of course better power management with recent CPUs, but that's the only progress made. If IVB were granted the same power management as SKL, it would be as efficient without any change to the CPU uarch. At some point one has to wonder whether it wouldn't have been better to simply increase the core count rather than widen the uarches and hugely increase core area.