yep if I select high performance it goes to 4.4 and 1.25 volts and that's where it stays until I change the power plan back.

I've seen that not to be the case, for whatever reason, on more than one occasion, that is, with 8-series chipsets. No experience with 9-series yet.
@sjerra
Even if you leave it on the high performance option, the CPU still clocks down at idle. It only switches between the idle clock and the max clock, not in between like balanced mode, so even watching a movie will bump the CPU to max clock.
I'm surprised so many people thought the first generation of DDR4 was going to be worth much; it might in fact be a net negative for performance compared with highly optimised DDR3 systems.
I think a good rule of thumb is to avoid a new memory standard until at least the second family of processors comes around.

Was it any different with DDR->DDR2 or DDR2->DDR3? To my recollection it wasn't. Getting more channels (of any kind) was a bigger bump, IIRC.
I thought the value of DDR4 was supposed to be in getting the same (or slightly better) performance as DDR3 but with considerably lower power consumption, because the voltage has been significantly reduced.
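That lines up with the spec voltages: DDR3 runs at 1.5 V and DDR4 at 1.2 V, and to a first approximation dynamic power scales with the square of the voltage at the same clock. A rough back-of-envelope calc (my approximation, not a measured figure):

```python
# Back-of-envelope: dynamic power scales roughly with V^2 at a fixed clock.
ddr3_v, ddr4_v = 1.5, 1.2          # JEDEC standard rail voltages
relative_power = (ddr4_v / ddr3_v) ** 2
print(f"DDR4 at 1.2 V draws ~{relative_power:.0%} of DDR3's dynamic power, "
      f"i.e. roughly a {1 - relative_power:.0%} reduction")   # ~36% less
```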
With my 5820k, 4.0GHz at 1.1v, using High Performance power mode, the processor multiplier goes between x12 and x40 (and voltage between 0.718 and 1.1 respectively), as required. Playing just a video, as another poster mentioned, doesn't cause it to stay at max, unless it's something demanding, like 4k.
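If anyone wants to watch this for themselves rather than eyeballing a monitoring tool, one quick way is to poll the reported core frequency for a while; the third-party psutil module exposes it (what it reports varies a bit by platform, and the one-second interval and 30-sample run here are just arbitrary choices):

```python
# Minimal sketch: sample the reported CPU frequency once a second for 30 s
# to see whether the chip is parked at max or stepping down at idle.
import time
import psutil  # third-party: pip install psutil

for _ in range(30):
    freq = psutil.cpu_freq()  # reported current/min/max in MHz
    print(f"current: {freq.current:7.1f} MHz  (min {freq.min:.0f}, max {freq.max:.0f})")
    time.sleep(1)
```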
AFAIK, the only difference between High Performance and Balanced with regard to the processor is the "Minimum processor state" setting under "Processor power management". Unless there is something else going on under the hood, that should be it.
Interestingly, I ran into some similar problems after I built my new PC. My previous PC was an i7 950 with Windows 7, and the new one is a 5820k with Windows 8.1. While working on the new PC, I started noticing some odd slowdowns, even in the general OS UI. Things as simple as the window minimize/restore animation started becoming jerky. The severity of the jerkiness varied. At the low end, others I showed it to for verification couldn't see it, though I could. Other times it was enough for them to see it too. When that happened, other activities were visibly affected as well, like opening programs and files, or scrolling webpages. I spent some time messing with the video card settings and drivers before I stumbled upon the minimum state setting. Setting that to 100% fixed it instantly.
On further experimentation, I found that any setting below 100% would cause the problem, including the random severity I mentioned; 1%, 99%, it didn't matter. It also affected the games I was playing, causing visible jerkiness and slowdowns. I don't know why modern, demanding games even let the processor downclock during gameplay.
Since then, I have set my minimum processor state to 100%, and everything has been fine. The voltage and multiplier go between minimum and maximum only, nothing in between, but it works well, and it doesn't stay at max all the time either.
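If anyone wants to script that change rather than dig through the Power Options UI, powercfg can set the same value. A minimal sketch (Windows only, run from an elevated prompt; as far as I know the SCHEME_CURRENT / SUB_PROCESSOR / PROCTHROTTLEMIN aliases map to the active plan's "Minimum processor state", and this only sets the plugged-in/AC value):

```python
# Sketch: set "Minimum processor state" to 100% on the active power plan
# via powercfg (Windows). Use /setdcvalueindex as well if battery matters.
import subprocess

subprocess.run(
    ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
     "SUB_PROCESSOR", "PROCTHROTTLEMIN", "100"],
    check=True,
)
# Re-apply the current scheme so the change takes effect immediately.
subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)
```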
I'll try your solution under Win 8.1 but I'm not sure I want my computer to draw basically twice the power at idle just because it's running Windows 8.1.
Keep in mind that the processor downclocks at idle even with the minimum state at 100%. When idle, or working on some light computation, the processor moves between the minimum (x12 multiplier, 0.718v) and the maximum (whatever your OC is).
The minimum processor state simply sets the base for the speed the processor will run at when computation power is needed. E.g. with 100% as minimum, when opening Word it will clock to x40 (in my case), then back down once the task is complete. If it was set to 50% it would have clocked to x20-25 (not sure if 0 is base or 12 in that calculation). It will not run at max while idle.
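To make that x20-25 guess concrete, here is the same calculation worked out under both interpretations (whether the percentage is applied to the full 0-to-max range, or only to the range above the x12 idle multiplier; I don't know which one Windows actually uses):

```python
# The 50%-minimum example worked out under both interpretations.
max_mult, idle_mult, minimum_state = 40, 12, 0.50

from_zero = max_mult * minimum_state                            # 50% of 0..40  -> x20
from_idle = idle_mult + (max_mult - idle_mult) * minimum_state  # 50% of 12..40 -> x26
print(f"percentage of full range:    x{from_zero:.0f}")
print(f"percentage above idle (x12): x{from_idle:.0f}")
```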
ps. I just checked. High performance option in the power plan pegs my CPU at 4.4GHz.
So you can set the minimum at 100% in Windows 8.1 without worrying that it will consume max power even at idle. Though, given how close Windows 10 is rumored to be, it might make sense to wait. Of course, there is no guarantee that this behavior will not be present in that too.
Hi there,
First post in over 10 years of reading Anandtech. This is how baffled I am with what happened to me.
I have an i5-3570K in a little P8Z77i with 16GB of 1600 CL8 running at 4.6GHz, with an SSD. It's been running stable ever since I bought it in 2013.
I don't game. I run 1D simulation and Solidworks & Ansys FEA (Maxwell 3D). I recently ran out of memory on a job.
So I decided to buy a new rig.
After reading a lot I settled on an X99-Deluxe with an i7-5820K and 8x16 DDR4 2400 CL16, with an SSD.
After some tuning I managed to run it stable at 4.4GHz. I could probably go a little higher, but I need a reliable 24x7 rig.
Anyway, when I ran the simulation I wasn't impressed at all. OK, I didn't have memory issues, but it didn't feel as fast as my old rig. So I did a test: both PCs side by side, same model, press the start button at the same time... and yes, my i5-3570K is faster (though after an hour it throws a memory error).
How much faster I can't say, as I didn't time it, but it was enough for me to feel it from the start, so I estimate something like 7-10%.
That's an Ivy Bridge @ 4.6GHz vs a Haswell-E @ 4.4GHz.
So I tuned the Ivy Bridge down to 4.4GHz and it is still faster. Now the margin has gone down to something I'm willing to accept (2-3%) considering the memory availability, but this is very weird. I'm not expecting miracles considering this is a tock generation, but at the same clock this rig should have beaten my previous-gen i5.
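For a firmer number than eyeballing two machines started by hand, a simple wall-clock wrapper around the solver launch on each box would do; a minimal sketch, where run_simulation.bat is just a placeholder for however the job actually gets launched:

```python
# Rough wall-clock timing of one solver run; run the same script on both
# machines with the same model and compare the printed times.
import subprocess
import time

t0 = time.perf_counter()
subprocess.run(["cmd", "/c", "run_simulation.bat"], check=True)  # placeholder launcher
elapsed = time.perf_counter() - t0
print(f"elapsed: {elapsed:.1f} s")
```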
All my BIOS settings are on auto, except the memory, which is set to XMP (so 2400 CL16), and the processor clock speed. The voltage profile is set in AI Suite 3.
Is there anybody who can confirm or deny this type of behaviour, especially considering one rig has 1600 CL8 and the other one "only" 2400 CL16? I know from the many discussions out there that memory speed seldom makes a measurable difference, but might this be the use case where it does?
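A quick back-of-envelope comparison of the absolute CAS latencies (CL cycles divided by the memory clock, which is half the transfer rate) shows the old DDR3-1600 CL8 kit actually has lower first-word latency than the DDR4-2400 CL16 kit, even though the new kit has more bandwidth, so latency-sensitive work could plausibly notice:

```python
# CAS latency in nanoseconds = CL cycles / memory clock (MHz), where the
# memory clock is half the DDR transfer rate (MT/s).
def cas_latency_ns(transfer_rate_mts, cl):
    memory_clock_mhz = transfer_rate_mts / 2
    return cl / memory_clock_mhz * 1000

print(f"DDR3-1600 CL8 : {cas_latency_ns(1600, 8):.1f} ns")   # 10.0 ns
print(f"DDR4-2400 CL16: {cas_latency_ns(2400, 16):.1f} ns")  # ~13.3 ns
```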
Are there any tips on why I might not get the performance I should expect?
Thanks
sjerra