I just ran a short test of OCCT and the temps do peak at 85C, but not on all cores and only briefly. Most stay below 80C. In LinX my CPU temperature reaches 95C and stays there. But I'll do a proper run later.
This same kind of discussion is going on at this very moment in another thread or on another forum. Just today I read a post by IDontCare on these issues, probably made within the last 12 hours or so. It addresses stability testing and temperatures, and I pretty much agree with him.
But if the author of OCCT is providing accurate information, I'm willing to assume that we don't need to push the thermals to their very highest to discover whether or not an OC is stable. I'm even willing to consider that tests forcing the CPU to throttle aren't likely to damage anything, but they do complicate the process and the issue. Here's why I tentatively say that.
These new 22nm processors, with their IHS fabrication and TIM, the new instruction-set additions, and the TDPs of chips like the E series, almost seem designed by Intel with higher thermals in mind. Even the 4790K, with its 88W TDP if I'm not mistaken, gets very hot.
I'm a statistician (or was), but the freshman/sophomore physics course I took covering electrical concepts was back in 1966/67. Not only do I "not remember," it was rudimentary to begin with. IDC, on the other hand, knows his S***.
So unless someone disagrees and proves otherwise, here's what I understand.
There are calibration errors for the core sensors of these processors. Maybe four years ago, they were estimated at +/- 6C. You can use RealTemp (or CoreTemp?) to calibrate them yourself within the software, but I don't think it's really recommended.
Testing offers simultaneous "small sample" and "large sample" observations: with a quad-core or hexa-core, you have a small sample of "cores," and with HWMonitor, Aida64, CoreTemp/RealTemp and even ASUS AI Suite "Sensor Recorder," you can build a file of observations taken every second -- whatever interval you choose to set.
So for the sensors themselves, you can take an average (central tendency) and range (variation), even from the "maximum values" that get snagged in the recording. You can also take averages by core over an hour's stress-testing, and you can average those averages across the cores.
So with the cores, which are likely all "out of whack" to some degree -- some below and some above -- the average is the best measure, whether of maximums or "average-of-averages."
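To make that concrete, here's a rough sketch of the arithmetic I mean, assuming your monitoring tool can export its log as a CSV with one column per core. The file name and column layout are my own placeholders, not any particular program's format:

```python
# Sketch: summarize a per-core temperature log exported as CSV,
# one row per sample, one column per core. Adjust the path and
# column names to whatever your monitoring tool actually writes.
import csv
from statistics import mean

def summarize(path="temp_log.csv"):
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        cols = reader.fieldnames              # e.g. ["Core 0", "Core 1", ...]
        samples = {c: [] for c in cols}
        for row in reader:
            for c in cols:
                samples[c].append(float(row[c]))

    core_avgs = []
    for c in cols:
        vals = samples[c]
        avg, lo, hi = mean(vals), min(vals), max(vals)
        core_avgs.append(avg)
        print(f"{c}: avg {avg:.1f}C  max {hi:.1f}C  range {hi - lo:.1f}C")

    # the average of the per-core averages: the single number I'd report
    print(f"average of averages: {mean(core_avgs):.1f}C")

if __name__ == "__main__":
    summarize()
```

The per-core averages and ranges show how far apart the sensors sit; the average-of-averages is the central-tendency figure I'd actually compare between runs.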
Suppose the highest core reading rivets your attention. It is much less likely that all the core calibration errors fall in one direction; more likely, they are distributed above and below.
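If you want to see why I lean on the average rather than the hottest core, here's a toy simulation. It's purely illustrative, and it assumes the calibration errors are independent and roughly symmetric around zero, which is the crux of the argument:

```python
# Toy illustration, not a measurement: give four cores random calibration
# offsets of up to +/-6C around a "true" reading of 80C, then compare the
# hottest reported core against the across-core average.
import random

random.seed(1)
true_temp = 80.0
trials = 10_000
hot_err = avg_err = 0.0

for _ in range(trials):
    readings = [true_temp + random.uniform(-6, 6) for _ in range(4)]
    hot_err += abs(max(readings) - true_temp)
    avg_err += abs(sum(readings) / len(readings) - true_temp)

print(f"hottest core off by about {hot_err / trials:.1f}C on average")
print(f"across-core average off by about {avg_err / trials:.1f}C on average")
```

In a run like that, the hottest core typically sits several degrees away from the "true" temperature while the across-core average lands much closer, which is why I don't let one high core dominate the picture.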
A persistent problem of information-sharing on these forums, especially in threads like the one you linked (for which I'm very grateful), is that folks apply their own choices about how these things are read and measured. Another example of that is what is meant by "load voltage."
Since we've come a long way in being able to OC with EIST and C1E enabled, I've come to realize that there are two voltage values you may see during a stress-test besides the EIST minimum. There is a drooped load voltage (probably the one that counts the most), and an unloaded "turbo" voltage, captured by monitoring software momentarily after a stress-test ends and before the processor settles down to EIST speed and volts. So it is often uncertain what someone is reporting, and that would be true even with EIST/C1E disabled to give a fixed voltage setting. LLC settings also affect this, so it is possible to see something like "negative droop." Worse, with high LLC it's possible for VCORE to exceed VID at some point. I always make sure my LLC setting gives me 20mV of droop.
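For what it's worth, here's the simple arithmetic I have in mind when I talk about droop; the voltages below are made-up numbers, not readings from any particular board:

```python
# Droop as I use the term: the set/VID voltage minus the voltage actually
# measured under load. A positive result is normal droop; a negative result
# means the LLC setting is pushing the loaded VCORE above the set point.
def droop_mv(set_voltage, loaded_vcore):
    return (set_voltage - loaded_vcore) * 1000

print(f"{droop_mv(1.250, 1.230):+.0f} mV")   # +20 mV: the margin I aim for
print(f"{droop_mv(1.250, 1.262):+.0f} mV")   # -12 mV: "negative droop" from high LLC
```

That sign flip in the second case is exactly the VCORE-above-VID situation I try to avoid.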
But what I'm trying to say with this explosive diarrhea of words is that even if we agreed on some standard of reporting and recording, a lot of folks might not follow it to the letter. We're always going to run into these confusions and ambiguities.