Do you have HPET enabled or disabled?

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
So you're saying HPET is separate from Invariant TSC in terms of functionality, but everything I've managed to find seems to say that Invariant TSC is a replacement for HPET.

You can't replace a timer event source with a time source. Your Windows/Linux system still needs a precise and accurate timer. If you are not using HPET, the LAPIC timer will be used; if that is not reliable (for example, on some older CPUs the LAPIC is disabled once in sleep and can't trigger anything), the PIT will be used. End of story. The invariant TSC is just a 64-bit number that is incremented each clock tick regardless of power-saving mode or the actual clock of the CPU.

HPET came out almost 10 years ago, so it's not exactly new technology. And as far as I'm concerned, the stability and power efficiency of my rig haven't been compromised by turning HPET off, as it was never in use to begin with.

Not sure if anyone claimed that stability or power efficiency would be affected after disabling HPET; it was more a matter of patiently explaining that HPET is a clock event source, which the invariant TSC isn't. It is not being used by default, and therefore it is stupid to disable it?

Yes, HPET can cause plenty of problems. When I forced Windows to use it, it resulted in tons of lag and stuttering in games.

I'm trying to figure this one out myself. Windows definitely isn't using it whether it's enabled or disabled in the BIOS.

Forcing its usage is a bit of a leap of faith, isn't it? But advocating disabling it in the BIOS is too much as well.


The LAPIC timer is even older than HPET.

...

Like I mentioned earlier, the technology is nearly 10 years old at this point. Better things have been created since then.

It is ironic, but the LAPIC timer is exactly what is used in modern systems. You can check on Linux: look at /proc/interrupts and you will see a truckload of "Local timer interrupts" going on; that is what your LAPIC timer is generating. No doubt Windows is using it too, because the PIT was very limited in efficiency, and Linux's 1000 Hz tick mode was approaching the limit of what was possible with the PIT.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I didn't see it in my mobo config, but I don't experience any problems that used to be symptomatic of it being enabled and used, either, on my Core 2 rig.

Forcing its usage is a bit of a leap of faith, isn't it? But advocating disabling it in the BIOS is too much as well.
Not really. Windows will use it if it's there, and it can cause problems. It would find it and use it on my GA-P35-DS3R, and cause exactly the problems described: in-game stuttering and audio skipping.

One thing I don't get with HPET, though, is this: why not create such a timer as a new PIC, with either its own clock source or hardware-guaranteed synchronization with some other hardware clock source? Even better would be to have the CPU synced to it, so a program/driver could read the current time and compare it to the event time as well.
 
Last edited:

SPBHM

Diamond Member
Sep 12, 2012
5,058
410
126
Just enabled it on my old Athlon 64 X2; it was turned off in the BIOS. WinTimerTester now reports 25 MHz (it was 3-point-something). It's hard to tell if there is any real change, but the Task Manager graph certainly looks different.

With the other PC (the BIOS had support and it's always enabled, but Windows was not using it), with it enabled now I may have noticed less stuttering playing Saints Row 4, but it's really hard to tell without using something like FCAT, I guess.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
I went to turn this off in the BIOS, but it seems that with the last BIOS update for my 780 SLI board there is no longer a user option; it now shows up only in Device Manager. I love Asus, but their lack of BIOS change logs and their BIOS menus are poor, at least in English text. A dog could do better programming, IMO, for ROG boards.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
TSC is just a 64-bit number that is incremented each clock tick regardless of power-saving mode or the actual clock of the CPU.

You're contradicting what was said in the source I quoted earlier on the first page:

Time services on Windows have undergone changes with every new version of Windows. Considerable changes are to be reported beyond Vista and Server 2008. The parallel progress of hardware and software development requires the software to stay compatible with a whole variety of hardware platforms. On the other hand, new hardware enables the software to achieve better performance. Today's hardware provides the High Precision Event Timer (HPET) and an invariant Time Stamp Counter (TSC). The variety of timers is described in "Guidelines For Providing Multimedia Timer Support". The "IA-PC HPET Specification" is now more than 10 years old and some of its goals have not yet been reached (e.g. aperiodic interrupts). While QueryPerformanceCounter benefited from using the HPET/TSC when compared to the ACPI PM timer, these days the HPET is outdated by the invariant TSC for many applications. However, the typical HPET signature (TimeIncrement of the function GetSystemTimeAdjustment() and MinimumResolution of the function NtQueryTimerResolution() are 156001) disappeared with Windows 8.1. Windows 8.1 goes back to the roots; it goes back to 156250. The TSC frequency is calibrated against HPET periods to finally get proper timekeeping.
An existing invariant TSC influences the behavior of GetSystemTimeAsFileTime() noticeably. The influence on the functions QueryPerformanceCounter() and QueryPerformanceFrequency() is described in sections 2.4.3 and 2.4.4. Windows 8 introduces the function GetSystemTimePreciseAsFileTime() "with the highest possible level of precision (<1us)". This seems to be the counterpart to the Linux gettimeofday() function.
Invariant TSC is a full-blown timer. If it wasn't, then HPET would be used by the OS and I would suffer severe consequences from disabling it in the BIOS.

It also seems to be OS dependent. Windows 8.1, which is what I have, seems to have deprecated HPET functionality. So HPET is going the way of the dodo.

Not sure if anyone claimed that stability or power efficiency would be affected after disabling HPET; it was more a matter of patiently explaining that HPET is a clock event source, which the invariant TSC isn't. It is not being used by default, and therefore it is stupid to disable it?
Why is it stupid if there are no consequences? I haven't noticed ANYTHING out of the ordinary since turning it off. My music and videos play just fine as before, but my games are even smoother and my overall system responsiveness seems to have improved.

Forcing its usage is a bit of a leap of faith, isn't it? But advocating disabling it in the BIOS is too much as well.
That was for testing purposes when I forced its use, and I found out that HPET is completely inferior to TSC. And disabling it would be bad too if there were consequences associated with it, but so far I haven't noticed any.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
One thing I don't get with HPET, though, is this: why not create such a timer as a new PIC, with either its own clock source or hardware-guaranteed synchronization with some other hardware clock source? Even better would be to have the CPU synced to it, so a program/driver could read the current time and compare it to the event time as well.

Isn't that what invariant TSC does?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I went to turn this off in the BIOS, but it seems that with the last BIOS update for my 780 SLI board there is no longer a user option; it now shows up only in Device Manager. I love Asus, but their lack of BIOS change logs and their BIOS menus are poor, at least in English text. A dog could do better programming, IMO, for ROG boards.

What board was it? I would expect an ROG board to have that option.

But it seems the X-series chipsets are now considered enthusiast-level hardware, and the Z and H series are for mainstream consumers. Maybe that's why they removed the option.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Isn't that what invariant TSC does?
AFAIK, no, but it should allow for something similar (I meant actually integrating all of that except the clock itself into the PIC). If the TSC were architecturally required to be part of the CPU, with a single clock source available to all cores, always counting at the same rate, then yes, the TSC would do it. However, the TSC offers no such guarantee, so even if most CPUs do it that way now, it's a huge compatibility concern to assume that, which could result in data corruption, lockups, BSODs, kernel panics, etc.

HPET is newer than TSC. I just mean that HPET's implementation is so basic that it requires far too much software complication to use well, which seems rather silly in the face of Moore's Law, especially by the '00s, when regular CPUs already had hundreds of millions of transistors.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
AFAIK, no, but it should allow for something similar (I meant actually integrating all of that except the clock itself into the PIC). If the TSC were architecturally required to be part of the CPU, with a single clock source available to all cores, always counting at the same rate, then yes, the TSC would do it. However, the TSC offers no such guarantee, so even if most CPUs do it that way now, it's a huge compatibility concern to assume that, which could result in data corruption, lockups, BSODs, kernel panics, etc.

According to this thread over at the Intel Developer Zone, it does appear that the invariant TSC has the capability to synchronize across all cores.

HPET is newer than TSC. I just mean that HPET's implementation is so basic that it requires far too much software complication to use well, which seems rather silly in the face of Moore's Law, especially by the '00s, when regular CPUs already had hundreds of millions of transistors.
HPET is newer than TSC, but not newer than the invariant TSC. The invariant TSC is only available in modern CPUs, starting with Nehalem on Intel's side and most likely Bulldozer on AMD's.
 

mindbomb

Senior member
May 30, 2013
363
0
0
I had HPET enabled, but I think I'm going to disable it now just to play it safe. It was disabled by default.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Presumably, if this is impacting the CPU aspect of gaming, then it will show up in Fraps frame times. You don't need FCAT to see the impact of stutter caused by the CPU; FCAT specifically captures stutter introduced by the GPU, and in this case looking at the game's interaction with the API should be sufficient to scientifically show the difference between the two. Sorted-magnitude graphs of the frame times, overlaid on each other, should be able to show the difference.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Invariant TSC is a full blown timer. If it wasn't, then HPET would be used by the OS and I would suffer severe consequences by disabling it in the BIOS.

It also seems to be OS dependent. Windows 8.1 which is what I have, seems to have deprecated HPET functionality. So HPET is going the way of the dodo.


Are you just repeating "invariant TSC" all over like it is some magic word, without actually knowing what it is? I first read the TSC value in 2003, on a P4, using ASM in a network timing project I was working on (and ironically, the same code had problems several years later on a Pentium M Dothan mobile CPU, because the TSC was stopped in low-power CPU modes; it was not "invariant").

Ironically, it is stated in the source you quoted that:

"Windows 8.1 goes back to the roots; it goes back to 156250. The TSC frequency is calibrated against HPET periods to finally get proper timekeeping."

It means that the TSC calibration (basically, how many ticks of the CPU's "invariant" TSC clock correspond to how many time units in nanoseconds) is done using the known period of HPET (since it is clocked at 14 MHz+, the resolution is obviously good enough). So Windows 8.1 is actually making use of HPET for initial calibration, and if HPET is not available (e.g. disabled in the BIOS?), it will fall back to a less accurate source of timing (LAPIC or PIT).


Anyway, this discussion is pointless. HPET is not being used by default as the timer event source, so it is pointless to disable it in the BIOS. It is risky to force-enable it in Windows because of incompatibility with some motherboards, but you could try doing so if your CPU is older. Those are known facts...

Snake oil starts where the recommendation to disable it in the BIOS begins; clearly that recommendation carries a burden of proof, requiring quantitative results on more than one system before it has any weight.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Are you just repeating "invariant TSC" all over like it is some magic word, without actually knowing what it is? I first read the TSC value in 2003, on a P4, using ASM in a network timing project I was working on (and ironically, the same code had problems several years later on a Pentium M Dothan mobile CPU, because the TSC was stopped in low-power CPU modes; it was not "invariant").

Invariant TSC is only in modern processors like the Intel Core i5/i7 series, Bulldozer and Piledriver. The TSC you're talking about is a much older form.

It means that the TSC calibration (basically, how many ticks of the CPU's "invariant" TSC clock correspond to how many time units in nanoseconds) is done using the known period of HPET (since it is clocked at 14 MHz+, the resolution is obviously good enough). So Windows 8.1 is actually making use of HPET for initial calibration, and if HPET is not available (e.g. disabled in the BIOS?), it will fall back to a less accurate source of timing (LAPIC or PIT).
OK, I concede your point. When I first read that, I wondered whether it could mean that HPET would still be necessary, even though it wasn't being used as the primary timer.

Anyway, this discussion is pointless. HPET is not being used by default as the timer event source, so it is pointless to disable it in the BIOS. It is risky to force-enable it in Windows because of incompatibility with some motherboards, but you could try doing so if your CPU is older. Those are known facts...
Yeah, I'm going to have to agree with you. After thinking about it, it may have been another setting that caused my performance to increase. I changed several settings that day, mostly voltages, so who knows. I'm going to have to do a full backtrack when I have the time.

One thing, too: I found a possible consequence of turning HPET off in the BIOS. Playing Batman: Arkham Origins, I noticed some audio skipping at times and no sound in the fast-travel cut scenes. I've never seen these problems before, so I think they may be related to turning HPET off.

Anyway, after turning HPET back on, I haven't noticed any performance degradation.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Invariant TSC is only in modern processors like the Intel Core i5/i7 series, Bulldozer and Piledriver. The TSC you're talking about is a much older form.


Yeah, but it was still cool to have clock-tick-style timing on the P4 back then! Much better than the standard Windows 2000-era stuff with 15 ms resolution, I think (I remember some stuff about multimedia "timers" that were insanely unworkable and seemed to have zero reliability; TSC was a major step forward).

I think Nehalem was the first CPU with an invariant TSC (one that ran at the same clock on all cores and sockets and had the same value everywhere). And that's why I called it a marvel for its day; honestly, I have no idea how they do this magic of synchronization. So timekeeping/reading issues were solved, but timer problems still remain to this day. There is a mess of HPET/LAPIC/PIT timers available, each with its own quirks, etc.

EDIT: BTW, I've read somewhere that LAPIC timers now have an operational mode where you can set them to fire at a certain TSC value. That is about the best possible arrangement when combined with the invariant TSC; I really hope future OSes will start using this and solve timer-related problems for good.
 
Last edited:

Pilum

Member
Aug 27, 2012
182
3
81
Yeah, but it was still cool to have clock-tick-style timing on the P4 back then! Much better than the standard Windows 2000-era stuff with 15 ms resolution, I think (I remember some stuff about multimedia "timers" that were insanely unworkable and seemed to have zero reliability; TSC was a major step forward).
The 15 ms (actually 15.625 ms) resolution is not really a problem of the underlying timer hardware, but of the Windows kernel. At least up to Win7, the "master tick" of the kernel runs at this speed of 1/64 s, and many of the timer functions in Windows run off this master clock. You can check the resolution of different methods by downloading the binaries of this CodeProject example. While Timers.Timer and Thread.Sleep deliver 1 ms precision on my Win7 SP1 system, you can't really rely on that; older systems may well deliver only 1/64 s resolution, as do the Form.Timer and Thread.Timer tests on my system.

The only way to make sure you get high-precision timing is using the multimedia timer, which tends to work pretty reliably on modern systems from what I've read from programmers. The downside is that the MM timers have a rather high overhead and you really can't use them everywhere, as this would generate too high a system load on weaker CPUs. I'm pretty sure that the MM timers rely on the existence of HPET and/or the LAPIC to work reliably. And I readily believe that early implementations of the hardware were broken and terrible, but that really goes for most of the innovations in the PC space. That doesn't mean the technology isn't usable today, after a decade of working out the bugs.

And the TSC wasn't a step forward in timers, because it isn't one - it's a Time Stamp Counter. It's just a monotonically increasing clock-counter which you can read to see how much time has elapsed. It lacks the ability to generate interrupts on certain events (usually underflow/overflow of the timer value register), and that's pretty much the basic definition of a timer in computer hardware: it not only knows how much time has elapsed, it can also tell you that it's time to wake up and do something. This is something the TSC can't do.

I think Nehalem was the first CPU with an invariant TSC (one that ran at the same clock on all cores and sockets and had the same value everywhere). And that's why I called it a marvel for its day; honestly, I have no idea how they do this magic of synchronization.
They probably have the TSC in a part of the uncore that never goes to sleep, and the core accesses that value when an RDTSC instruction is issued. Considering that RDTSC has a latency of 20-30 cycles on Nehalem+, there should be enough time for an uncore access.

Note that the TSCs on NUMA systems may not necessarily run synchronized; Intel provides a user-programmable TSC_AUX register for use with the RDTSCP instruction, so that software can keep track of a thread switching cores/sockets during execution and handle the non-uniform TSC readings accordingly.

So timekeeping/reading issues were solved, but timer problems still remain to this day. There is a mess of HPET/LAPIC/PIT timers available, each with its own quirks, etc.

Well, it's not really problematic; you just need to know what you're doing, and which kinds of Windows services you can rely on and which not. The bad documentation is a problem; the software/hardware really is not. Okay, you may have buggy HPET implementations on certain boards, but that's not a general problem, just a problem of bad hardware. And that can affect every single part of a system; it's not limited to timers.

EDIT: BTW, I've read somewhere that LAPIC timers now have an operational mode where you can set them to fire at a certain TSC value. That is about the best possible arrangement when combined with the invariant TSC; I really hope future OSes will start using this and solve timer-related problems for good.
Well, what kind of new capability will this give desktop systems? I can see a use for it in embedded systems, where you might want to sleep for extended periods of time and make use of the TSC's 64-bit resolution. But the APIC timer already has a one-shot mode on its regular 32-bit register, which is enough to wait 42.95 seconds at a bus clock of 100 MHz even when the Divide Configuration Register is set to divide by 1; you can set the DCR to a divider of 128, which gives a maximum time span of 91.6 minutes (if I interpret Intel's docs correctly).

It's possible that the resolution might increase, but I think the 10 ns resolution of the APIC timer (assuming a 100 MHz bus clock) should be enough for everything you want to do on a desktop or server system. Again, maybe useful for embedded, but not relevant for the kind of systems we deal with. And it won't increase timing precision, because (on Intel systems) both the TSC and the APIC timer seem to run off the bus clock, which means they both ultimately depend on the bus clock's precision.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Pilum, I wonder why you are quoting my text and restating what I have been patiently explaining in this thread to the OP? I already know most of this stuff, because I have been hunting for timer precision since 2003...

But thanks for very detailed post, great effort!

Note that the TSCs on NUMA systems may not necessarily run synchronized; Intel provides a user-programmable TSC_AUX register for use with the RDTSCP instruction, so that software can keep track of a thread switching cores/sockets during execution and handle the non-uniform TSC readings accordingly.

This is kind of interesting. I have always been under the impression that a proper multi-socket system sends reset to each socket simultaneously, and they then all start the TSC from zero. So as long as the OS does not mess it up by setting it, you are fine? I have checked several dual-socket Nehalem+ Linux systems we have, and they are all using the TSC as the clock source and the PIT as the TSC calibration timer.
 

GAO

Member
Dec 10, 2009
96
1
71
It could be that the processor is no longer allowed to sleep (to conserve power), and you are seeing the effects of Intel's CPU power-management (or rather the lack thereof) on I/O performance.

Or, it could be placebo.

I have my minimum processor state set to 2.2 GHz (i5-2500K OC'd to 4.5 GHz). With C-states disabled, I see it bottom out at 2.2 GHz in CPU-Z. With C-states enabled, I see it drop down to 1.6 GHz (as it enters C-states).

With HPET disabled I still see it drop down to 1.6 GHz, so I conclude that the C-states are working with HPET disabled in Windows.
 

crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
I don't know if this has been mentioned, but the glaring problem with HPET as I understand it is that comparator values are matched on an equality basis only, not "greater than or equal to," such that if an NMI comes around just as the values become equal, HPET misses it. I don't know for sure, but this seems to be an error in the hardware implementation, and it may cause excess problems on systems where NMIs occur often.
 

mrpiggy

Member
Apr 19, 2012
196
12
81
Learning here that HPET was not actually enabled in the OS, I thought I would try it.

I explicitly enabled HPET for testing yesterday/today on my work computer. With HPET enabled via the bcdedit command line, disk benchmarks dropped about 3-12% depending on the disk benchmark or test (AS SSD, CDM, ATTO, PCMark disk suite). PCMark Vantage x64 PCMarks dropped about 2.5% with HPET.

My work desktop is a 4770K (running at stock speed) with an AMD 6970 video card and a C400 256GB boot drive running Win8.1 x64. All tests, however, were run on my secondary P320h PCIe SSD. (Yes, I have very nice work machines.)

One possibly coincidental issue I had was with Remote Desktop to my other 3960X Win7 x64 work machine. I generally keep Remote Desktop running 24/7, and I started having keyboard/mouse/network syncing issues after I changed the HPET setting on both machines. I have never had this kind of issue before, so I assume it was the HPET change, as once I disabled HPET it all went away. Could be pure coincidence, though.

Basically, enabling HPET seemed to have no benefit on my work PCs and was actually detrimental in actual tests. I have not run every test out there, as I don't have the time; just the few I already had on my machines.
 

Lorne

Senior member
Feb 5, 2001
874
1
76
Disabled it at work, and the only change is Core Temp showing a drop of 10 C, which makes no sense.
Disabled it at home, and now I get a split-second freeze in online games, but nothing otherwise.
 