Originally posted by: rck01
Typical. Your reaction to a poor showing by your CPU of preference is to dismiss the test as being irrelevant. A bit pathological, don't you think?
Ri-i-ight. Okay, I'm a little curious about the test-suite product you used in the test, *HTP* Analyzer (i.e. Hyperthreading Analyzer). According to its product write-up at
http://analyzer.csaresearch.com/, it runs a test script that is "a classical linear benchmark", like ZD, Sysmark, Bapco, etc., and to load the system down it uses a "concurrent workload simulator": Database, Workflow, and Multimedia, as you call them. It seems to me that these are synthetic benchmarks; they aren't real database, workflow, or multimedia workloads, but simulations of them. The only real application benchmark running is the single-threaded test script, while the other threads run synthetic benchmarks in the background to slow the system down. These simultaneous workloads are *pretending* to be database, workflow, or multimedia workloads.
Since this is a Hyperthreading analyzer, it stands to reason that the program knows how to detect and exploit Hyperthreading priorities. It also stands to reason that a processor without Hyperthreading would be at a disadvantage.
A Hyperthreading processor would run two threads: a high-priority thread A and a low-priority thread B, where A always wins out over B. Put the real-world application test script on the high-priority thread A and the synthetic workloads on the low-priority thread B, and you can easily starve the synthetic workload simulations of any real CPU time. When push comes to shove, thread A is always king.
A non-Hyperthreading processor, on the other hand, would just use standard Windows task-switching priorities and give equal time to all threads, agnostic about whether they are real-world or synthetic.
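To make the starvation scenario concrete, here's a toy scheduling sketch (my own simulation in Python, purely illustrative; I have no idea what HTP Analyzer actually does internally). It compares strict-priority scheduling, where the foreground script always preempts the background simulators, against equal round-robin time-slicing:

```python
def run_strict_priority(ticks, foreground_work):
    """Strict priority: the foreground (high-priority) thread runs
    whenever it still has work; the background thread only gets the
    CPU once the foreground is completely finished."""
    fg_done = bg_done = 0
    for _ in range(ticks):
        if fg_done < foreground_work:
            fg_done += 1   # thread A is always king
        else:
            bg_done += 1   # thread B gets the leftovers
    return fg_done, bg_done


def run_round_robin(ticks, foreground_work):
    """Equal time-slices: the scheduler alternates between threads,
    agnostic about which one is the 'real' workload."""
    fg_done = bg_done = 0
    for tick in range(ticks):
        if tick % 2 == 0 and fg_done < foreground_work:
            fg_done += 1
        else:
            bg_done += 1
    return fg_done, bg_done


if __name__ == "__main__":
    # 100 ticks of CPU time; the foreground test script needs 60 ticks.
    print(run_strict_priority(100, 60))  # -> (60, 40): script finishes fast
    print(run_round_robin(100, 60))      # -> (50, 50): script still unfinished
```

Under strict priority the foreground script completes all 60 units of work, while under equal time-slicing it only gets through 50, so the measured "foreground" time looks much better on the prioritized setup even though total work done is identical.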
Let me ask you something: does this HTP Analyzer measure how much time it took the synthetic workloads to execute in the background, or does it only measure the foreground test-script times?
Unfortunately, in my position I don't have the luxury of becoming emotionally attached to products.
Interesting that you mentioned the word "luxury".