Why didn't Intel use HT on the QX series?

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
The Pentium 4 had HT, but the QX6850 and QX9770/5 didn't have it. None of the dual-core chips had it either. Why did it take them until Gulftown to implement it again?

Also, why isn't it a standard feature? Doesn't it cost them more to disable it than it would to keep it enabled? Is it really just pure greed?
 
Last edited:

Don Karnage

Platinum Member
Oct 11, 2011
2,865
0
0
Intel implemented HT with Nehalem, and I'm not sure why they didn't on the C2Ds and quads.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Intel implemented HT with Nehalem, and I'm not sure why they didn't on the C2Ds and quads.

Oh yeah, I forgot that the regular i7s had HT; still no unlocked multi, though. It would have made sense to implement it on the C2Qs and even some C2Ds.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
The P4 was still a slow chip, plus it was single-core, so implementing HT was easier, I guess. It took them some time after the C2 series to get HT on chips. We have it now.

A QX is a five-year-old processor; the tech wasn't there. With a current processor you'll get HT, if you pick the right model. Thx, gl
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
It takes time to add features. The Core architecture was not the same development line as the P4 architecture. It's not just a feature they could turn on or off at will and chose to "disable". They would have had to spend the time to implement it with that architecture (if it were even compatible with the concept at all).
 

Maximilian

Lifer
Feb 8, 2004
12,603
9
81
Dunno, maybe they thought nobody would buy Nehalem if it was just an integrated memory controller and an FSB replacement, so they saved HT for it.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
That makes sense. Though what about locked multipliers? Why did they decide to implement those in the first place? Was there ever a time when all multis were left untouched (like AMD's)?
 

CKTurbo128

Platinum Member
May 8, 2002
2,702
1
81
It probably has something to do with the structure of the Core 2 architecture, being derived from the P6 architecture (Pentium Pro, Pentium II/III, Pentium M, Core). Pentium 4/D, with its Netburst architecture, was completely different from P6. If I recall correctly, it was easier for Intel to implement hyper-threading on Netburst due to its long pipelines.

Perhaps with the shorter-pipeline architecture of the Core 2 series, it might have been difficult for Intel to implement HT without radical consequences (i.e. development/release delays, architectural limitations, increased die/transistor size and power consumption, etc.), so they probably just waited until the Nehalem architecture (Core i7) was ready to re-introduce HT, when it was likely to be more feasible.

That makes sense. Though what about locked multipliers? Why did they decide to implement those in the first place? Was there ever a time when all multis were left untouched (like AMD's)?

I believe Intel introduced locked multipliers back in the Pentium II/III era, after it discovered that some disreputable vendors were rebranding some Pentium CPUs at higher speeds by adjusting the multipliers.
 
Last edited:

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
It takes time to add features. The Core architecture was not the same development line as the P4 architecture. It's not just a feature they could turn on or off at will and chose to "disable". They would have had to spend the time to implement it with that architecture (if it were even compatible with the concept at all).
This is the most correct answer.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Yeah, I'll echo what others are saying. Hyperthreading is a feature that requires the architecture to specifically support it. The Pentium 4 (Netburst) architecture was changed to support HT as of Northwood, but the Core architecture was radically different from Netburst. Essentially, Netburst was a dead end, and Core bore more similarities to the Pentium III architecture (P6) than to Netburst, and P6 never supported HT. Intel either didn't have the time or the resources to put hyperthreading into the first iteration of the Core architecture (Conroe) before they released it. Obviously, though, they were able to reintroduce it in the next iteration, Nehalem.

As for why it's not a standard feature in modern Sandy Bridge/Ivy Bridge processors, well, that is just Intel being greedy and making you pay an extra $50 or so to enable the feature. They can do stuff like that because AMD just can't compete with Intel in the $200+ range. Hyperthreading is really only a feature you need if you use heavily multithreaded applications, such as transcoding.
 
Last edited:

nehalem256

Lifer
Apr 13, 2012
15,669
8
0
Yeah, I'll echo what others are saying. Hyperthreading is a feature that requires the architecture to specifically support it. The Pentium 4 (Netburst) architecture was changed to support HT as of Northwood, but the Core architecture was radically different from Netburst. Essentially, Netburst was a dead end, and Core bore more similarities to the Pentium III architecture (P6) than to Netburst, and P6 never supported HT. Intel either didn't have the time or the resources to put hyperthreading into the first iteration of the Core architecture (Conroe) before they released it. Obviously, though, they were able to reintroduce it in the next iteration, Nehalem.

I thought it was in the original P4, but it was just disabled.

Also, at the time HT wasn't even enabled on dual-core P4s except for the Extreme Edition. Since the Pentium M-esque architecture didn't have the feature, it probably wasn't given priority, since Core 2 already supported multi-threading and, with its shorter pipeline, would not have gained much from its addition.
 

nardz84

Member
Jul 11, 2008
71
0
61
Wasn't P4 hyperthreading horrible? I seem to remember most programs actually running worse with it enabled. It probably just took them a while to make it actually useful and worth including in later generations.
 

Maximilian

Lifer
Feb 8, 2004
12,603
9
81
I thought it was in the original P4, but it was just disabled.

Also, at the time HT wasn't even enabled on dual-core P4s except for the Extreme Edition. Since the Pentium M-esque architecture didn't have the feature, it probably wasn't given priority, since Core 2 already supported multi-threading and, with its shorter pipeline, would not have gained much from its addition.

Apparently it was in the Willamette-core P4s but disabled, yeah.
 

greenhawk

Platinum Member
Feb 23, 2011
2,031
0
71
The history of it as I remember it: Hyperthreading was added as a fix for the excessively long pipelines present in the P4s. The performance hit the CPU suffered on a branch misprediction was quite massive, enough to make a P4 less useful than previous CPUs like the P3.
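
That misprediction cost scales with pipeline depth, which a rough toy model can illustrate. The branch frequency, misprediction rate, and flush depths below are illustrative assumptions chosen only to show the trend, not real P3/P4 measurements:

```python
# Toy model of the branch-misprediction penalty: every mispredicted
# branch flushes roughly a pipeline's worth of work. All rates and
# depths here are assumptions for illustration only.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, flush_penalty):
    """Average cycles per instruction including branch-flush stalls."""
    return base_cpi + branch_freq * mispredict_rate * flush_penalty

# ~20% branches, 5% of them mispredicted:
p3_like = effective_cpi(1.0, 0.20, 0.05, 10)  # ~10-stage flush -> ~1.1 CPI
p4_like = effective_cpi(1.0, 0.20, 0.05, 20)  # ~20-stage flush -> ~1.2 CPI
print(p3_like, p4_like)
```

Doubling the flush depth doubles the stall term, which is the sense in which the deep P4 pipeline made each misprediction "quite massive".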

It was not included in the original Core processors because Intel went back to the drawing board and left the P4 development line for dead (it was intended to push the GHz limit and did not care about anything else).

What Intel did was go back and start development again from the Pentium M (designed for laptops) range of processors. Side note, but one of the major advantages of the Pentium M over earlier Pentiums was the ability to power gate unused cache, leading to massive power savings.

With the shorter pipeline of the Core chips, Hyperthreading was pointless, and with the short development time there was even less reason to use it in the initial Core designs.

It was later brought back as it did have some use, but performance from it was really hit or miss, with some benchmarks showing a performance loss with it enabled and others showing such a small increase as to be more of a "what the" moment.

The current implementation of Hyperthreading is better, but while it is not as bad as it was, the odd benchmark can still perform better without it.

Currently it is used for market segmentation (i.e. i3s have it, i5s don't, i7s do). Marketing uses it to "upsell" a processor to the masses without the cost of actually having a proper CPU behind what Windows reports as the number of processors present.

Personally, my rule of thumb on "do I need Hyperthreading?" is still no.


On the topic of clock multiplier locking, Intel started doing it when a rather large underground market developed around the 1 GHz era, back when Intel's silicon quality was excellent for overclockers: you could buy a 300 MHz part and something like 99% of all CPUs would overclock to 450 MHz without changing voltages. Because of the better silicon, businesses existed to re-badge Intel CPUs (buy the lowest/cheapest model, cut back the top of the CPU, and re-laser-etch the details of a faster CPU onto it). The wholesale price difference between the cheap chip and the one it was re-badged as was probably only $200-300, but the numbers involved were massive. Intel's fix to kill this problem off was to burn in / lock the maximum multiplier a given chip was allowed to use, leaving FSB adjustment as the only way to overclock (Intel at least left the lower multipliers unlocked to allow some flexibility for the overclocking market).

AMD did not have this problem (so they left their multipliers unlocked as a selling feature): their name did not attract the same attention from re-badgers, the price difference from cheap to high end was far narrower, and their silicon, in comparison at the time, was of far lower quality, so failure rates from this practice would have been far higher.
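
The mechanics above boil down to core clock = FSB x multiplier, with the multiplier capped in hardware. A small sketch (using hypothetical numbers roughly matching the 300 MHz to 450 MHz era parts described above) shows why FSB adjustment became the only lever:

```python
# Sketch of a hardware-capped multiplier: requests above the burned-in
# maximum are ignored, so raising the FSB is the only way to overclock.
# The 66/100 MHz FSB values and 4.5x multiplier are illustrative
# assumptions, not specs for any particular part.

def core_clock(fsb_mhz, multiplier, max_multiplier):
    """Effective core clock with the multiplier locked at a maximum."""
    return fsb_mhz * min(multiplier, max_multiplier)

stock = core_clock(66, 4.5, 4.5)    # ~297 MHz at stock settings
locked = core_clock(66, 6.8, 4.5)   # asking for 6.8x does nothing
fsb_oc = core_clock(100, 4.5, 4.5)  # 450 MHz via FSB overclock alone
print(stock, locked, fsb_oc)
```

The lock defeats re-badging because a cheap chip can no longer be told to run at an expensive chip's multiplier, while FSB overclocking by the end user still works.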
 

SocketF

Senior member
Jun 2, 2006
236
0
71
Intel implemented HT with Nehalem, and I'm not sure why they didn't on the C2Ds and quads.

If I remember correctly, the C2D was designed in Haifa, because the engineers there were previously responsible for the mobile Pentiums / Centrino.

Nehalem was then the responsibility of the US P4 team again, hence the added Hyperthreading, because those people were experienced with it.

But I am not 100% sure ... long time ago ;-)
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
The history of it as I remember it: Hyperthreading was added as a fix for the excessively long pipelines present in the P4s. The performance hit the CPU suffered on a branch misprediction was quite massive, enough to make a P4 less useful than previous CPUs like the P3.

It was not included in the original Core processors because Intel went back to the drawing board and left the P4 development line for dead (it was intended to push the GHz limit and did not care about anything else).

What Intel did was go back and start development again from the Pentium M (designed for laptops) range of processors. Side note, but one of the major advantages of the Pentium M over earlier Pentiums was the ability to power gate unused cache, leading to massive power savings.

With the shorter pipeline of the Core chips, Hyperthreading was pointless, and with the short development time there was even less reason to use it in the initial Core designs.

It was later brought back as it did have some use, but performance from it was really hit or miss, with some benchmarks showing a performance loss with it enabled and others showing such a small increase as to be more of a "what the" moment.

The current implementation of Hyperthreading is better, but while it is not as bad as it was, the odd benchmark can still perform better without it.

Currently it is used for market segmentation (i.e. i3s have it, i5s don't, i7s do). Marketing uses it to "upsell" a processor to the masses without the cost of actually having a proper CPU behind what Windows reports as the number of processors present.

Personally, my rule of thumb on "do I need Hyperthreading?" is still no.


On the topic of clock multiplier locking, Intel started doing it when a rather large underground market developed around the 1 GHz era, back when Intel's silicon quality was excellent for overclockers: you could buy a 300 MHz part and something like 99% of all CPUs would overclock to 450 MHz without changing voltages. Because of the better silicon, businesses existed to re-badge Intel CPUs (buy the lowest/cheapest model, cut back the top of the CPU, and re-laser-etch the details of a faster CPU onto it). The wholesale price difference between the cheap chip and the one it was re-badged as was probably only $200-300, but the numbers involved were massive. Intel's fix to kill this problem off was to burn in / lock the maximum multiplier a given chip was allowed to use, leaving FSB adjustment as the only way to overclock (Intel at least left the lower multipliers unlocked to allow some flexibility for the overclocking market).

AMD did not have this problem (so they left their multipliers unlocked as a selling feature): their name did not attract the same attention from re-badgers, the price difference from cheap to high end was far narrower, and their silicon, in comparison at the time, was of far lower quality, so failure rates from this practice would have been far higher.

Excellent answer, you really explained it well.

Everyone else did too, but this one was very thorough.

Now that I think about it, Intel is a VERY greedy company, and AMD doesn't do half the things they do to stick it to us as consumers. AMD still has the FSB and all their processors have unlocked multis, plus they don't lock out any features on their chips just so we'll buy more expensive ones. I might switch over to an FX-81XX platform.
 
Last edited:

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Intel implemented HT with Nehalem, and I'm not sure why they didn't on the C2Ds and quads.

Intel reads the forums, and since the P4 was seen as complete rubbish, Core couldn't have any P4 elements. The truth is, a lot of P4 ended up in Core 2.
I've given this a lot of thought. I think if Intel had put HT on Core, it would have killed AMD as a company. Intel knew this. We all saw the benchies 6 months before release.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
If I remember correctly, the C2D was designed in Haifa, because the engineers there were previously responsible for the mobile Pentiums / Centrino.

Nehalem was then the responsibility of the US P4 team again, hence the added Hyperthreading, because those people were experienced with it.

But I am not 100% sure ... long time ago ;-)

You are correct.

IDC (Haifa) did Centrino/C2D
CCDO (Oregon) did Nehalem
 

Rvenger

Elite Member <br> Super Moderator <br> Video Cards
Apr 6, 2004
6,283
5
81
I think if Intel had put HT on Core, it would have killed AMD as a company.


HT doesn't have that much of an impact. Especially since nothing even used quad cores back then, why would they even consider HT? HT hurts performance if there's no use for it.

I'm sorry, AMD is the king of cores, so your argument is a little irrelevant.


:sneaky:
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The history of it as I remember it: Hyperthreading was added as a fix for the excessively long pipelines present in the P4s.

SMT (or Intel's Hyperthreading) is used because modern processors can't completely fill every part of the pipeline. While the long pipeline of the Pentium 4 architecture might have benefited more from SMT, the single-issue front end probably made it even worse. Core 2 and later have short pipeline stages and really good performance/clock, but the front end is 4x wider, likely making it a better fit for Hyperthreading.
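
The idle-slot argument can be sketched numerically. The 4-wide issue width and per-thread IPC figure below are illustrative assumptions, not measured numbers for any real core:

```python
# Toy upper-bound model of SMT: a single thread rarely fills every
# issue slot of a wide front end, so a second hardware thread can
# occupy the leftover slots. Width and IPC values are assumptions
# for illustration only.

def issue_utilization(issue_width, per_thread_ipc, threads):
    """Best-case fraction of issue slots filled by `threads` threads."""
    return min(issue_width, per_thread_ipc * threads) / issue_width

one = issue_utilization(4, 1.5, 1)  # single thread fills 1.5 of 4 slots
two = issue_utilization(4, 1.5, 2)  # second thread soaks up idle slots
print(one, two)  # 0.375 0.75
```

A narrow front end gives the second thread nowhere to go, which is one way to read the point about the P4 versus the 4-wide Core designs.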

It was not included in the original Core processors because Intel went back to the drawing board and left the P4 development line for dead

While Hyperthreading takes up minimal die and transistor resources, it's extra difficult to validate and make work properly with minimal performance loss (in the applications that don't take advantage of the extra threads). The Core 2 team didn't have experience with the technology, while the Netburst team, which was also responsible for Nehalem, did.

Side note, but one of the major advantages of the Pentium M over earlier Pentiums was the ability to power gate unused cache, leading to massive power savings.

No such thing existed for Intel back then. Power gating, while misused for everything related to power management nowadays, specifically refers to pairing a circuit with a special transistor to turn it off completely. The cache could be gated, but not completely. The power savings were due to a combination of factors like better P-states (EIST), improved IPC, and the cache granularity you mentioned.

The current implementation of Hyperthreading is better, but while it is not as bad as it was, the odd benchmark can still perform better without it.

Completely disagree. It incurs a very small loss in most applications (0-5%) with huge gains in those that were meant for it in the first place: 0.5 steps back, 5 steps forward. In dual cores with HT even games benefit from it, proving that the lack of performance gain in quads is very likely due to applications not taking advantage of it.

In servers, Hyperthreading alone is probably worth as much as the integrated memory controller for Nehalem's performance gains over Penryn.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
232
106
Pentium D is a series of microprocessors developed in Intel's Research & Development Centers in Israel, and was first introduced to the public at the Spring 2005 Intel Developer Forum. A Pentium D chip consists of two Pentium 4 Prescott dies in a single package (contrary to popular belief that they are two cores on a single die).
....
Source.

Looks like the Israelis did have a hand in Netburst, if the above is true.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Source.

Looks like the Israelis did have a hand in Netburst, if the above is true.

They were more involved in engineering the MCM aspects of the final product; there was no re-engineering of the cores or deep-diving into the pre-existing hyperthreading machinery that the Oregon team had already implemented in the cores themselves.
 

greenhawk

Platinum Member
Feb 23, 2011
2,031
0
71
While Hyperthreading takes up minimal die and transistor resources,

The cache could be gated, but not completely. The power savings were due to a combination of factors like better P-states (EIST), improved IPC, and the cache granularity you mentioned.

1 - The last numbers I remember seeing for the amount of silicon needed for Hyperthreading were something like 20% of the core it was working with. A noticeable chunk in my book, but vs. the whole CPU it is not that much (cache takes a huge amount of space vs. the processor).

2 - I remembered the feature, not its name. It was a long, long time ago.
 

denev2004

Member
Dec 3, 2011
105
1
0
The P4 was still a slow chip, plus it was single-core, so implementing HT was easier, I guess. It took them some time after the C2 series to get HT on chips. We have it now.

A QX is a five-year-old processor; the tech wasn't there. With a current processor you'll get HT, if you pick the right model. Thx, gl

It was there, just not that suitable... With a chip that had a relatively shorter pipeline and not much development time, they didn't want to take the risk of implementing HT on it.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
1 - The last numbers I remember seeing for the amount of silicon needed for Hyperthreading were something like 20% of the core it was working with.

Less than 5%: http://www.anandtech.com/show/1031/3

Nowadays it's probably a lot less, since they integrate lots of functionality that used to be on a separate chip, like the memory/PCI Express controller, iGPU, QPI, etc.

If we assume it takes 5% of each Sandy/Ivy Bridge core, then as a proportion of die size it would take up less than 1.5%, or ~2.3 mm² for all four cores.
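
The arithmetic behind that estimate can be checked with a quick sketch. The die and per-core areas below are illustrative assumptions in a rough Sandy Bridge quad-core ballpark, not official Intel figures:

```python
# Back-of-the-envelope check of the HT area estimate above. All areas
# here are assumptions for illustration, not official numbers.

die_area_mm2 = 216.0        # assumed quad-core die area
core_area_mm2 = 11.5        # assumed logic area of one core
ht_fraction_of_core = 0.05  # "less than 5%" per-core cost of HT

ht_area_total = 4 * core_area_mm2 * ht_fraction_of_core
ht_share_of_die = ht_area_total / die_area_mm2

print(f"{ht_area_total:.1f} mm^2")      # ~2.3 mm^2 across 4 cores
print(f"{ht_share_of_die:.1%} of die")  # ~1.1%, under the 1.5% bound
```

Whatever exact areas you plug in, a few percent of the cores alone is a small slice once the iGPU, caches, and uncore are counted in the die total.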
 
Last edited: