Rambus Question

john3850

Golden Member
Oct 19, 2002
1,436
21
81
Would the P4 have been better if Intel had stayed with Rambus?

I read online that the original Rambus gave the P4 somewhat of an edge.
I also remember that Intel was to get a large amount of Rambus shares for using it.
I built half a dozen P3s but never a P4 and went with AMD instead, so I never really knew the P4.
 

greenhawk

Platinum Member
Feb 23, 2011
2,031
0
71
1) Yes, but that is mainly because the P4 was designed to work with Rambus.

Rambus was good, but overpriced, and it only got worse. Trying to force a monopoly onto the market did not go down well.

The P4 ended up "failing" as it could not be redesigned for SDRAM.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Would the P4 have been better if Intel had stayed with Rambus?

I read online that the original Rambus gave the P4 somewhat of an edge.
I also remember that Intel was to get a large amount of Rambus shares for using it.
I built half a dozen P3s but never a P4 and went with AMD instead, so I never really knew the P4.

Not really. The pipeline was too deep, and the design did not adequately account for the effect of latencies, or for the hard cap on clock speed imposed by thermal limitations.

On release, the P4 was around 2 GHz, and Intel claimed it was designed to scale through 10 GHz in 5-6 years. If that had indeed happened, the P4 would have been competitive with lower-clocked chips like the Athlon despite latency issues, but obviously it didn't.

Rambus doesn't inherently solve any of the problems with the pipeline in the P4. In fact, it only provides marginal improvements for certain types of streaming data.

Realistically, even highly optimized code was going to be inherently slower (per clock) on a P4 than on a P3 or Athlon (except in weird cases of specifically anti-AMD code). The ONLY way it was going to catch up was raw clock, which it was never going to achieve due to the thermal issues. Once the Athlon incorporated SSE2, it handily beat the P4 on even the most friendly optimized code, as load latency for 128-bit data on the Athlon was only 1-2 cycles, where it was 8-12 cycles on a P4.

The simple fact that, even on 100% cache hits (no DRAM access at all), it was still slower than competing processors, sometimes by as much as 50%, illustrates that RDRAM wouldn't fix this. Instruction latencies of 10-18 cycles for instructions that stall only 3-4 cycles on a P3 or Athlon were a big deal, and insurmountable in branchy code, especially in light of the smaller caches, poor branch prediction, and the elimination of "free" FXCH, among various other instruction-level speed hits. The superior branch prediction on Athlon and Core 2 chips was also a substantial difference, and Intel's desire for maximum clock speed held the size of the L1 cache down as well, further hindering performance.
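
To make the latency point concrete, here's a minimal, invented C sketch (not a real benchmark; the struct and function names are made up): a pointer chase is fully serialized, so every hop pays the full load-to-use latency, and no amount of clock speed or memory bandwidth, RDRAM included, can hide it.

Code:
#include <stddef.h>

/* Invented illustration: each iteration's address comes from the
   previous iteration's load, so the loads cannot overlap. Per-hop cost
   is bounded below by the load-to-use latency, which is exactly where
   a long-latency core loses per clock, regardless of RAM bandwidth. */
struct node {
    struct node *next;
    double value;
};

double chase_and_sum(const struct node *p)
{
    double total = 0.0;
    while (p != NULL) {
        total += p->value;   /* rides alongside the serial chain */
        p = p->next;         /* serial dependence between loads */
    }
    return total;
}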

There were some water-cooled engineering demos that Intel made at around 5-6 GHz, but they were pushing 180 W+ of power, which is getting to the point where the substrates and packaging can't hold up. The architecture just wasn't there... not to mention the electromigration issues they had at those speeds, regardless of cooling.

I do appreciate Intel being innovative. They came out of the very successful P3 architecture, which resembles the modern Athlon/Core architecture. They tried a deep-pipeline chip to see what would happen (the NetBurst P4), and they tried a massive explicitly parallel design (IA-64) with the Itanium, both of which flopped despite being reasonable on paper.

But the world learned a lot about computation and CPU architecture from those real-world experiments and all of our chips are better today due to those lessons learned.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
1) Yes, but that is mainly because the P4 was designed to work with Rambus.

Rambus was good, but overpriced, and it only got worse. Trying to force a monopoly onto the market did not go down well.

The P4 ended up "failing" as it could not be redesigned for SDRAM.
The P4 was designed with a smaller cache than it needed to feed its long pipeline, which was hard to keep filled. RDRAM, with the 850 chipset, gave it plenty of RAM bandwidth to help out. The later 865/875 chipsets matched the speed of the old 850 just fine with nothing other than SDRAM.

The P4 ended up "failing" because the market was not happy with it: high clocks, long pipelines, and needing to recompile, or even rewrite, software to get it running faster than on an older CPU (when that was even possible) were simply not acceptable.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
RDRAM was a disaster for Intel. They couldn't wait to get out of it, but they couldn't, because they had a contract and a stake in the company. They admitted outright that it was the wrong decision.

RDRAM was a big cost burden. Not only did you have to pay Rambus royalties, but the RIMMs themselves were more expensive due to needing a memory controller for each one. They also needed terminators if you didn't fill up the slots. And their high clocks drove prices up further: first, since they used more power, they needed heat spreaders at a time when competing DIMMs didn't, and they also required more expensive motherboard designs to meet the same signal integrity goals.

The latency was also pretty poor, for a variety of reasons (the biggest one being that the RAM chips were daisy-chained, and the latency to the last one determined the latency for the whole system, although there were other factors). The bandwidth was a bit better than the DDR available at the same time, but that didn't often make a tangible difference in software that couldn't really utilize it.

I don't really see how having a long pipeline alone means you need more RAM bandwidth to compensate somehow. NetBurst maybe generated more RAM traffic because of more aggressive speculation and how it dealt with replays. But in those cases it shouldn't matter if you end up stalling the RAM controller, if the results are just going to get thrown out. Big bandwidth is really only usable when it comes from legitimate requests that have their latency well hidden, mainly from prefetch (with software hints if necessary) working well with very regular data patterns. Games like Quake 3 demonstrated good utilization here and thus had an advantage with RDRAM, but the thing is that at that point the game was already running at hundreds of FPS, so the boost it got from the bandwidth advantage is both exaggerated and irrelevant.
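
To illustrate the kind of access pattern that can actually use the bandwidth, here's a minimal, invented C sketch using the SSE prefetch-hint intrinsic (the function name and the 512-element lookahead are made up, not tuned for any real chip):

Code:
#include <xmmintrin.h>   /* _mm_prefetch and the _MM_HINT_* constants */
#include <stddef.h>

/* Invented example: stream over a big array, hinting the cache a fixed
   distance ahead. Because the pattern is perfectly regular, the fetches
   started by the hints complete before the data is needed, hiding DRAM
   latency and letting the loop approach the bus's peak bandwidth. */
float sum_with_prefetch(const float *a, size_t n)
{
    const size_t lookahead = 512;   /* illustrative, untuned value */
    float total = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (i + lookahead < n)
            _mm_prefetch((const char *)&a[i + lookahead], _MM_HINT_T0);
        total += a[i];
    }
    return total;
}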

The fact that much lesser companies like SiS could actually provide competitive chipsets without using RDRAM was very telling.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I don't really see how having a long pipeline alone means you need more RAM bandwidth to compensate somehow.
In itself, it doesn't. But keeping it filled enough to perform required much more prefetching than other CPUs of the time did (mainly late-model P3s and Athlons), and needing that degree of oracular work was due in large part to the longer latencies in the CPU. Now all our CPUs need all that, but it's different when bottom-of-the-barrel crap has 2MB of cache and mainstream parts have 4MB or more, even mobile, versus the 256K, then 512K, of back then.

Early on, also, it was RDRAM (2133 MBps to 3200 MBps) vs. PC100/PC133 (800 MBps and 1066 MBps), not just RDRAM vs. SDRAM. DDR was good enough and just needed some chipset tweaking to compete. PC133 could be as little as 1/3 the bandwidth of RDRAM at the time, and was typically 1/2.
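
If anyone wants to sanity-check bandwidth figures like these, peak bandwidth is just bus width times transfer rate. A quick illustrative sketch (nominal numbers only; actual grades and channel counts varied by chipset):

Code:
#include <stdio.h>

/* Peak bandwidth = bus width (bytes) x transfer rate (MT/s).
   Nominal figures; PC133 really runs at 133.33 MHz, hence "1066". */
int main(void)
{
    printf("PC100 SDRAM : %d MB/s\n", 8 * 100);              /* 64-bit bus @ 100 MT/s */
    printf("PC133 SDRAM : %d MB/s\n", 8 * 133);              /* 64-bit bus @ 133 MT/s */
    printf("PC800 RDRAM : %d MB/s per channel\n", 2 * 800);  /* 16-bit bus @ 800 MT/s */
    printf("Dual PC800  : %d MB/s (e.g. i850)\n", 2 * 800 * 2);
    return 0;
}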
 

Remobz

Platinum Member
Jun 9, 2005
2,563
37
91
RDRAM was a disaster for Intel. They couldn't wait to get out of it, but they couldn't, because they had a contract and a stake in the company. They admitted outright that it was the wrong decision.

RDRAM was a big cost burden. Not only did you have to pay Rambus royalties, but the RIMMs themselves were more expensive due to needing a memory controller for each one. They also needed terminators if you didn't fill up the slots. And their high clocks drove prices up further: first, since they used more power, they needed heat spreaders at a time when competing DIMMs didn't, and they also required more expensive motherboard designs to meet the same signal integrity goals.

The latency was also pretty poor, for a variety of reasons (the biggest one being that the RAM chips were daisy-chained, and the latency to the last one determined the latency for the whole system, although there were other factors). The bandwidth was a bit better than the DDR available at the same time, but that didn't often make a tangible difference in software that couldn't really utilize it.

I don't really see how having a long pipeline alone means you need more RAM bandwidth to compensate somehow. NetBurst maybe generated more RAM traffic because of more aggressive speculation and how it dealt with replays. But in those cases it shouldn't matter if you end up stalling the RAM controller, if the results are just going to get thrown out. Big bandwidth is really only usable when it comes from legitimate requests that have their latency well hidden, mainly from prefetch (with software hints if necessary) working well with very regular data patterns. Games like Quake 3 demonstrated good utilization here and thus had an advantage with RDRAM, but the thing is that at that point the game was already running at hundreds of FPS, so the boost it got from the bandwidth advantage is both exaggerated and irrelevant.

The fact that much lesser companies like SiS could actually provide competitive chipsets without using RDRAM was very telling.


I remember some big-time fights back in the day on these forums, with some idiots trying to defend how good Rambus was just to justify their expensive purchases.
 

CakeMonster

Golden Member
Nov 22, 2012
1,428
535
136
I don't understand the technical details, but I remember that Micron and two other companies colluded in dumping DDR prices for a long while in order to force RDRAM out of the competition. On top of that, Rambus charged a lower license fee on RDRAM than on DDR. While I have no love for the patent system and the way things work, most people have the wrong idea about why RDRAM failed.
 

Doomer

Diamond Member
Dec 5, 1999
3,722
0
0
I've got a couple of sticks of 128MB Rambus and two terminators if anyone is interested, lol.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
From a technical performance standpoint (forgive me if somebody made this point already), they should have stuck with it, but in reality they had to move away from it for market share; it would have shut most potential consumers out. Once Granite Bay came along...

Good riddance.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
I've got a couple sticks of 128mb RamBus and two terminators if anyone is interested, lol.
LOL, damn, I was sorting the spare memory drawer at the college and just kept pulling RAMBUSt after RAMBUSt RIMM out, sitting there doing nothing. I was more impressed by the SIMMs with aluminum fingers on the back. Still useful in Cisco routers ^_^
 

Remobz

Platinum Member
Jun 9, 2005
2,563
37
91
LOL damn I was sorting the spare memory drawer at the college and I just kept pulling RAMBUSt after RAMBUSt RIMM out. Sitting there doing nothing. I was more impressed by the SIMM's with aluminum fingers in the back. Still useful in Cisco routers ^_^

Glad you brought this up. I have some old RAM (5 years or more) and old hardware just sitting in drawers collecting dust. Why keep it, I wonder? Seriously, why just let it sit there collecting dust?

Maybe I need help?...lol!
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
In itself, it doesn't. But keeping it filled enough to perform required much more prefetching than other CPUs of the time did (mainly late-model P3s and Athlons), and needing that degree of oracular work was due in large part to the longer latencies in the CPU. Now all our CPUs need all that, but it's different when bottom-of-the-barrel crap has 2MB of cache and mainstream parts have 4MB or more, even mobile, versus the 256K, then 512K, of back then.

Early on, also, it was RDRAM (2133 MBps to 3200 MBps) vs. PC100/PC133 (800 MBps and 1066 MBps), not just RDRAM vs. SDRAM. DDR was good enough and just needed some chipset tweaking to compete. PC133 could be as little as 1/3 the bandwidth of RDRAM at the time, and was typically 1/2.

A longer pipeline doesn't by itself mean more outstanding memory requests in flight. A larger OoO execution window can mean that.

I don't understand the technical details, but I remember that Micron and two other companies colluded in dumping DDR prices for a long while in order to force RDRAM out of the competition. On top of that, Rambus charged a lower license fee on RDRAM than on DDR. While I have no love for the patent system and the way things work, most people have the wrong idea about why RDRAM failed.

I don't know about Micron or other RAM manufacturers dumping to get rid of Rambus, but there's no way that anyone charged a higher license fee for DDR, because there is no license fee at all. It's an open standard for all JEDEC members.

The only things resembling a DDR license fee were the royalty settlements Rambus themselves got from patent trolling.
 

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
On release, the P4 was around 2 GHz, and Intel claimed it was designed to scale through 10 GHz in 5-6 years. If that had indeed happened, the P4 would have been competitive with lower-clocked chips like the Athlon despite latency issues, but obviously it didn't.
Pentium 4 Willamette debuted with its Socket 423 variants at 1.4 and 1.5 GHz; check here. Faster parts appeared halfway through 2001, if CPU-World's info is right.
Yes, Prescott was supposed to hit 5 GHz, and the cancelled Tejas was to carry us to 10 GHz. Prescott actually increased the retail frequency bar by just 400 MHz over Northwood (there were 3.4 GHz Northwoods), with some 3.8 GHz models that had serious thermal throttling issues on retail cooling. I recall that Intel made a lot of noise when they announced it was too hard to deliver a 4 GHz Prescott and canceled it (that was around the Pentium D Smithfield introduction).


Early on, also, it was RDRAM (2133 MBps to 3200 MBps) vs. PC100/PC133 (800 MBps and 1066 MBps), not just RDRAM vs. SDRAM. DDR was good enough and just needed some chipset tweaking to compete. PC133 could be as little as 1/3 the bandwidth of RDRAM at the time, and was typically 1/2.
You got your numbers wrong. The first time Intel pushed Rambus was in the Pentium 3 era, with the i820 chipset. PC133 was capable of 1066 MB/s, while the 16-bit 800 MHz RIMMs managed only 1600 MB/s, with higher latency and a ridiculous price. DDR crushed it.
And even if you had faster RAM, you still had the processor-to-chipset-to-RAM bus bottleneck: it isn't really THAT useful to increase the amount of data you can move from RAM to chipset if you can still only move the same amount from chipset to processor. Well, actually, it was useful for IGPs, in case anyone recalls dual channel on Socket A's nForce2.
I know that RIMMs received faster versions in the Pentium 4 era, but I don't recall the details of those, nor what the competing DDR of that era was.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I don't know about Micron or other RAM manufacturers dumping to get rid of Rambus
They colluded to keep the price of RDRAM high, regardless of their actual costs and market demand. They got sued, and investigated by government regulatory agencies.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You got your numbers wrong. The first time Intel pushed Rambus was in the Pentium 3 era, with the i820 chipset. PC133 was capable of 1066 MB/s, while the 16-bit 800 MHz RIMMs managed only 1600 MB/s, with higher latency
The 820 was barely ever used and offered nothing to the CPUs it was there for. I've only ever seen a single 820 motherboard. The 810 and 815, OTOH, were everywhere. The 850E, however, was rather popular for those who had sufficient funds.

DDR crushed it.
DDR was not available until RDRAM was already on its way out (it didn't have a very long life).

The 850E was dual-channel RDRAM (up to PC800 early on), and the 845 was single-channel PC1xx, with the S423 P4s. That's 2133-3200 MBps vs. 800-1066 MBps. By the time the 845P was out, much less the 865/875, RDRAM was basically dead.
 

lamedude

Golden Member
Jan 14, 2011
1,206
10
81
If you wanted 133 MHz FSB support (without overclocking the AGP bus) and AGP 4x on an Intel chipset, you were limited to the 820/840 for almost a year until the 815 came out. Even then, most people went with the 440BX or VIA Apollo.
 

naukkis

Senior member
Jun 5, 2002
782
636
136
I don't know about Micron or other RAM manufacturers dumping to get rid of Rambus, but there's no way that anyone charged a higher license fee for DDR, because there is no license fee at all. It's an open standard for all JEDEC members.

The only things resembling a DDR license fee were the royalty settlements Rambus themselves got from patent trolling.

Nope, all SDRAM variants are subsets of RDRAM and covered by Rambus patents. Rambus got royalties from all SDRAM variants sold.

Rambus hate is something that comes with ignorance: that small company developed what the bigger memory companies didn't, and there was simply no possibility other than using Rambus-patented memory technology. And today's DDR3 SDRAM is basically RDRAM with an independent memory bus to each memory chip; RDRAM used the same chaining structure, but with a shared data bus for all the memory chips. RDRAM was developed in the '90s, so JEDEC and the memory makers have developed nothing in the last twenty years.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Nope, all SDRAM variants are subsets of RDRAM and covered by Rambus patents. Rambus got royalties from all SDRAM variants sold.

Rambus hate is something that comes with ignorance: that small company developed what the bigger memory companies didn't, and there was simply no possibility other than using Rambus-patented memory technology. And today's DDR3 SDRAM is basically RDRAM with an independent memory bus to each memory chip; RDRAM used the same chaining structure, but with a shared data bus for all the memory chips. RDRAM was developed in the '90s, so JEDEC and the memory makers have developed nothing in the last twenty years.

Rambus was also part of JEDEC, and near the end they went out and patented designs they effectively stole from JEDEC. They were then found guilty of that in civil court and kicked out of JEDEC, but there was no prior art for the patents, so they stood at the time. JEDEC then went on to design DDR, DDR2, and DDR3. DDR2 and DDR3 basically got rid of the "chain of chips," using balanced addressing for DDR2 and a bus-based design for DDR3.

The short of it is that Rambus didn't design anything; they stole it while they were on the JEDEC board. The fact that they are basically a dead company now shows it.
 

naukkis

Senior member
Jun 5, 2002
782
636
136
Rambus was also part of JEDEC, and near the end they went out and patented designs they effectively stole from JEDEC. They were then found guilty of that in civil court and kicked out of JEDEC, but there was no prior art for the patents, so they stood at the time. JEDEC then went on to design DDR, DDR2, and DDR3. DDR2 and DDR3 basically got rid of the "chain of chips," using balanced addressing for DDR2 and a bus-based design for DDR3.

The short of it is that Rambus didn't design anything; they stole it while they were on the JEDEC board. The fact that they are basically a dead company now shows it.

They stole what? Rambus designed their original synchronous RDRAM circa 1990. JEDEC designed nothing. And of course Rambus gets royalties from all the DDRx versions sold today, as they are clearly just different implementations of their memory design. JEDEC tried to steal all the Rambus patents and implement them one by one for free, but failed.

And DRDRAM from the late nineties is about the same as what we now have with DDR3: similar data rates, and address signals routed onto the DIMM itself, unlike the older SDRAM variants where the address channel stays on the motherboard. Clearly there were some real geniuses at Rambus, and what they implemented is what everybody would have to do at higher frequencies. Remember that, thanks to Rambus, we had memory of similar speed to what we use now fifteen years ago, but somehow we settled for slower memory types for decades. Are we stupid, or is there some big money making decisions for us?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The DRAM cartel were the bad guys, but they seem to have won the PR spin wars.

We still suffer with parallel memory designs today because of that.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
They stole what? Rambus designed their original synchronous RDRAM circa 1990. JEDEC designed nothing. And of course Rambus gets royalties from all the DDRx versions sold today, as they are clearly just different implementations of their memory design. JEDEC tried to steal all the Rambus patents and implement them one by one for free, but failed.

And DRDRAM from the late nineties is about the same as what we now have with DDR3: similar data rates, and address signals routed onto the DIMM itself, unlike the older SDRAM variants where the address channel stays on the motherboard. Clearly there were some real geniuses at Rambus, and what they implemented is what everybody would have to do at higher frequencies. Remember that, thanks to Rambus, we had memory of similar speed to what we use now fifteen years ago, but somehow we settled for slower memory types for decades. Are we stupid, or is there some big money making decisions for us?
If they're such geniuses, why don't they come up with fresh ideas and designs instead of going around suing larger companies? That is not innovation.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
The DRAM cartel were the bad guys, but they seem to have won the PR spin wars.

We still suffer with parallel memory designs today because of that.
What strengths would serial memory designs provide? The added costs are not appealing in the consumer market. Servers already use serial designs with the AMB.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
What strengths would serial memory designs provide? The added costs are not appealing in the consumer market. Servers already use serial designs with the AMB.

Serial greatly benefits speed and lowers the number of traces needed. The AMB is a hotfix solution. Also, with DDR4 for example, you only get 1 DIMM per channel unless you add a switch.

Just about everything other than the memory is serialized now.
 