Originally posted by: MadRat
DDR-II and DDR are just the big memory makers' attempt to recycle old tech. They push the transfers per clock ahead one generation and move two steps back in clock rate each time. They did it with EDO (70ns 5v 168-pins from 50ns 5v 72-pins; recycling 50ns, 60ns, and 70ns), they did it with DDR (PC166 @166MHz to PC1600 @100MHz; recycling 100MHz, 133MHz, 166MHz), and now they'll do it with DDR-II. Lest we forget, we've been past the 100MHz mark in memory a couple of times already...
Strangely enough, you seem to know more about the history of memory than I do yet you learned nothing from it.
DDR-II/533 is hardly different from DDR400, so why would it be 4x better than PC133? DDR was only 7-15% better in raw streaming performance at the same internal clock speeds, so why would DDR-II suddenly be so much more efficient? The truth is that DDR-II will at best be around 50% efficient, nowhere near SDR's mid-70-ish percent efficiency. DDR was only in the mid-50s, another reason it's impossible to say DDR is actually double the performance; at 133MHz, SDR is around 600MB/sec and DDR is around 1GB/sec. When it came to asynchronous reads and writes, the SDR memory had an advantage.
Hmm... perhaps you should take another look at how that efficiency is calculated. If it's done the way I think it is, then DDR is more efficient overall than the legacy SDRAM it replaced. DDR transfers two bits per pin per clock, so even at 50% efficiency that's at least one bit of valid data per clock. Legacy SDRAM at 70% efficiency is the same as saying that 30% of the time, SDRAM came up short and sent nothing useful. 50% efficiency at twice the transfer rate is a lot better than 70% at the reference rate.
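To make that arithmetic concrete, here's a rough sketch (the 64-bit bus width is an assumption, and the efficiency figures are just the ones quoted in this thread, not measurements):

```python
# Rough effective-bandwidth sketch for the efficiency argument above.
# Assumes a 64-bit (8-byte) DIMM bus; efficiency percentages are the
# ones quoted in the thread, plugged in for illustration only.

def effective_mb_per_sec(clock_mhz, transfers_per_clock, efficiency,
                         bus_bytes=8):
    """Peak bandwidth scaled by the fraction of transfers carrying data."""
    peak = clock_mhz * transfers_per_clock * bus_bytes  # MB/s (1 MB = 1e6 B)
    return peak * efficiency

sdr = effective_mb_per_sec(133, 1, 0.70)  # legacy SDRAM at ~70% efficiency
ddr = effective_mb_per_sec(133, 2, 0.55)  # DDR at ~55% efficiency

print(f"SDR-133 ~ {sdr:.0f} MB/s, DDR-266 ~ {ddr:.0f} MB/s")
```

Even granting SDR the higher per-transfer efficiency, DDR's two transfers per clock still move more useful data per clock (0.70 vs 1.10 valid bits per pin per cycle, by these numbers).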
SDR, DDR, and DDR-II may use the same underlying SDRAM technology, but the controllers for each are different. Interleaved SDR memory ran pretty close to DDR performance, and with lower latency. DDR could read and write relatively close to SDR, but there was a slim difference between the two. And it took DDR much longer to catch up to SDR's speeds on a per-clock basis; SDR was at PC166 when DDR266 was just coming to market. If I understand right, DDR-II is going dual-channel with QDR-style transfers to boost Prescott performance in the Alderwood/Grantsdale chipsets, and the reviewers were given PC4200 when the initial mainstream rollout will likely be only PC3200. I seriously doubt AMD would want PC3200 in any of their machines!
Um.. I think you forget AMD is still using PC2100 and PC2700. PC3200 is still high-end for AMD.
Anyways, legacy SDRAM never made it to PC166. Sure, you could overclock parts to 150 or 166 and sell it as such, but you could do the same with DDR.
Also, DDR controllers, if I remember correctly, seriously sucked. How long did it take before a decent controller hit the scene? Legacy SDRAM controllers had been on the market for years and the technology was well understood. In other words, it's a comparison of mature vs. new technology.
I have a feeling the reason interleaved legacy SDRAM did about as well as the DDR counterparts is because internally, DDR is interleaved into 2 banks minimum.
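Here's a toy model of why that interleaving pays off (the cycle counts are made up, not real DRAM timing; it just illustrates the round-robin idea):

```python
# Toy model of bank interleaving: each bank is "busy" for some cycles
# after supplying a word, so a single bank stalls between words while
# two interleaved banks can hand off to each other back to back.
# All numbers are invented for illustration.

def cycles_to_read(n_words, busy, banks):
    ready = [0] * banks               # cycle at which each bank is next free
    t = 0                             # current cycle
    for i in range(n_words):
        b = i % banks                 # round-robin bank selection
        t = max(t + 1, ready[b] + 1)  # wait for the bank, 1 cycle per word
        ready[b] = t + busy           # bank must recover before reuse
    return t

print(cycles_to_read(8, busy=1, banks=1))  # -> 15: stalls every other cycle
print(cycles_to_read(8, busy=1, banks=2))  # -> 8: one word per cycle
```

With two internal banks, a word can stream out every cycle where a single bank would stall, which is exactly what externally interleaved SDR was already doing.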
Raw bandwidth is good for processes that demand bandwidth, like multimedia processing, but for general performance computing it's not much of a benefit. Dial-A-Rate memory would span the technologies to bring out the best of each as necessary.
Well, stupid me, in that case, higher density hard drives must not improve performance at all.
There are two ways to increase microprocessor performance: reduce latency or increase bandwidth. Seeing as how latency isn't improving much anytime soon (on the order of 10%/year for the past twenty years), it looks like bandwidth is the way to go.
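A quick back-of-the-envelope model (the latency and bandwidth numbers are invented, purely illustrative) shows how the payoff of each depends on the access pattern:

```python
# Time for one memory transfer = fixed latency + streaming time.
# The 50 ns latency and 1-2 GB/s bandwidths are made-up illustration
# values, not real DRAM figures.

def transfer_ns(nbytes, latency_ns, bw_gb_s):
    """1 GB/s moves 1 byte per ns, so streaming time is nbytes / bw."""
    return latency_ns + nbytes / bw_gb_s

cache_line = 64      # small, latency-bound access
burst = 64 * 1024    # large, bandwidth-bound multimedia-style access

for bw in (1.0, 2.0):
    print(f"{bw} GB/s: line {transfer_ns(cache_line, 50, bw):.0f} ns, "
          f"burst {transfer_ns(burst, 50, bw):.0f} ns")
# Doubling bandwidth nearly halves the big burst (65586 -> 32818 ns) but
# only trims the cache-line fetch from 114 ns to 82 ns: fixed latency
# dominates small accesses, which is most of what general code does.
```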
I see a couple of major problems with your Dial-A-Rate memory idea. First off, very little benefit for a lot more cost. The percentage gains are at best on the order of 10% in best-case scenarios, but the extra logic on the memory alone is going to cost money and, more importantly, add latency.
The second problem involves system design. I don't recall any CPU that can specify which memory type it wants to run. Doing so destroys compatibility with other systems and future memory types. You'd end up with early generations running only a couple of memory types while new CPUs mop the floor with new memory formats. It's like supporting old ISA slots, with about as much benefit.