I think for Raptor Lake, minor refresh as it is, they won't do anything as drastic as actually removing DDR4 hardware. Instead, I expect them to either leave it supported, or officially remove support because they don't want to bother validating it on a "new" platform.
Something else to keep in mind: CPU cache structures have grown significantly in the last few years. Yes, first-access latency for DDR5 certainly looks higher, but with L3 caches at 20-32MB, is it really going to be as bad in production? On low-end systems you are already paying a penalty for the cheapness, and many will run DDR4 for a long time to come. Mobile systems already run memory with atrocious timings.
I just don't think that, in general day-to-day use scenarios, it'll ever be noticeable, and on things that lean hard on memory bandwidth, it'll be an advantage. Quite literally, it should only show up as an issue in software that's specifically written to thrash memory pages deliberately to expose first-word latency.
For context, when DDR4 hit desktops, AMD had Bristol Ridge, which had a less-than-ideal cache structure. Intel had Haswell and Broadwell, with 8MB on the 4770 and 6MB on the 57xx i7s. Current products have 4x the L3 at almost every level.
It's just not the same world.
That being said, I think some here are concerned that early DDR5 testing is not outperforming current DDR4 solutions.
You guys are going nuts over a system with an unknown configuration. Review systems with DDR5 will perform just fine.
Like eek2121 says, flipping out over memory latency on an ES 0000 chip is pointless. Rocket Lake with DDR4-3800 CL16 is now hitting 40ns in AIDA on Gear 1 after doing 50ns on launch firmware, never mind whatever it was hitting when that was still a 0000 ES.
The latency in that image has nothing to do with Gear 4 mode. In fact it's almost certainly either Gear 1 or 2 - more likely the latter.
And how do you know this?

Because I've seen an image with significantly higher latency already with JEDEC B or C (I forget which) memory. Also A0 silicon. It can go far higher than the 90ns in that image.
My guess is Intel won't remove DDR4 support entirely until 14th gen Meteor Lake, which will probably have a new socket too.
Gear 4 mode is a meme for ADL-S; basically it will only ever be used to chase memory frequency world-record memes.

Still, if DDR5 requires Gear 2 mode, that will result in a nasty penalty, just like on RKL. While CapFrameX is not the source I'd love to quote, they do test CPU-limited scenarios:
CapFrameX Frametime Analysis Software - frametimes capture and analysis tool compatible with most common 3D APIs (www.capframex.com)
Some nasty deficits for Gear 2 vs Gear 1 there; a 25% FPS deficit in a CPU-limited scenario is just bad. DDR5 might be starting at a huge disadvantage already if Gear 2 is engaged by default while DDR4 can run Gear 1 near 4000. Gear 4 is probably the next level of stupid.
As a whole I would just stick to Gear 1 with the lowest-latency memory you can get. We're not talking Rembrandt here; there's virtually no need to be chasing higher memory bandwidth figures.
Even with RMB there's the little issue of sync vs async IF. Unless this somehow changes with DDR5, if Cezanne has shown anything, it's that going async only makes things worse.

Eh, going async still nets you higher memory bandwidth, so the iGPU can benefit. I'm not saying for the CPU, ofc.
Last time, DDR3 support lasted from the 6th gen to the 9th gen; there actually was an H310 board with DDR3 that supported the i9-9900.
Is this due mainly to the memory subsystem?
I'd say with 90% certainty that it's the superior SSD. 10% uncertain because I don't know any of the other factors.

Speaking of memory subsystems... I have a Surface Laptop 3, Kaby Lake R, 256GB SSD, don't know the spec.
Also a 4770K desktop with an 860 EVO 1TB SSD.
Both running Windows 10. In everyday use, opening applications, the 4770K is MUCH snappier. Is this due mainly to the memory subsystem?
No. You're running a system built with frugal energy usage in mind. Power management and sleep states keep performance low, there's extra latency everywhere, and probably some intentional throttling as well.