Originally posted by: CTho9305
Presumably so whatever control logic there was could be reused across the versions. It would definitely reduce the design complexity. I'm speculating, but I would imagine the validation resources required also go way up if the different cache sizes have different latencies. You can't just assume that because your design works with an n-cycle latency it'll also work with an n+1 or n-1 cycle latency; it's naive to assume that changing latencies won't expose new bugs/quirks. Compare the L2 latencies for the various existing AMD CPUs - I'm pretty sure they don't change by size (any change would be by revision). You can also check die photos to see that the different sizes weren't all just down-bins, but were really separate tape-outs.
i find that hard to believe, because the very first thing done with a new process is to determine the size of the big cache. before any other studies are done, they figure out the absolute smallest bitcell they can create, then the cache size is set, and then all the other studies begin. the cache size sets the economies of scale and the price, and hence everything else as well. if the cache size was determined up front, why would they design in the option to make it larger, knowing that it would be economically undesirable?
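to make the "cache size sets the economics" point concrete, here's a rough back-of-the-envelope sketch. every number in it is an invented placeholder, not real process data; it just shows how the bitcell survey pins down cache area and per-die cost before anything else gets studied:

```python
# Illustrative only: all numbers are made-up placeholders, not real process
# data. The point is that once the bitcell area is known, the cache size
# largely fixes the die area and therefore the cost per die.

BITCELL_AREA_UM2 = 0.50              # assumed 6T SRAM bitcell area (um^2)
ARRAY_OVERHEAD   = 1.7               # assumed factor for decoders, sense amps, tags, ECC
CORE_AREA_MM2    = 30.0              # assumed area of everything that isn't the big cache
WAFER_COST_USD   = 4000.0            # assumed processed-wafer cost
WAFER_AREA_MM2   = 3.14159 * 150**2  # 300 mm wafer, ignoring edge loss and yield

def cache_area_mm2(cache_bytes: int) -> float:
    """Cache macro area from the bitcell area plus a flat overhead factor."""
    bits = cache_bytes * 8
    return bits * BITCELL_AREA_UM2 * ARRAY_OVERHEAD / 1e6   # um^2 -> mm^2

def rough_die_cost(cache_bytes: int) -> float:
    """Naive cost model: wafer cost split across perfect-yield dies."""
    die_area = CORE_AREA_MM2 + cache_area_mm2(cache_bytes)
    dies_per_wafer = WAFER_AREA_MM2 / die_area
    return WAFER_COST_USD / dies_per_wafer

for mb in (0.5, 1, 2):
    size = int(mb * 1024 * 1024)
    print(f"{mb} MB cache: {cache_area_mm2(size):5.1f} mm^2 of array, "
          f"~${rough_die_cost(size):.2f} per die")
```

doubling the cache roughly doubles the array area, which is why the size gets locked in so early and why leaving a "make it bigger later" option on the table makes little economic sense.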
that said, even if the bigger-cache option existed and never materialized, how hard is it to move the data strobe back to the pre-validated fetch time of 12 cycles? and since AMD actually does separate tape-outs for the chopped caches, there is no way they would force that pipeline to be systolic and then waste validation effort on multiple products.
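for what it's worth, here's a toy sketch of what "move the strobe back" amounts to in the scheduling logic. the 12-cycle figure is the one mentioned above; everything else is invented for illustration and is not AMD's design:

```python
# Toy model: the load-to-use latency is just a parameter fed to the
# dependent-op wakeup logic. Setting it back to the pre-validated 12 cycles
# is a one-line change here; the validation risk described in the quoted
# post lives in anything that implicitly hard-coded the old number.

from collections import defaultdict

L2_LOAD_TO_USE = 12   # cycles; the already-validated fetch latency

class ToyScheduler:
    def __init__(self, load_latency: int):
        self.load_latency = load_latency
        self.wakeups = defaultdict(list)   # cycle -> destination tags becoming ready

    def issue_load(self, dest_tag: str, cycle: int) -> None:
        # Dependents are speculatively woken load_latency cycles later, so
        # logic that assumed a different latency would replay or deadlock.
        self.wakeups[cycle + self.load_latency].append(dest_tag)

    def tick(self, cycle: int) -> list:
        return self.wakeups.pop(cycle, [])

sched = ToyScheduler(L2_LOAD_TO_USE)
sched.issue_load("r7", cycle=0)
for c in range(14):
    ready = sched.tick(c)
    if ready:
        print(f"cycle {c}: {ready} ready for dependent issue")
```

the parameter change itself is trivial; the expensive part is re-proving that nothing else baked in the old timing, which is exactly why re-using one validated latency across cache variants is attractive in the first place.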
and lastly, i find it incredibly difficult to believe that the PR mouthpiece of any company will ever tell the bare truth. it's never a matter of whether the bullshit detector reads, only how strongly. and that particular explanation for a common issue with shrinks definitely set off alarms.