There are two main reasons. The first is that I very rarely got asked anything about power loss protection in client drives, so I didn't see the readership having much interest in the topic.
But isn't it the task of one of the best review sites - which Anandtech is - to show leadership and educate your readers about what is truly important when deciding which SSD to buy?
If so, don't you think the differences in reliability - which can be huge, particularly for SSDs - would be important enough to write about in more detail? I am sure that if high-quality review sites like Anandtech spent more time on this topic, it would raise awareness of the issue and likely spark more interest in it as well. Simply posting benchmarks in which subtle performance differences are magnified seems like playing Lemmings to me.
I mean, if even I do not know about many things regarding this topic - and I do think I know at least a little bit about SSDs - how do you expect your readers to know? Don't you think they deserve to know more about it? In this thread alone, you have provided me with valuable new insights and corrected me on a couple of things. Such information would be very valuable to include in review articles, I would argue. It would certainly distinguish Anandtech from the plethora of other review sites that focus solely on performance - which for SSDs is not all that interesting in my view.
Micron/Crucial was a special case because I noticed some inconsistencies in their marketing materials
Can you indicate what exactly the inconsistency was? I have read the material, but I never saw them claim that they provided full protection for the DRAM buffer cache.
Just on a related note, I think the information that Intel provides is also ambiguous at best. Take, for example, the claims Intel makes regarding the
Enhanced power-loss protection provided by S3700:
The drive saves all cached data in the process of being written before shutting down, thereby minimizing potential data loss.
It is confirmed; all acknowledged writes will be properly written to the SSD.
Regardless, to me it is not clear whether this means:
1) only writes that are actually in the process of being written are protected - meaning the rest of the DRAM buffer cache is lost;
2) only writes 'confirmed' by a FLUSH CACHE command are protected;
3) all writes that reach the DRAM buffer are protected; i.e. loss of power will not compromise any buffered writes - including those not yet written and not in the actual process of being written.
Those are all very different interpretations of what Intel claims. The Intel 320 falls into the third (best) category, but whether the S3700 offers the same kind of protection is not clear to me; Intel's wording is ambiguous. The same can be said about the documentation Crucial has provided regarding their power-loss protection - at least the information that I have read. Nowhere did they specify that all buffered writes will survive an unexpected power loss.
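To make the distinction concrete from the application's point of view, here is a minimal sketch (POSIX semantics assumed; the file name and data are hypothetical, purely for illustration):

    import os

    fd = os.open("journal.dat", os.O_WRONLY | os.O_CREAT, 0o644)

    os.write(fd, b"record A")   # lands in the OS page cache / drive DRAM buffer
    os.fsync(fd)                # the OS flushes and issues FLUSH CACHE to the drive;
                                # under interpretation 2 only data flushed up to
                                # this point is guaranteed to survive power loss

    os.write(fd, b"record B")   # buffered again, no flush issued yet:
                                # interpretation 1/2 -> lost on sudden power failure
                                # interpretation 3 (Intel 320 style) -> the drive's
                                # capacitors hold it up long enough to reach NAND
    os.close(fd)

Under interpretation 2 only 'record A' is safe after the fsync; under interpretation 3 even 'record B' would survive, which is exactly the property I would like vendors to state explicitly.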
If this is true, wouldn't it be fair to say Crucial is not entirely to blame, since it is also a matter of assumptions being made rather than explicit, unambiguous claims that turned out to be untrue? If they had indeed explicitly claimed power-loss protection for all buffered writes, they would be at fault. But in this case it appears to me that Crucial/Micron was at most unclear, ambiguous or incomplete about the protection offered. The same can be said about Intel.
Timing is relative to previous IO, so the idle time between every IO is the same for every SSD regardless of how fast the drive is. Idle times are truncated to 25ms to speed up the playback process.
Sure, the argument can be made that because testing is done consistently across the drives, it is a fair comparison. But it could still be that SSD A is impacted more by this kind of testing than SSD B, and that testing this way does not tally with real-world scenarios. If true, such benchmarks are unrealistic and defeat the whole purpose of benchmarking.
At the very least, one should periodically test whether the trace&replay benchmarks do indeed tally with real-world performance - for example, by comparing the relative performance differences from the trace&replay tests with a stopwatch test that measures actual performance differences in typical usage scenarios. This way it can be verified whether the testing methods give an unfair - i.e. unrealistic - advantage to one SSD or another.
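For reference, the replay mechanism described above boils down to something like this (a minimal sketch; the trace format and the dev handle are hypothetical, not AnandTech's actual tooling):

    import time

    MAX_IDLE = 0.025  # idle times capped at 25 ms, as described above

    # hypothetical trace format: (idle_before_s, offset, size, is_write)
    def replay(trace, dev):
        for idle_before, offset, size, is_write in trace:
            # idle time is taken relative to the previous IO, so every drive
            # gets the same think time regardless of how fast it finished...
            time.sleep(min(idle_before, MAX_IDLE))  # ...but never more than 25 ms
            dev.seek(offset)
            if is_write:
                dev.write(b"\0" * size)
            else:
                dev.read(size)

My concern is that capping every idle period at 25 ms removes the recovery time a drive with slower background garbage collection would get in real use, which is exactly the kind of bias a periodic stopwatch cross-check could expose.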
It's true that The Destroyer isn't a typical consumer workload, but it was never designed to be one (and it has never been mentioned as one). The Heavy and Light workloads cater that niche already, so the purpose of The Destroyer is to illustrate a very IO intensive workload that a professional or enthusiast may have.
Does 'The Destroyer' indeed match workloads that are typical for a professional or enthusiast user? I have my doubts about that. It appears to me to be the most severe test one could construct, and it probably does not match any typical workload except for extreme corner cases.
If true, one might question why these benchmarks are performed at all. I mean, isn't the goal of a benchmark to match realistic workloads so that it gives a rough measure of what performance would be like in real circumstances? If that is the aim, do you think this test and the other, less extreme trace&replay benchmarks are up to the task?
I explained why relevant real-world benchmarks are practically impossible to create in
this article. The short version is that there's no reliable way to measure real-world performance in multitasking environment, which is the only way to really put pressure on a modern SSD.
Well, I think the question shouldn't be 'how to put pressure on a modern SSD' but rather how SSDs perform in real circumstances, and whether there is a noticeable difference in performance that can be experienced in real life.
If the differences in performance are only measurable in laboratory situations, then such comparisons are not at all interesting - except perhaps for academic consideration. Instead, I believe reviews ought to focus more on other areas, such as reliability (levels of protection, such as power-loss protection, journalling and parity-based bit correction). Price would also be a valid area of focus.
As it stands, I see that almost all review sites focus almost exclusively on performance. Anandtech does provide some details about the rest and is actually an exception to the rule, but even so, the attention given to non-performance-related issues is sketchy at best. A whole page dedicated to reliability and protection mechanisms would make much more sense to me.
The result of all the focus on performance is that SSD vendors chase higher numbers, higher MB/s. This has caused many consumers to base their buying decisions on the SSDs with the higher specs. This in turn caused honest products such as the Intel X25-M and 320 to be overrun by much less reliable competitors that used an alpha-quality Sandforce controller, or Indilinx, or other controllers that were immature at the time, full of firmware bugs, and lacking many of the protection mechanisms that Intel did provide. But they sold better, more than likely because they could claim high MB/s specs, which consumers greedily bought into.
I think this dark past of unreliable SSDs is very tragic, because many enthusiast consumers were motivated to spend a lot of money - relative to their income - on SSDs because they had heard they were so much better. But all they got was a sub-par OCZ SSD with issues I do not need to repeat to you, as you more than likely know what I am talking about. The disappointment with the unreliable nature of those products - in that era, anyway - must have been very sad, especially considering how popular they were at the time. And the problem was not the NAND but the controller - particularly the firmware. Consumers were little more than guinea pigs acting as beta testers. Sandforce is pretty reliable now, yes, but only after its buggy firmware had been exposed to so many consumers on a massive scale. That just isn't funny.
And I guess my point is that review sites have played a part in that. I remember Anandtech running benchmarks with compressible data, including an unpatched IOmeter whose so-called incompressible data was reused over and over. The Sandforce controller was able to defeat that benchmark through its deduplication technique. One might argue that was exactly the intention of Sandforce - the compression was there to defeat the ATTO benchmark and the deduplication to defeat IOmeter. The whole controller was designed to lure consumers into buying these products with high specs and good reviews, while in realistic circumstances they were often even slower than competitors that provided honest performance specifications - read: Intel. It was also around that time that Intel started offering Sandforce-based SSDs, and marketed them with zero-fill write specs as well. How low can you go... from one of the most reputable brands in the technology sector to being the equal of brands that committed deceit and treachery.
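To make the IOmeter point concrete: if a benchmark allocates one random buffer and reuses it for every write, a controller that deduplicates at, say, 4K granularity effectively sees a single unique block, no matter how 'incompressible' that buffer looks in isolation. A toy illustration of the principle (not Sandforce's actual algorithm):

    import hashlib, os

    BLOCK = 4096

    def unique_blocks(blocks):
        # how many distinct 4K blocks a dedup-capable controller would actually store
        return len({hashlib.sha1(b).digest() for b in blocks})

    # unpatched-IOmeter style: one "incompressible" buffer reused for every write
    reused = [os.urandom(BLOCK)] * 10000
    print(unique_blocks(reused))    # 1 -> almost nothing actually hits the NAND

    # genuinely unique data for every write
    fresh = [os.urandom(BLOCK) for _ in range(10000)]
    print(unique_blocks(fresh))     # 10000 -> no dedup benefit at all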
But wouldn't it be reasonable to say that review sites - including Anandtech - had a role in this dark past? Even today, the best review sites - Toms Hardware, Anandtech and The Tech Review - post review articles that I think are misleading for ordinary consumers. With so much focus on performance differences, and with extreme benchmarks used to find some difference between the available products, consumers will implicitly interpret them as if performance differences between modern SSDs had a noticeable real-life impact, which I would say is very debatable.
Instead, I would think that the best review sites this planet has to offer ought to provide meaningful insights to their readers, and educate them about the real situation: modern SSDs do not differ all that much in noticeable performance, while differences in reliability (not just failure rate but also protection against corruption) and in price are much more substantial, and it makes much more sense to look at those areas when considering which SSD to buy.
Besides performance, articles about endurance tests in which SSDs are written far past 0% MWI are frequently cited and shape the knowledge of ordinary consumers. But what is not mentioned in those articles is that data retention can become a serious problem, impacting the reliability of the drive. Sure, you can go far past 0% MWI, but the reliability of the SSD will suffer considerably. This effect is obscured by the fact that the test rewrites the flash pages over and over, so the NAND cells are refreshed very frequently and the retention clock is constantly reset - whereas JEDEC specifies at least 12 months of retention for consumer-grade SSDs. This important issue is not raised in those articles, and once again consumer opinion is based on incomplete and even misleading articles from the best review sites out there... again, not so funny! :|
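A quick back-of-the-envelope calculation shows why the retention clock never really runs during such a test (illustrative numbers, assumed purely for the sake of the example):

    capacity_gib = 256            # assumed drive capacity
    write_speed_mib_s = 400       # assumed sustained sequential write speed

    full_rewrite_minutes = capacity_gib * 1024 / write_speed_mib_s / 60
    print(f"full-drive rewrite roughly every {full_rewrite_minutes:.0f} minutes")
    # ~11 minutes: during the endurance test no cell holds data for more than a
    # fraction of an hour, while the JEDEC client target is on the order of 12
    # months of power-off retention at end of life. The test never exercises
    # the very weakness it is quietly creating.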
I am just curious what you think of this argument. Please do not perceive it as a personal attack or anything - because if there is any merit to it, the whole review industry shares the blame.