I know you guys are doing a good job trying to provide good benchmarks and analyses. Please do not take my rationale personally in any way; it is not meant that way. But I did try to convey my point that virtually all storage benchmarks out there offer no true understanding of what the numbers mean or why the numbers are as they are. In many cases the numbers do not mean much at all, either because they represent a scenario that is extremely implausible for the majority of readers, or because of benchmark contamination. In the latter case, you no longer really know what the numbers mean. In many cases, storage articles do not give me enough information to judge whether the results were obtained properly. Access to the raw test data is rare.
In general, articles can present beautiful graphs, but the true aim should be to explain to the reader exactly what the numbers mean. For that, the reviewer - the person writing the article and performing the benchmarks - needs to be fully aware of why the numbers are as they are, and of which scenarios matter to the readers. A good rule of thumb is that you should already know roughly what the numbers will be before you hit the start-benchmark button. Only then, I think, is someone competent enough to write a proper article about storage performance.
I have not seen one truly amazing and correct RAID benchmark article to date. But there is evolution. In the early days, AnandTech and StorageReview dismissed RAID for the desktop because it had very little performance benefit in reality. But they used a PCI FakeRAID card, too small a stripe size, and a 31.5K-misaligned XP partition to prove their point. Later, Anand missed the ball on SandForce, but the level of knowledge kept increasing and very interesting articles were written. So there is progression in the quality of the information and the implicit advice offered, but it is not clear to readers which information to trust or take seriously.
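To make the misalignment point concrete, here is a rough sketch of the arithmetic. XP's 63-sector default partition start is documented behaviour; the 64 KiB stripe size is my assumption for illustration (any power-of-two stripe size gives the same problem):

```python
# Rough arithmetic behind the 31.5K misalignment mentioned above.
# Windows XP started the first partition at LBA 63 (a CHS-era convention),
# i.e. 63 * 512 = 32256 bytes = 31.5 KiB into the disk.
SECTOR = 512
xp_offset = 63 * SECTOR            # 32256 bytes = 31.5 KiB

# Assumed stripe size for illustration; 31.5 KiB is not a multiple of any
# power-of-two stripe size, so the conclusion holds for all of them.
stripe = 64 * 1024

print(xp_offset % stripe)          # 32256 -> the partition is misaligned

# A stripe-sized filesystem I/O at partition offset 0 therefore spans two
# stripes: it touches two disks on reads, or forces extra read-modify-write
# work on parity RAID, hurting the measured numbers.
first_stripe = xp_offset // stripe
last_stripe = (xp_offset + stripe - 1) // stripe
print(first_stripe, last_stripe)   # 0 1 -> the I/O crosses a stripe boundary
```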
So I guess my point is that people should be more cautious about simply accepting anything that is written - even by a reputable review site, which I consider AnandTech to be. No doubt the articles have added value, but benchmarks can be very misleading. There is a reason people are buying overpriced Samsung SSDs and unknown budget SSDs that are either mediocre or outright bad due to exotic bugs you will never read about. Worse, these budget SSDs are often only a few dollars cheaper, and sometimes the price is about even with good SSDs like the Crucial MX100. The Samsung 830 was cheap in the past, but Samsung then made a sneaky move: it introduced the high-performance, expensive 840 Pro to raise the acceptable price bar for the 840 and later the 840 EVO, which sold above the price point of Crucial's SSDs.
I think Crucial is still the best all-round choice for consumers. It has all the protections you want for a desktop: RAID5-like bit correction and power-loss capacitors, so it is resilient against bad pages (a high uBER) and against unexpected power loss. The read performance is excellent; the write performance depends on the capacity of the model - only the 512GB model has writes that nearly saturate SATA/600. This shows up as a huge difference in benchmarks, but in reality a small SSD will hold only the OS and a few games. When will you ever write 500MB/s to it? You might read at 500MB/s quite often, though, when starting a game after a reboot. So benchmarks are usually misleading in that they emphasize writing, while in reality reading is more common. The cause, I think, is that the captured I/O is replayed at maximum speed with inaccurately timed queue depths, while in reality 4K writes happen in bursts with idle time in between - something all modern SSDs handle quite well.
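The difference between replaying a trace at full speed and replaying it with the original think time can be sketched with a toy single-server queue model. All numbers here are assumptions for illustration, not measurements of any real drive:

```python
# Toy model, not a real benchmark: a drive that services one 4 KiB write in
# 100 microseconds. Replaying a captured trace back-to-back (benchmark style)
# piles up a deep queue; replaying the same writes with realistic idle time
# between them keeps the queue shallow - the case modern SSDs handle well.

def peak_outstanding(arrival_times, service_time=0.0001):
    """Return the peak number of in-flight I/Os for the given arrival times
    (in seconds), assuming the drive services one I/O at a time."""
    free_at = 0.0          # when the drive finishes its current I/O
    completions = []       # completion time of every submitted I/O
    peak = 0
    for t in sorted(arrival_times):
        start = max(t, free_at)
        free_at = start + service_time
        completions.append(free_at)
        outstanding = sum(1 for c in completions if c > t)
        peak = max(peak, outstanding)
    return peak

# Benchmark-style replay: 100 writes issued as fast as possible.
print(peak_outstanding([0.0] * 100))                      # 100: deep queue

# Realistic burst: the same 100 writes spaced 1 ms apart.
print(peak_outstanding([i * 0.001 for i in range(100)]))  # 1: no queue forms
```

The workload is identical in both runs; only the timing differs, yet the effective queue depth - and therefore the number a benchmark reports - is completely different.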