Samsung 950 Pro vs Crucial BX100 in RAID-0


CiPHER

Senior member
Mar 5, 2015
226
1
36
But why? It has the same controller as the Crucial M550/MX100/MX200 but without power-safe capacitors as far as I know, and it performs about the same. But it is much more expensive. Why on earth would you want that?

Besides, if you buy two smaller, cheaper SSDs and RAID0 them, you get roughly double the speed (except for the limitations described in this thread) for free. Plus, you can always break the RAID0 and use them separately in two systems. I like that a lot more than an overpriced, overclocked SSD. Sandisk is always overpriced.
 

readers

Member
Oct 29, 2013
93
0
0
But why? It has the same controller as the Crucial M550/MX100/MX200 but without power-safe capacitors as far as I know, and it performs about the same. But it is much more expensive. Why on earth would you want that?

Besides, if you buy two smaller, cheaper SSDs and RAID0 them, you get roughly double the speed (except for the limitations described in this thread) for free. Plus, you can always break the RAID0 and use them separately in two systems. I like that a lot more than an overpriced, overclocked SSD. Sandisk is always overpriced.

I need that much space to hold all my programs (including lots of big-ass games). So no PCIe SSD.

Also, for normal desktop usage I don't think we have workloads that really take advantage of RAID 0 on an SSD. And more importantly, I care about reads more than writes. So RAID 0 = no go.

I still don't see how Sandisk is overpriced when the price per GB is close and the Extreme Pro outperforms those you listed in most tests.

The OP asked what you would pick, and everyone will have a different answer.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
I still don't see how Sandisk is overpriced when the price per GB is close and the Extreme Pro outperforms those you listed in most tests.

Thanks, I missed that. Another somewhat bizarre claim. The Extreme Pro is competitive with high-end offerings from Samsung, and the Ultra II is an excellent budget SSD.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Maybe prices are different in your area, but here are prices in mine:

Sandisk Extreme Pro 960GB - EUR 446,-
Crucial MX200 1TB - EUR 335,-

That is quite a significant price difference, but again, the difference might be smaller in your area.

But still, they perform about the same because they have the same controller, yet the Sandisk does not have power-loss capacitors, so it should be the cheaper of the two. Why bother with Sandisk then?

If you require a lot of space, RAID0 does the trick nicely. Even if it doesn't provide much performance benefit, you still get to stack storage space, and it is basically free since a smaller SSD costs about half the price. So any argument that your SSD might be faster gets washed away when using RAID0. But suddenly, when people hear RAID0, they say that the added performance will not matter for typical desktop workloads. The same argument applies to performance differences between SSDs, yet for those people are willing to pay a significant amount of additional money, while RAID0 is basically free. I do not understand that kind of reasoning.
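
To make the "roughly double the speed" part concrete, here is a toy sketch of how RAID0 striping spreads logical blocks over two drives; illustrative only, and the 128 KiB stripe size is just an assumed example, not any controller's default:

```python
# Toy model of RAID0 striping: logical blocks alternate between the two
# member drives in fixed-size stripes, so a large sequential transfer
# keeps both drives busy at once.
STRIPE_BLOCKS = (128 * 1024) // 512   # blocks per stripe (512 B blocks)
NUM_DRIVES = 2

def locate(lba: int) -> tuple[int, int]:
    """Map a logical block address to (member drive, block on that drive)."""
    stripe, offset = divmod(lba, STRIPE_BLOCKS)
    drive = stripe % NUM_DRIVES
    return drive, (stripe // NUM_DRIVES) * STRIPE_BLOCKS + offset

# A 1 MiB sequential read touches 8 stripes, 4 per drive, so both
# drives work in parallel and raw sequential throughput roughly doubles:
touched = [locate(lba)[0] for lba in range(0, 2048, STRIPE_BLOCKS)]
print(touched)  # [0, 1, 0, 1, 0, 1, 0, 1]
```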
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
But still, they perform about the same because they have the same controller, yet the Sandisk does not have power-loss capacitors, so it should be the cheaper of the two. Why bother with Sandisk then?

The same controller means nothing; it's all about the firmware. Currently Marvell only provides the silicon, and its customers have to build their own firmware, which creates rather significant differences between Marvell-powered SSDs. The Extreme Pro performs much better under heavy IO workloads, which require good IO consistency.

http://www.anandtech.com/bench/product/1454?vs=1488

As for the capacitors, the MX200 only has a few low-capacitance ceramic capacitors, which don't cost much. That's not the only way to protect lower pages from corruption, though. One way is to create a backup of the lower page data before writing to the upper page, so the data in the lower page is not lost if an unexpected power loss occurs during an upper page program. Since a lower page program is essentially an SLC program, the endurance impact of the backup is practically negligible.
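
Schematically, the backup scheme works something like this; a minimal illustrative sketch of the idea, not SanDisk's or Micron's actual firmware:

```python
# Illustrative pseudologic for the lower-page backup scheme described
# above (not any vendor's actual firmware). In MLC NAND the lower page
# is programmed first (SLC-like); programming the upper page later can
# corrupt the lower page if power is lost mid-program.
def program_upper_page(cell, upper_data, backup_block):
    backup_block.append(cell["lower"])   # 1) back up the lower page first
    cell["upper"] = upper_data           # 2) program the upper page; a power
                                         #    loss here can no longer destroy
                                         #    the lower page data for good
    backup_block.pop()                   # 3) success: recycle the backup

def recover_after_power_loss(cell, backup_block):
    # On the next power-up, restore the lower page from the backup if an
    # upper-page program was interrupted.
    if backup_block:
        cell["lower"] = backup_block.pop()
```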
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Alright, I just quickly checked some benchmarks and they do not seem to vary that much. Like this one: http://www.tweaktown.com/reviews/64...sd-review-sata-iii-to-the-extreme/index6.html.

The chance that you are going to notice anything from these subtle differences in real-life scenarios is almost negligible. Besides, most reviews use benchmarks that span the entire LBA range, so overprovisioning becomes critical. SSDs with very little OP that rely on TRIM to gain spare space will be impacted more by these benchmarks than is reasonable in real-life scenarios.

And I still find it amusing that people focus on the tiny performance differences between SSDs, but when RAID0 striping of multiple SATA/600 drives can seriously elevate the raw numbers, people suddenly fall back to the "it doesn't matter" argument. Which is fine, but why do they not use the same argument when discussing performance differences between SATA/600 SSDs, especially ones using the same controller? At least RAID0 provides a good increase in almost all performance specs, which may translate to noticeable performance gains in some areas, such as loading games. Best of all, RAID0 is virtually free, since a smaller SSD usually costs about half the price of a bigger one. But for the Sandisk Extreme, people are willing to spend up to 33% extra?! Doesn't make much sense to me.

As for the capacitors, the review on Anandtech says this:

The indirection/page table is stored in nCache, which SanDisk believes gives it a better chance of maintaining the integrity of that table in the event of sudden power loss (since writes to nCache are quicker than to the MLC portion of the NAND). The Extreme II itself doesn’t have any capacitor based power loss data protection.
This leads me to believe the Sandisk SSDs have a window of opportunity to corrupt themselves, possibly trashing all data when the mapping tables become inconsistent - unless they use journalling like we discussed before. Reading this paragraph, it appears to me Sandisk is relying on the window of opportunity being small enough because they write the data as SLC - like the MX200 already does, by the way, though that drive has the additional protection of power-safe capacitors.

I really do not know why one would consider the Sandisk SSD over the Crucial one. The Crucial one appears to me slightly superior, for a vastly lower price. Sandisk products have been marketed well, but I think they are overpriced. More than likely you pay for the brand instead of technical differences.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Alright, I just quickly checked some benchmarks and they do not seem to vary that much. Like this one: http://www.tweaktown.com/reviews/64...sd-review-sata-iii-to-the-extreme/index6.html.

The chance that you are going to notice anything from these subtle differences in real-life scenarios is almost negligible. Besides, most reviews use benchmarks that span the entire LBA range, so overprovisioning becomes critical. SSDs with very little OP that rely on TRIM to gain spare space will be impacted more by these benchmarks than is reasonable in real-life scenarios.

I wouldn't consider PCMark Vantage a valid SSD benchmark in 2015. It's Windows Vista based, and frankly the workload is too light to show anything other than marginal differences between modern SSDs. For workloads that light, any modern SSD will do, as the bottleneck in most use cases is going to be user input, not the hardware.

If you look at PCMark 8 or AnandTech's The Destroyer trace, you can see some rather significant differences between the Extreme Pro and the MX200. For a user with an IO intensive workload, the differences in benchmarks would also translate to the real world, at least to some degree.

As for the capacitors, the review on Anandtech says this:

This leads me to believe the Sandisk SSDs have a window of opportunity to corrupt themselves, possibly trashing all data when the mapping tables become inconsistent - unless they use journalling like we discussed before. Reading this paragraph, it appears to me Sandisk is relying on the window of opportunity being small enough because they write the data as SLC - like the MX200 already does, by the way, though that drive has the additional protection of power-safe capacitors.

SanDisk uses journaling; they even have a full whitepaper describing power loss protection implementations in their different drives: http://www.sandisk.com/Assets/docs/Unexpected_Power_Loss_Protection_Final.pdf
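
For readers wondering what journaling buys here: the drive can log each mapping-table update before applying it, so after a power loss it replays the log and simply discards a torn final entry. A minimal sketch of the general idea, not SanDisk's actual implementation (the checksum detail is an assumption of the sketch):

```python
import zlib

journal = []    # append-only log (lives on NAND in a real drive)
mapping = {}    # logical page -> physical page table (lives in DRAM)

def journaled_update(lpn: int, ppn: int) -> None:
    entry = (lpn, ppn)
    # Log the intended update plus a checksum BEFORE touching the table.
    journal.append((entry, zlib.crc32(repr(entry).encode())))
    mapping[lpn] = ppn

def rebuild_after_power_loss() -> dict:
    # Replay only entries whose checksum verifies; a torn final entry is
    # dropped, so the rebuilt table is consistent (if slightly stale).
    table = {}
    for entry, crc in journal:
        if zlib.crc32(repr(entry).encode()) == crc:
            table[entry[0]] = entry[1]
    return table
```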

Also, only the 250GB MX200 (and 500GB M.2/mSATA) does SLC caching. The rest of the SKUs operate in MLC-only mode.

I really do not know why one would consider the Sandisk SSD over the Crucial one. The Crucial one appears to me slightly superior, for a vastly lower price. Sandisk products have been marketed well, but I think they are overpriced. More than likely you pay for the brand instead of technical differences.

I'm not recommending either (or any drive for that matter) because I'm now paid to be biased, so it wouldn't be fair for me to make any recommendations. I'm just pointing out facts (and some opinions) and letting people make up their own minds.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Very interesting and useful document, Hellhammer! However, do you have any idea why such information is rarely (read: never) included in reviews like those on Anandtech? The quote I mentioned in my previous post from the Anandtech review does seem to imply that a window of opportunity exists to cause corruption, as is the case for many SSDs. If this does not apply to Sandisk, despite the lack of capacitors, that would seem to be very important information to include in the review.

As for the argument that some benchmarks do manage to find more substantial differences in performance between SSDs, I highly doubt these differences would be as big when overprovisioning is used. That would better tally with real-life scenarios, where users have not written their SSD full to below 1% free space, but instead have plenty of free space; thanks to TRIM, this space can be used internally by the SSD as spare space, resulting in less wear and better performance.

Even if this weren't the case, the name 'The Destroyer' makes it pretty evident that the workloads presented in this benchmark are not typical consumer workloads at all. And most trace&replay benchmarks, including those on Anandtech I presume, do not use the original timing but instead replay at maximum speed or with a static timer between I/Os. This leaves the SSD less time to spend on garbage collection, which becomes more of an issue if the drive has little OP to begin with. The MX200 does not have much OP at all, because the RAID5 bitcorrection is included in the little space it has - little more than the ~6.9% difference between GB and GiB, which has to cover everything from internal metadata like the mapping tables to reserve pages to the RAID5 bitcorrection; what is left is just a tiny amount of spare space. So overprovisioning such an SSD would give much more consistent performance in dramatically unrealistic benchmarks such as 'The Destroyer'.
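
For what it's worth, that GB/GiB gap is simple arithmetic, assuming (as is common, though not universal) a power-of-two GiB raw NAND capacity behind a decimal advertised capacity:

```python
# Inherent gross overprovisioning of a drive that has N GiB of raw NAND
# but exposes N GB (decimal) to the user -- the same ratio for any N.
inherent_op = 1 - 10**9 / 2**30
print(f"{inherent_op:.1%}")   # ~6.9%; metadata, reserve blocks and the
                              # RAID5-style parity all have to fit in here,
                              # leaving only a sliver of true spare space.
```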

At the very least, it would be good to compare such trace&replay benchmarks with REAL benchmarks, where the SSD is secure erased and then filled not completely but to 60-80%, which is far more realistic, and then perform common tasks such as starting a game, writing a video-editing file, or doing some random I/O in a VM image container. I am pretty sure the differences between SSDs would be very marginal under such conditions. If true, the benchmarks used in reviews, including on Anandtech, do not tally at all with realistic workloads typical of desktop users, and as such are not very useful and quite likely even misleading.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Very interesting and useful document, Hellhammer! However, do you have any idea why such information is rarely (read: never) included in reviews like those on Anandtech? The quote I mentioned in my previous post from the Anandtech review does seem to imply that a window of opportunity exists to cause corruption, as is the case for many SSDs. If this does not apply to Sandisk, despite the lack of capacitors, that would seem to be very important information to include in the review.

There are two main reasons. The first is that I very rarely got asked anything about power loss protection in client drives, so I didn't see the readership having much interest in the topic. Micron/Crucial was a special case because I noticed some inconsistencies in their marketing materials (some saying just PLP and others PLP for data-at-rest), so I asked what's the deal and they explained the details, although even that didn't create much reader discussion around the topic.

The second is that not all manufacturers are willing to go public on such details. Some manufacturers are very protective about their technologies, which is partially due to patent reasons. SSDs are like black boxes - it's very difficult to reverse engineer one because everything is hidden in the firmware. However, if you make a statement in public (even indirectly through media), it may lead to an investigation if another company owns a relevant patent (FYI, SanDisk alone has over 5,000 patents). It's better to stay quiet unless there's an urgent need for the information, which hasn't been the case with PLP.

Even if this weren't the case, the name 'The Destroyer' makes it pretty evident that the workloads presented in this benchmark are not typical consumer workloads at all. And most trace&replay benchmarks, including those on Anandtech I presume, do not use the original timing but instead replay at maximum speed or with a static timer between I/Os. This leaves the SSD less time to spend on garbage collection, which becomes more of an issue if the drive has little OP to begin with. The MX200 does not have much OP at all, because the RAID5 bitcorrection is included in the little space it has - little more than the ~6.9% difference between GB and GiB, which has to cover everything from internal metadata like the mapping tables to reserve pages to the RAID5 bitcorrection; what is left is just a tiny amount of spare space. So overprovisioning such an SSD would give much more consistent performance in dramatically unrealistic benchmarks such as 'The Destroyer'.

Timing is relative to previous IO, so the idle time between every IO is the same for every SSD regardless of how fast the drive is. Idle times are truncated to 25ms to speed up the playback process.
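
In sketch form, replay with relative timing looks something like this (hypothetical trace format, nothing AnandTech-specific; the 25 ms cap is the truncation mentioned above):

```python
import time

IDLE_CAP = 0.025   # idle times are truncated to 25 ms during playback

def replay(trace, issue_io):
    # Each trace entry: (idle_gap_since_previous_io_in_seconds, io).
    # The gap is relative to the previous IO's completion, so every SSD
    # sees the same idle time no matter how fast it finished that IO.
    for idle_gap, io in trace:
        time.sleep(min(idle_gap, IDLE_CAP))
        issue_io(io)
```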

It's true that The Destroyer isn't a typical consumer workload, but it was never designed to be one (and it has never been presented as one). The Heavy and Light workloads cater to that niche already, so the purpose of The Destroyer is to illustrate a very IO intensive workload that a professional or enthusiast may have. In the end, those are the people who buy high-end SSDs, because they can make use of the higher performance.

At the very least, it would be good to compare such trace&replay benchmarks with REAL benchmarks, where the SSD is secure erased and then filled not completely but to 60-80%, which is far more realistic, and then perform common tasks such as starting a game, writing a video-editing file, or doing some random I/O in a VM image container. I am pretty sure the differences between SSDs would be very marginal under such conditions. If true, the benchmarks used in reviews, including on Anandtech, do not tally at all with realistic workloads typical of desktop users, and as such are not very useful and quite likely even misleading.

I explained why relevant real-world benchmarks are practically impossible to create in this article. The short version is that there's no reliable way to measure real-world performance in a multitasking environment, which is the only way to really put pressure on a modern SSD. Basically, the closer you get to the real world, the simpler the test has to be in order to be reliable and reproducible, but that's not interesting if you do more than launch Chrome and play games on your computer.
 

readers

Member
Oct 29, 2013
93
0
0
Maybe prices are different in your area, but here are prices in mine:

Sandisk Extreme Pro 960GB - EUR 446,-
Crucial MX200 1TB - EUR 335,-

That is quite a significant price difference, but again, the difference might be smaller in your area.

But still, they perform about the same because they have the same controller, yet the Sandisk does not have power-loss capacitors, so it should be the cheaper of the two. Why bother with Sandisk then?

If you require a lot of space, RAID0 does the trick nicely. Even if it doesn't provide much performance benefit, you still get to stack storage space, and it is basically free since a smaller SSD costs about half the price. So any argument that your SSD might be faster gets washed away when using RAID0. But suddenly, when people hear RAID0, they say that the added performance will not matter for typical desktop workloads. The same argument applies to performance differences between SSDs, yet for those people are willing to pay a significant amount of additional money, while RAID0 is basically free. I do not understand that kind of reasoning.

They were both 500 CAD when I bought my Extreme Pro 960GB. It might be different now.

RAID0 is something I can add later; I don't see the point of filling my limited SSD mounts with smaller drives. Maybe I will pick up another Extreme Pro 960GB for something like sub-300 in a year or two, but I'm not sure I want to bother with RAID and reinstalling all my programs.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
If you want the most reliable consumer-grade SSD, the Intel 320 is still the classic that is almost impossible to beat.
I'm pretty sure the 8MB bug never got 100% fixed in the 320 series. Even after the firmware update I remember reading reports on various forums about people still getting hit by it. No further firmware fixes were published from what I recall.

Also, I do believe suggesting to people that the most reliable consumer drive in 2015 is a 3Gbps product from 2011 which you can't even buy anymore is a bit silly.

No offense, and while you are clearly a clever person, I do think you're doing a lot of scaremongering. I estimate that I have used or managed over 50 SSD-based systems, primarily with Samsung drives plus a couple of Intel and Crucial ones, and I have never had one corrupt itself, so I think you're making this out to be far more of a problem than it actually is.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Hellhammer, your posts are golden. Not surprising for a former Anandtech reviewer, but still. I miss your articles on SSDs.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
There are two main reasons. The first is that I very rarely got asked anything about power loss protection in client drives, so I didn't see the readership having much interest in the topic.
But isn't it the task of one of the best review sites - which Anandtech is - to show leadership and educate its readers about what is truly important when considering which SSD to buy?

If so, don't you think the differences in reliability - which can be huge, particularly for SSDs - would be important enough to write about in more detail? I am sure that if high-quality review sites like Anandtech spent more time on this topic, it would raise awareness about the issue and likely spark more interest in it as well. Simply posting benchmarks in which the subtle performance differences are magnified seems like playing Lemmings to me.

I mean, if even I do not know about many things regarding this topic - and I do think I know at least a little bit about SSDs - how do you expect your readers to know? Don't you think they deserve to know more about it? In this thread alone, you have provided me with valuable new insights and corrected me on a couple of things. Such information would be very valuable to include in review articles, I would argue. Certainly it would distinguish Anandtech from the plethora of other review sites that focus solely on performance - which for SSDs is not all that interesting in my view.

Micron/Crucial was a special case because I noticed some inconsistencies in their marketing materials
Can you indicate what exactly the inconsistency was? I have read the material, but I never saw the claim that they provided full protection for the DRAM buffer cache.

Just on a related note, I think the information that Intel provides is also ambiguous at best. For example, consider the claims Intel makes regarding the enhanced power-loss protection provided by the S3700:

The drive saves all cached data in the process of being written before shutting down, thereby minimizing potential data loss.

It is confirmed; all acknowledged writes will be properly written to the SSD.

Regardless, to me it is not clear whether this means:

1) only writes that are actually in the process of being written are protected - thus the rest of the DRAM buffer cache is lost.

2) only writes 'confirmed' by a FLUSH CACHE command are protected.

3) all writes that reach the DRAM buffer are protected; i.e. loss of power will ensure protection of all buffered writes, including those not yet written and not in the actual process of being written.

Those are all very different interpretations of what Intel claims. The Intel 320 falls in the third (best) category, but whether the S3700 offers the same kind of protection is not clear to me. Their wording is ambiguous. The same can be said about the documentation Crucial has published regarding their power-loss protection - at least the material that I have read. Nowhere do they specify that all buffered writes will survive an unexpected power loss.
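
From the host's point of view, interpretation 2 corresponds to the durability contract applications already code against: nothing is guaranteed until a flush completes. A minimal sketch:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    # Under interpretation 2, data may sit in the drive's volatile cache
    # until a flush (reaching the drive as an ATA FLUSH CACHE) returns.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)   # acknowledged, but possibly still volatile
        os.fsync(fd)         # only after this returns may the application
                             # assume the data survives a power loss
    finally:
        os.close(fd)
```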

If this is true, wouldn't it be fair to say Crucial is not entirely to blame, since it is also a matter of assumptions being made rather than explicit, unambiguous claims that turned out to be untrue? If they had indeed explicitly claimed power-loss protection for all buffered writes, then they would be at fault. But in this case it appears to me that Crucial/Micron was at most unclear, ambiguous or incomplete about the offered protection. The same can be said about Intel.

Timing is relative to previous IO, so the idle time between every IO is the same for every SSD regardless of how fast the drive is. Idle times are truncated to 25ms to speed up the playback process.
Sure, the argument can be made that because testing is done consistently across the drives, it is a fair comparison. But it could still be that SSD A is impacted more by this kind of testing than SSD B, and that testing this way does not tally with real-world scenarios. If true, such benchmarks are unrealistic and defeat the whole purpose of benchmarking.

At the very least, one should periodically check whether the trace&replay benchmarks do indeed tally with real-world performance - for example, by comparing the relative performance differences from the trace&replay test with a stopwatch test that measures actual performance differences in typical usage scenarios. This way it can be verified whether the testing method gives an unfair - i.e. unrealistic - advantage to one SSD or the other.
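
Such a stopwatch test is trivial to script; something along these lines, where the benchmark command is a hypothetical placeholder:

```python
import statistics
import subprocess
import time

def stopwatch(cmd: list[str], runs: int = 5) -> float:
    """Time a real task several times and return the median wall clock."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)   # e.g. a game in benchmark mode
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# e.g. print(stopwatch(["./some_game", "--load-level-and-exit"]))
```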

It's true that The Destroyer isn't a typical consumer workload, but it was never designed to be one (and it has never been presented as one). The Heavy and Light workloads cater to that niche already, so the purpose of The Destroyer is to illustrate a very IO intensive workload that a professional or enthusiast may have.
Does 'The Destroyer' indeed match workloads that are typical for a professional or enthusiast user? I have my doubts about that. It appears to me to be the most severe test one might construct, and it probably does not match any typical workload except for extreme corner cases.

If true, one might question why these benchmarks are performed at all. I mean, isn't the goal of a benchmark to match realistic workloads, so that it gives a rough measure of how performance in real circumstances would be? If that is the aim, do you think this test and the other, less extreme trace&replay benchmarks are up to the task?

I explained why relevant real-world benchmarks are practically impossible to create in this article. The short version is that there's no reliable way to measure real-world performance in a multitasking environment, which is the only way to really put pressure on a modern SSD.
Well, I think the question shouldn't be 'how to put pressure on a modern SSD' but rather how SSDs perform in real circumstances, and whether there is a difference in performance that can be noticed in real life.

If the differences in performance are only measurable in laboratory situations, then such comparisons are not at all interesting - perhaps only for academic consideration. Instead, I believe reviews ought to focus more on other areas such as reliability (levels of protection, such as power-loss protection, journalling and parity bitcorrection). Price would also be a valid area of focus.

Instead, I see that almost all review sites focus almost exclusively on performance. Anandtech does provide some details about the rest and is actually an exception to the rule. But still, the attention given to non-performance issues is sketchy at best. A whole page dedicated to reliability and protection mechanisms would seem much more logical to me.

The result of all the focus on performance is that SSD makers chase higher numbers, higher MB/s. This has caused many consumers to base their buying decisions on SSDs with higher specs. This in turn caused honest products such as the Intel X25-M and 320 to be overrun by much less reliable competitors that used an alpha-quality Sandforce controller, or Indilinx or other controllers that were immature at the time, full of firmware bugs and lacking many protection mechanisms that Intel did provide. But they sold better, more than likely because they could claim high MB/s specs, which consumers greedily bought into.

I think this dark past of unreliable SSDs is very tragic, because many enthusiast consumers were motivated to spend a lot of money - relative to their income - on SSDs because they heard they were so much better. But all they got was a sub-par OCZ SSD with issues I do not need to repeat to you, as you more than likely know what I am talking about. The disappointment over the unreliable nature of those products - in that era, anyway - must have been considerable, especially considering how popular they were at the time. And the problem was not the NAND but the controller - particularly the firmware. Consumers were little more than Guinea pigs acting as beta testers. Sandforce is pretty reliable now, yes, but only after massive, widespread exposure of their buggy firmware to so many consumers. That just isn't funny.

And I guess my point is that review sites have played a part in that. I remember Anandtech running benchmarks with compressible data, including an unpatched IOmeter whose so-called incompressible data was being reused over and over. The Sandforce controller was able to defeat that benchmark with its deduplication technique. One might argue that was exactly Sandforce's intention - the compression was there to defeat the ATTO benchmark and the deduplication to defeat IOmeter. The whole controller was designed to lure consumers into buying their products with high specs and good reviews, while in realistic circumstances they were often even slower than competitors that provided honest performance specifications - read: Intel. It was also around that time that Intel started selling Sandforce-based SSDs, and marketed them with zero-write specs as well. How low can you go... from one of the most reputable brands in the technology sector to being the equal of brands that committed deceit and treachery.

But wouldn't it be reasonable to say that review sites - including Anandtech - had a role in this dark past? Even today, the best review sites - Tom's Hardware, Anandtech and The Tech Report - post review articles that I think are misleading for ordinary consumers. With so much focus on performance differences, and with extreme benchmarks used to find some difference between the available products, consumers will implicitly interpret them as if performance differences between modern SSDs had a real-life impact on noticeable performance, which is very debatable, I would say.

Instead, I would think that the best review sites this planet has to offer ought to provide meaningful insights to their readers, and educate them about the real situation: that SSDs do not differ all that much in noticeable performance, but that differences in reliability (not just failure rate but also protection against corruption) and price are much more substantial, and that it makes much more sense to look at these areas when considering which SSD to buy.

Besides performance, articles about endurance tests in which SSDs are written well below 0% MWI are frequently cited and are shaping the knowledge of ordinary consumers. But what those articles do not mention is that retention can become a serious problem affecting the reliability of the drive. Sure, you can go well below 0% MWI, but the reliability of the SSD will suffer considerably. This effect is obfuscated by the fact that the test rewrites the flash pages over and over, so the NAND cells are refreshed very frequently, resetting the retention timer - whereas JEDEC specifies at least 12 months of average retention for consumer-grade SSDs. This important issue is not raised in those articles, and again consumer opinion is based on incomplete and even misleading articles from the best review sites out there... again, not so funny! :|

I am just curious what you think about this argument. Please do not perceive it as a personal attack or anything - because if there is any merit to it, the whole review industry is to blame.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
I'm pretty sure the 8MB bug never got 100% fixed in the 320 series. Even after the firmware update I remember reading reports on various forums about people still getting hit by it. No further firmware fixes were published from what I recall.
All modern SSDs - except first-generation drives like Mtron and Memoright - can suffer the equivalent of the '8MB bug'. The '8MB bug' refers to corruption of the mapping tables, with catastrophic results. Modern SSDs do not store your data directly; they store references to your data. The mapping tables are required to reconstruct what is stored. Without them, the actual data is a jungle of bits and pieces the SSD cannot make heads or tails of - unlike mechanical harddrives, which do not use mapping tables in this fashion but only keep a small table for reserve sectors; in fact the exact opposite of how SSDs operate.

Upon corruption of these mapping tables - sometimes referred to as the FTL or Flash Translation Layer - each SSD may react in a different way. Intel SSDs with an Intel controller and firmware will report themselves as an 8MB storage device that is not writable. Basically, the SSD is unusable until the mapping tables are reset with a secure erase procedure, after which the SSD works again.
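
A toy illustration of that indirection (hypothetical page numbers, of course):

```python
# Toy FTL: user data lives at arbitrary physical pages; only the mapping
# table says which physical page holds which logical page.
ftl = {0: 9122, 1: 377, 2: 51208}          # logical page -> physical page
nand = {9122: b"boot", 377: b"docs", 51208: b"game"}

def read(lpn: int) -> bytes:
    return nand[ftl[lpn]]

print(read(0))   # b'boot'
ftl.clear()      # 'corrupt' the mapping tables...
# read(0) now raises KeyError: the bits are all still physically present
# in `nand`, but nothing says which page belongs where.
```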

The Intel '8MB bug' in the Intel 320 was due to the power-safe capacitors not working properly, though I do not know the details of the exact culprit. But my point is that every SSD can suffer corruption of the mapping tables. Most SSDs will then stop responding or will not be detected in the BIOS. Some OCZ Sandforce-based SSDs had a LED that started flashing red, if I'm not mistaken. In some cases the firmware could restore the consistency of the mapping tables, but often with data corruption as a result.

Modern SSDs have parity bitcorrection to protect the mapping tables from corruption. Intel uses RAID4, while most other brands I know of use RAID5, i.e. distributed parity. The differences are minor, but the level of protection can vary. For example, the Crucial M500 used 1:16 parity, which meant that 1/16th of the capacity was lost to storing all the parity information. That is why it had capacities like 120/240/480/960GB. Starting with the MX100, Crucial switched to a 1:128 parity scheme, which has less overhead and thus allows more usable space and/or more spare space ('overprovisioning'). But the result is also less protection against unreadable NAND pages. If those affect the mapping tables, the SSD is in trouble.
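
The underlying mechanics are plain XOR parity, the same math as RAID5 across drives; a minimal sketch with hypothetical page contents:

```python
from functools import reduce

def xor_pages(pages: list[bytes]) -> bytes:
    """XOR equal-sized byte pages together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

# One parity page per 16 data pages (the M500 scheme) costs 1/16 of the
# capacity; 1:128 (MX100 onward) costs far less, but one parity page then
# covers a much larger group, so it can absorb fewer failures per GB.
group = [bytes([i] * 4096) for i in range(16)]   # hypothetical 4 KiB pages
parity = xor_pages(group)

lost = group.pop(5)                          # one page becomes unreadable
assert xor_pages(group + [parity]) == lost   # XOR of the rest recovers it
```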

Also, I do believe suggesting to people that the most reliable consumer drive in 2015 is a 3Gbps product from 2011 which you can't even buy anymore is a bit silly.
Why? I still recommend them for ZFS sLOG purposes. In that case write latency is what matters, and since the writes are mostly single-queue random writes, newer SSDs will not add much performance at all. Intel generally focuses on consistency of write latency, as opposed to high MB/s numbers, which are not all that important for server workloads such as a ZFS sLOG.

And their reliability record is among the best, certainly for a consumer-grade drive. I would not really recommend them as a generic desktop SSD, but for specific purposes the 320 is still the classic, reliable consumer-grade SSD to consider. It is still being sold in some places.

No offense, and while you are clearly a clever person, I do think you're doing a lot of scaremongering. I estimate that I have used or managed over 50 SSD-based systems, primarily with Samsung drives plus a couple of Intel and Crucial ones, and I have never had one corrupt itself, so I think you're making this out to be far more of a problem than it actually is.
Well, today's SSDs are certainly a lot better, thanks to software techniques like journalling of the mapping tables and thanks to RAID4 or RAID5 bitcorrection. So protections are in place. But in the past, SSDs were very fragile and could easily corrupt themselves. I still think many budget brands today fall into the same category. The SSD brands you mentioned are among the best, but the same cannot be said of all brands and models.

In the past, OCZ had failure rates of over 50%. That is totally unacceptable for non-mechanical storage, which certainly has the potential to be extremely reliable. I mean, how often does your CPU fail? Those are pretty complex pieces of electronics, but failures are still extremely rare, except those caught before they leave the factory. The same cannot be said about SSDs, even though in theory they could be just as reliable. And for a storage device in particular, that is something you want.

Because if your CPU fails, you get a new one, simple as that. But if your storage device fails, you do not only lose the functionality it offers but also the data it stores. Consumers in particular are notorious for not making proper backups. While many blame the consumers, I blame the technology. Computers ought to make life easier for mankind, not complicate it with additional maintenance humans need to perform periodically to compensate for the fragile nature of immature technology.

So 'scaremongering' - well, consumers have reason to be scared. SSDs are complex devices with many failure modes. Mechanical storage is inherently unreliable, but at least it has limited failure modes due to its limited complexity. Solid-state storage simply fails to deliver on the promise of being highly reliable, as it ought to be.

Hellhammer, your posts are golden. Not surprising for a former Anandtech reviewer, but still. I miss your articles on SSDs.
I agree, his posts are very informative and he knows his stuff. I also particularly like that he kept posting with nuance even when some of my posts were not nuanced - even to the point of being cocky. :$

If an apology is appreciated, I hereby offer one.
 