Where did I state that any form of storage is 100% reliable? Why are you arguing a point I never made?
I'm not. I'm pointing out that expecting any storage device to be more reliable than another, and choosing it on that basis, is generally silly (aside from choices like not buying OCZ).
Again, wasting time arguing a point I never made. Where did I say traditional hard drives never die without warning?
With traditional drives, even when they go bad, you usually get some sort of warning that gives you enough time to at least get the data off.
I have not argued about never or always, except in the case of very old drives. Back in the day, pre-Barracuda Seagates (I think they were called Mercury), and the old Maxtors with castings so poorly made they could slice into your fingers, would fail without warning. But I can't recall any others failing without warning prior to manufacturers going to perpendicular recording, and it has definitely been an increasing proportion of the failures I see (i.e., I can't say X out of Y drives are failing, but Z out of T failures that I come across behave that way).
You were arguing that since SSDs typically fail abruptly, HDDs should be used, because they usually don't. But that's changing.
No idea how you came to your conclusion based on the link you provided. Is it possible to use 20 hard drives for 5 years with no failures? Absolutely; almost anything is possible. However, you would be well off the center of the bell curve.
5% in the first year means 19 of 20 drives survive it. All 20 surviving would be just off the top of the curve for the first 3 years, and their drives are being put into rather harsh usage, vibration-wise. If 3+ years is where wear-out starts under that load, consumers should typically get double that, if not more.
First off, there is no mention of what drives they use, except that they bought a bunch of consumer-grade external drives and removed them from their enclosures. Whether they mainly use one model, or a variety of models and manufacturers, has a tremendous impact on how meaningful their numbers are.
Only assuming there's an extremely wide variance between models.
From the link, 5% of drives fail in the first year. That's 1 in 20 failing in the first year alone. The rate then levels off, and climbs again after 3 years, resulting in a 22% cumulative failure rate by the end of year 4. So the failure rates say that 4 or 5 drives out of 20 are expected to fail by year 4, and you conclude that it would not be unusual to have all 20 drives make 5 years?
Yes.
1. Beyond factory-defect failures, the failure rate should be expected to be lower in a lower-vibration environment.
2. If you only get 20 drives, that's a small enough sample size to easily get all good drives, while someone else gets more of the bad ones.
You see it all the time. One person has no failures. One person has had every HDD from manufacturer X fail quickly. One person has had that from manufacturer Y. And so on.
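For what it's worth, here's the naive math behind point 2, as a quick sketch. It assumes every drive fails independently at the quoted 22%-by-year-4 rate, which is exactly the assumption that batch clustering breaks:

```python
from math import comb

def prob_at_most(k, n, p):
    """P(at most k failures out of n drives, each failing independently with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Naive model: 20 drives, 22% cumulative failure by end of year 4
# (the figure quoted above), failures assumed independent.
n, p = 20, 0.22
print(f"All 20 survive 4 years: {prob_at_most(0, n, p):.1%}")  # ~0.7%
print(f"At most 4 of 20 fail:   {prob_at_most(4, n, p):.1%}")  # ~54%
```

Under independence, all 20 surviving is well under a 1% shot, so if people routinely see it, the failures aren't independent, and batch effects like the ones above are the usual explanation.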
So, by their numbers, the average drive they use (with an actual brand and model attached) lasts less than 3 years. Such results make your 20-drive comment sound even more absurd. Also very important to note is that they claim most of the time (NOT 100% of the time) the failed drive enters a crippled mode rather than just imploding.
If it's gradually wearing out, it should do just that. I don't have any data centers, but the SATA REs and ESes I see in lightly filled racks last basically forever, as far as I can tell. The usage they're putting drives through is going to be mechanically harsher than desktop usage, or even usage in a server with few drives.
But try telling that to a friend of mine who just lost his OS drive unexpectedly, yesterday. All was fine: no Windows warnings, no SMART warnings on boot-up, and he does keep track of his drives' health regularly, along with temps and such. The PC rebooted unexpectedly and didn't see the drive. After reseating the SATA and power cables and doing a cold boot, it saw the drive, booted, and everything seemed OK. No errors. Before anything further could be done (like running a diagnostic), it rebooted again and shat itself: chkdsk ran automatically and found sector after sector unreadable, just flying by. It's gone.
That doesn't happen every time. Tools to get data off before it's too late exist and work (I typically use gddrescue, and it can be a godsend), if the failure is slow enough, or of a nature that doesn't blow the drive up. But failures like that, which used to happen only with a nasty head crash, account for roughly half of the desktop failures I see with newer drives; even the ones that still start up typically show no SMART warnings, nor elevated values in the relevant SMART fields, prior to any attempt to recover the data.
And now we'll soon be getting SMR drives, with half the complexity of SSDs built in, on top of the higher density and all its mechanical and analog-signaling issues.
Meanwhile, take an oldish PC with an ancient Maxtor drive that was getting awfully slow. There were some red flags in the SMART values, and it was awfully loud, but it successfully read everything, so moving to a new drive was as easy as could be. I'm sure the SNR on its platters is tens of times higher than anything you can buy today, along with wider tracks to settle on. The mechanical tolerances alone are surely a couple of orders of magnitude looser.
I don't know what kind of mission-critical, bit-perfect data you are storing on your home PC. But I don't think we could find anyone who wouldn't rather recover 75% of their family photos, videos, or other personal files than recover 0%. Your post doesn't argue against my point; it supports it.
How many people are even going to think to check CRCs (and where is a good NTFS replacement that does this for the user)? Many would be stuck with useless data, not even know it, and back that up.
I'm a firm believer in the instant-on computing paradigm.
If you haven't seen this already, you should:
Bret Victor - The Future of Programming