SSDs and RAID0 questions

Thor86

Diamond Member
May 3, 2001
7,886
7
81
Here is my scenario. I want to set up a TEST VM host using 2x500GB SSDs in RAID0 with heavy WRITES. Which drive would be better in terms of garbage collection over time, so that WRITE performance holds up as the storage fills?

This would be set up on an Intel X58 motherboard using the onboard Intel controller (ICH10R, I believe).

My two candidates are the Samsung 840 500GB (does this controller do its own GC, and in RAID0?) and an ADATA 512GB using the SF-2281 controller. I know the SF-2281 does GC, but can someone confirm it does GC in RAID0? According to LSI it does, but I'm wondering if anyone has personal experience/proof?

This setup does not account for redundancy or data backups, so please keep comments regarding data backups out of this thread. This is strictly about performance and storage sizing.

TIA!
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
TechReport found that SSDs in RAID 0 actually performed slower than when they weren't in an array.

If it's all about write performance, you should look at enterprise-class SSDs, not consumer drives. Assuming they're in the budget, of course.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
TRIM alone is actually worse than heavy OP (over-provisioning) without TRIM.

LSI controllers like what is on their compatibility list. Due to the firmware bug in the 840/840 Pro that caused massive slowdown when dirty, they were pulled from the list a few weeks ago.

Samsung 830 (20-40% OP)
Intel 320 (40-60% OP)
Crucial M4 (20-40% OP)

These would work great in RAID-1 or RAID-10.

Avoid: Sandforce, OCZ, or anything not on the LSI compatibility list. Their magic doesn't work when the MSM (MegaRAID Storage Manager) says "CERTIFIED: NO".

How heavy are the writes? You could use SAS drives for the journal/log and SSDs for read caching or the database, for instance.

Or just suck it up and buy more SSDs as they wear out. The 840 non-Pro lasted 1/10th the time of an 830, so avoid TLC.

I'm about to fire up ten 840 Pro 256GB drives with LSI and FastPath (basically throttled QD to force higher QD, aka coalescing).

I mean, at $150 for a Samsung 830 256GB or $299 for a Crucial M4 512GB, burn them up, who cares? Buy another. The same amount of random IOPS from mechanical drives would cost you so much in space/heat/mechanical parts that it is not worth it.

The LSI card is slick with certified drives: it can pop failing sectors off to a standby spare and keep RAID-0 integrity good as a drive starts to come apart, assuming it is not a catastrophic failure.

If you can avoid RAID, do it and split up your workload instead; all RAID increases write amplification, since a change to one sector can end up writing to both drives.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
I think there are a number of flaws in your proposal.

1. You are suggesting creating an array using entirely different drives. It has always been good practice to use identical drives in a RAID array. A Samsung and a Sandforce are miles apart on that front.

2. You are doing heavy writing on a TLC (840) based SSD. This is not what a TLC based drive is designed for.

3. You are using a Sandforce drive in a RAID0 array. Sandforce drives have the worst reliability record out of all the controllers.

4. You will be sacrificing TRIM in exchange for faster write performance. TRIM will be essential in a heavy write environment. Garbage collection will occur regardless of TRIM or RAID environment. It is built into the very base of the drive and will do its best to maintain the SSD irrespective of the OS, storage driver or anything else (you can at least verify the OS side; see the snippet below).
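Incidentally, on Windows you can confirm whether the OS itself is issuing TRIM at all; a quick check, wrapped in Python here for convenience (fsutil works fine on its own from an elevated prompt). Note that this says nothing about whether a RAID driver actually passes TRIM through to the member drives:

Code:
import subprocess

# "DisableDeleteNotify = 0" means the OS is issuing TRIM/delete
# notifications; "= 1" means it is not. Windows only.
result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())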

I really think you're on a short road to headaches. I would be interested to know what workload you have where you would really see a long-term benefit in ~600MB/s writes over the ~300MB/s a good quality single SSD will give you.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Probably going to run a hypervisor in a hypervisor in a hypervisor, throw 1,000 people on a single server, and sell hosting for $1/month like the lowendtalk/lowendboard folks do.
 

Thor86

Diamond Member
May 3, 2001
7,886
7
81
1. You are suggesting creating an array using entirely different drives. It has always been good practice to use identical drives in a RAID array. A Samsung and a Sandforce are miles apart on that front.

My mistake, I never intended this. What I meant in my OP is that it would be either the 840s or the SF-based drives. However, since this is not a production-level environment, I'd say heck with best practices, heh.

2. You are doing heavy writing on a TLC (840) based SSD. This is not what a TLC based drive is designed for.

Again, I wasn't looking at or worrying about WEAR levels, just keeping WRITE performance as close to original as possible.

3. You are using a Sandforce drive in a RAID0 array. Sandforce drives have the worst reliability record out of all the controllers.

Not sure where you are getting your stats from, so either cite your references and/or experience, or this is just "opinion". What makes the SF controller worse off than Marvell or Samsung or others when it comes to "run times" is all I care about.

4. You will be sacrificing TRIM in exchange for faster write performance. TRIM will be essential in a heavy write environment. Garbage Collection will occur regardless of TRIM or RAID environment. It is built into the very base of the drive and will do it's best to maintain the SSD irrespective of OS, storage driver or anything else.

So you are saying either the 840 or the SF SSDs provide GC in RAID? Not sure what you mean here.

I really think you're on a short road to headaches. I would be interested to know what workload you have that you would really see a long term benefit in ~600MB/s writes over a ~300MB/s writes which a good quality single SSD will give you.

Well, that is just my question from the OP: if I RAID0 either type of SSD (two 840s or two SF-based ones), how much write performance will I lose over time, or will the drives' built-in GC provide adequate maintenance in RAID?

Again, I am not worried about "best practices" or longevity of the SSDs, as this is a TEST environment for VMs.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
lol.. that review is near worthless for those who know what the shortcomings of Sandforce controllers are and therefore know how to circumvent them.

They also obviously have no clue about the fact that even R0's will often write in a lopsided fashion, hitting one drive in the array more than the other.. which will then cause one drive to throttle and bring the whole works down with it.

All I can say to be true in my world is this.

I have very fast single drives.. I have very fast 2-drive R0's.. and I have VERY fast 6-drive R0's.. each of which has similar total capacity. The 6-drive array (which is VERY heavily OP'd, cause I'm smart like that) blows them all to hell in real-life usage due to sheer channel count and available NAND packages.

In fact.. just to go against the.. "oh no.. not R0!".. grain even more so than usual.. my next system will be implementing 2 separate 8 x SSD R0 arrays combined with another 8 x HDD R0 array and another 4 x HDD R0 array, just for the sake of redundancy (which I currently implement anyways).

But why would I spend all that money when I could get the exact same performance.. or even better according to some.. from just using larger single SSDs? Simply because it's faster, regardless of what a reviewer or benchmark tells me. Seeing really is believing, and it trumps speculation 10 times over.

Oops.. almost forgot to give the OP my opinion on his question after the funny rant. If you will be using those SF drives for the heavy write environment you mention.. make sure they have constant power (no S3 sleep) more often than not to allow proper GC time.. and always allow VERY heavy OP on the volume. I have 280GB available to me and stripe only 80 gigs, because it helps immensely with steady-state performance/write-speed stamina. You have to work around the controller/firmware's limitations, and there is no other way to do that effectively without throttling the drive/s in the process. Hope that helps.
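To put some numbers on that, here's a quick back-of-the-envelope in Python using the figures from my setup above:

Code:
raw_gb = 280      # total capacity the drives expose
striped_gb = 80   # space actually allocated to the R0 volume
spare_gb = raw_gb - striped_gb

print(f"spare area: {spare_gb} GB")                        # 200 GB
print(f"OP vs. raw capacity: {spare_gb / raw_gb:.0%}")     # ~71%
print(f"OP vs. usable space: {spare_gb / striped_gb:.0%}") # 250%

So yes.. roughly 71% of the raw flash never holds user data, and that spare area is exactly what keeps steady-state writes from falling off a cliff.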
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Not sure where you are getting your stats from, so either cite your references and/or experience, or this is just "opinion". What makes the SF controller worse off than Marvell or Samsung or others when it comes to "run times" is all I care about.
I am not sure what you mean by "run times", but it is very common knowledge to people who follow the SSD industry that Sandforce controllers, both first and second generation, have been plagued by reliability and compatibility problems. Whilst these problems have not hit every user in the same manner, doing some research online into Sandforce's problems, even just the articles on Anandtech, will give you an idea. They are supposed to be fixed in the current firmware, but so much has gone wrong with them, and there are so many other proven drives, that I would use something else.

So your are saying either the 840 or the SF SSDs provide GC in RAID? Not sure what you mean here.
GC is built into the very foundation of an SSD. SSDs relied purely on GC long before TRIM came along. GC works regardless of what environment the drive is installed in (Windows, OSX, Linux, or an embedded solution), because without it, if you had a 100GB SSD and wrote 100GB, it would then be full forever. How exactly each vendor's GC works remains an industry secret. What this means is that both drives will have working GC in a RAID0 array.

The effectiveness of this GC when both drives are in a heavy write environment without TRIM would be your biggest problem. The best solution for you would be to allow the drives as much idle time as possible. When an SSD sits idle, that's when GC does its stuff, so when the SSD is next required it is hopefully as "cleansed" as possible. I don't know what your on/off cycle is, but if you run a heavy write workload for, say, 16 hours, and you could leave the system idle (fully powered, maybe sitting at a lock screen) for the remaining 8 hours of the day, you would give GC the best chance to clean the drives ready for tomorrow's workload.

I personally cannot advise on whether 8 hours of idle time will be sufficient to completely run GC on both drives. It depends on the actual workload, how much data is being written and over what period, etc. Unless somebody has had a similar experience to what you're trying, you may have to do some trial-and-error experiments yourself.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Coup27, I can relate to what you say about Sandforce. However, what you say about garbage collection is not universally true.

Garbage collection, or GC, is not really the foundation of an SSD; SSDs can work without it, and many still do in the form of USB sticks without write remapping. SSDs from before the first generation of Intel SSDs also fall into the class of SSDs that do not utilise garbage collection. The result is consistent but slow random write performance; sequential write is not affected. This is why most USB sticks are good with sequential speeds but can be extremely bad with random writes, having even a factor of a hundred lower performance than hard drives, let alone modern SSDs.

What you explain about garbage collection is correct, but it applies to SSDs employing aggressive garbage collection. The most common example here is Samsung, which due to its aggressive GC has much higher write amplification than its competitors. Intel SSDs with an Intel controller, for example, fall in the category of passive garbage collection, properly referred to as foreground garbage collection. This kind of GC only does its work when a host write request has been issued and no free erase blocks exist. In other words, this kind of GC is lazy and will only do work if it really has to. The result is far lower write amplification, but possibly degraded performance if too little overprovisioning has been applied. For these kinds of drives, letting them 'idle' won't change anything.

SSDs which employ background garbage collection, also known as aggressive GC, have higher write amplification, higher performance in a TRIM-less environment with little overprovisioning, as well as a slightly higher chance of failure, because there is a longer window of opportunity in which the SSD can corrupt itself if it loses power while background GC is active.
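To make the foreground (lazy) case concrete, here is a toy model in Python. It is not any vendor's actual algorithm; the block size, workload and overprovisioning fractions are arbitrary illustration values. GC only runs when the pool of erased blocks is exhausted, and relocating still-live pages is what pushes write amplification above 1.0:

Code:
import random

PAGES_PER_BLOCK = 64

def simulate(user_pages, spare_frac, host_writes, seed=0):
    """Toy FTL: random single-page overwrites with lazy (foreground) GC.
    Returns write amplification = NAND page writes / host page writes."""
    rng = random.Random(seed)
    n_blocks = int(user_pages * (1 + spare_frac)) // PAGES_PER_BLOCK
    live = [set() for _ in range(n_blocks)]   # live logical pages per block
    used = [0] * n_blocks                     # pages programmed since last erase
    where = {}                                # logical page -> physical block
    free = list(range(n_blocks))              # erased, ready-to-use blocks
    open_blk = free.pop()                     # block currently being filled
    nand_writes = 0

    def program(lpn):
        nonlocal open_blk, nand_writes
        if used[open_blk] == PAGES_PER_BLOCK:
            open_blk = free.pop()             # open block full: grab an erased one
        live[open_blk].add(lpn)
        where[lpn] = open_blk
        used[open_blk] += 1
        nand_writes += 1

    for _ in range(host_writes):
        lpn = rng.randrange(user_pages)
        if lpn in where:
            live[where[lpn]].discard(lpn)     # the old copy becomes stale
        program(lpn)
        while len(free) < 2:                  # GC only when forced, never at idle
            victim = min((b for b in range(n_blocks)
                          if used[b] == PAGES_PER_BLOCK and b != open_blk),
                         key=lambda b: len(live[b]))
            for moved in list(live[victim]):  # relocating live data costs writes
                program(moved)
            live[victim].clear()
            used[victim] = 0                  # erase the victim block
            free.append(victim)
    return nand_writes / host_writes

for op in (0.07, 0.28, 0.50):
    wa = simulate(user_pages=8192, spare_frac=op, host_writes=100_000)
    print(f"OP ~{op:.0%}: write amplification ~{wa:.2f}")

Background (aggressive) GC would run the same relocation loop during idle time instead, which keeps erased blocks ready for new writes, at the price of relocating pages that might have been invalidated anyway; that is where the extra write amplification comes from.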
 

johny12

Member
Sep 18, 2012
109
0
0
I am not sure what you mean by "run times", but it is very common knowledge to people who follow the SSD industry that Sandforce controllers, both first and second generation, have been plagued by reliability and compatibility problems. [...]

I believe that by "run times" Thor is referring to "write amplification". Samsung drives are presumed to have better write amplification than the rest. However, I have been using both Intel and Samsung SSDs for quite some time now, and even though the stats indicate Samsung performs better, in practice I see both Intel and Samsung perform on par under similar loads. The new series of SF drives with 20nm NAND has a chance of showing better reliability, because they heat up less and also manage power better.
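For anyone unsure of the term: write amplification is simply the ratio between what the controller physically writes to NAND and what the host asked it to write. The figures below are made up purely for illustration, not measurements from any particular drive:

Code:
# Hypothetical lifetime counters, purely for illustration.
host_gb_written = 1200   # data the OS asked the drive to write
nand_gb_written = 1800   # data the controller actually wrote to flash

wa = nand_gb_written / host_gb_written
print(f"write amplification: {wa:.2f}x")   # 1.50x

A drive that keeps this ratio closer to 1.0 wears its flash more slowly for the same host workload.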
 

Thor86

Diamond Member
May 3, 2001
7,886
7
81
Great information everyone.

What I meant by "run times" really should have been "power-on times". Sorry for the confusion. Again, I am not worried about wear and tear from massive writes. As a previous responder said, if it dies, replace it.

Over-provisioning is not what I really want to do, but of course it bears on long-term write performance. Thanks for the suggestion, groberts101, but I am a bit lost on your example. You are saying you have 280GB available but only provision 80GB as usable? Surely you cannot be over-provisioning 200GB?!
 