_CiPHER_ you are clearly very knowledgeable but I do think you have lost perspective in this discussion. You have spent the majority of the time discussing the internal RAID construction of an SSD or the inner workings of things which aren't actually relevant to the question being asked.
You may have a point here.
But I did give some practical advice. Having 2x 512GB SSDs in RAID0 provides only a modest practical performance benefit, but it costs around the same and any performance increase is welcome. The 1GB/s sequential read might speed up loading games a bit, among other tasks. And you can later split them into two separate 512GB drives for other systems for family/girlfriend/bedroom etc.
Putting two SSD's in RAID0 will only significantly improve 2 metrics which are sequential read and sequential write
That is simply incorrect. It also improves random read IOPS at higher queue depths, and random write IOPS even at a queue depth of 1.
Let's look at CrystalDiskMark on a single SSD:
Both the sequential read and write are this high thanks to RAID0. In fact, many SSDs could internally do about 2GB/s of sequential read if enough internal bandwidth/channels were available and the SATA interface were traded for PCI-express. Some OCZ PCIe products have benchmark results where this can be seen - yes, with incompressible data.
Random read
"4K" means random I/O with a request size of 4KiB. But it also means the queue depth is just 1. In normal English: the benchmark only sends the next I/O request after the SSD has returned the data of the previous one. In other words: it can only do one thing at a time, not multiple things at once like RAID0 allows. And this is exactly why the SSD cannot employ its internal RAID0 to boost performance here. That is why 4K random read is always between 20MB/s and 30MB/s - dependent on CPU speed as well. This is fully latency bottlenecked: round trip times. Like gaming, you need a low ping; i.e. fast reaction time; i.e. low round trip time.
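To put rough numbers on that latency bottleneck: at queue depth 1, throughput is simply one request per round trip. A minimal sketch, where the ~160 microsecond round trip is an illustrative assumption rather than a measured value:

```python
# At QD1, throughput = request_size / round_trip_latency.
# The 160 microsecond round trip (host + SATA + controller) is assumed,
# chosen only to land in the 20-30 MB/s range typical of 4K QD1 reads.
latency_s = 160e-6
request_size = 4096  # 4KiB random read

iops = 1 / latency_s                        # one request completes per round trip
throughput_mb_s = iops * request_size / 1e6

print(f"{iops:.0f} IOPS -> {throughput_mb_s:.1f} MB/s")  # 6250 IOPS -> 25.6 MB/s
```

Note that the drive's internal parallelism never enters this calculation: at QD1 the only lever is latency.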
"4K QD32" means random I/O of 4KiB request size with a queue depth of 32. That means 32 blocks of 4KiB are requested from the SSD at the same time. SATA allows a maximum of 32 queued I/Os; only SAS can go higher. This gives the SSD the chance to work on up to 32 requests at once, if internally it is able to. Otherwise, the SSD simply lets requests wait and completes them later with higher latency.
The higher the queue depth for random reads, the more the SSD can exploit its internal RAID0 or interleaving design, where each NAND die processes a different I/O. It is basically a 16-disk RAID0 internally.
With all inefficiencies combined, you can see the score is only 10.6 times higher, not the theoretical 15 or 16 (RAID5/RAID0). Still, this is the power of RAID0. It is an awesome piece of technology that works extremely well.
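The scaling described above follows Little's law: throughput is roughly queue depth divided by latency, capped by the number of independent NAND dies. A toy model, where the 16 dies and 160 microsecond latency are assumed round numbers (a real drive scales sublinearly, hence the observed 10.6x rather than 16x):

```python
# Toy model: random read throughput scales with min(queue_depth, internal_dies).
# 16 dies and 160us per-request latency are assumptions for illustration.
def qd_scaling(queue_depth, dies=16, latency_s=160e-6, request=4096):
    parallel = min(queue_depth, dies)   # how many requests run concurrently
    iops = parallel / latency_s
    return iops * request / 1e6         # MB/s

for qd in (1, 4, 32):
    print(qd, round(qd_scaling(qd), 1))
# 1 25.6
# 4 102.4
# 32 409.6  (capped at 16x, since only 16 dies exist in this model)
```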
Random write
Unlike random read, random write can be accelerated by RAID0 even at a queue depth of 1 - i.e. blocking writes. With reads this is not possible, because the SSD has to actually fetch the data before it can complete the request. But with writes, you can make them 'disappear' with write-back buffering: you store the request in your command buffer, tell the host you have completed it, and internally queue the write for later.
This all means that even with the host sending one random write at a time, the SSD can process multiple at once: it immediately signals I/O completion and performs the buffered writes later, several at a time.
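The ack-then-flush trick can be sketched as a tiny model: the host's write call returns as soon as the request lands in the buffer, and the controller drains the buffer in parallel batches later. The class name, buffer behaviour and batch size of 8 are all invented for the illustration:

```python
from collections import deque

class WriteBackSSD:
    """Toy model: writes are acknowledged instantly, flushed later in batches."""
    def __init__(self, batch=8):
        self.buffer = deque()   # pending writes, not yet on NAND
        self.batch = batch      # how many writes the controller handles at once

    def write(self, lba, data):
        self.buffer.append((lba, data))
        return "OK"             # host sees instant completion - the write 'disappears'

    def flush(self):
        # Controller later drains up to `batch` buffered writes in parallel,
        # spreading them over its NAND dies (the internal RAID0).
        n = min(self.batch, len(self.buffer))
        return [self.buffer.popleft() for _ in range(n)]

ssd = WriteBackSSD()
for lba in range(10):           # QD1 from the host's point of view...
    ssd.write(lba, b"x")
print(len(ssd.flush()))         # ...yet 8 writes get processed at once internally
```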
Why else do you think the random 4K write score is so much higher than 4K read? Raw NAND writes are a lot slower than reads. But thanks to buffering the writes, the SSD can execute multiple 4K blocks at once - again thanks to RAID0.
Theoretically, a queue depth of 1 could be enough to saturate all write performance potential. It is for HDDs, which use the exact same principle, by the way: HDDs cannot process multiple writes at the same time, so their 4K write score is about the same as 4K QD32. But for SSDs, a queue depth of 1 for random writes is not enough. SSDs are so fast they get starved for I/O requests, because each request has a propagation delay; you can only send so many I/O requests per second. So you need multiple outstanding random writes to saturate performance. Between 4 and 8 would probably already be very close to 32. This doesn't work for random reads; you cannot 'hide' read requests the way you can with writes.
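The same Little's-law arithmetic applied to the write side, with assumed numbers: each submission costs a fixed round trip, so outstanding requests multiply the send rate until the controller's internal write capacity caps it. The 50 microsecond submission delay and 45k IOPS cap are invented for illustration:

```python
# Toy model: write IOPS = min(qd / submit_latency, internal device cap).
# The 50us submission round trip and 45k IOPS cap are assumptions.
def write_iops(qd, submit_latency_s=50e-6, device_cap=45_000):
    return min(qd / submit_latency_s, device_cap)

for qd in (1, 2, 4, 8, 32):
    print(qd, int(write_iops(qd)))
# 1 20000
# 2 40000
# 4 45000   <- already saturated
# 8 45000
# 32 45000
```

In this model QD4 already hits the cap, which is why a modest queue depth gets you most of the way to QD32.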
The screenshot above shows only a 2x improvement between queue depth 1 and queue depth 32, which is exactly in line with the theory I explained above.
Final verdict
So there you have it. A single SSD uses the principle behind RAID0 very effectively to boost all but one of its performance specs. Only the "4K" random blocking read does not allow itself to be boosted by RAID0.
The funny thing is that I explained RAID0 performance characteristics by showing a benchmark of a single SSD. But the very same performance behaviour applies to host-level RAID0 like Intel onboard RAID, because it is the same principle. That is why I find it so funny that people love fast SSDs but dislike RAID0. Haha. :awe:
I ran 2 Samsung SSD's in RAID0 for 6 months without any issues at all so I would like to hear why you say that you should not run Samsung SSD's in RAID and get a "proper" Crucial drive.
Samsung SSDs employ journalling on the mapping tables. This allows them to omit the capacitors that Crucial uses to protect its FTL consistency. In normal English: both have the same goal, but use different means. Samsung uses a 'cheap' software protection - cheap in the sense that it needs no extra parts, unlike Crucial's hardware solution. It is simply a design choice in the rather complex firmware that basically is the operating system of the SSD.
The SSD is a mini-computer with up to three processor cores, its own RAM and its own operating system (firmware) - but you know all that. My point is its complexity. SSDs are not simple storage devices like hard drives. They need some form of protection, or the NAND flash is unprotected - and this is what caused 90% of OCZ SSD failures in the past: the firmware gets stuck after the NAND is left in an inconsistent state where the mapping tables do not match the actual stored data. A secure erase would 'fix' the issue at the cost of total data loss. Many consumers sent their SSD to OCZ and got a secure-erased/wiped drive back, often a different customer's drive with the same issue. Better protection against power loss, keeping the mapping tables consistent, was obviously the next step.
The software protection that Samsung SSDs employ can cause a form of inconsistency that RAID and modern filesystems like ZFS were never designed for. This has to do with the protection 'rolling back' the mapping tables to an earlier state. In essence, the SSD is reset to a point in the past, with every LBA exactly as it was then. But in an array, not every SSD may roll back after a power failure, or they may roll back to different points in the past. Then you have an inconsistent filesystem.
After a power failure, a Samsung SSD performs POR (Power-On Recovery); you can see how many times this has happened in the SMART attributes. This protection rolls back the mapping tables to an earlier state, which means any recent writes that may be corrupt on the NAND after the power failure will not be used at all. A clever trick, for sure.
But it has a weakness. Imagine a RAID array where disk1 is rolled back to a different point in time than disk2. If all SSDs rolled back to the exact same moment, there would not be a problem. But if parts of the filesystem that span multiple SSDs are set back to different points in the past, you have inconsistency in your filesystem.
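A toy illustration of that failure mode, with invented snapshot times and data labels: two striped drives each keep snapshots of their half of a stripe, and a power failure makes them roll back to different snapshots, leaving the stripe half old and half new.

```python
# Toy model of per-drive rollback in a 2-disk stripe.
# Snapshot times (10, 11) and data labels are made up for the example.
drive1_snapshots = {10: "A1-old", 11: "A1-new"}   # snapshot time -> stripe half
drive2_snapshots = {10: "A2-old", 11: "A2-new"}

# Power failure: drive1's POR rolls back to t=10, drive2 only to t=11.
stripe = (drive1_snapshots[10], drive2_snapshots[11])

print(stripe)   # ('A1-old', 'A2-new'): halves from different points in time
```

A filesystem reading this stripe sees data that never existed together at any single moment, which is exactly the case RAID and ZFS do not expect.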
The protection works flawlessly on simple desktops, though, because in that case the whole filesystem is put back to the same point in time. It is cheap and effective. But for more complex storage, I would recommend avoiding Samsung SSDs. With a legacy filesystem like NTFS you would not even notice there is corruption: you just see a brief filesystem check and, assuming Windows still boots, you might have some missing files or some corruption in a game you were updating. It does not have to be serious. But these kinds of problems can give you weird issues where you would not suspect the SSD as the culprit. With the Crucial MX100 being even cheaper, the choice is pretty obvious if buying a new SSD for a RAID or ZFS setup - in my opinion, of course.