Jason:
I've been running SATA SSD RAID-0 for a number of years with two Samsung 500GB EVOs on a scratchpad partition under an Intel Z68/i7-2700K platform. It's FAST, but it's a playground for me and all trash is permanently deleted ;o)
Be aware: Intel doesn't like Samsung SSDs doing this, and you have to hack your Intel motherboard BIOS to enable TRIM pass-through for non-Intel SSDs in RAID-0.
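If you want to sanity-check the OS half of that chain before blaming the BIOS, here's a minimal sketch (Python used purely as scripting glue, run from an elevated prompt on Windows). It only shows whether the OS is issuing TRIM at all; it says nothing about whether the RST driver forwards it to non-Intel drives, which is the part the BIOS mod addresses:

    # Minimal check: is Windows issuing TRIM at the OS level at all?
    # (This does not prove the RAID driver passes it through to the SSDs.)
    import subprocess

    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
    # "DisableDeleteNotify = 0" -> the OS sends TRIM/UNMAP
    # "DisableDeleteNotify = 1" -> TRIM is globally disabled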
Can't argue your point that RAID-0 with PCIe NVMe drives is more or less pointless at this date and time. Not that I haven't been eyeing that perhaps unnecessary and expensive option myself, but with high-core-count CPUs, PCIe 4.0 may make it worthwhile.
Hmm, that BIOS hack sounds interesting...
I have a Haswell Xeon E3-1276 v3 running a 24x7 home server on an Asus P9D WS with a C226 chipset (basically a Z87, I think) and 32GB of ECC RAM. It runs a RAID-6 of eight 2.5" SATA HDDs on an LSI 9261-8i hardware RAID controller (close to 700MB/s sequential), and I now wanted to add an SSD cache pool, since I upgraded the box to 10Gbit Ethernet to consolidate that extra storage tier in a single machine for the other systems to share.
I wanted more than 2TB (quite pricey as NVMe, too), and I'm out of slots anyway, since the board is already bifurcating its 16 CPU lanes between the GPU (a very low-noise, low-power GTX 1060 for on-demand remote gaming) and the RAID adapter, while the PCH's four lanes are taken up by the Aquantia 10Gbit NIC.
But the original six SATA ports are still there, only one of which was used for the 860 Pro boot SSD, so I got myself four 1TB 860 EVOs and put them into a RAID-0, mostly for caching stuff like VMs and Steam games. I chose the 1TB units because they were the most economical in price per capacity, and I was hoping to get a little more than single-port SATA bandwidth as a windfall.
The Intel software RAID-0 tops out at 1.4GB/s, which would be a good match for the NIC if data going from the SATA RAID to the 10Gbit NIC didn't have to squeeze through the x4 DMI 2.0 bottleneck twice. Even though the payload never passes through the CPU cores, DMA needs RAM, and RAM sits north of the PCH, while in file-server mode the data travels south-to-south.
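Roughly, and these are my assumptions rather than measurements (Python used just as a calculator):

    # Back-of-envelope: DMI 2.0 is electrically a PCIe 2.0 x4 link.
    lanes, gt_per_lane, encoding = 4, 5.0, 8 / 10    # PCIe 2.0, 8b/10b coding
    dmi = lanes * gt_per_lane * encoding / 8         # ~2.0 GB/s per direction

    raid0 = 1.4      # GB/s, the RST RAID-0 ceiling I see
    nic = 10 / 8     # GB/s, 10GbE line rate ~1.25 GB/s

    # Serving a file: SSDs -> DMI up -> RAM -> DMI down -> NIC.
    # The link is full duplex, so the two hops take opposite directions,
    # but every other PCH device shares the same ~2 GB/s pipe, and protocol
    # overhead eats most of the headroom above the NIC's 1.25 GB/s.
    print(f"DMI per direction: {dmi:.1f} GB/s")
    print(f"Headroom over 10GbE: {dmi - nic:.2f} GB/s")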
Sure would have been better if I could have recycled the four lanes the LSI RAID doesn't actually use out of the eight assigned to its slot: these static allocations are really painful, especially when you know the CPU would actually support 8+4+4, but 8+8 is the more "popular" split.
Anyhow: I was quite shocked to find that the Intel RAID driver presents the four SSDs as a plain hard disk, so Windows Server 2016 isn't trimming them. CrystalDiskInfo quite clearly identifies them as four SSDs, and that's how they're explicitly labelled in the BIOS, too. Where I had simply seen a deficit in the software RAID driver (quite normal in the early SSD days), after your remark I see a chance that Intel is actively hurting the competition: quite unacceptable, perhaps even illegal, once you find a court and bring the patience to have it judged so.
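In the meantime, a quick probe for whether a given volume has any TRIM path at all (my assumption below: the RAID-0 volume is D:; elevated prompt again, Python just as glue):

    # "defrag /L" asks Windows to retrim the volume's free space.
    # On a volume whose driver reports a plain hard disk, it refuses,
    # which makes it a quick pass/fail test for an end-to-end TRIM path.
    import subprocess

    r = subprocess.run(["defrag", "D:", "/L"], capture_output=True, text=True)
    print(r.stdout or r.stderr)
    # Completes -> the storage stack accepts TRIM for that volume.
    # An error like "not supported by the hardware backing the volume"
    # -> no TRIM path.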
So where exactly do I need to fiddle to toggle that Intel-only bit?