While thinking about this, I remembered Anand mentioning "queue depth" in the context of SSD benchmarks, so I searched the site for "queue depth." That turned up the second post below, from a user named GullLars, in the comments of one of Anand's articles. A follow-up search for GullLars turned up the first comment as well...
Good test, now RAID by GullLars, 10 days ago
This was a good test, and one I've been waiting on for a while. I'm a bit disappointed a 32GB Indilinx Barefoot drive wasn't included. I have a 30GB Vertex in my laptop that performs better sequentially than these numbers and has better random performance than the Kingston V 30GB. The price is slightly higher, though.
Ref screenshot:
http://www.diskusjon.no/index.php?ap...tach_id=339908 CDM 3.0 + WEI for my laptop.
Now the next thing I hope AnandTech will do regarding SSDs is a comparison of RAID arrays of cheap, low-capacity SSDs vs. single high-capacity SSDs. This is something no other recognized tech site has done yet, but enthusiasts have been doing it for years. Example:
http://www.nextlevelhardware.com/storage/battleship/
I'll also mention Nizzen, an enthusiast on a forum I frequent, who set a WR in PCMark Vantage last spring with his 24/7 setup and is still in the top 5 with the same setup (updated in August with 4GB RAM on the Areca). The key was an Areca 1680ix-12 with a RAID-0 of several (7, I think) OCZ Vertex drives.
ORB result page:
http://service.futuremark.com/result...eResultType=18
24740 PCMarks, WAY ahead of the highest score in your benchmark lists. The same level of disk performance can be had with an LSI 9211-8i and 8 30-40GB SSDs in RAID-0 for about $1000 (less than the cost of two 256GB SSDs).
Suggested lineup for such an article: 4-drive RAID-0 of the Kingston V 30GB, Intel x25-V, and Indilinx Barefoot 32GB (Vertex?); 2-drive RAID-0 of the SF-1200/1500 50GB, Kingston SSDNow V+ 64GB, Indilinx Barefoot 64GB, and Intel x25-M 80GB; and single 100/128/160GB SSDs with various controllers.
Regarding performance degradation in RAID without TRIM: increased reserved area can help negate the degradation (ref. the IDF whitepaper on spare area). Increasing the spare area to ~20-25% from the default ~7% (on most SSDs) will ensure the degradation is not noticeable to users under normal usage models.
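To put rough numbers on that spare-area suggestion, here is a quick back-of-the-envelope sketch in Python (mine, not GullLars's or the whitepaper's; the 32GB raw-flash figure behind a nominal 30GB drive is an assumption) showing how much capacity you would expose to reach a given over-provisioning level:

```python
# Rough sketch (not from the original post): raising an SSD's effective spare
# area just means exposing/partitioning less of the raw flash.
# The 32GB raw figure for a nominal "30GB" drive is an assumption; exact
# numbers vary by model.

def usable_capacity(raw_gb: float, target_spare: float) -> float:
    """Usable capacity (GB) to expose so that `target_spare` (e.g. 0.25 for
    25%) of the raw flash is left unallocated as spare area."""
    return raw_gb * (1.0 - target_spare)

raw = 32.0  # assumed raw flash behind a nominal 30GB drive
for spare in (0.07, 0.20, 0.25):
    print(f"spare area {spare:.0%}: expose ~{usable_capacity(raw, spare):.1f} GB")
```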
Additional note on SSD RAID, IOPS, and QD by GullLars, 31 days ago
Just thought I'd add a link to a couple of graphs I made of IOPS scaling as a function of queue depth for 1-4 x25-V in RAID-0 from ICH10R, compared to the x25-M 80GB and 160GB and 2 x25-M 80GB in RAID-0. These are in the same price range, and the graphs show why I think Anand's reviews don't tell the whole story when nothing is tested beyond QD 8.
link:
http://www.diskusjon.no/index.php?ap...tach_id=348638
The tests were done by 3 users at a forum I frequent; the username is listed in front of the setup that was benchmarked.
The IOmeter config used was: 1GB test file, 30-second run, 2-second ramp. Block sizes 0.5KB, 4KB, 16KB, 64KB; queue depths 1-128, 2^n stepping. This is a small part of a project from a month back mapping SSD and SSD RAID scaling by block size and queue depth (block sizes 0.5KB-64KB, 2^n stepping; QD 1-128, 2^n stepping).
ATTO is also nice for showing how sequential performance scales with block size (and possibly queue depth).
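The 2^n block-size/queue-depth grid and the point about testing beyond QD 8 are easy to make concrete. Below is a short Python sketch (my own illustration, not GullLars's scripts or data; the per-drive IOPS ceiling and the queue depth needed to saturate one drive are assumed figures) that enumerates the same grid and runs a naive saturation model. In the model, a single drive and a 4-drive RAID-0 look identical up to about QD 8 and only diverge at the deeper queue depths that stop short of being tested.

```python
# Illustrative sketch only -- per_drive_max and qd_to_saturate are assumptions,
# not measured numbers from the graphs linked above.

block_sizes_kb = [0.5 * 2**i for i in range(8)]  # 0.5KB .. 64KB, 2^n stepping
queue_depths = [2**i for i in range(8)]          # QD 1 .. 128, 2^n stepping
# (block size is the other axis of the original mapping project; the toy model
# below only sweeps queue depth)

def iops_estimate(qd: int, n_drives: int,
                  per_drive_max: float = 35_000,  # assumed random-read ceiling per drive
                  qd_to_saturate: int = 8) -> float:
    """Naive model: IOPS rises roughly linearly with queue depth until every
    drive in the stripe has enough outstanding I/Os, then flattens out."""
    return per_drive_max * min(n_drives, qd / qd_to_saturate)

for qd in queue_depths:
    single = iops_estimate(qd, n_drives=1)
    raid4 = iops_estimate(qd, n_drives=4)
    print(f"QD {qd:>3}: 1 drive ~{single:>8,.0f} IOPS | 4-drive RAID-0 ~{raid4:>8,.0f} IOPS")
```

Under those assumptions the two configurations print the same numbers through QD 8, which is exactly why a benchmark capped at QD 8 would make the RAID-0 array look like a waste of drives.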