Did you read at all what I wrote?
As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. making the "sequential data" of big files actually sequential by placing it in a contiguous LBA space) has a place.
This is not quite true.
I just ran AS-SSD and ATTO on my drive. The results are at the bottom of this post.
Now, it is important to understand what those results mean. Looking at AS-SSD, for example, notice the difference between the 4K and 4K-64Thrd tests?
The 4K test sends a read/write request for a single 4 KiB block (occupying exactly one sector); when that request is reported as finished, it sends another. This is repeated 2,000 times.
The 4K-64Thrd test has 64 threads doing this concurrently. Notice how it adds up to roughly 40,000 IOPS.
The "sequential" write is 16MB per segment, notice how it adds up to a measly 16 IOPS compared to the 40,000 IOPS on random access.
The limiting factors in the 4K test are the speed of a single die (since a lone request cannot be parallelized), the speed of the controller in handling IO requests, and the latency (not bandwidth, latency) of your connection (SATA); the latter two add up to a significant amount. The limiting factor in the 64-thread version is just the speed of the controller in handling IO requests.
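A toy model shows why latency, not bandwidth, dominates the 4K test (the 100 µs overhead and 250 MB/s bandwidth are assumed figures for illustration only):

```python
LATENCY_S = 0.0001   # ~100 µs of SATA link + controller overhead (assumed)
BANDWIDTH = 250e6    # ~250 MB/s raw transfer bandwidth (assumed)

def request_time(size_bytes: int) -> float:
    """Toy model: time per request = fixed latency + transfer time."""
    return LATENCY_S + size_bytes / BANDWIDTH

for size in (4 * 1024, 16 * 1024 * 1024):
    share = LATENCY_S / request_time(size) * 100
    print(f"{size:>10} B request: {share:.0f}% of the time is latency")
    # ~86% latency for a 4 KiB request, well under 1% for 16 MiB
```

At 4 KiB the fixed per-request overhead is most of the cost; at 16 MiB it all but vanishes, which is exactly why the big-segment test hides the controller/latency overhead.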
The speed in the "sequential" test is one where the speed at which the controller processes IO requests is rendered insignificant and mostly measures the full parallellized performance. Note that in this case the data is access/stored on multiple NAND, in parallel, in a random pattern. This is RANDOM access brought about by the OS requesting sequential set of data from the SSD controller.
A 16 MB "sequential" transfer would be sequential on an HDD, but is actually "low-thread concurrent random" access for an SSD.
4K random is closest (though somewhat under, due to latency and controller IO handling speed) to the performance you would have gotten had you been able to force truly "sequential" access on an SSD (you can't force it without modifying the firmware, so it's moot). If you tried to access truly sequential data on an SSD, it could not be parallelized. It is also equal to the speed of "individual random access": the speed you get when NOTHING ELSE is using the drive and a single 4 KiB read/write is requested.
4K-64thread "random" gives you on an SSD "high thread concurrent random"
4K random is random individual access, while 16 MB is random concurrent access on an SSD. There are no sequential writes/reads on an SSD.
How does this all tie in to fragmentation? Unless you artificially fragment a file into 4 KiB chunks (i.e. 100% fragmentation; a 4.5 GiB DVD image would be in 1,179,648 such chunks), you will never realistically see 4K speeds.
A heavily fragmented file would still be in the "sequential access" range for an SSD. For example, take the above-mentioned DVD image and fragment it into 100 separate chunks: you end up with ~46 MiB per chunk.
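The fragment arithmetic for both examples, in a few lines (binary units, 4.5 GiB image):

```python
GIB = 1024 ** 3
KIB = 1024
MIB = 1024 ** 2
image_size = int(4.5 * GIB)            # 4,831,838,208 bytes

# Worst case: every fragment is a single 4 KiB sector.
sector_chunks = image_size // (4 * KIB)
print(sector_chunks)                    # 1,179,648 fragments

# "Heavy" fragmentation into 100 pieces still leaves big chunks.
chunk_mib = image_size / 100 / MIB
print(round(chunk_mib, 2))              # ~46.08 MiB per fragment
```

Even "heavy" fragmentation leaves each fragment thousands of times larger than the 4 KiB worst case.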
Ah, but where lies the cutoff? Does it really need to be 16 MB? Well, let's take a look at ATTO.
As you can see below, queue depth (the number of concurrent requests) makes a big difference. At QD1, notice that it reaches near-maximum speed at about 0.5 MB (there is still a tiny bit of latency/controller overhead, which becomes less significant as transfers get bigger). So the above-mentioned DVD image could be in 9,216 chunks of 0.5 MiB each and STILL show no loss of speed. Only if your chunks get even smaller do you see a discernible (via benchmark, not human perception) loss of speed.
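Checking the 0.5 MiB cutoff against the same 4.5 GiB DVD image:

```python
GIB = 1024 ** 3
MIB = 1024 ** 2
image_size = int(4.5 * GIB)

# How many fragments before fragments drop below the 0.5 MiB cutoff?
chunks = image_size // (MIB // 2)
print(chunks)   # 9,216 fragments of 0.5 MiB, still near full speed at QD1
```

You would need a file shredded into more than nine thousand pieces before even a benchmark could tell the difference.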
So when does defragging your SSD make a difference (not necessarily a NOTICEABLE difference to the user, just a difference at all)? Only if all of the following are true:
1. You are accessing a single file.
2. Said file is in fragments smaller than 0.5 MiB (an unnatural level of fragmentation).
3. Your OS uses QD1 and waits until each chunk is retrieved before asking for the next one (AFAIK this was an issue with Win9x but not with modern OSes; also, let us not forget NCQ, http://en.wikipedia.org/wiki/Native_Command_Queuing which works nicely with SSDs).
Unfortunately, ATTO does not generate a new randomized test file every time; as a result, testing without direct writes (allowing the OS to send multiple concurrent requests for a fragmented file) reaches several GiB/s due to SuperFetch caching. A specialized benchmark would be needed to test the OS's handling of a fragmented file, but the info I presented thus far is all a "worst case scenario" (worst case for the argument that you don't need to defrag, best case for those arguing that you should defrag your SSD), and despite that it still doesn't show any benchmarkable benefit to defragging (much less a user-appreciable one).
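The three conditions above can be captured in a simple predicate (the function name and parameters are mine, just restating the checklist, not from any real tool):

```python
def defrag_could_matter(single_file: bool,
                        smallest_fragment_bytes: int,
                        queue_depth: int) -> bool:
    """True only in the fringe case where SSD defragmentation could make
    a benchmarkable (not necessarily noticeable) difference."""
    HALF_MIB = 512 * 1024
    return (single_file                              # condition 1
            and smallest_fragment_bytes < HALF_MIB   # condition 2
            and queue_depth == 1)                    # condition 3: OS waits on each chunk

# A typical modern-OS read of a moderately fragmented file fails the test:
print(defrag_could_matter(True, 46 * 1024 * 1024, 32))   # False
```

All three conditions have to hold at once, which is why the case for SSD defragging is so narrow.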
IF specialized testing shows that the implementation in modern OSes is somehow deficient (which AFAIK it isn't), then and only then, and only until such a deficiency is fixed via an update, would you see a benchmarkable speed benefit, in fringe cases, from defragging fragments that are under 0.5 MiB in size on an SSD.
Furthermore, all this analysis shows that the "random read" figure touted by SSD reviewers is pointless and heavily misleading. 4K random writes are what you should concern yourself with, as those actually happen: every time a log is updated, a program writes a few dozen bytes, which causes a single-sector write, aka a 4 KiB random write.
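A quick sketch of why a tiny log append still counts as a full 4 KiB random write (the 4 KiB sector size is the common case assumed here):

```python
import math

SECTOR = 4096  # bytes; assumed logical sector size

def sectors_written(append_bytes: int) -> int:
    """Minimum number of whole sectors the drive must write for an append."""
    return max(1, math.ceil(append_bytes / SECTOR))

# A ~40-byte log line still costs one full 4 KiB sector write,
# roughly 100x write amplification for that single append:
print(sectors_written(40))   # 1
```

The drive cannot write less than a sector, so every small log update lands as a 4K random write.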
And there too you must take QD into account: each individual program making single-sector log writes increases the QD by 1 (or more, if it keeps multiple logs).
The only time you will ever experience the "random read" as measured by AnandTech is when a program performs a single read of a single-sector file. However, this measurement already has a name: it's called "access time", and it is tested for individually. (Well, actually that is QD1; AnandTech uses QD3.)
A program accessing many files, aka true random reading, is going to give you high-thread-count random read performance (see the 4K-64Thrd test below), which is only slightly under sequential (due to IO processing overhead) and is not in any way indicative of fragmentation (since fragments are larger than one sector).
The four cornerstones of SSD performance are thus:
1. Random Writes
2. Sequential Writes
3. Access Time
4. Sequential Reads
Benchmarks: