Defragging SSD


Scout80

Member
Mar 13, 2012
80
0
0
This thread got me thinking, "I wonder if win7 recognizes the fact that I have an SSD and doesn't schedule regular defrags?"

The answer: No, it does not. My computer has been set, by default, to automatically defrag on a schedule.
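If anyone wants to check their own machine, here is a quick Python sketch (Windows only) that just shells out to the built-in schtasks and fsutil tools; the task path below is the default Win7 one, so adjust it if yours differs.

import subprocess

def run(cmd):
    # Return the combined stdout/stderr of a command as text.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

# Is the weekly defrag task still scheduled?  (Default Win7 task path; adjust if needed.)
print(run(["schtasks", "/Query", "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag"]))

# Is TRIM enabled?  "DisableDeleteNotify = 0" means TRIM notifications are being sent.
print(run(["fsutil", "behavior", "query", "DisableDeleteNotify"]))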
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
This thread got me thinking, "I wonder if win7 recognizes the fact that I have an SSD and doesn't schedule regular defrags?"

The answer: No, it does not. My computer has been set, by default, to automatically defrag on a schedule.

Run the WEI (Windows Experience Index) assessment and Windows should adjust the defrag settings as needed.
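For those who'd rather script it, re-running the assessment is just winsat from an elevated prompt; a minimal sketch wrapping the stock command in Python:

import subprocess

# Re-run the full Windows System Assessment Tool; must be run from an elevated prompt.
subprocess.run(["winsat", "formal"], check=True)

# Then re-check the defrag schedule (see the snippet earlier in the thread) to confirm
# the SSD volume is no longer on it.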
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
Would there be a way to run a dummy test where, instead of running a defrag utility on the drive, you run some kind of ... I don't know, something that hits the drive a bunch of times in a similar pattern to what a defrag utility does, except instead of defragging it, it just randomly writes stuff and messes things around for a bit?

Maybe if you ran that dummy test for the same amount of time as the defrag utility was run, then after that dummy test you could check the drive to see if it has improved in performance similar to when you run the defrag, just by virtue of allowing the drive to mess around with itself (sorry, I don't know the technical details)?

I guess I'm suggesting the idea of running a "control" group. Like giving one group of test subjects a placebo and another group the actual medicine. It appears we have a test for the medicine as done by several members (with apparently divergent results?), but nobody has done a control/placebo test.

Sometimes the placebo works better than the medicine, but humans are weird compared to predictable SSDs.
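Something like this rough Python sketch is the kind of "placebo" pass I mean: it just rewrites random 4 KB blocks inside a scratch file for a fixed amount of time, without reorganizing anything. The file name, size, and duration are placeholders; you'd run your usual benchmark before and after, same as with the real defrag.

import os
import random
import time

SCRATCH_PATH = "placebo_scratch.bin"   # placeholder path on the SSD under test
SCRATCH_SIZE = 2 * 1024**3             # 2 GiB scratch file (illustrative)
BLOCK = 4096                           # one 4 KB "sector"
DURATION_S = 600                       # run as long as the real defrag pass took

# Create the scratch file once.
if not os.path.exists(SCRATCH_PATH) or os.path.getsize(SCRATCH_PATH) < SCRATCH_SIZE:
    with open(SCRATCH_PATH, "wb") as f:
        f.truncate(SCRATCH_SIZE)

deadline = time.time() + DURATION_S
writes = 0
with open(SCRATCH_PATH, "r+b") as f:
    while time.time() < deadline:
        # Pick a random block-aligned offset and overwrite it with random data.
        offset = random.randrange(0, SCRATCH_SIZE // BLOCK) * BLOCK
        f.seek(offset)
        f.write(os.urandom(BLOCK))
        writes += 1
        if writes % 10000 == 0:
            f.flush()
            os.fsync(f.fileno())       # push data to the drive, not just the OS cache

print(f"placebo pass done: {writes} random 4 KiB writes")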
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
It really depends on the SSD being tested, because they all have varying degrees of ability and aggressiveness. I.e., some do better at on-the-fly recovery, or have preset limits that notify the controller it needs to get busy and clean ahead in anticipation of the next write load, aggressive (or lazy/deferred, such as SandForce) recovery from TRIM notifications, etc. Even the amount of factory OP allowance or DRAM size will have an impact.

But generally speaking, writing random data will consume a greater number of partial blocks, and therefore a greater number of blocks written to (compared to sequentially written data), which inevitably causes additional overhead as the controller tries to sort it all out and maintain a balance between drive longevity and consistent performance.

Not to mention that a defrag tool doesn't necessarily cause the logical data layout to be randomly placed at the physical level. In fact, that is VERY old, early-firmware thinking, and these newer drives have much more control and ability to place data contiguously whenever wear leveling will allow.

So basically, it just depends on all the variables listed above, in conjunction with the level of fresh blocks remaining on the drive at that particular time.

Another way of looking at it is that a larger drive with more fresh block availability (current dirty state) and/or factory OP will generally respond more favorably to a free space consolidation than a smaller drive that is fairly full (or dirty), which has less OP space to pull from when you write more data during the consolidation.

So, if the drive is already near the point of lost performance due to its internal data layout/fresh block reserve, the greater the chance that the defrag (or consolidation) will just bog it down that much more as it consumes more blocks to move that data around.

On the other hand, some drives will respond quite favorably even if they are in short supply of fresh blocks/OP, because the defrag/consolidation moves enough data to force the drive into immediate GC mode (on-the-fly recovery). Which is what some of the others have rightly tried to get across.

Which all means that most of those tests would need to use the identical drive and similar usage/data patterns to be fully comparable in results. Otherwise it's 1% on this drive, 3% on that one, and even potential losses on another.
 
Last edited:

icanhascpu2

Senior member
Jun 18, 2009
228
0
0
"Which is what some of the others have rightly tried to get across. "

Yet they have failed to show any sort of positive results. Let TRIM/garbage collection and the firmware do their work. Trying to defrag an SSD, using something designed for HDD hardware, is silly.
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
"Which is what some of the others have rightly tried to get across. "

Yet they have failed to show any sort of positive results. Let TRIM/garbage collection and the firmware do their work. Trying to defrag an SSD, using something designed for HDD hardware, is silly.

Did you read at all what I wrote?

As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. rendering "sequential data" in the form of big files "actually sequential" by putting it into a contiguous LBA space) has a place. If the SSD can 'predict' the next LBA being read, which it can for sequential reads, then it can read almost ten times faster than when it can't. The reason you don't see this a lot is that fragmentation has become relatively rare. But if you use an SSD for a while, with a lot of writes and a relatively full disk, you will get fragmentation, your random-to-sequential read ratio will increase, and performance will decrease.

There's really no myth there. The only reason defragmentation is not as important is that in the age of HDDs random reads were another one or two orders of magnitude slower, so we just don't really notice that our big file read slowed from a supposed 500 MB/s to 50 MB/s; it's not nearly as noticeable as when it slows down to below 1 MB/s, as it is prone to do with HDDs.

This doesn't change the principle of fragmentation, or the fact that sequential reads will always be faster.
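If you want to put a rough number on that gap yourself, here is a crude Python sketch that times reading the same file two ways: front to back in 1 MiB chunks, and in 4 KiB blocks at random offsets. The file name and sizes are placeholders, and the OS page cache will flatter the random pass unless the file is much larger than RAM.

import os
import random
import time

TEST_FILE = "readtest.bin"        # placeholder: a large existing file on the SSD
SEQ_CHUNK = 1024 * 1024           # 1 MiB sequential reads
RND_CHUNK = 4096                  # 4 KiB random reads
RND_COUNT = 20000                 # number of random reads to sample

size = os.path.getsize(TEST_FILE)

# Sequential pass: read the file front to back.
start = time.time()
read_bytes = 0
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(SEQ_CHUNK):
        read_bytes += len(chunk)
seq_mbps = read_bytes / (time.time() - start) / 1e6

# Random pass: read 4 KiB blocks at random aligned offsets (QD1).
start = time.time()
with open(TEST_FILE, "rb") as f:
    for _ in range(RND_COUNT):
        f.seek(random.randrange(0, size // RND_CHUNK) * RND_CHUNK)
        f.read(RND_CHUNK)
rnd_mbps = RND_COUNT * RND_CHUNK / (time.time() - start) / 1e6

print(f"sequential: {seq_mbps:.0f} MB/s   random 4K (QD1): {rnd_mbps:.0f} MB/s")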
 
Last edited:

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Did you read at all what I wrote?

As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. rendering "sequential data" in the form of big files "actually sequential" by putting it into a contiguous LBA space) has a place. If the SSD can 'predict' the next LBA being read, which it can for sequential reads, then it can read almost ten times faster than when it can't. The reason you don't see this a lot is that fragmentation has become relatively rare. But if you use an SSD for a while, with a lot of writes and a relatively full disk, you will get fragmentation, your random-to-sequential read ratio will increase, and performance will decrease.

There's really no myth there. The only reason defragmentation is not as important is that in the age of HDDs random reads were another one or two orders of magnitude slower, so we just don't really notice that our big file read slowed from a supposed 500 MB/s to 50 MB/s; it's not nearly as noticeable as when it slows down to below 1 MB/s, as it is prone to do with HDDs.

This doesn't change the principle of fragmentation, or the fact that sequential reads will always be faster.

And for most people's usage patterns, large sequential reads are the exception rather than the rule. Windows already does heavy filesystem caching and read-ahead to minimize the effects of seek time. And when we're talking about "seek times" measured in microseconds, a difference of a few percent isn't going to be appreciable, and the effort required to install and configure 3rd-party defrag software is going to heavily outweigh any gains it may give you.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Did you read at all what I wrote?

As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. rendering "sequential data" in the form of big files "actually sequential" by putting it into a contiguous LBA space) has a place.

This is not quite true.

I just ran AS-SSD and ATTO on my drive. The results are at the bottom of this post.

Now, it is important to understand what those results mean. Looking at AS-SSD, for example, notice the difference between 4K and 4K-64 threads?
The 4K test sends a write/read request for a single 4 KB block (occupying exactly one sector); when this is reported as finished, it sends another. This is repeated 2000 times.
The 64-thread test has 64 threads doing this concurrently. Notice how it adds up to 40,000 IOPS.

The "sequential" write is 16MB per segment, notice how it adds up to a measly 16 IOPS compared to the 40,000 IOPS on random access.

The limiting factor in the 4K test is the speed of a single die (since it cannot be parallelized), as well as the speed of the controller in handling IO requests and the latency (not bandwidth, latency) of your connection (SATA); the latter two add up to a significant amount. The limiting factor in the 64-thread version is just the speed of the controller in handling IO requests.
The "sequential" test renders the speed at which the controller processes IO requests insignificant and mostly measures the fully parallelized performance. Note that in this case the data is accessed/stored on multiple NAND dies, in parallel, in a random pattern. This is RANDOM access brought about by the OS requesting a sequential set of data from the SSD controller.

16 MB sequential would be sequential on an HDD but is actually "low-thread concurrent random" for an SSD.

4K random is closest (but somewhat under, due to latency and controller IO handling speed) to the performance you would have gotten had you been able to force truly "sequential" access on an SSD (but you can't force it without modifying the firmware, so it's moot), because truly sequential data on an SSD cannot be parallelized. It is also equal to the speed of "individual random access", as in the speed when you have NOTHING ELSE using the drive and one single 4K write/read is asked for.

4K-64-thread "random" gives you "high-thread concurrent random" on an SSD.

4K random is random individual access, while 16 MB "sequential" is random concurrent access on an SSD. There are no sequential writes/reads on an SSD.

How does this all tie in to fragmentation? Unless you artificially fragment a file into 4K chunks (aka 100% fragmentation; a 4.5 GiB DVD image would have about 1,179,648 such chunks), you will never realistically see 4K speeds.
A heavily fragmented file would still be in the "sequential access" range for an SSD. For example, take the above-mentioned DVD image and fragment it into 100 separate chunks. You will end up with roughly 46 MiB chunks.

Ah, but where does the cutoff lie? Does it really need to be 16 MB? Well, let's take a look at ATTO.
As you can see below, queue depth (number of concurrent requests) makes a big difference. At QD1 you notice it reaches near max speed at about 0.5 MB (there is still a tiny bit of latency/controller IO processing overhead that gets less significant the bigger the request). So the above-mentioned DVD image could be in 9,216 chunks (0.5 MiB each) and STILL have no loss of speed. Only if your chunks get even smaller do you get a discernible (via benchmark, not human perception) loss of speed.
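The chunk math above is just the image size divided by the fragment size; a quick sanity check in Python (assuming a 4.5 GiB image):

# Fragment-count arithmetic for a 4.5 GiB DVD image (size assumed).
GiB = 1024**3
MiB = 1024**2
KiB = 1024

image = 4.5 * GiB

print(image / (4 * KiB))      # ~1,179,648 fragments at 4 KiB each (100% fragmentation)
print(image / 100 / MiB)      # 100 fragments -> ~46 MiB each
print(image / (0.5 * MiB))    # ~9,216 fragments at 0.5 MiB each, still "sequential" for the SSD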

So when does defragging your SSD make a difference (not necessarily a NOTICEABLE difference to the user, just a difference at all)? If all the following are true.
1. You are accessing a single file.
2. Said single file is in fragments smaller than 0.5MiB (unnatural fragmentation level)
3. Your OS uses QD1 and waits until each chunk is retrieved before asking for the next one (AFAIK this is an issue with Win9x but not with a modern OS; also, let us not forget NCQ, http://en.wikipedia.org/wiki/Native_Command_Queuing, which works nicely with SSDs). Unfortunately, ATTO does not generate a new randomized test file every time; as a result, testing without direct write (allowing the OS to send multiple concurrent requests for a fragmented file) gets several GiB/s due to Superfetch. A specialized benchmark would be needed to test the OS's handling of a fragmented file (a sketch of such a test follows below), but the info I presented thus far is all "worst case scenario" (worst case for the argument that you don't need to defrag, best case for those arguing that you should defrag your SSD), and despite that it still doesn't show any benchmarkable benefit to defragging (much less a user-appreciable one).
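A rough sketch of what such a specialized benchmark could look like, assuming you simulate the fragments by slicing an ordinary file into sub-0.5 MiB pieces (the fragment size and thread count are illustrative, and real fragment offsets would have to come from the filesystem's extent map):

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "fragtest.bin"      # placeholder: a large file on the SSD under test
FRAGMENT = 256 * 1024           # pretend fragments of 256 KiB (< 0.5 MiB)
THREADS = 16

size = os.path.getsize(TEST_FILE)
offsets = list(range(0, size - FRAGMENT, FRAGMENT))
random.shuffle(offsets)          # visit the fragments in a scattered order

def read_fragment(offset):
    # A fresh handle per read keeps the sketch thread-safe (at some extra open() cost).
    with open(TEST_FILE, "rb") as f:
        f.seek(offset)
        return len(f.read(FRAGMENT))

# QD1: wait for each fragment before requesting the next one.
start = time.time()
serial_bytes = sum(read_fragment(o) for o in offsets)
serial_s = time.time() - start

# Concurrent: keep many fragment reads in flight at once.
start = time.time()
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    concurrent_bytes = sum(pool.map(read_fragment, offsets))
concurrent_s = time.time() - start

print(f"serial (QD1): {serial_bytes / serial_s / 1e6:.0f} MB/s")
print(f"concurrent:   {concurrent_bytes / concurrent_s / 1e6:.0f} MB/s")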

IF specialized testing shows that the implementation in a modern OS is somehow deficient (which AFAIK it isn't), then and only then, and only until such a deficiency is fixed via an update, would you see a benchmarkable speed benefit, in fringe cases, from defragging fragments that are under 0.5 MB in size on an SSD.

Furthermore, all this analysis shows that the SSD "random read" figure touted by reviewers is pointless and heavily misleading. 4K writes are what you should concern yourself with, as that is something that does happen (every time a log is updated you write a few dozen bytes, which causes a single-sector write, aka a 4K random write).
And there too you must take QD into account. Each individual program making single-sector log writes increases the QD by 1 (or more if it keeps multiple logs).
The only time you will ever experience the "random read" as measured by AnandTech is when a program does a single read of a single-sector file. However, this measurement already has a name, it's called "Access Time", and it is tested for individually (well, actually that is QD1; AnandTech uses QD3).
A program accessing many files, aka true random reading, is going to give you high-thread-count random read performance (see the 4K-64-thread test below), which is only slightly under sequential (due to IO processing overhead) and is not in any way indicative of fragmentation (since fragments are larger than one sector).

The four cornerstones of SSD performance are thus:
1. Random Writes
2. Sequential Writes
3. Access Time
4. Sequential Reads

Benchmarks: [AS-SSD and ATTO screenshots]
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
There's really no myth there. The only reason defragmentation is not as important is that in the age of HDDs random reads were another one or two orders of magnitude slower, so we just don't really notice that our big file read slowed from a supposed 500 MB/s to 50 MB/s; it's not nearly as noticeable as when it slows down to below 1 MB/s, as it is prone to do with HDDs.
Except that doesn't even happen with HDDs, except on rare super-fragmented files, synthetic benchmarks, and Exdeath's Java project backups. Windows 7 w/ AHCI (read: NCQ) pretty well takes care of it on HDDs. With SSDs, the only real concern is that fragmentation adds more logical IOs for a given amount of data read and written, and it will take severe fragmentation for that to be noticeable.

2. Said single file is in fragments smaller than 0.5MiB (unnatural fragmentation level)
Though that sometimes happens on NTFS, IME. FSes like EXT3/4, JFS, XFS, and so on will only ever get bad enough to worry about if they get too full and medium-sized files then get small edits. NTFS seems to be just enough of a throwback to be able to get such fragmentation with some files, by whatever pathological editing pattern allows it, even with enough free space. Even on NTFS, though, it will tend to be a rare issue, and a manual copy+delete+replace should fix it for months to come, when/if it occurs (an SSD-tuned defrag service could do that only with files averaging < xMB per fragment and fragments > y threshold, and keep NTFS good for many more years, with a negligible increase in host writes over time).
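The manual copy+delete+replace trick is simple enough to script; a minimal sketch for a single file (the path is a placeholder, and finding which files are badly fragmented is left out entirely):

import os
import shutil

# Rewrite one badly fragmented file in a single pass, so NTFS can allocate the new
# copy far more contiguously, then swap it into place.
PATH = r"C:\data\badly_fragmented.bin"   # placeholder
TMP = PATH + ".defrag_tmp"

shutil.copy2(PATH, TMP)      # write a fresh, (mostly) contiguous copy
os.replace(TMP, PATH)        # swap it into place, dropping the old fragments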
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
ah, but even if condition 1 & 2 occur, you need condition 3 to also occur.
Or, to have a ton of fragments to read. If stuck at a QD of 1, a faster drive will still be a faster drive, and no one would forgo NCQ unless they had a reason to (such as adding a new drive to existing hardware/drivers that will not support it).

We are at a point where a 10 MB file in 30 evenly-sized fragments should be considered moderate fragmentation for an HDD, unless copying a bunch of them is all you do, and hardly worth mentioning for an SSD. A 10 MB file with >100 little 4-16K edits scattered across the drive's address space over its lifetime...now, that's a problem. Rarer than in the NT 4 days, certainly, but I've still seen it a few times on Windows 7. There's no way that level of fragmentation is not going to cause lower performance. OTOH, defragging a whole volume, when a clean copy of a very small number of files (maybe just one) is all that's needed, would be well into the realm of overkill.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
If stuck at a QD of 1, a faster drive will still be a faster drive, and no one would forgo NCQ unless they had a reason to (such as adding a new drive to existing hardware/drivers that will not support it).

So the question now is: does modern Windows have such a defect?
We know from the 64-thread test that if you are accessing MULTIPLE files there is no such defect in the OS.
We would require a specialized test to measure whether accessing a single, heavily fragmented file on a modern OS hits such a defective algorithm (wherein the OS waits for each segment of the file to finish reading before requesting the next one).

We are at a point where a 10 MB file in 30 evenly-sized fragments should be considered moderate fragmentation for an HDD, unless copying a bunch of them is all you do, and hardly worth mentioning for an SSD. A 10 MB file with >100 little 4-16K edits scattered across the drive's address space over its lifetime...now, that's a problem. Rarer than in the NT 4 days, certainly, but I've still seen it a few times on Windows 7. There's no way that level of fragmentation is not going to cause lower performance.

Sure there is a way such a fringe case will not cause lowered performance. All you need to maintain performance in such a case is concurrent requests, rather than the fetching thread WAITING until each segment is fully read before requesting the next one.

If that is NOT the way it is done in Win7, then:
1. It is a bug.
2. It can be fixed via a patch.
3. It creates a singular situation where there IS a benefit to partially defragging an SSD.

Partial defragging was introduced in Windows Vista; it was observed that there is no noticeable difference between reading an HDD file stored in multiple 64 MB fragments and one stored as a single fragment.
As a result, Vista & Win7 consider fragments larger than 64 MB to be non-fragmented, and will only partially defragment files, until they consist of fragments larger than 64 MB.

With an SSD, you would see a benefit in the above-mentioned fringe case from partially defragmenting to a target of 0.5 MB or larger fragments, IF the above-mentioned defect exists in the single-file fetch algorithm of the OS in question.
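To make those two cutoffs concrete, here is a toy Python helper applying the 64 MB figure Vista/Win7 use for HDDs and the ~0.5 MB figure argued above for SSDs; the fragment sizes are made-up inputs, not read from a real volume.

# Toy helper: would a file count as "fragmented enough to bother with" under a given
# fragment-size cutoff?  64 MiB ~ the Vista/Win7 HDD threshold discussed above,
# 0.5 MiB ~ the SSD cutoff argued for in this thread.
MiB = 1024 * 1024

def needs_partial_defrag(fragment_sizes, cutoff_bytes):
    # True if any fragment is smaller than the cutoff (so coalescing could help).
    return any(size < cutoff_bytes for size in fragment_sizes)

# Example: a 4.5 GiB file left in 100 fragments of ~46 MiB each.
fragments = [46 * MiB] * 100

print(needs_partial_defrag(fragments, 64 * MiB))    # True for an HDD (Vista/Win7 rule)
print(needs_partial_defrag(fragments, MiB // 2))    # False for an SSD (0.5 MiB cutoff)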
 
Last edited: