Defragging SSD

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

taltamir

Lifer
Mar 21, 2004
13,576
6
76
An SSD can show increased performance and efficiency from a purpose-built restoration image which has been consolidated and defragged.. more so than one that has just been installed with the typical protocol.

You say you met with disbelief from so-called experts. I don't know who you talked to, but as an actual expert I can tell you that it is a plausible scenario worth testing. Performing a secure erase followed by writing a defragged image onto an SSD should result in "sequential writes" whose actual data placement is well parallelized; at least in the short term, until the data is shifted around by GC and wear leveling.

This scenario, however, has absolutely nothing to do with defragging your SSD, because SSDs present the OS with a virtual drive mapped to ever-shifting actual data locations. Running defrag software on an SSD merely shuffles your data randomly around the drive.
If you are defragging your SSD as part of a process to do something else entirely, fine, enjoy. The act of defragmenting randomly shuffles data on the SSD in the process, but that is irrelevant, since it is not the goal but one step in a specific process that isn't inherently wrong.

Restoring a defragged image to a secure-erased SSD is completely different from defragging your SSD, and it is in no way, shape, or form a justification for defragging your SSD!

6. Take another backup image of the optimized and consolidated version. Even test it if you want a peek at the results. Even at this stage I usually see tangible results, despite the tested/partially used or possibly benchmark-spent drive state.
Bull. Restoring a defragged image can theoretically help.
Defragging an existing install randomly shuffles data. This can help OR hurt. The fact that you say "usually" indicates you have seen the latter happen yourself too.
On a drive with aggressive GC it might seem to help more often than it hurts, but that is only because the GC is being triggered to do extra work. Something that could have been done far more efficiently with your manufacturer's SSD tools, or even third-party tools specialized for the task, rather than by running a defrag on your SSD.

This ridiculous claim is why I am going to distrust your supposed experimental data (you can trust data and still disagree with the conclusions) until I can reproduce it (which isn't going to be anytime soon, given how busy I am).
 
Last edited:

velis

Senior member
Jul 28, 2005
600
14
81
I have just completed a stupid test on my development partition.
I ran grep on my sources dir searching for a particular expression which yielded ~400 hits.
Since my work machine doesn't have Diskeeper, the partition was pretty heavily fragmented. I rebooted, waited for all services to start (until disk activity stopped), and ran grep; then a Defraggler pass, another reboot, and another search.

Total on disk: 141,000 files, 11,000 folders, 16 GB of file data
Fragmented: 18,000 files in 212,000 fragments (up to 100 on some of the searched files)
grep scope: 24,554 files, 1,843 folders, 2.28 GB

Fragmented run: 12:44:10 - 12:48:35 = 04:25
Defragmented run: 14:40:30 - 14:44:50 = 04:20

Seems defragging the partition yields little to no performance benefit. Modern processors are powerful enough to handle whatever additional processing FS fragmentation causes.
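As a quick sanity check on those timings, a small (hypothetical, not from the thread) script can turn the wall-clock stamps into elapsed seconds and a percentage difference:

```python
from datetime import datetime

def elapsed_seconds(start: str, end: str) -> int:
    """Elapsed seconds between two HH:MM:SS wall-clock stamps (same day)."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return int(delta.total_seconds())

# Wall-clock stamps from the grep runs above
fragmented = elapsed_seconds("12:44:10", "12:48:35")    # 265 s
defragmented = elapsed_seconds("14:40:30", "14:44:50")  # 260 s

print(f"{(fragmented - defragmented) / fragmented:.1%}")  # -> 1.9%
```

So the defragmented run was under 2% faster, which is within ordinary run-to-run noise.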
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
I read through the original locked thread and this one, and all I can say is... wow. That's saying something because I've probably only read a handful of threads on these whole forums in recent weeks.

Sure, the question is worth asking. However, the horse is now dead and the barn has burnt down.

FACTS, as understood by Zap

Yes, defragmenting an SSD will reduce the life span of your SSD since NAND has finite writes.

No, doing it a few times probably won't matter much in the long run, since you have a better chance of the controller/firmware bricking itself than the NAND wearing out.

Yes, there may be theoretical instances where defragging an SSD can improve performance, however miniscule.

No, it probably isn't a good use of your time.

Professionals who make a living at stuff like this have said to not bother defragging SSDs. This includes manufacturers of SSDs, vendors of defragging software and professional hardware reviewers.
 

kmmatney

Diamond Member
Jun 19, 2000
4,363
1
81
"Yes, there may be theoretical instances where defragging an SSD can improve performance, however miniscule."

I'm not sure it even helps at all. However, what probably does help is the consolidation of free space, as explained by PerfectDisk. Traditional defragging is certainly not beneficial:

http://www.raxco.com/ssd-optimization.aspx

I think PerfectDisk is just doing a manual TRIM, sort of like what the manufacturer's toolbox software can do. I downloaded the PerfectDisk trial and am running the SSD optimize, and it's taking quite a long time, so it may be more thorough than the Samsung toolbox, which I just ran before running my initial benchmark. I'm not sure I would want to run the PerfectDisk optimize very often - it looks like it's going to take about an hour! I'll see how my benchmarks do. I have a Samsung 256GB 830. I don't run benchmarks very often at all - I just decided to do it for this study.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
"Yes, there may be theoretical instances where defragging an SSD can improve performance, however miniscule."

I'm not sure it even helps at all. However, what probably does help is the consolidation of free space, as explained by PerfectDisk. Traditional defragging is certainly not beneficial:

http://www.raxco.com/ssd-optimization.aspx

I think PerfectDisk is just doing a manual TRIM, sort of like what the manufacturer's toolbox software can do. I downloaded the PerfectDisk trial and am running the SSD optimize, and it's taking quite a long time, so it may be more thorough than the Samsung toolbox, which I just ran before running my initial benchmark. I'm not sure I would want to run the PerfectDisk optimize very often - it looks like it's going to take about an hour! I'll see how my benchmarks do. I have a Samsung 256GB 830. I don't run benchmarks very often at all - I just decided to do it for this study.

Still reads as bullshit to me. Defragging spindle drives is only marginally useful and doing it to SSDs is even less useful.
 

kmmatney

Diamond Member
Jun 19, 2000
4,363
1
81
Here are my test results with PerfectDisk and my Samsung 830. I first ran Samsung's Magician software and optimized the disk. I ran the AS-SSD benchmark and had a score of 1037.

I then downloaded PerfectDisk and ran it on my drive. It automatically detected it as an SSD and ran its "SSD Optimize" routine. This took 2.5 hours to run. I then uninstalled PerfectDisk and reran AS-SSD. My new score is 1045. The write score stayed the same, while my read score went up a little.

So it did help the benchmark a little. I don't believe any actual "defragging" is going on. I think it's just running a heavy-duty manual TRIM, and the consolidation of free space is also helping. Keep in mind it took 2.5 hours to run on my SSD, whereas the Samsung utility takes about 30 seconds.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
OK.. let's keep the party going then.

[benchmark screenshots from the original post not preserved in this archive]
All screens posted are of separate volumes with various degrees of logical free space fragmentation over several months/years of testing. I even have the logical bitmaps for most of the before and afters too. See any trends or patterns going on in the above screens?

I've got dozens of these from 4 different SSD controllers, and I can keep going here longer than most of the smart guys can keep up. And as time permits?.. we can even cover different drives (controllers) and machines, should you have the gumption to do so.

In the end I don't come here to be a punching bag for geeks or little fellas who hide behind keyboards while calling people out. Or to dispute and debate things with someone to vent out life's little frustrations. I'm too old and tired or busy, so I tend to just say what I feel and post what I find. Don't like it?.. then don't read it.

Orrrr.. you could just do your own testing rather than reading, posturing, and following along with what anyone else tells you.. like a good little sheep. I kid(mostly). lol

Plus,.. if it doesn't work?.. you can always come back here and post your fresh and newly acquired scientific data for some petty satisfaction that you were right.. again. Should be reason enough right there to pull some folks into testing on their own. lol

I'll end with this. There are firmware algorithms present in just about every drive made these days that will effectively promote sequentially laid-out data patterns. To think that all controllers will prioritize WA-reducing wear leveling over all other performance metrics?.. is definitely underestimating today's firmware capability. Furthermore.. it would be silly to think that an SSD with data spread all over the drive at the physical level would not show some type of speed variance from the less-than-desirable data structure. Again, "low latency".. is not quite "no latency", and there is overhead and cumulative loss involved at times. Even RAM suffers from these cumulative latency effects, or we wouldn't be screwing around with it.

There is an always ongoing compromise being made in real time within the drive and firmware mfgrs know that peak along with improved steady state performance is key to selling a high-performance drive.

In effect here.. you are tasking the controller and its firmware to deal with ever-changing data structure and layout requirements by literally forcing some of these data streams on the controller. And as expected.. not every controller/firmware will respond in identical manner. Ones with more aggressive GC will show better results.

PS. just to clarify here.. I do not think that defragging an SSD will typically show much result, because I've tested that too (although highly fragmented drives will show measurable gains). What I do find is that free space consolidation can in fact affect the physical layout of data and force the firmware to "reassess" its current structure. It seemingly gives good results on all of my personal systems and 4 other machines that I maintain on a semi-consistent basis. I should also mention that the largest effects I see are definitely associated with RAID arrays, so there is likely a cumulative effect from laying data across the stripe. The "few MB/s here and there" results from the previous poster are about what I typically see for single-drive systems.
 
Last edited:

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I then downloaded perfect disk, and ran it on my drive. It automatically detected it as an SSD, and ran it's "SSD Optimize" routine.

That would explain the "results" groberts101 claims to have gotten using that very same software.
 

razel

Platinum Member
May 14, 2002
2,337
90
101
Here are my test results with PerfectDisk and my Samsung 830. I first ran Samsung's Magician software and optimized the disk. I ran the AS-SSD benchmark and had a score of 1037.

I then downloaded PerfectDisk and ran it on my drive. It automatically detected it as an SSD and ran its "SSD Optimize" routine. This took 2.5 hours to run. I then uninstalled PerfectDisk and reran AS-SSD. My new score is 1045. The write score stayed the same, while my read score went up a little.

So it did help the benchmark a little.

That's very little. 1037 vs 1045 is barely 1%. Benchmark differences of less than 3-5% are insignificant. You could bench on a different day, when the Sun has thrown off a few more solar flares, and get a different result. It might even be the opposite result, supporting the claim that it slows down your SSD.
 

icanhascpu2

Senior member
Jun 18, 2009
228
0
0
In the end I don't come here to be a punching bag for geeks or little fella's who hide behind keyboards while calling people out. Or to dispute and debate things with someone to vent out life's little frustrations. I'm too old and tired or busy so I tend to just say what I feel and post what I find.

Don't like it?.. then don't read it.

The same goes for you, now doesn't it?
Also, don't act like you're 'busy' when you're doing hard drive tests for fun to see if defragmenting improves performance. You're not fooling anyone, nor impressing anyone. We are all nerds here. It's alright, really.

All of your tests are fine, except that doing one single test per application is a really poor way to get accurate results. All of your differences are basically within the range of what can change between two back-to-back tests on the same drive with nothing else changed. So all of this information is basically not useful. You need a testing methodology that more accurately isolates real differences; otherwise, why even do the work?

Before you say 'Hay man! YU NO DO IT DEN?!', I'd answer: 'hay man, I don't feel like wearing out my SSD for what amounts to digital snake oil, but if you're going to do something, you might as well do it right.'
 
Last edited:

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
What is the point being made here?

No one said "you must do this!". I thought we were mostly adults here who could share based on experience, something more tangible than.. "it's well known".. or "I read this article"... or "the mfgr said". How the hell do you guys think that we (and the mfgrs) learn new things or find weaknesses? We often must push boundaries and go outside the safety nets to find them.

I've also never said to myself "I must find huge results before it can be seen as useful or worth my time". If it's only a measly 1% gain and fully repeatable?.. I'm all over that. Christ.. I spend more time squeezing the last 7 horsepower out of a carburetor than I do on these devices. I always say to each his own.. and what is worth something to one person is not worth the time, effort, or "wearing out an SSD" (which is hilarious in itself, since we're not moving 200 gigs of data around during a consolidation) to others. Some tweak RAM timings and OC CPUs to the last few MHz possible. That's exactly what this is.

As for accuracy in testing? ("one test to quantify results"?).. LOL, funny stuff.. and I won't even go there. All I can say is that I don't get free SSDs to test for no reason. I rub elbows with some pretty smart fellers, and I know how I do by their standards, which is good enough for me. Squabbling with narrow-minded posters around here who only care to speculate without real testing?.. will not easily diminish what I've learned so far through terabytes' worth of testing.
 
Last edited:

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
Is it possible to tell that the benefit is due to the actual utility/test and not something else? I keep wondering if the results are just due to the drive doing its TRIM/garbage collection thing a bunch of times, as triggered by the utility/test. Could that even be tested somehow? Is there any way to isolate the utility/test benefits from the benefits that happen inherently whenever you use the drive and give it a chance to run TRIM/garbage collection? (Sorry, I'm not well informed on SSD details; looking to learn more as I follow this thread.)
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
no one said "you must do this!". I thought we were mostly adults here and could share based on experience of something more tangible than.. "it's well known".. or "I read this article"... or "the mfgr said".

I've never been in space, but I don't need to be to review the overwhelming evidence and come to the conclusion that the earth is round and not flat. You are arguing for the flat earth society here. Adults should also realize when there is no point in continuing a debate since all the evidence points to only one possible conclusion.
 

icanhascpu2

Senior member
Jun 18, 2009
228
0
0
Is it possible to tell that the benefit is due to the actual utility/test and not something else? I keep wondering if the results are just due to the drive doing its TRIM/garbage collection thing a bunch of times, as triggered by the utility/test. Could that even be tested somehow? Is there any way to isolate the utility/test benefits from the benefits that happen inherently whenever you use the drive and give it a chance to run TRIM/garbage collection? (Sorry, I'm not well informed on SSD details; looking to learn more as I follow this thread.)

That's exactly why a single test, as the guy above did, is useless. At least a dozen tests need to be done on a blank, fully trimmed drive. You cut off the extremes at both ends and average the rest. Repeat for the second test set, and that's just for a basic analysis! Using different tools is fine, but not even close to as good as using just one tool correctly.
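The procedure described above (run at least a dozen passes, cut off the extremes, average the rest) is a trimmed mean. A minimal sketch, with made-up example scores rather than real measurements:

```python
def trimmed_mean(scores, trim=1):
    """Average after dropping the `trim` lowest and `trim` highest values."""
    if len(scores) <= 2 * trim:
        raise ValueError("not enough samples to trim")
    s = sorted(scores)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

# Twelve hypothetical AS-SSD total scores from the same drive:
runs = [1041, 1037, 1044, 1039, 1102, 1040, 1038, 1043, 990, 1042, 1039, 1041]
print(trimmed_mean(runs))  # -> 1040.4; outliers (990, 1102) no longer skew it
```

With `trim=1` a single background-process hiccup (the 990) or a lucky run (the 1102) drops out instead of shifting the average.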
 

velis

Senior member
Jul 28, 2005
600
14
81
Yes, a single test IS useless, but knowing the logic behind it also helps. In this case FS fragmentation causes more processing, in the form of CPU cycles and in the form of fetch/store requests the OS has to send to the controller.

Both of these DO TAKE TIME, though the time is negligible. As is the wear on the drive from performing defragmentation. Neither makes a significant dent in the overall statistics, so live and let live.
 

icanhascpu2

Senior member
Jun 18, 2009
228
0
0
Or it could be just part of the normal fluctuation of CPU use. You can't know from so few test samples. That's the point. I don't see how "live and let live" applies here. We are talking about technical issues, not what the best kind of pie is.. (it's cherry).

What we DO know is that this DOES cause wear on the drive. Nothing else has really been established, and the firmware should be left to do the organization it sees fit.
 

GrumpyMan

Diamond Member
May 14, 2001
5,778
262
136
I tested after a defrag on my Intel 510 yesterday, and even though I did not see any higher scores, I did win Safety Bingo at work and am buying pizza for the plant tomorrow.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Or it could be just part of normal fluctuation of CPU use. You cant know from so few test samples. Thats the point. I dont see how live and let live applies here. We are talking about technical issues, not what the best kind of pie is.. (its cherry).

What we DO know is that this DOES cause ware on the drive. Nothing else is really been established, and the firmware should be let to do the organization it sees fit.

Exactly, when running any kind of benchmark in a multitasking OS like Windows you have to take into account the other things going on in the system at that time. A single background process waking up and checking for updates could cause delays and lower the results of one run which might not happen 98% of the time.
 

razel

Platinum Member
May 14, 2002
2,337
90
101
I tested after a defrag on my Intel 510 yesterday, and even though I did not see any higher scores, I did win Safety Bingo at work and am buying pizza for the plant tomorrow.

You're the kind of co-worker I'd want to work with!
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Exactly, when running any kind of benchmark in a multitasking OS like Windows you have to take into account the other things going on in the system at that time. A single background process waking up and checking for updates could cause delays and lower the results of one run which might not happen 98% of the time.

Normally the case.. yes. Not applicable to my setup in the slightest, though, since I run dedicated testing configs to eliminate nearly all of that possibility. My results are spot-on and consistent all the time (unless I'm screwing around with hardware configs/drivers, of course).

1. No power mgmt babysitting being used. EIST, C-states, and all other SATA/sound/USB 3.0 extras.. gone. Runs fully OC'd and near full tilt 24/7.

2. I use a dedicated testing OS (W7 x64 Pro) of 4 gigs in size. Only the SATA driver is updated; the rest is default MS fare. No updates, no swap, no sys restore, no hiberfile, no applications aside from the ones needed for optimization/testing, and none of those auto-start or run in the background anyway. Manual only, with no bells and whistles whatsoever.

3. Testing has been done in various configs through the years. TRIM/no-TRIM, fully dirty/secure erased, volumes filled to 80% full, boot drives/spares, purpose-built highly "free space fragmented" images (which is actually quite different from "file fragmentation"), and many more tricks I don't care to divulge here.

4. A large portion of testing was also done on first-gen SandForce drives for a very specific reason. TRIM and GC are VERY lazy on those particular controllers and don't often get in the way to skew results during the testing process. Not that they would on a drive that was secure erased and had not formed its DuraWrite mapping yet, since fresh blocks remained anyway. Which is why I like restoring images to those particular drives, as they work in very predictable and distinct ways.

5. Properly validate any sequential speed increase with dedicated file transfers. Because when you can shave a couple of seconds off here and there?.. you know you're doing something right. The problem here is the need for a storage volume faster than the SSD/s being tested, to eliminate bottlenecks. I use an 8 x HDD array and a ramdisk for my testing.

The best test you can do to verify this free space consolidation's effectiveness.. is to build the particular install that you will be using in the long term... take an image... secure erase the SSD/s and reinstall the image.. DISABLE TRIM (yep, you heard me).. and test with various benchmarks or whatever file types you want.

Note: TRIM is typically disabled only for the sake of testing, since the more variables you eliminate?.. the more the process of elimination reduces skewing from internal drive algorithms coming into play during testing. And the main reason you don't want to start benchmarking right after that fresh install (although I've certainly done it many times, just because I couldn't wait to see if I was heading in the right direction) and before the secure erase/reimage.. is to eliminate partially filled blocks and physical space fragmentation (at least more so than usual), which are surely there after the install, OS/app temp file deletes, OS configs/tweaks/updates, and whatever else you do to get the volume set up the way you like.

Now consolidate that volume and take another image.. secure erase and restore the optimized version.. and then compare those results with the results of the secure erased and restored non-consolidated bitmap.

A lot of work?.. yeah, no doubt. But since I purpose-build all of my images for every machine I build/maintain anyway?.. it's par for the course. I also never (very rarely, anyway) use any fresh installs after they are built. All my installs run long-term only after they've been secure erased/restored. Along with my personal choice of free space consolidation before imaging, of course.

And for god's sake, people.. who runs one single test to verify results like these? I mean, seriously. Use some common sense here and quit assuming so much. Plus, you'll learn much faster and eliminate a good portion of false pride in the process. lol
 
Last edited:

Revolution 11

Senior member
Jun 2, 2011
952
79
91
I've never been in space, but I don't need to be to review the overwhelming evidence and come to the conclusion that the earth is round and not flat. You are arguing for the flat earth society here. Adults should also realize when there is no point in continuing a debate since all the evidence points to only one possible conclusion.
I don't think defragging SSDs will help either, but are you seriously suggesting that people should not question the conventional wisdom and should stop "continuing a debate" just because the earlier evidence is against the new argument? By that logic, we would never have discovered how to write or use fire for cooking.

I welcome groberts's inquiry into this question and instead of bashing his efforts as unneeded, please show some data of your own.

As for Nothinman's comment, defragmenting spindle drives, especially older crummy laptop drives like the 2006-era one I have, can really help. Random access speed on a 16% fragmented drive with 2% free space was about 0.98 MB/sec. Defragmenting the same drive with Defraggler down to 6% fragmentation more than doubled that, to 1.98 MB/sec. Not impressive or anything new, but defragging an older hard drive that is heavily fragmented will help considerably. Especially if you can't buy a new hard drive or SSD or do a clean install.
 

bononos

Diamond Member
Aug 21, 2011
3,894
162
106
OK.. let's keep the party going then.
........
In the end I don't come here to be a punching bag for geeks or little fella's who hide behind keyboards while calling people out. Or to dispute and debate things with someone to vent out life's little frustrations. I'm too old and tired or busy so I tend to just say what I feel and post what I find. Don't like it?.. then don't read it.

....... PS. just to clarify here.. I do not think that defragging an SSD will typically show much result because I've tested that too(although, highly fragmented drives will show measurable gains). What I do find is that free space consolidations can in fact effect the physical layout of data and force the firmware to "reassess" its current structure........

Did you know that before your posts the whole thread was not talking about improvements from 100% rebuilds from image files vs. defragging? From your edited P.S. comment, it sounds like you've only just come to realize that.
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
Right, this thread made me register, because something is being consistently overlooked:

Random read speeds are an order of magnitude below sequential read speeds. And this is "LBA-sequential" vs. "LBA-random", so it doesn't matter that the drive internally uses a different addressing scheme.

This implies to me that on a heavily fragmented drive, any read load will appear to be random, and you lose around 80-90% of throughput. This is something that a good defragmenter will avoid.
Of course, it is highly unlikely that your drive will end up in this state.
But if you have a workload that truly thrashes the disk, with a bunch of small and large files being created and deleted at high rates, then clearly, for the large files, you can get read performance benefits when your reads cover sequential LBAs.

As a benchmark, I suggest (to whoever is bored enough to try this) filling the flash drive, under any reasonably modern file system (due to the number of files involved and potential filename issues), with a bunch of 4-16K files. A while loop with an incrementing filename, filling from /dev/zero (/dev/urandom if it's a SandForce controller), ought to be the method of choice. Next, randomly delete some of these files, to the tune of some 16 GB or so. Next, write a single 4-8 GB file to the disk.
Check the read speed for that file a few times, then defragment and repeat. If defragmentation succeeds in creating sufficient contiguous space for the file, read speed should go up from 50-100 MB/s to 400-500 MB/s, depending on your interface, SSD, etc.
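For whoever wants to try it without touching a raw device, here is the same recipe rendered as a hypothetical Python sketch rather than the shell loop described, scaled down to megabytes and a temp directory. At this size you will mostly be measuring the filesystem and page cache, so treat it as an illustration of the procedure's shape, not a real benchmark:

```python
import os
import random
import tempfile
import time

def fragment_then_measure(root: str, small_files=2000, small_size=8 * 1024,
                          big_size=16 * 1024 * 1024) -> float:
    """Fill with small files, delete half at random, write one big file,
    then time a sequential read of it. Returns throughput in MB/s."""
    # 1. Fill free space with many small files.
    paths = []
    for i in range(small_files):
        p = os.path.join(root, f"fill_{i:05d}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(small_size))
        paths.append(p)
    # 2. Randomly delete about half of them, leaving scattered free extents.
    for p in random.sample(paths, small_files // 2):
        os.remove(p)
    # 3. Write one large file into the now-fragmented free space.
    big = os.path.join(root, "big.bin")
    with open(big, "wb") as f:
        f.write(os.urandom(big_size))
    # 4. Time a sequential read of the large file.
    t0 = time.perf_counter()
    with open(big, "rb") as f:
        while f.read(1024 * 1024):
            pass
    elapsed = time.perf_counter() - t0
    return big_size / elapsed / 1e6

with tempfile.TemporaryDirectory() as d:
    print(f"{fragment_then_measure(d):.0f} MB/s")
```

At real scale you would use gigabyte sizes, a dedicated partition, and drop caches between runs; only then does the before/after-defragment comparison mean anything.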


But, clearly, that shouldn't be necessary, as the benchmarks are readily available.
 

velis

Senior member
Jul 28, 2005
600
14
81
http://forums.anandtech.com/showpost.php?p=33306775&postcount=27

You can't expect defragmented read speeds to go up that much on an SSD. The only difference is a decreased command count due to fewer fragments (larger LBA chunks requested).
The SSD controller still has to translate each of those requests to (multiple) actual flash chip addresses. So pretty much the entire difference is the number of requests the OS sends to the SSD. In my case that translated to a measly 2% improvement, of which 1% (could be the entire 2% for all we know) was probably benchmark variance due to external factors (services).
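The request-count argument can be put into a toy model. The 10 µs per-request overhead and 500 MB/s bandwidth below are invented round numbers, not measurements:

```python
def modeled_read_s(total_bytes: int, fragments: int,
                   per_request_us: float = 10.0,
                   bandwidth_mb_s: float = 500.0) -> float:
    """Model a read as one request per fragment plus streaming at full bandwidth."""
    request_overhead = fragments * per_request_us / 1e6
    streaming = total_bytes / (bandwidth_mb_s * 1e6)
    return request_overhead + streaming

data = int(2.28e9)  # the 2.28 GB grep working set from the earlier test
heavily_fragmented = modeled_read_s(data, fragments=212_000)
mostly_contiguous = modeled_read_s(data, fragments=25_000)  # ~one per file
print(f"{heavily_fragmented:.1f} s vs {mostly_contiguous:.1f} s")  # -> 6.7 s vs 4.8 s
```

With these assumed figures the fragmentation penalty is roughly two seconds of pure I/O, small against the four-plus minutes the grep itself took, which is consistent with the ~2% difference actually measured.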

Note, though, that my test was performed on a worn disk that has never been secure erased. Secure erasing the disk and applying the same partition image to it (even fragmented) would probably yield much better results, especially given the age of my particular controller.

@icanhascpu2: "live and let live" referred to the heightened tones between groberts101 and some other posters. Some of us defragment our SSDs for various reasons that may not have much to do with performance, but that is our choice, as is our reasoning for performing defragmentation. All the naysayers may argue that there are no performance benefits, but my simple benchmark says otherwise, even if within the 3% benchmark tolerance. They may also scream about how it will wear out my SSD, but it's *MY* SSD, not theirs. Besides, the drive map is so much more beautiful in blue than in red.

Finally, I will argue that I have some confidence in Diskeeper's fragmentation prevention technology actually doing what it says on the tin. Of course this will not bring me much performance benefit, that much has been proven over and over, but people overclock and tune for much less than 2% speed gains. The major progress was made by purchasing the SSD in the first place; any further tuning is just picking nits for the fun of it.
 
Last edited:

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Those last couple of posters are getting the bigger picture. There's often much more involved here with SSDs, and PCs in general, than what is constantly spewed around the net by the copycats who rarely stray from the written page.

A couple of years ago there were posters here (some in this thread) who belittled my need for a 6-drive SSD array, saying things like.. "there is no way that a system could need all those IOPS".. or my favorite.. "aside from benchmarking?.. that array is useless". LOL. Now look around you (and at your system).. and see that many of the same people who were spewing that redundant crap are now using SATA3 SSDs (which are in essence similar to RAIDed SSDs from 2-3 years ago) and/or even RAIDed SATA3 SSD setups.

Then I won't get too deep into the fact that many of these same experts were also spewing the need to move directories and swap files not so long ago (none of which I ever did either). So what you don't know now?.. will eventually sneak in when you study newer data, or actually test some things out for yourselves. Or not. Either way is fine by me, since your system runs just fine from where I sit.

Did you know that before your posts the whole thread was not talking about improvements from 100% rebuilds from image files vs. defragging? From your edited P.S. comment, it sounds like you've only just come to realize that.

wow.. you're right! Thanks for pointing that out; now I'll have to go back and delete everything I wrote earlier because you guys don't agree with it. lol

Seriously, though?.. the thread is about defragging an SSD. It doesn't specifically state HOW, or with what utility, it needs to be done. So in that respect, it seems as good a thread as any other on the subject to talk about free space consolidation. Especially since it does actually defrag as part of the process. But you are right that there are more optimized ways to do things.. SE/reimage being one of them.

And I will also say this. I have seen more than a handful of naysayers run PerfectDisk 10/11 free space consolidations on SSDs that were performing well below par.. and gain more than 100 MB/s sequential. It doesn't matter why it helps.. just that it does help some systems, despite flying in the face of logic or the user's current understanding of a particular controller's firmware algorithms. So whether you believe that writing 10 gigs of flash to move data around your drive (or far less, for the majority) is going to wear it out prematurely is a pretty moot point when you decide to write that same-sized amount of pics, vids, or games. Data is data, or writes to flash for the most part, and these things will be relegated to glorified USB sticks or USB 3.0 enclosures by the time you wear them out anyway.
 
Last edited: