SSD Defrag Optimize

Brian Stirling

Diamond Member
Feb 7, 2010
4,000
2
0
Early on the advice was not to defrag SSDs owing to the wear issue with limited writes, but how does that stack up today? In Win 10 when I select the properties of a drive and go to 'tools' then click on 'optimize', it brings up a drive listing and clearly shows the SSD as an SSD and HDs as HDs, so I have to believe that the software handles the optimization in the appropriate way for an SSD.

I recently installed Win 10 as an update on a Win 7 laptop, and during boot it would take 20-30 seconds to the splash screen, then another 20-30 seconds after the password was entered. It did this across several power cycles, so it wasn't a hidden update. I then optimized the drive and now the boot is more like 18 seconds to the splash, then another 5 seconds after the password, so it does look like optimization did something to improve boot times. I'm curious, though, about just what Win 10 is doing to the SSD during an optimization.


Brian
 

mikeymikec

Lifer
May 19, 2011
18,042
10,224
136
Early on the advice was not to defrag SSDs owing to the wear issue with limited writes, but how does that stack up today?

It doesn't.

http://forums.anandtech.com/showpost.php?p=38113145&postcount=23


In Win 10 when I select the properties of a drive and go to 'tools' then click on 'optimize', it brings up a drive listing and clearly shows the SSD as an SSD and HDs as HDs, so I have to believe that the software handles the optimization in the appropriate way for an SSD.

On Win 8.1, when you tell Windows to optimise the SSD, the status column should say 'x% trimmed' during the process. I don't know if Win10 does the same thing.
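
If you want to poke at the same thing from the command line rather than the Optimize dialog, the rough Python wrapper below around the built-in fsutil and defrag tools should report whether TRIM is enabled and kick off a retrim. Treat it as a sketch: the drive letter is just an example, check defrag /? for the exact flags on your build, and run it from an elevated prompt.

import subprocess

def trim_enabled() -> bool:
    # fsutil prints "DisableDeleteNotify = 0" when Windows is sending TRIM commands.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "= 0" in out

def retrim(drive: str = "C:") -> None:
    # defrag /L asks Windows to retrim the volume, which is what the Optimize
    # button does for a drive it has detected as an SSD; /V is verbose output.
    subprocess.run(["defrag", drive, "/L", "/V"], check=True)

if __name__ == "__main__":
    print("TRIM enabled:", trim_enabled())
    retrim("C:")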

I recently installed Win 10 as an update on a Win 7 laptop, and during boot it would take 20-30 seconds to the splash screen, then another 20-30 seconds after the password was entered. It did this across several power cycles, so it wasn't a hidden update. I then optimized the drive and now the boot is more like 18 seconds to the splash, then another 5 seconds after the password, so it does look like optimization did something to improve boot times. I'm curious, though, about just what Win 10 is doing to the SSD during an optimization.

Pass. Though I would check the AHCI drivers.
 

Brian Stirling

Diamond Member
Feb 7, 2010
4,000
2
0
Yeah, I think the data from the last few years has pretty much put an end to the fear that you could wear out an SSD in a normal use lifetime. Not saying you couldn't do so, but it's highly unlikely unless you write a whole hell of a lot and tend to keep the drive at 95% capacity or thereabouts.

Having data scattered across the SSD isn't going to impact performance the same way it does on an HD, but if you have to do many small reads/writes because the data is scattered across the SSD, the performance will be lower. And, as I mentioned, my boot times improved quite a bit after running optimize on my boot SSD, so that would suggest there is something being done by the optimization, and that it worked.

I would like to know what exactly optimize does on an SSD...


Brian
 

nerp

Diamond Member
Dec 31, 2005
9,866
105
106
It forces TRIM. TRIM basically tells the drive which boxes (blocks) no longer hold valid data; the drive can then move the stuff that's still valid out of partially filled boxes into other boxes and empty the ones that were only partially filled. That way, when it comes time to write, it can just dump into an empty box instead of moving the stuff partially filling it first, then writing. You see, it can't write into a half-full box, only empty boxes. TRIM helps get rid of half-full boxes.

This is a crude explanation but that's basically it.

Many drives now have smart background garbage collection which does this even without TRIM active. TRIM helps keep the process as efficient as possible.
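
Here's a toy version of the box analogy in code - purely an illustration, nothing to do with how any real controller works: TRIM marks pages stale, and garbage collection packs the surviving valid pages together so whole boxes come back empty and ready for new writes.

# Toy flash model: a block ("box") must be erased as a whole before any page
# in it can be rewritten. TRIM marks pages stale so garbage collection can
# reclaim blocks without copying dead data around.
PAGES_PER_BLOCK = 4

class ToySSD:
    def __init__(self, blocks=4):
        # Each page is None (erased), "valid", or "stale".
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(blocks)]

    def trim(self, block, page):
        # The OS says "this data is deleted" - the page becomes stale.
        self.blocks[block][page] = "stale"

    def garbage_collect(self):
        # Copy still-valid pages out of dirty blocks, then erase those blocks.
        survivors = [p for blk in self.blocks for p in blk if p == "valid"]
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in self.blocks]
        for i in range(len(survivors)):
            self.blocks[i // PAGES_PER_BLOCK][i % PAGES_PER_BLOCK] = "valid"

    def free_blocks(self):
        # Only completely erased blocks can take new writes immediately.
        return sum(all(p is None for p in blk) for blk in self.blocks)

ssd = ToySSD()
ssd.blocks[0] = ["valid", "valid", "stale", "stale"]  # half-full box
ssd.blocks[1] = ["valid", "stale", None, None]        # another one
print("free boxes before GC:", ssd.free_blocks())     # 2
ssd.garbage_collect()
print("free boxes after GC: ", ssd.free_blocks())     # 3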

Defrag is pointless because access times for SSDs have nothing to do with data location. It takes no more time to pull from one part or the other and there is no seeking involved. Defrag helps mechanical drives because it takes a large file that is broken up and combines it into one clean chunk so the drive head doesn't have to move around to read the file. It can just stay in one place and read the file in one spot. Since there are no heads, there is no seeking, so a contiguous file can be read just as fast as a file scattered all over the place on an SSD.

The advice against defrag for SSDs was frequently based on the notion that it causes a lot of writing to the media for no reason, wearing it out sooner than need be.
 

Brian Stirling

Diamond Member
Feb 7, 2010
4,000
2
0
Well, if it makes no difference where the data is or how broken up it is, why are small-file transfer rates so much lower than large-file transfer rates? I mean, if it makes no difference how the data is stored, then it should not matter how big the file is.


Brian
 

DigDog

Lifer
Jun 3, 2011
13,622
2,189
126
fragmented files make your drive slower because the arm actually has to move between different locations on the platter as it reads the whole file. NAND memory does not need that as it has no moving parts, and any location on it is as quick as any other to provide the data. so, defrag is not needed.
earlier in the life of SSDs it was speculated that defrag might also be damaging because of the limited writes an SSD has; that turned out not to be accurate, as any SSD is obsolete way before it has had a chance to "burn out".

if you are OCD you can defrag it anyway, so that it shows all the little lines of color "all in the right place".
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
fragmented files make your drive slower because the arm actually has to move between different locations on the platter as it reads the whole file. NAND memory does not need that as it has no moving parts, and any location on it is as quick as any other to provide the data. so, defrag is not needed.
If you have a file in one piece it gets read at like 400-500 MB/s; if it is broken up into many parts, then each part is read at random-access speed (a couple thousand IOPS - yeah, SSD manufacturers are smart like that), plus the access time it takes to skip between each of those parts. ~2ms is very fast, but multiplied by tens or hundreds of fragments it still adds up.

Reading a file in one go is still much faster, although on an SSD it's much harder to notice the difference.
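
To put rough numbers on it - the 500 MB/s sequential rate and the 0.1 ms per-fragment penalty below are just illustrative round figures, plug in whatever you prefer:

# Crude model: total read time = sequential transfer time plus one fixed
# access penalty for every fragment the file is split into.
def read_time_ms(size_mb, fragments, seq_mb_s=500.0, access_ms=0.1):
    return size_mb / seq_mb_s * 1000.0 + fragments * access_ms

for frags in (1, 100, 10_000):
    print(f"1 GB file in {frags:>6} fragments: {read_time_ms(1024, frags):7.1f} ms")
# 1 fragment:       ~2048 ms
# 100 fragments:    ~2058 ms (barely noticeable)
# 10,000 fragments: ~3048 ms (the per-fragment overhead starts to matter)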
 

mikeymikec

Lifer
May 19, 2011
18,042
10,224
136
Also, when transferring small files there are file system metadata updates to perform along with the transfer of the file contents.
 

VeryCharBroiled

Senior member
Oct 6, 2008
387
25
101
took a look at my dm4 notebook I let upgrade to win10 a few months ago. it has an intel x25m g2 80 gig ssd in it. it had been set to optimize once a week since I put win10 on it, whereas when it had win7 on it I had the drive set to never defragment. looks like it snuck the optimize setting in there during the upgrade.

so I clicked "optimize" and it took about a second to come back. that's about how long the intel ssd toolbox takes to optimize too. sure looks like a simple whole-drive trim from here.
 

nerp

Diamond Member
Dec 31, 2005
9,866
105
106
Well, if it makes no difference where the data is or how broken up it is, why are small-file transfer rates so much lower than large-file transfer rates? I mean, if it makes no difference how the data is stored, then it should not matter how big the file is.


Brian

Because a file operation has a start and a stop process. A really large file has the same start/stop, but the amount of data gives the drive a chance to get up to full speed. With small files, you're spending almost as much time opening and closing the file as you are reading or writing. Think of a car on a looooong highway. You can get up to a fast speed and roll for a while. But in the city, with 4-way intersections and traffic lights every 300 feet, you're doing a lot of stopping and starting.
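
You can see that per-file overhead with a quick and very unscientific timing sketch - the sizes and counts below are arbitrary, and OS caching means this mostly measures the open/close cost rather than the drive itself:

import os, tempfile, time

def write_files(directory, count, size_bytes):
    # Write `count` files of `size_bytes` each and return the elapsed seconds.
    data = os.urandom(size_bytes)
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}.bin"), "wb") as fh:
            fh.write(data)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    big = write_files(tmp, 1, 100 * 1024 * 1024)    # one 100 MB file
    small = write_files(tmp, 10_000, 10 * 1024)     # 10,000 x 10 KB, ~100 MB total
    print(f"one big file   : {big:.2f} s")
    print(f"many small ones: {small:.2f} s  (same data, far more open/close)")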
 

Brian Stirling

Diamond Member
Feb 7, 2010
4,000
2
0
Because a file operation has a start and a stop process. A really large file has the same start/stop, but the amount of data gives the drive a chance to get up to full speed. With small files, you're spending almost as much time opening and closing the file as you are reading or writing. Think of a car on a looooong highway. You can get up to a fast speed and roll for a while. But in the city, with 4-way intersections and traffic lights every 300 feet, you're doing a lot of stopping and starting.

Right, and a large file that's scattered around the drive will have a similar start/stop thing going on. Even though it's only one file, it may have dozens or hundreds of chunks of data that need to be addressed individually. Why would it take a long time to open/close a file and yet be instantaneous to address the potentially large number of fragments?

Again, before running 'optimize' I had a long boot time, and after, the boot time was about half as long. So just exactly what was done I don't know, but the performance surely improved.


Brian
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
For the final time, you CAN'T actually defrag an SSD because the operating system doesn't have control over data placement.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
You can defrag SSDs, it just won't stick for as long as it does for mechanical drives, due to the wear leveling the SSDs perform.
 

nerp

Diamond Member
Dec 31, 2005
9,866
105
106
Right, and a large file that's scattered around the drive will have a similar start/stop thing going on. Even though it's only one file, it may have dozens or hundreds of chunks of data that need to be addressed individually. Why would it take a long time to open/close a file and yet be instantaneous to address the potentially large number of fragments?

Again, before running 'optimize' I had a long boot time, and after, the boot time was about half as long. So just exactly what was done I don't know, but the performance surely improved.


Brian

No. That's not it at all. A large file scattered all over the place is read all at once. Every scattered bit is read at roughly the same time. There is no pause between pieces being read because there is no arm swinging around to get to that part. Think of water flowing through a strainer. Each hole is pouring water. They're not taking turns. The delay for small files is due to the file open/file close process and that's a function of every method of reading and writing files for every operating system.
 

Hugo Drax

Diamond Member
Nov 20, 2011
5,647
47
91
All it does when you run optimize is run a trim command. It will not run defrag.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Well, if it makes no difference where the data is or how broken up it is, why are small-file transfer rates so much lower than large-file transfer rates? I mean, if it makes no difference how the data is stored, then it should not matter how big the file is.

Because all data reads, on every type of drive ever made, are slower at the beginning of the read than they are at the end of the read. It takes much, much longer to find the spot where you need to begin reading from, and to negotiate the read, than it does to actually read. If the files are too tiny, there is never any chance to "make up" for the time that was wasted at the beginning, so you just end up with much, much slower average reads when reading only small files.
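
Put as a crude formula: effective speed = file size / (fixed per-read overhead + file size / peak speed). With the made-up numbers below, tiny files never get a chance to amortize the fixed cost:

# Illustrative only: a fixed setup cost (find the file, open it, issue the
# command) amortized over the amount of data actually read.
def effective_mb_s(size_kb, overhead_ms=0.2, peak_mb_s=500.0):
    size_mb = size_kb / 1024.0
    return size_mb / (overhead_ms / 1000.0 + size_mb / peak_mb_s)

for kb in (4, 64, 1024, 102_400):
    print(f"{kb:>7} KB file -> {effective_mb_s(kb):6.1f} MB/s effective")
# 4 KB   ->  ~19 MB/s
# 64 KB  -> ~190 MB/s
# 1 MB   -> ~455 MB/s
# 100 MB -> ~500 MB/s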
 
Last edited:

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
All it does when you run optimize is run a trim command. It will not run defrag.

That's only true if you are running Windows 7 SP1 or later, at least in the Windows world.

edit: Just found this, in the article linked above:

hanselman.com said:
Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

^^^That is with the latest versions of Windows. The article was published in Dec 2014.

edit #2: For those too lazy, or just not able to click the linked article above, this portion is also very relevant:

hanselman.com said:
Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered as primarily a performance issue with traditional hard drives. When a disk gets fragmented, a singular file can exist in pieces in different locations on a physical drive. That physical drive then needs to seek around collecting pieces of the file and that takes extra time.

This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file systems metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.
 
Last edited:

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
Well, it's not really the SSD that gets fragmented. SSDs actually "fragment" files on purpose as part of their wear leveling. Even if NTFS shows a 10GB file as one contiguous block of data, it's actually scattered all over the flash cells.

However, NTFS itself is vulnerable to fragmentation. We really should be using another file system on SSDs. At least there should be a way to defrag the file system without physically shuffling data around in the SSD's flash memory (by some fakery at the controller/translation level - it could just report that it has moved a block of data, when in reality it just modified the pointer in the translation layer...).
 
Last edited:

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
That's only true if you are running Windows 7 SP1 or later, at least in the Windows world.

edit: Just found this, in the article linked above:



^^^That is with the latest versions of Windows. The article was published in Dec 2014.

edit #2: For those too lazy, or just not able to click the linked article above, this portion is also very relevant:

I wish there were some context in the description you quoted as to what the author meant by "useful" and "absolutely needed" when talking about defragging SSDs.

My concern is that it comes across as unclear whether 1) performance stays good but there is a dead end you can eventually reach if the drive becomes too fragmented to stay under that threshold, or 2) it's more like sliding off a cliff, with performance degrading steadily the whole way.

I'm thinking it's more like scenario 1), and you don't really have to worry about hitting the dead-end threshold because it would take perhaps on the order of centuries of reads/writes to overwhelm the metadata pointers? I mean maybe the author just wanted to puff his chest and talk about this scary scenario that will never happen? Why no context as to how robust the metadata pointers are and how much fragmentation they can handle until you hit the threshold/dead-end?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Well, if it makes no difference where the data is or how broken up it is, why are small-file transfer rates so much lower than large-file transfer rates? I mean, if it makes no difference how the data is stored, then it should not matter how big the file is.


Brian
It does make a difference. The SSD itself will arrange data for best access. What that means may vary by make and model. More IOs on the same channel, same die/plane, etc., are slower, so that should be mixed up where possible. Most SSDs these days have a RAID 0-like striping setup with proprietary implementation details (plus added ECC, which is often RAID 5-like, and so on).

But you don't control that, beyond the OS choosing LBAs. With plenty of free space, and TRIM, there will be enough room for large files to be stored in large contiguous chunks. Files that are small, or often edited, are going to be small latency-bound IOs no matter what. General software/memory overhead can make that slower, SATA can make it slower, the controller can make it slower, and potentially having to wait on a single chip to return multiple sets of data from multiple blocks can make it slower.

You can defrag an SSD, and some SSDs might even get a bit of a speedup from it... but if so, your best bet would have been to use a larger SSD from day one, so that the OS wouldn't have to split up moderate-sized files in the first place. OTOH, there's no technical reason why your OS couldn't take advantage of all your modern RAM and perform an SSD-optimized defrag, either (i.e., read and write 50-100MB chunks, or whole single files if smaller, rather than the traditional tiny piecemeal work). Defragging is considered bad for SSDs because traditional defragging tools performed many small steps in the process, so that it could be safely stopped at almost any time, rather than determining an optimal final state and performing a much smaller number of writes to get there. Some 3rd-party tools may well do this today.
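
Just to illustrate that "big chunks" idea (a sketch of the concept, not how any shipping defragger works): rewrite a badly fragmented file in large sequential chunks and swap it into place, at the cost of roughly one extra write of the file's size. The path at the bottom is hypothetical.

import os, shutil

CHUNK = 64 * 1024 * 1024  # 64 MB reads/writes instead of tiny piecemeal moves

def rewrite_contiguously(path):
    # Copy `path` in large chunks to a temp file, then atomically replace the
    # original. The hope (not a guarantee) is that the filesystem allocates
    # the new copy in far fewer extents; extra writes are ~1x the file size.
    tmp = path + ".rewrite"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
    shutil.copystat(path, tmp)  # keep timestamps/attributes
    os.replace(tmp, path)       # swap in the new copy on the same volume

# rewrite_contiguously(r"C:\some\badly_fragmented.db")  # hypothetical example path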

However, NTFS itself is vulnerable to fragmentation. We really should be using another file system on SSDs. At least there should be a way to defrag the file system without physically shuffling data around in the SSD's flash memory (by some fakery at the controller/translation level - it could just report that it has moved a block of data, when in reality it just modified the pointer in the translation layer...).
You really can't, though. To the extent that it could be done, most of it already is being done. However, there's no technical reason why files that get fragmented the worst can't be neatly rewritten contiguously, without making tens of times the file's size in writes, and the rest left alone.

The problem is that small writes tend to be, by nature, unpredictable, or requiring servicing right now, and that can build up into dense fragmentation, over time. If you're running a program that has many database files, it will generally edit them in small parts, fragmenting them a lot over time (desktop accounting and tax software can be really bad about this, IME, and for some users, even AutoCAD). Just copying the files usually is enough to take care of the problem, but I see no reason why the OS/FS couldn't do the same, as it sees the excessive fragmentation, and automatically tackle those files only.

All that said, you probably don't have to worry. Most of the time, for most users, fragmentation doesn't become a major problem. It's almost all corner cases these days, like lingering shadow copies (the last PC I had to clean and set up for someone new had 15 active snapshots--WTF, no wonder updates were taking so long!) and client-side databases from programs that don't optimize those files. MS started preventing fragmentation in the first place, by scattering file chunks across the drive, years ago. With plenty of free space, you'll usually be fine never defragging these days.
 
Last edited:

Brian Stirling

Diamond Member
Feb 7, 2010
4,000
2
0
I generally try to have a large enough SSD to make sure it's not filled, or even close to being filled. I have three PCs with an SSD for the primary boot/OS drive and programs, and all are 512GB or larger. I tend to store most of my data on other drives to keep my SSDs on the lower side of utilization. Typically these boot drives are kept below 30% utilization and often around 20%.

The latest PC I built a couple of months ago for video editing has a 512GB Samsung 950 Pro for boot/OS and programs and two 6TB HDs (WD Black) for bulk storage of my images and video -- more than 4TB.

I wanted to add a second SSD as working storage for my video editing but didn't want a drive of only 512GB. I'm holding off until a 1TB PCIe SSD is available, and it looks like that may happen sooner than I figured. Using an SSD for video editing may only improve rendering a little, but the bigger benefit will be scrubbing the timeline while editing.


Brian
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
I generally try to have a large enough SSD to make sure it's not filled, or even close to being filled. I have three PCs with an SSD for the primary boot/OS drive and programs, and all are 512GB or larger. I tend to store most of my data on other drives to keep my SSDs on the lower side of utilization. Typically these boot drives are kept below 30% utilization and often around 20%.

This is a good strategy, although 30% utilization is probably overkill (underkill?). However, you don't always need to follow it. For example, if a drive is only used for storing games and applications, the data will only be read, and almost never written unless you apply an update/patch. The actual user data that gets modified would be under %appdata% or My Documents or something. That's why my games-only 120GB SSD is 97% full with zero issues.

However, there's no technical reason why files that get fragmented the worst can't be neatly rewritten contiguously, without making tens of times the file's size in writes, and the rest left alone.

In an ideal world, they shouldn't have to be rewritten at all. SSDs don't care about the physical location of the data like HDDs do. The flash translation layer presents the flash cells to the host as if they were a spinning disk, but it's just a table of pointers, not an actual spinning disk.
The translation layer could simply "lie" to the host during a defrag: a request comes in to move data from location B to location A; instead of erasing the data, rewriting it somewhere else in the flash cells and updating the translation table, it could simply say "OK, I moved that data from B to A for you" and just update the translation table to reflect this, without rewriting the data.
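
A toy translation table shows the idea - pure illustration, since no real drive exposes its mapping to the host like this: a "move" that only touches pointers.

# Toy flash translation layer: the host sees logical block addresses (LBAs),
# and the table maps them to physical flash pages. A "defrag move" could be
# satisfied by remapping a pointer instead of copying any data.
class ToyFTL:
    def __init__(self):
        self.lba_to_phys = {}  # host LBA -> physical flash page

    def write(self, lba, phys_page):
        self.lba_to_phys[lba] = phys_page

    def fake_move(self, src_lba, dst_lba):
        # Tell the host "data moved from src to dst" by updating the pointer
        # only. No flash pages are read, written, or erased.
        self.lba_to_phys[dst_lba] = self.lba_to_phys.pop(src_lba)

ftl = ToyFTL()
ftl.write(lba=900, phys_page=42)        # a "fragment" sitting at a high LBA
ftl.fake_move(src_lba=900, dst_lba=10)  # the defragger asks to move it down low
print(ftl.lba_to_phys)                  # {10: 42} - same flash page, new pointer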
 