SSD Controller is aware of OS partitions?

Modus

Platinum Member
Oct 9, 1999
2,235
0
0
I was re-reading Anand's SSD articles in preparation for recommending a good bang-for-buck SSD/HD combo for a new system build, when I came across this statement:

Intel ships its X25-M with 7.5 - 8% more area than is actually reported to the OS. The more expensive enterprise version ships with the same amount of flash, but even more spare area. Random writes all over the drive are more likely in a server environment so Intel keeps more of the flash on the X25-E as spare area. You’re able to do this yourself if you own an X25-M; simply perform a secure erase and immediately partition the drive smaller than its actual capacity. The controller will use the unpartitioned space as spare area.

(See http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=9 and scroll to the last paragraph.)

To me this doesn't make sense. As I understand, an ATA storage device just presents a bunch of accessible data blocks to the host OS, which is responsible for partitioning the blocks in whatever arrangement it chooses. The partitions are simply ranges of blocks set aside, and the partition table is itself written to a range of blocks that is hard-coded into the OS. The storage device has no way of knowing how it is partitioned, and so it cannot exploit the unpartitioned space to provide additional wear leveling and spare space.
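To make that concrete, here's a minimal sketch (Python, with made-up partition values) of how host-side code parses the classic MBR partition table out of sector 0. To the drive, that sector is just another LBA full of opaque bytes; only the host gives it meaning:

```python
import struct

def parse_mbr_partitions(sector0: bytes):
    """Parse the four primary partition entries from a 512-byte MBR.

    The drive never interprets these bytes; sector 0 is just
    another LBA to it. The layout is purely an OS-level convention.
    """
    assert len(sector0) == 512 and sector0[510:512] == b"\x55\xaa"
    partitions = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                       # partition type byte
        start_lba, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                         # 0 = empty slot
            partitions.append((ptype, start_lba, num_sectors))
    return partitions

# Build a fake MBR describing one 75 GiB partition at LBA 2048.
mbr = bytearray(512)
mbr[446 + 4] = 0x07                            # NTFS type code
mbr[446 + 8 : 446 + 16] = struct.pack("<II", 2048, 75 * 1024 * 1024 * 2)
mbr[510:512] = b"\x55\xaa"                     # boot signature
print(parse_mbr_partitions(bytes(mbr)))        # → [(7, 2048, 157286400)]
```

The drive never runs anything like `parse_mbr_partitions` (that helper is hypothetical); it just stores and returns those 512 bytes on request.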

Unless Anand is referring to some special utility that Intel provides, which can somehow communicate with the SSD controller directly to inform it of unused space. Can anyone confirm this?

P.S. Hi everybody! This is my first post in this forum in about six years. My career has taken me away from computer hardware/support and into software development, but I still like to maintain enough knowledge to help a few old clients and built myself a good bang-for-buck PC every couple years.
 

blackangst1

Lifer
Feb 23, 2005
22,914
2,359
126
I was re-reading Anand's SSD articles in preparation for recommending a good bang-for-buck SSD/HD combo for a new system build, when I came across this statement:



(See http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=9 and scroll to the last paragraph.)

To me this doesn't make sense. As I understand, an ATA storage device just presents a bunch of accessible data blocks to the host OS, which is responsible for partitioning the blocks in whatever arrangement it chooses. The partitions are simply ranges of blocks set aside, and the partition table is itself written to a range of blocks that is hard-coded into the OS. The storage device has no way of knowing how it is partitioned, and so it cannot exploit the unpartitioned space to provide additional wear leveling and spare space.

Unless Anand is referring to some special utility that Intel provides, which can somehow communicate with the SSD controller directly to inform it of unused space. Can anyone confirm this?

P.S. Hi everybody! This is my first post in this forum in about six years. My career has taken me away from computer hardware/support and into software development, but I still like to maintain enough knowledge to help a few old clients and built myself a good bang-for-buck PC every couple years.

I believe you're asking about TRIM.

Unless Anand is referring to some special utility that Intel provides, which can somehow communicate with the SSD controller directly to inform it of unused space.

Yes, that is true. SSDs communicate differently than platter drives. They are kind of their own entity, thus the need for a controller. The OS talks to the controller, and the controller manages the flash pages (or areas, sectors, whatever you want to call them...but they're called pages). Some do this automatically, some do not.

For example, take a non-TRIM drive: as you use it, it begins to fill up. Once it reaches max space, let's say you want to save more info. There is a little lag (not much) because before the controller can write the new info to flash, it first has to erase what was previously there. With newer drives, this is done automatically. In other words, when you erase a file, the OS tells the controller, "Hey! This file is gone!" and the controller erases it permanently. It's all automatic. With non-TRIM SSDs, there are utilities that you can run, say weekly, that "sync" your SSD with what's actually there or not. It's also called garbage collection.

Currently, there are only three OSes (that I'm aware of) that support TRIM: Windows 7, Windows Server 2008 R2, and later versions of the Linux kernel. Both the OS and the SSD must support TRIM in order for this automation to take place.

Make sense?

Oh and welcome back! :wave:
 
Feb 21, 2010
72
0
0
No, he is not asking about TRIM. He is asking how the SSD knows what part of the drive is partitioned and formatted and what part is not. According to him, partitions are something at the OS level, something the OS knows about after reading information stored on the drive. The drive's controller itself doesn't see the partitions.

Just clarifying what he means; I've no technical knowledge on this.
 

blackangst1

Lifer
Feb 23, 2005
22,914
2,359
126
I see. In that case, it is true that with Windows 7 and SSDs, partitions are logical only. In other words, the OS sees them, but SSDs are "flat", so to speak. The controller will still put data where it pleases to adhere to its wear-leveling scheme. So you could show your SSD as C: and D:, and you will have different pieces of data on each; however, the controller doesn't really acknowledge those partitions, and continues wear leveling and putting data where it sees fit. So there really is no reason to partition, other than convenience if you see fit; there is no performance gain whatsoever. You have probably read that you don't want to defrag an SSD. That's because it doesn't matter where on the SSD your data is; access times will be (practically) identical. It's not like you have a platter arm thrashing all over your disk trying to find the hundreds of pieces of data that make up your resume (just as an example).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Wouldn't this occur naturally as part of the resolution of virtual addresses to physical addresses? The reason this actually works is that the drive has been completely cleared. If you merely delete a partition this will NOT work, because the drive is NOT aware of its partitions.

The controller keeps a table which it uses to match virtual addresses to physical addresses. As it shuffles things around (for wear leveling), it changes the physical location of data while keeping the same virtual address (the one given to the OS). Recall that there is already space set aside anyway: your 100 GB drive is actually 128 GB. The smallest writable block is 4 KB (but the smallest unit the OS will write is 1/8th that amount), so there are 32 million physical addresses, yet the drive will only let you have 25 million virtual addresses.

A secure erase means actually clearing the virtual-to-physical table. So if you take a cleared 100 GB drive (128 GB actually) and put a 40 GB partition on it, the OS will only ever use the first 10 million virtual addresses, and those 10 million virtual addresses will be shuffled across all 32 million physical addresses. As you fill the drive with "first writes" (first since the table was cleared via secure erase), it will collect more virtual addresses, up to the limit imposed by the firmware (25 million).

Had you actually filled the virtual addresses before, even deleting the partitions would not restore it, because the drive's table still sees those virtual addresses as being in use. TRIM tells the drive which virtual addresses are no longer in use, so it doesn't care about preserving their "data" when shuffling things around.
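A toy model of that mapping table, with page counts scaled way down (the names `ToyFTL`, `write`, and `trim` are mine, not any real firmware API), might look like:

```python
class ToyFTL:
    """Toy flash translation layer: maps virtual (OS-visible) addresses
    to physical flash pages. Counts are scaled down from the post's
    example of 25M virtual / 32M physical addresses."""

    def __init__(self, virtual_pages=25, physical_pages=32):
        self.virtual_pages = virtual_pages
        self.physical_pages = physical_pages
        self.free = set(range(physical_pages))  # pages holding no live data
        self.map = {}                           # virtual -> physical

    def write(self, lba):
        assert lba < self.virtual_pages         # firmware-imposed limit
        old = self.map.get(lba)
        self.map[lba] = self.free.pop()         # out-of-place write
        if old is not None:
            self.free.add(old)                  # stale copy becomes reusable

    def trim(self, lba):
        old = self.map.pop(lba, None)           # forget the virtual address
        if old is not None:
            self.free.add(old)

    def secure_erase(self):
        self.free = set(range(self.physical_pages))
        self.map.clear()                        # clear the whole table

    @property
    def spare(self):
        return len(self.free)

ftl = ToyFTL()
print(ftl.spare)       # 32: a freshly erased drive is all spare area
for lba in range(10):  # a small partition touches only 10 of 25 LBAs
    ftl.write(lba)
print(ftl.spare)       # 22: never-written virtual pages still count as spare
ftl.trim(0)
print(ftl.spare)       # 23: TRIM returns a page to the spare pool
```

The key point the sketch shows: spare area shrinks only as *distinct* virtual addresses get their first write, which is why a smaller partition (fewer LBAs ever touched) leaves more spare, and why a secure erase resets everything.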
 

blackangst1

Lifer
Feb 23, 2005
22,914
2,359
126
Wouldn't this occur naturally as part of the resolution of virtual addresses to physical addresses? The reason this actually works is that the drive has been completely cleared. If you merely delete a partition this will NOT work, because the drive is NOT aware of its partitions.

The controller keeps a table which it uses to match virtual addresses to physical addresses. As it shuffles things around (for wear leveling), it changes the physical location of data while keeping the same virtual address (the one given to the OS). Recall that there is already space set aside anyway: your 100 GB drive is actually 128 GB. The smallest writable block is 4 KB (but the smallest unit the OS will write is 1/8th that amount), so there are 32 million physical addresses, yet the drive will only let you have 25 million virtual addresses. A secure erase means actually clearing the virtual-to-physical table. So if you take a cleared 100 GB drive (128 GB actually) and put a 40 GB partition on it, the OS will only ever use the first 10 million virtual addresses, and those 10 million virtual addresses will be shuffled across all 32 million physical addresses. As you fill the drive with "first writes" (first since the table was cleared via secure erase), it will collect more virtual addresses, up to the limit imposed by the firmware (25 million). Had you actually filled the virtual addresses before, even deleting the partitions would not restore it, because the drive's table still sees those virtual addresses as being in use. TRIM tells the drive which virtual addresses are no longer in use, so it doesn't care about preserving their "data" when shuffling things around.

That is my understanding as well. Although a "secure erase" on an SSD isn't needed in the traditional sense...once either TRIM or a garbage-collection tool clears the page, it's gone and unrecoverable. I tested this myself a couple of nights ago on my drive. I created a 5 MB PowerPoint doc and then deleted it. After a low-level search (there really isn't such a thing on an SSD, but you get my drift), two different recovery tools were not able to find that doc.
 

Modus

Platinum Member
Oct 9, 1999
2,235
0
0
I'm just getting back to this thread. Thanks for your thoughts.

It sounds like, as I suspected, the spare space gained by using a smaller partition isn't recognized directly by the SSD. I guess what Anand was saying is that leaving some space unpartitioned just gives the SSD more breathing room, because there are fewer LBA's that need to be maintained intact.

I can imagine that when all the LBAs are in use and one needs to be overwritten, the spare space comes in handy because the SSD can just write the LBA to an unused flash block. This avoids having to read a full block (which is larger than an LBA and can only be written in one shot), wipe the block, and rewrite it with the changed LBA. However, TRIM support would probably make this scenario much less common, since it would be rare to operate with all LBAs in use.
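Rough back-of-the-envelope numbers for that read-erase-write penalty, assuming an illustrative 4 KB page and 128-page erase block (real geometries vary by drive):

```python
PAGE = 4 * 1024           # smallest programmable unit (illustrative)
ERASE_BLOCK = 128 * PAGE  # flash can only be erased a whole block at a time

def bytes_moved_with_spare():
    # With a spare page available, the controller programs the new data
    # elsewhere and just remaps the LBA: one page write.
    return PAGE

def bytes_moved_in_place():
    # With no spare page, the controller must read the full erase block,
    # erase it, and reprogram every page including the changed one.
    return ERASE_BLOCK + ERASE_BLOCK  # read + rewrite

print(bytes_moved_with_spare())                        # 4096
print(bytes_moved_in_place())                          # 1048576
print(bytes_moved_in_place() // bytes_moved_with_spare())  # 256x amplification
```

With these assumed sizes, a single 4 KB overwrite without spare area moves 256 times more data, which is the slowdown the spare pool (and TRIM) exists to avoid.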
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
I'm just getting back to this thread. Thanks for your thoughts.

It sounds like, as I suspected, the spare space gained by using a smaller partition isn't recognized directly by the SSD. I guess what Anand was saying is that leaving some space unpartitioned just gives the SSD more breathing room, because there are fewer LBA's that need to be maintained intact.

I can imagine that when all the LBAs are in use and one needs to be overwritten, the spare space comes in handy because the SSD can just write the LBA to an unused flash block. This avoids having to read a full block (which is larger than an LBA and can only be written in one shot), wipe the block, and rewrite it with the changed LBA. However, TRIM support would probably make this scenario much less common, since it would be rare to operate with all LBAs in use.

I'm pretty sure the spare space IS recognized by the SSD.
 

Modus

Platinum Member
Oct 9, 1999
2,235
0
0
Hacp, how exactly would the SSD directly recognize the free space when you partition a drive in Windows to less than its full reported space? An ATA device like an SSD or conventional HD doesn't know what a partition is -- that's just a logical construct, defined by data stored in an LBA somewhere. All the ATA device understands are LBA blocks. It's just that, whereas a conventional hard drive generally maps LBA's predictably across its platter, the SSD maps them dynamically across its flash blocks to spread out the wear of the writes.
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Hacp, how exactly would the SSD directly recognize the free space when you partition a drive in Windows to less than its full reported space? An ATA device like an SSD or conventional HD doesn't know what a partition is -- that's just a logical construct, defined by data stored in an LBA somewhere. All the ATA device understands are LBA blocks. It's just that, whereas a conventional hard drive generally maps LBA's predictably across its platter, the SSD maps them dynamically across its flash blocks to spread out the wear of the writes.

I have no idea how. All I know is that the SSD needs to be aware of the amount of free space for its wear-leveling algorithms. If the SSD needs to keep track of how many writes every block has had, then it should be aware of the free space size too.
 

Modus

Platinum Member
Oct 9, 1999
2,235
0
0
Granted, the SSD is aware of its internally allocated free space. E.g. an SSD with an advertised 80 GB capacity might have 5-10 GB of internal free space in addition to the 80 GB of virtual LBAs that it exposes to the ATA host. But if you made a 75 GB partition, the SSD would have no way of knowing that an additional 5 GB was now free. The only benefit of this free 5 GB is that it guarantees you will never have all of your LBAs in use. So theoretically, a host that does not support TRIM would take longer to reach the point where performance degrades due to the read-erase-write sequence required to overwrite LBAs when the SSD is at peak capacity. But for a TRIM-aware host, I can't see much benefit to the smaller partition, since even near peak capacity, TRIM will ensure that flash blocks are freed as soon as files are deleted, making room for new ones.
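In other words, the arithmetic (using the post's illustrative numbers; `effective_spare_gb` is a hypothetical helper, not anything the drive exposes):

```python
def effective_spare_gb(advertised_gb, internal_spare_gb, partition_gb):
    """Spare area the controller can effectively draw on: its built-in
    over-provisioning plus any LBAs the OS will never write because
    they sit outside every partition."""
    return internal_spare_gb + (advertised_gb - partition_gb)

print(effective_spare_gb(80, 7.5, 80))  # 7.5 GB with a full-size partition
print(effective_spare_gb(80, 7.5, 75))  # 12.5 GB: the 75 GB partition adds 5 GB
```

The drive never computes this explicitly; the extra 5 GB shows up only implicitly, as LBAs that never receive a first write.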
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Granted, the SSD is aware of its internally allocated free space. E.g. an SSD with an advertised 80 GB capacity might have 5-10 GB of internal free space in addition to the 80 GB of virtual LBAs that it exposes to the ATA host. But if you made a 75 GB partition, the SSD would have no way of knowing that an additional 5 GB was now free. The only benefit of this free 5 GB is that it guarantees you will never have all of your LBAs in use. So theoretically, a host that does not support TRIM would take longer to reach the point where performance degrades due to the read-erase-write sequence required to overwrite LBAs when the SSD is at peak capacity. But for a TRIM-aware host, I can't see much benefit to the smaller partition, since even near peak capacity, TRIM will ensure that flash blocks are freed as soon as files are deleted, making room for new ones.

Ok, what do you mean by this? Because I think you have a fundamentally wrong idea of how SSDs work. Here is a quote from an IBM paper that explicitly says that, at least for Intel, the user address space can take up to the full raw flash capacity.

Over-provisioning refers to a common practice that the user address space can only take a fraction of the raw Flash memory capacity. Because of out-of-place writes, over-provisioning exists in practically all Flash SSDs, either in an explicit or an implicit way. For instance, Texas Memory Systems and STEC explicitly state that their SSDs have more raw Flash memory than the maximum logical address space that the user can use, whereas other SSD vendors, such as Intel and Micron, let the user choose the size of user address space up to the maximum of the raw Flash memory capacity. In this case, over-provisioning exists implicitly and generally decreases with usage.

For both explicit and implicit over-provisioning, the user may adjust the Flash memory utilization, i.e. the ratio of the number of current in-use LBA addresses (those for which the SSD holds valid data) over the total physical Flash memory capacity. With implicit over-provisioning, utilization can reach up to 1.0, whereas with explicit over-provisioning, the utilization is upper-bounded by the vendor's over-provisioning specification.

You can read more about how free space affects drive performance and its relationship to garbage collection and wear leveling here.
http://domino.research.ibm.com/library/cyberdig.nsf/papers/50A84DF88D540735852576F5004C2558/$File/rz3771.pdf
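The paper's utilization metric is just a ratio; as a quick illustration with made-up capacities (not real drive specs):

```python
def flash_utilization(in_use_lba_gb, raw_flash_gb):
    """Utilization as defined in the IBM paper: in-use LBA space
    over total raw flash capacity."""
    return in_use_lba_gb / raw_flash_gb

# Explicit over-provisioning: user space is capped below raw capacity,
# so utilization is upper-bounded below 1.0 (e.g. 149 of 160 GB usable).
print(round(flash_utilization(149, 160), 3))  # 0.931

# Implicit over-provisioning: user space equals raw capacity, so a
# completely full drive can reach utilization 1.0.
print(flash_utilization(160, 160))            # 1.0
```

Partitioning the drive smaller is effectively a way of forcing the implicit case to behave like the explicit one: the in-use LBA count can never climb to the raw capacity.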
 

Mark R

Diamond Member
Oct 9, 1999
8,513
14
81
No, he is not asking about TRIM. He is asking how the SSD knows what part of the drive is partitioned and formatted and what part is not. According to him, partitions are something at the OS level, something the OS knows about after reading information stored on the drive. The drive's controller itself doesn't see the partitions.

Just clarifying what he means; I've no technical knowledge on this.

The drive knows how much data it must 'preserve'. When the drive is new, it knows that the flash is blank - and every flash cell is 'spare'. Once the OS writes data to a sector, the drive must preserve the data, and so it has to reduce the amount of spare space available for internal use.

E.g. if you have a new 100 GB drive and save a 1 GB file to it, the SSD controller will know that 1 GB of data must be preserved, and only 99 GB of flash are 'spare'. If the OS writes over the top of that file, the SSD controller knows that the old data can be destroyed (even if the data is kept intact on the flash chips due to wear levelling) and doesn't need to decrease the amount of 'spare' space available.

So even if you overwrite the file 1000 times, the drive knows that only 1 GB of the 'visible surface' (i.e. how the drive appears to the OS) has been written to, and it can safely treat 99 GB as spare space.

So, if you partition a new drive, the OS will never touch the unpartitioned bits of the drive - and so the SSD controller will never need to preserve the data on them, and can safely use that flash for internal processing. If you later add a new partition, the SSD controller will automatically reduce the spare pool once that space is used by the OS.

The latest SSD controllers (e.g. SandForce) will, during idle time, automatically TRIM any sectors that lie in the drive's 'spare' area. Such an advanced SSD, fitted with several GB of extra 'hidden' flash, never needs to be TRIMmed, because there will always be, at minimum, several GB of flash which has been prepared by the internal garbage collector.
 

Modus

Platinum Member
Oct 9, 1999
2,235
0
0
other SSD vendors,
such as Intel and Micron, let the user choose the size of user
address space up to the maximum of the raw Flash memory
capacity

Maybe that explains it. It sounds like they're saying that the Intel controller allows you to manually set the free space through some proprietary API. But it does not say that simply making a smaller partition in Windows accomplishes this. Maybe this special ability of Intel controllers is what my original quote from Anand's article is referring to -- though it's missing some important details.

Mark R,

That's my understanding too -- the unpartitioned area is used as spare by default since the SSD doesn't need to preserve those blocks for used LBA's. No need for a special utility to tell the SSD controller explicitly that it now has additional spare space.
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Maybe that explains it. It sounds like they're saying that the Intel controller allows you to manually set the free space through some proprietary API. But it does not say that simply making a smaller partition in Windows accomplishes this. Maybe this special ability of Intel controllers is what my original quote from Anand's article is referring to -- though it's missing some important details.

Mark R,

That's my understanding too -- the unpartitioned area is used as spare by default since the SSD doesn't need to preserve those blocks for used LBA's. No need for a special utility to tell the SSD controller explicitly that it now has additional spare space.

It's a little confusing as to what you mean; it sounds like your second paragraph contradicts your first. The reason spare area is good is that it provides a buffer for the garbage-collection algorithm Intel uses.

I'm not exactly sure, but I think the key is overwriting vs. deleting. When you overwrite a previous file, Windows is signaling to the SSD that the old data is invalid. When you delete a file, no signal is sent by Windows. If Windows is used long enough, sooner or later every single block will have something written on it. Once it's filled, when it needs to write something new, most of the time it will need to overwrite "deleted" data.

I guess once the SSD controller notices that every single write done by Windows is an overwrite command, it will start garbage collection. So that's probably why you want to create a smaller partition: you want Windows to begin overwriting "deleted" files sooner so the SSD can go into garbage-collection mode.

As to why you need to secure erase: well, how will the SSD know that the data in the unpartitioned blocks is invalid? If you wrote data to those blocks and kept them unpartitioned after you reformat, the SSD won't be smart enough to know that the data in the unpartitioned blocks is invalid. It will keep the data there until the end of time. There is no OS to tell it to overwrite the data, because that part is unpartitioned.

In a sense, you are probably correct. The SSD probably never knows how big or small the Windows partition is.
 