Need SSD for *very specific* usage scenario - Advice?

TrackSmart

Member
Feb 6, 2011
36
8
81
I've been keeping up with the world of SSDs via Anandtech reviews, but I have a question that requires some in-depth knowledge of controller behavior.

We have a half-dozen scientific instruments that record data every few seconds - accumulating about 100 GB of data per year (which we clear off periodically). The proprietary software for these instruments does not support TRIM.

Early SSDs had a high failure rate in these systems and got into *very low* performance states, given the constant small writes and lack of TRIM. We are using regular hard disks now, but it is really an overload on the spinning disks. The instruments are very unresponsive b/c the hard disk is constantly in use. And we are having trouble getting data off the machines b/c the operating system will hang for several minutes when accessing folders with thousands of files in them.

RECAP: We need an SSD that will perform garbage collection even when it is NEVER idle and NEVER performs large sequential writes. The total volume of writes is not that high; they are just constant and in the form of small files. We are limited to a single storage device due to the proprietary software.

Any ideas from our SSD experts?
 
Last edited:

razel

Platinum Member
May 14, 2002
2,337
90
101
This sounds like a job for the professional/server level SSDs where running reliably 24/7 and handling large I/O loads takes precedence over flashy sequential big numbers performance. Off the top of my head Intel, Crucial and Kingston offer such solutions. But if I were buying I would just stick with Intel or Crucial.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Given your needs, and that it's not a home user who might need to penny pinch a lot, I would basically ignore every option but Intel. Depending on budget, a 710 might even be a decent option. Micron (Crucial) has some, too, but they aren't as widely available through retail as Intel's. Kingston does, as well, but they are nearly the same cost as consumer drives, which makes me wonder (they only really have to be cheaper than Intel, so are they really trying to undercut everyone and make thin margins, or are the drives cheap enough that those prices include good margins?). The big difference with the enterprise SSDs is that they are made for many small 4K and 8K writes (8K = SQL loads), no TRIM, and tend to favor lazy incremental GC over optimistic and unpredictable idle GC.

The low performance state w/o TRIM was due to rushing the technology out the door (several online reviewers were able to push some drives, like Samsung's 470, into such states). Even some companies that ought to know better didn't do enough design and testing (Crucial's C300, for instance--they ought to have known better!). The idle garbage collection couldn't keep up, and they would get into what amounted to a panic mode. Some, OTOH, were actually just plain bad enough that they got very slow once they got sufficiently fragmented.

I suspect some of the known-good consumer SSDs, like the Intel 520 and Crucial M4, will not have any issues, today, but I'd hate for you to spend many thousands of dollars to find out that's wrong, in your particular case. Intel's and Micron's enterprise drives, some of which are just stickers and firmware differences compared to their consumer drives, are known to work under really nasty conditions just fine.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Would a Velociraptor possibly work well for your application? Just a thought.
If it's trying to get stats on folders with thousands of files, no. It would be faster, but not by enough to matter. Directory listings and metadata collection on lots of files is a case where average SSDs can be hundreds of times faster, in practice. I'm sure a Velociraptor would be faster, but I doubt it would be more than a few times faster.

The basic problem looks to me like the OP has broken software, and has to find a way to deal with it other than getting it fixed.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
I think an Intel 320 or 710 would be best. These drives use Intel's own controller, which preferred to "clean up along the way" instead of "leave it until idle or last resort".

Intel also do some enterprise drives in their 9 series but I think a 320 would be best if you can still find somewhere to buy them from.
 

TrackSmart

Member
Feb 6, 2011
36
8
81
This sounds like a job for the professional/server level SSDs where running reliably 24/7 and handling large I/O loads takes precedence over flashy sequential big numbers performance. Off the top of my head Intel, Crucial and Kingston offer such solutions. But if I were buying I would just stick with Intel or Crucial.


Thanks for all of your replies (there are several now!). Throughput is actually very low - just a small ASCII file every few seconds, for a total of ~100GB/yr (or ~2 GB/week). The instrument writes much less data than a laptop that hibernates once per day. The problem is that the writes are constant and there is no idle time. This is the opposite of normal desktop loads.

So, we don't need high throughput or write endurance, just garbage collection that doesn't wait until the disk is idle. Or until big sequential writes (which the instrument doesn't perform).

EDIT: If I could rewrite the software, I'd have the instruments write to a RAM disk, and only dump the data to the hard disk once every few minutes. You'd risk losing a few minutes of data if there were a power outage or other failure - but I'd take that tradeoff.
 
Last edited:

TrackSmart

Member
Feb 6, 2011
36
8
81
We can only realistically afford ~$200/drive, since we have several instruments. That rules out the Intel 710 series ($400 for 100GB model).

I see one vote for the Intel 320 series ($200 for 120GB). Any further insight into how garbage collection works on these drives?
 

StarTech

Senior member
Dec 22, 1999
859
14
81
Thanks for all of your replies (there are several now!). Throughput is actually very low - just a small ASCII file every few seconds, for a total of ~100GB/yr (or ~2 GB/week). The instrument writes much less data than a laptop that hibernates once per day. The problem is that the writes are constant and there is no idle time. This is the opposite of normal desktop loads.

So, we don't need high throughput or write endurance, just garbage collection that doesn't wait until the disk is idle. Or until big sequential writes (which the instrument doesn't perform).

I agree. I would use a 512GB Samsung 830. At that rate the drive will last many years, assuming you keep a large part of the drive empty or over-provisioned.

What OS is being used ?
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Ever considered using a RAMDisk as storage/access? You get speeds greater than an SSD, with unlimited writes like an HDD. If you REALLY need speed, buy a few i-RAMs and load 'em up with 4GB of RAM each (if it's 100GB a year, 4GB will be fine for a day or even a week). Then at night when the instrument isn't in use (or whenever you need to), have all the information copy over to an array of Velociraptors in RAID6 or something similar. That way you have a single organized place that is backed up and fast for all the information you process.

EDIT - I saw that your budget is about $200 a drive; try bringing my idea to your boss. An SSD works now, until you have to replace it in a few years when it runs out of writes and dies/becomes too slow. RAM doesn't work like that, and the Velociraptors (or whatever enterprise disks you get) certainly will outlast an SSD in write-intensive roles. It makes more financial sense in the long run to get RAM disks.
 
Last edited:

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Intel 330 or 520. Why? Because due to real-time compression performance will not degrade as much over time. Since we are talking about recording, the data is most likely easily compressible, which is even better for SF.
 

hhhd1

Senior member
Apr 8, 2012
667
3
71
For that amount of writes, do you really need garbage collection to be 'that' efficient?

Early SSDs had a high failure rate in these systems and got into *very low* performance states

What SSDs were they?


I vote for a regular Samsung 830, 64GB or 128GB.

Assuming the OS you are running does TRIM, you should be fine; if not, running TRIM manually every week or so should keep things going fine.
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
Can you store the data from the instruments separately onto separate HDDs/SSDs? Or do they all go through the same software and get stored in the same directory?

Edit: Scratch that, because of this comment in the OP: "We are limited to a single storage device due to the proprietary software."

Perhaps a wilder and more complicated solution is to create a RAM disk and have the software write these tiny files to the RAM disk. Then write your own background script that moves the tiny files more than x minutes old from the RAM disk to the HDD/SSD, and have this background script trigger every y minutes. Of course, that requires you or someone you work with to have some kind of scripting knowledge.
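A minimal sketch of such a script in Python - the paths, the x-minute age threshold, and the y-minute interval are all placeholders you'd adjust for the real setup:

```python
import os
import shutil
import time

# Placeholders -- adjust for the real instrument and destination.
RAM_DIR = "/mnt/ramdisk/data"   # where the instrument software writes
DEST_DIR = "/data/archive"      # the HDD/SSD the files should end up on
MIN_AGE_S = 5 * 60              # "x minutes older than right now"
INTERVAL_S = 2 * 60             # "trigger every y number of minutes"

def sweep(src_dir=RAM_DIR, dst_dir=DEST_DIR, min_age=MIN_AGE_S):
    """Move every file older than min_age seconds off the RAM disk."""
    now = time.time()
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if os.path.isfile(src) and now - os.path.getmtime(src) > min_age:
            shutil.move(src, os.path.join(dst_dir, name))

def run_forever():
    """Background loop: sweep the RAM disk every INTERVAL_S seconds."""
    while True:
        sweep()
        time.sleep(INTERVAL_S)
```

Leaving files on the RAM disk until they are a few minutes old means the drive only ever sees settled files, at the cost of losing those few minutes of data on a power failure.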
 
Last edited:

ALIVE

Golden Member
May 21, 2012
1,960
0
0
Can you store the data from the instruments separately onto separate HDDs/SSDs? Or do they all go through the same software and get stored in the same directory?

Edit: Scratch that, because of this comment in the OP: "We are limited to a single storage device due to the proprietary software."

Perhaps a wilder and more complicated solution is to create a RAM disk and have the software write these tiny files to the RAM disk. Then write your own background script that moves the tiny files more than x minutes old from the RAM disk to the HDD/SSD, and have this background script trigger every y minutes. Of course, that requires you or someone you work with to have some kind of scripting knowledge.

No need - Dataram RAMDisk has an option to save the contents of the RAM drive at a specified time.
The only drawback is that it saves the contents of the whole RAM drive into a big image file.
 

Blain

Lifer
Oct 9, 1999
23,643
3
81
We can only realistically afford ~$200/drive, since we have several instruments.
How mission critical is storage of the data you're collecting?
Would the company's reputation be damaged if a drive failed and the data wasn't recoverable?
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
I agree. I would use a 512GB Samsung 830. At that rate the drive will last many years, assuming you keep a large part of the drive empty or over-provisioned.

What OS is being used ?


He said their budget was 200 dollars. Gosh, that Sammy is more expensive than the best video card out there - $700 last I checked.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
For that amount of writes, do you really need garbage collection to be 'that' efficient?
No, he needs an SSD that will be guaranteed not to get backed into a wall. When Intel's 2nd gen (and 320) came around and pretty much cleaned house, that's why: without worrying about perfect numbers gained from idle-time GC, they could keep on chugging, where other drives could be written to at rates that outran GC.

Intel 330 or 520. Why? Because due to real-time compression performance will not degrade as much over time. Since we are talking about recording, the data is most likely easily compressible, which is even better for SF.
If the data is sufficiently small, like less than 4KB, there will be no meaningful benefit from compression, since at a minimum, 4KB or so must be written, even for a single-byte change. I don't think Intel's 330 or 520 are bad drives to choose from, but compression would probably only help if the data starts having to get moved about the drive.
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
Your workload really isn't very difficult. 100GB per year? That is tiny.

I'd go with 256GB Samsung 830 SSDs. They can be had for about $200 on sale. Other alternatives would be 256GB Plextor M3, 256GB Plextor M5S, or 256GB Crucial m4.

When you get them, do a secure erase and then create a 128GB partition, leaving the rest of the SSD unused. This will give you about 147GB of overprovisioning (53%), so even if you fill up the 128GB partition, the SSD still has plenty of extra blocks to work with.
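For the curious, the arithmetic behind those numbers - assuming, as is typical, that a "256GB" consumer drive carries 256 GiB of raw NAND while the partition is sized in decimal GB:

```python
GIB = 2**30   # binary gibibyte
GB = 10**9    # decimal gigabyte

raw_nand = 256 * GIB   # flash actually on a "256GB" drive
partition = 128 * GB   # the user-visible 128GB partition

spare = raw_nand - partition
spare_gb = spare / GB                # overprovisioned capacity in GB
spare_pct = 100 * spare / raw_nand   # as a fraction of the raw flash

print(round(spare_gb), round(spare_pct))  # 147 53
```

That matches the "about 147GB of overprovisioning (53%)" figure above.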

Even that is probably overkill. If you just used the SSD normally (with a full drive partition), and you cleared it off once a year and secure-erased it (or TRIMmed the whole drive), you would likely have no problems.

The only time you would run into problems with a workload like yours is if you were writing to a SSD that had already been filled with the equivalent of full-span 4K random writes, then the files deleted without TRIM or secure-erase.

For reference, to get the worst-case performance from an SSD, just do 4K random writes to the entire span, but random in a specific way -- write to every LBA in a random order. In other words, divide the 512-byte LBAs of the SSD into 4KiB groups, then write to each 4K group once, in a random order. A good tool to do this is fio. After doing that, any further sequential or random writes that you do (assuming you don't TRIM the SSD) will be worst-case.
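In practice you would use fio against the raw device for this, but the access pattern itself is simple enough to sketch. This Python illustration runs against an ordinary file rather than a device (actually pointing something like this at an SSD's raw device destroys its contents, so treat it purely as a description of the pattern):

```python
import os
import random

def worst_case_fill(path, size_bytes, block=4096):
    """Write every 4 KiB-aligned block of the target exactly once, in random order.

    This is the "full-span 4K random write" pattern described above: no block
    is skipped and none is written twice, which maximally fragments the
    drive's logical-to-physical mapping.
    """
    n_blocks = size_bytes // block
    order = list(range(n_blocks))
    random.shuffle(order)            # visit every block once, in random order
    payload = os.urandom(block)
    with open(path, "r+b") as f:     # target must already exist at full size
        for i in order:
            f.seek(i * block)
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())
```

With fio, the equivalent is a 4K random-write job spanning the whole device.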

I'm guessing your workload did something similar to your SSDs. But you can avoid that in the future by using 50+ % overprovisioning, and secure-erasing once a year.

If you want to test this, just do the full-span 4K random write I mentioned, then test the write speed for your workload. Then secure-erase the SSD, and then do a similar 4K random write, but only over a 128GB span, then test the write speed for your workload. You will find that the write speed will be low in the first test, but should be close to new (fresh out of box) in the second test.
 
Last edited:

TrackSmart

Member
Feb 6, 2011
36
8
81
How mission critical is storage of the data you're collecting?
Would the company's reputation be damaged if a drive failed and the data wasn't recoverable?


I appreciate the pitch that you are recommending to the 'boss', but this is academic research, so we aren't generating revenue. I was hoping someone would have an idea regarding which SSDs might work for our application. Basically, something that doesn't wait until idle, or until large write operations, for garbage collection.
 

hhhd1

Senior member
Apr 8, 2012
667
3
71
+1 on overprovisioning.

leave some space unpartitioned as described in the previous post,
although leaving 7% unpartitioned should be enough,
leaving 20% would be more than enough.
 

TrackSmart

Member
Feb 6, 2011
36
8
81
Yes, this is exactly the case. We had small SSDs (30 GB), from an earlier generation, inside an instrument whose only mission in life was to fill them with small files - files that would get 'deleted' to make space, but without TRIM or garbage collection. One disk failed. Both slowed down tremendously. It's possible that any ordinary SSD from the modern era will work, but I'd like to know more before making the investment in new SSDs for several instruments.

Yes, we could open the instruments, pull the drives, and manually trim on a desktop computer each year. But I'd rather install drives that will perform their own maintenance if I can find them. Wouldn't you?

So does anyone know which drives will perform GC under non-idle conditions outside of large writes? That is the question I'm hoping someone can answer.



Your workload really isn't very difficult. 100GB per year? That is tiny.

I'd go with 256GB Samsung 830 SSDs. They can be had for about $200 on sale. Other alternatives would be 256GB Plextor M3, 256GB Plextor M5S, or 256GB Crucial m4.

When you get them, do a secure erase and then create a 128GB partition, leaving the rest of the SSD unused. This will give you about 147GB of overprovisioning (53%), so even if you fill up the 128GB partition, the SSD still has plenty of extra blocks to work with.

Even that is probably overkill. If you just used the SSD normally (with a full drive partition), and you cleared it off once a year and secure-erased it (or TRIMmed the whole drive), you would likely have no problems.

The only time you would run into problems with a workload like yours is if you were writing to a SSD that had already been filled with the equivalent of full-span 4K random writes, then the files deleted without TRIM or secure-erase.

For reference, to get the worst-case performance from an SSD, just do 4K random writes to the entire span, but random in a specific way -- write to every LBA in a random order. In other words, divide the 512-byte LBAs of the SSD into 4KiB groups, then write to each 4K group once, in a random order. A good tool to do this is fio. After doing that, any further sequential or random writes that you do (assuming you don't TRIM the SSD) will be worst-case.

I'm guessing your workload did something similar to your SSDs. But you can avoid that in the future by using 50+ % overprovisioning, and secure-erasing once a year.

If you want to test this, just do the full-span 4K random write I mentioned, then test the write speed for your workload. Then secure-erase the SSD, and then do a similar 4K random write, but only over a 128GB span, then test the write speed for your workload. You will find that the write speed will be low in the first test, but should be close to new (fresh out of box) in the second test.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
Yes, we could open the instruments, pull the drives, and manually trim on a desktop computer each year. But I'd rather install drives that will perform their own maintenance if I can find them.

I already answered your question. Get a 256GB SSD and 50+ % overprovision it.

If you cannot be bothered to secure-erase the SSDs once a year, and you cannot afford to buy the proper tool for the job (an enterprise SSD like the Micron P400 that will have a relatively high worst-case performance), then the best you can do is to get a 256GB Samsung 830 (or Plextor M3, M5S, or Crucial m4) and 50+ % overprovision it.

Also, your formulation of the question was poor, which is why you got such a wide variety of answers. You should have included:

1) What make and model of SSDs you had before

2) What EXACTLY were the problems you experienced. Specifically, and with numbers.

3) What EXACT performance is required on the new SSDs, specifically, with numbers.

Finally, your approach to solving the problem is poor. Once you identify what appears to be the best solution, you need to TEST it. That is why I included an outline for how to test the solution I suggested. It would be crazy to spend a lot of money based only on posts in an internet forum thread. You need to verify that the solution is viable yourself by testing it.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
So does anyone know which drives will perform GC under non-idle conditions outside of large writes? That is the question I'm hoping someone can answer.
Manufacturers are quite cagey about such things. They don't want to make any of their tricks easier to reverse-engineer. Practically none require idle conditions, now, but writes at a constant period could very well bring out nasties that typical use and testing might not. Most do still want to have periods of light use, even if they don't get minutes of idle time.

There's no guarantee without paying the big bucks, but Intel would be hard to go wrong with, Sandforce or not.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Reading Anandtech's review of the 320, I found this:

The 320 behaves a lot like the old X25-M G2 did when tortured. Minimum performance drops pretty low - Intel prefers cleaning up as late as possible to extend drive longevity. As a result, I wouldn't recommend using the 320 in an OS without TRIM support.
I am sure Anand praised Intel during the X25-M G2 / 320 days for delivering a more consistent performance compared to Samsung and Crucial who did GC as late as possible, so I am not sure where that quote came from. If it is true, then I think all the major controllers out there, SF, Intel, Marvell and Samsung all clean up as late as possible.

I do think jwilliams4200 is talking a lot of sense. If you over provision a 256GB into 50/50, it will take a full year to write to that available 50% and there will still be a whole 50% free to maintain performance over time. I would maybe still try and get an Intel 320 300GB, but if the price is too high, there's some great deals on m4's and 830's.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I am sure Anand praised Intel during the X25-M G2 / 320 days for delivering a more consistent performance compared to Samsung and Crucial who did GC as late as possible, so I am not sure where that quote came from. If it is true, then I think all the major controllers out there, SF, Intel, Marvell and Samsung all clean up as late as possible.
They all do. Try to find one drive that can't be dropped into a very low performing state with torture tests like AT runs. Some do better than others, but they're all made for desktop/mobile work, and the technology is not yet sufficient for every workload to get very fast results with the same drive and firmware.

OTOH, everybody's favorite Samsung 830, much newer, got even worse results. I don't think it's a question of if for any desktop/mobile drive, but how bad can it get. Some older drives would become practically unusable.

Waiting gives the most information, which allows for the best results for peak performance and longevity. Incremental GC will never be able to reach the same level of WA, nor the same level of peak performance, though it is what those of us who want our technology to "just work" would prefer, so that we can get some guarantee that we'll never have to worry about it, dammit.
 