Data Storage Plan Feedback + Some Questions

ElectroPulse

Member
Jun 13, 2012
26
0
61
Hello, all!

I am planning on saving up for a smoking hot PC over the next year or so (in preparation for consumer VR), and I'm thinking about what I want to do for data storage. In the past, I have always found HDDs infuriatingly slow... I've had a 120GB Samsung 830 for about 3 years, and want something even faster than that. If I manage to save up enough in the next year or so (I'm starting my career soon, so hopefully that won't be an issue), I would like to set up a storage solution so ridiculously fast that I have absolutely nothing to complain about. Computers are my life, and I would like as fast an experience as possible.

I want to go with large SSD-based primary storage, with secondary storage on HDDs for bulk files. I don't know much about SSD caching for HDDs, other than having it set up in my laptop when I got it, so I'm not sure what the possible configurations are. As such, I am here to see if what I am thinking about is possible. For example, I only know that caching works with a single drive, and am unsure whether you can do it with an array.

Another thing I came across while thinking about this is whether you can run RAID 10 with one of the RAID 0 arrays SSD-based and the other HDD-based, but from what I've read, the entire array will only run as fast as its slowest device. As such, I am thinking of maybe running an SSD-based RAID 0 array, with a single HDD for backup and incremental backup (I am also planning on using CrashPlan, Backblaze, Amazon, or some other unlimited storage solution for offsite).

So, here's the idea:
Primary storage (OS/software):
4x 256GB SSDs in RAID 1 (only $60 more than buying one 1TB SSD, but theoretically 4x the performance, based on Samsung 850 Pro pricing)
1x 2-4TB HDD for backup/incremental backup + pagefile (for older programs that were designed to use pagefile)
Maybe 1x cheap 60GB SSD for caching the HDDs, for faster backup?

Bulk storage:
2x 2TB HDDs in RAID 1
1x 120GB SSD for caching

Any feedback?

In addition, a question about RAID 1: I've been trying to find a conclusive answer (I keep finding conflicting answers, mostly from 5+ years ago) about whether RAID 1 read speed is the same as RAID 0 read speed. Obviously the write speed would be the same as a single HDD (hence the SSD cache for the bulk storage), but I'm wondering about reads. I was originally thinking about 4x 2TB HDDs in RAID 10, but after reading that RAID 1 may have similar read performance, I switched to RAID 1 with an SSD cache.

Anyway, this is a long time in the future, so who knows whether I will end up going with this type of setup (for all I know, I may end up with a much cheaper solution), but it's fun to think about!

Thanks!
ElectroPulse
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
4x 256GB SSDs in RAID 1 (only $60 more than buying one 1TB SSD, but theoretically 4x the performance, based on Samsung 850 Pro pricing)
4x? Nope. You'll hit the chipset's limits for bandwidth, and the smaller drives don't perform equally to the larger ones. So, more like 2-3x file copy performance, unless you add a nice LSI HBA and go with bigger drives. You also won't get any real improvement in IOPS, just in synthetics (serial dependencies come into play).
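To put rough numbers on that ceiling, here's a back-of-the-envelope sketch in Python (the ~500MB/s per SATA SSD and ~1600MB/s of usable chipset/DMI bandwidth are illustrative assumptions, not measured figures):

Code:
# Rough model of RAID 0 sequential scaling behind a shared chipset link.
# Assumed figures, for illustration only: each SATA SSD sustains ~500 MB/s,
# and the chipset's DMI uplink leaves ~1600 MB/s usable for storage.
DRIVE_MBPS = 500
CHIPSET_MBPS = 1600

for n in range(1, 5):
    ideal = n * DRIVE_MBPS                # what "4x the performance" assumes
    actual = min(ideal, CHIPSET_MBPS)     # the shared-link ceiling
    print(f"{n} drive(s): ideal {ideal} MB/s, actual {actual} MB/s "
          f"({actual / DRIVE_MBPS:.1f}x one drive)")

With those assumptions, four drives land around 3.2x a single drive for big sequential copies, and that's before the smaller drives' slower writes drag it down further.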

1x 2-4TB HDD for backup/incremental backup + pagefile (for older programs that were designed to use pagefile)
1. Such programs do not exist. They can check for the existence of a PF, being broken POSes, but that's as far as it goes.
2. The HDD is a ton slower. This is not 2009. The PF goes on the SSD.
Maybe 1x cheap 60GB SSD for caching the HDDs, for faster backup?
Caching will make little difference for backing up. Too much data to go through, and too little chance of it being recently read. Just accept that ~150MB/s is as good as it gets, and leave it alone. Cache if you want, but it will mainly help usage other than backups.
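If you want to see why, here's a minimal simulation (sizes are arbitrary, and the LRU policy is just a stand-in for whatever your caching software actually does): a full backup pass touches every block exactly once, so almost nothing is ever found in the cache.

Code:
from collections import OrderedDict

# Toy LRU read cache vs. a full-backup access pattern; sizes are arbitrary.
CACHE_BLOCKS = 60      # e.g. a 60GB cache SSD, counted in 1GB "blocks"
DATA_BLOCKS = 2000     # e.g. a 2TB data set

cache, hits = OrderedDict(), 0
for block in range(DATA_BLOCKS):       # a backup reads every block exactly once
    if block in cache:
        hits += 1
        cache.move_to_end(block)
    else:
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)  # evict the least-recently-used block
print(f"Hit rate over one backup pass: {hits / DATA_BLOCKS:.0%}")  # -> 0%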

Any feedback?
You're over-complicating things. When you get near upgrading, go to http://www.pcpartpicker.com/, then filter and sort the storage page for suitable SSDs (for instance: SSD, 960GB-8TB, sort by $/GB, then ignore minor brands). When a good deal pops up, buy. Otherwise, buy when you need to. Nothing wrong with using SATA RAID (PCIe may be a better value if you wait awhile, but right now, 1+TB SATA RAID 0 has better speed for the money), but if you do, don't get tiny drives to do it with. Performance consistency will be better with bigger drives and more free space.

Personally, I'd go with Intel (Intel controller), Micron/Crucial (Marvell-based, and M-series derived -- i.e., like the MX200), or SanDisk (Marvell-based, MLC).

For example, right now, you could get 1.9TB of M500 for $600, compared to 1TB of 850 Pro for $560. In the long run, nearly double the space will be worth way more than benchmark scores that bear no resemblance to your daily usage. But there may be good budget PCIe SSDs by then, too, in 1TB+ sizes. The big brands are not screaming their plans from the rooftops and street corners, because they all really want to woo OEMs and minimize the chance of a competitor being able to respond before either has products to ship (it's mostly generalities in their roadmaps and presentations).
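The $/GB arithmetic behind that comparison, using the prices quoted above:

Code:
# $/GB for the two drives quoted above (prices from this post, not current).
options = {"1.9TB Crucial M500": (600, 1900),
           "1TB Samsung 850 Pro": (560, 1000)}
for name, (dollars, gb) in options.items():
    print(f"{name}: ${dollars / gb:.2f}/GB")
# -> about $0.32/GB for the M500 vs. $0.56/GB for the 850 Pro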

In addition, a question about RAID 1: I've been trying to find a conclusive answer (I keep finding conflicting answers, mostly from 5+ years ago) about whether RAID 1 read speed is the same as RAID 0 read speed. Obviously the write speed would be the same as a single HDD (hence the SSD cache for the bulk storage), but I'm wondering about reads. I was originally thinking about 4x 2TB HDDs in RAID 10, but after reading that RAID 1 may have similar read performance, I switched to RAID 1 with an SSD cache.
RAID 1 depends. Some implementations give faster reads, some do not. Windows' RAID 1 does not. Intel's probably does, but I don't recall exactly. If you are concerned about that, however, a simple RAID 1 is not for you.

But, all that being said, a year from now is a long way away, with NVMe really coming out now. Too much is changing to be sure what the best solution will be right now.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,785
1,500
126
So far, the other posters offer valid insights. You DID ask some questions I can answer -- possibly to the irritation of some others.

I did more than "dabble" in RAID configurations, and I still regret simply giving away a 3Ware 9650SE 4-port controller in a system I handed down to my brother 20 miles from here.

But SSDs have changed the entire landscape. Multi-drive RAID arrays of HDDs aren't "that much" faster; they're cumbersome; the computer becomes heavy with HDDs. I began to drift away from ANY RAID configuration, although some here will continue to promote RAID arrays of SSDs. I choose now -- and pretty much have done so for the last four years -- to make my systems simple.

If I want redundancy and easy backup, I have my home server for that, but not everyone wants -- or has -- a home server. If I didn't, my workstation would be fitted with one hot-swap SATA-drive bay. I might even leave the hot-swap drive powered on for automatic backups, while allowing quick and easy replacement with a spare.

For the "simple" system, I choose a single fast SSD for boot-system disk as one part of the strategy. Ideally, this would be a 500GB SSD, but I'm building systems with SSDs deployed the same way with half that capacity.

If you have an older motherboard, you may only have two SATA-III ports, and certainly the boot-system SSD would be connected to one of them. Newer boards include more SATA-III ports. Either way, it is not a limitation for me.

The second aspect of my strategy falls back on a reasonably large HDD: 1TB or larger, but I've used 500GB units initially with the plan to replace them by simply cloning to a larger unit. Especially when my SSD usage approaches 50%, I'll begin installing programs and game files on the HDD. Slow? Not really, as part of my strategy.

A 60GB SATA-III SSD costs between $40 and $50. While Intel's ISRT relies on a RAID-mode BIOS configuration (or did -- tell me if it's changed), Samsung's RAPID RAM caching only works for a single Samsung drive. Marvell's "HyperDuo" attempts to match ISRT, and still requires BIOS configuration of a controller likely designed for RAID. A little $30 program and license has shown these "proprietary" solutions for what they are.

Primo-Cache is a Swiss-army knife of caching potential. It will cache SSDs or HDDs to RAM; it will cache HDDs to an SSD. There is no limit on the size of the caching SSD as with ISRT. You can use a single SSD to cache several HDDs, and as far as I can tell, you could cache several SSDs to RAM regardless of manufacturer. Especially, you can do both RAM-caching and SSD-caching with all drives in AHCI-mode BIOS configuration. You can cache RAID arrays or single drives.
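Conceptually, that kind of two-tier setup behaves like the little sketch below -- a toy model only, not PrimoCache's actual implementation: reads are served from RAM if possible, then from the SSD cache, and only then from the HDD, with misses promoting blocks upward.

Code:
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache keyed by block number -- not PrimoCache's real design."""
    def __init__(self, capacity):
        self.capacity, self.blocks = capacity, OrderedDict()
    def get(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return True
        return False
    def put(self, block):
        self.blocks[block] = True
        self.blocks.move_to_end(block)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)

ram, ssd = LRUCache(4), LRUCache(16)    # tiny capacities, in "blocks"

def read(block):
    if ram.get(block):
        return "RAM hit"
    if ssd.get(block):
        ram.put(block)                  # promote a warm block into RAM
        return "SSD-cache hit"
    ram.put(block); ssd.put(block)      # a miss fills both tiers from the HDD
    return "HDD read"

for b in [1, 2, 1, 3, 1, 2]:
    print(b, read(b))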

By contrast, you are limited to 64GB of SSD caching with ISRT, although it, too, can cache a RAID array.

So my "prescription" involves three storage devices: two SSDs and at least one HDD, but I see no reason to add more spinners. I suppose one would spend about $200 on the first SSD for a boot-system drive; ~$45 for the smaller SSD; anywhere from $40 to $100+ for the single HDD. Personally, I just use recycled 1TB SATA_II drives for the latter, because the key factor in "speed" is the caching SSD attached to the second SATA-III controller port. Whether the HDD is an older model connected to an SATA-II port hardly matters.

The two SSDs are going to consume considerably less power than one or more HDDs. And the simplest design deploying a single HDD (say -- 2TB) will show less wear and tear on the HDD because of the caching.

It's important to put the power draw of storage devices in perspective. It's also important to distinguish the power draw of other devices, like cooling fans. I just retired my brother's C2D system, which used 4x case-intake fans, one exhaust fan and one CPU-cooler fan. It was fitted with 2x WD Black SATA-II HDDs in RAID 0. It also had two BFG 9600 GT graphics cards in SLI. The SLI consumes power; the fans consume power; the drives consume only so much -- but more than necessary.

So his system was drawing maybe 190 idle watts; the replacement system -- with the two SSDs and a 2.5" HDD -- idles at about 72W. There's only a single GTX 570 graphics card. But even with the system working above idle, the wattage draw couldn't be much more than 100W, and probably lower than that.
 

ElectroPulse

Member
Jun 13, 2012
26
0
61
Thank you for the replies, all!

@dave_the_nerd:
Did some more looking into PCIe cards, and I'm liking what I see in terms of performance, especially with NVMe (thanks for the heads up on that new technology, Cerb!). Hopefully the prices come down a bit in the next year!

@Cerb:
Ok, so putting the pagefile on the SSD is no longer considered bad practice? Is this due to an increased expected lifetime of current SSDs? I remember a few years ago when I set up mine, people were saying to move it off because it could cause the early demise of an SSD due to the writes.

Also, thank you for that link! It's been several years since I looked into PC hardware, and back then I just did shopping cart builds... That looks to be a very useful website!

Also a very good point regarding space vs. speed, I hadn't thought of it that way... Being able to put everything on the primary drive without needing to deal with symlinks and stuff down the road would definitely be nice. I'll have to see where prices are in a year, and how much PCIe drops.

@BonzaiDuck:
The reason for wanting to run RAID 1 (which I guess RAID 1 on the primary storage would cater to more) is to be able to shrug off a hardware failure with little more effort than replacing the drive, without any downtime. Though I plan on running a storage server, I am planning on that being for important files and shared resources (in addition to something like BitTorrent Sync or ownCloud). I may look into the idea of the hot-swap bay.

For Primo-Cache, are there any performance drawbacks when comparing it to vendor-specific solutions? As for caching to RAM, with the way Windows now caches programs, would there be much/any improvement in performance by using it?

I like your "prescription." I may end up doing that myself. 1x large PCIe SSD for primary, 1x 2-4TB HDD for storage + 60GB for caching.

Also, another question: I see a lot of people recommending 60GB for caching. What is the reason for this? Are there diminishing returns going larger, or something? Or is it simply the pricing? I'm wondering because my laptop came with a 30GB SSD that caches for my 500GB HDD. Unfortunately, when I copy large files to it, it's only at 120MB/s+ for a few seconds, then it drops to like 20-40MB/s. I'm taking this to mean that the majority of the cache SSD is for loading programs and the OS quickly, and not much of it is for file copies. Wouldn't a lot more storage be beneficial for this?
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
So, here's the idea:
Primary storage (OS/software):
4x 256GB SSDs in RAID 1 (only $60 more than buying one 1TB SSD, but theoretically 4x the performance, based on Samsung 850 Pro pricing)
1x 2-4TB HDD for backup/incremental backup + pagefile (for older programs that were designed to use pagefile)
Maybe 1x cheap 60GB SSD for caching the HDDs, for faster backup?

Bulk storage:
2x 2TB HDDs in RAID 1
1x 120GB SSD for caching


Any feedback?
I agree with basically everything Cerb said.

Multi-RAIDing SSDs is not really doing anything for you. Get a single SSD of large enough capacity (probably 240/256 to 480/512 GB) and don't bother with RAIDing them. If you're not doing extremely IO-intensive work (and if you have to ask, you're probably not), don't even waste your money on the "pro" SSDs. If you're not buying immediately (as you mentioned), then as dave_the_nerd indicated, it sure seems like PCIe SSDs are just on the cusp of breaking into the mainstream. Assess your options when you have the money; don't commit to SATA SSDs now when it is not clear that they will be the best option in a year.

When you get an SSD, keep the pagefile on it. It's not worth stressing over. Drives have improved dramatically over the last few years. As long as you don't fill the drives to capacity and you don't intentionally abuse them by running huge benchmarks day in and day out, they are highly unlikely to die because the NAND wore out.
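Some rough wear arithmetic shows why (the 150TBW rating and the 20GB/day of writes are illustrative assumptions; check your actual drive's endurance spec):

Code:
# Rough NAND-wear estimate; both inputs are illustrative assumptions only.
rated_tbw = 150    # endurance rating in TB written (check your model's spec)
daily_gb = 20      # a generous desktop figure, pagefile traffic included

years = rated_tbw * 1000 / daily_gb / 365
print(f"~{years:.0f} years to burn through the rated endurance")  # ~21 years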

A multi-TB spinner is a fine idea for back-ups. I would not bother wasting time, money, or effort to set up a caching drive for my back-up drive. Do you even know what back-up software you'll be using, that you're worried it will be so slow as to require caching?

Where is the back-up for your bulk storage drives? Don't make me chant the RAID mantra...

In addition, a question about RAID 1: I've been trying to find a conclusive answer (I keep finding conflicting answers, mostly from 5+ years ago) about whether RAID 1 read speed is the same as RAID 0 read speed. Obviously the write speed would be the same as a single HDD (hence the SSD cache for the bulk storage), but I'm wondering about reads. I was originally thinking about 4x 2TB HDDs in RAID 10, but after reading that RAID 1 may have similar read performance, I switched to RAID 1 with an SSD cache.
I'd have to double-check, but I think you can only get a speed boost from RAID 1 reads at very high queue depth with a decent controller. Windows software RAID might do it. I would seriously doubt Intel's motherboard RAID would do it. Either way, ordinary desktop computer usage will not benefit from this. I would probably just get a single HDD of the capacity you need (1x 4TB instead of 4x 2TB in RAID 10, 1x 2TB instead of 2x 2TB in RAID 1).
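If you want to see the queue-depth dependence, here's a toy model (it assumes the controller dispatches independent reads to either mirror member, which, as noted, many RAID 1 implementations don't do; real-world gains also need much deeper queues than this idealized version suggests):

Code:
import math

# Toy model: each read costs one time unit on a disk, and the controller
# can dispatch independent reads to either mirror member.
def batch_time(queue_depth, disks):
    return math.ceil(queue_depth / disks)

for qd in (1, 2, 8, 32):
    speedup = batch_time(qd, 1) / batch_time(qd, 2)
    print(f"QD{qd}: mirrored reads run {speedup:.1f}x a single disk")

At QD1 -- which is most of ordinary desktop usage -- the mirror shows no gain at all.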
 
dave_the_nerd

Feb 25, 2011
16,822
1,493
126
I like your "prescription." I may end up doing that myself. 1x large PCIe SSD for primary, 1x 2-4TB HDD for storage + 60GB for caching.

That sounds reasonable.

Also, another question: I see a lot of people recommending 60GB for caching. What is the reason for this? Are there diminishing returns going larger, or something? Or is it simply the pricing?
Both -- it's price/performance. A 32/64GB SSD is relatively inexpensive ($30-$40 last I checked, probably less now) and gives you a big boost compared to nothing. A 120/128GB SSD at $50 is in the ballpark, but until recently, prices in the $80-100 range were common, and wouldn't get you much of a boost vs. a 60/64GB SSD at half the price. (And how much is "worth it" to make an HDD faster, when you could just go all-SSD and declare victory?)

As larger SSDs move down the price list, you'll probably see more people recommending 128GB SSDs for caching. (Also, they're usually faster since they have more NAND channels populated.) "Common wisdom" recommendations tend to lag behind the state of the hardware market a bit, since most people aren't compulsively hitting "refresh" on pcpartpicker so they can be the first person on Anandtech to claim that ___ is the best bang for your buck now.

Also, I seem to recall that some caching software only supported <64GB SSDs. That may still be a limitation; it depends.

I'm wondering because my laptop came with a 30GB SSD that caches for my 500GB HDD. Unfortunately, when I copy large files to it, it's only at 120MB/s+ for a few seconds, then it drops to like 20-40MB/s. I'm taking this to mean that the majority of the cache SSD is for loading programs and the OS quickly, and not much of it is for file copies. Wouldn't a lot more storage be beneficial for this?
Depends on the caching software. It could be tuned to emphasize write performance (leave more blank space on the SSD), but most desktop workloads are, like, 90% reads, so that would be wasted space for most people, most of the time. So I'd be surprised if your SSD tiering software was using the SSD as a write cache at all, really.

What you're probably hitting is the write cache in RAM.
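That burst-then-drop pattern falls out of a simple model (all figures are illustrative, and it ignores the cache draining while you copy): writes land in RAM at full speed until the cache fills, then throughput collapses to what the HDD can actually sustain.

Code:
# Toy model of copying one large file through a RAM write cache.
# All figures are illustrative, not measurements of any particular laptop.
CACHE_GB, BURST_MBPS, HDD_MBPS = 1.0, 120.0, 35.0

def average_mbps(file_gb):
    cached = min(file_gb, CACHE_GB)       # absorbed at burst speed
    spilled = file_gb - cached            # has to drain at HDD speed
    seconds = cached * 1024 / BURST_MBPS + spilled * 1024 / HDD_MBPS
    return file_gb * 1024 / seconds

for size in (0.5, 1, 4, 16):
    print(f"{size:>4} GB file: ~{average_mbps(size):.0f} MB/s average")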
 
dave_the_nerd

Feb 25, 2011
16,822
1,493
126
FWIW, I'm seeing pretty good iSCSI performance from my server. If my gaming machine had a "good" Ethernet card, I'd probably have my Steam folder on there. (Realtek is not so awesome.)

Next build.

(And yes, I know that's ludicrously impractical.)
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
For a hot-swappable 2.5"/3.5" SATA port, you should give the Rosewill Blackhawk case a look. It has a SATA port/dock built into the top of the case, and it's so damn convenient, I use it all the time -- for backups, repairs, whatever. As long as it, or another case with a similar feature, is available, that is all I will ever buy for my personal desktop machines for the rest of my life. I love it.

Also, I run a home server with hot-swappable 5.25"-to-3.5" cages in it, running Windows Server with Storage Spaces. I would avoid RAID unless you buy a good controller card. But if you're still wanting to do some elaborate storage configuration and will be running Windows 8 or higher, you should give Storage Spaces a look, as it's also available in the regular (non-server) versions of Windows. It even has data tiering with SSDs now (which might only be available in Windows Server; I haven't looked into it because I'm just running a bunch of NAS SATA drives). But, similar to dave_the_nerd, I see awesome performance to and from my server with gigabit Ethernet. My media players (WDTV, et al.) all read from it flawlessly, as well.

In my opinion, though, keeping your storage setup as simple as possible is the way to go. I used to partition my drives and run RAID setups and all kinds of stuff; that is just so much of a waste of time that it's not even worth thinking about anymore. Just get a good SSD for a boot drive, and a nice, big SATA drive for storage. Done.
 