SSD file server for work?

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
Our work file server has 3 or 4 1TB drives in RAID 5. We only use about 350-400GB of the array, and while I haven't benchmarked it, it feels extraordinarily slow. As in, 30 minutes to copy a 4GB file from the desktop of the server itself to the array.

Right now we're on a 100Mbit LAN, and we'll be moving to gigabit in a couple of months. I've been thinking an SSD array might be a good bit faster for us, but I don't know enough about it to be sure, or to know the potential pitfalls, so I have a few questions.


We have 22-25 users, and the server holds mostly AutoCAD .dwgs and related documents, with ~10-40GB worth of files read/written a day.

1: Does this even sound feasible? 4x 256GB drives for a ~700GB array should be about $800 for consumer drives.

2: Would enterprise drives be needed? Or could I just use consumer drives with an extra spare (still cheaper) to make up the difference?

3: Is RAID 5 even the best way to go? We don't really need more than ~500GB; would 4x 256GB drives in RAID 10 be better? I want speed, but also the ability to survive at least one, preferably two, drive failures.

4: Would a hardware RAID card be needed/worth it? I imagine probably so, but I know even less about those. What's a good one?

5: SSD R/W reliability - what would be a realistic failure rate for SSDs in service like this? I'm sure it'd be higher than for disk drives, but I've no idea how much higher. Lose a drive once a year, maybe? Every 6 months? 18 months? They're cheap enough now that as long as it's not a drive every month or two I'm not worried about the cost of replacement, but I'd want to avoid server downtime at all costs, even if that means doubling up on parity.

6: Any important features I should look out for in SSDs here? I don't know much beyond that they're fast as hell and TRIM is good.

7: Anything I'm missing, or alternative suggestions?


TIA
 

GlacierFreeze

Golden Member
May 23, 2005
1,125
1
0
1: I'd consider getting enterprise. I'm sure consumer would be more than fine, though. Just don't get the cheapest you see; lean towards the more reliable for sure.

2: Up to you. Could go either way. Since it's for work, I'm sure most would recommend enterprise. Not a bad idea really.

3: I'd lean RAID 5. The more free space you have, the longer the drives will last, write-endurance-wise. And if the server is only seeing 10-40GB of writes per day spread across 4 drives, endurance should not be a concern; they'll last until that server gets thrown out for a completely new one. Plus, extra free space is always good in case the company sees data growth in the near future.

4: For work, yeah, better get one. I'll defer to others about brand; I think a good one is about $300.

5: The only thing to really worry about these days is the controller crapping out. That's just one of those rare things that can kill an SSD. With quality SSDs it's not something to really worry about, but it can happen.

6: Defer

7: Defer


With all that said, you will still need to occasionally make a backup of the server in case multiple disks fail. Not likely to happen, but as everyone will tell you, RAID is not a backup solution. RAID + backup = doing it right. lol

And SSD storage server + upgrading to Gig network will be pretty sweet.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
You said 40GB of R/W. The reads don't matter; is there a way you can measure the total writes to the array over a typical week? Knowing this figure will help you enormously in choosing between a quality but cheaper consumer SSD, or knowing that the write workload means you'll have to look at enterprise drives, which depending on your budget may blow it.

If you can obtain this figure, you can also estimate the SSD lifespan of any given drive.
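
For a rough sense of that math, here's a minimal sketch in Python. The endurance rating and write-amplification factor are assumptions; substitute the vendor's rated TBW (terabytes written) for whatever drive you're actually considering.

```python
# Rough SSD lifespan estimate from a measured daily write load.
# ENDURANCE_TBW and WRITE_AMP are assumptions -- check the vendor's
# rated terabytes-written figure for the actual drive.

ENDURANCE_TBW = 70        # assumed rating for a ~250GB consumer drive
WRITE_AMP = 2.0           # assumed; parity RAID and small writes inflate this

def years_of_life(daily_writes_gb: float) -> float:
    """Years until the rated endurance is consumed at this write rate."""
    effective_tb_per_day = daily_writes_gb * WRITE_AMP / 1000
    return ENDURANCE_TBW / effective_tb_per_day / 365

# At the OP's high-end estimate of 40GB/day:
print(f"{years_of_life(40):.1f} years")   # ~2.4 years under these assumptions
```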
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Just bear in mind a gigabit network is only going to offer you about 100MB/s. You could get a server with two bonded network ports and see 200MB/s total, but that is about it with a gigabit LAN. A hard drive array can easily saturate that network. Over the LAN a lot of the advantage of an SSD is lost due to the protocols in place; if you benchmark an SSD in another machine over file sharing, it does pretty poorly on small files. With lots of parallel small-file usage you may very well get significantly better utilisation of the network on SSDs, but bear all this in mind: SSDs might not perform any better than hard drives, depending on how big your files are and how fast the network ends up being.
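
To put numbers on that, a minimal back-of-envelope sketch; the 0.85 overhead factor is an assumption for TCP/IP and SMB framing losses.

```python
# Back-of-envelope usable throughput for a LAN link.
# The 0.85 overhead factor is an assumption (TCP/IP + SMB framing).

def usable_mb_per_s(link_mbit: float, ports: int = 1, overhead: float = 0.85) -> float:
    """Approximate real-world file-transfer rate in MB/s."""
    return link_mbit * ports / 8 * overhead

print(usable_mb_per_s(100))       # 100Mbit LAN    -> ~10.6 MB/s
print(usable_mb_per_s(1000))      # gigabit        -> ~106 MB/s
print(usable_mb_per_s(1000, 2))   # 2 bonded ports -> ~212 MB/s
```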
 

yhelothar

Lifer
Dec 11, 2002
18,408
39
91
If it's taking you 30 minutes to transfer 4GB, it's definitely not the hard drives that are the bottleneck.

That works out to about 2.2MB/s.

Your RAID 5 array is definitely way faster than that. Your network is the bottleneck here. Even with a gigabit LAN, you won't bottleneck your hard drives.
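
A quick sanity check on that figure:

```python
# 4GB in 30 minutes, expressed as MB/s.
size_mb = 4 * 1000        # 4GB in MB (decimal)
seconds = 30 * 60         # 30 minutes
print(f"{size_mb / seconds:.1f} MB/s")   # ~2.2 MB/s, below even 100Mbit LAN speeds
```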
 

biostud

Lifer
Feb 27, 2003
18,398
4,963
136
Wait until you have a gigabit LAN and see if the problem persists; the 100Mbit network is definitely the bottleneck atm.
 

Carson Dyle

Diamond Member
Jul 2, 2012
8,174
524
126
Is the desktop folder also located on the array? That might explain the problem. Reading and writing large files at the same time to/from a single disk or array can thrash badly.

If you don't know whether the server has "3 or 4" drives in its RAID array, I'm guessing you're not actually the administrator of the server.
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
1/2/4 - We're a small company, only about 25 users, so enterprise drives would be close to breaking the bank. But if, say, I can get a 5-disk array (I'm thinking Intel 335 240GB) with two parity drives for extra safety, giving a ~650GB array, plus a ~$300 controller, for maybe $1500-2000 total, that'd probably be pretty easy to sell to my boss, and a big ROI even if the speed gain isn't all that big.

We have nightly backups to IDrive and a weekly Windows backup to a local external drive, so I'm not worried about data loss, just about downtime on the server (and an extra $250 for an extra parity drive is worth it even if it's never really needed).



Regarding the R/W, that's just a rough ballpark figure based on the types and sizes of files we work with. Anyone know of a tracking tool I can put on the server to get a better idea of activity? I'm probably off by quite a bit (I left out all the PST file activity, which I know is a lot of GB, but in single files).
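
One low-effort option if no dedicated tool turns up: sample the OS disk counters yourself. A minimal sketch using the third-party psutil package (my choice of library, not a specific recommendation from the thread); run it on the server itself.

```python
# Sample total disk reads/writes over a fixed window on the server.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

start = psutil.disk_io_counters()
time.sleep(24 * 60 * 60)          # measurement window: one day
end = psutil.disk_io_counters()

written_gb = (end.write_bytes - start.write_bytes) / 1e9
read_gb = (end.read_bytes - start.read_bytes) / 1e9
print(f"writes: {written_gb:.1f} GB/day, reads: {read_gb:.1f} GB/day")
```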


We recently got an office-wide IP-based phone system running off a PoE switch (which is why we're currently on 100Mbit). We'll be moving to a 48-port gigabit PoE switch probably within a few weeks though, so I'll definitely wait to see how performance changes with that upgrade.
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
I assumed the OS was on a separate drive, but you're right, it's on the array. Just checked, and it is indeed just 3 drives. That makes more sense now, as it only took about 5 minutes to copy from the flash drive...

I am the "administrator", but not a proper one. Essentially I'm a hobbyist who learned everything I know about computers from Anandtech, for the most part. So, I know plenty about building our workstations, but my knowledge of actual network admin stuff is pretty low. That puts me way ahead of anyone else here though, so I'm it here. The server was purchased/set up by an IT company that handles the stuff I can't. If we do wind up moving to an SSD server, it'll be them actually setting it up and moving everything over.
 

Demo24

Diamond Member
Aug 5, 2004
8,357
9
81
Honestly, the only way I can see it being worth it is if you need to deal with lots of small files being randomly accessed. But it seems you're mostly just transferring large files from the computers to the server every so often. I wouldn't even bother with SSDs, because it's going to be very expensive to get a proper and reliable setup, and even then you'll more than saturate a gigabit network with it. I'd go for some fast mechanical drives, probably 2.5" 10K RPM units, for a decent balance of speed, reliability, and cost. If the box is set up right you'll still be saturating that network.

I would agree with the above though; currently I think there's a problem somewhere in the network, as I've seen much faster speeds over the 100Mbit networks scattered around my locations.
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
We're working with AutoCAD. Some of the time we'll be accessing just one <10MB file, but a lot of the time we'll have between 10 and 200 files loaded, from <10MB total to >200MB total.

But from what I'm hearing, if 4-5 SSDs can handle it reliably, even if the gain is minimal, it'll pay for itself pretty quickly. And I'd think even on the low end at least a small gain would be had
 

Ayah

Platinum Member
Jan 1, 2006
2,512
1
81
I'd go RAID10 over any parity raid for SSDs for random access, but that's just me. I'd add a hot spare if you're really paranoid about uptime.

You're not writing that much, so consumer drives should be fine. If you were burning through more than a quarter of the accessible pool's worth of data per day, then I'd be worried about wearing out a decent SSD.
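
Checking that rule of thumb against the numbers in this thread (assuming 4x 256GB in RAID 10, per the suggestion above):

```python
# Quarter-of-the-pool-per-day rule of thumb vs. the OP's workload.
usable_gb = 4 * 256 / 2              # RAID 10 halves raw capacity -> 512GB
threshold_gb_per_day = 0.25 * usable_gb
actual_gb_per_day = 40               # OP's high-end daily write estimate

print(threshold_gb_per_day)                        # 128.0 GB/day
print(actual_gb_per_day < threshold_gb_per_day)    # True: comfortably under
```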

Bonded gigabit is definitely a good idea for cheap high-performance multi-access environments.
 

nanaki333

Diamond Member
Sep 14, 2002
3,772
13
81
Agreed. SSDs would be an overpriced waste.

Upgrade your infrastructure. There's no reason you shouldn't be getting at least ~80MB/s on large file transfers on a gigabit network. You're going to end up wasting thousands of dollars on an SSD setup for nothing (with enterprise drives).

My 3-year-old Intel NAS has 4x 1.5TB drives in RAID 5 and lacks jumbo frame support, and I can still hit 18MB/s. Its replacement is a Supermicro server with 8x 3TB drives that kills it at 85MB/s. Something is seriously wrong with your network, or whatever you use currently. You can get a cheap case, an Atom motherboard, and RAM for a couple hundred, throw some drives in it, and put FreeNAS on. Good to go!
 

Lorne

Senior member
Feb 5, 2001
874
1
76
You seriously need the gigabit network setup: ~100MB/s versus the ~12MB/s you get now.
No need for SSDs in RAID, as your network limit will be ~100MB/s, but you can do a bonded network setup as BrightCandle mentions.

You can keep the drives cheap, like 5 or 6 1-2TB green drives in RAID 5, and those will fill the network bandwidth easily.

Don't forget to put gigabit NICs in the workstations.
 

simpletron

Member
Oct 31, 2008
189
14
81
They are using AutoCAD, and they aren't transferring one large file. My last CAD project (different program, but they all follow similar patterns) was 734MB consisting of 2,529 files. Roughly 2/3 of the files are under 100KB, with numerous sub-20KB files. Just something to think about when transferring from the user's computer, which is probably a single 7200RPM hard drive.
 

Ayah

Platinum Member
Jan 1, 2006
2,512
1
81
I don't understand how people keep thinking multi-user AutoCAD = large sequential transfers.
If even half of those users access the array at the same time, the random I/O demand will more than likely slow down any spinning array.
But a network upgrade is essential to improved performance. It shouldn't be hard to do 10MB/s random, but you're never going to make it past 50.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
It's your network, not the array. Wait till you go gigabit, and I'll bet you still won't be maxing out the array.
 

Zxian

Senior member
May 26, 2011
579
0
0
I'll also throw in my suggestion that you upgrade the network first. Get some proper Intel gigabit NICs (these can be found for ~$35 each) for your workstations, and then a good switch to tie it all together. Different network admins will have their own preferences, but do your homework here as well; a good switch can make or break your network. Get a quality Layer 2 gigabit switch. Even if you don't use all the functionality now, you'll have access to it if/when you need it.

Start with that infrastructure upgrade. Then start benchmarking the storage array itself. If the array can't keep up at large queue depths, you might want to look at replacing it.


Network first, then storage. 100Mbps is so 2003.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
$450 - Intel 8-port (6Gb/s) RAID card
$1250 - 5x Samsung 840 Pro 256GB: 4 in RAID 10 for ~500GB capacity, plus 1 hot spare

~$1700 total, but with gigabit networking it should really fly. The Intel RAID controller supports ~32 drives according to the ARK page, so I'm guessing it works with SATA port multipliers. Either way, it seems reasonably priced for something that should last a while.

You don't put consumer desktop SSD in a business server.

Period.
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
I can only add to what everyone else is saying: upgrade to a hardened gigabit switch first. I doubt the SSDs are a good investment in this context. I would avoid that and instead maybe offer SSD upgrades on the desktops; that would give people more bang for the buck.
 

nk215

Senior member
Dec 4, 2008
403
2
81
I have a similar situation (different program, not AutoCAD). I use SSDs for both the data storage and a backup for one reason only: so that I can do a lunchtime backup of my data with minimal downtime. The end-of-day backup is not an issue.

My setup is pretty simple:

+ A server with 2 SSD arrays. The first stores the actual data; the second keeps a temporary backup.

+ An HDD backup array, also connected to the server.

I use Cobian Backup for my backups. At lunch, Cobian backs up the actual data (on SSD) to the temp area (also SSD). This takes very little time, so files aren't locked for long. Then Cobian copies the data from the temp area to the HDD array, which takes much longer. The HDD array stores 60 copies of the complete data set.
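
For anyone curious what that two-stage flow looks like, here's a minimal sketch of the same idea. This is not Cobian itself; the paths and retention count are hypothetical.

```python
# Two-stage backup: fast SSD-to-SSD snapshot, then a slow copy to HDD.
import shutil
import time
from pathlib import Path

DATA = Path(r"D:\data")        # live data on the SSD array (hypothetical path)
STAGE = Path(r"E:\stage")      # temp copy, also SSD: keeps the lock window short
ARCHIVE = Path(r"F:\backups")  # slow HDD array holding the history
KEEP = 60                      # retain the 60 most recent complete copies

# Stage 1: quick snapshot so live files are only busy briefly.
if STAGE.exists():
    shutil.rmtree(STAGE)
shutil.copytree(DATA, STAGE)

# Stage 2: slow copy from staging to a timestamped folder on the HDDs.
dest = ARCHIVE / time.strftime("%Y-%m-%d_%H%M")
shutil.copytree(STAGE, dest)

# Prune everything older than the newest KEEP copies
# (timestamped names sort chronologically).
for old in sorted(ARCHIVE.iterdir())[:-KEEP]:
    shutil.rmtree(old)
```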
 

nanaki333

Diamond Member
Sep 14, 2002
3,772
13
81
Make sure you enable 9K jumbo frames on all your PCs (and the NAS) if/when you upgrade to gigabit, so you maximize throughput. It's not enabled by default in Windows (not even Windows 8).
 