Storage area network advice

zak2011

Junior Member
Jul 18, 2012
1
0
0
Hi,

I need some advice on storage area networks. We are thinking of purchasing the EMC VNX 5300e model. It comes with SSD and SAS drives. What are your reviews of this model? Do I need to be concerned about the type of SSD and SAS drives I am going to get, since it may affect performance? We have a virtualized environment. Are there any important things to consider when purchasing the SAN?
Any suggestions are welcome!
Thanks,
 
Last edited:

Mungla

Senior member
Dec 23, 2000
843
0
71
Just a suggestion: I would check out your options if you haven't already. The SAN market is highly competitive, like everything else. I went through the decision process last year when purchasing two large SAN clusters for the company I work for. I decided on the NetApp FAS3210. We use SnapMirror and replicate our VMware datastores from one filer to the other for disaster recovery. The good thing about NetApp is that you don't have to spend months in classes to learn enough to administer one. They are very easy to work with in comparison to EMC, for example.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
EMCs are just as easy to administer now, probably as easy as, if not easier than, the current NetApp lineup. As far as drives go, the EMC VNX 5500 that my previous company purchased had Intel SSDs and, I believe, WD HDDs.
 

Ray2012

Junior Member
Aug 1, 2012
1
0
0
I agree with Mungla that you should be looking at the NetApp systems. NetApp is the storage leader in virtualized environments and also supports SSD, SAS, and SATA shelves on its systems.

As for what is important when considering a purchase: I would look at ease of use and integration with your management consoles, growth potential (and who can help you keep a smaller footprint while keeping you best protected), and who can be the most agile for you as your workload and environment change and grow with your company.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
It's worth considering all available options in this realm because, as has been said, it's quite competitive. Also, since no reason has been given for why SAN and not NAS, I am including some NAS suggestions in the meantime.

I'd look at Oracle directly:
http://www.oracle.com/us/corporate/press/1505421

I'd also consider OpenIndiana-based solutions that also use ZFS, like Aberdeen's AberSAN racks, which scale to a petabyte, and Racktop's Brickstor or vBox.

There's another company floating around in my head that uses ZFS with a proprietary dedup scheme instead of ZFS's dedup, plus a pile of SSDs and HDDs for incredibly good performance. But I'm spacing out on the name at the moment and browser history isn't helping.

Not a SAN, but in the realm of very big storage there is the RHEL Storage Appliance (a.k.a. GlusterFS), which has many features: distribution, replication, striping, and remote asynchronous replication.
 

Cr0nJ0b

Golden Member
Apr 13, 2004
1,141
29
91
meettomy.site
I agree with Mungla that you should be looking at the NetApp systems. NetApp is the storage leader in virtualized environments and also supports SSD, SAS, and SATA shelves on its systems.

This is an interesting point; can you back it up with supporting data? IDC numbers? I think that EMC has a strong market-share lead.

http://wikibon.org/wiki/v/Wikibon_User_Survey:_EMC_and_NetApp_Dominate_VMware_Storage

http://wikibon.org/wiki/v/Projecting_VMware_Storage_Market_Shares:_EMC_Continues_to_Lead

Also, as an FYI, EMC owns VMware, and the COO from EMC was just appointed as the CEO of VMware. If you are looking for VMware integration, I would say EMC gets the nod.

Usability is a bit of a personal preference, but I would say look at both of them. I think you will find them similarly easy to set up and use.

Lastly, on the disk question, it shouldn't really matter all that much which drive OEM the manufacturer uses. EMC uses SLC drives, NOT MLC, for SAN, which are a bit faster and more reliable, but you will have a 3-year support contract, so replacements are free. EMC is, I think, the largest seller of integrated disk drives in the world (I'm not 100% sure on this point), but everything is tested for a very long time and validated before it goes to the customer. The product in general is excellent, and pricing should be similar to the competitors'.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
I agree with Mungla that you should be looking at the NetApp systems. NetApp is the storage leader in virtualized environments and also supports SSD, SAS, and SATA shelves on its systems.

As for what is important when considering a purchase: I would look at ease of use and integration with your management consoles, growth potential (and who can help you keep a smaller footprint while keeping you best protected), and who can be the most agile for you as your workload and environment change and grow with your company.

Great argument from a first-time poster; it's not like EMC and VMware are the same company or anything.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
Dell EqualLogic is another option to look at; they are very cheap compared to EMC and the other products in that range.

I would avoid SSDs as well; in an enterprise-critical environment, the last thing you want to worry about is trying to reduce the number of writes. Because SSDs don't fail randomly like HDDs but fail based on writes, they are also more likely to all fail around the same time.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
Dell EqualLogic is another option to look at; they are very cheap compared to EMC and the other products in that range.

I would avoid SSDs as well; in an enterprise-critical environment, the last thing you want to worry about is trying to reduce the number of writes. Because SSDs don't fail randomly like HDDs but fail based on writes, they are also more likely to all fail around the same time.

It's going to be very hard to avoid SSDs, as they are integral to EMC and NetApp SANs at this point. Any quality system will have phone-home, and if you are paying for maintenance you don't have to worry about SSDs.
 

Cr0nJ0b

Golden Member
Apr 13, 2004
1,141
29
91
meettomy.site
I would avoid SSDs as well; in an enterprise-critical environment, the last thing you want to worry about is trying to reduce the number of writes. Because SSDs don't fail randomly like HDDs but fail based on writes, they are also more likely to all fail around the same time.

I reject this statement completely. Having two drives fail in the time it takes to rebuild from a spare is highly unlikely. To say that SSDs are substantially different in this respect from HDDs would need some serious math and factual evidence. You are right that SSDs fail differently, just as SATA and FC drives fail differently, but that in no way means they would fail at the same time.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
I reject this statement completely. Having two drives fail in the time it takes to rebuild from a spare is highly unlikely. To say that SSDs are substantially different in this respect from HDDs would need some serious math and factual evidence. You are right that SSDs fail differently, just as SATA and FC drives fail differently, but that in no way means they would fail at the same time.

SSDs fail based on the number of writes. Typically, drives in a SAN will be written to more or less equally. Imagine several sheets of grid paper and writing your sentences by spreading them across all the sheets, one letter per sheet: you will reach the end of all the sheets at more or less the same time. This is basically how SSDs work, except that the squares can be erased and rewritten a certain number of times... but once that number is reached you start running out of squares, and eventually the drive becomes unusable. With HDDs, the squares have unlimited writes, but the drive has a chance of failing at a random time. The odds of two failing at the same time are much slimmer than with SSDs.

SSDs are great as an OS drive in a non-mission-critical environment, but I would never use them to store critical data. As an OS drive they last many years, since the OS drive should not get much I/O, but for actual mass data storage such as VMs, spindle drives are better.

It's not worth sacrificing reliability to get performance.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
SSDs fail based on the number of writes. Typically, drives in a SAN will be written to more or less equally. Imagine several sheets of grid paper and writing your sentences by spreading them across all the sheets, one letter per sheet: you will reach the end of all the sheets at more or less the same time. This is basically how SSDs work, except that the squares can be erased and rewritten a certain number of times... but once that number is reached you start running out of squares, and eventually the drive becomes unusable. With HDDs, the squares have unlimited writes, but the drive has a chance of failing at a random time. The odds of two failing at the same time are much slimmer than with SSDs.

SSDs are great as an OS drive in a non-mission-critical environment, but I would never use them to store critical data. As an OS drive they last many years, since the OS drive should not get much I/O, but for actual mass data storage such as VMs, spindle drives are better.

It's not worth sacrificing reliability to get performance.

Do you have actual SAN experience? You are not going to get away from SSD with the major players. It's fundamental to many of the newer technologies.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
Year old article on the subject, Investigation: Is Your SSD More Reliable Than A Hard Drive?


SSDs fail based on the number of writes.

FTA: none of the failures have to do with write exhaustion

and

failed without any early warning from SMART

There's some evidence this may be firmware related, due to the much higher number of firmware revisions in SSD compared to HDD, and different rates and kinds of failures across identical base hardware, but different OEM modified firmwares.

The odds of two failing at the same time is much more slim than with SSDs.
I'd like to see the data that demonstrates this. I think your premises are wrong therefore your conclusions are wrong.

SSDs are great for an OS drive in a non mission critical environment, but I would never use it to store actual data that is critical. As an OS drive they last many years as the OS drive should not get much I/O, but for actual mass data storage such as VMs, spindle drives are better.

I don't see the presently available data supporting these contentions.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
Do you have actual SAN experience? You are not going to get away from SSD with the major players. It's fundamental to many of the newer technologies.

I've deployed several. SSDs were always something we avoided because of the write limit and the fact that they were too new, but mostly the write limit. You don't want to worry about that in a mission-critical environment. Maybe things have changed since then; it's been about 3 years. In fact, normally we went with SATA: SAS for the speed-critical stuff like DB/email VMs, but SATA ran 60+ VMs fine for us.

And how can you end up with SSDs even if you don't want them? You simply ask to go with SAS or SATA spindle drives and not SSDs. If they refuse, you move on to the next company. If a company won't sell you what you want, then you don't buy from them. When talking about a $100,000+ product, the company had better be ready to deliver what you want, since otherwise they're going to lose a pretty big sale.

Imagine having to tell your users to avoid saving their documents too often because it adds wear and tear on the SAN: "Only save at the end of the day!"
 
Last edited:

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
Year old article on the subject, Investigation: Is Your SSD More Reliable Than A Hard Drive?




FTA: none of the failures have to do with write exhaustion

and

failed without any early warning from SMART

There's some evidence this may be firmware related, due to the much higher number of firmware revisions in SSD compared to HDD, and different rates and kinds of failures across identical base hardware, but different OEM modified firmwares.


I'd like to see the data that demonstrates this. I think your premises are wrong therefore your conclusions are wrong.



I don't see the presently available data supporting these contentions.

There is no need for any data. Read up on SSDs and how they have limited writes. You can only write to them so many times before they fail; each sector supports a certain number of writes. If you have two identical drives and write to them equally, it only makes sense that they would both fail around the same time, without warning. There may be some slight offset, as one drive may manage to write to a sector a couple hundred times more than the other, but when talking about high-I/O workloads where sectors are rewritten multiple times a second, it does not really matter much. Once the first drive fails, the other 23 are more than likely on the verge of failing too. Of course, like any electronics, SSDs can fail randomly as well, but the big thing with SSDs is that they are GUARANTEED to fail after a SET AMOUNT of writes. So common sense says there is a decent chance of having multiple fail at once when they are equally used.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
I've deployed several. SSDs were always something we avoided because of the write limit and the fact that they were too new, but mostly the write limit. You don't want to worry about that in a mission-critical environment. Maybe things have changed since then; it's been about 3 years. In fact, normally we went with SATA: SAS for the speed-critical stuff like DB/email VMs, but SATA ran 60+ VMs fine for us.

And how can you end up with SSDs even if you don't want them? You simply ask to go with SAS or SATA spindle drives and not SSDs. If they refuse, you move on to the next company. If a company won't sell you what you want, then you don't buy from them. When talking about a $100,000+ product, the company had better be ready to deliver what you want, since otherwise they're going to lose a pretty big sale.

Imagine having to tell your users to avoid saving their documents too often because it adds wear and tear on the SAN: "Only save at the end of the day!"

Things have changed in 3 years. The last two installations I have been involved in ($800k and $1MM) both used SSD and SAS drives. Go look at any of the higher-end EMC or NetApp solutions, and SSDs are used not only for storage but also for caching.

Two articles showing a push towards SSD by EMC:
http://searchsolidstatestorage.tech...rld-2012-EMC-SSD-strategy-is-flash-everywhere
http://www.computerworld.com/s/article/9221061/EMC_unveils_all_SSD_VNX_high_bandwidth_arrays

Read the data sheet and you will see that SSD is an important part of the current VNX line:
http://www.emc.com/collateral/hardware/data-sheets/h8520-vnx-family-ds.pdf
 
Last edited:

murphyc

Senior member
Apr 7, 2012
235
0
0
There is no need for any data.

If you're going to make assertions, they should be backed by cited research.

Read up on SSDs and how they have limited writes.

You have provided no data indicating the write limitation is meaningful. I have provided data indicating that failures are not write related. If you have contradicting data, supply it.

You can only write to them so many times before they fail. Each sector has a certain number of writes that it supports.

Yes, you are now just repeating what you've already written. Please provide references showing that the write limits are meaningful.

when talking about high I/O operations where sectors are rewritten over and over multiple times a second it does not really matter much

It's called wear leveling. File systems do not have direct access to physical sectors on an SSD. The LBA-to-physical-sector map is entirely up to the SSD firmware, and it changes constantly and quickly, whereas on an HDD remapping pretty much never happens unless the firmware detects persistent write failures on a sector.
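If it helps the discussion, here's a toy illustration of wear leveling in plain Python. This is not any vendor's actual firmware, just a minimal sketch of why hammering the same few logical addresses still spreads erases evenly across the physical flash blocks:

Code:
class ToyWearLevel:
    """Toy flash translation layer: every write of a logical block (LBA)
    goes to the least-erased free physical block. Purely illustrative."""

    def __init__(self, physical_blocks):
        self.erase_count = [0] * physical_blocks
        self.free = list(range(physical_blocks))  # free physical blocks
        self.mapping = {}                         # logical -> physical

    def write(self, lba):
        # pick the free physical block with the fewest erases so far
        new = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(new)
        old = self.mapping.get(lba)
        if old is not None:
            self.erase_count[old] += 1   # invalidating the old copy costs a P/E cycle
            self.free.append(old)
        self.mapping[lba] = new

ftl = ToyWearLevel(physical_blocks=8)
for i in range(1000):
    ftl.write(lba=i % 4)      # hammer the same 4 logical addresses over and over
print(ftl.erase_count)        # erases end up roughly even across all 8 blocks

The file system only ever sees the 4 LBAs; which physical cells take the wear is entirely the firmware's call.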

So common sense says there is a decent chance of having multiple fail at once when they are equally used.

I do not accept the described logic. The premises are flawed, therefore the conclusions are almost certainly flawed as well - and don't agree with the available data.

So if you have contradicting data, please reference it. Don't say data doesn't matter, that's just absurd.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
SSDs are still too new; give it 5-10 years, until there are more cases of write limitations being hit, and the data will be there. Right now, one of the other issues with them being so new is that there are still firmware and other issues that cause them to fail randomly, and that makes people forget that even without those issues they are destined for a guaranteed failure. For home users, SSDs will most likely not fail before they are retired in favour of something better, but in an enterprise where hundreds of GB are written to each disk every day? I would not do it. If it's for cache or something, that's another story, as long as the SAN can continue to run even if the cache fails. Having to replace a couple of non-critical drives every couple of years is not a huge deal compared to having your whole SAN go read-only on you and having your IT manager expect you to "make it work NOW".

Also:

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

Keep in mind they are basing this on 10GB/day which is what a typical desktop PC at home will do when it's mostly idling all day with a few hours of gaming or web surfing. An enterprise class setup will do that in less than an hour just from shuffling data around, VMs writing to their virtual disks, and of course actual file transfers within the VMs which translate to writes to the SAN.

While reading up, I also noted that using SMART you can actually tell how much life is left in an SSD with the SSD_LIFE_LEFT attribute: it's a percentage, with 100 meaning full life left and 0 meaning it's basically dead. So I guess in an enterprise environment you could just set up a monitoring solution to watch each disk, and when one gets close to 20 you could start replacing drives one at a time and rebuilding the array. I would imagine an array rebuild with SSDs would be quick, so this could probably be done in a day or so. I suppose that is something that is feasible for a business that only has one SAN.
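Something like this would do as a rough starting point, using smartmontools. The wear attribute name varies by vendor (SSD_Life_Left on some drives, Media_Wearout_Indicator or Wear_Leveling_Count on others), and the device list and threshold here are made up for illustration, so treat it as a sketch rather than a drop-in script:

Code:
import subprocess

# Wear attribute name differs by vendor; adjust for your drives.
WEAR_ATTRS = ("SSD_Life_Left", "Media_Wearout_Indicator", "Wear_Leveling_Count")
THRESHOLD = 20  # normalized value below which we start planning replacements

def wear_remaining(device):
    """Return the normalized wear value (roughly 100 = new, 0 = worn out)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) > 3 and fields[1] in WEAR_ATTRS:
            return int(fields[3])   # VALUE column of the attribute table
    return None

for dev in ("/dev/sda", "/dev/sdb"):      # hypothetical device list; needs root
    life = wear_remaining(dev)
    if life is not None and life < THRESHOLD:
        print(f"{dev}: wear indicator at {life}, schedule a replacement")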
 

murphyc

Senior member
Apr 7, 2012
235
0
0
SSDs are still too new...

Pure conjecture. Do you think companies warrantying SSDs for 5 years for enterprise applications are crazy?

Keep in mind they are basing this on 10GB/day

You didn't read the entire article...

While reading up I also noted that using SMART you can actually tell how much life is left in a SSD with the SSD_LIFE_LEFT attribute.

And it's unclear at best to what degree this correlates to failures.

I'm definitely getting a margarita now...
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
SSDs are still too new; give it 5-10 years, until there are more cases of write limitations being hit, and the data will be there. Right now, one of the other issues with them being so new is that there are still firmware and other issues that cause them to fail randomly, and that makes people forget that even without those issues they are destined for a guaranteed failure. For home users, SSDs will most likely not fail before they are retired in favour of something better, but in an enterprise where hundreds of GB are written to each disk every day? I would not do it. If it's for cache or something, that's another story, as long as the SAN can continue to run even if the cache fails. Having to replace a couple of non-critical drives every couple of years is not a huge deal compared to having your whole SAN go read-only on you and having your IT manager expect you to "make it work NOW".

Also:

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

Keep in mind they are basing this on 10GB/day which is what a typical desktop PC at home will do when it's mostly idling all day with a few hours of gaming or web surfing. An enterprise class setup will do that in less than an hour just from shuffling data around, VMs writing to their virtual disks, and of course actual file transfers within the VMs which translate to writes to the SAN.

While reading up, I also noted that using SMART you can actually tell how much life is left in an SSD with the SSD_LIFE_LEFT attribute: it's a percentage, with 100 meaning full life left and 0 meaning it's basically dead. So I guess in an enterprise environment you could just set up a monitoring solution to watch each disk, and when one gets close to 20 you could start replacing drives one at a time and rebuilding the array. I would imagine an array rebuild with SSDs would be quick, so this could probably be done in a day or so. I suppose that is something that is feasible for a business that only has one SAN.

Just the fact that you talk about setting up monitoring in an enterprise environment makes me wonder if you've been around a modern SAN. They all have monitoring and call-home out of the box; EMC or NetApp will have a new SSD on your doorstep before you even know what's wrong. Secondly, it takes little to no time to rebuild an SSD array, and they always carry at least one hot spare, if not two. Hell, I can rebuild a 2TB array of 300GB 15k SAS6 drives in RAID 5 in a short time.
 

Red Squirrel

No Lifer
May 24, 2003
67,898
12,365
126
www.anyf.ca
I was referring to pre-failure monitoring. Yes, most SANs will alert you when a drive, ESM controller, or other part fails, but in the case of SSDs you want to know before that, since one failure means they are all pretty much worn out and all need to be replaced, or more specifically, all the drives in a given LUN, as each LUN may have different usage levels depending on how it's set up. So it's better to know ahead of time so you can start replacing them before the shit hits the fan.
 

jolancer

Senior member
Sep 6, 2004
469
0
0
First, I'm not an IT/network guy or anything... just a normal person with a random interest in technology; that's why I read this thread. So my comment isn't based on any bias or experience, but...

Just IMO, I think you guys are taking each other's comments a little too uncompromisingly (there's probably a better way to say it, but IDK ATM)... I think everyone had something good to contribute, especially given the differences in experience; more or less, it doesn't matter, it's always a different perspective.

That said, I also think, going by the topic and the posts, it sounds like SSDs are being implemented and the systems are developed enough to make them workable in a SAN, and this will only get better as SSDs improve as well... I also think Red Squirrel's logic is sound: are there applications SSDs are currently good for? Yes. Are there better alternatives for all practical purposes, depending on the application and especially on the client? I think everyone can agree on that.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I would look here:

http://www.anandtech.com/show/5518/a-look-at-enterprise-performance-of-intel-ssds

There they write 114GB a day to a consumer Intel 320 120GB with only 8GB of spare pool (which no enterprise drive would ever have). Even with 114GB written a day, life is estimated at 1.44 years. If a cheap consumer drive can take that abuse, I am fairly confident that enterprise SSDs can make 5 years.

SSD life is "strange" to measure because it is entirely based on writes and spare area (lesser degree.) So taking that SSD and bumping it up to 240GB -> 2 x 1.44 years -> 2.88 years expected life.

Also, in most cases SSDs are being used as cache, so the write rate varies even more; i.e., an incoming write often goes to RAM -> magnetics. If the data is actually read again, it might be read back and written to the SSD for faster recall based on the expected performance profile. The profiles also greatly affect SSD life. If you set all clusters to "read once -> cache to SSD", SSD life will fall because of all the cache writes that read traffic generates. If you set it to something a bit more sane like "3 cluster accesses in 20 seconds", which means it was more than just a random one-off lookup, you reduce the writes and improve the life.
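A toy version of that kind of admission rule, using the same "3 accesses in 20 seconds" numbers from the paragraph above (the class name and interface are made up for illustration, not any array's actual policy engine):

Code:
import time
from collections import defaultdict, deque

class AdmitOnRepeatAccess:
    """Promote a block to the SSD cache only after it has been accessed
    n times within window seconds; one-off reads stay on spinning disk."""

    def __init__(self, n=3, window=20.0):
        self.n, self.window = n, window
        self.history = defaultdict(deque)   # block id -> recent access times

    def should_cache(self, block, now=None):
        now = time.monotonic() if now is None else now
        hist = self.history[block]
        hist.append(now)
        while hist and now - hist[0] > self.window:
            hist.popleft()                   # drop accesses outside the window
        return len(hist) >= self.n

policy = AdmitOnRepeatAccess()
print(policy.should_cache("blk42", now=0.0))   # False - first touch
print(policy.should_cache("blk42", now=5.0))   # False - second touch
print(policy.should_cache("blk42", now=9.0))   # True  - third touch inside 20 s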

Furthermore, enterprise SSDs generally have better expected lives than that Intel SSD: SLC, large spare areas, and cluster compression all play into the life. As an extra bonus, the major players are constantly polling the drive for what it thinks its remaining life is, along with general health data (just like the magnetics), and if there is an issue, NetApp and the like will have a new drive in your hands before the warning light appears on the front of the unit...
 

Cr0nJ0b

Golden Member
Apr 13, 2004
1,141
29
91
meettomy.site
Reading all of this, people are really caught up in a debate that has already been settled. SSDs are here, and they are safe for enterprise storage needs; I don't think that point can be debated. The reason I say this is that they are used by nearly every storage manufacturer for almost any application.

The question brought up is about how they fail and what risk might be introduced by their failure scenarios. In a general sense, that was the same issue that came with the transition from enterprise SCSI and FC drives to IDE/SATA technology. SATA drives in general fail less predictably than an EFC (enterprise Fibre Channel) drive. I don't have any test data to support this, but I've talked to very seasoned storage and disk engineers and they have all confirmed it. The issue is that with larger drives and longer rebuild times, you have more potential for a double disk failure, because drives wouldn't have been pre-failed out early like they are with EFC drives. To shore up this gap, they introduced RAID 6 and other added-parity RAID schemes that can withstand two simultaneous failures.

In the world of SSDs you have much smaller data sets, so much less data needs to be rebuilt; in addition, the drives are much faster, so rebuild times are really short. That means you have a smaller window of time for that second drive to fail in, and thus a lower probability that you will have two drives fail in that exact window. I can't do all of the math, but the probability drops off pretty fast as the rebuild times shrink.
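The math being gestured at looks roughly like this, under the crude assumption of independent, exponentially distributed failures; the MTBF and rebuild times below are made-up round numbers, just to show how the window size drives the probability:

Code:
import math

def p_second_failure(surviving_drives, mtbf_hours, rebuild_hours):
    """Probability that at least one surviving drive in the RAID group fails
    during the rebuild window. Crude independence/exponential model."""
    per_drive = 1 - math.exp(-rebuild_hours / mtbf_hours)
    return 1 - (1 - per_drive) ** surviving_drives

# 7 surviving drives, 1,000,000-hour MTBF (made-up round number):
print(p_second_failure(7, 1_000_000, rebuild_hours=24))  # ~1.7e-4, slow HDD rebuild
print(p_second_failure(7, 1_000_000, rebuild_hours=1))   # ~7e-6, fast SSD rebuild

Same drives, same group size; shrinking the rebuild window by a factor of 24 shrinks the double-failure risk by roughly the same factor.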

The point of an earlier post was... I think... that SSDs only live for 100,000 write cycles or whatever, and if you put two of them in at the same time, they would hit 100,000 at about the same time. But this misses two points: 1) drives aren't necessarily doing the exact same amount of I/O, and 2) that 100,000 number isn't an exact number... it's an expected number. Some might get 150,000, some might get 90,000; it depends on the drive. That said, the likelihood of a double-disk failure causing data loss is remote, and no larger with SSDs than with other storage types.

Lastly, with enterprise solutions you are covered by the manufacturer's warranty, so if or when a disk fails in, say, 1.4 years, it will be replaced with a new one and off you go. That's why you buy integrated systems from good vendors: they will build reliable systems for you and support them for their useful life.
 