SANs Vs. Traditional File Servers

somecallmetim3

Junior Member
Sep 5, 2002
3
0
0
I am planning on creating a 2 Terabyte file server for my company, but I have also heard of this SAN technology, particularly the Adaptec iSCSI technology. My question is 3-fold:
1. What exactly are SAN and iSCSI?
2. Is it as dependable as a Linux box?
3. Would I be better off just building the original file server?

The file server I am building is a dual P4 Xeon with a Tyan i7500, 2 GB of DDR, a 3ware 12-port IDE RAID card,
and about twenty WD drives (120 GB each) running across a 1-gigabit copper connection.
I'll gladly take any advice you guys can give me. Thanks.
 

nightowl

Golden Member
Oct 12, 2000
1,935
0
0
First, I would not advise building the server yourself. By buying a server you gain a valuable support channel when something goes down. Also, how are you planning on hooking 20 WD drives up to the 3ware card? The card only supports 12 drives, according to the documentation. Even if it did support 2 drives per channel, you would not want that, because it would slow the overall performance of the drives. Are you planning on using some sort of RAID for redundancy, and what about a backup provision?
 

ScottMac

Moderator<br>Networking<br>Elite member
Mar 19, 2001
5,471
2
0
A SAN is basically a storage network ... a (usually) huge collection of physical drives that are managed under an executive application that makes them appear as one or more logical containers.

It is NOT a file server (by itself). It CAN be (and usually is) a single or multiple storage unit that one or (usually) many servers use as their storage resource. SANs typically range up to the tens of Terabytes.

Each logical storage device can be configured with multiple partitions, and the partitions can be configured as any flavor of RAID (or look like a single drive unit). A common advantage is that you can configure a partition to match some other partition, mirror the two (let 'em sync up the data), then freeze and break the mirror and back it up to tape ... so you have a current snapshot of the drive, plus you can back up the data without worrying about impacting the on-line storage.
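That split-mirror snapshot workflow can be sketched as a toy Python model (the class and method names here are made up purely for illustration, not any SAN vendor's actual API):

```python
import copy

class MirroredPartition:
    """Toy model of the split-mirror snapshot trick: sync, freeze, break."""
    def __init__(self, blocks):
        self.primary = blocks      # the live, on-line partition
        self.mirror = None         # the matching partition we sync to

    def sync(self):
        # mirror the two: copy every block so both sides match
        self.mirror = copy.deepcopy(self.primary)

    def freeze_and_break(self):
        # freeze I/O and break the mirror: the copy becomes a
        # point-in-time snapshot you can dump to tape at leisure
        snapshot, self.mirror = self.mirror, None
        return snapshot

live = MirroredPartition(["blockA", "blockB"])
live.sync()
snap = live.freeze_and_break()
live.primary.append("blockC")      # on-line storage keeps changing...
print(snap)                        # ...but the snapshot stays fixed
```

The point is that the tape backup reads from `snap` while `live.primary` keeps taking writes, so the backup never impacts the on-line storage.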

SANs usually talk amongst themselves and to the servers with Fibre Channel. Fibre Channel currently runs up to 2 Gbps, though 1 Gbps is probably more typical (older in-place stuff). There are switches, NICs, hard drives, and routers specifically for handling FC traffic.

One of the traditional headaches with SANs is that you have a sh!tload of data in one place, usually company/business-critical stuff. Most organizations using SANs would really like to have a live/hot standby somewhere else in case of fire/disaster at the primary data center. The problem is connecting enough bandwidth to accommodate that kind of data transfer to a place that's far enough away to make sure the same natural disaster doesn't get both centers.

Enter iSCSI (and Fibre Channel over IP). In a (very small) nutshell, iSCSI is SCSI over IP for the purpose of feeding remote storage using a (not-quite-yet) standard. Another group (I think it's another group) is working out the details for FC over IP to accomplish the same thing. It'll also afford some flexibility to the data center in terms of network infrastructure: instead of having both Ethernet and FC switching/routing, they can go back to just having Ethernet, and switch/route the FC over TCP/IP.
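The "SCSI over IP" idea can be illustrated with a toy Python sketch: pack a block-level command into bytes and ship it over an ordinary TCP-style socket. Note this is NOT the real iSCSI PDU layout (the toy header format below is invented for illustration); only the READ(10) opcode value is the genuine SCSI one.

```python
import socket
import struct

READ10 = 0x28  # the actual SCSI READ(10) opcode

def pack_command(opcode, lba, blocks):
    # toy header: 1-byte opcode, 4-byte logical block address,
    # 2-byte block count, big-endian (NOT real iSCSI framing)
    return struct.pack(">BIH", opcode, lba, blocks)

# a connected socket pair stands in for the IP network between
# the initiator (server) and the target (remote storage)
initiator, target = socket.socketpair()
initiator.sendall(pack_command(READ10, lba=1024, blocks=8))

opcode, lba, blocks = struct.unpack(">BIH", target.recv(7))
print(hex(opcode), lba, blocks)   # 0x28 1024 8
```

The storage box on the far end unpacks the command and services it against its local disks; the network in between is plain IP, which is exactly why the data center can drop the separate FC plant.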

SO, there ya go, standard disclaimer: some details omitted or bent for the sake of brevity. Google can be your friend.......


FWIW

Scott
 

somecallmetim3

Junior Member
Sep 5, 2002
3
0
0
Thanks for the responses. I'm sorry I didn't mention I am using 2 RAID cards, because RAID 10 cannot be done over one card.
I was planning on getting this server built by CalPC. I was looking more into the SAN configuration for the network, and I don't believe that my company wants to change their infrastructure that much, especially when we are on the brink of changing to all Linux boxes, so I believe I will go with the dual system. I just heard of the SAN technology and wanted to get a little information on my options. Thanks again.
 

SaigonK

Diamond Member
Aug 13, 2001
7,482
3
0
www.robertrivas.com
Maybe you should look into a Fibre Channel storage array... I just implemented a 1/2-terabyte unit from EMC that uses fibre cards... blah blah blah.
Works like a champ!
That 530 GB is RAID 5, by the way, so it is sweet to know my data is safe to some extent =)
 

JustinLerner

Senior member
Mar 15, 2002
425
0
0
Please read the CAUTION at the bottom about Linux compatibility!!!

ScottMac is your spec man. Like he said, iSCSI is a spec that enables SCSI commands over IP, thus allowing SAN over IP. As he said, a SAN is a type of storage network with independent storage systems connected to a central storage switch, usually by FC-AL (Fibre Channel Arbitrated Loop). Standard switches allow two devices to communicate independently of other devices' communication links. Fabric switches can handle around 16 million addresses, while standard FC-AL switches like those for your situation can handle addresses for up to 126 nodes. Fabric switches are for larger and the fastest SANs. (More expensive too.)
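Those two address counts fall straight out of the Fibre Channel addressing scheme, a quick arithmetic check:

```python
# A fabric port ID is 24 bits wide, which is where the
# "around 16 million addresses" figure comes from.
fabric_addresses = 2 ** 24
print(fabric_addresses)     # 16777216, i.e. roughly 16 million

# An arbitrated loop (FC-AL) uses one-byte AL_PA values, of
# which 126 are available to node ports on a single loop.
loop_nodes = 126
```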

I don't know if it's advisable to go with an ATA RAID solution. SCSI drives still have greater reliability (life and data preservation), although at higher cost. Another advantage for SCSI is tagged command queuing. I too was wondering how you were going to power 20 drives in one PC, and figured even 12 per PC (1 for system/boot, 11 for RAID 5, resulting in 10x120 GB for data storage) is a lot for most people to build.

As mentioned by NightOwl, with IDE/ATAPI only one drive on each channel can READ or WRITE at any instant of I/O activity. So if you have 12 channels, only 12 drives actually perform I/O at a time. In an IDE situation, you should actually use two 12-channel cards, each drive on a separate channel. This allows each drive to perform a read or write without waiting on another drive. Additionally, if you build a RAID 0 or 5, the drives must all be on separate channels anyhow. Any additional drives on the same channel can be a different array or individual JBODs (for backup or other use), but can't function in the primary array.
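The channel bottleneck above reduces to a one-line model: concurrency is bounded by the number of channels, not the number of drives.

```python
# Toy model of IDE/ATAPI channel contention: only one drive per
# channel can be doing I/O at any instant, so adding a second
# drive to a channel adds capacity but zero concurrency.
def concurrent_ios(channels, drives_per_channel):
    drives = channels * drives_per_channel
    return min(drives, channels)   # one active drive per channel

print(concurrent_ios(channels=12, drives_per_channel=1))  # 12 drives, 12 busy
print(concurrent_ios(channels=12, drives_per_channel=2))  # 24 drives, still 12 busy
```

That's the whole argument for two 12-channel cards with one drive each rather than doubling up on a single card.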

With SCSI, each drive on a channel (7 or 15 per bus, depending on bus width) can perform a tagged (staggered controller command) read or write. SCSI drives are still more reliable than IDE/ATAPI drives and have better performance specs. (Noise and heat are another matter.) SCSI can support external arrays easily through high-density external connectors over LVD cable.

So if your company can't spend the $25 grand on a small, prebuilt SAN, you can build it yourself. There's nothing wrong with going the 3ware route if your company can't afford something else, but then don't expect the reliability and support of a commercial system, and do expect much longer deployment times and more problems. Fully document everything so that any maintenance tech or support person in the future can do the maintenance they need to perform with the least amount of system/storage disruption.

If you go the build-it-yourself route, use two 12-channel cards and two systems to support 11 drives each (not counting the system/boot drive). Buy at least a couple of spare drives and hot-spare them.
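To put rough numbers on the layouts being discussed (standard RAID capacity formulas, sizes in GB):

```python
def raid5_usable(n_drives, size_gb):
    # one drive's worth of capacity goes to distributed parity
    return (n_drives - 1) * size_gb

def raid10_usable(n_drives, size_gb):
    # mirrored pairs halve the raw capacity
    return n_drives // 2 * size_gb

# 11 data drives per card at 120 GB each, as suggested above:
print(raid5_usable(11, 120))    # 1200 GB usable per card
print(raid10_usable(22, 120))   # 1320 GB usable across both cards
```

Either way, note that the usable figure lands well under the 2 TB raw target, which is worth checking before ordering drives.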

Here's a brief example of how Compaq envisions a SAN with 4 servers and one storage array with switch. ftp://ftp.compaq.com/pub/products/storageworks/raidstorage/SAN123br.pdf

Here's a detailed example of an entry SAN.
ftp://ftp.compaq.com/pub/products/storageworks/whitepapers/14JU-0401A-WWEN.pdf

How to generally plan a SAN.
ftp://ftp.compaq.com/pub/products/sanworks/SANPlanning.pdf

Individual PCI HBA adapters from HP cost around $2,000 each.

-- CAUTION about Linux compatibility ---
If you plan to change your systems to Linux, be careful.
Here's an example. Note that only the FC-AL switch (not hubs or fabric switches) has the greatest compatibility. The FC-AL switch listed here in conjunction with the RA4000 or RA4100 can work with NetWare and clustering, Windows NT/2000 and clustering, Linux and clustering, and SCO (without clustering). Note that the hubs in combination with the RA4x000 do not support Linux or Linux clustering, and that the fabric switches DO support Linux, but not Linux clustering. ftp://ftp.compaq.com/pub/products/storageworks/RA4100/MatrixRevs0827JL.pdf

Most of the IBM SAN solutions support Windows NT/2000 and Unix/Linux/AIX, AS/400 and OS/390 (mainframe). Clustering you would need to research.

Here's entry level SAN components from HP.
http://www.compaq.com/storage/entryvaluesan.html
An RA4100 runs about $6,500 (with capacity for up to 12 SCSI drives, each with configurable expansion capabilities for your storage needs, or get 2 RA4100s and drives). The Modular SA 1000 costs around $10,000 but has a faster SCSI bus and can handle more drives.
If you want an 8-port FC-AL switch, it runs around $5,400.
If you need director/management software for Windows 2000 servers, the SANworks Virtual Replicator software runs around $4,500 for one server and doesn't work with Linux. If you plan to go to Linux, do this before you roll out a SAN, not a year down the road. SAN backup software costs from $22,000 and up. [Legato, Veritas, CA]
And for each server connecting to the FC-AL switch, you will need an appropriate PCI adapter at about $2,000 each.
http://www.smb.compaq.com/dstore/families.asp?ProductLineId=450&oi=E9CED&BEID=19701&SBLID=
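Tallying the list prices quoted above, picking the cheaper array option and a single attached server:

```python
# Rough total from the figures quoted in this post (US dollars).
costs = {
    "RA4100 array":                   6_500,
    "8-port FC-AL switch":            5_400,
    "SANworks Virtual Replicator":    4_500,
    "SAN backup software (from)":    22_000,
    "FC PCI adapter (one server)":    2_000,
}
print(sum(costs.values()))   # 40400 -- before drives, cables, or support
```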

That's all a hunk of change, isn't it?

---

There is an advantage to having individual channels/buses per drive, because each drive gets dedicated bus bandwidth. So the 3ware Serial ATA controllers in conjunction with Seagate SATA drives sound like an interesting idea. SATA uses a 32-bit CRC, which is better than current IDE/ATA and SCSI standards.
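What a 32-bit CRC buys you is corruption detection on the wire; a quick demonstration using Python's `zlib.crc32` as a stand-in (SATA's CRC serves the same role, though the link-layer details differ):

```python
import zlib

payload = bytearray(b"2TB of company data")
good = zlib.crc32(bytes(payload))   # checksum computed before transfer

payload[0] ^= 0x01                  # flip a single bit "in transit"
bad = zlib.crc32(bytes(payload))    # checksum recomputed on arrival

# a 32-bit CRC is guaranteed to catch any single-bit error
print(good != bad)                  # True: the corruption is detected
```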
 

mobogasm

Golden Member
Oct 25, 1999
1,033
0
0
I don't know much about SANs, but I just thought I'd add: you might want to wait until the 200 GB or 320 GB drives come out later in the year, I believe. That would cut down on the number of drives needed, or would increase the storage drastically.
 

Somecallmetim

Member
Apr 19, 2001
84
0
0
I was thinking about that, but then that means I'll be using 1 RAID card to implement a RAID 10, and that can't be done on one card. So the cheaper solution would be to use 2 cards with the 120 GB drives so I can have some protected data. I am interested in the SCSI idea; what cards and drives would you suggest?
 

CBuxton

Senior member
Dec 8, 1999
389
0
0
One other thing that I haven't seen mentioned yet... are all the drives hot-swappable? If not, then when a drive dies (and trust me, they WILL! =)) you'll have to shut down the machine, crack the case, and replace it. I'm just assuming that with 2 TB of data that isn't something you would like to do. That said, I can understand the cost factor. Obviously you're saving a HUGE amount of money by having a machine custom-built instead of buying a SAN. I would steer you towards a NAS. Take a look at these products: Network Appliance NASes. I think the performance is outstanding and the prices are much more reasonable than a SAN.
 

Hoober

Diamond Member
Feb 9, 2001
4,385
34
91
Originally posted by: CBuxton
One other thing that I haven't seen mentioned yet.....are all the drives hot-swappable? If not then when a drive dies (and trust me, they WILL! =)) you'll have to shut down the machine, crack the case, and replace it. I'm just assuming that with 2TB of data that isn't something you would like to do. That said, I can understand the cost factor. Obviously you're saving a HUGE amount of money by having a machine custom-built instead of buying a SAN. I would steer you towards a NAS. Take a look at these products: Network Appliance NASs. I think the performance is outstanding and the prices are much more reasonable than a SAN.

Depends on your setup and who you purchase your SAN from. Every vendor we spoke with when making our decision had multiple hot-swappable spare drives that were monitored by the onboard software. The millisecond it detects a problem in a drive, it moves the data going to/from that drive onto the hot spare, shuts down the bad drive, recreates the LUN, and you're good to go. Then you get a nice email saying that you've lost a drive and please replace it. Very, very cool stuff.
 