That's the problem with eSATA. Last time I looked there was only one company actually producing cables for it, and they wanted an arm and a leg.
Same thing for external enclosures. All that stuff _should_ be very inexpensive, but it isn't.
Now if you want something fancy, I bet you can go online and score a Fibre Channel rack storage unit for about the same cost. You might be able to find 1TB of storage relatively inexpensively. Too bad nothing comes close to the GB/$ you can get from SATA drives nowadays.
If I had a couple grand to dump into a storage array, what I'd do is head down to Newegg and buy the biggest server case they've got.
For example, for 250 dollars you can get a nice Lian Li "PC-V2000Aplus II". It has 12 3.5" bays at the bottom and 7 5.25" bays at the top. If you stick a double-sided DVD burner in there, that leaves 6 5.25" bays, which you can put 3.5" adapters in.
That leaves a total of 18 drive bays for 3.5" hard drives.
For the computer hardware inside the case I'd get a nice Intel motherboard with a cheap CPU.
For example:
http://www.newegg.com/Product/...?Item=N82E16813121060R
3 PCI Express slots, 8 onboard SATA ports. No onboard video, so a cheap PCI video card would need to be acquired.
Then maybe 3 of these:
http://www.newegg.com/Product/...x?Item=N82E16816103058
It's a 4-port SATA controller that uses PCI Express, so I don't have to worry about any PCI bus limitations.
So 8 + 4 * 3 = 20 SATA ports.
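To put rough numbers on why PCI Express matters here, a quick back-of-the-envelope check (the per-drive throughput figure is my own assumption, not a spec):

```python
# Back-of-the-envelope bandwidth check for the controller cards.
# Rough/assumed figures: classic 32-bit/33MHz PCI is ~133 MB/s shared
# across every card on the bus; a PCIe 1.x lane is ~250 MB/s per
# direction per card; a 7200rpm SATA drive of the era sustains maybe
# ~70 MB/s on sequential reads.

PCI_SHARED_BUS_MBPS = 133
PCIE_X1_PER_CARD_MBPS = 250
DRIVE_SUSTAINED_MBPS = 70        # assumption

onboard = 8
cards = 3
ports_per_card = 4
print("Total SATA ports:", onboard + cards * ports_per_card)    # 20

demand_per_card = ports_per_card * DRIVE_SUSTAINED_MBPS
print(f"4 busy drives per card want ~{demand_per_card} MB/s")            # 280
print(f"A shared PCI bus tops out around {PCI_SHARED_BUS_MBPS} MB/s total")
print(f"Each PCIe x1 card gets ~{PCIE_X1_PER_CARD_MBPS} MB/s to itself")
```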
Then get the cheapest Core 2 Duo CPU supported by that motherboard and 1GB of RAM, plus a nice 450-500W Seasonic power supply and a second gigabit Ethernet controller.
Ideally I'd set up 2 small drives in mirrored RAID for the OS and boot, then the rest in whatever array you'd like.
500GB drives seem reasonably priced. Assuming you don't mind rigging up a couple of dirty little mounts for the OS drives, that still leaves me 18 ports to play around with:
About 8.5TB in JBOD.
About 7.5-8TB in RAID 6 or two RAID 5 arrays.
Or about 4-4.5 TB in RAID 10.
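Here's how those figures shake out, assuming 18 data drives at 500GB each (decimal TB, ignoring file system overhead; the JBOD number lands around 8.2 binary TiB, which is where the "about 8.5TB" ballpark comes from):

```python
# Usable capacity for 18 x 500GB data drives under different layouts
# (decimal TB, file system overhead and the 2 OS drives ignored).

drives = 18
size_tb = 0.5

jbod    = drives * size_tb                    # no redundancy at all
raid6   = (drives - 2) * size_tb              # 2 drives' worth of parity
raid5x2 = 2 * (drives // 2 - 1) * size_tb     # two 9-drive RAID 5 arrays
raid10  = (drives // 2) * size_tb             # everything mirrored

print(f"JBOD:      {jbod:.1f} TB")      # 9.0
print(f"RAID 6:    {raid6:.1f} TB")     # 8.0
print(f"2x RAID 5: {raid5x2:.1f} TB")   # 8.0
print(f"RAID 10:   {raid10:.1f} TB")    # 4.5
```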
Of course this is only for home storage. No hot-spare drives left over, no hot-swappable drive bays, no ECC RAM, none of the half-dozen other little details I'd need if I was doing this for a business. For home storage I don't mind downtime. If a drive goes south I'd rather just have the machine shut down (properly) till I get a chance to look at it, rather than trying to keep running. For a 'real' setup you can expect probably 3-4x the price versus a bare-bones, non-critical setup. Probably 10x the cost if you throw backups into the mix (which are a hard requirement for businesses).
This would essentially turn the server into an over-glorified network disk controller, not useful for much else besides lightweight server duties. If I wanted to get fancy I'd use iSCSI to export LVs as disks. That way I can use the storage as if it were local to my machine, with a native FS and everything. Perfect for home directories and other such things.
There is a good software iSCSI implementation for Linux. Very fast, perfect for gigabit Ethernet. For the iSCSI target (the server half) there is iSCSI Enterprise Target, and the Linux kernel has a built-in iSCSI initiator (the client half) through the Open-iSCSI project.
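As a sketch of what the export side could look like, here's a little Python script that spits out target stanzas in the iSCSI Enterprise Target ietd.conf style. The volume group and LV names are made up for illustration, and the exact config syntax should be double-checked against the IET documentation for your version:

```python
# Generate ietd.conf-style stanzas for exporting a few LVs over iSCSI.
# The VG/LV names and the IQN are hypothetical; verify the Target/Lun
# syntax against the iSCSI Enterprise Target docs before using it.

lvs = ["home", "media", "scratch"]   # hypothetical logical volumes
vg = "vg_array"                      # hypothetical volume group

for lv in lvs:
    print(f"Target iqn.2007-01.lan.home:storage.{lv}")
    print(f"    Lun 0 Path=/dev/{vg}/{lv},Type=blockio")
    print()
```

On the client side, Open-iSCSI's iscsiadm tool handles discovering and logging into those targets, after which each exported LV shows up as an ordinary local block device.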
If I wanted to get REALLY fancy, I'd use the in-kernel support for Oracle's OCFS2 cluster file system so that multiple computers can access the same iSCSI shares at the same time. The difference between a 'clustering' file system and a 'regular' file system is that there is network communication between the machines to synchronize file system access and file locking. That's how you avoid file system corruption.
Perfect for any sort of high performance shared storage. Home directories, sharing programs through /usr/local/ mounts, media editing, etc.
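To make the locking point concrete, here's ordinary POSIX advisory locking in Python. On a single machine the kernel arbitrates it; a cluster file system like OCFS2 has to do the equivalent arbitration over the network between nodes (the path is hypothetical, and whether a given OCFS2/kernel version honors these particular fcntl locks cluster-wide is something to verify):

```python
# Plain POSIX advisory locking -- the kind of per-file coordination a
# cluster file system has to arbitrate across machines instead of
# within one kernel.  The mount point/path below is hypothetical.

import fcntl

path = "/mnt/cluster/home/alice/notes.txt"   # file on the shared mount

with open(path, "r+") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)    # take an exclusive advisory lock
    try:
        data = f.read()              # read/modify safely while holding it
        # ... change data and write it back ...
    finally:
        fcntl.lockf(f, fcntl.LOCK_UN)
```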
By bonding 2 Ethernet controllers from the file server to a switch, and running a network that supports jumbo frames (all the controllers and the switch must support large MTU sizes), you can probably get a 30-40% boost in file access performance over plain 1000Mb/s Ethernet.
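For a rough sense of where part of that boost comes from, here's the frame-overhead math (the header sizes are standard; the bigger real-world wins from jumbo frames are usually fewer interrupts and less per-packet CPU work, and bonding gains depend on how the driver hashes flows):

```python
# Payload efficiency of a TCP stream at standard vs jumbo MTU.
# Per-frame overhead counted: 14B Ethernet header + 4B FCS + 20B IP
# + 20B TCP (preamble and inter-frame gap ignored).

ETH_OVERHEAD = 14 + 4
IP_TCP_HEADERS = 20 + 20

def payload_fraction(mtu):
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_fraction(mtu):.1%} of line rate is payload")

# Bonding two gigabit NICs roughly doubles the aggregate ceiling, but a
# single TCP connection typically still travels over one physical link.
```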
All sorts of fun stuff.