Please read the CAUTION at the bottom about Linux compatibility!!!
ScottMac is your spec man. Like he said, iSCSI is a spec that enables SCSI commands to run over IP, which allows a SAN to be built over IP. As he also said, a SAN is a type of storage network with independent storage systems connected to a central storage switch, usually by FC-AL (Fibre Channel Arbitrated Loop). Standard switches allow two devices to communicate independent of other devices' communication channels or links. Fabric switches can handle around 16 million addresses, while standard FC-AL switches, like the ones for your situation, can handle addresses for up to 126 nodes. Fabric switches are for larger, faster SANs. (More expensive too.)
I don't know if it's advisable to go with an ATA RAID solution. SCSI drives still have greater reliability (drive life and data preservation), although at higher cost. Another advantage for SCSI is tagged command queuing. I too was wondering how you were going to power 20 drives in one PC, and figured even 12 per PC (1 for system/boot, 11 for RAID 5, resulting in 10x120 GB of usable data storage) is a lot for most people to build.
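To make the drive-count arithmetic above concrete, here's a small sketch of RAID 5 usable capacity, which is (n - 1) x drive size since one drive's worth of space goes to parity. The 120 GB figure and drive counts are just the numbers from this thread:

```python
# RAID 5 usable capacity: one drive's worth of space is consumed by
# parity, so n drives yield (n - 1) * drive_size of usable storage.

def raid5_usable_gb(n_drives: int, drive_gb: int) -> int:
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (n_drives - 1) * drive_gb

# The scenario from this thread: 12 drives per PC, 1 for system/boot,
# the other 11 in the RAID 5 array, each drive 120 GB.
usable = raid5_usable_gb(11, 120)
print(usable)  # 1200, i.e. 10 x 120 GB usable per box
```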
As mentioned by NightOwl, with IDE/ATAPI only one drive on each channel can READ or WRITE at any instance of I/O activity. So if you have 12 channels, only 12 drives can actually perform I/O at a time. In an IDE setup, you should actually use two 12-channel cards, with each drive on a separate channel. This allows each drive to perform a read or write without waiting on another drive. Additionally, if you build a RAID 0 or RAID 5 array, the drives must all be on separate channels anyhow. Any additional drives on the same channels can form a different array or individual JBODs (for backup or other use), but can't function in the primary array.
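The one-active-drive-per-channel constraint above boils down to simple arithmetic: concurrent IDE I/O is limited by channels, not by drive count. A quick sketch:

```python
# With IDE/ATAPI, only one drive per channel can read or write at once,
# so the number of drives doing I/O simultaneously is capped at
# min(drives, channels). This is why one drive per channel is the rule
# for RAID performance.

def ide_concurrent_io(n_drives: int, n_channels: int) -> int:
    return min(n_drives, n_channels)

print(ide_concurrent_io(24, 12))  # 24 drives on 12 channels: only 12 active
print(ide_concurrent_io(12, 12))  # one drive per channel: all 12 active
```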
With SCSI, each drive on a channel (up to 7 on a narrow bus, 15 on a wide one) can perform a tagged (queued controller command) read or write. SCSI drives are still more reliable than IDE/ATAPI drives and have better performance specs. (Noise and heat are another matter.) SCSI easily supports external arrays through external HD connectors over LVD cable.
So if your company can't spend the $25 grand on a small, prebuilt SAN, you can build it yourself. There's nothing wrong with going the 3Ware route if your company can't afford anything else, but don't expect the reliability and support of a commercial system, and do expect much longer deployment times and more problems. Fully document everything so that any maintenance tech or support person in the future can do the maintenance they need to perform with the least amount of system/storage disruption.
If you go the build-it-yourself route, use two 12-channel cards and two systems supporting 11 drives each (not counting the system/boot drive). Buy at least a couple of spare drives and configure them as hot spares.
Here's a brief example of how Compaq envisions a SAN with 4 servers and one storage array with switch.
ftp://ftp.compaq.com/pub/products/storageworks/raidstorage/SAN123br.pdf
Here's a detailed example of an entry SAN.
ftp://ftp.compaq.com/pub/products/storageworks/whitepapers/14JU-0401A-WWEN.pdf
How to generally plan a SAN.
ftp://ftp.compaq.com/pub/products/sanworks/SANPlanning.pdf
Individual PCI HBA adapters from HP cost around $2,000 each.
--
CAUTION about Linux compatibility ---
If you plan to change your systems to Linux, be careful.
Here's an example. Note that only the FC-AL switch, not the hubs or fabric switches, has the greatest compatibility. The FC-AL switch listed here in conjunction with the RA4000 or RA4100 can work with NetWare and clustering, Windows NT/2000 and clustering, Linux and clustering, and SCO (without clustering). Note that the hubs in combination with the RA4x000 do not support Linux or Linux clustering, and that the fabric switches in conjunction DO support Linux, but not Linux clustering.
ftp://ftp.compaq.com/pub/products/storageworks/RA4100/MatrixRevs0827JL.pdf
Most of the IBM SAN solutions support Windows NT/2000, Unix/Linux/AIX, AS/400, and OS/390 (mainframe). Clustering support you would need to research yourself.
Here's entry level SAN components from HP.
http://www.compaq.com/storage/entryvaluesan.html
An RA4100 runs about $6,500 (with capacity for up to 12 SCSI drives and configurable expansion for your storage needs, or get two RA4100s and drives). The Modular SA 1000 costs around $10,000 but has a faster SCSI bus and can handle more drives.
So if you want an 8-port FC-AL switch, it runs around $5,400.
If you need director/management software for Windows 2000 servers, the SANworks Virtual Replicator software runs around $4,500 for one server and doesn't work with Linux. If you plan to go to Linux, do this before you roll out a SAN, not a year down the road. SAN backup software costs from $22,000 and up. [Legato, Veritas, CA]
And for each server to connect to the FC-AL switch, you will need an appropriate PCI adapter at about $2,000 each.
http://www.smb.compaq.com/dstore/families.asp?ProductLineId=450&oi=E9CED&BEID=19701&SBLID=
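Adding up the list prices quoted above gives a feel for the total. My rough tally, assuming four servers (and so four HBAs) as in the Compaq 4-server example linked earlier:

```python
# Rough tally of the list prices quoted in this post. The four-HBA
# count is an assumption based on the 4-server Compaq example.
components = {
    "RA4100 storage array": 6_500,
    "8-port FC-AL switch": 5_400,
    "SANworks Virtual Replicator (1 server)": 4_500,
    "SAN backup software (low end)": 22_000,
    "PCI HBAs (4 servers x $2,000)": 4 * 2_000,
}
total = sum(components.values())
print(f"${total:,}")  # $46,400
```

Drives and servers not included, so the real number only goes up from there.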
That's all a hunk of change, isn't it?
---
There is an advantage to having individual channels/buses per drive, because each drive gets dedicated bus bandwidth. So the 3Ware Serial ATA controllers in conjunction with Seagate SATA drives sound like an interesting idea. SATA uses a 32-bit CRC, which is better than the current IDE/ATA and SCSI standards.
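As an illustration of what a 32-bit CRC buys you, here's a sketch using Python's zlib.crc32. This isn't SATA's actual frame format, just the same 32-bit CRC idea: flip even a single bit in transit and the checksum no longer matches, so the corruption is caught.

```python
import zlib

# Sketch of 32-bit CRC error detection. SATA computes a CRC over each
# frame; this uses zlib's CRC-32 to show the same principle, not
# SATA's actual link-layer framing.
payload = b"512-byte sector data ..."
crc = zlib.crc32(payload)

# Flip a single bit "in transit" and the CRC no longer matches.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(zlib.crc32(corrupted) == crc)  # False: corruption detected
```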