Ya, but that's 800 dollars. I'd rather spend that much money on an extra TB or two.
It's possible to use iSCSI without any special hardware, and it's actually quite fast. I benchmarked it against things like NFS and SMB, and it works out very well.
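For reference, wiring up a plain software initiator on a running system is just a couple of commands with the open-iscsi tools (the address and IQN here are made up for illustration; this needs root):

```shell
# Ask the target server what it exports (hypothetical address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to one of the discovered targets (hypothetical IQN)
iscsiadm -m node -T iqn.2007-01.com.example:storage.disk1 -p 192.168.1.10 --login

# The LUN now shows up as an ordinary block device (e.g. /dev/sdb),
# so you can partition it, mkfs it, and benchmark it like a local disk.
```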
I've done it using PXE boot.
Software-wise you need the iSCSI Enterprise Target on the server, dedicated disks or logical volumes, a DHCP server that supports netboot, a TFTP server, the pxelinux PXE bootloader, and a custom Linux kernel and initrd image.
You need a server (or servers) with:
A DHCP server that supports advanced configurations; ISC's dhcpd (Bind9 is DNS only, so it won't cover the DHCP part by itself)
A TFTP server; atftpd or tftpd-hpa (TFTP is tricky, since it's a UDP protocol, and you may find that one works better than the other when dealing with particular hardware)
A NIC on the client that supports PXE (it's possible to use the built-in ROM on some cards to netboot, or to netboot from a floppy, CD-ROM, or flash key)
Pxelinux, which is the PXE-specific bootloader from the syslinux family of bootloaders.
The iSCSI Enterprise Target on the file server, with dedicated disks or logical volumes.
A custom initramfs/initrd image that you've added the Open-iSCSI initiator to.
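As a rough sketch, the dhcpd side of that list looks something like this (the MAC, addresses, and filenames are made up for illustration):

```shell
# /etc/dhcpd.conf (fragment): pin the netboot config to one client's MAC
host desktop1 {
    hardware ethernet 00:11:22:33:44:55;   # the client's PXE NIC
    fixed-address 192.168.1.50;
    next-server 192.168.1.10;              # the TFTP server's address
    filename "pxelinux.0";                 # the pxelinux bootloader to fetch
}
```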
How PXE generally works is like this:
PXE boots ---> asks the DHCP server for a lease ---> you have DHCP preconfigured for this particular MAC address, and the server responds with an address plus the name and location of the bootloader ---> the card downloads pxelinux and its *.cfg file and executes it ---> pxelinux looks through the configuration and finds an entry that matches the system ---> the vmlinuz and initrd images are downloaded and executed ---> Linux launches, runs the init script in the initrd image, and sets up the network and the various system services needed to access the remote share ---> after everything is set up, a root pivot is done and you're now running on your remote file system.
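The pxelinux step in that flow is driven by a config file fetched over TFTP. A minimal entry might look like this (the kernel and initrd filenames are illustrative, and the iscsi_* parameters are just whatever names your own initrd scripts agree to parse, nothing standard):

```shell
# tftpboot/pxelinux.cfg/default
DEFAULT linux-iscsi
LABEL linux-iscsi
    KERNEL vmlinuz-custom
    APPEND initrd=initrd-iscsi.img ip=dhcp iscsi_target_ip=192.168.1.10 iscsi_target_name=iqn.2007-01.com.example:desktop
```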
The tricky part is writing the bash scripts to set up the Open-iSCSI software. The sucky part is that with the Open-iSCSI daemon there is no way to hand off control to another daemon: you'd have to kill the old daemon and launch a new one once you're out of the initrd image, but you can't (at least not with the version I was using). So it works, but you lose control over the daemon since its context is off. The Open-iSCSI folks have this filed as an issue and something they want to fix, but I don't know if they've gotten around to it by now; I haven't kept up.
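A lot of that bash work is plain plumbing, like pulling your target details out of the kernel command line before you can even talk to the network. A minimal sketch (the iscsi_target_* parameter names are my own convention from the pxelinux APPEND line, not anything Open-iSCSI defines):

```shell
#!/bin/sh
# Initrd-side sketch: pick the iSCSI settings out of the kernel command
# line (in a real initrd you'd read the string from /proc/cmdline).
parse_iscsi_args() {
    for arg in $1; do
        case "$arg" in
            iscsi_target_ip=*)   ip="${arg#iscsi_target_ip=}" ;;
            iscsi_target_name=*) iqn="${arg#iscsi_target_name=}" ;;
        esac
    done
    echo "$ip $iqn"
}

# Example: the string a netbooted kernel might have been handed
parse_iscsi_args "ro ip=dhcp iscsi_target_ip=192.168.1.10 iscsi_target_name=iqn.2007-01.com.example:desktop"
# prints: 192.168.1.10 iqn.2007-01.com.example:desktop
```

From there the real script would hand those values to the initiator (open-iscsi ships iscsistart for exactly this boot-time case), wait for the block device, mount it, and do the pivot.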
I ran my desktop like that for a year or so.
The major downside to any network booting (iSCSI, GNBD, NFS; it's fundamental to all forms of network booting) is the lack of a local disk. Imagine you're accessing an iSCSI LUN, and on it you have a partition set up for swap space. If you exhaust your local physical memory, you need to write some out to swap to keep the machine running. But in order to write memory out to the swap partition, you have to traverse the entire network stack: store packets, analyse and respond to them, all of which requires (you guessed it) more RAM. So to free up RAM on a RAM-exhausted system you need to allocate more RAM, which means you need to write out more swap, which means you've hit a deadlock.
You can work around it a bit: you can set aside some RAM in a special reserve you save just for this occasion, or you can be very careful that your workload does not exceed your RAM, but it's still there waiting to bite you.
So the real solution is to have a local disk for swap, whether or not you actually plan on booting off of it.
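Setting that up is cheap if there's any local disk at all (the device name here is a stand-in for your own partition; needs root):

```shell
# Turn a small local partition into swap so paging never touches the network
mkswap /dev/sda2
swapon /dev/sda2

# And add it to /etc/fstab so it comes back after a reboot:
#   /dev/sda2  none  swap  sw  0  0
```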
I am thinking that since GNBD is Linux-native, it may be easier to deal with as far as the booting part goes. It may be faster too: being Linux-native, you don't have to deal with the very complex iSCSI protocol.
Keep in mind that there are a variety of network block devices for Linux. The one I am talking about specifically is GNBD; the other ones are not suitable for this sort of thing.
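For the curious, exporting a device with GNBD is pretty terse. A sketch assuming the Red Hat cluster GNBD tools (the export name, device path, and hostname are made up; needs root on both ends):

```shell
# On the file server: start the GNBD server and export a logical volume
gnbd_serv
gnbd_export -e mydisk -d /dev/VolGroup0/desktop_lv

# On the client: load the module and import everything the server exports
modprobe gnbd
gnbd_import -i fileserver
# The device then appears as /dev/gnbd/mydisk
```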
I just need to get my server back up and play around with GNBD, CLVM, and GFS. Hrm... have to order new drives.
For VM stuff I've used Qemu/Kqemu/KVM and Xen. They are much more dependent on other projects for the various features that you get with ESX.