A RAID 0 array may match a single large SCSI drive in sustained writes, but there are real advantages to running one large SCSI drive instead of a striped array:
- multitasking ability - an IDE drive can only handle one command at a time, while SCSI drives support tagged command queuing and can juggle multiple requests at once - hence the ability to run a game like Quake III whilst burning a CD.
- lower CPU overhead - an IDE RAID 0 array can eat a large chunk of CPU time during heavy seeks, while SCSI drives impose far less overhead since the I/O negotiation is handled by the controller chip. The only thing that limits SCSI's speed is the PCI bus it's plugged into - a typical 32-bit, 33MHz slot offers a maximum of 133MB/sec, a 64-bit, 33MHz slot 266MB/sec, and a 64-bit, 66MHz slot 533MB/sec (see the bandwidth sketch after this list). PCI-X offers much larger potential bandwidth per slot, but you're always limited by the maximum bus speed of your motherboard. For example, HyperTransport on nForce boards expands the total bandwidth to 800MB/sec, yet each PCI slot is still capped at 133MB/sec - so one slot can be maxed out in speed and still leave bandwidth for other cards and IDE drives. Ideally, one could install multiple U160 adapters and run 30 drives (drives can easily be put in external drive towers at little loss of speed - though enclosures that house over three LVD drives are prohibitively expensive, as are external LVD-rated cables).
- lower chance of failure - if one drive in a RAID 0 array fails, you're in a world of hurt (see the failure-odds sketch after this list). Only when you run a RAID 10 or 0+1 array do you approach the reliability of a single SCSI drive, and that takes four drives and two IDE channels at the cost of two IRQs. 15 SCSI devices can be bound to one LVD chain and one IRQ with no loss in speed.
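
For anyone who wants to sanity-check the slot math above, here's a quick back-of-the-envelope calculation in Python (the formula is just bus width times clock; these are theoretical peaks, and real-world throughput runs lower):

# Peak PCI bandwidth: (bus width in bits / 8) bytes * clock in MHz = MB/sec
def pci_peak_mb_per_sec(width_bits, clock_mhz):
    return (width_bits / 8) * clock_mhz

print(pci_peak_mb_per_sec(32, 33))  # ~133 MB/sec - standard slot
print(pci_peak_mb_per_sec(64, 33))  # ~266 MB/sec
print(pci_peak_mb_per_sec(64, 66))  # ~533 MB/sec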
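
And here's why striping hurts your odds: the array dies if any one member dies, so the failure chance compounds with every drive you add. A rough sketch, assuming independent failures (the 5% annual rate below is a made-up number purely for illustration):

# P(array fails) = 1 - P(every drive survives), assuming independent failures
def array_failure_prob(p_single, n_drives):
    return 1 - (1 - p_single) ** n_drives

p = 0.05  # hypothetical annual failure rate per drive
for n in (1, 2, 4):
    print(n, "drive(s):", round(array_failure_prob(p, n), 4))
# 1 drive:  0.05
# 2 drives: 0.0975 - a two-drive RAID 0 nearly doubles the risk
# 4 drives: 0.1855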
I hope this answers some questions - anyone who still has some is welcome to PM me and I'll try my best to answer.