I've always wondered whether the theoretical transfer rates of IDE/SCSI buses are actually approached when you put multiple drives on the bus (actually, this could apply to any sort of shared data bus):
Let's say you have a single-channel SCSI controller capable of a 160MB/s transfer rate. If you were to install a single SCSI drive (which also supports 160MB/s transfers) capable of a sustained transfer rate of ~30MB/s (sustained as in from the internal disk platter to the drive buffer), then you should be able to measure a ~30MB/s transfer rate on the controller/bus itself. One might naturally ask, "What good is having a 160MB/s bus if the drive can only pump out data at 30MB/s?" The answer is that in situations where there are multiple devices on the bus, you need the extra bandwidth for the other devices. So taking our previous example, if we put 2 drives on the bus then we should be able to measure a sustained transfer rate of ~60MB/s. With 3 drives, ~90MB/s, and so on until we get near the 160MB/s limit.
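Here's roughly the arithmetic I have in mind (just a sketch of my expectation; the 30MB/s and 160MB/s figures are only my example numbers, not measurements from real hardware):

    # Naive expectation: per-drive sustained rates add up until the bus saturates.
    DRIVE_SUSTAINED_MBS = 30   # example sustained platter-to-buffer rate per drive
    BUS_LIMIT_MBS = 160        # example single-channel bus limit

    def expected_aggregate(num_drives):
        """Aggregate throughput I'd expect to measure on the bus."""
        return min(num_drives * DRIVE_SUSTAINED_MBS, BUS_LIMIT_MBS)

    for n in range(1, 7):
        print(f"{n} drive(s): ~{expected_aggregate(n)} MB/s")
    # 1 -> 30, 2 -> 60, 3 -> 90, 4 -> 120, 5 -> 150, 6 -> 160 (bus-limited)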
My question is this: assuming that only 1 device can transmit data at any given point in time, how can the SCSI bus actually transmit data faster than 30MB/s in this example? If during time period 1, drive #1 is transmitting data at 30MB/s, no other drive should be able to transmit data or else there would be a collision (??). Any other drive wanting to transmit data would have to wait until time period 2, and so on. Based on this reasoning, you should never measure more than the sustained transfer rate of one single drive on the bus. However, in the real world people *do* measure higher sustained transfer rates, so obviously the transfer rates are adding up somehow.
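If it helps, here is a sketch of the mental model behind my confusion (purely illustrative; it assumes fixed one-second time slots and that whichever drive holds the bus can only feed it at its platter rate):

    # My (apparently wrong) model: strict time-slicing, one drive per slot,
    # and the bus only moves data as fast as that drive's platter delivers it.
    DRIVE_SUSTAINED_MBS = 30

    def aggregate_under_time_slicing(num_drives, num_slots=1000):
        total_mb = 0.0
        for slot in range(num_slots):
            # Exactly one drive owns the bus in each slot, so only one
            # drive's worth of data moves, regardless of how many drives exist.
            total_mb += DRIVE_SUSTAINED_MBS
        return total_mb / num_slots  # MB/s

    print(aggregate_under_time_slicing(3))  # still ~30 MB/s, hence my confusion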
Can anyone enlighten me as to how this works?