Why no direct PCI-attached harddrives?

Benedikt

Member
Jan 2, 2002
71
0
0
My question is:
As bus converters (internal HDD bus -> IDE -> PCI) slow performance and increase latency, why are there no direct PCI-connected hard drives, e.g. over a PCI bridge?

My example:
Since PCI is a parallel bus interface, couldn't a hard drive be built without an IDE interface, with just a PCI bridge chip and a small mechanical enclosure, and plugged directly into a PCI slot? Wouldn't that be a little bit faster than going through the various bus conversions in the data path from the HDD to the north/southbridge?

Since some solid state disks work this way, why no hard drives?

Greetings,

Bene
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
They had cards like this back in the days of ISA - an ISA card with a hard drive mounted on it.

Would you really want to lose a PCI slot for each hard drive though? For most likely minimal performance gains? The moving parts in drives are orders of magnitude slower than the controllers/bridges.
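
To put rough numbers on "orders of magnitude": a back-of-the-envelope sketch in Python, where the seek time, spindle speed, request size and peak PCI rate are typical values assumed for illustration, not figures from this thread.

# Rough comparison: mechanical delays vs. time on the PCI bus for one request
avg_seek_ms = 9.0                           # typical desktop drive seek (assumed)
avg_rotational_ms = 0.5 * 60_000 / 7200     # half a revolution at 7200 RPM ~= 4.2 ms
request_kb = 64                             # assumed request size
pci_peak_mb_s = 133.0                       # 32-bit/33 MHz PCI, peak

bus_transfer_ms = (request_kb / 1024) / pci_peak_mb_s * 1000

print(f"waiting on mechanics:  ~{avg_seek_ms + avg_rotational_ms:.1f} ms")
print(f"moving 64 KB over PCI: ~{bus_transfer_ms:.2f} ms")

Even at PCI's full rate, the bus time is a fraction of a millisecond, while the mechanics eat around 13 ms; the extra hop through a controller or bridge is smaller still.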
 

rimshaker

Senior member
Dec 7, 2001
722
0
0
Why would you want to? For the same reason video cards migrated away from the PCI bus... the 33.3MB/s limit.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: rimshaker
Why would you want to? For the same reason video cards migrated away from the PCI bus... the 33.3MB/s limit.

Current chipsets still have the drives go through PCI - they just don't have a physical slot.
Edit: according to Peter, I'm wrong. Trust what he says.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
IDE originally was exactly about this - move the controller into the drive and attach the drive directly to the bus. ISA it was at the time.

Later chipsets offered a stripped down and accelerated copy of the ISA bus for IDE drive attachment, even later to be souped up with a DMA engine. That's what we still have now.

This is about being cheap, not about being fast. If you want Fast, you want a (PCI) controller that manages multiple drives in an intelligent manner, not one controller per drive that hogs the entire bus while at work. If you did IDE again, this time based on PCI, you'd get exactly the latter. You don't want that.

BTW, rimshaker, the limit is 133 MB/s, not 33. And current chipsets don't connect their IDE logic through PCI anyway; chipset-internal buses have become way faster than that.
 

SharkyTM

Platinum Member
Sep 26, 2002
2,075
0
0
Back in the day, IBM servers had each hard drive mounted on a "PCI" card... they were actually a hybrid VLB bus.
The hard drives had no internal controllers, everything was mounted on the external board. This was due to cost- Hard drives were Insanely! expensive... this was in ~1980... the servers we ripped apart had 6x 80MB hard drives. The logic boards could be easily tested/replaced if a problem showed up.

The reason we don't do it now, as Peter said, is speed... PCI is SLOW... 33.3MB/s is a joke...

SharkyTM
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
Originally posted by: SharkyTM
Back in the day, IBM servers had each hard drive mounted on a "PCI" card... they were actually a hybrid VLB bus.
The hard drives had no internal controllers, everything was mounted on the external board. This was due to cost- Hard drives were Insanely! expensive... this was in ~1980... the servers we ripped apart had 6x 80MB hard drives. The logic boards could be easily tested/replaced if a problem showed up.

The reason we don't do it now, as Peter said, is speed... PCI is SLOW... 33.3MB/s is a joke...

SharkyTM

I was under the impression that max throughput on a PCI bus is 528 MB/s, but that it runs at 33.3 MHz. But I have been drinking and am no longer sure, because so many people are saying 33.3. Someone correct me or them.
 

Rand

Lifer
Oct 11, 1999
11,071
1
81
Originally posted by: Evadman
Originally posted by: SharkyTM
Back in the day, IBM servers had each hard drive mounted on a "PCI" card... they were actually a hybrid VLB bus.
The hard drives had no internal controllers, everything was mounted on the external board. This was due to cost- Hard drives were Insanely! expensive... this was in ~1980... the servers we ripped apart had 6x 80MB hard drives. The logic boards could be easily tested/replaced if a problem showed up.

The reason we don't do it now, as Peter said, is speed... PCI is SLOW... 33.3MB/s is a joke...

SharkyTM

I was under the impression that max throughput on a PCI bus is 528 MB/s, but that it runs at 33.3 MHz. But I have been drinking and am no longer sure, because so many people are saying 33.3. Someone correct me or them.

The PCI bus runs at either 33MHz or 66MHz; in the case of 99.99% of consumer motherboards it's a 32bit/33MHz implementation, yielding a peak maximum bandwidth of 133MB/s.
One could potentially have a 64bit/66MHz PCI bus yielding up to 532.8MB/s; one of the more common examples of a chipset that supports 64bit/66MHz PCI slots is the AMD760MPX.

64bit or 66MHz PCI bus implementations are strictly server level, however; you'll never see them in the home except in the rare few high-end enthusiast systems... and those few enthusiasts likely do not have any 64bit/66MHz PCI cards.

This is of course peak theoretical bandwidth; real-world achievable bandwidth is naturally lower.
I'm not sure where the 33.3MB/s figure is coming from... unless perhaps they're thinking of EISA, which provides 33.3MB/s on a 32bit/8.3MHz bus.
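
To make the arithmetic behind these figures explicit, here is a quick sketch in Python; the formula is just bus width in bytes times clock rate, with the clock values rounded the same way as above.

# Peak theoretical bus bandwidth: (width in bits / 8) bytes per cycle * clock in MHz
def peak_bandwidth_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

print(peak_bandwidth_mb_s(32, 33.3))    # consumer PCI, 32bit/33MHz  -> ~133 MB/s
print(peak_bandwidth_mb_s(64, 66.6))    # server PCI,   64bit/66MHz  -> ~533 MB/s
print(peak_bandwidth_mb_s(32, 8.33))    # EISA,         32bit/8.3MHz -> ~33.3 MB/s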
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Once and for all, folks: the max throughput on consumer-level PCI (32-bit, 33 MHz) is 133 MB/s, not 33. Current chipsets have the IDE channels on the much faster chipset-internal bus, so you aren't even limited by that. In fact, it's the rather slow drives and the lack of multi-access intelligence in the drive-embedded controllers that make IDE setups perform poorly compared with more brainiac solutions like SCSI.

Again, IDE is about being cheap, not about being fast.
 

Benedikt

Member
Jan 2, 2002
71
0
0
BTW, is it possible for 2 IDE drives connected to the same IDE channel to run at different speeds at the same time (for example, one at ATA 100 and the second at ATA 66), or do they switch to the lowest data rate they have in common?

Greetings

 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Food for thought:

VESA bus in most 486-50 (pre DX/2) systems = 200MB/s
PCI bus in modern PCs = 133 MB/s

Real progress there.....
 

Mday

Lifer
Oct 14, 1999
18,647
1
81
Originally posted by: glugglug
Food for thought:

VESA bus in most 486-50 (pre DX/2) systems = 200MB/s
PCI bus in modern PCs = 133 MB/s

Real progress there.....

it's called the VESA Local Bus, or VLB, and it really sucked btw. and it was super expensive. besides, VLB IS HUGER THAN THE XBOX. also, vlb was proprietary tech, if i recall, and pci is open.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: Benedikt
BTW, is it possible for 2 IDE drives connected to the same IDE channel to run at different speeds at the same time (for example, one at ATA 100 and the second at ATA 66), or do they switch to the lowest data rate they have in common?

Greetings

Two IDE drives on the same channel NEVER run at the same time; that's impossible anyway. The system can only access one at a time, and it does that at the drive's individual speed. The days of IDE bus bridges that didn't have separate speed programming for master and slave are long gone.

But because the controller in (!) the master drive has to handle the slave drive as well, other compatibility issues between the two drives may appear. This is getting rarer as well, but it's not gone.
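
As a toy model of what that means in practice (Python; the ToyIdeChannel class, its methods and the mode strings are invented purely for illustration, this is not real driver code): the channel serializes access, but each device keeps its own separately programmed transfer mode.

# Toy model only: one IDE channel, two drives, one access at a time,
# each drive with its own separately programmed transfer mode.
class ToyIdeChannel:
    def __init__(self):
        self.transfer_modes = {}

    def set_transfer_mode(self, device, mode):
        # master and slave are programmed independently by the host
        self.transfer_modes[device] = mode

    def read(self, device, sectors):
        # the channel is occupied by exactly one device per transfer
        mode = self.transfer_modes[device]
        return f"read {sectors} sectors from {device} at {mode}"

channel = ToyIdeChannel()
channel.set_transfer_mode("master", "ATA100")
channel.set_transfer_mode("slave", "ATA66")
print(channel.read("master", 8))   # runs at ATA100
print(channel.read("slave", 8))    # runs at ATA66, but only after the master transfer finishes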

VLB, btw, was essentially the 486's CPU bus, running 32 bits wide at up to 40 MHz - so yes, it was slightly faster than consumer PCI, but in case you guys forgot already, it was highly fragile and practically useless with more than one card. 40 MHz w/o waitstates was impossible if you added a 2nd card, so even the most basic system with graphics and mass storage controllers couldn't run at the theoretical maximum speed.

PCI is far superior to that... a pity that after 12 years, we're still stuck with the most basic 32-bit/33 MHz version of it, with no 64-bit slots in sight on consumer boards, not to mention 66 MHz buses.

regards, Peter
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Originally posted by: Mday
it's called the VESA Local Bus, or VLB, and it really sucked btw. and it was super expensive. besides, VLB IS HUGER THAN THE XBOX. also, vlb was proprietary tech, if i recall, and pci is open.

It didn't suck. And it was cheaper than PCI back in the day. Nor was it proprietary. It just wasn't very flexible. The problem with it was that you could only use 3 VLB slots reliably at 33MHz; on a 50MHz bus, you'd be lucky to be able to use one slot.
 

addragyn

Golden Member
Sep 21, 2000
1,198
0
0
Because there is no reason for this. It's a non-solution to a non-problem. The drive is the limitation, not the interface. Here's some good reading on hard drives: storagereview.com.

Some early drives had their controller on a card; now the controllers are on the drives. But AFAIK they were no more directly attached to the bus. Furthermore, newer chipsets have the IDE channels built in, i.e. not piggybacked onto the PCI bus. [Just saw Peter already explained this.]

The SSD disks you're talking about, like the RocketDrive and QikCache I presume, are not really SSD disks. True SSDs typically connect through fibre or SCSI, and I think a few are available for IDE. The two above are basically cards with a controller chip and RAM. Something is still connecting (translating) the RAM to the PCI bus. It's plugged into a slot, but it's not "direct-PCI" in the way you seem to be thinking; a NIC or sound card is no different. Extend your idea, but instead of the "direct-PCI" connection consider an even more direct connection, and that's exactly what's happening with integrated chipset features and point-to-point buses.


 

everman

Lifer
Nov 5, 2002
11,288
1
0
I did read a review of a 4GB RAM drive today which used PCI, as well as an external power source so it doesn't get erased when you shut down. I think it was about 2x as fast as a 15k RPM Ultra 160 SCSI drive... quite fast.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: jhu
it's called the VESA Local Bus, or VLB, and it really sucked btw. and it was super expensive. besides, VLB IS HUGER THAN THE XBOX. also, vlb was proprietary tech, if i recall, and pci is open.

It didn't suck. And it was cheaper than PCI back in the day. Nor was it proprietary. It just wasn't very flexible. The problem with it was that you could only use 3 VLB slots reliably at 33MHz; on a 50MHz bus, you'd be lucky to be able to use one slot.

Yes it did - if you'd been working in system configuration and maintenance back then, you'd know. For the users, it may have been nice.

The most stupid thing about it was that the whole thing was tied to a particular processor's local bus. With the move to the Pentium, out it went. (And every CPU bus from there on was way too fast and fragile to allow for slots. Remember the COAST disaster?)
 