Originally posted by: InlineFive
These days on recent hardware (especially on the Intel side), the I/O and integrated NICs no longer use the PCI bus, so PCI vs. PCI-E is a moot point for performance.
Originally posted by: Madwand1
Originally posted by: InlineFive
These days on recent hardware (especially on the Intel side), the I/O and integrated NICs no longer use the PCI bus, so PCI vs. PCI-E is a moot point for performance.
Intel's recent LOMs are PCIe, and there are others still putting PCI-based solutions on the motherboard. You need to check the details.
The links I posted earlier show the Asus P5N-SLI, for example, a particularly poorly-executed design from this perspective -- they discarded the native nVIDIA networking for an add-on PCI-based one (and the numbers show this).
Originally posted by: cmetz
In the case of others like NVidia... I'll take a PCI-E or PCI controller.
Originally posted by: Gary Key
I will jump in here for a moment and will deserve any bashing that my statements may cause.
Our current network tests are based on a widely utilized standard for reporting maximum throughput on the controllers. However, the test results represent a theoretical throughput number that will not (or cannot) be reached on your typical Gigabit LAN in the home, or in most businesses for that matter. We are switching over to a more real-world test (actual file transfers/downloads) utilizing both small and large file groups in the near future. Even these test scripts are not completely without issue, as the network traffic and latency will be controlled, something that obviously does not occur in the "real" world.
However, these tests, along with our current methods, should give a better overall look at consumer networking hardware and the network capabilities of the motherboards. Our first results will be in the upcoming Vista article, where we will show that file transfer times between the PCI Gigabit controllers on the ASUS P5B-Deluxe board are actually better than between the PCIe Gigabit controllers on the same board. We still have some fleshing out to do on the test scripts before we roll them out in the motherboard/network hardware articles, but right now we see no real difference between the two standards unless you have saturated the PCI bus. That took some creative hardware combinations on the newer boards, which have limited PCI slots now.
We were also surprised by the performance difference between Vista and XP. We still have some engineers up tonight looking at our initial test results. Our coverage will be limited in the Vista article, but we will have further results in the near future and hopefully some answers.
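The "actual file transfers" approach Gary Key describes boils down to timing a bulk transfer and dividing bytes by seconds. A minimal sketch of that kind of test, assuming plain TCP between two machines (the port, payload size, and chunk size are illustrative choices, not AnandTech's actual scripts):

import socket, time

PAYLOAD = 1_000_000_000    # bytes to push; 1 GB is an arbitrary test size
CHUNK = 64 * 1024          # per-call buffer size
PORT = 5001                # arbitrary test port

def receive():
    # Run on the receiving machine: accept one connection, drain it,
    # and report throughput the same way the ftp client does.
    with socket.socket() as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while (data := conn.recv(CHUNK)):
                total += len(data)
            secs = time.time() - start
            print(f"{total} bytes received in {secs:.2f}s "
                  f"({total / secs / 1000:.2f} Kbytes/sec)")

def send(host):
    # Run on the sending machine: push PAYLOAD bytes of zeros.
    buf = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < PAYLOAD:
            s.sendall(buf)
            sent += CHUNK

One caveat with real file copies rather than a synthetic sender like this: disk speed can easily become the bottleneck, which is one way "real world" numbers and controlled scripts end up disagreeing.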
Originally posted by: Gary Key
Our first results will be in the upcoming Vista article where we will show the file transfer times between PCI Gigabit controllers on the ASUS P5B-Deluxe board are actually better than the PCIe Gigabit controllers on the same board.
Just to get a quick idea of what these new features can do, we ran our usual networking benchmark suite on a pair of ASUS P5B-Deluxe motherboards using both the on-board PCI- and PCIe-connected gigabit network controllers (Marvell 88E8001 and 88E8056 respectively).
ftp: 10000000000 bytes received in 97.42Seconds 102646.22Kbytes/sec.
ftp: 10000000000 bytes received in 184.99Seconds 54058.44Kbytes/sec.
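For scale, those two summary lines convert to line-rate terms as follows (the ftp client's "Kbytes" are 1000-byte units; the first line is the PCIe run and the second the PCI run, as Madwand1 labels them below):

for label, secs in (("PCIe", 97.42), ("PCI", 184.99)):
    mbps = 10_000_000_000 * 8 / secs / 1e6   # 10 GB transferred per run
    print(f"{label}: {mbps:.0f} Mbps")
# -> PCIe: 821 Mbps (a healthy fraction of gigabit wire speed)
# -> PCI:  432 Mbps (roughly half)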
Originally posted by: jlazzaro
Originally posted by: JackMDS
There is No Functional difference.
you sure?
I thought the problem with PCI is that it only offers a theoretical throughput of 1.056 Gbps, or 132 MB/s... this is shared among all devices running on it, since they all use one bus.
So, if you have a Gigabit NIC running at 1000 Mbps, you are using about 95% of the available PCI bus bandwidth, basically maxing out the PCI bus and taking usable bandwidth away from the other devices on it.
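Those figures check out: classic PCI is 32 bits wide at 33 MHz, and every device on the bus shares that ceiling. Worked through in a few lines (theoretical peaks only; real PCI efficiency is lower still):

pci_bps = 32 * 33_000_000      # 32-bit bus at 33 MHz = 1,056,000,000 bits/s
pci_MBs = pci_bps / 8 / 1e6    # = 132 MB/s, shared by everything on the bus
gige_bps = 1_000_000_000       # gigabit Ethernet line rate
print(f"PCI bus:  {pci_bps / 1e9:.3f} Gbps = {pci_MBs:.0f} MB/s")
print(f"GigE NIC: {gige_bps / pci_bps:.0%} of the whole bus at full line rate")
# -> PCI bus:  1.056 Gbps = 132 MB/s
# -> GigE NIC: 95% of the whole bus at full line rate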
Originally posted by: Madwand1
Here are some sample numbers for actual file transfers in my setups. Same computers, switch, cables, etc. used. Similar configuration of NIC properties. Same NIC driver version.
Marvell PCIe, no jumbo frames:
ftp: 10000000000 bytes received in 97.42Seconds 102646.22Kbytes/sec.
Marvell PCI, no jumbo frames:
ftp: 10000000000 bytes received in 184.99Seconds 54058.44Kbytes/sec.
This is an extreme case: the Marvell PCI can perform somewhat better (with jumbo frames), and the Marvell PCIe can perform somewhat worse (also with jumbo frames!).
However, this performance is representative for my implementations -- the Marvell PCI coming in around the bottom for throughput, and the Marvell PCIe around the top.
Could the situation be reversed in some other pair of Marvell PCI/PCIe implementations? I suppose so, but I find it hard to believe, given what I consistently see with mine.
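On the jumbo-frames variable: part of the effect is plain wire efficiency, since Ethernet's per-frame overhead is fixed, though the bigger win is usually fewer packets and therefore less per-packet CPU and bus work. A rough estimate of the wire-efficiency part, assuming IPv4/TCP with no header options:

def tcp_goodput_fraction(mtu):
    # Fixed per-frame cost on the wire: preamble 8 + Ethernet header 14 +
    # FCS 4 + interframe gap 12 = 38 bytes, plus 40 bytes of IPv4+TCP headers.
    wire = mtu + 38
    payload = mtu - 40
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_goodput_fraction(mtu):.1%} of line rate is payload")
# -> MTU 1500: 94.9%; MTU 9000: 99.1%

So wire overhead alone cannot explain results like the ones above; host-side and bus-side per-packet costs are the more likely suspects.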
Originally posted by: sieistganzfett
madwand1, your test shows a PCIe GbE NIC running about 2x as fast as a PCI one... what else is on your PCI bus that is taking bandwidth? I would only expect that if something bandwidth-intensive were on the PCI bus, or if there were a flaw somewhere else, such as in the drivers or the chipset. Theoretically, just a NIC on PCI with nothing else would not hit a cap like that, unless there really is that much overhead on PCI, or something weird going on.
Originally posted by: Madwand1
PCIe generally performs better, and sometimes has the additional benefit of newer chips (which can be a "mixed" blessing with cheapening of designs, etc., but let's try to be positive!).
Originally posted by: acaeti
What was CPU load like during this PCI-E vs. PCI test?
Originally posted by: Dutchmaster420
Would it be easier on my PC if I used a PCI-E NIC, or should I just use my onboard one? Or does it not matter?