NICs: PCI or PCI-E?


dnoyeb

Senior member
Nov 7, 2001
283
0
0
There shouldn't be a difference, but in practice, if the drivers are not that great, there can be one. You can probably get away with sloppier drivers without being noticed when there is less PCI bus traffic.
 

InlineFive

Diamond Member
Sep 20, 2003
9,599
2
0
I think that for most desktop and client computers it doesn't make a difference. These days on recent hardware (especially on the Intel side), the I/O and integrated NICs no longer use the PCI bus, so PCI vs. PCI-E is a moot point for performance.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: InlineFive
These days on recent hardware (especially on the Intel side), the I/O and integrated NICs no longer use the PCI bus, so PCI vs. PCI-E is a moot point for performance.

Intel's recent LOMs are PCIe, and there are others still putting PCI-based solutions on the motherboard. You need to check the details.

The links I posted earlier show the Asus P5N-SLI for example, a particularly poorly-executed design from this perspective -- they discarded the native nVIDIA networking for an add-on PCI-based one (and the numbers show this).
 

InlineFive

Diamond Member
Sep 20, 2003
9,599
2
0
Originally posted by: Madwand1
Originally posted by: InlineFive
These days on recent hardware (especially on the Intel side), the I/O and integrated NICs no longer use the PCI bus, so PCI vs. PCI-E is a moot point for performance.

Intel's recent LOMs are PCIe, and there are others still putting PCI-based solutions on the motherboard. You need to check the details.

The links I posted earlier show the Asus P5N-SLI for example, a particularly poorly-executed design from this perspective -- they discarded the native nVIDIA networking for an add-on PCI-based one (and the numbers show this).

My bad -- I remember reading about how Intel had created a special bus on the later 8xx chipsets that provided a dedicated 1 Gbps of bandwidth to the NIC. That's why I said that.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
InlineFive, 875 had CSA, which was really a dedicated but stripped down PCI-X bus that could be used with basically only one Intel gigE chip. Short-lived. Other Intel chipsets had 10/100 built into the south bridge and not shared with the PCI bus.

NVidia has their proprietary gigE on the south bridge not shared with anything else, and I think there are others (Via?).

In the Intel case, where they have a relatively decent gigE controller design, I wish they'd just put it in the south bridge and be done with it. In the case of others like NVidia... I'll take a PCI-E or PCI controller.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: cmetz
In the case of others like NVidia... I'll take a PCI-E or PCI controller.

Hehe. I really like my nForce3 nVIDIA implementation. Feature rich, great performance, never any trouble. Of course, needs and tastes vary.
 

Gary Key

Senior member
Sep 23, 2005
866
0
0
I will jump in here for a moment and will deserve any bashing that my statements may cause.

Our current network tests are based on a widely utilized standard for reporting maximum throughput on the controllers. However, the test results represent a theoretical throughput number that will not (and cannot) be reached on your typical Gigabit LAN in the home, or in most businesses for that matter. We are switching over to a more real-world test (actual file transfers/downloads) utilizing both small and large file groups in the near future. Even these test scripts are not completely without issue, as the network traffic and latency will be controlled, something that obviously does not occur in the "real" world.

However, these tests along with our current methods should give a better overall look at consumer networking hardware and network capabilities on the motherboards. Our first results will be in the upcoming Vista article, where we will show that file transfer times with the PCI Gigabit controller on the ASUS P5B-Deluxe board are actually better than with the PCIe Gigabit controller on the same board. We still have some fleshing out to do on the test scripts before we roll them out in the motherboard/network hardware articles, but right now there is no real difference between the two standards unless you have saturated the PCI bus. That took some creative hardware combinations to do on the newer boards, which have limited PCI slots now.
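(For the curious, the idea behind the new scripts is nothing exotic. A rough, simplified sketch in Python of a timed file-copy test -- not our actual test script, with made-up paths standing in for a share on the machine at the other end of the link -- would look something like this:)

# Simplified sketch of a timed file-transfer test -- illustration only, not
# the actual test script. SRC_DIR is a local folder holding the test files
# (a mix of small and large ones); DST_DIR is a hypothetical share on the
# machine at the far end of the gigabit link.
import os
import shutil
import time

SRC_DIR = r"C:\nettest\payload"
DST_DIR = r"\\testbox\transfers\payload"

def timed_copy(src_dir, dst_dir):
    """Copy every file in src_dir to dst_dir and report throughput in MB/s."""
    files = [os.path.join(src_dir, name) for name in os.listdir(src_dir)]
    files = [path for path in files if os.path.isfile(path)]
    total_bytes = sum(os.path.getsize(path) for path in files)
    start = time.time()
    for path in files:
        shutil.copy(path, dst_dir)
    elapsed = time.time() - start
    print(f"{total_bytes} bytes in {elapsed:.2f} s -> "
          f"{total_bytes / elapsed / 1e6:.1f} MB/s")

if __name__ == "__main__":
    timed_copy(SRC_DIR, DST_DIR)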

We were also surprised by the performance difference between Vista and XP. We still have some engineers up tonight looking at our initial test results. Our coverage will be limited in the Vista article but we will have further results in the near future and hopefully some answers about our test results.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: Gary Key
I will jump in here for a moment and will deserve any bashing that my statements may cause.

Our current network tests are based on a widely utilized standard for reporting maximum throughput on the controllers. However, the test results represent a theoretical throughput number that will not (and cannot) be reached on your typical Gigabit LAN in the home, or in most businesses for that matter. We are switching over to a more real-world test (actual file transfers/downloads) utilizing both small and large file groups in the near future. Even these test scripts are not completely without issue, as the network traffic and latency will be controlled, something that obviously does not occur in the "real" world.

However, these tests along with our current methods should give a better overall look at consumer networking hardware and network capabilities on the motherboards. Our first results will be in the upcoming Vista article, where we will show that file transfer times with the PCI Gigabit controller on the ASUS P5B-Deluxe board are actually better than with the PCIe Gigabit controller on the same board. We still have some fleshing out to do on the test scripts before we roll them out in the motherboard/network hardware articles, but right now there is no real difference between the two standards unless you have saturated the PCI bus. That took some creative hardware combinations to do on the newer boards, which have limited PCI slots now.

We were also surprised by the performance difference between Vista and XP. We still have some engineers up tonight looking at our initial test results. Our coverage will be limited in the Vista article but we will have further results in the near future and hopefully some answers about our test results.

Sounds like good stuff. Just keep in mind, IMHO (and probably verified in your tests), that the MS stack and its attachment to the Windows for Workgroups mentality does hamper performance. XP started to use "proper" standards in its TCP implementation; I hope Vista carries on that progress.

If I can help with the protocol/trace analysis please PM me.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Yes, I've seen Vista produce some unexpectedly nice performance from PCI (Intel) without using jumbo frames (and some real clunkers elsewhere, which we excuse due to presumably immature drivers). We should note here that Vista is a bleeding-edge minority and, for many of us, not really ready for prime time due to issues with this or that requirement we have. In other words, Vista alone doesn't define how PCI/PCIe networking behaves in practice. I'd look forward to measurements of PCI vs. PCIe in XP and other OSes as well, and I'd be really surprised if those results somehow overturned all the PCI vs. PCIe results seen to date.

Esp. a Marvell PCI NIC as a gigabit speed demon? Ain't that a surprise! Is this a new chip...?

I'd like to add that I set out a couple of months ago to do Vista to Vista performance measurements, but had to give that up, just because the PCI bus on one of my test machines was itself so slow that I couldn't get good numbers from the system, despite my best efforts, NICs, and querying on several forums including here.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Gary Key, please look at the netperf tool. It's primarily intended for UNIX-like systems and may not use the Windows network stack the way that gives best performance, but it was written by and has been contributed to by people who understand the problem well. A lot of Windows network benchmarks published on review sites are not the work of people who are really networking people, and it shows in the quality of their test. (Yes, benchmarks are a hard problem, they're never reality, and it's a question of what simulates reality best...)
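For example, a plain TCP_STREAM run against a machine already running netserver is a one-liner, and it's easy enough to drive from a small script. A rough sketch (the host address is just a placeholder):

# Rough sketch: run a netperf TCP_STREAM test from Python and keep the raw
# output. Assumes netperf is installed locally and netserver is running on
# the target machine; 192.168.1.10 is a placeholder address.
import subprocess

TARGET = "192.168.1.10"   # placeholder -- the machine running netserver
DURATION = 30             # seconds per run

result = subprocess.run(
    ["netperf", "-H", TARGET, "-t", "TCP_STREAM", "-l", str(DURATION)],
    capture_output=True, text=True, check=True,
)
print(result.stdout)      # throughput is reported in 10^6 bits/sec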

It would also be worthwhile to bring in Linux 2.6.19 and some BSDs to compare. First, it's interesting from an OS comparison point of view, and second, it's interesting to try to separate the hardware and the device driver as variables. That's especially true for Vista, where a lot of vendors are scrambling to put out a driver and will surely refine it later.

(and of course, my personal bias... I'd just like to see review sites publish *good* Linux and OpenBSD numbers, because those are the only OSs I care about performance on anyway)
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Gary Key
Our first results will be in the upcoming Vista article, where we will show that file transfer times with the PCI Gigabit controller on the ASUS P5B-Deluxe board are actually better than with the PCIe Gigabit controller on the same board.

From the Feb. 1 article:

http://www.anandtech.com/systems/showdoc.aspx?i=2917&p=9

Just to get a quick idea of what these new features can do, we ran our usual networking benchmark suite on a pair of ASUS P5B-Deluxe motherboards using both the on-board PCI and PCIe connected gigabit network controllers (Marvell 88E8056 and 88E8001 respectively).

That's backwards -- the 88E8056 is PCIe.

http://www.marvell.com/products/pcconn/...30R43uJmVmj5MDKJ1Wr40obORi58FS80201601

How far does this mistake extend, and did I miss the comparison in the article somewhere?
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
That could just be an error / typo in the article, not in the result.

We're still dealing with relatively early PCI-E implementations, and they may have performance-impacting problems. Remember the first PCI implementations? Intel's NX chipset?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Here are some sample numbers for actual file transfers in my setups. Same computers, switch, cables, etc. used. Similar configuration of NIC properties. Same NIC driver version.

Marvell PCIe, no jumbo frames:

ftp: 10000000000 bytes received in 97.42Seconds 102646.22Kbytes/sec.

Marvell PCI, no jumbo frames:

ftp: 10000000000 bytes received in 184.99Seconds 54058.44Kbytes/sec.
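(Converting those ftp figures into line rates, just for reference -- quick arithmetic on the byte counts and times above:)

# Back-of-the-envelope conversion of the ftp output above into MB/s and Mb/s.
for label, nbytes, secs in [("Marvell PCIe", 10_000_000_000, 97.42),
                            ("Marvell PCI", 10_000_000_000, 184.99)]:
    mbytes_per_s = nbytes / secs / 1e6
    print(f"{label}: {mbytes_per_s:.1f} MB/s ~ {mbytes_per_s * 8:.0f} Mb/s")
# -> Marvell PCIe: 102.6 MB/s ~ 821 Mb/s
# -> Marvell PCI: 54.1 MB/s ~ 432 Mb/s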

This is an extreme case, and the Marvell PCI can perform somewhat better (with jumbo frames), and the Marvell PCIe can perform somewhat worse (also with jumbo frames!).

However, this performance is representative for my implementations -- the Marvell PCI coming in around the bottom for throughput, and the Marvell PCIe around the top.

Could the situation be reversed in some other pair of Marvell PCI/PCIe implementations? I guess so, but I find it hard to believe because of what I always see with my implementation.
 

marulee

Golden Member
Oct 27, 2006
1,299
1
0
Originally posted by: jlazzaro
Originally posted by: JackMDS
There is No Functional difference.

you sure?

I thought the problem with PCI is that it only offers a theoretical throughput of 1.056 Gbps, or 132 MB/s... and this is shared among all devices running on it, since they all share one bus.

So, if you have a Gigabit NIC running at 1000 Mbps, you are using about 95% of the available PCI bus bandwidth, basically maxing out the PCI bus and taking usable bandwidth away from the other devices on it.
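(Rough numbers for anyone who wants to check the math -- a quick Python sketch:)

# Quick check of the shared-bus math: 32-bit / 33 MHz PCI vs. a gigabit NIC.
bus_width_bits = 32
bus_clock_mhz = 33
pci_mbps = bus_width_bits * bus_clock_mhz   # ~1056 Mb/s theoretical
pci_mbytes_per_s = pci_mbps / 8             # ~132 MB/s
gige_mbps = 1000                            # line rate of a gigabit NIC
print(f"PCI bus: {pci_mbps} Mb/s ({pci_mbytes_per_s:.0f} MB/s); "
      f"GigE line rate uses ~{100 * gige_mbps / pci_mbps:.0f}% of it")
# -> PCI bus: 1056 Mb/s (132 MB/s); GigE line rate uses ~95% of it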

 

sieistganzfett

Senior member
Mar 2, 2005
588
0
0
Madwand1, your test shows a PCIe GbE NIC is about 2x as fast as a PCI one... What else is on your PCI bus that is taking bandwidth? I would only expect that if something bandwidth-intensive were on PCI, or if there were a flaw somewhere else, like in the chipset drivers. Theoretically, just a NIC on PCI with nothing else would not hit a cap like that, unless there really is that much overhead on PCI or something weird going on.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Madwand1, is your PCI-E NIC a Yukon or a Yukon2? They did some architecture improvements in the Y2 I believe, and I would hope that means much better performance. The Yukon's performance was disappointing, doubly so because its ancestor (the XMAC) was a good performer.
 

acaeti

Member
Mar 7, 2006
103
0
0
Originally posted by: Madwand1
Here are some sample numbers for actual file transfers in my setups. Same computers, switch, cables, etc. used. Similar configuration of NIC properties. Same NIC driver version.

Marvell PCIe, no jumbo frames:

ftp: 10000000000 bytes received in 97.42Seconds 102646.22Kbytes/sec.

Marvell PCI, no jumbo frames:

ftp: 10000000000 bytes received in 184.99Seconds 54058.44Kbytes/sec.

This is an extreme case, and the Marvell PCI can perform somewhat better (with jumbo frames), and the Marvell PCIe can perform somewhat worse (also with jumbo frames!).

However, this performance is representative for my implementations -- the Marvell PCI coming in around the bottom for throughput, and the Marvell PCIe around the top.

Could the situation be reversed in some other pair of Marvell PCI/PCIe implementations? I guess so, but I find it hard to believe because of what I always see with my implementation.

What was CPU load like during this PCI-E vs. PCI test?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: sieistganzfett
Madwand1, your test shows a PCIe GbE NIC is about 2x as fast as a PCI one... What else is on your PCI bus that is taking bandwidth? I would only expect that if something bandwidth-intensive were on PCI, or if there were a flaw somewhere else, like in the chipset drivers. Theoretically, just a NIC on PCI with nothing else would not hit a cap like that, unless there really is that much overhead on PCI or something weird going on.

My PCI bus is fine; different PCI implementations perform better on it, even that Marvell PCI with a different config, as I'd mentioned. This example wasn't a neat and clean PCI bus vs. PCIe comparison -- it sort of was, but it was really a specific Marvell PCI vs. Marvell PCIe comparison. I hinted about this earlier:

Originally posted by: Madwand1
PCIe generally performs better, and sometimes has the additional benefit of newer chips (which can be a "mixed" blessing with cheapening of designs, etc., but let's try to be positive!).

From what I see, Marvell PCI is limited to the 88E8001 chip -- which is Yukon I and, as we see in my example, doesn't perform that great in general. My Marvell PCIe is based on the 88E8052, which is Yukon II, and does perform well. So what? Well, as I said above, with PCIe you can often get newer chips, and some benefit from that... If you decide on Marvell and PCI, IMO you'll probably get less-than-great performance (unless you get lucky and find some older design, perhaps). If you decide on Marvell and PCIe, your luck holds.

This is why I have trouble with the claim of another Marvell 88E8001 PCI chip out-performing a Marvell 88E8056 PCIe implementation. Also note my PCIe numbers -- they don't leave much room for improvement. Even if my PCI results were low due to a flawed PCI bus in my case, it'd be very hard for another gigabit implementation to significantly out-perform those PCIe numbers in practice.

A cleaner PCI vs. PCIe comparison might be done using Broadcom or Intel -- they probably have more similar chips with different interfaces. I don't have an Intel PCIe or a Broadcom PCI at present, so I can't do them, but my nVIDIA native, Broadcom PCIe, and Marvell PCIe numbers are all somewhat better than the best PCI numbers I've got -- from an Intel server NIC, which is also quite a bit better than the Realtek and Marvell PCI.

So what? These are just tendencies I've observed, and when asked about PCI vs. PCIe, that's what I report. The point about crowding the PCI bus is even more important; crowding it is a common mistake when implementing PCI-based solutions, especially with older hardware. With PCIe-capable hardware, there's generally no need to risk that mistake, and at the higher end there are generally benefits to be seen.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: acaeti
What was CPU load like during this PCI-E vs. PCI test?

Are you asking if there was any other significant load on the system during these tests? Was the CPU utilization unfairly in favour of PCIe, perhaps? No, there was no difference in system load between the two tests, and I don't put anything significant on the system during the tests, because high-speed gigabit does take its toll on the CPU and system.

Load was fairly high during the test, but the Marvell PCI chip doesn't have high CPU utilization itself, and benefits from its lower data rate -- if you transfer slower, your CPU utilization will be lower.

I don't think specific numbers would be of much use -- they're very CPU-, architecture-, and even data-rate-dependent, so they can also be misleading. I also just didn't record the numbers in these runs. But very roughly, I used an AMD X2 3800+ at stock speed and hit somewhere around 60-70% at times. (Salt, please.)
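(If anyone wants actual numbers rather than eyeballing Task Manager, a rough sampling loop with the third-party psutil module, run alongside the transfer, would do -- the sample count here is arbitrary:)

# Rough sketch: sample system-wide CPU utilization once a second while a
# transfer is running, then report the average and the peak. Requires the
# third-party psutil module; the number of samples is arbitrary.
import psutil

SAMPLES = 60  # roughly one minute of sampling while the transfer runs

readings = [psutil.cpu_percent(interval=1) for _ in range(SAMPLES)]
print(f"avg {sum(readings) / len(readings):.1f}%  peak {max(readings):.1f}%")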
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Dutchmaster420
Would it be easier on my PC if I used a PCI-E NIC, or should I just use my onboard one? Or does it not matter?

It depends... Which motherboard / on-board NIC? On-board could already be PCIe, or PCI, or native.

If you're using a native on-board NIC with an on-board storage controller for a high-speed RAID setup, and are transferring large amounts of data over your LAN to another high-speed setup, then there's a chance that going to an add-on PCIe NIC can help. Or if you're doing better than average speeds, and are being held back by a slower PCI-based on-board NIC. Or perhaps if you can't find decent drivers for your OS for the on-board NIC.

But for the most part -- most users, configurations, and average-speed transfers -- it probably wouldn't make a material difference. That's the general rule; the cases above are the exceptions where it can make a difference.
 