I've tried to find information on this for a long time, and thought I'd ask here (sorry if it's been discussed, as I can't find where).
100Mbit network, with a 100Mbit switch:
100Mbit = 12.5 MBytes/sec (theory)
1000Mbit/Gigabit:
1000Mbit = 125 MBytes/sec (theory)
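For reference, the bit-to-byte conversion above can be sketched like this (the ~94% efficiency figure is just my own rough assumption for Ethernet frame + TCP/IP header overhead, not an exact number):

```python
# Convert a link speed in Mbit/s to MB/s, and estimate usable
# throughput after an assumed protocol-overhead factor.

def theoretical_mb_per_sec(mbit_per_sec):
    """Raw line rate: divide by 8 to go from bits to bytes."""
    return mbit_per_sec / 8.0

def usable_mb_per_sec(mbit_per_sec, efficiency=0.94):
    """Apply an assumed efficiency factor for frame/TCP/IP headers."""
    return theoretical_mb_per_sec(mbit_per_sec) * efficiency

print(theoretical_mb_per_sec(100))   # 12.5 MB/s raw for 100Mbit
print(theoretical_mb_per_sec(1000))  # 125.0 MB/s raw for 1000Mbit
```

So even in theory you'd expect a bit under the raw 12.5/125 MB/s once headers are counted.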
I have NEVER been able to get these results or even close to them.
On the 100Mbit I've gotten Windows to report about 50% utilization, and through programs like bpftp a sustained 5-8 MB/sec.
On the 1Gbit I've gotten only a max of 10% utilization, or 10-12 MB/sec.
Why is this? I know there is overhead, but that shouldn't matter this much. Also, for the 1Gbit, I'm transferring between machines on the same switch, a gigabit one, and each computer has striped RAID drives that benchmark at 100 MBytes/sec on average, so no bottleneck there either.
Does some network guru know why this happens, and ways to increase performance?
Thanks