Using firehose

Aug 22, 2004
107
0
0
My friend pointed me at this site:

http://heroinewarrior.com/firehose.php3

Basically, you can combine multiple NICs in a *nix box to increase the total max bandwidth (i.e. 2x 1000 Mbit cards = a 2000 Mbit connection). Has anyone played around with this yet? I can't find much info on the web when googling, and it doesn't appear to have been discussed in these forums. Basically, I'd like to have this setup on my Linux server and desktop so that I can move large files between them rapidly.
 

bluestrobe

Platinum Member
Aug 15, 2004
2,033
1
0
Load balancing is nothing new. However, I've never heard of firehose and don't know much about Linux. It might be worth a shot.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Never heard of it.

Normally bonding NICs is done at the driver level and requires a switch that supports it.

Also - huge servers have a hard time filling a single gigE NIC, let alone two. It's normally done for redundancy purposes: one NIC goes to one access switch and the other to another access switch.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: pak9rabid
I'd like to have this setup on my Linux server and desktop so that I can move large files between them rapidly.

Interesting, and it probably can do the above. However, it's greatly limited by being just a toolkit with some dedicated applications, not something integrated into the OS / networking stack. This means you can transfer files using the pre-built utility, but can't do much else. Transferring files back and forth is good and all, but a better use of a fast file server is direct access through applications.

I'd also think that getting greater-than-gigabit speeds during file transfers would require very fast drive arrays, so you'd typically see no benefit over basic gigabit. But you don't know all the cases and variables until you try, and this could be a useful tool for testing just that (though probably confirming the negative).

You'd also get to see some other bottlenecks in addition to the drive bottleneck: the CPU perhaps, or the PCI bus (if you're going through standard PCI, it'd be a killer).
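To put rough era-appropriate numbers on the drive bottleneck (my figures, not from the thread): gigabit Ethernet carries at most about 125 MB/s of payload, so a single drive of the time sustaining maybe 50-70 MB/s saturates long before a second NIC would matter.

```shell
# Back-of-envelope link math (assumed figures, not measurements):
# a megabit is 1/8 of a megabyte, so divide the link speed by 8.
echo $(( 1000 / 8 ))   # one gigabit NIC  -> 125 MB/s payload ceiling
echo $(( 2000 / 8 ))   # two bonded NICs  -> 250 MB/s payload ceiling
# A circa-2004 drive sustaining ~60 MB/s can't even fill one link,
# so the drive array, not the NIC count, sets the transfer rate.
```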
 
Aug 22, 2004
107
0
0
From what I read on the site listed above, you can use this with anything that uses TCP/IP. But yeah, I didn't think about the hard drive bottleneck... I'm sure that would be the limiting factor.
 

phisrow

Golden Member
Sep 6, 2004
1,399
0
0
I think that 802.3ad is the official way to do this sort of thing; but it's nice to see an app that'll do it, albeit only for certain applications, over just about any TCP/IP link.
 
Aug 22, 2004
107
0
0
Actually, when I went to recompile a Linux kernel, I noticed "Bonding driver support" (2.6 kernel). This looks like exactly the same thing that firehose does; it's basically 802.3ad support in Linux. For those of you interested, you can access it in the Linux kernel config here:

-> Device Drivers
  -> Network device support
    -> Network device support
      -> Bonding driver support
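For reference, once that option is built, a bond is typically brought up from userspace with the ifenslave tool, roughly like this (the interface names and address below are assumptions, and 802.3ad mode still needs a switch that supports it):

```shell
# Sketch of bringing up a 2.6-era bond (assumes eth0/eth1 and this address)
modprobe bonding mode=802.3ad miimon=100     # load the bonding driver
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1                    # enslave both gigabit NICs
```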
 

robphelan

Diamond Member
Aug 28, 2003
4,084
17
81
Hmmm... I'm a little leery of a site that states:

Pipe gigabytes of uncompressed video, CD-R images, scientific data, tar archives, and porn all with the greatest of ease.

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance wise. One of the ways devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: n0cmonkey
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance wise. One of the ways devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.

Of the specified protocols for trunk, none of them seem to address performance improvement. Round Robin sounds more like load balancing, not the concurrent transfers that would be implied for performance improvement.

Broadcom and others have some utilities that provide such features. Broadcom Advanced Control Suite (2) provides a couple of versions of their own "Smart Load Balancing", and a couple of versions of 802.3ad. 802.3ad requires a supporting switch. I've tried the SLB, but not properly -- with non-Broadcom NICs on one end. In some cases/configurations, it alternates the connections in a round-robin. In others, it seems to do link aggregation as desired, but gives me no performance improvement.

Has anyone had more success than me with this / tried it in a kosher environment?
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Madwand1
Originally posted by: n0cmonkey
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance wise. One of the ways devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.

Of the specified protocols for trunk, none of them seem to address performance improvement. Round Robin sounds more like load balancing, not the concurrent transfers that would be implied for performance improvement.

More connections utilizing more bandwidth. Sounds like a performance improvement to me.

No, it isn't going to speed up the average download, but it's more useful.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me.

A lot of things sound like performance improvement until you try them. The Broadcom SLB for example uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app?)
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Madwand1
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me.

A lot of things sound like performance improvement until you try them. The Broadcom SLB for example uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app?)

I haven't seen anything, but I haven't been paying too close attention. The timing at which this trunk(4) interface was added makes me believe that it was more for failover than anything else. I'm guessing it's being used with CARP and whatnot to aid in automatic failover when stuff happens.

EDIT: What kind of benchmark do you want? If I feel so inclined I may try it out.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: Madwand1
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me.

A lot of things sound like performance improvement until you try them. The Broadcom SLB for example uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app?)

Yes - we see steady 60-80% utilization on 4 NICs using Intel's bonding with Cisco EtherChannel (very similar to link aggregation).

The thing is, the distribution algorithms for "which NIC do I send this packet out of" are based on layer-2 or layer-3 addresses. So a computer-to-computer transfer won't see any improvement, because it will be following one link/path. But for servers that have thousands to tens of thousands of connections, the load will balance out and you'll see higher overall throughput from the server.

-edit- But not many servers are set up that way anymore; they just have two gig cards in them, each going to a separate switch, running in a failover mode instead of bonded/channeled. Large backup servers will have 4 - two bonded to one switch, two bonded to another, for failover.
 