10GBASE-T

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
One of my latest pet peeves with vendor equipment: we are starting to get ready to upgrade to 10Gb inside our data center. To keep costs down I am trying to use 10GBASE-T and CAT6A cable where possible. It just seems like every other vendor I talk to doesn't support 10GBASE-T. They want me to use SFP+, SFP+ twinax, or CX4 to connect to their equipment. When I tell them I want to use 10GBASE-T and just plug a CAT6A patch cable into the back of their equipment, they look at me like I am crazy.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
10gig is still big on the SFP slots. You can buy twisted pair SFPs and should be able to slide them right in.
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
10gig is still big on the SFP slots. You can buy twisted pair SFPs and should be able to slide them right in.

Yeah, but my costs go up going that direction. I was looking at dropping a Nexus 2232TM at the top of the rack, but now I might need to go the 2232PP route. Just kind of annoying, because inside the data center 10GBASE-T should work fine for most applications.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Brovane, this is exactly like how 1000BaseT adoption went. Right now, it takes a lot of DSP horsepower in the PHY to make the link actually work. That DSP horsepower costs significant money, requires significant power, and increases your latency significantly. Meanwhile, your cable plant might or might not really comply with the physical requirements, and the relatively immature PHY implementation doesn't give you as much extra headroom there as you might get from a mature implementation.

Much higher costs, much higher risk of problems. Of course vendors aren't pushing it yet.

CX4 is pretty dead at this point. Many vendors have proprietary SFP+ active link cables, and that's okay if it saves a lot of money, but remember it IS proprietary. Standardize on SR. Just put in the 10G SR-grade fiber and be happy. It works well. It's what people with real data centers use.
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
Brovane, this is exactly like how 1000BaseT adoption went. Right now, it takes a lot of DSP horsepower in the PHY to make the link actually work. That DSP horsepower costs significant money, requires significant power, and increases your latency significantly. Meanwhile, your cable plant might or might not really comply with the physical requirements, and the relatively immature PHY implementation doesn't give you as much extra headroom there as you might get from a mature implementation.

Much higher costs, much higher risk of problems. Of course vendors aren't pushing it yet.

CX4 is pretty dead at this point. Many vendors have proprietary SFP+ active link cables, and that's okay if it saves a lot of money, but remember it IS proprietary. Standardize on SR. Just put in the 10G SR-grade fiber and be happy. It works well. It's what people with real data centers use.

I understand what you are saying. It is just kind of annoying. I can drop in a top-of-rack Cisco 2232TM, which would give me 32 10GBASE-T ports at 1/3 the cost of going the 10G SR route and messing with SFP+ at both ends.
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
Brovane, do the opex math.

We have parts of our data center already wired with CAT6A. I don't see how deploying a top-of-rack Cisco 2232TM is going to result in higher opex costs over using 10GBASE-SR with SFP+.
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
I have heard great things about Arista's 10Gb stuff. My contact told me that lots of ISPs and IXs have been deploying Arista 10Gb gear because others have failed to meet their bandwidth requirements.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
If you can't afford the SFP then you can't afford 10G; it isn't cheap. The costs will come down just as they did with 1G, but for the time being either pay the piper or stay with 1G.
 

alkemyst

No Lifer
Feb 13, 2001
83,967
19
81
Yeah, but my costs go up going that direction. I was looking at dropping a Nexus 2232TM at the top of the rack, but now I might need to go the 2232PP route. Just kind of annoying, because inside the data center 10GBASE-T should work fine for most applications.

10GBase-T is coming soon. I believe the blades are on the roadmap for 2013.
 

m1ldslide1

Platinum Member
Feb 20, 2006
2,321
0
0
You can always use the 2232PP with Twinax cables instead of SR optics. That will cut the price down to about 1/10th per access port. There is a distance limitation of 10 meters though, so keep that in mind when ordering...
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
If you can't afford the SFP then you can't afford 10G; it isn't cheap. The costs will come down just as they did with 1G, but for the time being either pay the piper or stay with 1G.

So you are saying a top-of-rack switch like a Nexus 2232TM wouldn't work?
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
You can always use the 2232PP with Twinax cables instead of SR optics. That will cut the price down to about 1/10th per access port. There is a distance limitation of 10 meters though, so keep that in mind when ordering...

I did consider that also. It looks like we are going to order a couple of these. The price point is a little less than the 2232TM. I like that you can use a regular SFP if you just need 1Gb and can upgrade to 10Gb with Twinax or SFP+ when needed. Nice to have that backward compatibility.
 

Chapbass

Diamond Member
May 31, 2004
3,148
89
91
Not sure how you feel about Dell, but they have a 10gbase-t switch available for ToR: http://configure.us.dell.com/dellst...l_id=powerconnect-8024&c=us&l=en&s=biz&cs=555

I've also used Extreme Networks in the past with success: They have a 10gbase-t switch as well: http://www.extremenetworks.com/products/summit-x650.aspx#tab2n

Yeah, we use Extreme Networks for some things where I work. I'm not a networking guy myself, but the guys I've talked to seem to be happy with them. We're not running 10Gb, but as a company they seem okay.
 

alkemyst

No Lifer
Feb 13, 2001
83,967
19
81
I did consider that also. It looks like we are going to order a couple of these. The price point is a little less than the 2232TM. I like that you can use a regular SFP if you just need 1Gb and can upgrade to 10Gb with Twinax or SFP+ when needed. Nice to have that backward compatibility.

As far as I know the 2232 is just a fabric extender and not autonomous. You will need at least a Nexus 5k. Doing layer 3 adds extra cost as well.
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
As far as I know the 2232 is just a fabric extender and not autonomous. You will need at least a Nexus 5k. Doing layer 3 adds extra cost as well.

We are replacing our 6500 with a Nexus 7k this year and we were planning on having the 2232 be an extender of the 7k. It is fairly cool because the 2232 will just show up as a module on the 7k.
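
From the config guides it looks like tying the 2232 to the 7k parent is only a handful of lines; something roughly like this (typed from memory rather than off a live box, so treat it as a sketch and double-check the exact syntax and supported line cards for your NX-OS release):

install feature-set fex
feature-set fex
! define the FEX and pick a chassis number for it
fex 101
  description Rack-12-2232TM
! the uplinks going to the 2232 become fabric ports
interface Ethernet3/1
  switchport mode fex-fabric
  fex associate 101
  no shutdown

After that the 2232's host ports should show up on the parent as Ethernet101/1/1 through 101/1/32 and get configured like any other interface.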
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
Not sure how you feel about Dell, but they have a 10gbase-t switch available for ToR: http://configure.us.dell.com/dellst...l_id=powerconnect-8024&c=us&l=en&s=biz&cs=555

I've also used Extreme Networks in the past with success: They have a 10gbase-t switch as well: http://www.extremenetworks.com/products/summit-x650.aspx#tab2n

With our network team, Dell PowerConnect switches = suck. They have used them in the past and didn't have much success. I am trying to get the network team to try out a demo of a new Dell/Force10 switch; however, I am having difficulty getting traction on that one. I would love to test out some of these in one of our blade chassis: http://www.dell.com/us/enterprise/p/force10-mxl-blade/pd
It would be really nice to have internal 10Gb without using pass-through for everything, which is what we currently do.
 

alkemyst

No Lifer
Feb 13, 2001
83,967
19
81
We are replacing our 6500 with a Nexus 7k this year and we were planning on having the 2232 be an extender of the 7k. It is fairly cool because the 2232 will just show up as a module on the 7k.

Nexus is a big buzzword this year; however, it's really not a replacement for the 6500. If you are running a data center the Nexus makes sense. Outside of that, the 6500 has features not available in the Nexus.

We had a best-practices class a couple of weeks ago that spoke on this.

The Nexus architecture is really cool. The chassis are extremely heavy though, and unlike a 6500, where you can 'pull out' some of the weight, I think the 7010 chassis is 350+ lbs empty.
 

Brovane

Diamond Member
Dec 18, 2001
5,480
1,672
136
Nexus is a big buzzword this year; however, it's really not a replacement for the 6500. If you are running a data center the Nexus makes sense. Outside of that, the 6500 has features not available in the Nexus.

We had a best-practices class a couple of weeks ago that spoke on this.

The Nexus architecture is really cool. The chassis are extremely heavy though, and unlike a 6500, where you can 'pull out' some of the weight, I think the 7010 chassis is 350+ lbs empty.

We were not using any features on the 6500 that were not available on the 7010. Basically our two 6500s do L3 and L2 with no special modules, etc. installed. We discussed this at length with our Cisco sales engineer because we were debating between a 6509-E, a 6513-E, or the 7010. We also debated layering in new 6500s with Nexus 5ks as distribution switches in the data center, with the 6500 staying as the core. At some point one of our senior network guys sat down with the Cisco SE and went over our current 6500 config to make sure all the features we currently use are supported in the Nexus chassis. After all was said and done, the Nexus was about 20% more expensive than the 6500; however, it offered an incredible amount of bandwidth and growth potential. The 6500 has some good modules, but we have never used any of them and have no plans to.

The current chassis we are replacing were first installed in 1999. The Sup engines and cards have been updated over time. We have very much gotten our money's worth out of the 6500. Management also felt that since the Nexus is a much newer platform than the 6500, it will have more growth potential down the road than the 6500, which is literally from the last century. Not saying I completely agree with their reasoning. Personally I would have liked to see the Nexus 5k at the distribution layer in the DC and a newer 6500-E remain at the core. However, that combination pushed the price beyond just doing the Nexus 7k as both core and distribution, so during budgets last year the decision was made to go Nexus 7k.
 

alkemyst

No Lifer
Feb 13, 2001
83,967
19
81
We were not using any features on the 6500 that were not available on the 7010. Basically our two 6500s do L3 and L2 with no special modules, etc. installed. We discussed this at length with our Cisco sales engineer because we were debating between a 6509-E, a 6513-E, or the 7010. We also debated layering in new 6500s with Nexus 5ks as distribution switches in the data center, with the 6500 staying as the core. At some point one of our senior network guys sat down with the Cisco SE and went over our current 6500 config to make sure all the features we currently use are supported in the Nexus chassis. After all was said and done, the Nexus was about 20% more expensive than the 6500; however, it offered an incredible amount of bandwidth and growth potential. The 6500 has some good modules, but we have never used any of them and have no plans to.

The current chassis we are replacing were first installed in 1999. The Sup engines and cards have been updated over time. We have very much gotten our money's worth out of the 6500. Management also felt that since the Nexus is a much newer platform than the 6500, it will have more growth potential down the road than the 6500, which is literally from the last century. Not saying I completely agree with their reasoning. Personally I would have liked to see the Nexus 5k at the distribution layer in the DC and a newer 6500-E remain at the core. However, that combination pushed the price beyond just doing the Nexus 7k as both core and distribution, so during budgets last year the decision was made to go Nexus 7k.

Since you are talking data center, the Nexus is probably the better match. However, saying the 6500 is from the last century so the Nexus will have better growth potential is totally wrong. Both will be current and improving lineups for a long time. Cisco is not decommissioning the 6500 chassis at all, and will continue to improve it as well. The E-series is majorly different from the original 6500.

With all the non-E 6500s going out of support NOW, you will see many people thinking going to Nexus is the smart move. In the data center it can be; outside of it, it is usually a bad call. There are feature sets for the 6500 that will never be offered in the Nexus lineup and that are critical for large businesses to get connectivity out to their end users.

Also I don't agree with mixing Nexus/non-Nexus unless budget dictates that as a stepping stone.

In the data center: Nexus 7k->5k->2k etc.

In the business: Cisco 6500->3750->2960 etc

Also add in the fact that Nexus is a very different animal as far as configuration goes. Re-tooling your people/staff to support the new platform is another heavy cost.

We are the number-two Cisco partner and we sell both platforms every day, so I have no reservations about recommending either, but neither is a one-size-fits-all platform.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
We are replacing our 6500 with a Nexus 7k this year and we were planning on having the 2232 be an extender of the 7k. It is fairly cool because the 2232 will just show up as a module on the 7k.

Watch the rack depth on Nexus. They're DEEP.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Thanks for the heads up.

We installed brand new racks for this deployment so we should be good.

OK... I say again: watch the depth on the 7k. I had one gig where I put in 7018s (WHY, Cisco? WHY side-to-side airflow?) and had to redo everything because they simply didn't have the space for them. We're talking 20+ full rows.

There was no way they could fit because of the depth. We had to rip out some racks and make space specifically for the 7k, reroute a lot of cable, etc. That's where the whole top-of-rack/end-of-row approach REALLY comes in handy. Core cabling is easy if you do that.

Back to your OP: you use high-bandwidth, 50-micron, laser-optimized (OM3, OM4) MM fiber in the data center for 10 gig, and twinax for short runs if you can get away with it, i.e. host connections.
 