anyone else work at a data center?


daveshel

Diamond Member
Oct 10, 1999
5,453
2
81
Originally posted by: foghorn67
Run away from there.













Now.

I spent the last 3+ years working in a local government data center and it was hell. Despite oppressive management and crippling change management procedures, everything that could go wrong did. Nobody ever knew what anybody else was doing. We kept undertaking poorly planned domain migrations in the name of consolidation and we routinely broke all kinds of stuff. Our CIO practiced management by temper tantrum and was more concerned about meeting arbitrary deadlines than listening to his experts' technical concerns. I was the Exchange admin, and I kept trying to tell everybody that our information stores were getting way too big, yet the CIO kept caving to pressure from attorneys who said we needed to keep everything forever. And when things broke, staff time was considered an unlimited resource.

After years of arguing myself blue in the face, getting written up for insubordination and having a mental health crisis, I quit and went to work for a smaller department that was exempt from the consolidation mandate. Now work is fun again and I have a life.
 

yoda291

Diamond Member
Aug 11, 2001
5,079
0
0
Originally posted by: loki8481
Originally posted by: spidey07
You need to fire everybody including the data center manager if they are hand terminating cables. Extremely bad practice, and now you see why.

how else would you terminate cables? I've been making them by hand for 5+ years and never had an issue.

If you do things "RIGHT", this kind of stuff never happens. And why do you only have a single network connection anyway? All servers are to have 4 connections - one primary, one secondary, one for out of band management and one for remote support board.

we do what our clients pay for... if they're not paying for fail-over ethernet connections, teaming, etc, it's not hooked up.

I can't imagine any DC with 3000+ servers giving 4+ connections to each one.

You buy them pre-crimped with the jacket that protects that stupid plastic snappy tab. You never use a hand-crimped cable for a production link, especially inter-switch or upstream connections.
Reason: network gear capable of pushing a cable to its limits is cheap and accessible now. It's almost impossible to hand-crimp cables that will pass Cat5e (let alone Cat6) standards in anything resembling usable quantities, or fast enough to justify how long it takes. Sure, your cable technically works and will probably do 100Mbps+ just fine. Start using it on saturated gig links and you'll see issues. Add to that the time it takes to diagnose that a cable is faulty at the network level, and it's pretty silly not to buy plant-certified cable.
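
On that last point: if you suspect a marginal cable, the NIC error counters will usually rat it out long before a full network-level diagnosis does. A rough sketch of that spot check in Python (Linux only, using the standard /sys/class/net statistics files; the interface name and sample interval are placeholders):

#!/usr/bin/env python3
"""Spot-check NIC error counters to catch a marginal cable early.

Linux only: reads the standard /sys/class/net/<iface>/statistics files.
The interface name and sample interval are placeholders -- tune both.
"""
import sys
import time
from pathlib import Path

IFACE = "eth0"                                   # placeholder interface name
COUNTERS = ("rx_errors", "rx_crc_errors", "tx_errors")
INTERVAL = 10                                    # seconds between samples

def read_counters(iface):
    base = Path("/sys/class/net") / iface / "statistics"
    return {name: int((base / name).read_text()) for name in COUNTERS}

before = read_counters(IFACE)
time.sleep(INTERVAL)
after = read_counters(IFACE)

# Any counter that grew during the sample window points at the physical layer.
grew = {name: after[name] - before[name]
        for name in COUNTERS if after[name] > before[name]}
if grew:
    print(f"{IFACE}: errors grew over {INTERVAL}s -- suspect the cable: {grew}")
    sys.exit(1)
print(f"{IFACE}: no new errors in {INTERVAL}s")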

4 connections is excessive for most shops. If your infrastructure is architected and provisioned so that it's fault tolerant on both availability and system load, you almost never need more than 2. If you're over-provisioned with regard to load and have a good reaping policy for failures, 1 connection will work as well.
If you actually REQUIRE 4 connections to every single server, you've probably done something wrong.
 

Juddog

Diamond Member
Dec 11, 2006
7,852
6
81
Our data center is set up pretty well; the last time it failed was due to a power outage in the area that lasted over 8 hours and drained the UPSes. Now we have a redundant generator to back up the UPSes.
 

Juddog

Diamond Member
Dec 11, 2006
7,852
6
81
Originally posted by: yoda291
Originally posted by: loki8481
Originally posted by: spidey07
You need to fire everybody including the data center manager if they are hand terminating cables. Extremely bad practice, and now you see why.

how else would you terminate cables? I've been making them by hand for 5+ years and never had an issue.

If you do things "RIGHT", this kind of stuff never happens. And why do you only have a single network connection anyway? All servers are to have 4 connections - one primary, one secondary, one for out of band management and one for remote support board.

we do what our clients pay for... if they're not paying for fail-over ethernet connections, teaming, etc, it's not hooked up.

I can't imagine any DC with 3000+ servers giving 4+ connections to each one.

You buy them pre-crimped with the jacket that protects that stupid plastic snappy tab. You never use a hand-crimped cable for a production link, especially inter-switch or upstream connections.
Reason: network gear capable of pushing a cable to its limits is cheap and accessible now. It's almost impossible to hand-crimp cables that will pass Cat5e (let alone Cat6) standards in anything resembling usable quantities, or fast enough to justify how long it takes. Sure, your cable technically works and will probably do 100Mbps+ just fine. Start using it on saturated gig links and you'll see issues. Add to that the time it takes to diagnose that a cable is faulty at the network level, and it's pretty silly not to buy plant-certified cable.

4 connections is excessive for most shops. If your infrastructure is architected and provisioned so that it's fault tolerant on both availability and system load, you almost never need more than 2. If you're over-provisioned with regard to load and have a good reaping policy for failures, 1 connection will work as well.
If you actually REQUIRE 4 connections to every single server, you've probably done something wrong.

Speaking of CAT, has anybody switched over to CAT6 yet?
 

imported_Imp

Diamond Member
Dec 20, 2005
9,148
0
0
I almost took down our division servers once by tripping on the cables while looking for a serial number... Ok, not really, but I did have to go back there and it was ugly.
 

ultimatebob

Lifer
Jul 1, 2001
25,135
2,445
126
Originally posted by: loki8481
Originally posted by: spidey07
You need to fire everybody including the data center manager if they are hand terminating cables. Extremely bad practice, and now you see why.

how else would you terminate cables? I've been making them by hand for 5+ years and never had an issue.

If you do things "RIGHT", this kind of stuff never happens. And why do you only have a single network connection anyway? All servers are to have 4 connections - one primary, one secondary, one for out of band management and one for remote support board.

we do what our clients pay for... if they're not paying for fail-over ethernet connections, teaming, etc, it's not hooked up.

I can't imagine any DC with 3000+ servers giving 4+ connections to each one.

Yeah... 4+ connections sounds like extreme overkill unless you're running something with a lot of network I/O like a file server or a big VMware ESX Server. We normally use 3... one primary, one secondary, and one for iLO remote management at remote sites.
 

xSauronx

Lifer
Jul 14, 2000
19,586
4
81

I wish I had pics from when I worked at a wISP in Kansas. Their "server room" when I got there was an absolute nightmare.

Fortunately they soon moved to another floor, and the new server room wasn't frightening when they were done with it.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: Juddog
Speaking of CAT, has anybody switched over to CAT6 yet?

Any new cabling I've specced out for the last few years has been 10-gig Cat6a, and all new cabling should be the same; 10-gigabit copper is going to happen really fast. I haven't done anything with Category 5/5e for at least 5 years - it's dead, Jim.

Regarding the 4 network cards: 1 to the primary switch, 1 to the secondary switch. If you have virtual switching support you can team/etherchannel/LAG them. Then you have a physically separate and also redundant management network - the management NIC (for admin purposes) goes to one set, the iLO/RIM/whatever goes to the other. The reason for the management NIC/network is security and monitoring/logging. In large data centers it's not unheard of to capture all traffic at key points, and the management network is a prime place. Add in a clustering/heartbeat NIC and you have 5.
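
For the primary/secondary pair, the teaming side is only a few commands on a Linux box. A rough sketch using the stock iproute2 `ip` tool, driven from Python (interface names are placeholders, needs root and the bonding driver; active-backup needs no switch-side LAG, unlike etherchannel/LACP):

#!/usr/bin/env python3
"""Team two NICs into an active-backup bond with Linux iproute2.

A rough sketch of the primary/secondary-switch pairing described above.
Interface names are placeholders; needs root and the bonding driver.
"""
import subprocess

PRIMARY, SECONDARY, BOND = "eth0", "eth1", "bond0"   # placeholder names

def ip(*args):
    # Thin wrapper so a failed command stops the script instead of
    # leaving the box half-configured.
    subprocess.run(["ip", *args], check=True)

# active-backup fails over on link loss and needs no switch-side LAG;
# miimon 100 polls link state every 100 ms.
ip("link", "add", BOND, "type", "bond", "mode", "active-backup", "miimon", "100")

# Interfaces must be down before they can be enslaved to the bond.
for iface in (PRIMARY, SECONDARY):
    ip("link", "set", iface, "down")
    ip("link", "set", iface, "master", BOND)

ip("link", "set", BOND, "up")
print(f"{BOND} up: {PRIMARY}/{SECONDARY} teamed, one active at a time")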
 

nageov3t

Lifer
Feb 18, 2004
42,816
83
91
Originally posted by: yoda291
Originally posted by: loki8481
Originally posted by: spidey07
You need to fire everybody including the data center manager if they are hand terminating cables. Extremely bad practice, and now you see why.

how else would you terminate cables? I've been making them by hand for 5+ years and never had an issue.

If you do things "RIGHT", this kind of stuff never happens. And why do you only have a single network connection anyway? All servers are to have 4 connections - one primary, one secondary, one for out of band management and one for remote support board.

we do what our clients pay for... if they're not paying for fail-over ethernet connections, teaming, etc, it's not hooked up.

I can't imagine any DC with 3000+ servers giving 4+ connections to each one.

You buy them pre-crimped with the jacket that protects that stupid plastic snappy tab. You never use a hand-crimped cable for a production link, especially inter-switch or upstream connections.
Reason: network gear capable of pushing a cable to its limits is cheap and accessible now. It's almost impossible to hand-crimp cables that will pass Cat5e (let alone Cat6) standards in anything resembling usable quantities, or fast enough to justify how long it takes. Sure, your cable technically works and will probably do 100Mbps+ just fine. Start using it on saturated gig links and you'll see issues. Add to that the time it takes to diagnose that a cable is faulty at the network level, and it's pretty silly not to buy plant-certified cable.

4 connections is excessive for most shops. If your infrastructure is architected and provisioned so that it's fault tolerant on both availability and system load, you almost never need more than 2. If you're over-provisioned with regard to load and have a good reaping policy for failures, 1 connection will work as well.
If you actually REQUIRE 4 connections to every single server, you've probably done something wrong.

Ah, yeah. Pre-made cables for everything would be ideal and we use them whenever possible, but sometimes you have to work with the army you have, not the army you'd like. I collect my paycheck and do what I'm told.

If we're out of 7' cables, I'd much rather have a tech make one than use a 20' factory premade and stuff ten feet of slack in the rack.

Also, the trunk cables we use only support 10/100... anything requiring 100/1000 needs a cable run directly to the gig blades on the Cisco 6513s we use, which could mean a cable running on the ladders over 19-20 racks (20 racks/row, 2x 6513s in the first rack of each row)... not to mention the random pain-in-the-ass client requests ("please connect my two servers on opposite ends of the building with a crossover connection")
 

Brovane

Diamond Member
Dec 18, 2001
5,489
1,679
136
For cabling where I work, I had us order about 12 of those rolling plastic file-cabinet drawers from Target; they each have two drawers. In the drawers I stock all the cable lengths from 3' to 15' in 1-foot increments, so I almost always have the correct cable length on hand, and when we start running low I order more. I also have a drawer for 20-foot and another for 25-foot and above, and I use the rest of the drawers to stock KVM dongles, fiber, etc. - all the stuff that usually just sits in a cabinet in a pile.

It seems to work well for us, and the IT Director is very happy that the data center is usually fairly clean. I also like it when I'm working: if I'm running several cables, I can just roll a couple of the cabinets over with the lengths I expect to need instead of walking back and forth, plus the tops of the cabinets make a handy work space.
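
The restock check is simple enough to script, too. A toy Python sketch of the same scheme (the stock counts and the reorder threshold are invented for illustration):

#!/usr/bin/env python3
"""Toy restock check for the cable drawers described above.

Stock counts and the reorder threshold are invented for illustration.
"""
# Drawer stock: cable length in feet -> patch cables on hand.
stock = {length: 6 for length in range(3, 16)}   # 3' through 15' in 1' steps
stock.update({20: 6, 25: 4})                     # the long-cable drawers

REORDER_AT = 5   # flag a length once it drops below this count

def use_cable(length_ft):
    """Pull one cable from its drawer; flag the length if it runs low."""
    if stock.get(length_ft, 0) == 0:
        raise LookupError(f"no {length_ft}' cables stocked")
    stock[length_ft] -= 1
    if stock[length_ft] < REORDER_AT:
        print(f"reorder {length_ft}' cables -- only {stock[length_ft]} left")

# Example: one rack build that burns through the short lengths.
for length in (3, 3, 5, 7, 7, 7, 10):
    use_cable(length)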
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
I've been in one before. I like the strong a/c blowing up from the vents in the floor. Definitely not a place to go in a dress. :laugh:
 

Mr Pickles

Diamond Member
Feb 24, 2006
4,104
1
0
Originally posted by: Steve

No, I was a helpdesk jockey before and after they put that in. I didn't stay much longer.

You know, it really isn't that big of a mess. The cables aren't so screwed that you can't follow a line from one point to another, it doesn't look like things are so tight in there that one false move will pull a cord out of an interface, and the wires aren't blocking the fans from the SAN, they are just sagging down in front of the switch that doesn't have any fans.

It's not that bad. I've seen much worse, and much less efficient.
 

Mr Pickles

Diamond Member
Feb 24, 2006
4,104
1
0
Originally posted by: Brovane
For cabling where I work, I had us order about 12 of those rolling plastic file-cabinet drawers from Target; they each have two drawers. In the drawers I stock all the cable lengths from 3' to 15' in 1-foot increments, so I almost always have the correct cable length on hand, and when we start running low I order more. I also have a drawer for 20-foot and another for 25-foot and above, and I use the rest of the drawers to stock KVM dongles, fiber, etc. - all the stuff that usually just sits in a cabinet in a pile. It seems to work well for us, and the IT Director is very happy that the data center is usually fairly clean. I also like it when I'm working: if I'm running several cables, I can just roll a couple of the cabinets over with the lengths I expect to need instead of walking back and forth, plus the tops of the cabinets make a handy work space.

This works well. We're lucky enough to have a large storage area that helpdesk/desktop and network support share, with desk space for testing and hardware maintenance, but we also keep a plastic shelf in the DC with clear, labeled boxes of cables in different lengths. It's very handy.
 

Juddog

Diamond Member
Dec 11, 2006
7,852
6
81
Originally posted by: spidey07
Originally posted by: Juddog
Speaking of CAT, has anybody switched over to CAT6 yet?

Any new cabling I've specced out for the last few years has been 10-gig Cat6a, and all new cabling should be the same; 10-gigabit copper is going to happen really fast. I haven't done anything with Category 5/5e for at least 5 years - it's dead, Jim.

Regarding the 4 network cards: 1 to the primary switch, 1 to the secondary switch. If you have virtual switching support you can team/etherchannel/LAG them. Then you have a physically separate and also redundant management network - the management NIC (for admin purposes) goes to one set, the iLO/RIM/whatever goes to the other. The reason for the management NIC/network is security and monitoring/logging. In large data centers it's not unheard of to capture all traffic at key points, and the management network is a prime place. Add in a clustering/heartbeat NIC and you have 5.

I can't wait for 10 gigabit. My whole building is wired with gigabit, and it's a night-and-day difference over the last place I worked, which was mostly 100 megabit with a few buildings still on 10 megabit. In their defense, though, they didn't want to rip up the ceilings and walls to re-wire some of the old buildings because they found asbestos.
 

Ns1

No Lifer
Jun 17, 2001
55,414
1,574
126
Originally posted by: Rubycon
I've been in one before. I like the strong a/c blowing up from the vents in the floor. Definitely not a place to go in a dress. :laugh:

isn't that the best part?
 

Juddog

Diamond Member
Dec 11, 2006
7,852
6
81

Why does the lens look so steamy? The only time I've seen a datacenter look like that was when the AC wasn't working right; I walked in and it felt like a jungle. They had to set up huge fans outside to cool it off. :laugh:
 

Steve

Lifer
May 2, 2004
16,572
6
81
www.chicagopipeband.com
Originally posted by: Juddog

Why does the lens look so steamy? The only time I've seen a datacenter look like that was when the AC wasn't working right; I walked in and it felt like a jungle. They had to set up huge fans outside to cool it off. :laugh:

Probably pocket dust; it was the camera in my old phone.
 

vi edit

Elite Member
Super Moderator
Oct 28, 1999
62,403
8,199
126
I can't really say how many physical servers our datacenter has given that we are huge VMware users. I do know it's about 4,000 sq ft, that it's tapped out for power, and that we are building a new one to accommodate the demands.

We rarely have outages for anything that is "in our control". We have scheduled downtime once a month where everything is brought down and maintenance is done within a 2-hour window. It allows us to keep things in tune so we aren't rushing around band-aiding problems intra-day.

The only time I can really recall something going down due to a physical problem was when the DS3 coming in from AT&T blew up and internet access and inbound traffic were cut off. Wasn't really the datacenter's fault, though.
 

Chaotic42

Lifer
Jun 15, 2001
33,929
1,098
126
I used to. It was a comedy of errors. Every month the daily, weekly, and monthly backups would all go off at the same time. Databases would go down because the SysAdmins and DBAs couldn't work together to get enough space for them. One SysAdmin wrote a script that bugged out and tried to send files constantly because he had the password wrong. A developer made a program with an odd date input that didn't give an example format and didn't check whether the date made sense; it updated a bunch of database rows with entries in the year 2910.
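
A one-line plausibility check would have caught those 2910 rows. A hypothetical guard in Python (the accepted window and the input format here are arbitrary assumptions, not whatever that program actually used):

#!/usr/bin/env python3
"""Reject wildly implausible dates before they reach the database.

A hypothetical guard for the input that produced the year-2910 rows;
the accepted window and the input format are arbitrary assumptions.
"""
from datetime import date, datetime

def parse_entry_date(text, fmt="%Y-%m-%d"):
    """Parse a date string and refuse anything outside a sane window."""
    parsed = datetime.strptime(text, fmt).date()
    low, high = date(1970, 1, 1), date(date.today().year + 1, 12, 31)
    if not low <= parsed <= high:
        raise ValueError(f"{parsed.isoformat()} is outside {low} - {high}")
    return parsed

print(parse_entry_date("2008-06-15"))        # fine
try:
    parse_entry_date("2910-06-15")           # the bug from the story
except ValueError as err:
    print("rejected:", err)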

You are not alone.
 