Originally posted by: jonnyGURU
Originally posted by: DerwenArtos12
What PSU are you using and do you know how the 12v rails are divided out? Ideally it would be with the 24-pin connector and the 4-pin on one rail, the 8-pin on another, the 6-pins on another and the 4-pin Molex and 5-pin SATA on the 4th, right?
Actually, that wouldn't be ideal... nor is it normal.
EPS12V would split the 8-pin and 4-pin across two +12V rails of their own. That's not good for high end graphics cards, and this is why early adoption of split +12V rails was a problem for SLI/Crossfire.
Since then, Nvidia has made it "mandatory" that PCIe gets its own rail.
For the same reason SLI power consumption isn't one GPU's draw multiplied by two, a dual-CPU setup's power consumption won't be double that of a single CPU.
What you typically see is the 8-pin and 4-pin on one rail, the 24-pin, Molex, SATA, etc. on a second and then each of two PCIe connectors on a third and fourth. If the PSU manufacturer is trying to support both EPS12V and SLI certification, there will be more than four rails and the 8-pin and 4-pin will be split up across two rails. This can be seen in a number of Topower built 1000W+ units and the Enermax Galaxy.
At risk of sounding like a shill because I'm a product manager at BFG, I use the BFG ES-800. It only has four +12V rails, but it works because the rails aren't "limited" at the typical 20A. There's 22A for the CPUs, 22A for the 24-pin, Molex, etc., and then each pair of PCIe connectors gets its own 36A rail. Total overkill.
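To put those rail limits in watts, per-rail capacity is just amps times the nominal +12V. A quick back-of-the-envelope sketch (rail amperages taken from the ES-800 numbers above; the labels are just for illustration, not an official spec sheet):

```python
# Rough per-rail capacity check for a multi-rail +12V PSU.
# Amperage limits mirror the ES-800 figures quoted above.

RAIL_VOLTAGE = 12.0  # nominal +12V

rails = {
    "CPU (EPS 8-pin + 4-pin)": 22,  # amps
    "24-pin + Molex/SATA":     22,
    "PCIe pair A":             36,
    "PCIe pair B":             36,
}

for name, amps in rails.items():
    print(f"{name}: {amps} A -> {amps * RAIL_VOLTAGE:.0f} W")

# Note: the sum of per-rail limits usually exceeds the PSU's actual
# combined +12V rating, so this is an upper bound, not a real total.
total_amps = sum(rails.values())
print(f"Sum of rail limits: {total_amps} A -> {total_amps * RAIL_VOLTAGE:.0f} W")
```

So each 36A PCIe rail alone can deliver 432W, which is why a pair of connectors on one of those rails is comfortable headroom for a high-end card.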