So I've been mining away for three days since adding a second Radeon 9750 to my system alongside the existing Radeon 9750, and the one thing I was really unhappy with was how hot the room got. With my wattmeter, I measured 330W of total system power before I put the second 9750 in; with the new card installed, I measured 670W. And you might say, "that makes no sense, because you added a 250W card to a system with a 250W card and about 80W of other stuff, so you should have been at 580W, not 670W". But regardless of what should have happened, I was measuring 670W with my generally accurate wattmeter. I think the problem is that my power supply is only 620W, so rather than failing it just started burning more power to keep up... but I don't know. It may also just be supply efficiency: the wattmeter reads wall power, and an ~85%-efficient "bronze" supply delivering 580W of DC would pull roughly 680W from the wall.
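If it is just efficiency, the arithmetic is easy to sanity-check. A minimal sketch, assuming ~85% efficiency (my guess for a bronze unit at near-full load; I haven't measured mine):

```python
# Back-of-the-envelope: wall power vs. DC load through the PSU.
# Assumption: ~85% efficiency, plausible for an 80 Plus Bronze supply near full load.
PSU_EFFICIENCY = 0.85

def wall_power(dc_load_watts: float, efficiency: float = PSU_EFFICIENCY) -> float:
    """Watts drawn at the wall to deliver dc_load_watts to the components."""
    return dc_load_watts / efficiency

dc_load = 250 + 250 + 80  # two cards plus "other stuff", per the estimate above
print(f"Expected DC load:   {dc_load} W")
print(f"Expected wall draw: {wall_power(dc_load):.0f} W")  # ~682 W, close to the measured 670 W
```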
While I'm a bit of an environmentalist, my actual problem was not the wasted power. I'm mining; I don't care. But the room was hot - like stuffy hot. Both cards were at 82C, my motherboard sensor read 66C (inside the case), and I think the room was in the high 20s, maybe 28-29C. So I opened a window, and then my feet were cold but my body was hot... either way, I wasn't happy.
So I decided to underclock the cards, and this is where it gets weird - although the expected-580W-but-measured-670W thing was already weird - but this is weirder. Using MSI Afterburner, I pulled the clock rate on the cards from 1100MHz (overclocked) down to 800MHz (a slight underclock) and my hashing rate didn't change... but power didn't go down much either (670W to 640W). So then I pulled the voltage on the cards from 1.15V to 1.0V, and the hashing rate of course didn't change, and nothing bad happened, but surprisingly power didn't drop dramatically either (650W to around 550W). So then I pulled the power limit slider from my old overclocked setting of +10% down to -5% - so the new power/current limit is 5% below factory spec - and the hashing rate dropped a bit, but power dropped massively: from somewhere around 550W (after the voltage/frequency changes) down to 420W. What makes this all weird is that I went from hashing at ~525kh/s (x2) = 1050kh/s, pulling 550W with the cards pegged at 82C, down to ~495kh/s (x2) = 990kh/s, pulling 415W with the cards at 68C-ish. What I don't understand is where that power was going, what it was doing, and why it doesn't affect the hashing numbers much.
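To put a number on that trade-off, here's a quick hashes-per-watt calculation from the before/after figures above (rough wattmeter readings, nothing careful):

```python
# Efficiency before and after dialing the power limit down, using the
# rough numbers above: total hashrate in kh/s over wall power in watts.
settings = {
    "power limit +10%": (1050, 550),  # ~525 kh/s per card, ~550 W at the wall
    "power limit -5%":  (990, 415),   # ~495 kh/s per card, ~415 W at the wall
}

for name, (khs, watts) in settings.items():
    print(f"{name}: {khs / watts:.2f} kh/s per watt")
# The -5% setting comes out roughly 25% more efficient per watt.
```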
So for data I get:
Please ignore all of this. I didn't measure it correctly and the hash rates really were going down.
power limit +10%, 525kh/s, 660W, card temp 90C
power limit 0%, 515kh/s, 565W
power limit -5%, 500kh/s, 482W
power limit -10%, 500kh/s, 426W
power limit -15%, 500kh/s, 350W
power limit -20%, 500kh/s, 302W, card temp 60C
So to me it's not surprising that when you move the power limit slider down, you draw less power. But I don't understand why the hashing rate doesn't fall too. I have this mental picture of litecoin hashing using all of the resources of the card, so a reduction in max power should mean a reduction in hashing. But it seems immune. Meanwhile my computer is now hashing twice as fast while using less power (at the -20% limit) than it was with one card. This was definitely not true of Bitcoin: with BTC, if I moved power up I got more hashing, and if I moved it down I got less.
System specs: Intel Core i7 2600K (@ 4.4GHz), 16GB DDR3, 2 x Radeon 9750, 300GB Intel SSD, 1TB Seagate HD, Seasonic 620W "bronze" power supply.
Edit: so yeah, the data was wrong... which makes sense, since the results were confusing. I was changing settings while the system was running, and there's some sort of averaging going on in the reported hash rate, so my changes weren't showing up as a reduction in hashing because the earlier, higher numbers were still being averaged in.
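For what it's worth, here's a toy simulation of that averaging effect. An exponential moving average is just my guess at how the miner smooths its displayed rate, but it shows how a real drop can hide for a while:

```python
# Toy illustration of why a reported (averaged) hash rate lags reality.
# Assumption: the miner smooths its displayed rate with an exponential
# moving average; the true per-sample rate drops at sample 10.
ALPHA = 0.05  # smoothing factor: smaller = heavier averaging, more lag

true_rate = [525.0] * 10 + [495.0] * 30  # kh/s per card: drop after sample 10
reported = true_rate[0]

for t, rate in enumerate(true_rate):
    reported = ALPHA * rate + (1 - ALPHA) * reported
    if t % 5 == 4:
        print(f"sample {t+1:2d}: true {rate:.0f} kh/s, reported {reported:.1f} kh/s")
# The reported number stays well above 495 long after the real drop,
# which is exactly how a lower power limit can look "free" at first.
```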