Originally posted by: Cookie Monster
That doesn't answer my question. Let me ask you: is the power consumption going to be different if chip A has a larger die than chip B, given that chips A and B are otherwise identical in spec?
Yep, it absolutely should, and if it doesn't it's most likely a flaw in process or execution. In other cases, decreases in heat/power consumption are negated by core clock increases, which necessitate higher power draw and greater heat output (see G92 to G92b). With GT200b B2 there wasn't any significant benefit so it went back for a respin, resulting in B3, which clearly delivered lower power draw and heat output. Similar was seen with R600 to RV670 on the ATI side.
Hmm, I'm not sure what you mean here. Unless you mean that you're basing your claims on user reports from the internet and then claiming these as absolute "fact" and evidence.
Certainly better than treating RV790 as the "White Rhino" chasing some "Pie in the Sky". Again, it's not just based on user reports, it's based on real product that is not significantly different from what RV790 is rumored to be, along with the very real lack of overclocked RV770 products that would give significant assurance such a product might be feasible.
Why would one undervolt to achieve higher clocks in the first place???
They wouldn't, it's a rhetorical response to your claim that a clock speed increase to 850MHz wouldn't necessitate an increase in power draw. Of course this directly flies in the face of one of your own corrections with regard to the GTX 285, where you made sure to point out that an increase in clockspeed would certainly be accompanied by an increase in power draw.
Yet you come out here and claim it as some "fact" when it's clear that you're just guessing, basing your evidence on internet users. Some of these same end users you're basing your claim on have been denying your claims about "very little success overclocking", but you refuse to listen.
As already covered above, it's more than just the results of internet users, it's based on available product, or more accurately, the lack thereof. Also, weren't you just claiming the theoretical limit of 55nm was 1.26V? Where'd you pull that bit of info, and if there is any credibility to it, why would you think RV770 has any additional headroom given it's already pulling 1.26V to run at 750MHz? As for having little success overclocking, well I guess I'm just used to more than 10% increases to clockspeed, not to mention even those in this thread seem unable to hit 850MHz, the reported speed of RV790.
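Just to put numbers on that last point (using only the figures from this thread, a 750MHz stock RV770 clock and the rumored 850MHz RV790 speed), a quick sketch of the arithmetic:

```python
stock_mhz = 750   # RV770 (HD4870) stock core clock, per this thread
target_mhz = 850  # rumored RV790 clock

# Percentage overclock needed to reach the rumored target
needed_oc = (target_mhz - stock_mhz) / stock_mhz * 100
print(f"Required overclock: {needed_oc:.1f}%")  # ~13.3%

# A "typical" 10% overclock falls short of the rumored target
print(f"10% overclock reaches: {stock_mhz * 1.10:.0f} MHz")  # 825 MHz
```

In other words, hitting 850MHz would take a ~13% bump on a part that users here reportedly can't push even 10% past stock.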
The target MSRP of the HD4870 was $299. The GTX280 was going for $649. If you haven't read the article from AT about the background of RV770, even the engineers who worked on the project were surprised to see that the fastest RV770 at the time reached about 80~95% (depending on app) of the GTX280's performance. If you haven't figured it out, ATi has changed their market strategy ever since RV670. They have abandoned the "monolithic GPU design" in favor of performance-class single-GPU designs, then filled the gap at the high end with multi-GPU X2 configurations of their performance chips.
Yep I have read the article and if you had as well you'd recall direct mentions of their product timelines with the following general theme:
Late, Hot, Leaky, Underperforming....
RV770 was the first time they executed on schedule since R480. The big difference this time, however, was they were more competitive and also prepared to execute on an X2 at launch, a move Nvidia simply could not match until they moved to 55nm.
Again, none of those parts in that chip chart were that much slower than their competing Nvidia parts, except for maybe the void left against G80. If overclocking was so prevalent or easy you'd think AMD would've just clocked their parts up to be more competitive, as every % would've helped... yet they didn't.
And as the article further explains and affirms some of the points about ATI parts drawing more power, running hot, and pushing thermal envelopes:
- ATI admitted to making a key manufacturing mistake with R600. The transistor technology selected for R600 was performance focused, designed to reach high clock speeds and yielded a part that didn't have good performance per watt - something we noticed in our review. ATI has since refocused somewhat away from the bleeding edge and now opts for more power efficiency within a given transistor node. With leakage a growing problem as you go to smaller transistors it's not worth it to be super leaky to gain a few picoseconds.
Again, this kind of info directly corroborates some of the claims and general observations about ATI parts being poor overclockers through the years. When you're always playing catch-up, you can't really afford to leave performance on the table.
Hmm, yet you were claiming that the HD4870/HD4850s were hot at their "max thresholds", when in fact the fan on the stock cooling of a HD4870 is running at 20%, which is the lowest value you can set using ATi's CCC. Up the fan a notch, and you're going to see the same result as you did with the 8800GTs.
The difference is we know the 4870 and 4850 run hot based on power draw, which is very different from your poor choice of singling out the 8800GT, which doesn't draw as much power and scales extremely well with clockspeeds.
What I'm trying to say here is that when it comes to thermal characteristics, it comes mostly down to cooling, not the actual chip itself. Any modern GPU chip will hit the high 90s (°C) and over (unless it's the low low end), whether it's an ATi chip or nVIDIA. The single slot design held the G92 back, and it was a hot card (not to mention the card's stock cooling went through a revision because of this). The 8800GTS on the other hand was a cooler card with higher clocks. Why? A beefy dual slot cooler. Likewise the stock cooler on the HD4870 could possibly be holding the card back. Yet they don't require more cooling than stock, because there is no need for such a SKU. An HD4850X2 fills that gap nicely, and gives its competition a rough time. Unlike nVIDIA's lineup, the consistency in different SKUs allows them to differentiate one another, whereas with nVIDIA you could possibly spend less
Again, where's the professor when you need him. Whatever the card's actual TDP is, that's what it comes down to with thermal characteristics. That heat is either on the GPU, in your case, or in your room, but that doesn't change the fact a 4850/4870 is generating more heat relative to its comparably performing parts. Again, given the chip is smaller, has fewer transistors and is on a smaller process (than 65nm GT200), that points to inefficiency in design and/or leakage, which necessitates more power in order to stably run at higher frequencies. As you increase frequency you'll eventually reach a point where additional power and cooling are no longer effective in maintaining stability. Feel free to have the engineers over in the CPU forum explain this to you more clearly.
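The back-of-the-envelope reasoning here follows the standard CMOS dynamic power relation, P ≈ C·V²·f. The ratios below are purely illustrative (not actual RV770 specs), but they show why a clock bump that also needs a voltage bump costs disproportionately more power:

```python
def dynamic_power_scale(f_ratio, v_ratio):
    """Relative dynamic power from P ~ C * V^2 * f, capacitance held fixed."""
    return f_ratio * v_ratio ** 2

# Hypothetical: ~13% higher clock (750 -> 850MHz) at unchanged voltage
print(dynamic_power_scale(850 / 750, 1.00))  # ~1.13x power

# Same clock bump, but needing a 5% voltage increase to stay stable
print(dynamic_power_scale(850 / 750, 1.05))  # ~1.25x power
```

Power scales linearly with frequency but with the square of voltage, which is why a part already at its voltage ceiling has so little headroom left.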
It's going well OT now so I'm going to stop, because I can't seem to have a decent discussion with you chizow (no offence though). It's hard to do so when one thinks company A's products are superior to company B's released/unreleased products in every imaginable way while justifying absurd "facts"(?) with this "common sense" and supposed internet user claims.
How is it going OT? I'm trying to figure out the source of all this unbounded optimism with regard to RV790; I simply can't find any. Once again, you've made various claims regarding ATI/Nvidia that you can't back up and that are historically untrue.