Nvidia reveals Specifications of GT300


Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
"I think in there it is said, and reasoned very well, about why AMD went for the smaller chip and then going for a X2-part, to compete with Nvidia in the high-end."

This in itself goes against the plan you speak of. Because they are now using two dies totalling over 1.8 billion transistors to best a 1.4 billion transistor part. Right then and there, the plan is shot. Mostly because there was never a plan. What you see in R7xx was the best ATI could do at the time. There's no shame in that. It's a great gaming GPU and that can't be taken away from it. But many, many other things can.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: MarcVenice
Maybe I was irritated because you're insinuating I'm taking AMD's PR as factual truth. But instead I thought about it, weighed a few things, read a thing or two, and came to the conclusion that their strategy isn't such a weird one and doesn't strike me as coincidental at all.

Not sure if it was in AT's article about how the HD 4850 ended up a lot faster than AMD intended to, but I think it is said there, and reasoned very well, why AMD went for the smaller chip and then an X2 part to compete with Nvidia in the high end. Afaik, it's their strategy to later on have _multiple_ GPUs, not just two, on a single video card.

And I don't see how the HD 4870X2 competes with a GTX285. That's indeed not what AMD wanted. If AMD wanted RV770 to compete in/win the single-GPU 'battle', they probably would have increased its die size/shader count.

But now we simply disagree. Last time you were being sarcastic though, which struck me as odd, given your usually well-thought-out posts, which actually have valid arguments to back things up.

Who said their strategy was weird? I actually admire it, just as much as I admired Intel's initial approach to putting dual-core and then quad-core under one IHS. For all the same reasons, designed for manufacturability.

And I wasn't referencing the 4xxx gen for their strategy; actually, if you'll recall, the strategy made its debut with the less successful 3870 X2.

It wasn't until they locked in some success with the 4xxx series that they were finally willing to come out in public and claim it was intentional to lose a crapload of money and market share for a while until they could finally beat Nvidia with a second attempt at the X2 strategy. So when I make my comments, they are intended to be wrt the 3870X2 vs. the 8800GT/GTS of the time.

At any rate, they (AMD) can claim their strategy was anything they want going however far back they want; who am I to call them liars? I also truly believed Pee-wee intended to fall off his bike; how can I prove he didn't? But I do like AMD's X2 approach, regardless of whether it was done intentionally or not.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Originally posted by: Keysplayr
"I think in there it is said, and reasoned very well, about why AMD went for the smaller chip and then going for a X2-part, to compete with Nvidia in the high-end."

This in itself goes against the plan you speak of. Because they are now using two dies totalling over 1.8 billion transistors to best a 1.4 billion transistor part. Right then and there, the plan is shot. Mostly because there was never a plan. What you see in R7xx was the best ATI could do at the time. There's no shame in that. It's a great gaming GPU and that can't be taken away from it. But many, many other things can.

Hmmm, I thought it was about producing a smaller, more cost-efficient die. So I'm not sure comparing transistor count in this particular case is particularly useful.

Looking at total die size would be more useful, as would looking at the entire range of products which the R7xx spans, versus comparable NV products. Then we'd get an idea whether or not there was a cost savings associated with this particular strategy.

Let me see if I can go get all the figures.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: dreddfunk
Originally posted by: Keysplayr
"I think in there it is said, and reasoned very well, about why AMD went for the smaller chip and then going for a X2-part, to compete with Nvidia in the high-end."

This in itself goes against the plan you speak of. Because they are now using two dies totalling over 1.8 billion transistors to best a 1.4 billion transistor part. Right then and there, the plan is shot. Mostly because there was never a plan. What you see in R7xx was the best ATI could do at the time. There's no shame in that. It's a great gaming GPU and that can't be taken away from it. But many, many other things can.

Hmmm, I thought it was about producing a smaller, more cost-efficient die. So I'm not sure comparing transistor count in this particular case is particularly useful.

Looking at total die size would be more useful, as would looking at the entire range of products which the R7xx spans, versus comparable NV products. Then we'd get an idea whether or not there was a cost savings associated with this particular strategy.

Let me see if I can go get all the figures.

In a way you're right. We can't compare transistor count to transistor count, or die size to die size. These are two entirely different architectures: both geared toward PC gaming, one of them also geared for GPGPU usage. That right there is the reason for the extra transistors and die size. Nvidia wants to sell its GPUs to a much broader market. Their Tesla platform is now available in many flavors and colors. I get updates all the time about it.

I don't know if I can make anyone understand the difference. I've tried. But they don't listen very well, or are ignoring this very large, important factor. Most likely the latter, because it's really easy to understand. I mean, when I say that gaming isn't the only thing Nvidia is aiming at any longer, I get this response: "But this is a Video and Graphics forum and that's all we're interested in." And then when I say that corporate finances, and what they spend on a wafer and what each one costs them, is a discussion that belongs in a stock portfolio or a dedicated forum for such discussions, I hear this: "But it is intrinsically intertwined with Video and Graphics. It goes hand in hand."


??????

I guess it's all intertwined, isn't it? I guess most will choose what they wish to ignore.
Nvidia is eventually going to take on Intel. That alone should earn all of your respect.
What's ATI doing? AMD? Asking for government handouts, that's what.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Keysplayr
Originally posted by: dreddfunk
Originally posted by: Keysplayr
"I think in there it is said, and reasoned very well, about why AMD went for the smaller chip and then going for a X2-part, to compete with Nvidia in the high-end."

This in itself goes against the plan you speak of. Because they are now using two dies totalling over 1.8 billion transistors to best a 1.4 billion transistor part. Right then and there, the plan is shot. Mostly because there was never a plan. What you see in R7xx was the best ATI could do at the time. There's no shame in that. It's a great gaming GPU and that can't be taken away from it. But many, many other things can.

Hmmm, I thought it was about producing a smaller, more cost-efficient die. So I'm not sure comparing transistor count in this particular case is particularly useful.

Looking at total die size would be more useful, as would looking at the entire range of products which the R7xx spans, versus comparable NV products. Then we'd get an idea whether or not there was a cost savings associated with this particular strategy.

Let me see if I can go get all the figures.

In a way you're right. We can't compare transistor count to transistor count, or die size to die size. These are two entirely different architectures: both geared toward PC gaming, one of them also geared for GPGPU usage. That right there is the reason for the extra transistors and die size. Nvidia wants to sell its GPUs to a much broader market. Their Tesla platform is now available in many flavors and colors. I get updates all the time about it.

I don't know if I can make anyone understand the difference. I've tried. But they don't listen very well, or are ignoring this very large, important factor. Most likely the latter, because it's really easy to understand. I mean, when I say that gaming isn't the only thing Nvidia is aiming at any longer, I get this response: "But this is a Video and Graphics forum and that's all we're interested in." And then when I say that corporate finances, and what they spend on a wafer and what each one costs them, is a discussion that belongs in a stock portfolio or a dedicated forum for such discussions, I hear this: "But it is intrinsically intertwined with Video and Graphics. It goes hand in hand."


??????

I guess it's all intertwined, isn't it? I guess most will choose what they wish to ignore.
Nvidia is eventually going to take on Intel. That alone should earn all of your respect.
What's ATI doing? AMD? Asking for government handouts, that's what.

If I had to make a guess, I'd say you feel like you're running in circles chasing your tail at times because the metric of success for whatever aspect of the argument you are debating at any given point isn't being clearly defined and agreed upon.

That leaves you feeling like you are proving the validity of your view of the metric of success, while the other side feels your argument fails to address the metric of success they hold in their mind.

I'm just guessing this based on your statement of frustration. I've seen this countless times at work, and it invariably boiled down to both sides talking past each other because they hadn't agreed on what the metric of success was for whatever they were attempting to evaluate/discuss/critique. I could be wrong, I freely admit that, and only you would know if the shoe fits, but I thought I'd chime in just in case it happens to be spot on and helps take the frustration level down a notch.

It's Friday, enjoy :beer:
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Keys - I've no doubt that NV has other fish to fry besides gaming performance, and that's fine. The question would still remain, however: are they getting enough extra revenue out of the GPGPU approach relative to the increased costs associated with a larger die size? I couldn't answer that question without a detailed look at their books, and I'm sure no one here has those available, so it's all speculation. If they've gotten enough extra sales, all could be well. If the GPGPU market isn't that big, then the extra overhead for the regular gaming market may take its toll on margins.

For me, I'd personally look at die size relative to performance/price segment for 'gaming' applications. I think AMD has adopted an actual strategy of hitting a target in the upper middle of the performance/price segment, with as small a die as possible so that it scales well (economically speaking) in both directions.

Product  | $300-$500        | $200-$300     | $150-$200     | $100-$150     | < $100
---------|------------------|---------------|---------------|---------------|---------------
GTX295   | 490 mm² x2, $500 |               |               |               |
GTX285   | 490 mm², $299    |               |               |               |
GTX275   |                  | 490 mm², $225 |               |               |
GTX260   |                  |               | 576 mm², $155 |               |
GTS250   |                  |               |               | 230 mm², $105 |
9800GT   |                  |               |               |               | 230 mm², $90
---------|------------------|---------------|---------------|---------------|---------------
HD4870X2 | 260 mm² x2, $370 |               |               |               |
HD4890   |                  | 282 mm², $220 |               |               |
HD4870   |                  |               | 260 mm², $165 | 260 mm², $145 |
HD4850   |                  |               |               |               | 260 mm², $90
HD4770   |                  |               |               | 172 mm², $100 |
HD4830   |                  |               |               |               | 260 mm², $74

As you can see, AMD seems to have a significant advantage from $150-$300, and a small-to-significant advantage in the $100-$150 segment (depending on whether we use the 4830 or the 4770 as the comparison), while running at a small disadvantage in the <$100 and $300+ markets, though we might speculate that the disadvantage in the <$100 market will evaporate as other 40nm derivatives hit the streets or the 4770 declines in street price.

I think what AMD has done is very economically sensible. There are just two problems:

1) NV is still very competitive at all price/performance points.
2) AMD was (and is) in such lousy shape that this particular instance of economic rationality may be akin to trying to keep the Titanic from sinking with a water pail.

Cheers.
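
To put rough numbers on that die-size argument, here is a minimal Python sketch that divides each card's street price by its die area, using only the figures from the table above. It is a crude proxy at best: memory, board, packaging, and yield costs are all ignored.

cards = {
    # name: (die area in mm^2, street price in USD), from the table above
    "GTX285": (490, 299),
    "GTX275": (490, 225),
    "GTX260": (576, 155),
    "GTS250": (230, 105),
    "HD4890": (282, 220),
    "HD4870": (260, 165),
    "HD4850": (260, 90),
}

for name, (area_mm2, price) in cards.items():
    print(f"{name}: ${price / area_mm2:.2f} of street price per mm^2 of die")

On these numbers AMD extracts noticeably more street price per mm² of silicon in the $150-$300 bands (e.g. HD4870 at ~$0.63/mm² vs. GTX260 at ~$0.27/mm²), which is the advantage described above.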
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Nvidia's GT200 die size is 576 mm².

ATI's RV870 die size is 276 mm².

So the Nvidia die is twice as large, but not twice as powerful.

How much do GDDR5 and the memory interface figure into the equation? When Nvidia moves to GDDR5 themselves, will some of these simple surface-area-to-performance ratios change? Does the memory interface take up a lot of die area?

Also, does making a chip with a larger die size reduce yields or not?

P.S. I have nothing against Nvidia. I actually like the other tech they are developing that goes beyond gaming (some of the medical applications in particular sound promising).
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Oh, keys - and what I really meant to suggest by my first post is in the table: the 4870X2, while having significantly more transistors, is only 30 mm² larger: 520 mm² total compared to the GTX285's 490 mm². It's at a slight disadvantage to the GTX285 in terms of selling price/die size, but the disparity isn't very large, unlike the difference between the RV7xx solutions and NV's offerings in the $150-$300 range.

Hope you're having a good Friday. Cheers.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: dreddfunk
Keys - I've no doubt that NV has other fish to fry besides gaming performance, and that's fine. The question would still remain, however: are they getting enough extra revenue out of the GPGPU approach relative to the increased costs associated with a larger die size? I couldn't answer that question without a detailed look at their books, and I'm sure no one here has those available, so it's all speculation. If they've gotten enough extra sales, all could be well. If the GPGPU market isn't that big, then the extra overhead for the regular gaming market may take its toll on margins.

For me, I'd personally look at die size relative to performance/price segment for 'gaming' applications. I think AMD has adopted an actual strategy of hitting a target in the upper middle of the performance/price segment, with as small a die as possible so that it scales well (economically speaking) in both directions.

Product  | $300-$500        | $200-$300     | $150-$200     | $100-$150     | < $100
---------|------------------|---------------|---------------|---------------|---------------
GTX295   | 490 mm² x2, $500 |               |               |               |
GTX285   | 490 mm², $299    |               |               |               |
GTX275   |                  | 490 mm², $225 |               |               |
GTX260   |                  |               | 576 mm², $155 |               |
GTS250   |                  |               |               | 230 mm², $105 |
9800GT   |                  |               |               |               | 230 mm², $90
---------|------------------|---------------|---------------|---------------|---------------
HD4870X2 | 260 mm² x2, $370 |               |               |               |
HD4890   |                  | 282 mm², $220 |               |               |
HD4870   |                  |               | 260 mm², $165 | 260 mm², $145 |
HD4850   |                  |               |               |               | 260 mm², $90
HD4770   |                  |               |               | 172 mm², $100 |
HD4830   |                  |               |               |               | 260 mm², $74

As you can see, AMD seems to have a significant advantage from $150-$300, and a small-to-significant advantage in the $100-$150 segment (depending on whether we use the 4830 or the 4770 as the comparison), while running at a small disadvantage in the <$100 and $300+ markets, though we might speculate that the disadvantage in the <$100 market will evaporate as other 40nm derivatives hit the streets or the 4770 declines in street price.

I think what AMD has done is very economically sensible. There are just two problems:

1) NV is still very competitive at all price/performance points.
2) AMD was (and is) in such lousy shape that this particular instance of economic rationality may be akin to trying to keep the Titanic from sinking with a water pail.

Cheers.

If only ATI could somehow make unified memory a reality, then all the waste associated with "mirrored memory" on X2 or even X3 cards could be eliminated.


 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
JL - Maybe that will happen. I'm no engineer, however, and so I can't offer much in the way of informed speculation here.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
dreddfunk, HD4770 is 137 mm^2, isn't it?

Originally posted by: Just learning
Also, does making a chip with a larger die size reduce yields or not?

Yes, a larger-die chip will have lower yields than a smaller-die chip fabbed in the same fab, provided all other aspects of the process tech are identical (number of metal levels, same cycle time in the fab, etc.).

Check me out for some info on yield and defect levels

And see me starting on slide 48 for "functional yield"
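
To make the die-size/yield relationship concrete, here is a minimal sketch of the textbook Poisson yield model, Y = exp(-D0 * A). The defect density D0 below is an assumed, purely illustrative number, not a real fab figure:

import math

def functional_yield(die_area_mm2, defect_density_per_cm2):
    # Poisson model: fraction of dies with zero killer defects,
    # Y = exp(-D0 * A), with die area converted from mm^2 to cm^2.
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.2  # assumed defects per cm^2, illustrative only
for name, area in (("GT200 @ 576 mm^2", 576), ("RV770 @ 260 mm^2", 260)):
    print(f"{name}: {functional_yield(area, D0):.0%} of dies fully functional")

With that assumed defect density the large die yields roughly 32% against the small die's 59%, before counting salvage parts and redundancy (see the next post), which soften the penalty in practice.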
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Yes, assuming all those factors are the same... AND assuming that the die is of similar complexity. Certain parts of the die are more tolerant (like the cache), where an error will either be non-fatal or the tolerances before an error occurs are wider.

However, die size is a major factor, so it is usually the determining one.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Then there is the problem of how many chips you can produce per wafer, and wafers don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also): a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.
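
Cookie Monster's ~2 wafers figure can be sanity-checked with the standard dies-per-wafer approximation; a minimal sketch, assuming 300 mm wafers and the die areas quoted earlier in the thread:

import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Usable wafer area over die area, minus an edge-loss term for the
    # partial dies around the wafer's circumference.
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

gt200 = gross_dies_per_wafer(576)  # roughly 94 gross dies
rv770 = gross_dies_per_wafer(260)  # roughly 230 gross dies
print(f"GT200: {gt200}/wafer, RV770: {rv770}/wafer, ratio {rv770 / gt200:.1f}x")

Before any yield difference, that is about 2.4 small dies for every large one, in the same ballpark as the ~2 wafers estimate.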
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Well let's review the last 3 generations....

2xxx was out after the 8xxx and slower
3xxx was out after the 9xxx and slower
4xxx was out after the GT2xx and slower

I'm betting this trend will continue, considering there is no evidence to the contrary.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
Hehe.. see, this is why you should go out on Friday night! I just came home and logged on to see everyone in a cranky mood.. Oh, I feel right at home.. and feel good in a perverse way.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and wafers don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also): a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Wreckage
Well let's review the last 3 generations....

2xxx was out after the 8xxx and slower
3xxx was out after the 9xxx and slower
4xxx was out after the GT2xx and slower

I'm betting this trend will continue, considering there is no evidence to the contrary.

This timeline is so full of fail, it's not funny. Let me show you how it actually looked:

2006.11.xx - 8800 Launch - GTX @ $599, Ultra @ $829, GTS320 @ $329, GTS640 @ $429 (crazy prices!)
2007.05.14 - HD2900 Launch - generally slower than nVidia (GTX and Ultra untouchable), 7 months after, XT @ $399
2007.09.xx - HD3800 Launch - same speed, lower price - getting competitive for its price, HD3870 @ $239, HD3850 @ $179
2008.01.28 - HD3870X2 Launch - fastest single card - $449
2008.02.21 - 9600GT Launch - first 9-series card, ~ HD3870 - $189
2008.03.18 - 9800GX2 Launch - fastest single card - second 9-series card - $599
2008.04.01 - 9800GTX Launch - $299-$349 (you could get a GTS512 for ~$200 - same card!) - this launch was a total joke in my eyes
2008.06.18 - GTX260 and GTX280 Launch - $399 and $649
2008.06.23 - HD4800 Launch - HD4870 faster than GTX260 - $299, HD4850 faster than 9800GTX - $199
2008.08.12 - HD4870X2 Launch - fastest single card - $499-$549
2009.01.08 - GTX295 Launch - fastest single card - $499

Hence only your first statement is correct. Your second one is totally wrong, as the HD38x0 was launched well before the 9-series - which wasn't there to fight ATi anyway, since the 8800GT was already doing that and was a better product. The HD48x0 was launched a week after the GTXs and was faster at their launch prices (it forced a big price drop from nVidia), meaning your last line is also incorrect; if you say a week is significant and valid here, that would be laughable.

To sum up, there were times when ATi had the fastest product on the market and times when nVidia did, with nVidia holding those periods longer. The delay and overall suckage of the HD2900XT contributed hugely to the current state of the market.

Next time you want to add some information to a discussion, verify it first and don't post wrong things.

When the RV870 launches, it should be the fastest thing on the market. Once the GT300 launches, it should be the fastest thing on the market... similar to the timeline above, everything should repeat itself, this time with ATi being first and thus getting a boost in their market share - the same thing happened with the initial 8800 cards.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and wafers don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also): a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.

I wonder why IDC isn't shooting holes in this argument with his Pee-wee comment.

I do not think Nvidia 'meant' G80 (and GT200 for that matter, as they are mighty similar) to be 'good' at GPGPU ops. It just is, because GPUs simply are massively parallel processing units. Besides gaming performance, Nvidia probably realised its GPUs had untapped potential, and they unlocked that potential by writing CUDA for them. And it surely is no experiment; you don't experiment by producing millions of a GPU. They design it and KNOW what it can do in terms of GPGPU ops long before it is released onto the market.

Look at the die size of a G92 and how many shaders it has. Now look at GT200(b) and how many shaders it has. Its die size correlates directly to the number of shaders (and the TMUs/ROPs to go with them). Knowing that G80, and thus G92, are also very good at GPGPU ops but simply have fewer shaders means Nvidia meant to build a massive GPU, good at GPGPU ops, more than three years ago, before it even acquired Ageia. That's what you are saying.

Now, I dare say that if AMD were to invest as many R&D resources into ATI Stream, its GPUs would deliver similar performance to Nvidia's. Not because they are such great GPUs, but because it's inherent to GPUs that can run games the way Nvidia's and ATI's do, through the DirectX API (which requires massive parallel processing power).
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and wafers don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also): a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.

I wonder why IDC isn't shooting holes in this argument with his Pee-wee comment.

I do not think Nvidia 'meant' G80 (and GT200 for that matter, as they are mighty similar) to be 'good' at GPGPU ops. It just is, because GPUs simply are massively parallel processing units. Besides gaming performance, Nvidia probably realised its GPUs had untapped potential, and they unlocked that potential by writing CUDA for them. And it surely is no experiment; you don't experiment by producing millions of a GPU. They design it and KNOW what it can do in terms of GPGPU ops long before it is released onto the market.

Look at the die size of a G92 and how many shaders it has. Now look at GT200(b) and how many shaders it has. Its die size correlates directly to the number of shaders (and the TMUs/ROPs to go with them). Knowing that G80, and thus G92, are also very good at GPGPU ops but simply have fewer shaders means Nvidia meant to build a massive GPU, good at GPGPU ops, more than three years ago, before it even acquired Ageia. That's what you are saying.

Now, I dare say that if AMD were to invest as many R&D resources into ATI Stream, its GPUs would deliver similar performance to Nvidia's. Not because they are such great GPUs, but because it's inherent to GPUs that can run games the way Nvidia's and ATI's do, through the DirectX API (which requires massive parallel processing power).

The two bolded comments above directly contradict one another. So which is it? They didn't mean for it to be a good GPGPU and it just ended up that way? Or did they KNOW what they had long before it was released onto the market?

Die size? G92 is 230 mm² at 55nm. GT200 is 490 mm² at 55nm. Even if you doubled everything the G92 has (128 to 256 shaders, 256-bit to 512-bit memory interface, 16 to 32 ROPs), you'd still only end up with 460 mm², and that includes redundancy transistors that probably wouldn't be needed if you were just adding shaders, memory controllers, and ROPs. The die size does NOT correlate directly. AND you forget the external NVIO chip present on GT200 cards, which G92 had moved ON DIE when it evolved from G80.

Stream: If ATI could have done it, they would have. Heck, they are working on it now with little to no success if you compare it to what CUDA is now.

ATI's architecture is excellent for gaming, as they have shown. But that is where the excellence ends. G80 through GT200 are just all-around technologically more advanced, as is evident from looking at what they're capable of. To deny this would simply be a farce.
Whether ATI's architecture was truly more advanced we'll never know, because nobody wants to code for it. You have to provide the tools as well as the hardware.

You're not thinking things through, Marc.
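
The doubling arithmetic above can be sanity-checked in a few lines, using only the figures quoted in the post:

# Naive linear scaling of G92: double the shaders, ROPs, and memory bus.
g92_mm2 = 230          # G92b at 55nm, per the post
gt200b_mm2 = 490       # GT200b at 55nm, per the post
doubled = 2 * g92_mm2
print(f"doubled G92: {doubled} mm^2 vs actual GT200b: {gt200b_mm2} mm^2, "
      f"gap: {gt200b_mm2 - doubled} mm^2 (excluding the external NVIO chip)")

The 30 mm² gap, plus the off-die NVIO logic, is the basis for the claim that GT200's area is not a straight multiple of its shader count.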
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Qbah
Originally posted by: Wreckage
Well let's review the last 3 generations....

2xxx was out after the 8xxx and slower
3xxx was out after the 9xxx and slower
4xxx was out after the GT2xx and slower

I'm betting this trend will continue, considering there is no evidence to the contrary.

This timeline is so full of fail, it's not funny. Let me show you how it actually looked:

2006.11.xx - 8800 Launch - GTX @ $599, Ultra @ $829, GTS320 @ $329, GTS640 @ $429 (crazy prices!)
2007.05.14 - HD2900 Launch - generally slower than nVidia (GTX and Ultra untouchable), 7 months after, XT @ $399
2007.09.xx - HD3800 Launch - same speed, lower price - getting competitive for its price, HD3870 @ $239, HD3850 @ $179
2008.01.28 - HD3870X2 Launch - fastest single card - $449
2008.02.21 - 9600GT Launch - first 9-series card, ~ HD3870 - $189
2008.03.18 - 9800GX2 Launch - fastest single card - second 9-series card - $599
2008.04.01 - 9800GTX Launch - $299-$349 (you could get a GTS512 for ~$200 - same card!) - this launch was a total joke in my eyes
2008.06.18 - GTX260 and GTX280 Launch - $399 and $649
2008.06.23 - HD4800 Launch - HD4870 faster than GTX260 - $299, HD4850 faster than 9800GTX - $199
2008.08.12 - HD4870X2 Launch - fastest single card - $499-$549
2009.01.08 - GTX295 Launch - fastest single card - $499

Hence only your first statement is correct. Your second one is totally wrong, as the HD38x0 was launched well before the 9-series - which wasn't there to fight ATi anyway, since the 8800GT was already doing that and was a better product. The HD48x0 was launched a week after the GTXs and was faster at their launch prices (it forced a big price drop from nVidia), meaning your last line is also incorrect; if you say a week is significant and valid here, that would be laughable.

To sum up, there were times when ATi had the fastest product on the market and times when nVidia did, with nVidia holding those periods longer. The delay and overall suckage of the HD2900XT contributed hugely to the current state of the market.

Next time you want to add some information to a discussion, verify it first and don't post wrong things.

When the RV870 launches, it should be the fastest thing on the market. Once the GT300 launches, it should be the fastest thing on the market... similar to the timeline above, everything should repeat itself, this time with ATi being first and thus getting a boost in their market share - the same thing happened with the initial 8800 cards.

Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. The 8800GT/GTS512 were G92-based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Fair enough. As I indicated, I'm not an engineer, so it's hard for me to speculate about why die sizes are what they are. In the context of the discussion to which I was responding, I think keys was suggesting that looking at performance/transistor merely in terms of GPU performance wasn't the complete story: the GT2xx series was also meant for GPGPU applications. This is the statement of keys to which I was responding:

"In a way you're right. We can't compare transistor count to transistor count. Die size to die size. These are two entirely different architectures. Both geared toward PC gaming, one of them geared for GPGPU usage. That right there is the reason for the extra transistors and die size. Nvidia want's to sell it's GPU's to a much more mass market. Their Tesla platform is now available in many flavors and colors. I get updates all the time about it."

In reference to this statement, I was suggesting that any transistor/die-size 'penalty' NV paid for their concern with GPGPU would need to have a significant enough benefit to overcome the extra costs. I don't know if they did pay a penalty, however, so I shouldn't have said 'probably' when referring to NV creating a smaller die without GPGPU stuff (here I was deferring to what I perceived as keys' greater knowledge of the subject). But keys did defend the GT2xx die-size (or transistor)/performance ratio based on a belief that such a penalty existed.

I followed by saying that whatever penalty NV paid for GPGPU would create 'overhead' transistors in their products with reference to the GPU market and GPU applications, and that I hoped the potential gain (the GPGPU market) was worth the penalty. But I also suggested that we can discuss the profitability of the two product series based on their die sizes, street prices, and targeted markets.

In other words, comparing the GT2xx and RV7xx architectures in terms of die size/performance or transistor/performance makes sense if they were both designed to solve the same problem. It seemed to me that keys was arguing that this wasn't fair because they *weren't* designed to address the same *exact* problem, and thus direct comparisons were difficult.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Keysplayr

Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. The 8800GT/GTS512 were G92-based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

That would be more accurate, as I am referring to GPUs. His discussion of pricing and X2 cards was irrelevant.
 