Nvidia reveals Specifications of GT300


Blazer7

Golden Member
Jun 26, 2007
1,105
5
81
Originally posted by: Wreckage
Originally posted by: Keysplayr

Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. 8800GT/GTS512 were G92 based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

That would be more accurate as I am referring to GPUs. His discussion of pricing and X2 cards was irrelevant.

It depends on the angle from which one looks at things. However, I believe this trend is going to come to an end. If not, I will be very impressed. What are the chances that nV releases the GT300 before the 5870?
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Keysplayr
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and these don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per 1 wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also), a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.

I wonder why IDC isn't shooting holes in this argument with his peewee comment.

I do not think Nvidia 'meant' G80 (and GT200 for that matter, as they are mighty similar) to be 'good' at GPGPU ops. It just is, because GPUs simply are massively parallel processing units. Besides gaming performance, Nvidia probably realised its GPUs had untapped potential, and they unlocked that potential by writing CUDA for them. And it surely is no experiment; you don't experiment by producing millions of a GPU. They design it, and KNOW what it can do in terms of GPGPU ops long before it is released onto the market.

Look at the die size of a G92 and look at how many shaders it has. Now look at GT200(b) and how many shaders it has. Its die size correlates directly to the number of shaders (and the TMUs/ROPs to go with them). Knowing that G80, and thus G92, is also very good at GPGPU ops but simply has fewer shaders means Nvidia meant to build a massive GPU, good at GPGPU ops, more than 3 years ago, before it even acquired Ageia. That's what you are saying.

Now, I dare say that if AMD were to invest as many R&D resources into ATI Stream, its GPUs would deliver similar performance to Nvidia's GPUs. Not because they are such great GPUs, but because it's inherent to GPUs that can run games through the DirectX API, like Nvidia's and ATI's GPUs do (which requires massive parallel processing power).

The two bolded comments above directly contradict one another. So which is it? They didn't mean for it to be a good GPGPU and it just ended up that way? Or did they KNOW what they had long before it was released onto the market?

Die size? G92 is 230mm2 at 55nm. GT200 is 490mm2 at 55nm. Even if you doubled everything the G92 has - 128 to 256 shaders, 256-bit memory interface to 512-bit, 16 ROPs to 32 ROPs - you'd still only end up with 460mm2, and that includes redundancy transistors that probably wouldn't be needed if you were just adding shaders, memory controllers, and ROPs. The die size does NOT correlate directly. AND you forget the external NVIO chip present on GT200 cards, which G92 had moved ON DIE when it moved from G80.

Stream: If ATI could have done it, they would have. Heck, they are working on it now with little to no success if you compare it to what CUDA is now.

ATI's architecture is excellent for gaming, as they have shown. But that is where the excellence ends. G80 through GT200 are just all-around technologically more advanced, as is evident from what they're capable of. To deny this would simply be a farce.
Whether ATI's architecture is truly more advanced we'll never know, because nobody wants to code for it. You have to provide the tools as well as the hardware.

You're not thinking things through, Marc.

It only contradicts in the way you read it, keys. They meant it to be good at gaming, and as a bonus it's also good at GPGPU ops. But that's because GPUs in general are good at some things CPUs aren't very good at, because of their parallel structure. CUDA unlocks that potential.

CUDA takes a LOT of time and money to produce though. That's something Nvidia chose to do. AMD/ATI chose not to, maybe because they can't (I doubt it, some apps can do pretty impressive things with ATI GPUs in terms of GPGPU computing), or maybe because they don't have the money for it, or don't think it's important enough right now.

That's speculation, but it's just as plausible as your statement that this is what Nvidia wanted and that it is why GT200 ended up so big (which it didn't, if you compare it to G92 and the amount of raw fps it can spit out).

And 460mm2 vs 487mm2 isn't such a big difference lol. At 460mm2 GT200 would still have been considered big. Adding redundancy is also a bad argument. G92 had the same thing: some GPUs ended up in an 8800GTS 512, some in the 9800GT. Some GT200s ended up in the GTX285, some in the GTX260.

Really, you keep reiterating that GT200 is so big because of its IMMENSE GPGPU capabilities. But you don't know. Neither do I. Yet you keep using it as an argument, which imo isn't a very strong one.
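As an aside, to put rough numbers on the wafer and die-size points argued back and forth above, here is a back-of-the-envelope sketch. The die areas are the 230mm2 and 490mm2 figures quoted in this thread, the dies-per-wafer formula is the usual 300mm-wafer approximation, and the flat 60% yield is purely illustrative - none of it comes from NVIDIA or AMD.

// Back-of-the-envelope check of the "~2 wafers per 1" and die-size claims above.
// Compiles as plain C++ (or as the host part of a .cu file); the die areas are
// the ones quoted in this thread, and the yield figure is an assumption.
#include <cstdio>
#include <cmath>

// Usual approximation for gross candidate dies on a round wafer:
// dies = pi*r^2/area - pi*d/sqrt(2*area), the second term being edge loss.
static int diesPerWafer(double area_mm2, double wafer_diam_mm) {
    const double kPi = 3.14159265358979323846;
    const double r = wafer_diam_mm / 2.0;
    return static_cast<int>(kPi * r * r / area_mm2
                            - kPi * wafer_diam_mm / std::sqrt(2.0 * area_mm2));
}

int main() {
    const double g92_mm2   = 230.0;  // G92(b) area as quoted in the thread
    const double gt200_mm2 = 490.0;  // GT200(b) area as quoted in the thread
    const double yield     = 0.60;   // illustrative only, not a real figure

    const int g92   = diesPerWafer(g92_mm2, 300.0);
    const int gt200 = diesPerWafer(gt200_mm2, 300.0);

    std::printf("G92:   ~%d gross dies/wafer, ~%d good at 60%% yield\n",
                g92, static_cast<int>(g92 * yield));
    std::printf("GT200: ~%d gross dies/wafer, ~%d good at 60%% yield\n",
                gt200, static_cast<int>(gt200 * yield));
    // With these inputs G92 comes out at a bit more than 2x the GT200 count,
    // which is roughly the "~2 wafers to match 1" claim quoted above.
    return 0;
}

Run it with different flat yields and the ratio barely moves, since a flat yield scales both sides equally; a proper defect-density model would hit the bigger die harder, which is the cost argument behind the small-die strategy.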
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Keysplayr
Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. 8800GT/GTS512 were G92 based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

The first G92, the 8800GT, launched on 2007.10.29, well after the HD3800 series. The 8800GTS 512MB launched on 2007.12.11 - even later (1.5 months after the GT). The HD3870x2 launched on 2008.01.28 - another 1.5 months after that. So yeah... the HD3800 was worse... oh wait, it wasn't. It was a competitive solution - the HD3870 was slower and cheaper than an 8800GT.

And what? Now we're talking about cores, not series? You're comparing ATi series to nVidia cores? That is totally crazy - where would the newest GTS250, a G92b core, fall then? Or a GTS240 (GT G92b) - if it exists at all. There was around 6 months between the first 8-series G92 and the first 9-series G92. Both of them launched after the HD3800 series (8800GT after HD3870, 9800GX2 after HD3870x2). So that's also wrong - though that's beside the point.

Finally, 5 days later is so insignificant - how can you even bring that up? It's like saying 1 FPS more on average makes a card faster, when both are pushing 60+.

What do you compare when you create market segments? Price. So yeah, a lower price does make a card faster. Do I need to remind you the HD4870x2 followed the same route? It was faster and cheaper than a GTX280 - though back then nRollo was on a crusade to show that a GTX280 is so much better, because it's not a dual-GPU solution (to name one of the reasons he was pushing). Seriously, I am at a loss for words; how can you stir things up like that... Maybe because I am right, and that not only doesn't make ATi look bad but makes them look competitive for longer than people think? And because the current state is still the fallout of the HD2900 fiasco? Because that makes them look like a company that has forced prices down and given "your average Joe" access to high-end performance for a very attractive price? And doesn't make nVidia look like the "One True Company" people should go to?

Wreckage clearly posted things wrong, stating in his usual arrogant way "Ati=teh suck". People read those things, really. And leaving such blatantly false information unattended would just create the impression that it's correct - which it clearly isn't, as proven above in my posts.

And just so we're clear - I am looking forward to the GT300 cards, because I'm sure the new Radeons will be severely overpriced, with being a lot earlier to market as one of the reasons. Also, because it looks like the performance of those cards will be amazing. I'm just wondering if it will run Crysis at Full HD, maxed AA, at 60+ FPS.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Originally posted by: Wreckage
Originally posted by: Keysplayr

Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. 8800GT/GTS512 were G92 based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

That would be more accurate as I am referring to GPUs. His discussion of pricing and X2 cards was irrelevant.

A statement like "4xxx was out after the GT200 and slower" elides a number of key facts about the way the two GPU architectures match up, and does nothing to help consumers understand what the particular strengths and weaknesses of each company's offerings may be.

In other words, this is a sweeping generalization that is of little substantive use. Continue to make them if you wish, but I can't see what possible purpose they serve.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and these don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per 1 wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also), a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.

I wonder why IDC isn't shooting holes in this argument with his peewee comment.

I'll admit this one is over my head... explain to me again how or why I should be shooting holes in keys' argument with peewee comments? I'm not following. To my knowledge, Nvidia has not made a public campaign out of attempting to convince people that their larger GPU was part of some greater strategy to take over the GPGPU world. It is something we enthusiasts are speculating on, not part of a larger marketing strategy for us to believe. (I could be wrong here - has NV come out with a PR piece trying to write the history of the GT200 as being so big because they explicitly wanted both the GP and GPU markets under one IHS?)
 

Blazer7

Golden Member
Jun 26, 2007
1,105
5
81
Nice... I hope that this is accurate. If it is, then this is the best scenario for us consumers: we'll see DX11 parts from both companies before Xmas. Hopefully they'll both release their next-gen cards within a month or two. This will definitely help keep prices within sane limits.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and these don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per 1 wafer, assuming yields are equal.

Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.

Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind also), a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.

Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.

I wonder why IDC isn't shooting holes in this argument with his peewee comment.

I do not think Nvidia 'meant' G80 (and GT200 for that matter, as they are mighty similar) to be 'good' at GPGPU ops. It just is, because GPUs simply are massively parallel processing units. Besides gaming performance, Nvidia probably realised its GPUs had untapped potential, and they unlocked that potential by writing CUDA for them. And it surely is no experiment; you don't experiment by producing millions of a GPU. They design it, and KNOW what it can do in terms of GPGPU ops long before it is released onto the market.

Look at the die size of a G92 and look at how many shaders it has. Now look at GT200(b) and how many shaders it has. Its die size correlates directly to the number of shaders (and the TMUs/ROPs to go with them). Knowing that G80, and thus G92, is also very good at GPGPU ops but simply has fewer shaders means Nvidia meant to build a massive GPU, good at GPGPU ops, more than 3 years ago, before it even acquired Ageia. That's what you are saying.

Now, I dare say that if AMD were to invest as many R&D resources into ATI Stream, its GPUs would deliver similar performance to Nvidia's GPUs. Not because they are such great GPUs, but because it's inherent to GPUs that can run games through the DirectX API, like Nvidia's and ATI's GPUs do (which requires massive parallel processing power).

The two bolded comments above directly contradict one another. So which is it? They didn't mean for it to be a good GPGPU and it just ended up that way? Or did they KNOW what they had long before it was released onto the market?

Die size? G92 is 230mm2 at 55nm. GT200 is 490mm2 at 55nm. Even if you doubled everything the G92 has - 128 to 256 shaders, 256-bit memory interface to 512-bit, 16 ROPs to 32 ROPs - you'd still only end up with 460mm2, and that includes redundancy transistors that probably wouldn't be needed if you were just adding shaders, memory controllers, and ROPs. The die size does NOT correlate directly. AND you forget the external NVIO chip present on GT200 cards, which G92 had moved ON DIE when it moved from G80.

Stream: If ATI could have done it, they would have. Heck, they are working on it now with little to no success if you compare it to what CUDA is now.

ATI's architecture is excellent for gaming, as they have shown. But that is where the excellence ends. G80 through GT200 are just all-around technologically more advanced, as is evident from what they're capable of. To deny this would simply be a farce.
Whether ATI's architecture is truly more advanced we'll never know, because nobody wants to code for it. You have to provide the tools as well as the hardware.

You're not thinking things through, Marc.

It only contradicts in the way you read it, keys. They meant it to be good at gaming, and as a bonus it's also good at GPGPU ops. But that's because GPUs in general are good at some things CPUs aren't very good at, because of their parallel structure. CUDA unlocks that potential.

CUDA takes a LOT of time and money to produce though. That's something Nvidia chose to do. AMD/ATI chose not to, maybe because they can't (I doubt it, some apps can do pretty impressive things with ATI GPUs in terms of GPGPU computing), or maybe because they don't have the money for it, or don't think it's important enough right now.

That's speculation, but it's just as plausible as your statement that this is what Nvidia wanted and that it is why GT200 ended up so big (which it didn't, if you compare it to G92 and the amount of raw fps it can spit out).

And 460mm2 vs 487mm2 isn't such a big difference lol. At 460mm2 GT200 would still have been considered big. Adding redundancy is also a bad argument. G92 had the same thing: some GPUs ended up in an 8800GTS 512, some in the 9800GT. Some GT200s ended up in the GTX285, some in the GTX260.

Really, you keep reiterating that GT200 is so big because of its IMMENSE GPGPU capabilities. But you don't know. Neither do I. Yet you keep using it as an argument, which imo isn't a very strong one.

No, it contradicts for anyone who reads it. What did you mean by "in the way you read it"? There is only one way to read it. If you meant something else, then WRITE something else.

How much money and time does CUDA take to produce, Marc? You have no idea, yet you feel free to make these comments. I honestly don't know what you were thinking.

I'm sorry, but I lost interest after the 2nd paragraph, as you are just trying to make sense where there is none.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,230
2
0
Originally posted by: Wreckage
Originally posted by: Keysplayr

Qbah, if you change it to this:

2xxx was out after the G80 and slower
3xxx was out after the G92 and slower
4xxx was out after the GT200 and slower

It changes things. 8800GT/GTS512 were G92 based chips well before the 9xxx series launched. So this may change some of your post above.

And by the way, faster at their launch price does not make them faster gaming cards.

That would be more accurate as I am referring to GPUs. His discussion of pricing and X2 cards was irrelevant.


Of course it's irrelevant, who in their right mind cares about pricing? I mean, I'm all for buying the top card for 2 grand, how about you? :roll:

 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: ShadowOfMyself


Of course it's irrelevant, who in their right mind cares about pricing? I mean, I'm all for buying the top card for 2 grand, how about you? :roll:

Yes, I suppose when someone has a losing argument, changing the topic is their last desperate hope.

I was discussing performance and timing. He tried to explain all that away with irrelevant information. Not to mention that the speed of the chips did not change but the price does, so it's useless to even mention it in retrospect.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
There is, was, and always has been plenty to choose from in all price ranges from both camps.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Keys - I don't find Marc's comments contradictory at all. He's simply saying that he doesn't think that NV designed G80/GT200 with GPGPU in mind, but that they knew they would make good GPGPUs from the outset because of the kind of chips they are: massively parallel. In fact, I can't tell what you think is actually contradictory. Do you think he's saying, "They didn't design it for GPGPU....They designed it for GPGPU?" Because that's not what he appears to be saying. He appears to be saying, "They didn't design it for GPGPU...But they knew it would make a good GPGPU because of the design."
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
That's a little better explanation. But explain this to me: why is Marc so sure that ATI planned what they did, and Nvidia did not?
I don't see where he is getting his conclusions from, with the exception of buying into ATI's PR that "they meant to do that".
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: dreddfunk
Keys - I don't find Marc's comments contradictory at all. He's simply saying that he doesn't think that NV designed G80/GT200 with GPGPU in mind, but that they knew they would make good GPGPUs from the outset because of the kind of chips they are: massively parallel. In fact, I can't tell what you think is actually contradictory. Do you think he's saying, "They didn't design it for GPGPU....They designed it for GPGPU?" Because that's not what he appears to be saying. He appears to be saying, "They didn't design it for GPGPU...But they knew it would make a good GPGPU because of the design."

Could you imagine if someone argued that "Intel didn't design Nehalem for use as a CPU, but they knew it would make a good CPU because of the design"?

Or "Qimonda didn't design IDGV1G-05A1F1C-50X for use as GDDR5, but they knew it would make for good GDDR5 because of the design".

It would be pretty remarkable if the GT200 happened to be CUDA 2.0 compliant by mere happenstance and coincidence and not by premeditated design. About as remarkable as Nehalem just so happening to be x86 compliant and not premeditated.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
IMO, AMD planned what they did, and they got lucky with it. I have no idea what NV had planned, but NV's product launches / executions haven't been what they used to be...

So the perception is there. AMD is on track, on time, as planned, etc. NV is now all marketing, talk, etc. It used to be the other way around not too long ago.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Keysplayr
That's a little better explanation. But explain this to me: why is Marc so sure that ATI planned what they did, and Nvidia did not?
I don't see where he is getting his conclusions from, with the exception of buying into ATI's PR that "they meant to do that".

That's just the thing. I don't know. Yet when a company says they did something on purpose, it's not credible at all; no, it's coincidence instead.

Also, Nvidia probably ended up exactly with what they wanted, but I do not think they designed it with CUDA in mind.

And it's not very far-fetched to think Nvidia dedicates a lot more time and resources to CUDA than ATI does to Stream. If you're really going to dispute this, I'd have to give up, because I can't argue against someone so stubborn.

It would be pretty remarkable if the GT200 happened to be CUDA 2.0 compliant by mere happenstance and coincidence and not by premeditated design. About as remarkable as Nehalem just so happening to be x86 compliant and not premeditated.

You're losing credibility here. I'm saying GT200 was designed to be DX10 compliant in the first place. Then CUDA was written 'around' the GPU to unlock its GPGPU potential. And Nvidia had this possibility because CUDA isn't very different from DirectX. It's just an API that can put all that processing power to good use, just like DirectX can make use of all that processing power to put x amount of fps on our screens. The x86 extension is a whole other ballgame.

And surely, after G80 it's very well possible Nvidia optimized GT200 for GPGPU computing, but they have had most of their architecture in place since G80. (I'm actually pretty interested in how well GT200 fares against G80/G92 on a clock-for-clock and shader-vs-shader basis.)
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
I think the G80 was designed with CUDA in mind and the GT2xx was a clear extension of that.

Look how much better it runs things like game physics and Folding@home. It has to do with more than just CUDA; it has to do with the hardware as well.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: MarcVenice
Originally posted by: Keysplayr
That's a little better explanation. But explain this to me: why is Marc so sure that ATI planned what they did, and Nvidia did not?
I don't see where he is getting his conclusions from, with the exception of buying into ATI's PR that "they meant to do that".

That's just the thing. I don't know. Yet when a company says they did something on purpose, it's not credible at all; no, it's coincidence instead.

Also, Nvidia probably ended up exactly with what they wanted, but I do not think they designed it with CUDA in mind.

And it's not very far-fetched to think Nvidia dedicates a lot more time and resources to CUDA than ATI does to Stream. If you're really going to dispute this, I'd have to give up, because I can't argue against someone so stubborn.

It would be pretty remarkable if the GT200 happened to be CUDA 2.0 compliant by mere happenstance and coincidence and not by premeditated design. About as remarkable as Nehalem just so happening to be x86 compliant and not premeditated.

You're losing credibility here. I'm saying GT200 was designed to be DX10 compliant in the first place. Then CUDA was written 'around' the GPU to unlock its GPGPU potential. And Nvidia had this possibility because CUDA isn't very different from DirectX. It's just an API that can put all that processing power to good use, just like DirectX can make use of all that processing power to put x amount of fps on our screens. The x86 extension is a whole other ballgame.

And surely, after G80 it's very well possible Nvidia optimized GT200 for GPGPU computing, but they have had most of their architecture in place since G80. (I'm actually pretty interested in how well GT200 fares against G80/G92 on a clock-for-clock and shader-vs-shader basis.)

I'm losing credibility? What's with the personal side-jabs?

You have an interesting concept of what actually goes into designing an IC, how the project is defined, the various stages it must go thru from a project management and timeline standpoint.

You don't just end up at the end of that process with a chip that coincidentally functions as dual-purpose. Nor do you end up with a chip for which you subsequently pull together a team to create the infrastructure necessary to support an ISA. The timelines for both efforts are so lengthy they MUST be carried out in parallel.

Could you imagine if Intel's compiler team didn't start working on incorporating SSE4.2 ISA support into their compilers until Nehalem chips started coming off the line? We'd still be waiting for compilers if that was how these things worked.

There is no plausible way Nvidia could have created the GT200 and only after the fact pulled together an ISA support team to retroactively draft a CUDA 2.0 spec and create the necessary software/compiler infrastructure for the apps that require it. The time required to do such a thing doesn't fit reality.

Now if the gap between the hardware release and the secondary (CUDA in this case) ISA support were ~1 yr or longer, then yeah, I'd totally buy it as plausible that someone figured out they could extract a secondary purpose out of a chip's existing ISA after the fact. But to my knowledge, the GT200 release and the CUDA 2.0 release occurred at the same time.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Idontcare
Originally posted by: MarcVenice
Originally posted by: Keysplayr
That's a little better explanation. But explain this to me: why is Marc so sure that ATI planned what they did, and Nvidia did not?
I don't see where he is getting his conclusions from, with the exception of buying into ATI's PR that "they meant to do that".

That's just the thing. I don't know. Yet when a company says they did something on purpose, it's not credible at all; no, it's coincidence instead.

Also, Nvidia probably ended up exactly with what they wanted, but I do not think they designed it with CUDA in mind.

And it's not very far-fetched to think Nvidia dedicates a lot more time and resources to CUDA than ATI does to Stream. If you're really going to dispute this, I'd have to give up, because I can't argue against someone so stubborn.

It would be pretty remarkable if the GT200 happened to be CUDA 2.0 compliant by mere happenstance and coincidence and not by premeditated design. About as remarkable as Nehalem just so happening to be x86 compliant and not premeditated.

You're losing credibility here. I'm saying GT200 was designed to be DX10 compliant in the first place. Then CUDA was written 'around' the GPU to unlock its GPGPU potential. And Nvidia had this possibility because CUDA isn't very different from DirectX. It's just an API that can put all that processing power to good use, just like DirectX can make use of all that processing power to put x amount of fps on our screens. The x86 extension is a whole other ballgame.

And surely, after G80 it's very well possible Nvidia optimized GT200 for GPGPU computing, but they have had most of their architecture in place since G80. (I'm actually pretty interested in how well GT200 fares against G80/G92 on a clock-for-clock and shader-vs-shader basis.)

I'm losing credibility? What's with the personal side-jabs?

You have an interesting concept of what actually goes into designing an IC, how the project is defined, the various stages it must go thru from a project management and timeline standpoint.

You don't just end up at the end of that process with a chip that coincidentally functions as dual-purpose. Nor do you end up with a chip for which you subsequently pull together a team to create the infrastructure necessary to support an ISA. The timelines for both efforts are so lengthy they MUST be carried out in parallel.

Could you imagine if Intel's compiler team didn't start working on incorporating SSE4.2 ISA support into their compilers until Nehalem chips started coming off the line? We'd still be waiting for compilers if that was how these things worked.

There is no plausible way Nvidia could have created the GT200 and only after the fact pulled together an ISA support team to retroactively draft a CUDA 2.0 spec and create the necessary software/compiler infrastructure for the apps that require it. The time required to do such a thing doesn't fit reality.

Now if the gap between the hardware release and the secondary (CUDA in this case) ISA support were ~1 yr or longer, then yeah, I'd totally buy it as plausible that someone figured out they could extract a secondary purpose out of a chip's existing ISA after the fact. But to my knowledge, the GT200 release and the CUDA 2.0 release occurred at the same time.

No, you're right, that's not what happened. Nvidia released G80 in November 2006, and somewhere around February 2007 they released CUDA, meaning they most likely started working on CUDA before they released G80. But I doubt they had CUDA in mind when designing G80, which they started in 2004, maybe? GT200 is just a minor improvement of G80, and CUDA 2.0 is just an improvement of CUDA. So we're talking G80 here, and CUDA - the building blocks of GT200 and CUDA 2.0 respectively.

Also, CUDA is based on C; as far as my understanding of programming goes, that's just a programming language that Nvidia somehow got to work on their GPUs. It can't be compared to x86 or the SSE extensions.

Another point would be ATI's Stream. Imo ATI also did not design their GPU with GPGPU ops in mind. To at least give us the faint idea that they are doing something with it, they whipped up ATI Stream - something they did not plan at all, yet it works. And Nvidia's cards can also run OpenCL, something I do not think Nvidia had planned when designing G80. I might be mistaken on this one if OpenCL is run through CUDA, though.
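For anyone wondering what 'based on C' means in practice, here is a minimal, hypothetical CUDA sketch - just an illustration of the programming model (ordinary C/C++ plus a few extensions for launching many parallel threads), not anything specific to G80/GT200 or to how NVIDIA's own code looks. The kernel and variable names are made up for this example.

// Minimal CUDA example: plain C/C++ plus a few extensions.
// scaleAndAdd is a made-up kernel; one GPU thread handles one element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleAndAdd(const float* x, const float* y, float* out, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
    if (i < n) out[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                    // 1M elements, arbitrary
    const size_t bytes = n * sizeof(float);

    // Plain host-side C++ setup.
    float* hx = new float[n];
    float* hy = new float[n];
    float* hout = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs over.
    float *dx, *dy, *dout;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMalloc(&dout, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch thousands of threads at once - this is where the GPU's
    // parallelism gets used, the same way shaders use it for pixels.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scaleAndAdd<<<blocks, threads>>>(dx, dy, dout, 3.0f, n);

    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);
    std::printf("out[0] = %f\n", hout[0]);    // expect 5.0 (3*1 + 2)

    cudaFree(dx); cudaFree(dy); cudaFree(dout);
    delete[] hx; delete[] hy; delete[] hout;
    return 0;
}

The question being argued in this thread is whether the hardware underneath needed anything extra to make this kind of code possible, or whether the compiler, runtime, and libraries around it are where the real investment went.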
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Originally posted by: Idontcare
Originally posted by: MarcVenice
Originally posted by: Keysplayr
That's a little better explanation. But explain this to me: why is Marc so sure that ATI planned what they did, and Nvidia did not?
I don't see where he is getting his conclusions from, with the exception of buying into ATI's PR that "they meant to do that".

That's just the thing. I don't know. Yet when a company says they did something on purpose, it's not credible at all; no, it's coincidence instead.

Also, Nvidia probably ended up exactly with what they wanted, but I do not think they designed it with CUDA in mind.

And it's not very far-fetched to think Nvidia dedicates a lot more time and resources to CUDA than ATI does to Stream. If you're really going to dispute this, I'd have to give up, because I can't argue against someone so stubborn.

It would be pretty remarkable if the GT200 happened to be CUDA 2.0 compliant by mere happenstance and coincidence and not by premeditated design. About as remarkable as Nehalem just so happening to be x86 compliant and not premeditated.

You're losing credibility here. I'm saying GT200 was designed to be DX10 compliant in the first place. Then CUDA was written 'around' the GPU to unlock its GPGPU potential. And Nvidia had this possibility because CUDA isn't very different from DirectX. It's just an API that can put all that processing power to good use, just like DirectX can make use of all that processing power to put x amount of fps on our screens. The x86 extension is a whole other ballgame.

And surely, after G80 it's very well possible Nvidia optimized GT200 for GPGPU computing, but they have had most of their architecture in place since G80. (I'm actually pretty interested in how well GT200 fares against G80/G92 on a clock-for-clock and shader-vs-shader basis.)

I'm losing credibility? What's with the personal side-jabs?

You have an interesting concept of what actually goes into designing an IC, how the project is defined, the various stages it must go thru from a project management and timeline standpoint.

You don't just end up at the end of that process with a chip that coincidentally functions as dual-purpose. Nor do you end up with a chip for which you subsequently pull together a team to create the infrastructure necessary to support an ISA. The timelines for both efforts are so lengthy they MUST be carried out in parallel.

Could you imagine if Intel's compiler team didn't start working on incorporating SSE4.2 ISA support into their compilers until Nehalem chips started coming off the line? We'd still be waiting for compilers if that was how these things worked.

There is no plausible way Nvidia could have created the GT200 and only after the fact pulled together an ISA support team to retroactively draft a CUDA 2.0 spec and create the necessary software/compiler infrastructure for the apps that require it. The time required to do such a thing doesn't fit reality.

Now if the gap between the hardware release and the secondary (CUDA in this case) ISA support were ~1 yr or longer, then yeah, I'd totally buy it as plausible that someone figured out they could extract a secondary purpose out of a chip's existing ISA after the fact. But to my knowledge, the GT200 release and the CUDA 2.0 release occurred at the same time.

No, you're right, that's not what happened. Nvidia released G80 in November 2006, and somewhere around February 2007 they released CUDA, meaning they most likely started working on CUDA before they released G80. But I doubt they had CUDA in mind when designing G80, which they started in 2004, maybe? GT200 is just a minor improvement of G80, and CUDA 2.0 is just an improvement of CUDA. So we're talking G80 here, and CUDA - the building blocks of GT200 and CUDA 2.0 respectively.

Also, CUDA is based on C; as far as my understanding of programming goes, that's just a programming language that Nvidia somehow got to work on their GPUs. It can't be compared to x86 or the SSE extensions.

Another point would be ATI's Stream. Imo ATI also did not design their GPU with GPGPU ops in mind. To at least give us the faint idea that they are doing something with it, they whipped up ATI Stream - something they did not plan at all, yet it works. And Nvidia's cards can also run OpenCL, something I do not think Nvidia had planned when designing G80. I might be mistaken on this one if OpenCL is run through CUDA, though.

So basically, you feel that because ATI did not have the collective presence of mind to create not only a great gaming GPU but also a great GPGPU, Nvidia's product could only have been an accident, or a bonus, in simple terms.

Am I getting this correct?
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Guys, I think this is all a mountain over a mole-hill. I simply think what Marc is trying to say is that he feels:

1) G80 wasn't initially designed with GPGPU in mind.
2) That, despite the close proximity of CUDA's launch to G80, the development cycle for G80 was long enough for it to plausibly be an after-the-fact (of design) decision on NV's part
3) That the architecture of any GPU has the potential for making a good GPGPU, if the company throws enough weight behind developing the API, etc., for such applications.
4) That NV decided to throw enough weight behind their GPU's GPGPU potential to create CUDA, and make a stronger move than AMD to go after the GPGPU market (which seems like a good marketing move now, so he's complimenting NV here).

I just don't see this as a slight to NV at all--nor does it make ATI look like a savant. He's not saying that CUDA is an 'accident' or 'lucky'. I think he's merely trying to point out that NV may not have had to do very much (if anything) on a hardware side to make G80/GT200 good GPGPUs.

I can't evaluate that statement (or Marc's position), as I'm no engineer. If someone is a credible GPU engineer here, then perhaps they could explain to us just what hardware differences are required, or if it is merely building the proper software to access the hardware. I admit to being confused. Some seem to be saying that NV had to consider GPGPU a lot when designing the G80/GT200, and that it impacted the number of transistors and die size in some way, but we're short of verifiable information--or even truly knowledgeable speculation.

Honestly, however, I think people are looking for things to interpret as slights in Marc's comments.


IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, which has a large, flat bed behind the cab, it would be no surprise that it would also be good at hauling bricks. After all, it's good at hauling things that can fit into large, flat beds.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: dreddfunk
Guys, I think this is all a mountain over a mole-hill. I simply think what Marc is trying to say is that he feels:

1) G80 wasn't initially designed with GPGPU in mind.
2) That, despite the close proximity of CUDA's launch to G80, the development cycle for G80 was long enough for it to plausibly be an after-the-fact (of design) decision on NV's part
3) That the architecture of any GPU has the potential for making a good GPGPU, if the company throws enough weight behind developing the API, etc., for such applications.
4) That NV decided to throw enough weight behind their GPU's GPGPU potential to create CUDA, and make a stronger move than AMD to go after the GPGPU market (which seems like a good marketing move now, so he's complimenting NV here).

I just don't see this as a slight to NV at all--nor does it make ATI look like a savant. He's not saying that CUDA is an 'accident' or 'lucky'. I think he's merely trying to point out that NV may not have had to do very much (if anything) on a hardware side to make G80/GT200 good GPGPUs.

I can't evaluate that statement (or Marc's position), as I'm no engineer. If someone is a credible GPU engineer here, then perhaps they could explain to us just what hardware differences are required, or if it is merely building the proper software to access the hardware. I admit to being confused. Some seem to be saying that NV had to consider GPGPU a lot when designing the G80/GT200, and that it impacted the number of transistors and die size in some way, but we're short of verifiable information--or even truly knowledgeable speculation.

Honestly, however, I think people are looking for things to interpret as slights in Marc's comments.

IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, which has a large, flat bed behind the cab, it would be no surprise that it would also be good at hauling bricks. After all, it's good at hauling things that can fit into large, flat beds.

Thank you for clarifying
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: dreddfunk
Guys, I think this is all a mountain over a mole-hill. I simply think what Marc is trying to say is that he feels:

1) G80 wasn't initially designed with GPGPU in mind.
2) That, despite the close proximity of CUDA's launch to G80, the development cycle for G80 was long enough for it to plausibly be an after-the-fact (of design) decision on NV's part
3) That the architecture of any GPU has the potential for making a good GPGPU, if the company throws enough weight behind developing the API, etc., for such applications.
4) That NV decided to throw enough weight behind their GPU's GPGPU potential to create CUDA, and make a stronger move than AMD to go after the GPGPU market (which seems like a good marketing move now, so he's complimenting NV here).

I just don't see this as a slight to NV at all--nor does it make ATI look like a savant. He's not saying that CUDA is an 'accident' or 'lucky'. I think he's merely trying to point out that NV may not have had to do very much (if anything) on a hardware side to make G80/GT200 good GPGPUs.

I can't evaluate that statement (or Marc's position), as I'm no engineer. If someone is a credible GPU engineer here, then perhaps they could explain to us just what hardware differences are required, or if it is merely building the proper software to access the hardware. I admit to being confused. Some seem to be saying that NV had to consider GPGPU a lot when designing the G80/GT200, and that it impacted the number of transistors and die size in some way, but we're short of verifiable information--or even truly knowledgeable speculation.

Honestly, however, I think people are looking for things to interpret as slights in Marc's comments.


IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, which has a large, flat bed behind the cab, it would be no surprise that it would also be good at hauling bricks. After all, it's good at hauling things that can fit into large, flat beds.

This is all well and good DF, but all of this to what end? What was his point, or reason, for this information? And why am I asking you? LOL
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Keysplayr
Originally posted by: dreddfunk
Guys, I think this is all a mountain over a mole-hill. I simply think what Marc is trying to say is that he feels:

1) G80 wasn't initially designed with GPGPU in mind.
2) That, despite the close proximity of CUDA's launch to G80, the development cycle for G80 was long enough for it to plausibly be an after-the-fact (of design) decision on NV's part
3) That the architecture of any GPU has the potential for making a good GPGPU, if the company throws enough weight behind developing the API, etc., for such applications.
4) That NV decided to throw enough weight behind their GPU's GPGPU potential to create CUDA, and make a stronger move than AMD to go after the GPGPU market (which seems like a good marketing move now, so he's complimenting NV here).

I just don't see this as a slight to NV at all--nor does it make ATI look like a savant. He's not saying that CUDA is an 'accident' or 'lucky'. I think he's merely trying to point out that NV may not have had to do very much (if anything) on a hardware side to make G80/GT200 good GPGPUs.

I can't evaluate that statement (or Marc's position), as I'm no engineer. If someone is a credible GPU engineer here, then perhaps they could explain to us just what hardware differences are required, or if it is merely building the proper software to access the hardware. I admit to being confused. Some seem to be saying that NV had to consider GPGPU a lot when designing the G80/GT200, and that it impacted the number of transistors and die size in some way, but we're short of verifiable information--or even truly knowledgeable speculation.

Honestly, however, I think people are looking for things to interpret as slights in Marc's comments.


IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, which has a large, flat bed behind the cab, it would be no surprise that it would also be good at hauling bricks. After all, it's good at hauling things that can fit into large, flat beds.

This is all well and good DF, but all of this to what end? What was his point, or reason, for this information? And why am I asking you? LOL

The point being that, by your reasoning, GT200 ended up so big because its GPGPU capabilities take up extra die space. And I'm saying that's incorrect. You forget easily, it seems.
 

Blazer7

Golden Member
Jun 26, 2007
1,105
5
81
Watching you guys argue is a pleasure. I personally believe that both companies are proceeding as they've planned: nV with CUDA in mind and ATI with their most cost-effective small-die strategy.

However, here's a question regarding CUDA. Most of the people out there have little to no use for CUDA right now, and it may take a few more years for CUDA to become a necessity, if it ever becomes one. If CUDA is the reason that nV produces such big and expensive chips, don't you guys think that nV's current policy hurts us consumers?

By the time CUDA makes some sense to the average Joe, new and more advanced GPUs will be out. So why pay for CUDA now if we don't really need it? It would make much more sense if nV only implemented their CUDA logic in a specific line of GPUs.

That would be fairer towards us consumers, as those who don't care much about CUDA could skip it and buy a better gaming GPU, and vice versa.
 