SM3.0 is a scam.

KeepItRed

Senior member
Jul 19, 2005
811
0
0
the innovator who made them possible, or the follower.

"20 years of innovation" -ATI

Excluding Crossfire...

That is kind of how the business goes: one person invents something, then the other copies it and outperforms it. In this case Nvidia makes something and ATI copies it; ATI makes something and Nvidia copies it. That is basically the only alternative to the competitor's solution...
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Jumping in (dang, where is my helmet!)

Buying a card based on it having SM3 to 'future proof' your purchase is the wrong reason to buy a card with SM3. SM3 is goodness. But considering whatever DX10 turns out to be (I have notes, but like heck it is worth digging them out...) in MS Vista's Aero Glass, you may be missing components to run in full goodness mode. The 7800 and 520 'might' be minimum hardware for the full feature set and then again might not be (it has been more than a year since I sat in on the dev discussion).

Summary: SM3 = good. Telling someone that unless you have SM3, you suck = bad. Knowing that the moment you bought it, it was obsolete = priceless.

 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
The 7800 and 520 'might' be minimum hardware for the full feature set and then again might not be (it has been more than a year since I sat in on the dev discussion).

No, AFAIK you simply need to have pixel shaders and DX9c support in order to run the new Windows in all its "glory". Microsoft is not that stupid.

If someone is going to keep the card for like 2 or more years, why not get an SM3 card? The X800's and 6800's have relatively the same performance; the only difference is that the 6800's have SM3, so why not get it if you are keeping the card for a good long while?

-Kevin
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Originally posted by: Gamingphreek
The 7800 and 520 'might' be minimum hardware for the full feature set and then again might not be (it has been more than a year since I sat in on the dev discussion).

No, AFAIK you simply need to have pixel shaders and DX9c support in order to run the new Windows in all its "glory". Microsoft is not that stupid.

If someone is going to keep the card for like 2 or more years, why not get an SM3 card? The X800's and 6800's have relatively the same performance; the only difference is that the 6800's have SM3, so why not get it if you are keeping the card for a good long while?

-Kevin
That contradicts what they told us at WinHEC in 2004, but they have been known to change their minds.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: BenSkywalker
Matthias-(spelled it properly this time )

I didn't write that, but I think he might be trying to make the same argument as I was, which is that with the shader lengths you can actually use on something the speed of a GF6, the differences are minor. Many simple shaders don't even need conditional branching.

But the problem with that is what is an extremely lengthy shader on a SM2.0 part could be a very short one on a SM3.0 part to do the exact same thing. That is what we are really talking about here.

If a function requires X number of mathematical operations to compute in SM2.0, you can't do it in significantly less than X operations with SM3.0. You may save some of the overhead from using multiple passes or other tricks in SM2.0, but you can't magically do math in half the number of clock cycles. At least not in the general case.

In any case, building a game engine "from the ground up" with SM3.0 in mind will not magically make the shaders run faster.

Except that it very easily can, depending on the situation (although obviously that isn't always the case). Using FC as an example, if they had built the interior shaders from the ground up to use SM 3.0 they could have collapsed the passes and instruction count considerably (beyond what they have modified; it would require rewriting all of the shaders), making it run much better with higher levels of accuracy than the SM2.0 path.

Consider me skeptical of the improvement that could be gained from doing this. You're talking about very general changes, then just waving your hands and saying it will run "much better".

I'm unconvinced from the performance numbers that even a (single) 7800GTX can really start to take advantage of SM3.0

Of course, because you are thinking of it bass ackwards. What you should be asking yourself is can the 7800GTX handle the simplicity of SM3.0, not the complexity. There really isn't much you can do under PS3.0 that you can't do under SM2.0- you can just do it in a much simpler fashion.

Um, yeah.

What I mean is that the sort of changes you're talking about here (using shaders much more extensively) requires an enormous number of shader ops/second (even with SM3.0). I'm not sure that even the 7800GTX has that kind of shader horsepower (although maybe I'll be proven wrong in this regard).

The 7800GTX can crank out shader ops faster than it can draw pixels- what are you waiting for in terms of it being the 'primary function'?

While yes, the 7800GTX can do 24 pixel and 8 vertex shader ops/clock, it still maintains a comparatively large amount of fixed-function rendering capability. I'm talking about something that has, say, 16 FF pipelines but 64 pixel/vertex shaders (or something along those lines). Clearly, if you want to move away from super-high-res textures and triangle counts towards vastly improved vertex and pixel shaders, you will eventually need hardware that is built with that kind of engine design in mind (or else your transistor counts are just going to explode -- not that they haven't already with NVIDIA's last two generations of hardware). Next-gen console hardware seems to be heading in this direction already.

SM3.0 is about taking what would be a big scary shader op and making it a more reasonable one- it isn't about exploding complexity, it is about increasing simplicity.

Again, while you may reduce the "length" of the shader by using loops/branches, etc., you are not necessarily reducing the number of instructions executed by the GPU to run the shader. By 'complexity' I mean the computational complexity of running the shader, not necessarily the number of instructions needed to express it.

Beyond that, I'm in no way saying that we don't need a massive increase in shader performance before we can think about a fully shaded game- that said, SM3.0 is lowering that bar- and that is a Good Thing.

That I would agree with.

Considering that none of these consoles are on the market yet, I think we have a while (I'd say in the realm of 2-3 years, but that's a semi-random guess) before this becomes a major concern.

So you are under the impression that the major game developers are going to dumb down their titles for years prior to bringing them to the PC?

Yes, I think they're gonna have to make them work with at least SM2.0 for the next few years if they want to sell anything on the PC side. I'm pretty sure they will at least start by developing engines similar to the ones in existence today.

I just don't think it makes a lot of sense to base purchasing decisions for a PC graphics card on it right now.

SM3.0 is an advantage for whatever board has it versus an SM2.0 board, period. This is very far removed from an either/or choice ATM- either you get very fast SM2.0 performance with SM3.0 as an option, or you get very fast SM2.0 performance and no SM3.0 as an option.

The problem, as I have stated before, is that currently, the cards that have SM3.0 cost more than their non-SM3.0 equivalents (e.g. X800XL versus 6800GT), or cost about the same/slightly less but perform worse overall (e.g. 6600GT versus X800). Perhaps it would have been clearer if I had said that I do not believe SM3.0 should be the only thing you base a graphics card purchase on today.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You may save some of the overhead from using multiple passes or other tricks in SM2.0, but you can't magically do math in half the number of clock cycles. At least not in the general case.

Maybe not in the general case, but unrolling loops and executing every possible branch is certainly a lot more computationally intensive than handling it with the least possible number of steps.

Consider me skeptical of the improvement that could be gained from doing this. You're talking about very general changes, then just waving your hands and saying it will run "much better".

Take multiple light interactions on a given surface with a shader. Using SM 2.0, if you are running into multiple lights you need to recalculate the shader for each light interaction, as opposed to calculating how that particular light interaction is going to impact an already executed shader.
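Roughly what I mean, as a made-up HLSL sketch (the uniform names and light counts are just for illustration, not from any real engine):

// SM2.0 style: one shader per pass; the whole surface setup runs again
// for every additional light and the passes get blended together.
float3 lightDir;        // hypothetical per-pass uniforms
float3 lightColor;

float4 PS_OneLight(float3 normal : TEXCOORD0) : COLOR
{
    float3 n = normalize(normal);   // surface setup repeated every pass
    return float4(saturate(dot(n, -lightDir)) * lightColor, 1.0);
}

// SM3.0 style: do the surface setup once, then loop over however many
// lights are active in a single pass.
float3 lightDirs[8];
float3 lightColors[8];
int    numLights;       // hypothetical uniform set by the application

float4 PS_AllLights(float3 normal : TEXCOORD0) : COLOR
{
    float3 n = normalize(normal);   // surface setup done once
    float3 result = 0;
    for (int i = 0; i < numLights; i++)
        result += saturate(dot(n, -lightDirs[i])) * lightColors[i];
    return float4(result, 1.0);
}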

What I mean is that the sort of changes you're talking about here (using shaders much more extensively) requires an enormous number of shader ops/second (even with SM3.0). I'm not sure that even the 7800GTX has that kind of shader horsepower (although maybe I'll be proven wrong in this regard).

Shaders being used much more extensively is coming- the games are well under way in developmental terms and they look far superior to anything that is out now. These titles are currently targeting the 7800GTX feature set and performance levels.

While yes, the 7800GTX can do 24 pixel and 8 vertex shader ops/clock, it still maintains a comparatively large amount of fixed-function rendering capability.

As rasterizers, GPUs are still going to need a large amount of fixed function hardware, and as always everything that can remain fixed function in hardware should- it is orders of magnitude faster than flexible hardware.

I'm talking about something that has, say, 16 FF pipelines but 64 pixel/vertex shaders (or something along those lines). Clearly, if you want to move away from super-high-res textures and triangle counts towards vastly improved vertex and pixel shaders, you will eventually need hardware that is built with that kind of engine design in mind (or else your transistor counts are just going to explode -- not that they haven't already with NVIDIA's last two generations of hardware). Next-gen console hardware seems to be heading in this direction already.

You do realize the last part nVidia offered without programmable hardware was the TNT2Ultra? It isn't like basic rasterization has been driving transistor counts for the last four years- it has been programmable units and that trend isn't stopping.

Again, while you may reduce the "length" of the shader by using loops/branches, etc., you are not necessarily reducing the number of instructions executed by the GPU to run the shader. By 'complexity' I mean the computational complexity of running the shader, not necessarily the number of instructions needed to express it.

Could you explain how you can use loops, branches and collapse passes and not reduce the computational complexity? I can't think of a single example where you can do all of the former and not reduce overhead considerably.

Yes, I think they're gonna have to make them work with at least SM2.0 for the next few years if they want to sell anything on the PC side.

The PC side doesn't sell anyway. Sell 300K units on the consoles and you are an abject failure- those numbers are classed as a hit on the PC side. Even looking at a title like Halo (which as of last month was still a top ten selling PC game), where sales in the million unit range were likely on the PC, they couldn't be bothered to dumb down the graphics engine to get it to sell on the PC. They gave an option to shut shaders off and leave blank areas all over the place (instead of replacing them with texture maps), but the only consideration they had when porting was to bump up the shader complexity- despite the fact that the baseline engine wouldn't run decently on the overwhelming majority of hardware out at the time. Even with this horrible level of performance, Halo remained a best seller for years (into a time frame when it performed quite nicely).

The problem, as I have stated before, is that currently, the cards that have SM3.0 cost more than their non-SM3.0 equivalents (e.g. X800XL versus 6800GT), or cost about the same/slightly less but perform worse overall (e.g. 6600GT versus X800). Perhaps it would have been clearer if I had said that I do not believe SM3.0 should be the only thing you base a graphics card purchase on today.

So which cards are you going to use as an example? The 6800GT is faster than the X800XL and costs more- the X800 is faster than the 6600GT and, on a percentage basis, carries a larger premium than the 6800GT does over the X800XL.
 

TTLKurtis

Member
Oct 25, 2003
40
0
66
Originally posted by: Pr0d1gy
After owning both a sm 3.0 nVidia card & an X800XL, this is how I feel about SM 3.0

Obviously their plan worked because I see people everyday saying SM 3.0 is a reason to buy a video card. Well it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both & really appreciated the quality of the x800xl's display.

In conclusion, if you think SM 3.0 is some big deal & tell people that future games will have it, you are only telling nVidia "Yes, I want you to pay game developers to let you hack their graphics engine up & add some useless program so I can say I have the better video card".


That was a fairly uneducated opinion if I've ever seen one... sounds like you're an ATI fanboy to me. SM3.0 is better because more future games will use SM3.0. That is why ATI (hopefully) is going to be implementing SM3.0 in their next-gen hardware.

/edit: I'd also like to add that nvidia is currently better not BECAUSE of SM3.0, but because ATI has made some big mistakes and fallen behind while NVIDIA has released a new card that performs very well (and they're about to release another one). SM3.0 is just an added bonus.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: gac009
^Yes, but can the GPU in your 6600GT really take advantage of the new shaders?
I know you got a lot of love for SM 3.0 in your heart, but do you really believe that the 6600GT will outperform the X800 in games 2-3 years from now running the SM 3.0 path?
Have you ever tried out those cool SM 3.0 features and HDR while playing SC:CT on your 6600GT?

I believe that one day SM 3.0 will be king, but when that day comes I won't want to be gaming with a 6 series card.

Yes, my GPU can take advantage of the new features. It gives me a nice performance boost on SC:CT WITH parallax mapping on (much better than bump mapping) and some of the other features BESIDES HDR. I don't use HDR, I find it pointless (unless it's needed in the game, like the Lost Coast Half-Life 2 thingy, then I might have to use it)... I mean, HDR runs fine anyway with SC:CT. Last time I checked, going from SM1.1 to 3.0 not only improves IQ, but gives a decent performance boost. Going from SM1.1 to 2.0 gives more IQ, but lowers performance.

So yes, possibly in 2-3 years (by then I'll have a decent job and can afford a nice rig) my 6600GT running SM3 will outperform an X800 (assuming they're on the same IQ level, which, if I get good performance, I'd put higher). That's just speculation, but I mean, if a 6800GT can beat an X850 in FEAR while running SM3, I would assume that in a similar situation my 6600GT could beat an X800. Just assuming. I'm not saying "yeah, well with my SM3 my 6600GT will PWNZ a x800!!@"

And in my mind, if SM3 is going to be a standard one day, just as SM2 is, I might as well have it even if my GPU won't run games well. I mean, look, you can run some pretty old GPUs (old GeForces) on recent games such as HL2 or BF2. Sure, maybe at 25fps and lowest graphics settings, but you can still run them. Also, have you noticed that the mid-range cards of one generation (the 9600, for example) still play recent (which would be "next gen" for that series of cards) games well enough, assuming the rest of your rig is OK? Even my old 9200 played HL2 fairly well. Medium settings too. It played D3 also, in case you wondered. So I'm assuming my 6600GT will be the same, and since SM3 will be used EXTREMELY heavily in next gen, hell, I might as well have it. Even if my card doesn't run it well, it's not like the SM2 X800 would do much better. Since my budget was like $200 back in March, a 6600GT was not only the best performer for that much, but also had SM3.

So you see, with my budget, it made a lot of sense for me to get SM3: if I CAN use it next gen, that's great, but if I can't, I doubt an X800 would do better. So I said "why not get SM3? I could probably use it sometime", and I can. FEAR, Far Cry, SC:CT, and upcoming games.

I would say SM3 gives those advanced graphics, but you'd say "your GPU won't run it"... and frankly, you don't know that. So it might, actually. And if it doesn't, once again, an X800 will either look worse and run fine or run worse and try to look like SM3. Now, if I had to, I could always enable SM2 on my card IF it brought a bigger performance gain over SM3.

And I use the X800 as an example because it was kinda in the $200 price range at that time... kinda.

So really, it's not a losing situation at all for me (and if it is, the X800 would probably not have been better off), so SM3 was a smart decision on my part considering my budget and how rarely I upgrade my computer or get a new one.

Is that so hard to understand?
 

gac009

Senior member
Jun 10, 2005
403
0
0
^ I don't disagree with you about SM 3.0 being desirable. I wouldn't buy a high-end card without SM 3.0.
But for midrange and low-end cards I wouldn't worry about it. An X800 outperforms a 6600GT NOW; for low-end/midrange cards the future is always uncertain. I'm not saying that you don't have some valid points or are completely wrong, I just disagree with you posting to everyone asking which card to buy that they should automatically buy the one with SM 3.0. Sometimes you are right, but usually I think your comments are misleading. I have owned a 6600GT and it was an excellent card; I'm glad you enjoy it.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
I don't see why you wouldn't get it for a midrange card... the 6600GT is great. Now, that dude (forgot his name) that works on the UE3 engine said that any card based off of the 6600GT (and up, I'm assuming) would run the engine easily. He also mentioned how important shader model 3 was to them. Now, I understand I won't get uber graphics with the 6600GT, but with SM3 I will still get a performance boost over SM2, assuming they were around the same image quality settings.

So, with me knowing my card will run it, why not get shader model 3? It seems like it will be important.

And when most people ask for a card, the Nvidia ones are nicely priced, perform well, and have shader model 3. So why not recommend it considering it will be used?
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
I don't see why you wouldn't get it for a midrange card... the 6600GT is great. Now, that dude (forgot his name) that works on the UE3 engine said that any card based off of the 6600GT (and up, I'm assuming) would run the engine easily.

And that is where you are mistaken. The engine was made to scale all the way back to the 6600GT... therefore it will not run it easily, but it will run it.

-Kevin
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: BenSkywalker
You may save some of the overhead from using multiple passes or other tricks in SM2.0, but you can't magically do math in half the number of clock cycles. At least not in the general case.

Maybe not in the general case, but unrolling loops and executing every possible branch is certainly a lot more computationally intensive than handling it with the least possible number of steps.

???

That is not at all what I meant. If you have to compute some mathematical function, the number of mathematical operations required to compute it does not change going from SM2.0 to SM3.0. The only thing that will change is the overhead -- and in many cases, this is only a small fraction of the total number of operations. In fact, as individual shaders get longer, the percentage of time spent on 'overhead' gets shorter.
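As a trivial illustration (rough HLSL sketch, the names are made up): a straight-line chunk of math like a Blinn-Phong specular term has no flow control for SM3.0 to simplify, so it compiles to essentially the same arithmetic whether you target ps_2_0 or ps_3_0.

// Pure arithmetic, no loops or branches: the instruction count is the
// same under SM2.0 and SM3.0, so SM3.0 buys you nothing here.
float3 lightDir;    // hypothetical uniforms
float3 viewDir;
float  specPower;

float4 PS_Specular(float3 normal : TEXCOORD0) : COLOR
{
    float3 n    = normalize(normal);
    float3 h    = normalize(viewDir - lightDir);        // half vector toward the light
    float  spec = pow(saturate(dot(n, h)), specPower);  // specular term
    return float4(spec, spec, spec, 1.0);
}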

Is there extra overhead associated with long, complex shaders in SM2.0 compared to SM3.0? Undoubtedly. I'm sure you can write programs that will run twice as fast in SM3.0, because an SM2.0 implementation requires jumping through a dozen hoops that don't exist in SM3.0. But in any shader where the actual time doing "real work" is large compared to the overhead, SM3.0 will make little to no difference.

Consider me skeptical of the improvement that could be gained from doing this. You're talking about very general changes, then just waving your hands and saying it will run "much better".

Take multiple light interactions on a given surface with a shader. Using SM 2.0, if you are running into multiple lights you need to recalculate the shader for each light interaction, as opposed to calculating how that particular light interaction is going to impact an already executed shader.

If you have to handle an arbitrary number of lights, then yes, this is a huge PITA to do with SM2.0 (although ATI's SM2.0b extensions have tools for dealing with this sort of situation). If you set a fixed maximum number of lights, and the instruction count per light source is not ridiculous, you can write multiple SM2.0 shaders (e.g. a shader to handle one light, a shader to handle two lights, etc.) to do this without incurring noticeable performance losses.
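Those fixed-count SM2.0 shaders don't even have to be written by hand; you'd typically compile one piece of source several times with a different light count baked in (hypothetical sketch; NUM_LIGHTS would be a compile-time define, one value per shader variant):

// Built once per light count, e.g. with something like
// fxc /D NUM_LIGHTS=2 /T ps_2_0 ... ; because the loop bound is a
// compile-time constant, the compiler unrolls it completely, so no
// dynamic flow control is needed on an SM2.0 part.
#ifndef NUM_LIGHTS
#define NUM_LIGHTS 2
#endif

float3 lightDirs[NUM_LIGHTS];
float3 lightColors[NUM_LIGHTS];

float4 PS_FixedLights(float3 normal : TEXCOORD0) : COLOR
{
    float3 n = normalize(normal);
    float3 result = 0;
    for (int i = 0; i < NUM_LIGHTS; i++)    // unrolled at compile time
        result += saturate(dot(n, -lightDirs[i])) * lightColors[i];
    return float4(result, 1.0);
}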

Is the SM3.0 code simpler? Sure. Can you make SM2.0 do this just as fast as SM3.0? Usually, yes.

Again, while you may reduce the "length" of the shader by using loops/branches, etc., you are not necessarily reducing the number of instructions executed by the GPU to run the shader. By 'complexity' I mean the computational complexity of running the shader, not necessarily the number of instructions needed to express it.

Could you explain how you can use loops, branches and collapse passes and not reduce the computational complexity? I can't think of a single example where you can do all of the former and not reduce overhead considerably.

Introducing a lot of branch instructions into a shader can increase the total instruction count and/or execution time (especially if branch instructions are significantly more expensive than other instructions). This is why loop unrolling is used as an optimization technique. If you know in advance how many passes you need, and you can fit the whole thing into a single linear shader program, you don't *need* the loop at all. It makes coding easier, but actually slows execution down.

Let's say you have some graphics code that does something (conceptually) along the lines of the following in SM2.0:

switch (num_of_lights)
{
    case 1:
        execute ShaderCalcOneLight(light_data);
        break;
    case 2:
        execute ShaderCalcTwoLights(light_data);
        break;
    case 3:
        execute ShaderCalcThreeLights(light_data);
        break;
    case 4:
        execute ShaderCalcFourLights(light_data);
        break;
    default:
        /* do stuff to handle lights in multiple passes */
        break;
}

if (num_of_lights > 1)
    execute Shader2;
else
    execute Shader3;

In SM3.0, you can write a single shader that looks like this:

for (count = 0; count < num_of_lights; count++)
{
    /* code to calculate the contribution of one light */
}

if (num_of_lights > 1)
{
    /* code that used to be in Shader2 */
}
else
{
    /* code that used to be in Shader3 */
}

This is not necessarily a whole lot faster. Essentially all you save is the overhead of having two shader execution calls rather than one. In fact, the first part might actually be slower in SM3.0 for 1-4 repetitions, since the SM2.0 version doesn't have to use any dynamic branch instructions (again, depending on how slow such instructions are on a particular card).

While yes, the 7800GTX can do 24 pixel and 8 vertex shader ops/clock, it still maintains a comparatively large amount of fixed-function rendering capability.

As rasterizers, GPUs are still going to need a large amount of fixed function hardware, and as always everything that can remain fixed function in hardware should- it is orders of magnitude faster than flexible hardware.

Okay, but in your last post you alluded to things such as using shaders to replace (or at least supplement) surface textures. If you want to shift to that sort of model (where you use shaders everywhere), you simply don't need as much fixed-function hardware, and you would be better off having more transistors devoted to programmable elements. Current PC graphics cards are still maintaining both in roughly equal amounts.

I'm talking about something that has, say, 16 FF pipelines but 64 pixel/vertex shaders (or something along those lines). Clearly, if you want to move away from super-high-res textures and triangle counts towards vastly improved vertex and pixel shaders, you will eventually need hardware that is built with that kind of engine design in mind (or else your transistor counts are just going to explode -- not that they haven't already with NVIDIA's last two generations of hardware). Next-gen console hardware seems to be heading in this direction already.

You do realize the last part nVidia offered without programmable hardware was the TNT2Ultra? It isn't like basic rasterization has been driving transistor counts for the last four years- it has been programmable units and that trend isn't stopping.

To some extent, yes, but the ratio of transistors devoted to shaders and fixed-function rasterization hasn't shifted all that much. A 16-pipe card with 16 pixel shaders and 6 vertex shaders is fundamentally just a 'bigger' version of an 8-pipe card with 8 pixel shaders and 3 vertex shaders. The 7800GTX changes this somewhat by only having 16 ROPs for its 24 pipelines, but fundamentally the architecture remains the same.

The problem, as I have stated before, is that currently, the cards that have SM3.0 cost more than their non-SM3.0 equivalents (e.g. X800XL versus 6800GT), or cost about the same/slightly less but perform worse overall (e.g. 6600GT versus X800). Perhaps it would have been clearer if I had said that I do not believe SM3.0 should be the only thing you base a graphics card purchase on today.

So which cards are you going to use as an example? The 6800GT is faster than the X800XL and costs more- the X800 is faster than the 6600GT and, on a percentage basis, carries a larger premium than the 6800GT does over the X800XL.

Um, I thought I was pretty clear.

The 6800GT is (slightly) faster than the X800XL (and slower at a few things, like HL2), and has SM3.0, but is significantly more expensive. In this case, you are paying a premium pretty much just for SM3.0 (and better performance in Doom3, I guess).

The 6600GT is (noticeably) slower than the X800, but only costs slightly less. The X800 is more cost-effective despite not having SM3.0.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: Gamingphreek
I don't see why you wouldn't get it for a midrange card... the 6600GT is great. Now, that dude (forgot his name) that works on the UE3 engine said that any card based off of the 6600GT (and up, I'm assuming) would run the engine easily.

And that is where you are mistaken. The engine was made to scale all the way back to the 6600GT... therefore it will not run it easily, but it will run it.

-Kevin
Like, what do you mean "will run it"? If it's 30fps, I'm fine with that. And I think I trust the developers more than someone who has never worked with or run the engine...

It sounds like they've even done some work to get it working well on SM2 cards, but I still like the fact that if I get good performance, I could add more eye candy with SM3, or if not, I'll still get a good performance boost. And you do have to remember, the 6600GT was the only card in my price range really... (X800's were like $250 at that time). No, I also got it for SM3... I didn't just start liking it after I got it. I knew about it. That also encouraged me to get the 6600GT.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Matthias99: (in response to the last 2 paragraphs in his last post)
It's not noticeably slower... Oh, wait, sorry! That's right, it might get like 15fps less!!! Oh no!!! What's funny is that's still above 40fps!!! OH NO!!! That's not 60fps but is 100% playable!!! DEAR GOD NO!!!

If I'm still getting great performance, I'd rather have SM3 with it.

http://anandtech.com/video/showdoc.aspx?i=2277&p=1

Here. This is against an X800 Pro (not vanilla, like I was talking about) and it still manages to lose by only 10-15fps max. Now, it's still completely playable.

If it gets over 30fps, it's good. So for me, the performance difference between the X800 and 6600GT is well worth SM3. Considering SM3 offers a performance boost (which will be big in next-gen games), the gap will close considerably (assuming they're on the same graphical quality settings), so in a way the 6600GT would be a better buy.

Right, whatever.

 

Ackmed

Diamond Member
Oct 1, 2003
8,498
560
126
Originally posted by: KeepItRed
the innovator who made them possible, or the follower.

"20 years of innovation" -ATI

Excluding Crossfire...

That is kind of how the business goes: one person invents something, then the other copies it and outperforms it. In this case Nvidia makes something and ATI copies it; ATI makes something and Nvidia copies it. That is basically the only alternative to the competitor's solution...

Yeah uh, NV didn't "make" SLI. 3DFX created it, and then NV bought their assets.

ATi has had multi GPU solutions for years.
 

Golgatha

Lifer
Jul 18, 2003
12,379
1,004
126
Originally posted by: Pr0d1gy
Originally posted by: Busithoth
I can't believe this is the first time this has come up here, but I would say it's overhyped now and will become more relevant in the next couple of years (maybe).

Hardware T&L was similar to this, in its day. Some people were jumping up and down calling it a gimmick, since no games actually accounted for it in the code, so it was a zero gain.
Then, after more than a year of being on the shelves, games were released which could take advantage of it, and some people saw the benefits.

Right now, it's not a biggie, but it could be, if developers find it easier to write for and easy to adopt. It's kind of a chicken or egg situation. I think video cards have outpaced programmers for the last couple years.
Happily, we've been able to up the resolutions and AA eye-candy.

Exactly my point. T&L was another pointless add-on "feature" that turned out to be all but useless. Thanks to the nVidia fanboys for calling me a fanboy & troll, but it's not really a fanboy or troll statement when the last 2 cards I owned before this X800XL were nVidia cards. I was happy with both, but I am sick of seeing nVidia fans use SM3.0 as a good reason to buy nVidia, because it's not.


After not owning an nVidia card since a TI4600, and owning the 9800, 9800 XT, X800 XT PE, and X850; nVidia gave me a few reasons to buy their product. Namely, a single slot solution that is totally silent, full hardware HD decoding with PureVideo, better overall image quality due to new anti-aliasing modes, and enough horsepower to run those new anti-aliasing modes at a minimum of 1600x1200 on most games I own (BF2 being the exception). SM3.0 support is just a thin chocolate stripe on a fully frosted cake IMHO.
 

Ackmed

Diamond Member
Oct 1, 2003
8,498
560
126
Originally posted by: Golgatha
ATi has had multi GPU solutions for years.

Like what, the Rage Fury Maxx, which was a horrible failure?


That, and another solution with many more cores.

The fact is, NV didn't "make" SLI, like the user I quoted said.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3DFX though).

Additionally, although Rage Fury Maxx did fail, it still counts (according to rule number 3.431A Section 43Xx that says Rage Fury Maxx counts) as it was launched and it was in the retail channels.

-Kevin
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Golgatha
ATi has had multi GPU solutions for years.

Like what, the Rage Fury Maxx, which was a horrible failure?

I loved my MAXX, it was a very cool card to have. It had some driver issues, but it's not all about seamless performance sometimes.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: TTLKurtis
SM3.0 is better because more future games will use SM3.0.
There's a small disconnect in your statement. How many times have we said not to buy a card now for a game that isn't out yet (especially one that's a year or more away)?

However, I do agree that
nvidia is currently better ... because ... SM3.0 is just an added bonus
when you compare it to a similarly-priced and performing ATI card. So I would recommend a nV card over an ATI one if price and performance are similar, just because the nV card lets you experience some extra special effects in some current games, and may have an edge in future ones.
 