SM3.0 is a scam.


BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Hold on a sec. How would an SM3.0 shader get around doing an intersection test to see if the light source can illuminate the object?

They don't get around it; it is the way it has to be done. Using SM3.0, you run a vertex test on each light source, then pass the data to the pixel shaders to work out which shader routines need to be run. You can't do this under SM2.0 in the fashion that is possible with 3.0 (no dynamic branching) - you would need to calculate a per-pixel coverage map, which wouldn't be close to reasonable.
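
Roughly, the 3.0 path collapses to a single ps_3_0 shader that can bail out per pixel. A minimal sketch (hypothetical names, just to illustrate the branch; the final line stands in for the "real" shader routine):

// ps_3_0 sketch - dynamic branching skips the heavy path per pixel
float3 lightPos : register(c0);   // light position, world space
float3 lightColor : register(c1);

float4 main(float3 worldPos : TEXCOORD0,
            float3 normal : TEXCOORD1) : COLOR
{
    float3 toLight = normalize(lightPos - worldPos);
    float ndotl = dot(normalize(normal), toLight);

    [branch]                        // real flow control - the SM3.0 feature at issue
    if (ndotl <= 0.0f)
        return float4(0, 0, 0, 1);  // faces away: skip the shader routine entirely

    // stand-in for the expensive per-pixel lighting routine
    return float4(lightColor * saturate(ndotl), 1.0f);
}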

The 'normal' sequence I would assume for culling back-facing surfaces (which is what I *thought* you were talking about; suddenly now we're doing intersection tests as well?) during dynamic lighting would be to have a shader that looks something like this in SM3.0:

It is beyond culling back-facing surfaces - it is also culling unneeded shader routines from forward-facing pixels.

In SM2.0, you can get a very close effect by running a shader like this:

{
calculate surface normal to light
store surface normal
}

and then for each pixel that is facing the light, you run another pass of a shader that looks like:

{
calculate contribution of light to color of pixel
}

You need to calculate from the light to the surface, not the other way around. How would you go about storing the data for this? You could tile the scene into six or eight different full-frame-sized textures to store the relevant shader data (given the lack of precision supported under SM2.0) in terms of which pixels need which light routines run - and then you would have to take that data and run a shader program to create a shader routine for each pixel, then apply that at final run time - you are talking radiosity levels of complexity (far worse than the overhead of simply handling all of the shaders in the first place).

Perhaps I'm not understanding the exact problem, but I think the solution I just outlined above would work (doing one or two low-instruction-count passes to cull out the pixels that will not be affected, then doing the bulk of the work on the remaining ones).

The major problem with your solution is that you can't use conditionals with SM2.0 (that is a 3.0 feature), so you move from a simple visibility test to calculating an intersection map and then creating a shader routine per frame - not viable.

I thought it still had the capability of fully working on 24 pixels, but 16 could actively output per clock (due to the 16 ROPs). Perhaps the article I read on the card's architecture misled me.

It has 24 ALUs for pixel shaders. The 7800GTX is a four-quad GPU - it simply has more shader units than it can output. I think you will find a lot of sites misleading people, for simplicity's sake, by calling parts by the number of ALUs they have going forward instead of how many pixels they can draw.

"You should pay for SM3.0 because at some indeterminate point in the future, it might provide larger performance gains" is hardly an overwhelming argument in favor of SM3.0 ATM.

Pay what is the question. If ATi had a part out right now that performed identically to an X800 but had SM3.0, and was charging a 10% premium for it, would you say to go for it or not?
 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Gamingphreek
Intelia, no one wants you in this thread. Everyone in here was doing just fine before you came. So STFU, ESPECIALLY about people's children.

Additionally, since you didn't completely flame and troll: you list one thing about Nvidia's RELEASED SLI that was supposedly copied from ATI's UNRELEASED Crossfire. It seems pretty unlikely that Nvidia had any chance of copying an unreleased product :roll:

Linuxator, PLEASE punctuate. Additionally, research a little more because your idea is wrong, and it is too hard to pull quotes to prove it in that post.

-Kevin

Where did I say that SLI copied Crossfire? Reading comprehension problems again. I said SLI used some of ATI's tech. Learn to read. I never talked about Rollo's child at all. Learn to read. "It's unlikely that SLI contains ATI tech" - LOL. You know nothing; less than nothing. We were talking about where Rollo's kid plays online or self. There are at least some here that know ATI tech was used, but you don't know it. Now do you?

 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Where did I say that SLI copied Crossfire? Reading comprehension problems again.

Ok, Bill Clinton... please define what you think copying is. I would consider copying to be using some of the opposition's tech.

I never talked about Rollo's child at all

Yet you go on to say:

We were talking about where Rollo's kid plays online or self

I have just proven you wrong in EVERY SINGLE instance. Please, by all means, show me where Nvidia "used" - NOT COPIED, but USED - ATI's tech.

Based on your posts, you obviously don't have a clue about computers' inner workings. Just do us all a favor: log off and stop posting. You really do lower the IQ around here. This thread was doing just fine with people (Matthias and Ben) having REAL debates without flaming or anything.

-Kevin
 

Intelia

Banned
May 12, 2005
832
0
0
We were talking about his PC, fool.
I know exactly what I am talking about. You can't handle the fact that Nvidia SLI may or may not have used ATI tech. I would show you, but this is much better like this. Go way back to when SLI was coming out and read Anand's articles (hint: it's there)
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: hans030390
Honestly, which would you rather have: a crappy-looking game with AA/AF on, or a game that looks really good but you don't run AA/AF on because it would kill performance?

I guess I'm just not into AA/AF... to me, the "eye candy" it adds isn't worth the framerate loss.


Yes, because Half-Life 2 and Far Cry look SO crappy. Just give it up already, Hans. SM3 adds a few nice features to SM2.0, but it can't magically transform a game from "crappy looking" to "looks really good" all by itself.

 

The Linuxator

Banned
Jun 13, 2005
3,121
1
0
lol, basically, hans, without AA/AF when you are playing a racing game, inside the car you see the driver grabbing a square instead of the steering wheel. lol
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Intelia
We were talking about his PC, fool.
I know exactly what I am talking about. You can't handle the fact that Nvidia SLI may or may not have used ATI tech. I would show you, but this is much better like this. Go way back to when SLI was coming out and read Anand's articles (hint: it's there)

Why would I have any trouble "handling" it if Nvidia used ATI tech?

Once again, please give some proof. You are trying to prove a BS point; why in the hell am I going to go and do the research?

This just proves that you know nothing, not only about computers but about a lot of other things too. Seriously, I know of no one who wouldn't be able to carry on if Nvidia used ATI tech.

Come on, just log off and leave us all alone.

-Kevin
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: Creig
Originally posted by: hans030390
Honestly, which would you rather have: a crappy-looking game with AA/AF on, or a game that looks really good but you don't run AA/AF on because it would kill performance?

I guess I'm just not into AA/AF... to me, the "eye candy" it adds isn't worth the framerate loss.


Yes, because Half-Life 2 and Far Cry look SO crappy. Just give it up already, Hans. SM3 adds a few nice features to SM2.0, but it can't magically transform a game from "crappy looking" to "looks really good" all by itself.

Eh, I'm really bad at making comparisons... Look, all I'm saying is that AA/AF kills performance, and in order to boost that performance back up, you have to turn down the resolution and/or graphics settings. I'd rather just play without it, get good framerates, and have my graphics set higher. Sorry, I'm just a really hard person to understand sometimes, and I don't think you took what I said the way I meant it.

And I completely forgot how that tied in with Shader Model 3. Oh wait, now I remember. It had something to do with the X800 beating the 6600GT really badly with AA/AF, and it didn't matter to me because I don't use AA/AF (never have, so I don't see a need to... and I never have because I never had good hardware). So I said I'd rather take slightly slower performance (not bad performance, though) and have Shader Model 3 EVEN IF it sucks in next gen. I'll still get a performance boost no matter how I have the graphics set, compared to SM2 performance. So I don't see it as a bad thing. Do you?

Besides, what's so bad about driving a car with a square wheel?
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: BenSkywalker
Hold on a sec. How would an SM3.0 shader get around doing an intersection test to see if the light source can illuminate the object?

They don't get around it; it is the way it has to be done. Using SM3.0, you run a vertex test on each light source, then pass the data to the pixel shaders to work out which shader routines need to be run. You can't do this under SM2.0 in the fashion that is possible with 3.0 (no dynamic branching) - you would need to calculate a per-pixel coverage map, which wouldn't be close to reasonable.

Why is it "not close to reasonable"? Isn't this along the lines of what Doom3 does for its lighting? I realise it's easier/more straightforward with SM3.0, but it shouldn't be impossible.

The 'normal' sequence I would assume for culling back-facing surfaces (which is what I *thought* you were talking about; suddenly now we're doing intersection tests as well?) during dynamic lighting would be to have a shader that looks something like this in SM3.0:

It is beyond culling back-facing surfaces - it is also culling unneeded shader routines from forward-facing pixels.

Okay, you can do that too, but you would have to do the same calculations under SM3.0. It's just easier to implement (and has somewhat less overhead), since you can do it all in one shader (or, as you are describing, one vertex and one pixel shader).

You need to calculate from the light to the surface, not the other way around. How would you go about storing the data for this? You could tile the scene into six or eight different full-frame-sized textures to store the relevant shader data (given the lack of precision supported under SM2.0) in terms of which pixels need which light routines run - and then you would have to take that data and run a shader program to create a shader routine for each pixel, then apply that at final run time - you are talking radiosity levels of complexity (far worse than the overhead of simply handling all of the shaders in the first place).

If the point you're concerned about on the surface can 'see' the light, the light can 'see' the point on the surface (barring single-sided surfaces between them, but most dynamically lit game engine objects are 'solid'). It shouldn't matter if you trace light->surface or surface->light for visibility and lighting calculations (although one or the other may be faster in a particular case, if you don't have to invert any of the vectors).

Why do you need "six or eight different full frame sized textures"? You need one full-frame buffer (presumably 24bpp) to store the dynamic lighting contribution for each pixel (or you might be able to work directly with the framebuffer). But you only need one per-pixel bitmap (with 1bpp) to mark the pixels that will be illuminated. With multiple lights, you either need one bitmap per light (and then do one pass per light to sum up the contributions), or you could just 'tag' each pixel that is visible to one or more lights, then calculate the contribution of each light to each tagged pixel (which may turn out to be 0 for some of the light/pixel combinations, and would be 'wasting' GPU time). You don't need to create a shader routine per pixel; you're running a fixed, pre-written routine on a subset of the pixels on the screen.
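
A rough ps_2_0 sketch of that two-pass idea (hypothetical names; the mask would be applied between passes via stencil or alpha test, and per-light results summed with additive blending):

float3 lightPos : register(c0);
float3 lightColor : register(c1);

// Pass 1 (ps_2_0): mark pixels this light can illuminate (1 = lit)
float4 markLit(float3 worldPos : TEXCOORD0,
               float3 normal : TEXCOORD1) : COLOR
{
    float ndotl = dot(normalize(normal), normalize(lightPos - worldPos));
    float lit = (ndotl > 0.0f) ? 1.0f : 0.0f;  // compiles to cmp, no branch needed
    return float4(lit, lit, lit, lit);
}

// Pass 2 (ps_2_0): the fixed, pre-written lighting routine, drawn once
// per light; stencil/alpha test keeps it off the pixels pass 1 rejected
float4 addLight(float3 worldPos : TEXCOORD0,
                float3 normal : TEXCOORD1) : COLOR
{
    float3 toLight = normalize(lightPos - worldPos);
    return float4(lightColor * saturate(dot(normalize(normal), toLight)), 1.0f);
}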

Perhaps I'm not understanding the exact problem, but I think the solution I just outlined above would work (doing one or two low-instruction-count passes to cull out the pixels that will not be affected, then doing the bulk of the work on the remaining ones).

The major problem with your solution is that you can't use conditionals with SM2.0 (that is a 3.0 feature), so you move from a simple visibility test to calculating an intersection map and then creating a shader routine per frame - not viable.

Why is it 'not viable'? A per-pixel bitmap is well within the memory limits of today's cards. Or do you think it would be too slow (which is possible, but SM3.0 has to do much of the same work, just in one pass)?
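
For scale (my numbers, just to illustrate): at 1600x1200, a 1bpp mask is 1600*1200/8 = 240,000 bytes, about 234KB, and even a full 32-bit per-pixel tag buffer is only ~7.3MB - trivial next to the 256MB on these cards.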

"You should pay for SM3.0 because at some indeterminate point in the future, it might provide larger performance gains" is hardly an overwhelming argument in favor of SM3.0 ATM.

Pay what is the question. If ATi had a part out right now that performed identically to an X800 but had SM3.0, and was charging a 10% premium for it, would you say to go for it or not?

No; I simply do not feel that SM3.0 will make enough of a difference on a card of this level (and quite possibly any single card you can get right now). 10% more cost for a 0-5% performance boost (in supporting games) and the possibility of getting something more later doesn't exactly strike me as a great deal. At a 5% increase in cost I'd consider it basically 'break-even' right now.

Other people would probably say that being more 'future-proof' for "only" a 10% increase in cost is obviously worth it. There's nothing wrong with this view; you're basically 'betting' that SM3.0 will turn out to be worth it at some point in the future.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: hans030390
Originally posted by: Creig
Originally posted by: hans030390
Honestly, which would you rather have: a crappy-looking game with AA/AF on, or a game that looks really good but you don't run AA/AF on because it would kill performance?

I guess I'm just not into AA/AF... to me, the "eye candy" it adds isn't worth the framerate loss.


Yes, because Half-Life 2 and Far Cry look SO crappy. Just give it up already, Hans. SM3 adds a few nice features to SM2.0, but it can't magically transform a game from "crappy looking" to "looks really good" all by itself.

Eh, I'm really bad at making comparisons... Look, all I'm saying is that AA/AF kills performance, and in order to boost that performance back up, you have to turn down the resolution and/or graphics settings. I'd rather just play without it, get good framerates, and have my graphics set higher. Sorry, I'm just a really hard person to understand sometimes, and I don't think you took what I said the way I meant it.

But setting your graphics higher *also* lowers your framerates. Sometimes by quite a bit. Even more so with heavy-duty shaders.

Plus, earlier, you were talking about how anything above 30FPS (which should be easily achievable in almost any game with the 6600GT, even with AA/AF) is fine... are you going back on that statement?

And I completely forgot how that tied in with Shader Model 3. Oh wait, now I remember. It had something to do with the X800 beating the 6600GT really badly with AA/AF, and it didn't matter to me because I don't use AA/AF (never have, so I don't see a need to... and I never have because I never had good hardware). So I said I'd rather take slightly slower performance (not bad performance, though) and have Shader Model 3 EVEN IF it sucks in next gen. I'll still get a performance boost no matter how I have the graphics set, compared to SM2 performance. So I don't see it as a bad thing. Do you?

But with a card that's faster overall, you could have a consistent performance boost all the time, even in games that don't support SM3.0. And comparing the X800 to the 6600GT, you'd get significantly better performance when using AA/AF (maybe enough to actually let you use them without it being unacceptably slow).

The problem is that you seem to be playing both sides here. Saying that you don't care about IQ-enhancing features like AA or AF because of the performance hit, and then arguing that you want SM3.0 because it will improve IQ in future games (though also at a large performance hit), doesn't make a lot of sense.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Um... what? Yeah, setting the graphics higher on a game lowers framerate, but not as much as enabling AA/AF does. I've just been used to never using it, so I see no need in using it. (Jeez, most of the time if I say I'm not interested in AA/AF, people just say, "oh, ok.")

Yes, but since SM3 is going to be used so heavily next gen, I might as well have it. Even if I can't use the graphics it can give, it will still let me use some displacement mapping or the like without hurting framerate too much, something that SM2 can't do. So I'm not in a losing situation, really.

Oh, and the X800 was too expensive when I got my 6600GT. In fact, I had to borrow money from my parents to pay for my 6600GT. So that's another reason why I chose a 6600GT over an X800.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: hans030390
Um... what? Yeah, setting the graphics higher on a game lowers framerate, but not as much as enabling AA/AF does.

That's a completely arbitrary statement that depends HIGHLY on the game involved, how much AA/AF you enable, and which 'settings' you're talking about. For example, shifting Far Cry or HL2 from DX7 to DX9 mode drops performance far more than turning on 2xAA and 8xAF.

I would think that in many games, going from 'low/medium' detail settings to 'maximum' detail settings on everything would be a rather large performance hit.

I've just been used to never using it, so I see no need in using it. (Jeez, most of the time if I say I'm not interested in AA/AF, people just say, "oh, ok.")

But you're saying on one hand that AA and AF (which increase IQ by quite a bit in some games) are not worth the performance hit, but the IQ improvements from SM3.0 (which -- let me remind you -- are not available yet and will probably hurt performance even more) will be worth it. You don't find this a little incongruous?

Yes, but since SM3 is going to be used so heavily next gen, I might as well have it. Even if I can't use the graphics it can give, it will still let me use some displacement mapping or the like without hurting framerate too much, something that SM2 can't do. So I'm not in a losing situation, really.

'Real' displacement mapping will hurt your performance far more than just AA/AF will. IMO, you're just deluding yourself if you think that SM3.0 will give you enormous amounts of 'free' IQ.

Oh, and the X800 was too expensive when I got my 6600GT. In fact, I had to borrow money from my parents to pay for my 6600GT. So that's another reason why I chose a 6600GT over an X800.

Then it was probably the best option for you. That still doesn't change the fact that the arguments you're floating about SM3.0 seem to make little sense.
 

Staples

Diamond Member
Oct 28, 2001
4,953
119
106
Have fun with your X800XL; I will continue to use my 7800GTX, which is twice as fast as yours.
 

HeXploiT

Diamond Member
Jun 11, 2004
4,359
1
76
Originally posted by: Rent
Originally posted by: Pr0d1gy
After owning both an SM3.0 nVidia card & an X800XL, this is how I feel about SM3.0

Obviously their plan worked, because I see people every day saying SM3.0 is a reason to buy a video card. Well, it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both & really appreciated the quality of the X800XL's display.

In conclusion, if you think SM3.0 is some big deal & tell people that future games will have it, you are only telling nVidia "Yes, I want you to pay game developers to let you hack their graphics engine up & add some useless program so I can say I have the better video card".

Uh, who "hacked" their engine to implement SM3.0?

And just FYI, when you call out people as fanboys before anyone has even replied, your claim to be unbiased goes right out the window.


Isn't that kind of like saying "who farted"...when you're the only one in the room?
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Matthias:
First off, when I say moving to higher graphics settings, I mean like... DX9, from low to medium to high. Now, for HL2 (which is the only game I can think of for some reason), it's usually a matter of RAM and less of the video card. Assuming you have enough RAM, no, it's not a big performance difference (or at least it's still playable).

About AA/AF, I should also mention I'm too lazy to turn it on, plus it seems to cause problems sometimes, so I'd rather leave it as is. Last time I checked, AA/AF doesn't give real "IQ", unless you consider smoothed-out lines or being able to see textures better from far away. If I had to use AA/AF, I'd probably have to set my graphics settings down some... which makes the game look worse. I'd rather have better textures and shaders (and detail) than smoothed-out graphics and being able to see my crappier textures from far away. BTW, this is mostly about next gen. Of course I can run AA/AF on my 6600GT. I don't see AA/AF as an IQ improvement. As for SM3 features, I DO consider those an IQ improvement. Sure, maybe I won't be able to use all of them, but I'll still benefit from a performance boost anyway just by using it.

What have I been saying? I KNOW THAT I WILL NOT BE ABLE TO RUN GAMES WITH ALL THE SM3 FEATURES! I might be able to use some of the features (which no one can prove or disprove), or if not, I still get a performance boost. I'm not thinking that SM3 will boost performance so much that I can play games at uber settings.

I'm not finding you to make much sense either, because I have a completely different point of view about SM3 than you do. Look, I understand what you're saying, but I think you're twisting what I say slightly, such as "you're just deluding yourself if you think that SM3.0 will give you enormous amounts of 'free' IQ." Clearly, I've said before that no, I don't think that.

We'll just see come next gen. Look, I could be wrong; so could you. I'm not saying I'm right, but I'm saying what I think is most likely to happen. If I end up being wrong, I want you to flame me to no end. For now, though, we can't say who's right or wrong.

I'm hungry.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: The Linuxator
lol, basically, hans, without AA/AF when you are playing a racing game, inside the car you see the driver grabbing a square instead of the steering wheel. lol

And wait, AA/AF doesn't do that much for graphics... that'd be nice if it did. Then I'd use it.

square wheel. lol

that'd be funny
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Why is it "not close to reasonable"? Isn't this along the lines of what Doom3 does for its lighting?

No, D3 does a Z pass to determine straight-line visibility - child's play in comparison to what you are talking about.

If the point you're concerned about on the surface can 'see' the light, the light can 'see' the point on the surface (barring single-sided surfaces between them, but most dynamically lit game engine objects are 'solid'). It shouldn't matter if you trace light->surface or surface->light for visibility and lighting calculations (although one or the other may be faster in a particular case, if you don't have to invert any of the vectors).

You are ignoring reflection, refraction and scatter.

Why do you need "six or eight different full frame sized textures"? You need one full-frame buffer (presumably 24bpp) to store the dynamic lighting contribution for each pixel (or you might be able to work directly with the framebuffer). But you only need one per-pixel bitmap (with 1bpp) to mark the pixels that will be illuminated.

As long as your hardware supports conditionals you can do it that way - SM2.0 doesn't support conditionals. You can't exit the shader based on a simple flag - THAT IS THE PROBLEM. What you would have to do is create a shader per pixel and then store that for each frame.
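
To make the contrast concrete (again, hypothetical code): the closest ps_2_0 gets is predication via cmp or a texkill, and either way the whole routine still executes for every pixel:

sampler maskMap : register(s0);   // per-pixel 'lit' flags from an earlier pass
float3 lightPos : register(c0);
float3 lightColor : register(c1);

float4 main(float2 uv : TEXCOORD0,
            float3 worldPos : TEXCOORD1,
            float3 normal : TEXCOORD2) : COLOR
{
    float lit = tex2D(maskMap, uv).r;  // 0 = this light never reaches this pixel
    clip(lit - 0.5f);                  // discards the write, but on SM2.0 hardware
                                       // every instruction below still burns ALU
                                       // time for the killed pixel - no early exit
    float3 toLight = normalize(lightPos - worldPos);
    return float4(lightColor * saturate(dot(normalize(normal), toLight)), 1.0f);
}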

With multiple lights, you either need one bitmap per light (and then do one pass per light to sum up the contributions), or you could just 'tag' each pixel that is visible to one or more lights, then calculate the contribution of each light to each tagged pixel (which may turn out to be 0 for some of the light/pixel combinations, and would be 'wasting' GPU time).

All requiring conditionals, which you can't do with SM2.0. We are not talking about fully programmable hardware here - SM3.0 isn't either; it just removes some serious restrictions.

You don't need to create a shader routine per pixel; you're running a fixed, pre-written routine on a subset of the pixels on the screen.

Explain how you would do that with no flagging or pointers allowed. Your capabilities are extremely limited under SM2.0.

Other people would probably say that being more 'future-proof' for "only" a 10% increase in cost is obviously worth it. There's nothing wrong with this view; you're basically 'betting' that SM3.0 will turn out to be worth it at some point in the future.

It is a certainty that SM3.0 will be a major feature in the gaming industry - it looks like every next-gen console is going to be built around that exact feature set. No emerging gaming technology has had that level of support across the board to date.
 

AznAnarchy99

Lifer
Dec 6, 2004
14,695
117
106
Originally posted by: Rollo
It would be fairly easy for Prodigy to think this way today; there are only a handful of SM3 games out.

The point is that SM3 is the natural evolution of the industry, and nVidia/OEMs should be rewarded for accelerating the advance.

It's well known that SM3 saves developers work and time, allowing them to bring us games faster.

The only question is whether you want to pay nVidia for their R&D that gave developers the tools to do this over a year ago, or reward ATI for following suit sometime later this year. (hopefully)

The games we'll be playing in the year to come will largely have been developed on nVidia hardware, because ATI has been treading water with the R300 core for so long.

Whether you like the tradeoffs or not, things like SM3, displacement mapping, soft shadows, and EXR HDR are the current cutting edge that nVidia brought you and developers over a year ago.

It's a given future cards by ATI and nVidia will do these things better, the buyers choice now is whether to reward the innovator who made them possible, or the follower.

My .02


I don't know... look at Nvidia and their GF4 line. They didn't support SM1.4, while ATI did ever since the 8500, and now the GF4 can't run BF2.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: AznAnarchy99
Originally posted by: Rollo
It would be fairly easy for Prodigy to think this way today; there are only a handful of SM3 games out.

The point is that SM3 is the natural evolution of the industry, and nVidia/OEMs should be rewarded for accelerating the advance.

It's well known that SM3 saves developers work and time, allowing them to bring us games faster.

The only question is whether you want to pay nVidia for their R&D that gave developers the tools to do this over a year ago, or reward ATI for following suit sometime later this year. (hopefully)

The games we'll be playing in the year to come will largely have been developed on nVidia hardware, because ATI has been treading water with the R300 core for so long.

Whether you like the tradeoffs or not, things like SM3, displacement mapping, soft shadows, and EXR HDR are the current cutting edge that nVidia brought you and developers over a year ago.

It's a given future cards by ATI and nVidia will do these things better, the buyers choice now is whether to reward the innovator who made them possible, or the follower.

My .02


I don't know... look at Nvidia and their GF4 line. They didn't support SM1.4, while ATI did ever since the 8500, and now the GF4 can't run BF2.

That is different. Back then there were other very important reasons not to buy the 8500: it was drastically slower than even the 4200, and it also had HORRIBLE drivers. It wasn't until the 9700 came out that ATI took the crown, and not until the 9800 that ATI's drivers really got mature.

The case today: both cards have very comparable drivers, and neither card is drastically faster than the other.

-Kevin
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
Originally posted by: Staples
Have fun with your X800XL; I will continue to use my 7800GTX, which is twice as fast as yours.

It's also twice as expensive.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: BFG10K
I don't see AA/AF as an IQ improvement.
You must have serious vision problems; either that or you have no clue as to what said features do.

I know exactly what the features do, though I sometimes get them mixed up. Since no resolution can draw a straight line without it being jagged, (I think) AA fixes that up and smooths things out. AF makes textures appear clearer from farther away.

Or vice versa, if I mixed them up.

But honestly, even though I CAN use it, I'm just not picky about seeing textures from far away or smoothing out aliasing.

Since I know my performance won't be good with AA/AF in next gen, I won't use it then, so I'd rather have something which MIGHT let me use some IQ features: SM3. It might not, but NO ONE is certain. AA/AF... well, honestly, I think we all know it won't run fast in next gen.

So I see AA/AF as an IQ improvement, but nothing I'm concerned about. It's not worth the framerate, even if I can use it.

But the whole BF2 argument does make sense, and even if the 8500 sucked, in this case the X800 series and the 6 series are about equal, so in the future it will be "the 6 series can play this game because of SM3; the X800 can't because it doesn't have it."

By then, though, the 6 series would run games like the GF4 does today. But hey, it still runs, even if it is at lowest settings. Wouldn't that be more future-proof to you?
 

botd4u

Banned
Jun 27, 2005
124
0
0
Originally posted by: Pr0d1gy
After owning both an SM3.0 nVidia card & an X800XL, this is how I feel about SM3.0

Obviously their plan worked, because I see people every day saying SM3.0 is a reason to buy a video card. Well, it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both & really appreciated the quality of the X800XL's display.

In conclusion, if you think SM3.0 is some big deal & tell people that future games will have it, you are only telling nVidia "Yes, I want you to pay game developers to let you hack their graphics engine up & add some useless program so I can say I have the better video card".
LOL, when Half-Life 2: Aftermath and Lost Coast come out with HDR (a Shader Model 3.0 technology), it's gonna own for nVidia users. I'll be able to run the game at 1800x1380 with 8x AF; you can be super envious of me at that time and regret that you bought an ATI card. ROFL.

 

fstime

Diamond Member
Jan 18, 2004
4,382
5
81
Originally posted by: botd4u
Originally posted by: Pr0d1gy
After owning both an SM3.0 nVidia card & an X800XL, this is how I feel about SM3.0

Obviously their plan worked, because I see people every day saying SM3.0 is a reason to buy a video card. Well, it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both & really appreciated the quality of the X800XL's display.

In conclusion, if you think SM3.0 is some big deal & tell people that future games will have it, you are only telling nVidia "Yes, I want you to pay game developers to let you hack their graphics engine up & add some useless program so I can say I have the better video card".
LOL, when Half-Life 2: Aftermath and Lost Coast come out with HDR (a Shader Model 3.0 technology), it's gonna own for nVidia users. I'll be able to run the game at 1800x1380 with 8x AF; you can be super envious of me at that time and regret that you bought an ATI card. ROFL.


HDR works just fine on ATI cards.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
I bet you were one of the people who said hardware T&L, DDR RAM, pixel shaders, vertex shaders, and even AA were "scams" too, huh?

It's always nice to have a card that has a feature even if it's not implemented yet.

The cool thing about the 6800GT vs. our X800XLs is that when that super cool game with the ridiculous graphics comes out using SM3.0, 6800GT owners are still gonna be able to play and use their cards for a little longer, while you and I get to go shopping for a video card with SM3.0.
 