SM3.0 is a scam.


Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: hans030390
Hey matthias, anandtech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla) and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12).

But we certainly need SM3. I guess with that mentality, we don't need 32-bit color either :roll:

-Kevin
 

imported_goku

Diamond Member
Mar 28, 2004
7,613
3
0
Originally posted by: Pr0d1gy
After owning both an SM 3.0 nVidia card and an X800XL, this is how I feel about SM 3.0:

Obviously their plan worked, because I see people every day saying SM 3.0 is a reason to buy a video card. Well, it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both and really appreciated the quality of the X800XL's display.

In conclusion: if you think SM 3.0 is some big deal and tell people that future games will have it, you are only telling nVidia, "Yes, I want you to pay game developers to let you hack their graphics engine up and add some useless program so I can say I have the better video card."

You purchased a video card BECAUSE IT HAD SM3.0? And NOT because you actually needed a better-performing video card? Looks like someone was pwned...
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: hans030390
You ruined the fun. Yes, we'll need SM3 like we need SM2 today. That's all I'm concerned about.

Yet we don't need anything above 12x10!? What kind of twisted logic are you arguing with?

-Kevin

Edit: Hans, just out of pure curiosity, are those numbers next to your name your birthday?
 

imported_goku

Diamond Member
Mar 28, 2004
7,613
3
0
Originally posted by: hans030390
Hey matthias, anandtech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla) and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12). Funny thing is, the 6600GT still runs above 40fps! Did you know that's REALLY playable? Most people don't need 1234309fps to play games. In fact, most people can't tell the difference between 30 and 60fps.

You know what else? Some people don't use AA/AF! OMG! Let's make a buying decision on a card because it does better with AA/AF! You know what, I KNOW I'll be using SM3 sometime; in fact, I even use it now. But AA/AF isn't something I use, simply because I don't need it. I'm fine with 10x7, no AA/AF. It's just a waste of performance. Perhaps it's because I grew up without it.

Jeez. So wait, that means the X800 would be even closer in performance to the 6600GT than the X800 Pro is. So... why not sacrifice a few fps, still have a very playable game at high settings, skip the unneeded AA/AF, and have something you will actually use in the future: SM3? And since SM3 boosts performance in games that use it, maybe you COULD turn on that extra AA/AF if you want it and still have a playable game.

Sorry if you're the type that plays at uber 16x12 with full graphics settings and all the AA/AF. Some of us really don't care about it.

You're annoying me.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: hans030390
Hey matthias, anandtech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla) and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12). Funny thing is, the 6600GT still runs above 40fps! Did you know that's REALLY playable? Most people don't need 1234309fps to play games. In fact, most people can't tell the difference between 30 and 60fps.

You know what else? Some people don't use AA/AF! OMG! Let's make a buying decision on a card because it does better with AA/AF! You know what, I KNOW I'll be using SM3 sometime; in fact, I even use it now. But AA/AF isn't something I use, simply because I don't need it. I'm fine with 10x7, no AA/AF. It's just a waste of performance. Perhaps it's because I grew up without it.

Jeez. So wait, that means the X800 would be even closer in performance to the 6600GT than the X800 Pro is. So... why not sacrifice a few fps, still have a very playable game at high settings, skip the unneeded AA/AF, and have something you will actually use in the future: SM3? And since SM3 boosts performance in games that use it, maybe you COULD turn on that extra AA/AF if you want it and still have a playable game.

Sorry if you're the type that plays at uber 16x12 with full graphics settings and all the AA/AF. Some of us really don't care about it.

You are not worthy of Shader Modely goodness.

You are henceforth banned from ATI and Nvidia cards, as well as from typing the words "shader model." You have been exiled to XGI cards... enjoy.

-Kevin
 

The Linuxator

Banned
Jun 13, 2005
3,121
1
0
Originally posted by: goku2100
Originally posted by: hans030390
Hey matthias, anandtech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla) and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12). Funny thing is, the 6600GT still runs above 40fps! Did you know that's REALLY playable? Most people don't need 1234309fps to play games. In fact, most people can't tell the difference between 30 and 60fps.

You know what else? Some people don't use AA/AF! OMG! Let's make a buying decision on a card because it does better with AA/AF! You know what, I KNOW I'll be using SM3 sometime; in fact, I even use it now. But AA/AF isn't something I use, simply because I don't need it. I'm fine with 10x7, no AA/AF. It's just a waste of performance. Perhaps it's because I grew up without it.

Jeez. So wait, that means the X800 would be even closer in performance to the 6600GT than the X800 Pro is. So... why not sacrifice a few fps, still have a very playable game at high settings, skip the unneeded AA/AF, and have something you will actually use in the future: SM3? And since SM3 boosts performance in games that use it, maybe you COULD turn on that extra AA/AF if you want it and still have a playable game.

Sorry if you're the type that plays at uber 16x12 with full graphics settings and all the AA/AF. Some of us really don't care about it.

You're annoying me.


I agree with you, goku2100, but this guy is lost in the digital world and we need to get him back home.
Look, hans030390: by the time big-time SM3.0 games become mainstream, your 6600 will be obsolete, I guarantee it... and you will need to upgrade. But since quality means nothing to you, I guess you won't need to upgrade after all.
Here's what you do: sell that nice millions-of-colors, 32-bit-color-depth-capable monitor and buy an ancient black-and-white one. Then get the newest release of your favorite game, the one that will come out 2-3 years from now with SM3.0, install it, run it at 640x480 (make sure the black-and-white one is capable of that), and turn off AA/AF, but make sure you are using SM3.0, for performance's sake.
And after a 10-hour gaming marathon, kick back and think: how awesome was it to play that black-and-white game with SM3.0 at 640x480? That SM3.0 sure gave the performance a hell of a kick...
Dude, I don't know why you play games on a PC.

EDIT: Also, don't forget to turn off color processing in your driver settings, or that would be a performance loss!!
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: Gamingphreek
Originally posted by: hans030390
You ruined the fun. Yes, we'll need SM3 like we need SM2 today. That's all I'm concerned about.

Yet we don't need anything above 12x10!? What kind of twisted logic are you arguing with?

-Kevin

Edit: Hans, just out of pure curiosity, are those numbers next to your name your birthday?

You're a quick one! ...No, seriously, you're the first one to say it. I dunno, you really don't need even 8x6!!!! It's just nice to have.
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Woot! I like to be annoying!

I would like to discuss SMs in a civil manner, but no one seems to do that. It's all flames and bashing.

So sorry; I'll leave you all to talk about something else. Have fun!
 

The Linuxator

Banned
Jun 13, 2005
3,121
1
0
Originally posted by: hans030390
Woot! I like to be annoying!

I would like to discuss SMs in a civil manner, but no one seems to do that. It's all flames and bashing.

So sorry; I'll leave you all to talk about something else. Have fun!


Sorry if you felt insulted, but you are arguing that quality means nothing and all we need is performance.
You have set fire to all the work ATI and Nvidia have put in through years of research and investment, tossing a Cuban cigar onto it and saying, "Oops, I did it again."
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Drayvn
They are, I think, pixel fragment pipelines, all of them.
AFAIK, "pixel" pipelines should more accurately be called fragment pipelines, as they operate on fragments of the scene that are eventually delivered to the framebuffer as pixels courtesy of the ROPs. I guess pixel pipeline became the vernacular because ROPs were hard-wired to each pipe/quad, so it was a straight shot to the back buffer. Starting with the GF6, nV decoupled the ROPs from the quads (now there's a crossbar or FIFO buffer between them), and that's when "pixel pipe" became somewhat less accurate.

If the term is really "pixel fragment" pipeline, then please ignore me. Same if my explanation is just wrong, and pixel and fragment are simply (but improperly) used interchangeably by most sites/fans. I'm under the impression it's not, though, as "picture element element" seems redundant.

And the ALUs can do 2 MADD ops per clock each.

EDIT: Yup, just checked around the web, and yeah, the 7800GTX has 24 fragment pipelines. And it has 2 MADDs per ALU. And there are 2 ALUs.
Not quite.
The pixel pipe is made up of two vector units [ALUs] and a texture unit that all operate together to facilitate effective shader program execution. ... There was much talk when the 6800 launched about the distinct functionality each of the main shader ALUs had. In NV4x, only one ALU had the ability to perform a single clock MADD (multiply-add). Similarly, only one ALU assisted in texture address operations for the texture unit. Simply having these two distinct ALUs (regardless of their functionality difference) is what was able to push the NV4x so much faster than the NV3x architecture.

In their ongoing research into commonly used shaders (and likely much of their work with shader replacement), NVIDIA discovered that a very high percentage of shader instructions were MADDs. Multiply-add is extremely common in 3D mathematics as linear algebra, matrix manipulation, and vector calculus are a huge part of graphics. G70 implements MADD on both main Shader ALUs. -- AT
and
although NV40 has two ALU's they are not in fact each fully featured with the same instructions, instead one is a MADD unit and the other is a MUL unit; for G70 NVIDIA say they have added a MADD and MUL into each of the units that didn't previously contain them and in fact we are led to believe they are now complete instruction duplicates of each other (although, obviously, the second unit doesn't have texture address processing instructions). The net result is that G70 features 48 fragment shaders of the same capabilities, with one of them having to handle the texture processing instructions. -- B3D
AFAIK, nV's ALUs can work on four-component vectors, but are capable of dual-issue instruction splits (1+3, 2+2, or 3+1).
Again, like NV40 the ALU's are FP32 precision, with a free FP16 normalise on the first ALU. Each unit is a single vector unit, but can execute two instructions that fit in or below 4 components (i.e. 3+1 components, 2+2, 2+1, 1+2, 1+1). -- ibid
Maybe that's what you're (erroneously) referring to when you say dual MADDs per ALU?
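
To make that dual-issue point concrete, here's a minimal HLSL sketch, assuming a ps_3_0 target; all the names are illustrative, and the actual instruction pairing is decided by the driver's shader compiler, not by the programmer. The xyz math and the scalar w math are independent, so a G70-style ALU could co-issue them as a 3+1 split in one clock:

// Hedged sketch only -- illustrative names, ps_3_0 target assumed.
float4 main(float4 a : TEXCOORD0,
            float4 b : TEXCOORD1,
            float4 c : TEXCOORD2) : COLOR
{
    float4 o;
    o.xyz = a.xyz * b.xyz + c.xyz; // 3-component MADD
    o.w   = a.w * b.w;             // independent 1-component MUL, co-issuable
    return o;
}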
 

hans030390

Diamond Member
Feb 3, 2005
7,326
2
76
Originally posted by: The Linuxator
Originally posted by: hans030390
Woot! I like to be annoying!

I would like to discuss SMs in a civil manner, but no one seems to do that. It's all flames and bashing.

So sorry; I'll leave you all to talk about something else. Have fun!


Sorry if you felt insulted, but you are arguing that quality means nothing and all we need is performance.
You have set fire to all the work ATI and Nvidia have put in through years of research and investment, tossing a Cuban cigar onto it and saying, "Oops, I did it again."

You're missing the point. I'd rather have eye candy (bump mapping, displacement mapping, SM2 stuff, and upcoming SM3 features as they get used). I hardly consider AA/AF eye candy, because they just smooth out lines or show textures at higher quality from farther away.

Honestly, which would you rather have: a crappy-looking game with AA/AF on, or a game that looks really good but that you don't run AA/AF on because it would kill performance?

I guess I'm just not into AA/AF...to me, the "eye candy" it adds isn't worth the framerate loss.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: BenSkywalker
And you can't do this in SM2.0 because...? Couldn't you do a short pass to sort out the pixels that face the light, then only do a second pass to compute the full light calculations for those pixels?

No, you can't reasonably do that under SM2.0. You would need to do a raycast calculation per light, figuring on a visibility intersect; that would be significantly more complex than the most demanding shader scenario we have discussed (approaching radiosity levels of complexity before we render anything at all).

Hold on a sec. How would an SM3.0 shader get around doing an intersection test to see if the light source can illuminate the object? Certainly, if you have objects casting shadows off of pixel-shaded light sources, something has to calculate this even with SM3.0.

The 'normal' sequence I would assume for culling back-facing surfaces (which is what I *thought* you were talking about; suddenly now we're doing intersection tests as well?) during dynamic lighting would be to have a shader that looks something like this in SM3.0:

{
    calculate surface normal to light
    if surface is not facing towards light
    {
        bail out;
    }
    calculate contribution of light to color of pixel
}

In SM2.0, you can get a very close effect by running a shader like this:

{
    calculate surface normal to light
    store surface normal
}

and then for each pixel that is facing the light, you run another pass of a shader that looks like:

{
    calculate contribution of light to color of pixel
}

If you want to do an intersection test as well, do another pass for that (after checking the normal and before actually calculating the light's contribution).
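
To make the contrast concrete, here's a minimal HLSL sketch of what that single-pass SM3.0 shader might look like, compiled for ps_3_0 (which has real dynamic branching); the uniform and input names are hypothetical, not from any actual engine:

// Hedged sketch only -- assumes a ps_3_0 target and hypothetical inputs.
float3 lightPos;   // set by the application
float3 lightColor; // set by the application

float4 main(float3 worldPos : TEXCOORD0,
            float3 normal   : TEXCOORD1) : COLOR
{
    float3 L     = normalize(lightPos - worldPos);
    float  NdotL = dot(normalize(normal), L);

    // SM3.0 dynamic branch: bail out early for pixels facing away from
    // the light, skipping the expensive lighting math entirely. SM2.0
    // has no such branch, which is what forces the multipass scheme above.
    if (NdotL <= 0.0f)
        return float4(0.0f, 0.0f, 0.0f, 1.0f);

    // Expensive per-pixel lighting runs only for lit pixels.
    return float4(lightColor * NdotL, 1.0f);
}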

What 'redundant overhead'?

Running shader routines on non-visible pixels, and calculating light surfaces that have no impact on pixels by rerunning entire shader routines. There is no way around this using SM 2.0; it is relatively trivial under 3.0.

Perhaps I'm not understanding the exact problem, but I think the solution I just outlined above would work (doing one or two low-instruction-count passes to cull out the pixels that will not be affected, then doing the bulk of the work on the remaining ones). SM3.0 can just do it all in one shader pass, which, while marginally more efficient, is not going to double or triple throughput.

Admittedly, I am not an expert on writing pixel shaders. Perhaps there is some critical restriction or piece of information that I am not grasping.

The 7800GTX has 24+8 shaders and 24 pipelines;

It has sixteen traditional pixel pipes; the "24 pipelines" figure comes from the ALU shader hardware. It is only capable of drawing 16 pixels per clock: 16 pixels output per clock, 32 shader units, a 1:2 ratio.

I thought it still had the capability of fully working on 24 pixels, but that only 16 could be output per clock (due to the 16 ROPs). Perhaps the article I read on the card's architecture misled me.

Which is why I clarified my numbers, since you used a totally different set of assumptions and got different answers because of it.

I figured it both ways: I used the AMIR numbers, as those were the best prices, and as you noticed, that benefited the X800XL, not the 6800GT.

I meant that you were looking at both AGP and PCIe cards, while I was only considering PCIe ones. That changes the results a little bit. Let me rerun the prices for AGP:

AGP CARD ONLY PRICES (I hope this is clear enough this time)

X800: $245-255
6600GT: $150-160 ($138 with MIR)

X800XL: $280-290
6800GT: $270-280

So no, neither of the ATI cards would be a particularly good buy in AGP. However, Newegg does have several AGP X800Pro cards for only $225-230, which still might be better than the 6600GT in terms of price/performance. Of course, there's also an XFX GF6800 for only $164 (at least it looks like a real GF6800), which blows both of those out of the water.

Basically, my conclusion is that AGP card prices are wacky right now.

3Digest got much closer Doom3 numbers with more recent drivers. The 6600GT was less than 20% faster at Doom3 -- and the X800 beat it at every setting with AA/AF enabled.

The X800 is better across the board (sometimes by significant margins) at Far Cry and HL2. The Far Cry and HL2 numbers with AA/AF are just depressing; the X800 beats a 6600GT SLI setup.

Funny, the link you provided has the 6600GT at the top of the usability ratings (the X800 is second).

If you look at the detail graphs, they also were quoting something like $225 for an X800 and $175 for a 6600GT. With closer prices, the X800 would almost certainly have ranked above the 6600GT, considering how hard it kicked its ass in most of the tests.

Or someone puts out a game that gets more than a 5% performance improvement from SM3.0 and/or offers significant and usable IQ improvements that are only available with SM3.0.

The problem is that most comparisons are SM3+HDR vs. SM2, and then you have the fact that you must use the dumbed-down SM2 shaders as a starting point, due to all of the resources they waste.

I definitely saw plenty of SM2 versus SM3 benches for Far Cry, and more for Splinter Cell: Chaos Theory after they added SM2.0 support in the latest patch. Both showed real but minimal gains from the switch. Yes, yes, I know, they don't count, since the games were only patched to SM3.0. But that's all that's on the market right now. "You should pay for SM3.0 because at some indeterminate point in the future it might provide larger performance gains" is hardly an overwhelming argument in favor of SM3.0 ATM.

Again, like I said, let me know when there's a game that gets a big performance boost, or that gets a significant, usable increase in IQ over SM2.0.
 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Ronin
*chuckles* It's so useless that ATi is implementing it.

ATI is going that route because it is in fact easier to write for. But does it give better quality? That's a good question. I don't see it, but some do.

Don't pay any attention to Ronin; my husband plays against him in online gaming (BHD) and says Ronin lags badly.

Say it isn't so, Ronin.

 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Rollo
Originally posted by: munky
SM3 will be needed in the GF7/R520 cards, but its use in the GF6 series is limited at best.

I don't know, Munky. My son's 6800GT SLI rig is a pretty formidable SM3 GF6 setup. There are no games out it can't run very, very well. Second only to 7800GTX SLI, and sometimes 7800GTX.

Does your son play by himself or in online gaming? If it's online gaming, quit spreading BS.

Connection speed is all that matters in online gaming; everyone knows that.

 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3DFX, though).

Additionally, although Rage Fury Maxx did fail, it still counts (according to rule number 3.431A Section 43Xx that says Rage Fury Maxx counts) as it was launched and it was in the retail channels.

-Kevin


Again, there are some facts you left out, like Nvidia using some ATI tech in their SLI.
 

swatX

Senior member
Oct 16, 2004
573
0
0
Originally posted by: Intelia
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3DFX, though).

Additionally, although Rage Fury Maxx did fail, it still counts (according to rule number 3.431A Section 43Xx that says Rage Fury Maxx counts) as it was launched and it was in the retail channels.

-Kevin


Again, there are some facts you left out, like Nvidia using some ATI tech in their SLI.

Please enlighten me on how NV implemented ATI's technology in Scalable Link Interface?
 

The Linuxator

Banned
Jun 13, 2005
3,121
1
0
Originally posted by: swatX
Originally posted by: Intelia
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3DFX, though).

Additionally, although Rage Fury Maxx did fail, it still counts (according to rule number 3.431A Section 43Xx that says Rage Fury Maxx counts) as it was launched and it was in the retail channels.

-Kevin


Again, there are some facts you left out, like Nvidia using some ATI tech in their SLI.

Please enlighten me on how NV implemented ATI's technology in Scalable Link Interface?

I hadn't heard about this before, but nonetheless I see ATI's Crossfire as superior to Nvidia's SLI in many respects. I hear a lot of ATI cards already out are Crossfire-ready, because there is no need for a special connector on the PCB itself, and that relieves a lot of customers like me. I intend to buy another X800XL if the need arises and put the two in Crossfire mode. The other point is that ATI has pulled even further ahead of Nvidia here: in AnandTech's Computex article it was mentioned that you no longer have to consider your integrated GPU a loss when using a single- or dual-GPU (Crossfire) setup, because Crossfire motherboards will let you use three GPUs to render your frames. So if I have, say, an X300 integrated on the motherboard, it can help with the calculations instead of sitting there collecting dust. The X300 GPU is neither obsolete nor great, but it will still boost performance, since it's there anyway.
So from the advantages I see in ATI's current technology, it's more likely that NV used ATI tech than that ATI used NV tech, and remember, that's if either of them used ideas that weren't their own.
You know what this reminds me of: when the Soviets spied on the Concorde project and rushed their own version to market ahead of the real Concorde team. The Soviet model was terrible (because the people developing the Concorde knew about the espionage and let them have wrong data) and it crashed almost immediately. SLI and Crossfire remind me of that, but let's not jump to conclusions; nobody knows the facts behind the scenes.


 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Intelia
Originally posted by: Rollo
Originally posted by: munky
SM3 will be needed in the GF7/R520 cards, but its use in the GF6 series is limited at best.

I don't know, Munky. My son's 6800GT SLI rig is a pretty formidable SM3 GF6 setup. There are no games out it can't run very, very well. Second only to 7800GTX SLI, and sometimes 7800GTX.

Does your son play by himself or in online gaming? If it's online gaming, quit spreading BS.

Connection speed is all that matters in online gaming; everyone knows that.

Errrr, my son is five. He does not game online. Any comments I make about this system are based on my use of the 6800GTs prior to buying my 7800GTXs.

I don't spread "BS".
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Intelia, no one wants you in this thread. Everyone in here was doing just fine before you came, so STFU, ESPECIALLY about people's children.

Additionally, since you didn't completely flame and troll: you list one thing about Nvidia's RELEASED SLI that was supposedly copied from ATI's UNRELEASED Crossfire. It seems pretty unlikely that Nvidia had any chance of copying an unreleased product :roll:

Linuxator, PLEASE punctuate. Additionally, do a little more research, because your idea is wrong, and it is too hard to pull quotes from that post to prove it.

-Kevin
 