hans030390
Diamond Member
- Feb 3, 2005
Hey Matthias, AnandTech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla), and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12).
Originally posted by: Pr0d1gy
After owning both an SM 3.0 nVidia card and an X800XL, this is how I feel about SM 3.0:
Obviously their plan worked, because I see people every day saying SM 3.0 is a reason to buy a video card. Well, it isn't. Call it my opinion, bash me, or do whatever else you fanboys feel you must. This is coming from the unbiased observation of someone who has owned both and really appreciated the quality of the X800XL's display.
In conclusion, if you think SM 3.0 is some big deal and tell people that future games will have it, you are only telling nVidia, "Yes, I want you to pay game developers to let you hack their graphics engine up and add some useless program so I can say I have the better video card."
Originally posted by: hans030390
You ruined the fun. Yes, we'll need SM3 like we need SM2 today. That's all I'm concerned about.
Originally posted by: hans030390
Hey Matthias, AnandTech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla), and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12). Funny thing is, the 6600GT still runs above 40fps! Did you know that's REALLY playable? Most people don't need 1234309fps to play games. In fact, most people can't tell the difference between 30 and 60fps.
You know what else? Some people don't use AA/AF! OMG! Let's make a buying decision on a card because it does better with AA/AF! You know what, I KNOW I'll be using SM3 sometime; in fact, I even use it now. But I know AA/AF isn't something I use, simply because I don't need it. I'm fine with 10x7, no AA/AF. It's just a waste of performance. Perhaps it's because I grew up without it.
Jeez. So wait, that means the X800 would be even closer in performance to the 6600GT than the X800 Pro is. So... why not sacrifice a few FPS, still have a very playable game at high settings, skip the unneeded AA/AF, and have something you will use in the future, SM3? And SM3 boosts performance in games that use it, so maybe you COULD put on that extra AA/AF if you want it and still have a playable game.
Sorry if you're the type that plays at uber 16x12 with full graphics settings and all AA/AF. Some of us really don't care about it.
Originally posted by: goku2100
Originally posted by: hans030390
Hey Matthias, AnandTech has an article on the 6600GT. It compares it to an X800 Pro (not vanilla), and it only gets like 10-15fps less at either 10x7 or 12x10 (which are GREAT for games; no one really needs 16x12). Funny thing is, the 6600GT still runs above 40fps! Did you know that's REALLY playable? Most people don't need 1234309fps to play games. In fact, most people can't tell the difference between 30 and 60fps.
You know what else? Some people don't use AA/AF! OMG! Let's make a buying decision on a card because it does better with AA/AF! You know what, I KNOW I'll be using SM3 sometime; in fact, I even use it now. But I know AA/AF isn't something I use, simply because I don't need it. I'm fine with 10x7, no AA/AF. It's just a waste of performance. Perhaps it's because I grew up without it.
Jeez. So wait, that means the X800 would be even closer in performance to the 6600GT than the X800 Pro is. So... why not sacrifice a few FPS, still have a very playable game at high settings, skip the unneeded AA/AF, and have something you will use in the future, SM3? And SM3 boosts performance in games that use it, so maybe you COULD put on that extra AA/AF if you want it and still have a playable game.
Sorry if you're the type that plays at uber 16x12 with full graphics settings and all AA/AF. Some of us really don't care about it.
You're annoying me.
Originally posted by: Gamingphreek
Originally posted by: hans030390
You ruined the fun. Yes, we'll need SM3 like we need SM2 today. That's all I'm concerned about.
Yet we don't need anything above 12x10!? What kind of twisted logic are you arguing with?
-Kevin
Edit: Hans, just out of pure curiosity, are those numbers next to your name your birthday?
Originally posted by: hans030390
Woot! I like to be annoying!
I would like to discuss SMs in a nice manner, but no one seems to do that. It's all flame and bash.
So sorry, I'll leave you all to talk about something else. Have fun!
AFAIK, "pixel" pipelines should more accurately be called fragment pipelines, as they operate on fragments of the scene that are eventually delivered to the framebuffer as pixels courtesy of the ROPs. I guess "pixel pipeline" became the vernacular because ROPs were hard-wired to each pipe/quad, so it was a straight shot to the back buffer. Starting with the GF6, nV decoupled the ROPs from the quads (now there's a crossbar or FIFO buffer between them), and that's when "pixel pipe" became somewhat less accurate.
Originally posted by: Drayvn
They are, I think, pixel fragment pipelines, all of them.
Not quite.
Originally posted by: Drayvn
And the ALUs can do 2 MADD ops per clock each.
EDIT: Yup, just checked around the web, and yeah, the 7800GTX has 24 fragment pipelines. And it has 2 MADDs per ALU. And there are 2 ALUs.
and:
The pixel pipe is made up of two vector units [ALUs] and a texture unit that all operate together to facilitate effective shader program execution. ... There was much talk when the 6800 launched about the distinct functionality each of the main shader ALUs had. In NV4x, only one ALU had the ability to perform a single-clock MADD (multiply-add). Similarly, only one ALU assisted in texture address operations for the texture unit. Simply having these two distinct ALUs (regardless of their functionality difference) is what was able to push the NV4x so much faster than the NV3x architecture.
In their ongoing research into commonly used shaders (and likely much of their work with shader replacement), NVIDIA discovered that a very high percentage of shader instructions were MADDs. Multiply-add is extremely common in 3D mathematics, as linear algebra, matrix manipulation, and vector calculus are a huge part of graphics. G70 implements MADD on both main shader ALUs. -- AT
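To make the MADD point concrete, here's a minimal CPU-side C sketch (purely illustrative; the helper names are made up, and real shaders obviously aren't C) showing why a bread-and-butter graphics operation like a 4x4 matrix times vec4 transform decomposes almost entirely into a*b + c operations, which is exactly what a hardware MADD unit retires in one cycle:
[code]
/* Hypothetical CPU-side sketch, not real shader code: a 4x4 matrix * vec4
   transform is nothing but chained multiply-adds (a*b + c), which is why
   MADD throughput dominates pixel shader performance. */
#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

static float madd(float a, float b, float c) { return a * b + c; }  /* one MADD op */

/* Row-major 4x4 transform: 4 MADD-style ops per output component, 16 total. */
static vec4 transform(float m[4][4], vec4 v) {
    vec4 r;
    r.x = madd(m[0][0], v.x, madd(m[0][1], v.y, madd(m[0][2], v.z, m[0][3] * v.w)));
    r.y = madd(m[1][0], v.x, madd(m[1][1], v.y, madd(m[1][2], v.z, m[1][3] * v.w)));
    r.z = madd(m[2][0], v.x, madd(m[2][1], v.y, madd(m[2][2], v.z, m[2][3] * v.w)));
    r.w = madd(m[3][0], v.x, madd(m[3][1], v.y, madd(m[3][2], v.z, m[3][3] * v.w)));
    return r;
}

int main(void) {
    float scale2[4][4] = {{2,0,0,0},{0,2,0,0},{0,0,2,0},{0,0,0,1}};  /* uniform scale */
    vec4 v = {1.0f, 2.0f, 3.0f, 1.0f};
    vec4 r = transform(scale2, v);
    printf("%.1f %.1f %.1f %.1f\n", r.x, r.y, r.z, r.w);  /* prints 2.0 4.0 6.0 1.0 */
    return 0;
}
[/code]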
AFAIK, nV's ALUs can work on four-component vectors, but are capable of dual-issue instruction splits (3+1, 2+2, or 1+3).
although NV40 has two ALUs, they are not in fact each fully featured with the same instructions; instead, one is a MADD unit and the other is a MUL unit. For G70, NVIDIA say they have added a MADD and MUL into each of the units that didn't previously contain them, and in fact we are led to believe they are now complete instruction duplicates of each other (although, obviously, the second unit doesn't have texture address processing instructions). The net result is that G70 features 48 fragment shaders of the same capabilities, with one of them having to handle the texture processing instructions. -- B3D
Maybe that's what you're (erroneously) referring to when you say dual MADDs per ALU?
Again, like NV40, the ALUs are FP32 precision, with a free FP16 normalise on the first ALU. Each unit is a single vector unit, but can execute two instructions that fit in or below 4 components (i.e. 3+1 components, 2+2, 2+1, 1+2, 1+1). -- ibid
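For what it's worth, here's a toy C model of what that dual-issue split means for scheduling (the types and the "cycle" accounting are entirely invented for illustration): two independent ops can share one issue slot on a 4-wide vector ALU as long as their component counts sum to four or fewer.
[code]
/* Toy model of dual-issue on a 4-wide vector ALU: two independent instructions
   can share one issue slot when their component counts total 4 or less
   (3+1, 2+2, etc.). Everything here is made up purely to illustrate the idea. */
#include <stdio.h>

typedef struct { int components; const char *op; } VecInstr;

/* Returns how many issue slots ("cycles") the pair needs. */
static int issue_pair(VecInstr a, VecInstr b) {
    if (a.components + b.components <= 4) {
        printf("co-issue : %s + %s in one slot\n", a.op, b.op);
        return 1;
    }
    printf("serialize: %s, then %s (two slots)\n", a.op, b.op);
    return 2;
}

int main(void) {
    VecInstr madd_rgb  = {3, "MADD.xyz"};   /* 3-component multiply-add on color */
    VecInstr mul_alpha = {1, "MUL.w"};      /* 1-component multiply on alpha     */
    VecInstr madd_full = {4, "MADD.xyzw"};  /* full 4-component op               */

    issue_pair(madd_rgb, mul_alpha);   /* 3+1 fits -> one slot   */
    issue_pair(madd_full, mul_alpha);  /* 4+1 doesn't -> two     */
    return 0;
}
[/code]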
Originally posted by: The Linuxator
Originally posted by: hans030390
Woot! I like to be annoying!
I would like to discuss SMs in a nice manner, but no one seems to do that. It's all flame and bash.
So sorry, I'll leave you all to talk about something else. Have fun!
Sorry if you felt insulted, but you are arguing that quality is BS and all we need is performance.
You've set all the work ATI and NVIDIA have put in through years of research and investment on fire by throwing a Cuban cigar at it and saying, "Oops, I did it again."
Originally posted by: BenSkywalker
And you can't do this in SM2.0 because...? Couldn't you do a short pass to sort out the pixels that face the light, then only do a second pass to compute the full light calculations for those pixels?
No, you can't reasonably do that under SM2.0. You would need to do a raycast calculation per light, figuring on a visibility intersect; that would be significantly more complex than the most demanding shader scenario we have discussed (approaching radiosity levels of complexity before we render anything at all).
What 'redundant overhead'?
Running shader routines on non-visible pixels, and calculating light surfaces that have no impact on pixels by rerunning entire shader routines. There is no way around this using SM 2.0; it is relatively trivial under 3.0.
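A rough CPU-side analogy of that overhead, as a C sketch (the function names and types are made up; real shaders obviously aren't C): with SM2-style static flow the full lighting routine effectively runs for every pixel/light pair even when it contributes nothing, while SM3-style dynamic branching lets a cheap per-pixel test skip the expensive work.
[code]
/* Hypothetical sketch of the SM2-vs-SM3 overhead argument. expensive_lighting()
   stands in for a long per-light shader routine; the types are invented. */
#include <stdio.h>
#include <stddef.h>

typedef struct { float nx, ny, nz; } Pixel;   /* surface normal at a pixel  */
typedef struct { float lx, ly, lz; } Light;   /* direction toward the light */

/* Stand-in for the real, costly per-light shading work. */
static float expensive_lighting(const Pixel *p, const Light *l) {
    float ndotl = p->nx * l->lx + p->ny * l->ly + p->nz * l->lz;
    return ndotl > 0.0f ? ndotl : 0.0f;
}

/* SM2-style: no dynamic branch, so the full routine runs for every pixel. */
static float shade_static(const Pixel *px, size_t n, const Light *l) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += expensive_lighting(&px[i], l);   /* runs even when it adds zero */
    return sum;
}

/* SM3-style: a cheap test branches out early for pixels facing away. */
static float shade_branching(const Pixel *px, size_t n, const Light *l) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float ndotl = px[i].nx * l->lx + px[i].ny * l->ly + px[i].nz * l->lz;
        if (ndotl <= 0.0f)
            continue;                           /* skip the expensive routine */
        sum += expensive_lighting(&px[i], l);
    }
    return sum;
}

int main(void) {
    Pixel px[2] = { {0, 0, 1}, {0, 0, -1} };    /* one facing the light, one away */
    Light l = {0, 0, 1};
    printf("%.1f %.1f\n", shade_static(px, 2, &l), shade_branching(px, 2, &l));
    return 0;
}
[/code]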
The 7800GTX has 24+8 shaders and 24 pipelines;
It has sixteen traditional pixel pipes; the "24 pipelines" figure comes from the ALU shader hardware, and it is only capable of drawing 16 pixels per clock. 16 pixels output per clock, 32 shader units: 1:2.
Which is why I clarified my numbers, since you used a totally different set of assumptions and got different answers because of it.
I figured it both ways. I used the AMIR numbers as those were the best prices, and as you noticed, that benefited the X800XL, not the 6800GT.
3Digest got much closer Doom3 numbers with more recent drivers. The 6600GT was less than 20% faster at Doom3 -- and the X800 beat it at every setting with AA/AF enabled.
The X800 is better across the board (sometimes by significant margins) at Far Cry and HL2. The Far Cry and HL2 numbers with AA/AF are just depressing; the X800 beats a 6600GT SLI setup.
Funny, the link you provided has the 6600GT at the top of the usability ratings (the X800 is second).
Or someone puts out a game that gets more than a 5% performance improvement from SM3.0 and/or offers significant and usable IQ improvements that are only available with SM3.0.
The problem is that most comparisons are SM3+HDR vs. SM2, and then you have the fact that you must use the dumbed-down SM2 shaders as a starting point, due to all of the resources they waste.
Originally posted by: Ronin
*chuckles* It's so useless that ATi is implementing it.
Originally posted by: Rollo
Originally posted by: munky
SM3 will be needed in the GF7/R520 cards, but its use in the GF6 series is limited at best.
I don't know Munky. My son's 6800GT SLI rig is a pretty formidable SM3 GF6 setup. There are no games out it can't run very, very well. Second only to 7800GTX SLI, and sometimes 7800GTX.
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3dfx, though).
Additionally, although the Rage Fury Maxx did fail, it still counts (according to rule number 3.431A, Section 43Xx, which says the Rage Fury Maxx counts), as it was launched and it was in the retail channels.
-Kevin
Originally posted by: Intelia
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3dfx, though).
Additionally, although the Rage Fury Maxx did fail, it still counts (according to rule number 3.431A, Section 43Xx, which says the Rage Fury Maxx counts), as it was launched and it was in the retail channels.
-Kevin
Again, there are some facts you left out, like that nVidia used some ATI tech in their SLI.
Originally posted by: swatX
Originally posted by: Intelia
Originally posted by: Gamingphreek
Correction: they didn't make Scan Line Interleave. However, they did make Scalable Link Interface (they did borrow some tech from 3dfx, though).
Additionally, although the Rage Fury Maxx did fail, it still counts (according to rule number 3.431A, Section 43Xx, which says the Rage Fury Maxx counts), as it was launched and it was in the retail channels.
-Kevin
Again, there are some facts you left out, like that nVidia used some ATI tech in their SLI.
Please enlighten me on how NV implemented ATI's technology in Scalable Link Interface?
Originally posted by: Intelia
Originally posted by: Rollo
Originally posted by: munky
SM3 will be needed in the GF7/R520 cards, but its use in the GF6 series is limited at best.
I don't know Munky. My son's 6800GT SLI rig is a pretty formidable SM3 GF6 setup. There are no games out it can't run very, very well. Second only to 7800GTX SLI, and sometimes 7800GTX.
Does your son play by himself or do online gaming? If it's online gaming, quit spreading BS.
Connection speed is all that matters in online gaming; everyone knows that.