What's better? GeForce 2 Ultra or the new ATI Radeon?


RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Doomguy:


<< Techo: You're the worst ati zelaot i've seen. >>


Yes, I remember you calling me a 3dfx zealot, despite the fact that I own a 64MB GTS, had a 32MB GTS, and my previous card was a Creative TNT2 Ultra.

Doom, you're a child, and a complete nvidiot to the nth degree. I like my GTS, but you take it as religion. You will attack ANYONE who thinks there might be some card, somewhere out there, that does ANYTHING better than your beloved GeForce. Get over it kid. Your mom really should pack some granola bars into your lunch so that you can get a bit of food in you in the middle of the day. You get cranky when you haven't eaten and can't take a nap.



<< You downplay the best microsoft OS ever made just because your radeon sucks in it. Anyone who has actually used win2k knows how great it is. >>



yep, Win2k is great, but it's not a gaming OS. It has enhancements that make it more multimedia friendly, but it is still NT. The purpose of Win2k is NOT to play Deer Hunter or Barbie or Quake3, but to run a database server or a graphics workstation or something with "mission critical" applications on it.

Win9x is the consumer OS, WinNT is the business OS. Win2k is a version of WinNT. Gaming falls under "consumer application", not "business application".

There really isn't anything here to discuss.


<< How old are you honestly? >>


hehehe...he's 16. a whole year older than you. <rofl>


<< Your english isnt very easy to read. Are from a 3rd world country? >>


shut up dickhead. I hate when people say that. Don't be so ethnocentric. Just because someone doesn't speak English as their FIRST language doesn't mean they're from a 3rd world country. Almost all of Europe and Asia speak something DIFFERENT as their first language.

Moron.


<< Also NVidia's new drivers increase speed and compaiblity with VIA chipsets. The radeon seems to have ALOT of VIA problems >>


okay, I haven't had the opportunity to try the 6.26 leaked drivers, so I'll assume that's what you're referring to. If not, you've got your head WAAAAAAAAAAAAAY up your ass on that one, because the 6.18 drivers have been giving MANY Via mobo owners absolute fits.
 

Techno

Golden Member
Oct 9, 1999
1,063
0
0
Also, I would like to add something about the VIA chipset. Yes, for some reason there is something in the hardware of VIA's KX133 chipset: I have been seeing a lot of people only being able to use AGP 1x. But there are a lot more people (me being one of them when I get my parts back) using the KT133 chipset who can get AGP 4x.
 

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
Robo: The radeon has more via problems than the gf2, although neither are actually their fault. It's the fault of via's poor agp implementation.

Robo: It doesn't matter what win2k was intended for. It works VERY well as a gaming os. Please tell me how it doesn't work well as one. Oh yeah, you can't. BTW 3dfx's FSAA is blurrier than NVidia's. Go talk to ben if you want more detail on it. SGI must have dumped RGSS for OGSS for a good reason. Also 3dfx cards don't have true trilinear filtering and they are slower. Unless you play glide games only, the GF2 is superior. You can't seem to accept that though.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Doomguy:


<< Robo: The radeon has more via problems than the gf2 although neither are actually their fault. Its the fault of via's poor agp implementation. >>


agreed. You'd have to twist my arm and shove a snake up my ass to get me to get one of their mobo's (THE ONLY reason I haven't gone AMD)



<< Robo: It dosent matter what win2k was intended for. It works VERY well as a gaming os. Please tell me how it dosent work well as one. Oh yeah you cant. >>


It works *okay* as a gaming OS. It does not work as well as 9x as a gaming OS. There are titles that work on 9x that don't on 2k. Many games run better on 9x than 2k. 2k is far more resource-intensive as well. An outstanding operating system overall, and definitely superior *overall* to 9x, but not as good as 9x for gaming. Period.



<< BTW 3dfx's FSAA is blurier than NVidia's. Go talk to ben if you want more detail on it. >>


1) You haven't seen both of them in action, so you really don't know what you're talking about
2) You have to get Ben to tell me details, because you have no idea what you're talking about
3) 3Dfx implemented a lodbias slider for a reason - so that the blurriness of FSAA can be removed (improves overall texture detail as well)
4) Of course, you don't know any of the above, since you only know what other people tell you, and you have no clue what you're talking about
5) Did I mention you don't have any clue what you're talking about?
6) Once you get out of puberty, you'll realize that "there is no substitute for experience"



<< Also 3dfx cards dont have true trilinear filtering and they are slower. >>


1) They have true trilinear filtering, but it is not enabled when multitexturing is done at the same time
2) Lack of a 2nd TMU is the biggest fault of the 5500 ($$$ says you don't know the entire reason why)
3) The lodbias slider does a very nice job of making up for their lack of trilinear filtering
4) If I posted 4 screenshots, 2 of them with bilinear filtering, 2 with trilinear filtering, I'd bet your left nut that you couldn't guess which was which
5) What does trilinear filtering do? (quick, without looking it up, you silly boy you!)
6) The 5500 is slower in most benchmarks, but still very fast. Please go answer my "how fast is fast enough" thread and perhaps you'll learn something in the process (fat chance, since you must have an open mind to learn)
7) If you were to play 99% of the games out there on 2 separate high-end preconfigured P3 systems, one with a GTS, the other with a 5500, you wouldn't be able to tell the difference between them while playing games (except for Deus Ex and NFSU)



<< Unless you play glide games only the GF2 is superior. >>


I disagree. Of course, since I've used them both, and you haven't, you obviously haven't the faintest clue what you're talking about. The GF2 benchmarks better in almost all cases. Both cards are fast enough in just about every game. The 5500 provides better game compatibility, better image quality (in most cases that I've found), FAR better FSAA (go on, expose your ignorance once again with that 'FSAA is blurrier' BS, that truly is humorous), and does a much better job when you have a large monitor @ high resolution and you're NOT playing games (when you get out of puberty, you'll have to get a job, and you'll find out what I mean when I say "computers are used for MORE than just gaming")

<< You can't seem to accept that though. >>


Probably because, unlike you, I've used them both. They both kick ass. The 5500 has several advantages over the GTS that most website reviews don't touch on. You wouldn't know that unless you used them for more than just benchmarking. Oh, that and you don't have a clue what you're talking about.

The GTS certainly has several advantages over the 5500, but most of them have ZERO influence on my decision-making, and they definitely were far less influential on my computer-using experience than the pitfalls.

go read a website, memorize some benchmarks, and come back here to spew some more, you adorable little pubescent nvidiot you.

Ever see the movie "Good Will Hunting"? You remind me of that schmuck that Matt Damon tore into in the bar when he first met Minnie Driver.

"Have you ever had a thought that wasn't from a book? A thought that was actually your own?"



 

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
Trilinear filtering gets rid of those nasty mip map lines. Most people prefer the GF2 over the V5; if the V5 was better overall, more people would buy it. This just isn't the case. T&L is the future, and most people skip a generation, so it will be very useful. The days of glide domination are over. FSAA blows anyway when you can raise the res to get more clarity, less jaggies and more speed than FSAA.
 

fodd3r

Member
Sep 15, 2000
79
0
0
after reading most of the posts on this thread i was baffled by the fact that most if not all of you don't understand how framerates are calculated, or how to interpret them. if you did know, then you would have put forth this argument.

framerate under q3 is calculated by giving the video card a certain amount of frames (a demo). at that point the clock is started, and it begins to work its way through the demo. when it finishes, the time taken is checked. you take the amount of frames that were passed to the card and then divide that by the time taken to render them, and this yields that wonderful, magical and oh so overly stressed frame rate.
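to put numbers on that, here is a minimal python sketch of the calculation just described; the per-frame times are invented purely for illustration, not taken from any real timedemo:

```python
# A minimal sketch of the average-fps calculation described above, using
# made-up per-frame render times (in seconds) for illustration only.
frame_times = [0.008, 0.009, 0.030, 0.012, 0.007]  # hypothetical timedemo frames

total_time = sum(frame_times)
average_fps = len(frame_times) / total_time        # frames divided by time taken

# The per-frame rates show what the single average number hides:
per_frame_fps = [1.0 / t for t in frame_times]
print(f"average: {average_fps:.1f} fps")
print(f"min: {min(per_frame_fps):.1f} fps, max: {max(per_frame_fps):.1f} fps")
```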

looking at this, anyone with half a brain will see an enormous flaw. all you are getting is an average framerate, which regardless of what you think isn't too descriptive of what's actually going on. what's missing, you might ask? well, simple: the minimum and maximum framerate. what's the importance of the min and max? well, this will tell you how well the video card works. with geforces and voodoos, which have no intelligent rendering techniques, you'll notice a good amount of fluctuation in fps. this means that the min will be really low, and thanks to fill rate the high will be way up there. the radeon on the other hand won't really experience this. now don't get me wrong, its fill rate is nothing to laugh at, but still it doesn't come close to the geforces; one should remember, though, that it does put a wee bit of brains behind that brawn a la the hyper z buffer.

here is the really big question: so what relevance does this all have?

well, if the ati's framerate fluctuation isn't that severe, it delivers a smoother game. this seems like an image quality deal but it's not. when the brute force cards hit a complex scene they'll be crawling through it, while the ati will simplify it and move through the scene with much less difficulty. it doesn't matter if you can hit 200+ fps; all that matters is that you don't bottom out at 30 fps on a $500 card. and if you think that won't happen, think again, it will.

as to the future, well, the radeon has more head room for improvement than the geforces, and that's merely on the driver front. also when it comes to all the features, well, the geforces will have to make it up with fill rate and/or do without. fat chance of that happening, seeing how memory bandwidth limited they are already.

also note that as the polygon count goes up, triangles are becoming the size of pixels. this may not seem that important, but do note that with massively parallel architectures such as the geforce's quad pipe, you take a considerable fill rate hit. how much, you ask? simple: only one triangle can be textured at a time, so if a triangle is the size of a pixel, well, then take a guess as to how many pixels can be textured.

as the shift to 3 times texturing takes place, expect the ati to beat out the geforces, considering 3 times texturing cuts the geforce2's pixel fillrate in half. if you are confused as to why that is, here it is in a nutshell: the geforce2 devotes two pipelines to each pixel in three times texturing, so now all of a sudden it at best outputs two pixels per clock. not only does it take that hit, it also has to send the geometry information to each pipeline, so for one pixel it has to send geometry information twice. what makes this really bad is that there is a significant latency because of the fact that sdram has destructive reads. i'm not sure whether the geforces will be able to cache the info.
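for what it's worth, here is a rough python sketch of the arithmetic behind that halving claim, under the assumptions above (4 pipelines, 2 texture units each per clock); these numbers are illustrative assumptions, not official specs:

```python
# Rough arithmetic behind the "three textures halves the GF2's pixel rate"
# claim above, under assumed numbers: 4 pipelines, 2 texture units per
# pipeline per clock. Illustrative only, not vendor specifications.
import math

pipelines = 4
textures_per_pipe_per_clock = 2

def pixels_per_clock(textures_per_pixel: int) -> float:
    # Each pixel ties up enough pipelines to supply all its textures in one clock.
    pipes_needed = math.ceil(textures_per_pixel / textures_per_pipe_per_clock)
    return pipelines / pipes_needed

for t in (1, 2, 3):
    print(f"{t} texture(s) per pixel -> {pixels_per_clock(t):.0f} pixels/clock")
# 1 -> 4, 2 -> 4, 3 -> 2 : the halving described above
```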

this, paired with the fact that t&l is taking a foothold and needs its own bandwidth (granted, due to the way the scene is set up and rendered this won't pose an incredibly huge bandwidth problem), will still hamper the performance enough to turn most ppl off.

lastly, just because you play a lot of games, read some gaming magazines and know all the "lingo," don't think you understand video cards. most of what you read in these so-called hardware analyses is a bunch of hype and watered down quasi-technical literature. think about it this way: people drive cars, yet how many actually know how they work, let alone can fix them?
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
sorry to ask, but why is the lack of a second TMU the greatest downfall of the V5? shouldn't the second pipeline make up for this? i did a few algorithms a while back and it proved that while doing single texture rendering, the two pipes were indeed faster, because the second TMU was idling. this is also true with 3 texture rendering, as the second TMU on the second pass is also going to waste, although the penalty isn't so big.

Honestly, if the 2nd TMU was the downfall of the V5, then why did they rip it out of the V3 chip and whack a second pipeline on the V5?

just my $0.04 (damn australian exchange rate )
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
3Dfx implemented a lodbias slider for a reason - so that the blurriness of FSAA can be removed (improves overall texture detail as well)

Which also lowers the performance below that of the benchmarks shown on the websites.

They have true trilinear filtering, but it is not enabled when multitexturing is done at the same time

Because the performance hit would be too high. That's why they stick to the approximated crap.

Lack of a 2nd TMU is the biggest fault of the 5500 ($$$ says you don't know the entire reason why)

Because multi-texturing must be done between the CPUs rather than on one CPU.

If I posted 4 screenshots, 2 of the with bilinear filtering, 2 with trilinear filtering, I'd bet your left nut that you couldn't guess which was which

Of course you couldn't tell a difference. You see the difference while playing the game, as you move closer/further to things.

What does trilinear filtering do? (quick, without looking it up, you silly boy you!)

Same as bilinear filtering but on the Z axis as well. True tri-linear mip-mapping makes use of multiple sized textures and draws those that are the closest to the required size.
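For reference, here is a rough sketch of the usual mip-blending description of trilinear filtering: take a bilinear sample from the two nearest mip levels and blend by the fractional LOD. The bilinear_sample helper and the tiny mip chain are placeholders for illustration, not any real driver or API code.

```python
# Conceptual sketch of trilinear filtering: blend bilinear samples from the
# two nearest mip levels by the fractional LOD. Placeholder data and helper.

def bilinear_sample(mip, u, v):
    # Placeholder: a real implementation would weight the four nearest texels.
    return mip[int(v) % len(mip)][int(u) % len(mip[0])]

def trilinear_sample(mips, u, v, lod):
    lower = int(lod)                       # nearer (more detailed) mip level
    upper = min(lower + 1, len(mips) - 1)  # next smaller mip level
    frac = lod - lower                     # how far between the two levels
    a = bilinear_sample(mips[lower], u, v)
    b = bilinear_sample(mips[upper], u, v)
    return a * (1.0 - frac) + b * frac     # the blend hides visible mip-map lines

# Tiny fake mip chain (single-channel values, illustration only):
mips = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5], [0.5, 0.5]]]
print(trilinear_sample(mips, 0, 0, 0.4))   # 0.8
```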

If you were to play 99% of the games out there on 2 separate high-end preconfigured P3 systems, one with a GTS, the other with a 5500, you wouldn't be able to tell the difference between them while playing games

I thought you said you tried the different boards? How come you couldn't see a difference when the reviewers could?

what's missing you might ask? well simple, the minimum and maximum framerate. what's the importance of the min. and max.

The maximum framerate is not important at all. All that is important is the average and the minimum. Anybody can get spikes by standing next to a wall but that doesn't show anything.

this means that the min will be really low, and thanks to fill rate the high will be way up there. the radeon on the other won't really experience this, now don't get me wrong it's fill rate is nothing to laugh at still i doesn't come close to the geforces



How can the minimum on the GF2 be really low if it is higher than the Radeon? The higher the average the higher the minimum will be. So you think that 100 fps average will have a lower minimum than 50 fps average? What kind of reasoning is this?

It's quite the opposite of your reasoning. The first thing you do is turn on VSYNC to limit your spikes. Then, the more average fps you have, the more chance you will stay at your monitor refresh rate, ensuring constant framerates. The Radeon will be more likely to dip below the VSYNC mark than the GF2, so it will experience more fluctuations.

Get the benchmarks here.

1024 x 768, Q3 HQ
---------------------------
GF2 64 MB: 95 fps
Radeon 64 MB: 79 fps

Say your monitor refresh is 75 Hz. The Radeon only needs to lose five frames per second before the framerates start to fluctuate, while the GF2 would require the loss of 21 fps to start fluctuating. As for your spike theory, whether the GF2 is getting 95 fps or 75 fps it is always a constant 75 fps because it is being constrained by the VSYNC setting.

You want fps as high as possible at all times + VSYNC turned on. The higher the average, the higher the minimum, and the more likely the framerate will stay at a constant level equal to the monitor refresh rate.

Thus your ideal gaming situation is: minimum fps = average fps = maximum fps (which is your monitor refresh in this case). If these three values are always the same you will experience constant fps.
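A small worked example of that VSYNC argument, using the numbers above (75 Hz refresh, 95 vs 79 fps averages); the per-scene dips are invented purely for illustration:

```python
# Worked example of the VSYNC argument above, using the thread's numbers
# (75 Hz refresh, GF2 ~95 fps, Radeon ~79 fps). Per-scene dips are invented.
refresh_hz = 75

def effective_fps(raw_fps):
    # With VSYNC on, anything above the refresh rate is clamped to it.
    return min(raw_fps, refresh_hz)

scenes = {"easy": 0, "typical": -10, "heavy": -25}   # hypothetical fps deltas

for card, avg in (("GF2 64 MB", 95), ("Radeon 64 MB", 79)):
    capped = [effective_fps(avg + delta) for delta in scenes.values()]
    print(card, capped)
# GF2: [75, 75, 70]   Radeon: [75, 69, 54]
# The slower card falls below the refresh cap sooner, which is the
# fluctuation being described here.
```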

well if the ati's framerate fluctuation isn't that severe, thus it delivers a smoother game.

False as I have just proven.

when the brute force cards hit a complex scene they'll be crawling through it, while the ati will simplify it and move through the scene with much less difficulty.



How do you figure this??? If the GF2 gets higher scores than the Radeon, why do you think the GF2 would be more likely to crawl? Do you gauge performance based on the lowest fps or something?

it doesn't matter if you can hit 200+ fps, all that matters is that you don't bottom out at 30 fps on a $500 card. and if you think that won't happen think again it will.

It does matter. The higher this number is, the less likely the minimum fps will drop, which is exactly what you say you don't want in your next sentence. See my explanation above.

So if the GF2 Ultra is 2 - 2.5 times faster than the Radeon, you think that the GF2 Ultra is more likely to drop lower than the Radeon? Alrighty.

I guess you want a card with the lowest possible average to stop fluctuations. Well that means V1 is the best card, doesn't it?
 

Techno

Golden Member
Oct 9, 1999
1,063
0
0
I am sure that someone else will make a longer explanation, but BFG10K, to the sentence:

"So if the GF2 Ultra is 2 - 2.5 times faster than the Radeon, you think that the GF2 Ultra is more likely to drop lower than the Radeon? Alrighty."

If you are talking about the 32-bit performance hit, I think the answer is yes. Ever since the Rage Fury people have been talking about how ATI engineered the cards to have less of a performance hit when going from 16-bit -> 32-bit graphics vs. the competition. W/ the Radeon I can only guess they made it the same way if not better.

 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0


<< 3Dfx implemented a lodbias slider for a reason - so that the blurriness of FSAA can be removed (improves overall texture detail as well)

Which also lowers the performance below that of the benchmarks shown on the websites. >>

Well, had you ever used the card, you would realize that the framerate difference is almost negligible, up to about -1.50. I lost a grand total of just under 3 fps zipping the slider to -0.75. Poor me. 130 fps to 127, whatever will I do with such a horrid framerate? <laff>


<< If I posted 4 screenshots, 2 of them with bilinear filtering, 2 with trilinear filtering, I'd bet your left nut that you couldn't guess which was which

Of course you couldn't tell a difference. You see the difference while playing the game, as you move closer/further to things. >>

that's another strong point of the lodbias slider and mipmap dithering settings (which aren't enabled by default). They almost eliminate the texture aliasing and blurriness.


<< If you were to play 99% of the games out there on 2 separate high-end preconfigured P3 systems, one with a GTS, the other with a 5500, you wouldn't be able to tell the difference between them while playing games

I thought you said you tried the different boards? How come you couldn't see a difference when the reviewers could? >>

because I played the games instead of running benchmarks. If I noticed jitteriness or jerkiness, then I would take note. Here's where I noticed a definable speed difference between the 2 in the games I *played* (not benchmarked):
1) Quake3 and MDK2 > 1152x864. The 5500 didn't do as well.
2) UT @ 1280x1024x32 in D3D - the GTS didn't do as well.
3) Deus Ex @ any resolution in D3D - the GTS didn't do as well.
4) NFSU - the GTS looked like garbage by comparison and was only marginally stable. The 5500 was rock solid and looked like heaven, smooth as silk.

For benchmarks, I'd find the GTS ahead in Q3 and MDK2, the 5500 ahead in UT and Deus Ex. Dunno if NFSU has a framerate counter, but with all the jerks and hiccups it produced in NFS, it would be a total laugh to see the GTS score anywhere NEAR the 5500.



<< what's missing you might ask? well simple, the minimum and maximum framerate. what's the importance of the min. and max.

The maximum framerate is not important at all. All that is important is the average and the minimum. Anybody can get spikes by standing next to a wall but that doesn't show anything. >>

actually, the minimum framerate is definitely most important, but unfortunately, the maximum framerate has a MAJOR influence on the average. A perfect example of this (and why the GTS ISN'T all that much better than a 5500 even in Q3) is in the basic, plain-jane Q3 demo001. The GTS was breaking 200 fps running from the railgun toward the hallway leading to where homey gibs the guy with the railgun. The 5500 never really broke 150-160. Was the 5500 any less playable in that period of time than the GTS? Obviously not, but it was a good 40 fps behind the GTS. How much of an impact do low-intensity areas like that have on the final average framerate? A HUGE impact. A better benchmark would be one where we could CAP the framerate, at say 125, and then bench the two cards. I think most of us would agree that anything more than 125 fps is obviously a waste, right? So let's take that advantage away, and then see what the average framerate ends up being.
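Here's a quick sketch of that capped-average idea; the two framerate traces are made up just to show the effect:

```python
# Sketch of the capped-average benchmark proposed above: clamp every frame
# to 125 fps before averaging, so huge spikes in easy areas stop inflating
# the score. Both framerate traces are invented for illustration.
CAP = 125

def capped_average(fps_trace):
    return sum(min(f, CAP) for f in fps_trace) / len(fps_trace)

gts_trace = [210, 180, 140, 90, 70]   # hypothetical: big spikes, lower dips
v5_trace  = [155, 150, 130, 95, 80]   # hypothetical: flatter curve

print("raw averages:   ", sum(gts_trace) / 5, sum(v5_trace) / 5)
print("capped averages:", capped_average(gts_trace), capped_average(v5_trace))
# Raw: 138 vs 122. Capped: 107 vs 110 -- the gap shrinks (here it even flips)
# once the unplayably-high spikes no longer count.
```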



<< The higher the average the higher the minimum will be. >>

wrong. UT is a perfect example. At 1280x1024x32 in D3D, the GTS got a MUCH higher max framerate than the 5500, by like 15 or 20 fps (in a few of the lower-intensity areas). The average framerate was only about 4 or 5 fps lower than the 5500's, but the min fps was MUCH lower, like in the teens. The 5500's minimum was TWICE that, in the 30s. So much for that (BS) theory. Nice try tho.



<<
Get the benchmarks here.

1024 x 768, Q3 HQ
---------------------------
GF2 64 MB: 95 fps
Radeon 64 MB: 79 fps
>>


those GTS framerates are with marginally stable drivers (6.18) and TC enabled. TC is butt ugly on the GTS, so most users who give a damn about image quality will turn it off. The Radeon's TC looks outstanding. So you can drop a good 10-12 fps off those GTS benchmarks. At that point, the GTS has a marginal advantage (5-8 fps or so). That is not nearly enough to base a decision on, IMHO, especially considering other factors (check UT 32-bit scores, the GTS gets its ass HANDED to it in D3D)



<< Do you gauge perfomance based on the lowest fps or something? >>


YES!!!!!! That's EXACTLY what you do. Why? Because you won't give a damn about framerate until it DROPS (Q3=exception)! If you see stuttering, then you start giving a damn about your framerate. Aside from that, your framerate only matters when you are bragging to your geek buddies about how many 3dMarks you get in 3dM2k



<< So if the GF2 Ultra is 2 - 2.5 times faster than the Radeon, you think that the GF2 Ultra is more likely to to drop lower than the Radeon? Alrighty. >>

for $500, the GF Ultra better drop to its knees and blow me.
 

fodd3r

Member
Sep 15, 2000
79
0
0
How can the minimum on the GF2 be really low if it is higher than the Radeon? The higher the average the higher the minimum will be. So you think that 100 fps average will have a lower minimum than 50 fps average? What kind of reasoning is this?

you dumb ass, haven't you done averages before? if the min is 1 and the max is 99, well, 99+1=100, all divided by 2 = 50 fps average.

while 25+75=100, all divided by 2 = 50.

there it is, retard!

higher average means jack! because the average framerate isn't indicative of the min frame rate --which is the most important number-- unless you use a full tile rendering architecture, at which point overdraw doesn't cause a huge variance.
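a quick sketch of that point: two invented framerate traces with identical averages but very different minimums (numbers are illustrative only):

```python
# Two invented framerate traces with the same average but very different
# minimums, to make the point above concrete.
spiky  = [150, 120, 90, 30, 10]   # big highs, nasty dips
steady = [95, 85, 80, 75, 65]     # much flatter curve

for name, trace in (("spiky", spiky), ("steady", steady)):
    avg = sum(trace) / len(trace)
    print(f"{name}: avg {avg:.0f} fps, min {min(trace)} fps, max {max(trace)} fps")
# Both average 80 fps; only the minimum tells you which one stutters.
```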

what happens with the ati is that it's not too fast on the really easy frames, but it's not too slow on the really hard ones, so it can't make up the time on the easy frames like the geforce does. btw those demos used for benchmarks largely consist of low load frames, making it a very weighted average towards the geforces.

and nvidia drivers suck big time too. they all have had huge incompatibilities, cept you don't really hear as much about them. ati gets a bad rep because the cards aren't as loved by retard gamers such as yourself, bfg. that, and i talk to more ppl who are satisfied with their ati cards than nvidia's.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"If you are talking about the 32-bit performance hit, I think the answer is yes. Ever since the Rage Fury people have been talking about how ATI engineered the cards to have less of a performance hit when going from 16-bit -> 32-bit graphics vs. the competition. W/ the Radeon I can only guess they made it the same way if not better"

ATi doesn't have any better engineering, they have used a half a$$ed Z-Buffer trick to eliminate a great deal of the performance hit (16bit Z under 32bit color). In fact, this, along with horrible OpenGL (non-games) drivers, is why I would only consider the Radeon for a gaming machine (though it would be my hands down first choice there). 16bit Z is fine for most games, but the ATi boards are absolutely horrible at delivering a truly precise 3D image.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
Well, had you ever used the card, you would realize that the framerate difference is almost negligible, up to about -1.50. I lost a grand total of just under 3 fps zipping the slider to -0.75. Poor me. 130 fps to 127, whatever will I do with such a horrid framerate?

Well, given 3dfx are slower than everyone else to begin with, I wouldn't throw away those 3 fps with such reckless abandon.

actually, the minimum framerate is definitely most important, but unfortunately, the maximum framerate has a MAJOR influence on the average.

What do you mean actually? I said the minimum was the most important!

A better benchmark would be one where we could CAP the framerate, at say 125, and then bench the two cards. So let's take that advantage away, and then see what the average framerate ends up being.

By capping do you mean something like say... turning on VSYNC? And then checking to see which card has the most fluctuations? I believe that is what my whole post was about!!!

wrong. UT is a perfect example.

UT is a poor example of anything.

That's EXACTLY what you do.

I mean lowest average fps. What, so you see card A get 10 fps and card B get 100 fps and say "yeessss, give me card A!!!"

for $500, the GF Ultra better drop to it's knees and blow me.

Hmmm... that's one feature nVidia doesn't have. That's what the V5 6000 is for. I have been told that's what the wall outlet requirement is for.

you dumb ass, haven't you done averages before? if the min is 1 and the max is 99, well, 99+1=100, all divided by 2 = 50 fps average. while 25+75=100, all divided by 2 = 50.


Thanks for the maths intro, but yes I do know about averages.

The question is: at 100 fps average, how did you "deduce" the Ultra has the 1-199 range while the Radeon has the 99-101 range?

higher average means jack! because the average framerate isn't indicitive with the min frame rate --which is the most important number-- unless you use a full tile rendering architecture, at which point overdraw doesn't cause a huge varience.

Once again, 50 fps average vs 100 fps average. Which do you think is more likely to have the highest minimum?

what happens with the ati is that it's not too fast on the really easy frames, but it's not too slow on the really hard ones, so it can't make up the time on the easy frames like the geforce does. btw those demos used for benchmarks largely consistent of low load frames. making it a very weighted average towards the geforces.

I can't believe this!!! Now a FATI is telling me the problem with nVidia's boards is that they are too fast!!!
When the GF2/Ultra is "too" fast, VSYNC kicks in, as I said above.

And what about the crusher benchmarks? Are they too low weighted as well?

ati gets a bad rep, because the cards aren't as loved by retard gamer such as yourself bfg.

So you are saying the retard gamers buying ATI cards don't love them? So why do they buy ATI's boards?

 

fodd3r

Member
Sep 15, 2000
79
0
0
brilliant, bfg, once more you've proven your ignorance in stellar fashion.

your argument consisted of absolutely no facts.

ut is an okay benchmark --though the game sucks, but we'll leave that for another thread-- since it's basically the only one which shows the frame rate under dx and still supports much of the current feature set. other games are way too old, or, such as mdk, way too geforce centric to matter. as long as you don't use ut to compare 3dfx cards to others running it, it's actually decent.

And what about the crusher benchmarks? Are they too low weighted as well?

yes they are, because they have a few "trouble" spots, but largely consist of easier frames. a better demo would be one that's edited such that the demo only consists of "trouble" spots.

will the geforces crawl?

of course they will! they employ no culling unit that will reduce the load. games such as quake 3 have an overdraw rate of 3 to 4, so 3.5 is about the average. this means that every pixel is overwritten 3.5 times. in a complex scene nvidia cards will be strapped for memory bandwidth and take rather large memory latency hits, not to mention even their fill rate ain't much when most of it is wasted. of course you're going to make some moronic little quip about how all this isn't relevant. go ahead and be the fool.

lets see how this works:

<easy frame>
say the overdraw average is 3. no triangle is smaller than 4 pixels, thus maximum fill rate is kept at all times. well, simple math tells you that at 800*600 that's 480,000 pixels in every frame, and these become 3 times the work with overdraw. this means that you now have to texture 1,440,000 pixels per frame. not too bad.

<hard frame>
say the overdraw average is 4 now. we'll still be nice and say no triangle is smaller than 4 pixels, thus the geforce keeps its mythical frame rate. the res is still 800*600, giving you --you guessed it-- 480,000 pixels in every frame. but with overdraw, 1,920,000 pixels per frame.
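the same arithmetic, written out as a tiny script (same resolution and overdraw numbers as above, nothing vendor specific):

```python
# The overdraw arithmetic from the two frames above, written out.
width, height = 800, 600
visible_pixels = width * height            # 480,000 pixels on screen

for label, overdraw in (("easy frame", 3), ("hard frame", 4)):
    textured = visible_pixels * overdraw   # every pixel overwritten 'overdraw' times
    print(f"{label}: {textured:,} pixels textured")
# easy frame: 1,440,000   hard frame: 1,920,000
```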

now that alone shows you that there is a significant fluctuation in the frame rate of the geforce --higher than the radeon's, since it has ways to combat overdraw penalties. there are a few other considerations that have to be looked at.

memory bandwidth: in the hard frame think about how much info will be needed, what makes you think the geforce can handle it?

memory bus: memory is run asynchronously with the gpu, and this causes latency issues. if you want to know how this really affects performance, have a look at via motherboards vs bx; you'll see that the async bus on the via doesn't deliver the same performance that the bx does with a sync bus. for more information, www.aceshardware.com has an excellent article discussing memory technologies and related issues --latency being one of them of course. what this means is that there are wasted clocks on the gpu's part --it's waiting for the information from ram, ie. no texturing is taking place.

memory reads: with all this information being accessed by the gpu, you're bound to hit massive latencies, simply because of dram's destructive reads. basically what happens is information is read, and by reading it the information is removed from ram and must be rewritten. well, since a scene does use the same textures in different places, it's very likely that the gpu reads the info --destroying it-- and wishes to read it again; however, you have to wait for the rewrite. (see www.aceshardware.com memory guides part 1 and 2, great reads!)

massively parallel pixel rendering architecture: like i said before, not all triangles are 4 pixels or larger; therefore, you don't get to have a really high fillrate at all times. so far i have not heard of a single video processor that can work on more than one triangle at a time. let's also not forget the intense geometry accesses --needed per pipeline, even if they are working in tandem.

how does the radeon stack up?

well it has:
a synchronous bus: reducing latency.

fast clear z buffer: further reducing latency issues.

culling: reduces the effects of overdraw.

compression: reduces the amount of memory bandwidth wasted in the constant z buffer accesses (see the sketch at the end of this post).

parallel pixel rendering architecture: now, the same as nvidia, they'll take a hit, but not as much of a hit as the geforce will, since it won't suffer from the geometry accesses as much.

dx8: will support all of the radeon's features, thus a richer feature set makes this card even better.
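as a footnote to the compression point above, here is a rough, assumption-heavy estimate of how much raw z-buffer traffic a frame can generate and what a compression scheme might save; the depth size, overdraw and compression ratio are guesses for illustration, not ATI specs:

```python
# Rough, assumption-laden estimate of Z-buffer traffic: every textured pixel
# typically needs a Z read and often a Z write. All numbers below are
# illustrative guesses, not measured or vendor-published figures.
width, height = 1024, 768
bytes_per_z   = 4          # assumed 32-bit depth values
overdraw      = 3          # assumed average overdraw
fps           = 75

z_traffic_per_frame = width * height * overdraw * bytes_per_z * 2  # read + write
z_traffic_per_sec   = z_traffic_per_frame * fps

compression_ratio = 0.5    # assumed saving from lossless Z compression
saved = z_traffic_per_sec * compression_ratio

print(f"uncompressed Z traffic: {z_traffic_per_sec / 1e9:.2f} GB/s")
print(f"hypothetical saving:    {saved / 1e9:.2f} GB/s")
```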
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Techno-

From the post of mine you were responding to-

"(though it would be my hands down first choice there). 16bit Z is fine for most games, but the ATi boards are absolutely horrible at delivering a truly precise 3D image."

There is a big difference between the image quality of a game on a non-calibrated monitor and precision in a 3D environment. Games are extraordinarily simplistic, not tasking at all for what I am talking about. It is akin to ruling the TNT1's image quality equal to a G400 by comparing them at 640x480 8bit color 60Hz. For games, the Radeon rocks, I thought I made that clear; for high end applications, nVidia is in a completely different league.

Fodd3r-

"ut is an okay benchmark --though the game sucks, but we'll leave that for another thread-- since it's basically the only one which shows the frame rate under dx and still supports much of the current feature set."

UT is a lousy video card benchmark; it is far too CPU dependent and the timedemo code itself is far from perfect. If you run some benches and watch closely you will see lags that simply are not present in the exact same situation when playing, many small hiccups. Something is wrong with the timedemo code, no surprise coming from Epic. A vastly superior D3D bench is Evolva, it supports far more advanced features than UT and is fillrate intensive, not CPU bound (not nearly as much anyway).

"now that alone shows you that there is a significant fluctuation in the frame rate of the geforce --higher than the radeon's, since it has ways to combat overdraw penalties. there are a few other considerations that have to be looked at."

Driver optimizations are far more important for fluctuating FPS than overdraw. Overdraw remains rather linear in the majority of games, not to mention that the Radeon is not nearly as effective as a board along the lines of the Kyro; it is no tiler. The edge that ATi has with their "Hyper-Z" seems laughable at the moment, if it is even enabled in the drivers yet. They are soundly being bested in performance in bandwidth limited situations by boards with quite a bit less bandwidth (the GF2 boards). They are clearly far less efficient than the GF2s as of right now. Most of your comments deal with this issue; the fact is that right now ATi is clearly lagging in utilizing bandwidth in an effective manner.

"massively parallel pixel rendering architecture: like i said before, not all triangles are 4 pixels or larger; therefore, you don't get to have a really high fillrate at all times. so far i have not heard of a single video processor that can work on more than one triangle at a time. let's also not forget the intense geometry accesses --needed per pipeline, even if they are working in tandem."

The GF1 or GF2 can draw one polygon faster than it can draw four pixels (or eight for that matter). The percentage of polys that small is incredibly low in games. I'm not sure what the point of the rest of your statement is supposed to be?

"culling: reduces the effects of overdraw."

Which the GF and GF2 have support for. In fact, all 3D accelerators do, or they would have an overdraw factor in the dozens under many circumstances. Perhaps you are alluding to portions of the Hyper-Z? If they are enabled, ATi has problems somewhere else in their architecture, I don't think any of the benefits have been seen yet.

"dx8: will support all of the radeon's features, thus a richer feature set makes this card even better."

I absolutely agree with this statement, but the Radeon is not a fully DX8 compliant board. nVidia is determining what will be in and out based on what features are included in NV20(MS+X-Box= nVidia's call). While the Radeon will be the closest board that is currently out, and by a fair margin I would imagine, it is not going to be 100% DX8 compliant.

The Radeon is the best gaming board out(excluding the insanely expensive GF2Ultra and V5 6K).
 

Techno

Golden Member
Oct 9, 1999
1,063
0
0
Oops... I must not have seen you write *Application*. I thought you were talking about Games w/ the 3D image thing too.
 

sweetca

Senior member
Nov 7, 1999
279
0
76
www.worldofaralon.com
think about this, Nvidia hyped T&L and hardly anyone uses it yet.

3DFX hyped FSAA while very overshadowed by Geforce and now FSAA is the sheet, thank GOD for

COMPETITION!!!!!!!!!!!
 

Dean

Platinum Member
Oct 10, 1999
2,757
0
0
I have an ATI Radeon 64meg DDR loaded on my machine right now!! (I upgraded from a TNT2 non-ultra) and I bought the card knowing full well that the GeForce 2's get higher numbers in Quake3... shame on me!!!

i can play any game i want at any resolution and color depth i want to and its smooth as silk and looks incredible.

people can throw all these damn benchmarks in your face all they want. Those idiots will never enjoy their machines because they'll only chase higher numbers after higher numbers. The bottom line is, if you have any card from a GeForce SDR up, you won't run into any problems running today's games. The Nvidia cards are just fine, so is the Voodoo 5 5500... buy the card you like and stop listening to these lame ass benchmark drones!!


P.S. If you want to talk benchmarks... you always seem to use Quake3 benches. But it seems most hardcore gamers like playing UT more. So since UT is the most favored FPS out there right now, I guess the Voodoo 5 5500 is the fastest card for enthusiasts.
 

fodd3r

Member
Sep 15, 2000
79
0
0
Driver optimizations are far more important for fluctuating FPS than overdraw. Overdraw remains rather linear in the majority of games, not to mention that the Radeon is not nearly as effective as a board along the lines of the Kyro; it is no tiler. The edge that ATi has with their "Hyper-Z" seems laughable at the moment, if it is even enabled in the drivers yet. They are soundly being bested in performance in bandwidth limited situations by boards with quite a bit less bandwidth (the GF2 boards). They are clearly far less efficient than the GF2s as of right now. Most of your comments deal with this issue; the fact is that right now ATi is clearly lagging in utilizing bandwidth in an effective manner

driver optimizations don't offer all that much in the way of frame rate fluctuation relief; not without getting overly cpu intensive.

the hyper-z is enabled, and it offers at least a 30 percent improvement, proven via benchmarks. check out www.aceshardware.com for it, i believe it was there.

obviously you have little to no idea what you are talking about, or you just didn't fully read my post. you would have realized that the geforces only seem to use bandwidth better, but that's just optimization to better run quake3, plus the weighted nature of the demos.

The GF1 or GF2 can draw one polygon faster than it can draw four pixels (or eight for that matter). The percentage of polys that small is incredibly low in games. I'm not sure what the point of the rest of your statement is supposed to be?

both gf1 and 2 can only do 4 pixels per clock @ 1 or 2 times texturing respectively! there are quite a few polys that are that small; last i remember, most games don't support on-the-fly geometry scaling based on distance. i imagine there is a non-negligible number of polys that would fall into this category.


culling
Which the GF and GF2 have support for. In fact, all 3D accelerators do, or they would have an overdraw factor in the dozens under many circumstances. Perhaps you are alluding to portions of the Hyper-Z? If they are enabled, ATi has problems somewhere else in their architecture, I don't think any of the benefits have been seen yet.


i was talking about the priority buffer of the hyper-z, and like i said, it is enabled from all the reports i've heard. haven't seen the benefits? um... i think the amazing frame rates the radeon delivers are proof enough.

me thinks someone should bench the two cards and report their min, max and the usual average fps. we'd be able to really understand the performance of the cards. unfortunately i lack the contacts to get this done; i've asked reviewers to do this before, but they don't. me thinks companies --nvidia most definitely would fall into this category-- made them sign some contract not allowing them to, which would be a wise marketing move.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
First you started off with:

your argument consisted of absolutely no facts.

Then followed with:

UT is an okay benchmark

Alrighty then.

since it's basically the only one which shows the frame rate under dx and still supports much of the current feature set. other games are way too old,

How does it show anything to do with Direct 3D if it is entirely dependent on the CPU for performance? As Ben Skywalker says, Evolva is a far better benchmark.

or such as mdk way to geforce centric to matter

How is MDK2 "GeForce centric"? Is it "centric" because the GF based boards win? And I am the one with no facts???

yes they are, because they have a few &quot;trouble&quot; spots, but largely consist of easier frames.

What you just said is pure bullsh*t. The whole point of the crusher demos is to test your video card to the max. Otherwise how is the crusher demo different from a normal demo?

of course they will!

You are just lucky I spent the last 1/2 hour looking for Quaver benchmarks but couldn't find any. Somebody, find some Q3 Quaver benchmarks with a GF2 and Radeon together and post up a link to them.

they employ no culling unit that will reduce the load.

Err, all video cards employ some kind of culling. Otherwise they would be constantly drawing pixels that will never be seen. The only difference is that the Radeon removes hidden surfaces (Z-buffer optimisations).

The Radeon doesn't have true tiling. All it does is minimise hits to the Z-buffer in a variety of ways, and this in turn reduces the memory bandwidth used.

The problem with this is that in order to make any significant gains with this technique you need to have actually been memory bandwidth constricted in the first place. The Radeon with its mediocre fillrate would certainly never have memory problems in the first place.

The Radeon is almost the exact opposite of the GF2.

now that alone shows you that there is a significant fluctuation in the frame rate of the geforce --higher than the radeon, since it has ways to combat over draw penalties. there is a few other consideration that has to be looked at.

True, but in spite of this the GF2 still stays ahead. And how do you gauge fluctuations? If the Radeon is slower to start off with, of course the fluctuation percentage is going to be lower when it goes to higher settings.

Quake 3 scores (max = 640 x 480 normal, min = 1600 x 1200 high quality).

GF2: maximum average: 136 fps, minimum average 39.1 fps
Radeon: maximum average: 85 fps, minimum average: 36.1 fps

Well the GF2 fluctuates more than the Radeon between the quality/resolution settings, but it is still higher at any given setting.

According to your argument, a video card with min 30 fps, max 30 fps (as defined above) is the best because it doesn't have any fluctuation between the settings. Would you buy such a video card? If you would, you are a very stupid individual.

memory bandwidth: in the hard frame think about how much info will be needed, what makes you think the geforce can handle it?

Because I have seen the benchmarks. All your theorising is fine and dandy but it doesn't mean squat if the practical results show something different. The Hyper-Z technology only reduces hits to the Z-buffer. It doesn't utilise a true tiling scheme. Plus the Radeon's fillrate is too low to actually take advantage of the benefits it gets from Hyper-Z.
 

Techno

Golden Member
Oct 9, 1999
1,063
0
0
Q:

Does anyone (No Nvidiots) have any benchmarks of a Geforce2 GTS 64MB w/ its first Gen drivers and a Radeon 64MB benchmarks w/ its latest beta drivers?

Just wondering.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Fodd3r-

"driver optimizations don't offer all that much in the way of frame rate fluctuation relief; not without getting overly cpu intensive."

It seems that you don't have too much experience with benchmarking. Driver optimizations are enormously important for fluctuation, second only to CPU power, and more important than peak fillrate.

"the hyper-z is enabled, and it offers at least a 30 percent improvement, proven via benchmarks. check out www.aceshardware.com for it, i believe it was there"

No disrespect to Ace's, they are a great site, but I'm waiting for Beyond3D's review, they are looking into things that have yet to be covered.

"obviously you have little to no idea what you are talking about, or you just didn't fully read my post. you would have realized that the geforces only seem to use bandwidth better, but that's just optimization to better run quake3, plus the weighted nature of the demos."

Read your post again, you gave a long-winded explanation of how the Radeon was supposed to be superior. On the subject of not knowing much-



<< <easy frame>
say the over draw average is 3. no triangle is smaller than 4 pixels, thus maximum fill rate is kept at all times. well simple math tells you, at 800*600 that's 480,000 pixels in every frame, these become 3 times the work, with overdraw. this means that you now have to texture 1,440,000 pixels per frame. not too bad.

<hard frame>
say the over draw average is 4 now. we'll still be nice and say no trianle is smaller than 4 pixels, thus the geforce keeps it's mythical frame rate. the res at 800*600, giving you --you guessed it-- 480,000 pixels in every frame. but with over draw, 1,920,000 pixels per frame.
>>



For a GeForce1, you would hit 333.33 FPS on the easy frame and 250 FPS on the "hard" frame. This is what you consider a problem scenario? This is why theoretical BS goes out the window and actual proof, by way of benchmarks, is used. Does anyone think the GF1 can hit 166.66 FPS in Quake3 (multitexturing penalty)? Of course not; that is why we benchmark and don't bother with useless calculations. Some calculations are needed, but theoretical peak fillrate and theoretical fillrate requirements are absolutely useless.
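For anyone who wants to check that math, here is the back-of-envelope version; the 480 Mpixel/s single-texturing figure for a GF1-class part is an assumption taken from period spec sheets:

```python
# The back-of-envelope behind the 333.33 / 250 fps figures above: peak fill
# rate divided by pixels per frame. Fill rate figure is assumed, not measured.
fill_rate = 480_000_000            # pixels per second, single texturing (assumed GF1-class)
pixels_visible = 800 * 600         # the earlier 800x600 example

for label, overdraw in (("easy frame (3x overdraw)", 3), ("hard frame (4x overdraw)", 4)):
    pixels_per_frame = pixels_visible * overdraw
    theoretical_fps = fill_rate / pixels_per_frame
    print(f"{label}: {theoretical_fps:.2f} fps theoretical peak")
# ~333.33 and 250 fps -- which is why the "hard frame" is a non-problem on
# paper, and why real benchmarks matter more than peak-rate math.
```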

"now that alone shows you that there is a significant fluctuation in the frame rate of the geforce --higher than the radeon's, since it has ways to combat overdraw penalties. there are a few other considerations that have to be looked at."

No, it doesn't, simply because neither board is coming anywhere near its peak numbers. Up it to 1600x1200 and a GF1 would still be pushing 80+ FPS (40+ FPS multitextured) for the "easy" frame and 60+ FPS (30+ FPS multitextured) for the hard frame, let alone the theoretical peak of the GF2.

"both gf1 and 2 can only do 4 pixels per clock @ 1 or 2 times texturing respectively! there are quite a few polys that are that small; last i remember, most games don't support on-the-fly geometry scaling based on distance. i imagine there is a non-negligible number of polys that would fall into this category."

There isn't any game out that is taxing the GPU's T&L, period. Sub-4-pixel polys are indeed extremely trivial as of right now. In order to stress the GF1 you would need to be in the area of 769 FPS in Quake3, based on average poly counts rendered per frame, let alone the GF2.

"i was talking about the priority buffer of the hyper-z, and like i said, it is enabled from all the reports i've heard. haven't seen the benefits? um... i think the amazing frame rates the radeon delivers are proof enough."

Huh? It is getting handily bested by a board with ~10% less bandwidth; the framerates are sad if Hyper-Z is truly working. The main advantage of Hyper-Z is supposed to be reducing the amount of bandwidth needed, so if it is working then ATi has a lot of work to do on their drivers.

"me thinks someone should bench the two cards and report their min, max and the usual average fps. we'd be able to really understand the performance of the cards. unfortunately i lack the contacts to get this done; i've asked reviewers to do this before, but they don't. me thinks companies --nvidia most definitely would fall into this category-- made them sign some contract not allowing them to, which would be a wise marketing move."

By the end of this week, Beyond3D will probably have their review up; you can be sure that they will cover much more involving the actual board than you have seen in any other review (though it is aimed at a different audience).
 