What's better? GeForce 2 Ultra or the new ATI Radeon?

Page 6

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
robo: Radeons seem to have a lot of VIA problems also. The Voodoo5 has no problems because it doesn't use most of AGP's features. My explanation is simple and true.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Doom:

1) I understand your point
2) The Radeon is a brand new card with brand new drivers. The GTS is a revised GeForce using the same drivers. There is no excuse for their poor compatibility. I also don't see anywhere NEAR as many issues with the Radeons and VIA cards.
3) There are peeps with BX boards having problems with the GTS + 6xx. There are *more* peeps with VIA boards that have issues, but that doesn't excuse nvidia. EVERYTHING is compatible with a BX board. Christ, you could damn near stick an AMD in there <g>
 

Techno

Golden Member
Oct 9, 1999
1,063
0
0
ATI just came out w/ brand new drivers for the RADEON, for both D3D and OGL.... i am downloading them right now and i'll report back if there is any increase/tweaks/better image quality...
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
with the new 6.26 dets i have seen a *HUGE* stability increase, on top of the very nice performance increase witnessed when moving to the det3s. ill admit i switched back to 5.3x until they had a new driver release, and, knowing i'd only have to wait a couple of weeks, i had new drivers in no time. i'm running on the 133a chipset and my rig is rock solid again.

agreed robotech, stability rawks over perf, although i was tempted for a sec. what i want to see is, when people like anand and tom pabst post new driver reviews like they did with det 3 and a few previous releases, they should look into stability more. although this would be one hell of a work load, i think it would still be nice, like a poll attached to the review where peeps indicate whether their comps had probs with the new drivers etc....

with that 2nd tmu on the vsa-100, yes, it would be nice, but wouldnt every vid card be nicer if they had just that extra few weeks to implement another feature (like the 2nd tmu)? im sure half the content of this debate wouldnt be here if the gf2 had the feature set of the radeon.....
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
weyoun,

congrats on the driver update! Glad it's working for you

the thing that is so odd about the 5500 is that previous voodoo cards have had 2nd TMUs. why they didn't add one on the 5500 is beyond me (cost? bah!)
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
im sure the engineers at 3dfx have just a slightly better knowledge on the subject than us. i do wish, however, they'd follow public opinion a bit more often instead of dictating the rules (32bpp wont be needed, t&l doesnt matter now) and then getting their rules shoved up their ass when the public actually appreciates those things they left out.

what the UN should focus on is the Monstah card, creating a board with every feature they could whack on it, integrating tech from every company that could contribute, what i'd give to get my hands on that, oh yeah

ben might even be able to render at 8kx8k with 16(was it 16?) sample FSAA
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
I havent had a single problem with any of nVidia's official drivers on my trusty old P2B, and Ive used a TNT-1, TNT2-Ultra and a GF DDR.

For every person that has problems, I wonder how many don't. You know, people who don't have problems won't show up, since its hardly common to post a message such as "Installed the new Detonator drivers and all is fine", if you get my point
 

fodd3r

Member
Sep 15, 2000
79
0
0
just a thing about t&l. the greatly touted feature!

IT SUCKS!

even if engines were designed to use it, the performance increases wouldn't be that big a deal. so if 3dfx decided to leave it out, good for them. now if only they'd sunk that money into a few other things, like the TMU!

=P

as for the radeon having it, well, it doesn't suck on the radeon. simple fact of the matter is that it can do the morphing calcs with keyframe interpolation, which, when paired with the t&l unit, ends up freeing a worthwhile amount of cpu power.
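(For anyone curious what keyframe interpolation actually does: each vertex is blended between two stored poses, so the mesh morphs without the CPU touching it. A minimal sketch in plain Python; the data and names are made up, and on Radeon-class hardware this per-vertex blend runs on the card, not the CPU.)

```python
# Minimal sketch of keyframe (vertex) interpolation, the morphing
# trick described above. Data and names are made up for illustration.

def lerp_vertex(v0, v1, t):
    """Blend one vertex between two key frames (0.0 <= t <= 1.0)."""
    return tuple(a + (b - a) * t for a, b in zip(v0, v1))

def morph_mesh(frame_a, frame_b, t):
    """Blend every vertex of a mesh between two stored poses."""
    return [lerp_vertex(va, vb, t) for va, vb in zip(frame_a, frame_b)]

# Two key frames of a tiny 3-vertex mesh:
frame_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame_b = [(0.0, 0.0, 1.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]

print(morph_mesh(frame_a, frame_b, 0.5))  # halfway between the poses
```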
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0
3dfx didn't remove the second TMU from VSA-100; they instead turned it into an independent pixel pipeline. Still, they could've made the pipelines dual-texturing. Voodoo5 would've been much more competitive with GeForce2 GTS if that had been the case.
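(The single- vs dual-texturing gap can be put in rough numbers. The clock and pipeline figures below are the commonly published specs of the day; treat this as back-of-envelope illustration, not a benchmark.)

```python
# Rough fillrate arithmetic behind the single- vs dual-texturing point.
# Published specs: Voodoo5 5500 = two VSA-100 chips at 166 MHz, 2 pixel
# pipes per chip, 1 TMU per pipe; GeForce2 GTS = 200 MHz, 4 pipes,
# 2 TMUs per pipe.

v5_single  = 166 * 2 * 2      # Mpix/s, single-texturing, both chips
gts_single = 200 * 4          # Mpix/s, single-texturing

# With two textures per pixel the V5 must spend extra pipe time per
# texture, halving throughput; the GTS applies both in one clock:
v5_dual  = v5_single // 2
gts_dual = gts_single

print(v5_single, gts_single)  # 664 800
print(v5_dual, gts_dual)      # 332 800
```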

BTW, with Voodoo Graphics -> Voodoo2 it was very easy to add a TMU 'cos the TMU was an independent chip. Actually, I recall Quantum had some sort of a high-end Voodoo Graphics solution with two TMU chips on a single card; the GLQuake readme.txt mentions it...
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Fodd3r-

"just a thing about t&l. the greatly touted feature!

IT SUCKS!"


No, it doesn't.

"even if engines were designed to use it the performance increases wouldn't be that big a deal."

Have you seen the Unreal2 engine shots? Those are real time on a GF2. It matters a lot, and it certainly is significantly faster than any available CPU.

"as for the radeon having it, well it doesn't suck on the radeon, simple fact of the matter is that it can do the morphing calcs with the key frame interpolation. which when paired with the t&l unit, end up freeing a worthwhile amount of cpu power."

LOL- Is that a joke? Sure, keyframe interpolation is nice, but do you have any idea how little used it would be even under ideal circumstances?

The number of vertices that are deemed "non-static" is minuscule in most games. If you are playing a racing sim and flying past thirty different cars with detailed backgrounds of complex trees and no one crashes, you have no vertices for the CPU to handle. The only time the CPU would need to get involved would be if you crashed, and then only if the game supports locational damage (showing your fender dented), and even then it only needs to be utilized until the model is updated. That is it.

Hardware T&L on both the GF boards and the Radeon is an extremely useful feature in terms of possibilities, and that potential is becoming increasingly exploited by developers. It works extremely well when it is used to its potential (see the Unreal2 shots if you have any doubts; remember those are real time on current hardware).
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
i do wish nvidia and ATI would spend more time developing these new features like T&L, so it could actually be used in dynamic situations, like the situation ben showed: that could potentially be what makes a different generation of vid cards if they could pull it off. the performance gains would be huge, and they could be the only ones claiming a true t&l implementation. also, t&l should be made easier to implement in current applications and games, possibly with an addition to the compiler, like how intel provided particular compilers to optimise code with SSE. come on graphics companies, you can do it!!

i do have a feeling this is drifting off topic somehow

woohoo!! im not a junior member anymore!!

sunner:
implementing a poll at the end of the driver review would make it a lot easier to get results, as the user doesnt have to sign up a new username for the forum, post on here and have the chance of getting flamed. it would also have a larger potential audience, as ill bet half the peeps that visit anandtech dont even bother with the forums.

Just my $0.04 (damn australian exchange rate)
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Weyoun:

<< im sure the engineers at 3dfx have just a slightly better knowledge on the subject than us. i do wish, however, they'd follow public opinion a bit more often instead of dictating the rules (32bpp wont be needed, t&l doesnt matter now) and then getting their rules shoved up their ass when the public actually appreciates those things they left out. >>

I agree. Their marketing department is complete $hit. They do the company a disservice. Unfortunately, I think they have engineers doing all their work for them. They are entirely too logical.

Quick question... when the V3 came out, the TNT2 came out at about the same time. 2 major games had support for 32-bit, but until the GeForce DDR came out, 32-bit in Q3 wasn't really very playable (unless you liked slow framerates). So the public didn't really *need* 32-bit at the time, as it was barely useable. However, 3dfx missed a cycle. Had they come out with a card w/32-bit support @ the time of the GeForce, they would've been set.

WRT T&L, it will be an outstanding feature, but it makes a difference in only 2 games right now: MDK2 and Evolva, and only minimally in both. I do agree with you tho. Don't say "well, this feature isn't all that great", stick EVERYTHING on there. "Better to have and not need than need and not have", even if that "need" is merely for marketing purposes. Ya dig?

Sunner, you make a very good point, and I know where you're coming from. The thing is, if you check the GeForce FAQ, there are huge sections dedicated to ironing out the bugs that each driver set creates. Look at it from my point of view: If a client has a GeForce card, and he asks me "what drivers should I use", I'm usually somewhat stumped. With damn near EVERY other piece of computer equipment out there, you can recommend "just get the latest". I can't do that with the GeForce cards. I gotta sit there and give a lengthy explanation, which includes instructions on where to download the older driver set to be ready to install in case the newer drivers don't work. Blah blah blah. That really shouldn't be necessary.

fodd3r: Saying that T&L sucks is just plain ridiculous. It's just not employed right now. Your comments make it very obvious what an ATi fan you are, and how much you dislike nvidia. The T&L unit doesn't suck. What sucks is that nvidia convinced everybody and their mother that there would be a bunch of T&L games right now. Like Chuck D used to say, "don't believe the hype".

jpprod: as always, the voice of reason. That is what I meant to say. They should've made the pixel pipelines multi-texturing. That would make them right alongside the GTS in everything. B-/ As it stands, we get to see just how bandwidth limited the GTS is. Damn lucky thing for 3dfx that 250 MHz 256-bit QDR doesn't exist right now, or the GTS vanilla would be getting 100 fps @ 1600x1200x32 in Q3. !!!

Ben:

<< The only time that the CPU would need to get involved would be if you crashed and then only if the game supports locational damage(showing your fender dented) and even then it only needs to be utilized until the model is updated. That is it. >>

'That is it'??? dude, that is IT!!! That is EXACTLY what we want to see happen! In racing games, that would just FUGGIN' own!!! Think about firing a rocket at a wall, and seeing it crumble! That would kick ass!!!

Weyoun: as far as implementing a poll, this site here is pretty heavily nvidia-based. Not that that is a bad thing, but if I say a "bad thing" about my GTS (usually just the truth), I usually get my head bit off by several peeps. in other words, we'd need more than just one forum to really do that type of thing justice.

As it stands, a quick perusal of the GeForce FAQ is all that is needed to confirm what several peeps here have been saying all along: The GeForce cards aren't as stable as they should be, due to platform problems, hardware compatibility problems, driver problems, game engine problems, whatever. There are MANY MANY documented issues with the GeForce cards. They are quite a finicky beast. When they run, BOY do they run!!! Unfortunately, they can be rather annoying, as well.

The nv20 looks (assuming the specs to be real) to be quite a barnburner. It'll be wasted on me if they don't fix the issues they've had in the past (and present)




 

fodd3r

Member
Sep 15, 2000
79
0
0
t&l isn't that hot, and that's a fact.

the really processor intensive stuff is morphing. moving crap around and scaling it ain't that big a deal!

in terms of not having games out using t&l, well, that's because designing t&l engines is much different from designing cpu-based ones, at least if you want to take advantage of the performance. the 3d card round up at www.beyond3d.com addressed this.
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
if 3dfx could actually stick to their proposed '6 month' cycle, then yes, we would see a lot more competition from them. Unfortunately, they might even miss this round. in the space it took nVidia to release 3 cards, albeit every one of them with the same core (apart from the MX, which only took stuff out, i mean, come on, how hard can that be), 3dfx could only come up with two, one which has a very hard time competing with the gf2 MX and the other with the GTS. I dont really wanna count the months this cycle has taken for 3dfx just to get their cards out the door. sure, the tech on the chips rawks, but it doesnt do you any good having them sitting in a dark room, going nowhere fast.

yes, that's the idea robotech!! whack it all on there!! works for me

fodd3r:

that's the idea behind a dynamic t&l engine: it could actually handle the morphing of terrain, and would give a mighty hefty performance increase for just about every game on the market. the hard part is once again getting developers to implement it. if nVidia provided a compiler extension that optimised the code straight off, there's almost a generation of development for free, and that would make 3dfx sh!t themselves. 2 generations behind would spell the doom of them....

robotech:

integrating an entirely destructible environment would be one HELL of a workload for the developers concerned. it's just that the end result, instead of running like crap, would actually be ok. you could make a literally FULLY destructible environment with physics and force involved, but then we're gonna need a chemistry breakdown chip whacked on the vid card (now there's a thought ) also, you could imagine just how much fun it would be deathmatching in a pit after about 10 hours running on a dedicated server

with regard to your question, yes, it is only a real problem if you skip a generation, but 3dfx are drawing awfully close to doing this *again*
 

fodd3r

Member
Sep 15, 2000
79
0
0
last i checked, a simple compiler extension won't do the trick! the entire rendering engine has to be built in a completely different way!

it's not a matter of taking code that uses the cpu to do t&l and just throwing it through a "special" compiler.
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
and how wouldnt it? if all it required to utilize t&l was a change in the function calls made, etc, then a compiler patch would easily do the trick... as for the entirely new renderer, i havent heard of that before....

all im saying is that intel managed to pull it off with SSE extensions being implemented by the compiler, and if they can, why not t&amp;l?

ive just begun programming in opengl and have had a sneak peek at the vertex array code at the nVidia developer site, it sure as hell doesnt call for a new rendering engine....
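(For what it's worth, the "T" in T&L is essentially a matrix multiply per vertex. Here's a toy sketch in plain Python, no real API calls and made-up names, of the loop a software path grinds through on the CPU and a GeForce/Radeon does in silicon:)

```python
# Toy version of the "transform" half of T&L: multiply each vertex by
# a 4x4 matrix. The lighting half adds a few dot products per vertex
# on top of this. Purely illustrative; no real graphics API here.

def transform(matrix, vertex):
    """Apply a 4x4 row-major matrix to an (x, y, z) point."""
    x, y, z = vertex
    v = (x, y, z, 1.0)  # homogeneous coordinate
    return tuple(sum(matrix[row][col] * v[col] for col in range(4))
                 for row in range(4))

# Translation by (2, 0, 0):
translate = [[1.0, 0.0, 0.0, 2.0],
             [0.0, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 0.0],
             [0.0, 0.0, 0.0, 1.0]]

print(transform(translate, (1.0, 1.0, 1.0)))  # (3.0, 1.0, 1.0, 1.0)
```

A software renderer runs that loop for every vertex, every frame; hardware T&L moves the same arithmetic onto the card.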
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Robo-

&quot;WRT T&amp;L, it will be an outstanding feature, but it makes a difference in only 2 games right now: MDK2 and Evolva, and only minimally in both.&quot;

In Evolva the difference is a bit more than minor, and there are a lot more games that utilize T&L than those two. TD6 is another that had horrible reviews in no small part because of the horrible slow down, which simply does not exist with hardware T&L.

"'That is it'??? dude, that is IT!!! That is EXACTLY what we want to see happen! In racing games, that would just FUGGIN' own!!! Think about firing a rocket at a wall, and seeing it crumble! That would kick ass!!!"

For that scenario, build the wall the way you would in real life and you have no problems whatsoever. The hardware T&L unit could easily handle it as long as you build the wall "brick by brick" in the first place. Ignoring that though, you could alter 13,000 vertices on the CPU at roughly 100FPS for the wall while the T&L unit is handling everything else. Either way, what you are looking for can very easily be done using current hardware T&L.
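(Ben's 13,000-vertices-at-100FPS figure multiplies out to a roomy CPU budget. His numbers, just worked through; nothing here is measured:)

```python
# Back-of-envelope version of the argument above: even if the CPU has
# to touch every "damaged" vertex itself, the budget is generous.
# The 13,000-vertex and 100 FPS figures come from the post.

damaged_vertices_per_frame = 13_000
frames_per_second = 100

cpu_updates_per_second = damaged_vertices_per_frame * frames_per_second
print(cpu_updates_per_second)  # 1300000 vertex updates/sec on the CPU

# ...while the hardware T&L unit keeps transforming the static
# majority of the scene the whole time.
```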

Fodd3r-

"t&l isn't that hot, and that's a fact.

the really processor intensive stuff is morphing. moving crap and scaling it ain't that big a deal!"


With extremely low poly counts the strain put on a CPU isn't that high. All you need to do is add either higher poly counts or higher quality lighting, and watch the CPU crawl while hardware T&L doesn't break a sweat. MDK2 with high quality lighting on shows a GF based board to be about 300% faster than an 800MHz+ PIII. Sounds like a rather big deal to me. Give me a scenario where morphing is going to be a major edge in comparison to the one hardware T&L offers.

"in terms of not having games out using t&l, well that's because designing t&l engines are much different from cpu based, well if you want to take advantage of the performance. the 3d card round up at www.beyond3d.com addressed this."

They have changed their tune more than once on hardware T&L. At first they were saying that the GF based boards were way too slow; now they are too fast but not flexible enough. Making an engine use hardware T&L tends to be very simple; for OpenGL it is easier than using CPU optimized code.

"the entire rendering engine has to be built in a completely different way!"

No, it doesn't. Utilizing the full added power of hardware T&L is what becomes demanding, not getting the game to support it at all.

"it's not a matter of take code that uses the cpu to do t&l and just throw it through a 'special' compiler."

It isn't much harder, either. One programmer was talking about adding hardware T&L support to a game in one eight hour day. DX7 and OpenGL both support hardware T&L; it isn't that hard to exploit a feature native to the API.
 

fodd3r

Member
Sep 15, 2000
79
0
0
"Usually changing your games engine to use that specific call is not a complex thing and we have heard that most developers have their engine running within a couple of weeks. Unfortunately the pay back from this operation is usually very small. Even worse, several games engines actually perform worse with hardware T&L than when they were running in software mode. The most obvious reason behind this is that software T&L and hardware T&L are vastly different, and so what is optimal for one is not optimal for the other. Simply converting your existing 3D engine to use hardware T&L is going to give you a very minor boost in performance. To really use T&L you'll have to dig in and change the way your system works and this kind of tweaking is very time consuming."

"If you want to change an object - with change, meaning a true change of the looks of the object - the CPU has to take over. The CPU needs to go into the object description and modify it vertex by vertex. The hardware does not support the options to do this, as a result many of the games that 'use' T&L are also 'using' a lot of CPU power simply because all the dynamic things first need to pass through the CPU to be dynamic."

as you can see from the above two paragraphs, t&l ain't that hot, nor is it easy to use, or even that useful. now, i realize this is largely due to the limitations of the dx api, but the sad state of affairs at this point is that most games use dx. =(

these excerpts were taken from beyond3d and are their property, save any trademarks which are the property of their respective owners. just had to have that disclaimer to save my butt!

this is where i got it.

http://www.beyond3d.com/roundups/3d_2000a/index3.html
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"as you can see from the above two paragraphs, t&l ain't that hot, nor is it easy to use, or even that useful. this is why i'm harping on about key frame interpolation, though it can't do huge amounts of morphing it does more than enough to be useful!"

Read this thread. Comment on what is said in there. It is about that article and we did have a discussion over it with the author.
 

Weyoun

Senior member
Aug 7, 2000
700
0
0
heheh, i think i was reading that same article just last night

just a function replace sure wouldnt be a large compiler patch, but as i said before, im sure nVidia are capable of providing at least some deeper tweaking, even if it isnt all that deep. Nothing will ever come quite down to just putting it through the compiler and expecting a perfect product on the other side, but it sure would help. t&l would still have to be tweaked, just as TMU usage is tweaked to avoid the extra pass; both require a lot of forethought, but TMU usage has more experience behind it. it's only a matter of time before public tweaking methods become available and it takes just as much priority as texture tweaking.

and as for the nature of the t&l engine not being able to modify the vertex, that was what my previous posts were about: the integration of a t&l unit which is capable of it. now that would rawk....

oh well, im off to watch kieren perkins (or however it's spelt) kick some serious arse in the 1500m
result: oh well Grant Hackett kinda deserved it
 

silver24

Senior member
Nov 22, 1999
881
0
0
All im gonna say is i just traded a 32 meg GTS for a 32 meg Radeon and i cant describe how much better the Radeon runs, and ive never seen such picture quality, and ive had all of them except a Matrox. Im no card expert but the Radeon is the best video card i have owned.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Ben:

<< Evolva the difference is a bit more then minor, and there are a lot more games that utilize T&amp;L then those two. TD6 is another that had horrible reviews in no small part because of the horrible slow down which simply does not exist with hardware T&amp;L. >>

Evolva still runs okay without T&amp;L. The main difference is in the bumpmapping.
 

pen^2

Banned
Apr 1, 2000
2,845
0
0
whoa... didnt bother reading the entire thing but i will say this much as an owner of a radeon: most peeps who got themselves OEM 32megger radeons were able to clock them up to 200/200 altho the ram was rated only 6.0ns. the only way to overclock the card at this point is to use powerstrip. radeon, just like gf2, is memory bandwidth limited, so goin from 183mhz to 200mhz does help the card quite a bit. however, i am yet to see any satisfying driver update from ATi. remember it was ATi whose cards achieved higher 3dwinbench scores after a driver update at the cost of frame rate drops in quake1 and other real world games.
 

fodd3r

Member
Sep 15, 2000
79
0
0
well, just recently ati released a driver update --and i'm not talking about some beta drivers, these are supported.
 