A New Interview with Bubba from 3dfx


HaVoC

Platinum Member
Oct 10, 1999
2,223
0
0
3DfxAlf: In the first quote I was emphasizing professional OpenGL performance since your colleague, Bubba, compared the V3/V5 to the Quadro. I conceded the 2D quality may be better, but I wanted to point out that those Quadro cards are not solely purchased for 2D performance.

Second quote: Yes, I agree that a smooth-edged Lego man would look better than the jaggy version, but I certainly think hardware T&L would have been a better feature than FSAA. I feel that your company certainly has not helped to encourage developers to increase triangle counts in 3D games. I hope your next-generation product is more competitive in terms of T&L performance and feature set.

Third quote: Nowhere in my first post did I mention Pixar-level animation, but you did bring it up. Now tell me, do you honestly think a low-polygon-count, anti-aliased image is closer to Pixar than the aliased, high-polygon-count images that the NV15 and ATI Radeon are capable of producing at a good performance level?

Last quote: No, I did quote the correct words. Bubba offered the age-old argument against hardware T&L, that it is not "scaleable" or that it is "static." Right now nVidia's T&L unit is quite a bit faster than what the PIII/Athlon are capable of. I don't see that lead being relinquished to software T&L within the year. Even so, isn't it nice for gamers to have a high-powered geometry unit that may reduce the need to upgrade the CPU/mobo and possibly memory?

At any rate, I think it is quite amusing that you guys will probably introduce a powerful T&L unit with the Rampage later this year. So six months after the V5 and suddenly developers are ready for T&L? Sounds more like you guys are ready to brag about it as a feature.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Only have a quick minute. I do have plenty more to say on this, but for now, a link dealing with the GF1 pumping out an order of magnitude more polygons than a V5:

http://www.aceshardware.com/Spades/read.php?article_id=5000171

Top chart, both boards paired with an Athlon, has the GF an order of magnitude faster in MCAD40 and over ten times faster in MCAD150. The GF2 is even faster of course.

Lego men are certainly not needed using boards such as the GF, GF2, Quadro or Radeon.
 

Czar

Lifer
Oct 9, 1999
28,510
0
0
Soccerman



<< EMBM is NOT implimented on the NV15, or any of it's predecessors. it has CUBE bump mapping. >>



So wrong. The NV15 (GeForce2) has dot product (Dot3) bump mapping; it does not have cube bump mapping. What you're thinking of is actually called cube environment mapping, and it has nothing to do with bump maps.

Dot3 bump mapping and environment-mapped bump mapping (EMBM) are the only real bump mapping techniques available, as the results are similar. Dot3 provides better quality, but EMBM provides animated bump maps.

Emboss bump mapping should not even be considered bump mapping. That's like saying Photoshop can do 3D when you emboss an image.
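For what it's worth, Dot3 boils down to a per-pixel dot product between a normal fetched from a texture and the light vector, which is why it relights correctly as the light moves, while emboss is just a darkened, offset copy of the texture. A toy sketch of the idea (hypothetical values, not actual hardware behavior):

```python
# Minimal sketch of Dot3 (dot-product) bump mapping, purely illustrative.
# Per pixel: intensity = max(0, N . L), with N unpacked from a normal map.

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def dot3_intensity(normal_map_texel, light_dir):
    """Diffuse intensity from a normal-map texel (RGB in 0..255, unpacked
    to the [-1, 1] range) and a light direction vector."""
    n = normalize(tuple(c / 127.5 - 1.0 for c in normal_map_texel))
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A texel whose normal faces the light is bright; one tilted away dims.
facing = dot3_intensity((128, 128, 255), (0.0, 0.0, 1.0))  # close to 1.0
tilted = dot3_intensity((255, 128, 128), (0.0, 0.0, 1.0))  # close to 0.0
```

EMBM, by contrast, uses the bump map to perturb the texture coordinates of an environment-map lookup, which is why it can be animated.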
 

Hardware

Golden Member
Oct 9, 1999
1,580
0
0
LOL
<< 3dfx: One thing everyone needs to remember regarding T&L is that both Intel and AMD based CPUs offer a hardware T&L option through SIMD and 3Dnow!. As you increase your CPU speed (take for instance that you currently have a PIII 600 and in a couple of months you upgrade to a PIV 1.4 GHz CPU) wouldn't it be nice if your T&L unit took an immediate upgrade as well? With a GeForce or GTS you are stuck with a static T&L unit. We have shown that if you're using a GeForce, you are better off using the T&L from a > 500 MHz CPU since it can process the triangles faster. What happens if you more than double that CPU speed (P4 will also be using a 400MHz bus), still want to be using that static T&L Graphics Paralization Unit that could not even process more triangles than your old CPU? For the love of your computer, I sincerely hope not? >>

Hey boys, come on, he is from 3dfx, what do you expect?
He can't tell us the truth or he will lose his job!
Take it as a good laugh!
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
3dfxAlf-

I first must say that out of every PR department, for any company, you are by far the person I hold the most respect for, and I am in no way trying to attack you with this post, simply your points.

"1. The fact is, image quality DOES mean jack. And the Quadro/GeForce cards do not pump out an order of magnitude more triangles than the Voodoo5."

Image quality certainly does matter and the best way to improve that with current technology is to increase polygon complexity. I've already posted a link above showing the GF pushing out an order of magnitude more polygons under Indy3D, if anyone has any links to GLPerf, Exercizer or ViewPerf for the V5 please post them.

Bring up 3DMark2K? Then we are back to image quality first off; second off, the V5 suffers horribly because of the limited bandwidth afforded to it utilizing AGP 1X. Roughly 4 million polys max, no matter how fast the processor. A V3 paired with a PIII 450 can best the V5 in poly throughput. Not only that, but in order to hit those numbers they need to switch to a lower quality level of lighting. MDK2 displays this quite nicely, and also the great performance rift between the V5 and any hardware T&L board.

What's more, with MDK2 the faster the CPU, the larger the gap between the V5 and the GF/GF2 at least (we have numbers for those). You can select hardware T&L for the V5 and run the higher quality lighting at a massive performance hit, or run the lesser quality software T&L and still be slower than the GF/GF2/Radeon running the mode with the best visual quality.

"2. Lego men is all you get, and smooth edged Lego men look substantially better than jaggy Lego men. The day we have Pixar level animation on the home PC is still quite a few years in the future. There's not an application out there that will give you non-Lego looking characters and even if there was, the hardware to give it to you in anything approaching real-time is several years in the future."

Lego men don't need to be all we get; if certain companies were not boat anchors holding down the entire industry with their lack of progress, then we could be moving past that now. Pixar-level animation is a long way off mainly because of ray tracing and radiosity type effects (though RenderMan itself doesn't use them). There are applications that can push non-Lego men in real time, Maya, 3DSM, Lightwave etc., and they can push high quality models in real time now using the GF1 or GF2. There is a world of difference between the low-poly crap we have now and the high quality animations from the likes of Pixar and ILM; hardware T&L is a step in that direction.

"3. It's not ridiculous, it's factual. If you had a near infinite number of triangles at your disposal that might be different, but as I said in point two, Pixar level rendering in real-time on the PC is still a fair clip down the road. The facts are that no software offers such a level of detail in real-time, no hardware could handle it if it did and the GeForce2 isn't that far ahead of the Voodoo5 in terms of what it can do in real-world situations, even throwing the GeForce2's T&L engine into the mix."

So 3dfx is hell-bent on making Doom look as good as possible until we can push A Bug's Life in real time? That line of thinking certainly doesn't square with most people's. Moving forward is what we are talking about, and hardware T&L most certainly is the next major move. In real-world situations the GF is leagues beyond what the V5 can do; the V3 is, for that matter, in terms of poly throughput. The sacrifice made to accommodate two chips by way of reducing the AGP speed is a significant factor. One comment that needs special attention in the above statement:

"At this stage and for the near future, more triangles means more jaggies and even if you jack-up the resolution to minimize (not eliminate, you need FSAA for that) this, it will do nothing to address other visual artifacting issues"

You would have to be a huge moron, completely talentless, or have no clue what you are doing to design a higher-poly model that had more visible jaggies than a lower-poly model. This is PR BS that both 3dfx and nVidia have come out with, and it is a lie no matter who says it. Tell whoever came up with this one to learn a little bit about 3D visualization, the high end, not gaming, before they make ignorant comments like this.

As for FSAA removing artifacts: you don't mention those that FSAA introduces, such as blurring distant objects and haloing. I've already talked this over with Dave and Kristoff and don't need to hear about the white paper. Both those guys are very intelligent, but they did not consult any visualization pros when creating the document, only engineers and the like. RGSS in these instances increases the amount of FSAA-introduced artifacts. By and large the trade-off at the same resolution is well worth it, but doubling the res is superior from an accuracy standpoint.
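The blurring is inherent to the arithmetic: supersampling averages subsamples, which turns every hard transition, detail included, into intermediate values. A toy 2x2 box-filter downsample (purely illustrative, not any card's actual filter) shows a sharp edge becoming a gray step:

```python
# Supersample-then-average in miniature: render at 2x, box-filter down.
# A hard black/white edge picks up an intermediate gray at the boundary,
# which is the "blur" trade-off of FSAA in a nutshell.

def downsample_2x2(img):
    """Average each 2x2 block of a 2H x 2W grayscale grid into one pixel."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

# Vertical edge rendered at 2x resolution: left black (0), right white (1).
hi_res = [[0, 0, 0, 1, 1, 1],
          [0, 0, 0, 1, 1, 1]]
lo_res = downsample_2x2(hi_res)  # edge softened to an intermediate gray
```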

"4. I'm a little confused with this one. Your issue with the statement has nothing to do with the actual statement you quoted. Did you accidentally clip the wrong quote on this one? In the quote you provide, Bubba says (quite accurately) that onboard T&L is actually slower than what many systems can do via the processor. Hence, if you buy a card for T&L acceleration on faster systems, you are actually buying it for a feature that will decelerate your potential performance. Of course, the faster the processor, the wider this gap becomes. No where does it even remotely suggest that you should upgrade your processor and motherboard more than the video card."

Don't know where you guys get your systems from, but my little old GF1 obliterates, in terms of performance, a dual PIII 866 MHz with a G400 built by Dell. For that matter, SPEC lists some IR2 workstations getting thrashed by GF-based Quadro boards. SGI has moved over to nV-based hardware as well on their Linux/NT workstations, which significantly outran their dual Xeon systems with Cobalt chipsets. What systems are you guys running that can outperform a GF??

Utilizing SIMD can narrow the gap, and if you revert to very low quality hack lighting you can make it almost vanish, but the difference is very clear, and even then that is only talking about straight-up T&L, not in-game situations. In the real world, most games devote between 20%-35% of their time to T&L functions, not anywhere near 100%. CPUs have a long way to go before they catch the GF1 in gaming performance, let alone the GF2 and Radeon.

A gap does widen when comparing a T&L game running on both a non-T&L board and a T&L-equipped one when faster processors are introduced, but not in the way that you are implying. The truth is that when you offload T&L calculations from the CPU, you are increasing the percentage of time dedicated to the game code. If you take a game that is very T&L heavy and figure it is using 50% of its time on T&L (anyone know of one that is close to that?), then increasing the processor speed by 100 MHz has a larger gain on the T&L-equipped system than on the non-T&L one. MDK2 demonstrates this quite nicely.
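To put toy numbers on that (illustrative assumptions only, not benchmark figures): if half the CPU's per-frame work is T&L, offloading it doubles the frame rate, and every extra MHz then buys twice the absolute fps gain:

```python
# Toy frame-time model: the CPU does `work` units per frame at 1 unit/MHz;
# `tl_share` of it is T&L, which vanishes if done on the graphics card.

def fps(cpu_mhz, tl_share, hw_tl, work=10.0):
    cpu_work = work * ((1.0 - tl_share) if hw_tl else 1.0)
    return cpu_mhz / cpu_work

gain_sw = fps(800, 0.5, False) - fps(700, 0.5, False)  # software T&L
gain_hw = fps(800, 0.5, True) - fps(700, 0.5, True)    # hardware T&L

# The same +100 MHz yields twice the fps gain on the T&L-equipped system,
# so the gap between the two widens as processors get faster.
```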

I know most of this has already been covered, but this round of PR bashing hardware T&L is GOING to be thrown back in your faces, quite harshly, when you launch Rampage. The entire "HOSR is required" line is BS and can easily be disproven, as can all the static-acceleration bashing. This one is going to come back at you guys, though from the noise coming out of 3dfx it certainly doesn't appear that Rampage is going to make a showing this year.
 

Czar

Lifer
Oct 9, 1999
28,510
0
0
BenSkywalker, nice post.

3dfxAlf, one question: why do you always come to this board to defend Bubba instead of just telling Bubba about the thrashing he gets here and letting him defend himself? He is not your little brother.
 

Hardware

Golden Member
Oct 9, 1999
1,580
0
0
Yeah, Alf is not too much like Bubba!
We should not forget the infamous Abit "Oskar", he was great!

So if software T&L is faster than hardware T&L, what's next, software rendering with the Voodoo 7000?

Blurry FSAA? It's not a bug, it's a feature!

It's no fun for Alf, he has a product to promote which is one year behind!

The best joke is that when (if) we see Rampage with T&L, we will hear from 3dfx something like "hardware T&L is much faster and better than any software T&L...."
 

3dfxAlf

Member
Jan 11, 2000
71
0
0
Czar,

1. For optimum image quality, FSAA is a prerequisite to more triangles. As you indicate, the best mix is to have the two together. The geometry is already there. Having T&L onboard doesn't give the GeForce2 some kind of edge in the ability to crank out geometry here... the Voodoo5 can render just as complex a scene.

2. Again, the GeForce's T&L does not give it the capability to render a more complex scene than the Voodoo5. It might give it a boost in speed on slower P2 processors, but the V5 can render just as complex a scene and do so just as fast if not faster depending on the processor, even on those very few titles that currently have partial T&L.

3. It doesn't matter if the theoretical numbers beat the Voodoo5 by 100 times or more when the practical numbers are much more fillrate limited. The end result is that the performance of both cards is much closer in the real world. As you say... games right now are not designed for T&L. And even when they are, they are still going to bump against limitations in current generation solutions.
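The fill-rate argument is simple division: frame rate can never exceed fill rate divided by pixels touched per frame. A back-of-envelope sketch with round, assumed figures (not measured specs for any card):

```python
# Ceiling on frames/sec imposed purely by fill rate (assumed round numbers).

def max_fps(mpixels_per_sec, width, height, overdraw):
    """Fill-rate-bound fps: pixels drawn per frame = resolution * overdraw."""
    return (mpixels_per_sec * 1e6) / (width * height * overdraw)

# An assumed ~800 Mpixel/s part with an assumed ~3x overdraw:
low_res = max_fps(800, 1024, 768, 3)    # roughly 339 fps ceiling
high_res = max_fps(800, 1600, 1200, 3)  # roughly 139 fps ceiling

# Once a game runs near these ceilings, extra triangle throughput from
# hardware T&L stops showing up in the frame rate.
```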

4. Incorrect... all benchmarks that measure this performance show it. They showed it then, they continue to show it now. I'm not aware of any retraction from HardOCP on this matter and can't think of why they would when benchmarks continue to back it up.

KarlHungus,

There is no faulty logic in the statement. It is a verifiable fact. You can throw all the theoretical numbers around that you like but when it all comes down to it, they are just that. If there truly was such a monumental difference, then the GeForce2 would have a much larger lead over the Voodoo5 than it does... especially in 32 bit and at higher resolutions. That simply isn't the case (there... case proven). As for offloading processor time for AI, that hasn't become a bottleneck yet. By the time it does and T&L titles have arrived, 3dfx is going to be on the scene with a capable, balanced T&L solution to handle it.


HaVoC,

First Quote: I think I see where you're coming from on that one. However, we're talking about rendered graphics here and just because a part generates triangles doesn't mean they are rendered triangles.

Second quote: Onboard T&L isn't the only option for increasing triangle count. Hardware T&L is going to be an important thing, but we feel that the best way to attack that is going to be with a balanced part that is not fillrate limited and that is not overshadowed by what processors can already do.

Third quote: They aren't mutually exclusive. The Voodoo5 can render just as complex a scene as the GeForce2.

Last quote: Onboard T&L is already being overshadowed by what P3's and Athlons can do. Just check the numbers. In 32 bit (where nVidia has said it matters), it becomes pretty evident. As for the rest of your argument, both cards are good buys and are quite capable, it just depends on your needs and wants as to which will be better for you. But it's a pretty well-established fact (which many sites have commented on, not just HardOCP) that on most processors currently being used, onboard T&L isn't what it's hyped to be.

And we have never knocked hardware T&L. In fact, on many occasions we have said that it is an important step and have given due kudos to nVidia for taking it. We have always said that when T&L hit the scene, we would be there. And a balanced solution capable of handling the full T&L titles yet to come is indeed a feature set to brag about.

Czar and Hardware:
I'm not really defending Bubba, just his statements since I've made similar ones. But judging from the reaction to most of his recent interviews, I'm inclined just to shoot him URLs in the future. Unfortunately, he's busier than I am. ;-)

Ben Skywalker,
Thanks very much for the personal compliments!!
I'm very sorry I didn't get to your reply in this batch, but it is on my desktop and I'll get to it as soon as I can!

Thanks!
Alf

 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Alf, please explain to me how anyone can possibly say that software T&L using SIMD is faster than a GTS?
Sure, if we're talking Q3 at 1600x1200x32 it's not going to make any difference, since we're obviously fillrate limited.
But have a look at MDK2, a game designed somewhat with hardware T&L in mind: the Radeon and GTS beat the V5 hands down until you reach the really, really high resolutions.
If you say that it doesn't matter because we should all be playing at resolutions where we're fillrate limited, then that's fine, because it's true, but don't come saying that a P3 or T-Bird can match the performance of a GTS, because that simply ain't true.

I do understand that it's your job to tell us this kind of stuff, but can't we just dump it in this particular discussion? The vast majority of people here have a fair amount of knowledge when it comes to this stuff, and already know what's what and what's not.

Oh and btw, just a general tip to everybody here: don't EVER buy a modem from GVC, they suck, I got disconnected like once every minute while typing this reply :|
 

IvoryGrail

Senior member
Apr 30, 2000
710
0
0
I'd like to see a PIII SSE 1 GHz with a V5 (HEIDI software Z-buffer) outgun a GF2 (OpenGL) with a 500 MHz CPU in 3D Studio MAX, viewing a very high-poly scene (500,000+). Not a chance.
 

Truro

Member
Oct 10, 1999
196
0
0
nVidia's limited static T&L doesn't amount to jack. I've read several game developers who said just that. A game would have to be GeForce optimized, not just DX7 optimized, and that wouldn't work too well on all the T&L-lacking OEM parts nVidia has filled the retail computer market with, so game developers didn't do it.

DX8 may amount to something, what, in 2001 when games are out? You'll need new vid cards to optimize for DX8 anyway. As 3dfx said so well, "GeForce will be the Virge of T&L".
 

YU22

Golden Member
Mar 18, 2000
1,156
0
0

Sunner
"Oh and btw, just a general tip to everybody here, dont EVER, NEVER buy a modem from GVC, they suck, I got disconnected like once every minute while typing this reply"


Do you use W2k? I have this problem but only in W2k, when I use W98 everything is just fine.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Truro-

"nvidia's limited static t&l doesn't amount to jack. I've read several game developers who said just that."

The first part is wrong; as for the second, other developers have said it is an order of magnitude faster than the fastest CPUs. Depends on whether they know how to code properly or not.

"A game would have to be geforce optimized, not just dx7 optimized, and that wouldn't work too well on all the T&L lacking OEM parts nvidia has filled the retail computer market with, so games developers didn't do it."

Check again; you may want to rerun benches utilizing the new Det3 drivers, which utilize proper vertex caching optimizations. Of particular interest was the huge performance jump in Kyle's 3dfx-approved T&L test. Using his settings, performance is up over 50%, try it yourself.

"DX8 may amount to something, what, in 2001 when games are out? You'll need new vid cards to optimise DX8 anyway. As 3dfx said so well, "Geforce will be the Virge of T&L""

As long as we ignore MDK2, Evolva, Heavy Metal FAKK2, SoF, all the Quake-engined games, etc. I don't feel like reposting the long list of titles that support T&L, but the more support they offer, the faster they run on the GF. Look at MDK2: if you select high quality lighting, the GF-based boards are around 300% faster than a V5 on an 800 MHz+ PIII. If the GF is the "Virge" of hardware T&L, then 3dfx isn't worth speaking of.

3dfxAlf-

I have been waiting for a response to my points but unfortunately that hasn't come and I can't wait to respond to the others you bring up in your reply-

"The geometry is already there. Having T&L onboard doesn't give the GeForce2 some kind of edge in the ability to crank out geometry here... the Voodoo5 can render just as complex a scene."

At what FPS? With the limits of AGP1X the V5 can't compete with the V3, let alone the GF2.

"It might give it a boost in speed on slower P2 processors, but the V5 can render just as complex a scene and do so just as fast if not faster depending on the processor, even on those very few titles that currently have partial T&L."

MDK2: enable specular lighting and the V5 is absolutely obliterated in FPS, as is every other non-hardware-T&L board. The GF/GF2 and Radeon absolutely crush the V5, and that is using an 800+ MHz PIII; Rev has the numbers up.

"It doesn't matter if the theoretical numbers beat the Voodoo5 by 100 times or more when the practical numbers are much more fillrate limited. The end result is that the performance of both cards is much closer in the real world. As you say... games right now are not designed for T&L. And even when they are, they are still going to bump against limitations in current generation solutions."

I would much rather play Evolva in high-poly mode with bump mapping enabled on a GF/GF2 or Radeon at 640x480 than on a V5 6K at 1280x960 with 4x FSAA; they are leaps and bounds beyond the V5 in terms of visual quality. This is just one example, but high res and FSAA are workarounds for lacking geometry, rather than utilizing detail that is already available in terms of feature support.

"Incorrect... all benchmarks that measure this performance show it. They showed it then, they continue to show it now. I'm not aware of any retraction from HardOCP on this matter and can't think of why they would when benchmarks continue to back it up."

No, the benchmarks completely disagree with it; check them with the latest drivers. There is one single bench that "backs" the CPU T&L line, and that bench utilizes high quality specular lighting for the hardware T&L settings and hack lighting for SIMD. Enable specular for the CPU also and you will see the numbers plummet. Not to mention that the one sub-test (3DMark2K high poly) that shows faster numbers in theory also demonstrates that in an in-game situation hardware T&L is significantly faster.

"As for offloading processor time for AI, that hasn't become a bottleneck yet. By the time it does and T&L titles have arrived, 3dfx is going to be on the scene with a capable, balanced T&L solution to handle it."

The GF2 Ultra seems quite balanced, as is the original GF. The whole line against the GF2 because it isn't balanced doesn't make sense to me; the GPU has too much power?? Who cares, it is quite a bit faster than the V5 and with the latest drivers also bests the Radeon in terms of performance, though the Radeon does have the edge of even more feature support, which helps balance the performance gap considerably.

"Second quote: Onboard T&L isn't the only option for increasing triangle count. Hardware T&L is going to be an important thing, but we feel that the best way to attack that is going to be with a balanced part that is not fillrate limited and that is not overshadowed by what processors can already do."

Please show one single bench where both CPU and hardware T&L run the exact same settings and the CPU comes out ahead. I'd like to see one bench backing up this claim, because as of yet every single one supports hardware T&L being significantly faster than SIMD/CPU T&L. All of them.

"Last quote: Onboard T&L is already being overshadowed by what P3's and Athlons can do. Just check the numbers. In 32 bit (where nVidia has said matters), it becomes pretty evident. As for the rest of your argument, both cards are good buys and are quite capable, it just depends on your needs and wants as to which will be better for you. But it's a pretty well-established fact (which many sites have commented on, not just HardOCP) that on most processors currently being used, onboard T&L isn't what it's hyped to be."

Onboard T&L isn't what it is hyped to be? Hmm, I would say that title support is all that is holding that back. Every game that utilizes it has shown some impressive improvements. The only time the 32-bit performance numbers showed software T&L to be faster was when local memory was exceeded and AGP texturing was forced into play because of the size of the frame buffer (such as in MDK2 when running UHQ settings in 32-bit). Unfortunately for that argument, the new Dets solve that and then some. With proper vertex caching the Det3's perform quite a bit faster no matter what the settings, until you completely fill-rate limit the board. This was also never an issue with 64MB boards; no matter, it is gone now.

"Thanks very much for the personal compliments!!
I'm very sorry I didn't get to your reply in this batch, but it is on my desktop and I'll get to it as soon as I can!"


Thank you for your continued honorable behaviour in a job that is often associated with the likes of politicians.

I hope to see your reply soon, either to my first post or second.
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
why oh why didn't I take the BLUE pill..

Come on man, you just resurrected this thread for more arguments, even though you probably won't change your opinions... bleh.

Take my advice: unless you have facts, don't argue. ESPECIALLY not against BenSkywalker; he knows his stuff, and won't be converted till he sees things differently, but he won't, until there is good evidence that nVidia has been unseated.

The closest to doing that right now was ATi, but their lack of Win2K driver support is probably one thing people like BenSkywalker don't like about it. Other than that, it's only marginally better than nVidia's GTS, and even then, the Detonator 3 drivers make it a toss-up, with the Radeon winning only because of features.
 

YU22

Golden Member
Mar 18, 2000
1,156
0
0

I certainly hope that 3dfxAlf didn't use my joke as a cheap excuse not to answer serious questions asked by other members...

 

3dfxAlf

Member
Jan 11, 2000
71
0
0
Ben,

Here's your response!

3dfxAlf-

I have been waiting for a response to my points but unfortunately that hasn't come and I can't wait to respond to the others you bring up in your reply-

I've had quite a bit going on lately and I apologize for not getting back to you sooner!

>&quot;The geometry is already there. Having T&amp;L onboard doesn?t give the
>GeForce2 some kind of edge in the ability to crank out geometry
>here? the Voodoo5 can render just as complex a scene.&quot;

At what FPS? With the limits of AGP1X the V5 can't compete with the V3, let alone the GF2.

It depends on the system in question, which is part of my original point. Also, the performance difference between PCI66, AGP 1x, 2x and 4x is not even a factor here.

>&quot;It might give it a boost in speed on slower P2 processors, but the
>V5 can render just as complex a scene and do so just as fast if not
>faster depending on the processor, even on those very few titles
>that currently have partial T&amp;L.&quot;

MDK2: enable specular lighting and the V5 is absolutely obliterated in FPS, as is every other non-hardware-T&L board. The GF/GF2 and Radeon absolutely crush the V5, and that is using an 800+ MHz PIII; Rev has the numbers up.

I've benchmarked it myself. There is a lead in 16-bit; that I will concede. But if you up the color depth to 32-bit (which nVidia has spent well over a year evangelizing as the only valid color depth) the lead drops dramatically. I'd hardly call that crushing the V5 by any stretch of the imagination.

>&quot;It doesn't matter if the theoretical numbers beat the Voodoo5 by
>100 times or more when the practical numbers are much more fillrate
>limited. The end result is that the performance of both cards is
>much closer in the real world. As you say... games right now are not
>designed for T&amp;L. And even when they are, they are still going to
>bump against limitations in current generation solutions.&quot;

I would much rather play Evolva in high-poly mode with bump mapping enabled on a GF/GF2 or Radeon at 640x480 than on a V5 6K at 1280x960 with 4x FSAA; they are leaps and bounds beyond the V5 in terms of visual quality. This is just one example, but high res and FSAA are workarounds for lacking geometry, rather than utilizing detail that is already available in terms of feature support.

Ultimately, the call made for your particular system and taste is your choice and your preference. However, out of the two options, guess which is a checkmark item for military simulators? It could be argued that these people know a thing or two about realism. Also, if you'll check most of what has been said about FSAA, you'll see that FSAA is overwhelmingly favored as the most dramatic image quality enhancement. I've got about 20 links to sites saying just that, giving the Voodoo5 the undisputed image quality crown. Since this is a fairly well-established stance, do I really need to post them?

As for FSAA being a workaround for lacking geometry... that's simply false. That statement is so far out that I really am at a loss as to how else to respond. FSAA is a significant feature, not an excuse. Again, the vast majority of the market agrees.

>&quot;Incorrect... all benchmarks that measure this performance show it.
>They showed it then, they continue to show it now. I'm not aware of
>any retraction from HardOCP on this matter and can't think of why
>they would when benchmarks continue to back it up.&quot;

No, the benchmarks completely disagree with it; check them with the latest drivers. There is one single bench that "backs" the CPU T&L line, and that bench utilizes high quality specular lighting for the hardware T&L settings and hack lighting for SIMD. Enable specular for the CPU also and you will see the numbers plummet. Not to mention that the one sub-test (3DMark2K high poly) that shows faster numbers in theory also demonstrates that in an in-game situation hardware T&L is significantly faster.

3DMark 2000 defaults to 16-bit color. Test in 32-bit mode using hardware T&amp;L then select SSE and check out the difference for yourself.

>&quot;As for offloading processor time for AI, that hasn't become a
>bottleneck yet. By the time it does and T&amp;L titles have arrived,
>3dfx is going to be on the scene with a capable, balanced T&amp;L
>solution to handle it.&quot;

The GF2 Ultra seems quite balanced, as is the original GF. The whole line against the GF2 because it isn't balanced doesn't make sense to me; the GPU has too much power?? Who cares, it is quite a bit faster than the V5 and with the latest drivers also bests the Radeon in terms of performance, though the Radeon does have the edge of even more feature support, which helps balance the performance gap considerably.

The GF2's T&L is NOT balanced; that's a documented fact and has been since the original GeForce. People have commented on the fact that the lighting engine limits the transform (for example, in some of the Radeon reviews). Q3 only uses the transform. Depending on your system, T&L can make a difference in the very few titles that have T&L options; to that point I will agree. However, even with the partial T&L titles on the market now, we have a ways to go before we see any appreciable utilization of this feature, and when that point comes, a more balanced solution will be required. And if you go up to 32-bit, a large chunk of that benefit simply goes away. We never knocked T&L; we only said we want to release something better and that we would be on the scene when it was a factor.

>&quot;Second quote: Onboard T&amp;L isn't the only option for increasing
>triangle count. Hardware T&amp;L is going to be an important thing, but
>we feel that the best way to attack that is going to be with a
>balanced part that is not fillrate limited and that is not
>overshadowed by what processors can already do.&quot;

Please show one single bench where both CPU and hardware T&amp;L run the exact same settings and the CPU comes out ahead. I'd like to see one bench backing up this claim, because as of yet every single one supports hardware T&amp;L being significantly faster than SIMD/CPU T&amp;L. All of them.

Run Test Drive 6 in 32-bit color.

>&quot;Last quote: Onboard T&amp;L is already being overshadowed by what P3's
>and Athlons can do. Just check the numbers. In 32 bit (where nVidia
>has said matters), it becomes pretty evident. As for the rest of
>your argument, both cards are good buys and are quite capable, it
>just depends on your needs and wants as to which will be better for
>you. But it's a pretty well-established fact (which many sites have
>commented on, not just HardOCP) that on most processors currently
>being used, onboard T&amp;L isn't what it's hyped to be.&quot;

Onboard T&amp;L isn't what it is hyped to be? Hmm, I would say that title support is all that is holding it back.

That's not the only thing. People have seen T&amp;L in action and the crown is still going to the Voodoo5 for image quality. That's not 3dfx PR there, that's direct feedback from the community. But even forgetting that, the lack of title support for T&amp;L is not an insignificant factor by any means. The 100 titles that nVidia has cited have yet to make it onto the scene nearly a year later. T-Buffer, on the other hand, delivers FSAA, is here, now, and with our implementation you should be able to apply this new technology to just about every 3D game you own as well as just about any 3D game you might buy in the future. There is no lack of title support there. That is far from a feature without merit (and far from the only feature we have going for us with the V5).

Every game that utilizes it has shown some impressive improvements. The only time the 32-bit performance numbers showed software T&amp;L to be faster was when local memory was exceeded and AGP texturing was forced into play because of the size of the frame buffer (such as in MDK2 when running UHQ settings in 32-bit). Unfortunately for that argument, the new Dets solve that and then some. With proper vertex caching, the Det3s perform quite a bit faster no matter what the settings, until you completely fill-limit the board. This was also never an issue with 64MB boards; either way, it is gone now.

Check out Tom's Hardware for information about the Detonator 3 drivers.

>>&quot;Thanks very much for the personal compliments!!

>I'm very sorry I didn't get to your reply in this batch, but it is
>on my desktop and I'll get to it as soon as I can!&quot;

Thank you for your continued honorable behaviour in a job that is often associated with the likes of politicians.

Wow! Thank you again!

I hope to see your reply soon, either to my first post or second.

I figured I'd respond to the most up-to-date version. Hope you don't mind!


 

Mikewarrior2

Diamond Member
Oct 20, 1999
7,132
0
0
Alf continues to come to the BBS in the manner of his PR partner Bubba, spewing crap mixed in with a partial truth.

T&amp;L MDK2 crushes V5s; I've seen it in action. I get 76 FPS at 1152x768 in T&amp;L MDK2 in Win2K. My friend's V5 got no more than 56 in Win98SE.

I don't know how you could possibly see that as being a close benchmark. Or maybe it's because you think the GeForce is so memory-bandwidth limited that your non-bandwidth-limited V5 will beat it. Guess not, because my memory-bandwidth-limited card is easily beating a V5, and his card was overclocked.

I guess you're just upset because you may be out of a job soon, since 3dfx isn't doing too well, and no way any respectable 3D company other than BitBoys will hire you. You also keep acting as if you're the only FSAA player in town. Well, you're not, so stop talking it up like it's 3dfx's godsend to all people, because all top-line video cards have FSAA.

For example, on my GF2 in Win2K, all in 32-bit:
640x480: 120fps, everything max. 4xFSAA: 60fps, everything max.
800x600: 108fps, everything max. 4xFSAA: 38fps, everything max.

I don't know, but compared to Anand's FSAA scores in his V5 review, http://www.anandtech.com/showdoc.html?i=1276&amp;p=19, my &quot;software hack&quot; FSAA is faster than your company's &quot;hardware method&quot; FSAA.

So basically, your company has nothing it can really do better than anyone else, other than your Glide performance. Right now, the Radeon is the best based on features and overall performance, the GF2 is a near tie except for the lack of some features, and the V5 is a long way down in third place.
 

Imperium97

Member
Jul 9, 2000
156
0
0
I'm not sure which is worse: the fact that 3dfx can't hire PR guys that don't spew crap from their mouths, or the fact that just to barely compete with other 3D companies they need to make a board that's really two cards rolled into one. Oh, and the ever-elusive V5-6000 will be four boards, in effect, with an external power supply. Do I need a special case extension so the damn thing will fit in my case?

Anyway, can you only resort to bad-mouthing nVidia's features? You talk an awful lot about how the market agrees with 3dfx, etc. Then why do you keep stating the uselessness of T&amp;L on the GF series? It's quite clear that T&amp;L is increasingly becoming standard. If it's useless, then why is the GF2 still beating the V5 in virtually every bench? With or without T&amp;L, the GF2 beats the V5-5500 in the majority of tests, despite the GF2's memory bandwidth problems. T&amp;L merely widens the gap between the V5 and GF2.

Maybe 3dfx should get out of the computer industry and go into the heating industry, and save freezing children from the harsh winter months by giving them a board with a few more VSA-100 chips on it.
 

Mem

Lifer
Apr 23, 2000
21,476
13
81
LOL... even the GeForce MX beats the V5 in some benchmarks. Not trying to start a flame war, but why can't they (PR) tell the truth? Or do they get paid to lie, twist, and bend the facts? After all, we the consumers are their bread &amp; butter.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Alf-

It seems there are a few misconceptions as to what exactly I was saying.

&quot;It depends on the system in question, which is part of my original point. Also, the performance difference between PCI66, AGP 1x, 2x and 4x is not even a factor here.&quot;

In geometry throughput, it absolutely is key. Run a V3 against a V5: on an AGP 2X system, the V3 will push roughly double the poly throughput that a V5 can. The bandwidth is limiting the amount of vertex data that can make it to the card.
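To put rough numbers on that point, here is a back-of-envelope sketch (in Python). The bus figures are nominal peak rates, and the 32-byte vertex layout (position, normal, one set of texture coordinates, all floats) is an illustrative assumption, not a measured format; real throughput would be lower still.

```python
# Back-of-envelope: peak bus bandwidth divided by bytes per vertex
# caps how much geometry can reach the card each second.
# Assumed 32-byte vertex: xyz (3 floats) + normal (3) + uv (2).
VERTEX_BYTES = 3 * 4 + 3 * 4 + 2 * 4  # = 32 bytes

# Nominal peak transfer rates in MB/s (assumed, not measured).
bus_mb_per_sec = {
    "PCI 66":  266,
    "AGP 1x":  266,
    "AGP 2x":  533,
    "AGP 4x": 1066,
}

for bus, mb in bus_mb_per_sec.items():
    verts_per_sec = (mb * 1_000_000) // VERTEX_BYTES
    print(f"{bus}: ~{verts_per_sec / 1_000_000:.1f} M vertices/s peak")
```

Even at these ideal peaks, doubling the bus roughly doubles the vertex ceiling, which is the behavior described above for the V3 on AGP 2X.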

&quot;I've benchmarked it myself. There is a lead in 16-bit; that I will concede. But if you up the color depth to 32-bit (which nVidia has spent well over a year evangelizing as the only valid color depth), the lead drops dramatically. I'd hardly call that crushing the V5 by any stretch of the imagination.&quot;

Leave specular lighting (hardware T&amp;L checked) enabled for both boards, which was my original point. You can reduce image quality and run a lot quicker with the V5, but still not as fast as a hardware T&amp;L board, and you are still dealing with inferior visual quality. Even using the old Det2s, which cached vertex data poorly, the GF2 was still significantly faster than the V5 when utilizing high-quality lighting. Rev's numbers:

1024x768 32bit color-

V5 - 38.09

GF2 - 82.04

Lower the resolution down to 800x600 still using 32bit color-

V5 - 42.55

GF2 - 113.38

I would say those qualify as noticeable differences in 32bit. Link, though you have to scroll through a few pages as Rev uses an odd setup- http://www.voodooextreme.com/reverend/Reviews/nVidia_GF2GTS/7.html

&quot;Also, if you'll check most of what has been said about FSAA, you'll see that FSAA is overwhelmingly favored as the most dramatic image quality enhancement. I've got about 20 links to sites saying just that, giving the Voodoo5 the undisputed image quality crown. Since this is a fairly well-established stance, do I really need to post them?&quot;

I've been using FSAA for years: 4x4, at resolutions up to 8000x8000, using software-based render engines for visualization packages. FSAA is a hack, period. Enabling FSAA reduces image detail; the best solution would be to push a native resolution that exceeds the level at which the human eye can detect the artifacts. 8000x8000 on an average display would do it. We can't handle that with current monitors, but when we can, FSAA will be of very little use.
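The &quot;4x4&quot; being described is ordered-grid supersampling: render at k times the target resolution, then box-filter each k x k block down to one pixel. A minimal sketch of the idea (the toy `render` function standing in for a real renderer is my own illustration):

```python
# Ordered-grid supersampling FSAA in miniature: take k*k sub-samples
# per pixel and average them. Edge pixels end up with fractional
# coverage instead of hard 0/1 values -- smoother, but detail finer
# than one pixel is averaged away, which is the "reduces detail" point.

def render(x, y):
    # Stand-in for a renderer: a hard diagonal edge (1 above, 0 below).
    return 1.0 if y > x else 0.0

def supersample(width, height, k):
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            total = 0.0
            for sy in range(k):
                for sx in range(k):
                    # Sub-sample positions centered inside the pixel.
                    total += render(px + (sx + 0.5) / k,
                                    py + (sy + 0.5) / k)
            row.append(total / (k * k))
        image.append(row)
    return image

img = supersample(4, 4, 4)  # 4x4 image, 4x4 sub-samples per pixel
# Pixels straddling the diagonal get values strictly between 0 and 1.
```

Pushing the native resolution high enough makes the same averaging happen inside the eye instead of in the filter, which is the argument above for very high resolutions over FSAA.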

&quot;3DMark 2000 defaults to 16-bit color. Test in 32-bit mode using hardware T&amp;L then select SSE and check out the difference for yourself.&quot;

Checking SSE enables low-quality diffuse lighting; hardware T&amp;L uses superior-quality specular lighting. Because of the simplicity of the bench you can't tell the difference, but games like MDK2 display it quite nicely. How about legit comparisons like ViewPerf, GLPerf, and Indy3D, where the CPU isn't allowed to cheat its way to victory?

&quot;Run Test Drive 6 in 32-bit color.&quot;

Software: 32 FPS (Athlon 550). Hardware: 92 FPS (GeForce1).

&quot;That's not the only thing. People have seen T&amp;L in action and the crown is still going to the Voodoo5 for image quality. That's not 3dfx PR there, that's direct feedback from the community.&quot;

You don't think that is PR? Anyone here with both a V5 and a GF/Radeon board, run Evolva. Give the V5 every advantage it can muster (4x FSAA, 32-bit color, LOD bias tweaks), then set the GF/Radeon for bump-mapped 16-bit without FSAA, and try to say the V5 doesn't look horrible by comparison. The titles that you want to push are those that use T&amp;L for increased FPS rather than for increased polygon complexity the way Evolva does, and it is in those complexity-driven titles that the V5, and any other non-T&amp;L board, starts looking real bad.

&quot;But even forgetting that, the lack of title support for T&amp;L is not an insignificant factor by any means. The 100 titles that nVidia has cited have yet to make it onto the scene nearly a year later.&quot;

Depends on your definition. Any game that utilizes any native OpenGL support for either lighting or transforms benefits in some way from hardware T&amp;L. This isn't just nVidia; this is also ATi, and soon Matrox. The &quot;support&quot; has been overhyped, but now the big two are both on board.

&quot;T-Buffer, on the other hand, delivers FSAA, is here, now, and with our implementation you should be able to apply this new technology to just about every 3D game you own as well as just about any 3D game you might buy in the future. There is no lack of title support there. That is far from a feature without merit (and far from the only feature we have going for us with the V5).&quot;

FSAA is the only feature of the T-Buffer that is going to be widely used unless a small miracle happens. And for that, the GF/GF2 and Radeon also offer FSAA; the edge that 3dfx has is noticeable but still marginal.

&quot;Check out Tom's Hardware for information about the Detonator 3 drivers.&quot;

What was I supposed to be looking at? Some people are supposed to be having problems; how is that different from any other driver release? I can assure you that 3dfx driver revisions create their own share of headaches, as evidenced on this board. Perhaps not as many as nVidia's, but you guys don't release drivers every other week either.

&quot;The GF2's T&amp;L is NOT balanced; that's a documented fact and has been since the original GeForce. People have commented on the fact that the lighting engine limits the transform (for example, some of the Radeon reviews).&quot;

Then no CPU I have ever seen is balanced. Utilizing high-quality specular lighting brings CPUs to their knees very quickly; I deal with it every day. Non-lit geometry is rather simplistic by comparison. I would say again: try one of the industry-standard benches that utilize identical lighting techniques for both software and hardware and see what the results are. On average, the GF1 is an order of magnitude faster than a 600 MHz Athlon, roughly ten times the throughput.
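To see why specular is so much heavier than diffuse per vertex, here is a minimal sketch in the Blinn-Phong style (my own illustration, not code from any of the benchmarks named above): diffuse is a single dot product, while specular adds building a half-vector, an extra normalize (with its square root and divides), another dot product, and an exponentiation.

```python
# Per-vertex lighting cost sketch, Blinn-Phong style.
# Vectors are plain (x, y, z) tuples; all directions are unit length.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0]/n, v[1]/n, v[2]/n)

def diffuse(normal, to_light):
    # One dot product per vertex.
    return max(0.0, dot(normal, to_light))

def specular(normal, to_light, to_eye, shininess=32):
    # Half-vector between light and eye directions, an extra
    # normalize, another dot product, and a power -- the added
    # work that piles up when done in software per vertex.
    h = normalize((to_light[0] + to_eye[0],
                   to_light[1] + to_eye[1],
                   to_light[2] + to_eye[2]))
    return max(0.0, dot(normal, h)) ** shininess

n = (0.0, 0.0, 1.0)       # surface normal
l = (0.0, 0.0, 1.0)       # light straight on
e = (0.0, 0.0, 1.0)       # eye straight on
print(diffuse(n, l))      # 1.0
print(specular(n, l, e))  # 1.0 at the highlight peak
```

Dropping the specular term (as the &quot;SSE checked&quot; path reportedly does) removes exactly this extra work, which is why comparing that path against full hardware lighting is not an apples-to-apples benchmark.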

&quot;Q3 only uses the Transform. Depending on your system, T&amp;L can make a difference in the very few titles that have T&amp;L options, to that point I will agree. However, even with the partial T&amp;L titles on the market now, we have a ways to go before we see any appreciable utilization of this feature and when that point comes, a more balanced solution will be required.&quot;

From what we have heard over and over on this aspect, it sounds like you guys are planning on having a RISC-based CPU (or perhaps a modular unit combined with the rasterizer?) on board the Rampage to handle T&amp;L, instead of true dedicated hardware. That would fall in line with the PR spin, but from a performance standpoint it leaves much to be desired. Improving the lighting pipelines is one thing; compromising polygon throughput so you can offload IK and other traditionally CPU-bound calculations onto the graphics chip is something else entirely. Check out the ten-CPU MIPS IR2 stations from SGI having a hard time keeping up with the unbalanced GeForce in certain situations. Flexibility is often not the best solution. I have a dedicated MPEG2 decoder even though my system can handle it with ease; dedicated hardware is almost always superior for specific tasks.

 