Robo-
At first I was thinking I shouldn't respond to some of your comments cuz that would be thread crapping..... then I recalled I started the d@mn thread so I can crap on it
"MBTR aside (is it REALLY that stressful of an engine, or is it just coded poorly?)"
~60FPS on a GF2 Ultra @1024x768 32bit color sounds reasonable given the quality of the graphics. I'll say this, it runs a he!l of a lot faster than Giants
"I really wish they would've done some tests in UT with the high-res textures. Can the Kyro-II manage that?"
Yes.
"also, Ben, I noticed something funny....these new benchmarks you've been rallying about (MBTR and Serious Sam)....I noticed that the 5500 seems to do pretty well in comparison to the MX, don't you?"
Yep, I noticed that. What was slugging it out with the GTS and Radeon in 1999 games is now fighting with the budget brothers in 2000/01 games. Hmmmmmm...... future proof, anyone?
"I really hate the fact that the tester doesn't have the sense to disable T&L for the cards that don't have T&L units in MDK2. that automatically drops your framerate significantly, who the hell plays like that?"
It looks a decent amount better and is playable, so why wouldn't you? 100+FPS doesn't do a ton of good in MDK2; for actual gameplay, even if I had a V5 <shudder> I would definitely enable hardware T&L. This does bring up another point though: we can now look forward to asterisked/compromised benchmarking being a thing of the past. Why doesn't AnandTech use Q3 HQ settings? Because the V5 can't do it. Why did 3DMark2K use 16bit as default? Because 3dfx's latest and greatest at the time couldn't handle 32bit. Why does the V5 have an asterisk next to it in MBTR? Because it couldn't run the test like all the other cards could. Then there is the MDK2 T&L situation as another example. The Rampage would have rectified this situation, but speaking for myself, I'm sick of benches having to dance around the shortcomings of one company's boards. When the V5 is dropped off the charts we can get away from having 3dfx-friendly or asterisk-included benches
Edit- That'll teach me to have eight different IE windows and OE open doing multiple things at once; you replied while I was doing something else
"as far as UT, well, using it as a benchmark indicative of DX7 is silly, always has been. They should use Evolva."
D@mn right they should
powerVR2-
I can understand your enthusiasm, but you have a few misconceptions
"it has everything that geforce 2 have less the T&L"
Doesn't have register combiners either.
"but it has some features of geforce3 "dot3 bump mapping":"
The GeForce1 SDR had Dot3. On top of that, right now Dot3 still isn't working properly on the Kyro boards (a driver issue, it appears). It does..... something, but it doesn't look like either the reference rasterizer or the GeForce/Radeon (which both look like the reference rasterizer, as they are supposed to).
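For anyone wondering what Dot3 actually computes: it's just a per-pixel dot product between a normal fetched from a texture and a light vector, clamped and used as a diffuse intensity. A minimal sketch of that operation (the RGB packing and function names here are my illustrative assumptions, not any board's actual combiner setup):

```python
# Sketch of the Dot3 bump-mapping operation: each texel stores a surface
# normal packed into RGB bytes (0..255 mapped to -1..1), and the hardware
# computes max(0, N . L) per pixel to get a diffuse lighting term.

def unpack_normal(rgb):
    """Map an RGB byte triple (0..255) to a normal vector in [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in rgb)

def dot3(n, l):
    """Clamped per-pixel dot product, as a Dot3 combiner stage computes it."""
    d = sum(a * b for a, b in zip(n, l))
    return max(0.0, d)

# A texel encoding roughly the straight-up normal (0, 0, 1):
n = unpack_normal((128, 128, 255))
l = (0.0, 0.0, 1.0)  # light shining straight at the surface
print(round(dot3(n, l), 2))  # -> 1.0, full brightness
```

The point of the complaint above is that on the Kyro this stage currently produces something visibly different from what the reference rasterizer produces for the same inputs.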
"what ?
are you telling that geforce 2 is more scalable because of T & L ?
lol"
I'll tell you this, the GeForce2 handles high poly loads quite a bit better than the Kyro does (traditional versus "tilers" overall, for that matter). Every tiler to date has issues when dealing with high enough geometry loads. This is one of the reasons many people were quite intrigued with the Gigapixel technology; they had reportedly worked around the issues that have been part of the technology to date.
"fact: if the games are getting more complex i.e. more polygones,triangles, etc... the overdraw will increase... thus this kyro will get "more" fill rate and have to process "LESS" TRIANGLES than the "others" don't you think ?"
No, the Kyro must process every single poly, and it's possible it will have to handle a great deal of them several times. Simplified: the way the PVR chips determine visibility is by using the geometry data before applying textures/rasterizing (by "shooting a beam" that stops when it hits something). Without handling every triangle first, the PVR wouldn't have any way to know what is visible. The geometry data is placed in "buckets" (to borrow Dave's term), and it all has to be handled. It is possible to require the same geometry data multiple times, as the Kyro breaks a scene down into tiles based on pixel size/screen area, not geometric amounts.
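To make the "buckets" point concrete, here's a rough sketch of binning triangles into screen-space tiles by bounding box; the tile size and the bin-by-bounding-box method are my assumptions for illustration, not how the Kyro actually sorts geometry:

```python
# Why a tiler touches the same triangle more than once: geometry is binned
# into screen-space tiles ("buckets"), so a triangle that crosses tile
# borders lands in several buckets and is handled again for every tile it
# overlaps. As scenes gain polys, the binning work grows with them.

TILE = 32  # tile width/height in pixels (illustrative, not the Kyro's size)

def bin_triangle(tri, buckets):
    """Add a triangle (three (x, y) verts) to every tile its bbox overlaps."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            buckets.setdefault((tx, ty), []).append(tri)

buckets = {}
bin_triangle([(5, 5), (25, 5), (15, 25)], buckets)     # fits in one tile
bin_triangle([(10, 10), (90, 10), (50, 60)], buckets)  # spans several tiles
print(len(buckets))  # -> 6: the big triangle alone overlaps 3x2 tiles
```

The small triangle is processed once; the larger one is revisited for six different buckets, which is the overhead traditional renderers don't pay.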
"by the way "everybody" knows that the geforce's 2 have a mediocre performer T & L ... geforce 3 is great because "THEY" can program the T & L via software lol like a cpu"
In terms of strictly performance, a static T&L engine (GeForce2) is faster than a flexible one (GeForce3). You think the 3DMark2K scores that show geometry throughput are a joke? Now, things change considerably when you add other factors into the picture, but in terms of raw performance the GF2 is certainly not mediocre by any stretch of the imagination. In fact, its only competitor in that area is the Radeon, which is absolutely destroyed in terms of T&L when facing a GF2, and for that matter still gets beaten quite badly by the GeForce1.
The Kyro2 is definitely a very interesting product; the performance looks extremely impressive and the price is real hard to ignore.
PeAK-
All sounds nice in theory, but I still own, and use, mine (paid $320 for my 8MB All In Wonder RagePro), and that performance boost never materialized in games. In fact, the fastest drivers I have seen to date for gaming are the ones that shipped on the included driver CD.