A call to boycott Nvidia games for ATI/AMD owners

Wreckage

Banned
Jul 1, 2005
5,529
0
0
AMD on the other hand has been very successful with it since the HD 2900XT, which was very competitive with the 8800GTS 640.

The 2900 is one of the worst GPU launches in history. In history. As for competing with the 8800GTS....

http://techreport.com/articles.x/12458/16

Ultimately, though, we can't overlook the fact that AMD built a GPU with 700M transistors that has 320 stream processor ALUs and a 512-bit memory interface, yet it just matches or slightly exceeds the real-world performance of the GeForce 8800 GTS. The GTS is an Nvidia G80 with 25% of its shader core disabled and only 60% of the memory bandwidth of the Radeon HD 2900 XT. That's gotta be a little embarrassing.

So it matches a severely disabled G80.... score! Of course the 8800GTX destroyed it. At least the FX series recovered and was a lot closer to the 9xxx series (even beating it in some games). The R600 was an absolute failure. It forced AMD to accept the role of 2nd place and sell their cards at a steep discount.

2900 = worst GPU ever.
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
The 2900 is one of the worst GPU launches in history. In history. As for competing with the 8800GTS....

http://techreport.com/articles.x/12458/16

So it matches a severely disabled G80.... score! Of course the 8800GTX destroyed it. At least the FX series recovered and was a lot closer to the 9xxx series (even beating it in some games). The R600 was an absolute failure. It forced AMD to accept the role of 2nd place and sell their cards at a steep discount.

2900 = worst GPU ever.

Severely disabled? The 8800 GTS 640 offered the best bang for the buck at that time, and the R600 wasn't as bad as the GeForce FX launch, which was the worst GPU launch ever; the FX couldn't even outperform the midrange 9600 PRO in DX9 games. At least the HD 2900XT was able to match the 8800GTS regardless of the DX API. Try harder.

http://www.anandtech.com/show/2231/31

At face value, this sounds quite a bit like NVIDIA's NV30 launch, but thankfully we wouldn't go so far as to call this NV30 Part 2: the R600 Story. Even though AMD has not built a high end part, they have built a part that runs very consistently at its performance target (which could not be said about NV30). The HD 2900 XT competes well with the 640MB 8800 GTS.
 
Last edited:

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
lol, I love it when people disregard the HD 5970.

If Fermi is an engineering feat, that makes the 5970 an engineering god: 18 months and still top dog.

You can argue semantics all you want, but that's how it's always going to be. nVidia and AMD are different companies with different strategies; it's a known fact that AMD quit going after the single GPU crown a long time ago. Their model since the failure of the HD 2000 series has always been to build a good mid-range chip, then scale it up.

The most interesting thing is that the 6970 is actually creeping up on the GTX 580, and with every successive generation it seems to get closer.

It's creeping up in size too, whereas GF100/110 are smaller than the 65nm GT200.
 

mosox

Senior member
Oct 22, 2010
434
0
0
Nvidia bought AGEIA only to kill it. Then take a look at this:

As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here's what I saw, in miniature: Ok, so it's hard to see, but Task Manager is showing CPU utilization of 14%, which means the game—and Nvidia's purportedly multithreaded PhysX solver—is making use of just over one of our Core i7-965 Extreme's eight front-ends and less than one of its four cores. I'd say that in this situation, failing to make use of the CPU power available amounts to sabotaging performance on your competition's hardware.

http://techreport.com/articles.x/17618/13

Also

The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers though, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.

[My emphasis]

http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143&p=5

Bottom line: Nvidia is fighting tooth and nail against any open solutions, and this continues with its 3D technology.
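
To make concrete what "making use of the CPU power available" would even involve, here's a minimal sketch (my own toy example in C, not anything from PhysX or AGEIA) of fanning a per-body update out across the cores that Task Manager showed sitting idle:

Code:
#include <pthread.h>

#define NUM_THREADS 4                  /* hypothetical: one worker per core */

struct slice { float *bodies; int begin, end; };

/* Each worker integrates its own slice of the bodies; the slices are
   independent in this toy case, so no locking is needed. */
static void *step_slice(void *arg)
{
    struct slice *s = (struct slice *)arg;
    for (int i = s->begin; i < s->end; i++)
        s->bodies[i] += 0.016f;        /* stand-in for a real physics step */
    return 0;
}

void step_world(float *bodies, int n)
{
    pthread_t tid[NUM_THREADS];
    struct slice sl[NUM_THREADS];
    int chunk = n / NUM_THREADS;

    for (int t = 0; t < NUM_THREADS; t++) {
        sl[t].bodies = bodies;
        sl[t].begin  = t * chunk;
        sl[t].end    = (t == NUM_THREADS - 1) ? n : (t + 1) * chunk;
        pthread_create(&tid[t], 0, step_slice, &sl[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], 0);
}

A real solver is messier than that (constraints couple bodies together), but the point stands: an i7-965 exposes eight hardware threads, and 14% utilization is roughly one of them.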
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
So you're saying that Nvidia, a GPU vendor, should invest millions in developing an SSE/AVX version of PhysX? Yeah, makes absolute sense.

The claim that you just have to recompile the source code to use SSE and get 2-4x performance gains is ridiculous. Ever looked at any code that seriously uses SSE/x87? That's a lot more work than just changing a compile switch. Also, using open source physics libraries with both x87 and SSE support shows that the difference is far lower than the numbers claimed there.
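
To illustrate what that rework actually looks like, here's a toy sketch in C (my own example, nothing to do with PhysX's real source): the scalar loop is the shape an x87 build effectively executes, while the SSE version has to be restructured by hand around 4-wide registers, alignment, and data layout.

Code:
#include <xmmintrin.h>   /* SSE intrinsics */

/* Scalar version: one float at a time, the shape an x87 build runs. */
void scale_add_scalar(float *out, const float *a, const float *b,
                      float k, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] * k + b[i];
}

/* SSE version: rewritten by hand to process 4 floats per iteration.
   This toy assumes n is a multiple of 4 and 16-byte-aligned pointers;
   real code also needs tail handling and alignment checks. */
void scale_add_sse(float *out, const float *a, const float *b,
                   float k, int n)
{
    __m128 vk = _mm_set1_ps(k);
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vk), vb));
    }
}

And that's the trivial, friendly case; a real solver has gathers, branches, and data layout problems that don't vectorize this neatly. Nobody gets 2-4x from flipping a compiler flag.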
 

mosox

Senior member
Oct 22, 2010
434
0
0
So you're saying that Nvidia, a GPU vendor, should invest millions in developing an SSE/AVX version of PhysX? Yeah, makes absolute sense.

No, I claimed that they as a GPU vendor can do whatever they want, while I as a GPU buyer can do the same. I can do it because I bought a rebranded card not knowing it was the same card I already had in my PC, or because I know that open standards are to my benefit.
 

-Slacker-

Golden Member
Feb 24, 2010
1,563
0
76
As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here's what I saw, in miniature: Ok, so it's hard to see, but Task Manager is showing CPU utilization of 14%, which means the game—and Nvidia's purportedly multithreaded PhysX solver—is making use of just over one of our Core i7-965 Extreme's eight front-ends and less than one of its four cores. I'd say that in this situation, failing to make use of the CPU power available amounts to sabotaging performance on your competition's hardware.


The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers though, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.

Abffffwhaaaaaaaaaat?

Is this @$%& true??
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
What's with everyone and PhysX? It wasn't even worth the $15 I paid for a 9600GT. I thought I would give it a try before I bash it; I can happily say I can continue to bash it. The coat in Mafia 2 was nice, but I could hack that to run on my Phenom X4 and performance wasn't any worse.

They can cripple it or not bother developing it for the CPU if they want; I won't be losing any sleep over it. CryTek makes a much better physics engine and so does Havok. Even if the particles don't stay there forever, or there are only 10,000 and not 100,000 particles, I can't even see that many.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
The new Batman: Arkham City might fuel another PhysX 'war'.
It's not clear whether it's being used, but I believe it is.

Hopefully AMD wakes up and gets AA working in time for launch; otherwise it will make them look really bad. Again.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Hopefully AMD wakes up and gets AA working in time for launch; otherwise it will make them look really bad. Again.

Still repeating the same old blatant lie. Anti-aliasing in Batman: Arkham Asylum has always worked with ATi cards since it launched; I proved it myself. You just had to force it through the CCC instead of the game's control panel.
 
Last edited:

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Or EIDOS could just update the engine to something more modern that has AA support out of the box.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Yes, there is little information about the game engine or multiplayer.
Just that the game is going to 'rock' and have a much larger world and environment.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
So you're saying that Nvidia, a GPU vendor, should invest millions in developing an SSE/AVX version of PhysX? Yeah, makes absolute sense.

Of course, but these things take time to evolve. In my mind, nVidia would desire to have the most impressive anything when it comes to physics, whether from the CPU or the GPU. Most titles take advantage of the CPU; why wouldn't nVidia desire to have the tools to try to improve multi-core?
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
Nvidia bought AGEIA only to kill it. Then take a look at this:

http://techreport.com/articles.x/17618/13

Also

[My emphasis]

http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143&p=5

Bottom line, Nvidia is fighting tooth and nail any open solutions and this continues with the 3D technology.

This was proven to be wrong long ago. Recompiling for SSE did not provide 2-4x performance increases. Unfortunately, when shown this proof, Real World Tech simply deleted the threads about it, which IMO destroyed their credibility.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
This was proven to be wrong long ago. Recompiling for SSE did not provide 2-4x performance increases. Unfortunately, when shown this proof, Real World Tech simply deleted the threads about it, which IMO destroyed their credibility.

Even Intel admits that an old GTX280 is up to 14 times faster than an i7.

http://www.pcworld.com/article/199758/intel_2yearold_nvidia_gpu_outperforms_32ghz_core_i7.html

Of course why anyone thinks GPU physics should be made to run on a CPU is beyond me.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Even Intel admits that an old GTX280 is up to 14 times faster than an i7.

http://www.pcworld.com/article/199758/intel_2yearold_nvidia_gpu_outperforms_32ghz_core_i7.html

Of course why anyone thinks GPU physics should be made to run on a CPU is beyond me.

14 times faster under certain circumstances; easy to omit that, right?

Which means that heavily parallel code with no branches or dependencies will run like that, like graphics, but with serial, branchy, or dependent code the gap is much narrower.

I doubt that the GTX 280 can run an OS. :awe:
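
A toy contrast, since people keep talking past each other (my own example in C, not from the Intel paper): the first loop is the graphics-shaped, embarrassingly parallel work where a GPU posts numbers like 14x, while the second has a serial dependency that extra threads cannot help with at all.

Code:
/* Embarrassingly parallel: every element is independent, so the work
   spreads across hundreds of GPU ALUs (or CPU SIMD lanes) trivially. */
void saxpy(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Serial dependency: each step needs the previous result, so only the
   speed of a single core matters; a high-clocked CPU wins here. */
float newton_sqrt2(float x0, int steps)
{
    float x = x0;
    for (int i = 0; i < steps; i++)
        x = 0.5f * (x + 2.0f / x);     /* Newton's iteration for sqrt(2) */
    return x;
}

Branchy, dependent code like the second loop is exactly what an OS is full of.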
 
Last edited:

mosox

Senior member
Oct 22, 2010
434
0
0
This was proven to be wrong long ago. Recompiling for SSE did not provide 2-4x performance increases. Unfortunately, when shown this proof, Real World Tech simply deleted the threads about it, which IMO destroyed their credibility.

Anyone can see the performance increase here:

http://www.youtube.com/watch?v=2DwOtw09oo8&feature=player_embedded

At around 3:30 he enables multi-core support. The latest version of FluidMark allows that.

http://www.ozone3d.net/benchmarks/physx-fluidmark/

It seems only 32-bit OSes support it.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
The 2900 is one of the worst GPU launches in history. In history. As for competing with the 8800GTS....

http://techreport.com/articles.x/12458/16

So it matches a severely disabled G80.... score! Of course the 8800GTX destroyed it. At least the FX series recovered and was a lot closer to the 9xxx series (even beating it in some games). The R600 was an absolute failure. It forced AMD to accept the role of 2nd place and sell their cards at a steep discount.

2900 = worst GPU ever.

don't tell that to apoppin!

The 2900 launch did suck, though it was greatly influenced by the AMD acquisition. If it had been up to ATI, they would have put a Dustbuster on it and cranked the clocks up to ridiculous levels on a few cherry-picked chips just to get the title back.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
This was proven to be wrong long ago. Recompiling for SSE did not provide 2-4x performance increases. Unfortunately, when shown this proof, Real World Tech simply deleted the threads about it, which IMO destroyed their credibility.

Kinda makes ya proud to be an AT'er, don't it? When Anand screws up he owns up to it, and despite the vitriol some would have to say about him and his website, he did not stand in the way of any poster or thread on the matter (the inclusion of the OC'ed 460 in those AMD GPU reviews that got everyone fired up).
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
Kinda makes ya proud to be an AT'er, don't it? When Anand screws up he owns up to it, and despite the vitriol some would have to say about him and his website, he did not stand in the way of any poster or thread on the matter (the inclusion of the OC'ed 460 in those AMD GPU reviews that got everyone fired up).

Yup, big time.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Anyone can see the performance increase here:

http://www.youtube.com/watch?v=2DwOtw09oo8&feature=player_embedded

At around 3:30 he enables multi-core support.
Could you come back as soon as you understand what x87 and SSE even are? I know it's useless to assume you're able to compile an open source physics library for x87 and SSE and look at the results yourself, but I'm pretty sure we had a thread where the results were published, so just go looking for that, ok?

And don't forget that just recompiling an application for SSE won't do much (hey, who'd know that writing good compilers for vector computations is hard? Oh right, absolutely everyone), and even with working code paths for x87 and SSE the difference between the two is FAR from 2-4x. Just like the article, you've got no idea what you're talking about.

@IDC: Totally. IMHO, if a site can't admit its own failures, it casts an extremely bad picture on it (and let's face it: nobody's perfect, there's always a chance to screw something up, though I still wouldn't count the inclusion of the 460 cards as one; that was totally clear to everyone able to read, whereas the specific article here is just factually wrong).
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
The OP's views, while passionate and full of zest, which is wonderful to me, are a bit one-sided. The key to PhysX from nVidia may be SDK 3.0, and hopefully these kinds of one-sided, vile, venomous views may be curbed a bit.

It doesn't make logical sense to be in the physics business and ignore the CPU. PhysX is a multi-platform, multi-device engine, library, and tool set, and so much more than just GPU physics.
 