ATi 4870/4850 Review Thread


nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: dreddfunk
Keys - you can't 'downplay' CUDA or PhysX, but they certainly aren't of critical interest to the vast majority of 3D gamers at this time, and 3D gamers are the primary market for these cards.

I understand that you and Rollo both have an interest in pointing out the positive aspects of NVIDIA hardware, and that's valid, but to those of us with no dogs in the fight it seems like you're telescoping in on trees rather than looking at the forest.

[edited for clarity]

Depends how you look at it.

For non-PhysX, non-multi-card use, I would choose a 4870 over a GTX260 right now based on the $100 price difference and similar performance.

OTOH, $100 isn't much money in a world where I have to pay that to fill up my truck, and I do like multi-card. Not to mention that if a guy is looking to keep his card a year and a half, there are a whole lot of PhysX games coming.

So to me it's kind of a wash, buyer has to pick what matters to them. Physics obviously can make a difference in immersion level.

For the first time in a long time, I don't think you can go too far wrong either way, especially short term.

 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
Originally posted by: keysplayr2003
Originally posted by: JPB
Originally posted by: omber
I recall news a few days ago that the nVidia PhysX middleware was hacked to work on ATI cards with equal or better results (I think this was on Slashdot). If this is so, that advantage could be lost quickly.

My question: does anyone have an idea of what kind of power supply we should look at for a dual HD 4870 setup?

Here you go. Direct from AMD

Thanks for the PSU link?

Try this one
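
For a rough sanity check on top of that list (my own back-of-envelope numbers, assuming roughly 160W board power per 4870 under load, a ~125W quad core, and ~75W for the rest of the system):

2 x 160W (GPUs) + 125W (CPU) + 75W (board/RAM/drives/fans) = ~520W DC

So a quality 650-750W unit with four 6-pin PCIe connectors should leave comfortable headroom for a 4870 CrossFire box.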
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: JPB
Originally posted by: keysplayr2003
Originally posted by: JPB
Originally posted by: omber
I recall news a few days ago that the nVidia PhysX middleware was hacked to work on ATI cards with equal or better results (I think this was on Slashdot). If this is so, that advantage could be lost quickly.

My question: does anyone have an idea of what kind of power supply we should look at for a dual HD 4870 setup?

Here you go. Direct from AMD

Thanks for the PSU link?

Try this one

:thumbsup:
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Keys - as my post in reply to Rollo indicates, I have no problem with people pointing out facts, least of all you. But, as I discovered early in my graduate career, selective pointing out of facts is used to mislead people as often as it is to *help* people make informed decisions. So I was merely pointing out that incessantly telescoping in on particular facts that make the GTX260 look good can be every bit as misleading as ignoring those facts altogether. Point them out, by all means, but be honest about where those 'trees' stand in relation to the 'forest'.

With regard to one of those 'trees', PhysX: of course it's been of general interest to gamers for quite some time. But we've yet to see how the on-GPU implementation will impact gameplay. As someone previously mentioned, GPU resources are almost always stretched to the maximum by the continual evolution of games. Will the GTX260 have enough excess GPU power to implement PhysX features without severely impacting other aspects of gameplay? If it does, then I'll be in complete and total agreement with you.

Until we can see that verified in the real world, however, it remains only an advantage in theory. The 8800GTS 640MB that you benchmarked so thoroughly is DX10 compatible, but that doesn't mean it can run Crysis in DX10 mode effectively. Touting DX10 functionality before proving that such functionality could be accessed with reasonable performance made as little sense to me then as touting PhysX compatibility does now. Like saying, "Our cars are equipped with rockets and can jump over traffic jams," without also saying, "and land you safely." Features are only useful when their implementations bring value to the user. PhysX's implementation of physics has yet to demonstrate that value to me.

That said, realistic physics, when it arrives in a way that also allows good general 3D performance, will be *huge* for the industry. You've got my agreement there.

Right now, as I've said, the benefit of having it on the GTX260 remains in doubt.


CUDA is very important to some people--and not at all important to many others. It's like having a pickup truck: some people would use the bed, others wouldn't. Depending on what your needs are, CUDA could be of critical importance or not at all. But let's be realistic about the size of the relative markets. There is a reason NVIDIA markets the GTX line first as GPUs and not as GPGPUs.
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: keysplayr2003
Originally posted by: JPB
Originally posted by: omber
I recall news a few days ago that the nVidia PhysX middleware was hacked to work on ATI cards with equal or better results (I think this was on Slashdot). If this is so, that advantage could be lost quickly.

My question: does anyone have an idea of what kind of power supply we should look at for a dual HD 4870 setup?

Here you go. Direct from AMD

Thanks for the PSU link?

Looks like a list of certified PSUs...

 

praesto

Member
Jan 29, 2007
83
0
0
Originally posted by: bryanW1995
bfg, check my signature. the first line in particular is quite appropriate to this situation, no?

Seriously, what the hell is up with all this personal hounding from you? If you want to ignore him, then do so, but you don't have to make such a big fuss about it. If nothing else, I don't see chizow posting reply after reply about something completely irrelevant to the rest of us. Ignore him if you want, but at least keep it out of the thread, because it really doesn't help keep the focus on the subject of this thread.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
Sorry, no, it can't. You keep repeating this but repeating it doesn't make it right. The driver can't remove a game framerate cap. If the game is limiting frames every 5 ms the driver can't go in there and adjust the game's tick loop.

About the only thing the driver could do is ignore vsync in which case the driver is broken; but then reviewers shouldn't be running with vsync in the first place as it makes their tests broken.
Broken Vsync doesn't explain FPS capped at 60 for single-GPU but only slightly higher for multi-GPU. Poor CF/SLI scaling, maybe, but that's highly unlikely given the differences (or lack thereof) between 1920 and 1680 for both single and multi-GPU.
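
Just to illustrate what a game-side cap looks like (a minimal sketch of my own, not any actual engine's code): if the game's own loop sleeps until the next tick, nothing in the driver can push the frame rate past it.

#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto tick = std::chrono::microseconds(16667); // ~60 FPS game-side cap
    auto next = clock::now();
    for (int frame = 0; frame < 600; ++frame) {         // ~10 seconds of "gameplay"
        // UpdateGame(); RenderFrame();                 // placeholders for the real work
        next += tick;
        std::this_thread::sleep_until(next);            // wait out the rest of the tick
    }
}

A driver can ignore vsync, but it can't remove a sleep like that from the game's own loop; the question is which games in AT's review are actually doing something like this.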

I provided benchmarks that demonstrated it didn't affect any of the current round of benchmarks and I'm still waiting for your evidence otherwise. Until you provide such evidence please do not bring up this issue again or I will assume you are trolling.
Benchmarks that were never in question to begin with, yet you still haven't answered whether the option needs to be turned off in the .ini or whether it was patched out. Clearly you're being evasive, so I'm going to assume that it is still on by default and you were purposefully being deceptive by trying to claim the feature did not exist or was not a problem. Luckily users CAN turn the feature off in UE3.0 games, but other games might have similar frame caps that are less obvious or can't be turned off.

My first mention of Bioshock had a disclaimer saying I was pretty sure there was no frame smoothing. I clearly identified the difference with the two other UE3.0 games I own, GoW and Mass Effect, which do have it enabled by default.
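
For anyone who wants to check their own install: in the UE3 titles I've looked at, the frame smoothing toggle lives in the engine .ini (UTEngine.ini for UT3; other UE3 games use their own equivalent file), typically under something like the following -- exact section names and defaults can vary per title:

[Engine.GameEngine]
; bSmoothFrameRate defaults to TRUE in most UE3 games; set it FALSE when benchmarking
bSmoothFrameRate=FALSE
MinSmoothedFrameRate=22
MaxSmoothedFrameRate=62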

So what if it averages 72 FPS? That's the game's cap according to Derek.

60 FPS cap? What the hell are you talking about? You just quoted Derek saying there is a 72 FPS cap!

You're saying what exactly? That CF is breaking your fictional 60 FPS cap by getting 72 FPS (which just happens to be the actual cap Derek described), thereby proving multi-GPU can "work around" game caps?

LMFAO.
72 FPS isn't a universal frame cap, though; I average much higher on a similarly clocked quad core and a GTX 280 with 2xAA at 1920. Yet no single GPU was able to average over 60 FPS at 1920x1200, even after dropping down to 1680. You don't think it's obvious there is a frame cap for single GPU that multi-GPU can exceed? There is no Vsync option in Witcher, but the game does dynamically change options based on system specs (like available AA level).

But there are plenty of examples where they aren't hitting a CPU cap but you're simply making sweeping generalizations. Take the CoD 4 results: the 4870 provides 90% of the GTX280's performance at less than half the cost and half the VRAM. I don't know how anyone can claim that is a CPU limitation.
How am I making sweeping generalizations? I'm pointing to specific instances and benchmarks that clearly show flat scaling between resolutions, frame caps across different configs and no scaling with multi-GPU. You're the only one who is pointing out instances where there is clearly no frame capping or CPU bottlenecking occurring as strawman examples. Have I mentioned COD4 once as a bottlenecked situation? No. Have I mentioned 2560 with 8xAA? LMAO. No.

I agree, but that was never under contention.
But even in that example, you can see that the fastest single GPUs like the GTX 280 and 4870 are very close to the multi-GPU solutions that do not scale well. Again, the point is that these are averages: the single-GPU cards are bottlenecked as well, with their maximum limited, and they only average lower FPS because they spend more time at lower frame rates rendering more intensive frames than the multi-GPU solutions do.
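
To put rough numbers on that (a made-up but illustrative run): say a single GPU spends 10 seconds pinned at a 60 FPS cap and 10 seconds at 45 FPS in heavier scenes. That's 600 + 450 = 1050 frames over 20 seconds, or a 52.5 FPS average -- below the cap even though the card was capped for half the run. A multi-GPU setup that isn't held to 60 in the light sections pulls its average above 60, which is the pattern I'm pointing at in those charts.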

Where? Your link http://www.anandtech.com/video/showdoc.aspx?i=3341&p=21 shows no such thing. Furthermore since when is Oblivion capped at 60 FPS?
You clearly need to look harder. In Witcher up to 1920, no single GPU averages more than 60 FPS except the GTX 280 at 1680x1050 at 60.6 (again, FRAPS sync issues no doubt). Without knowing exactly what area Derek and Anand tested, I can say it's most likely a frame rate lock at 60 FPS, which would imply Vsync, except Witcher has no Vsync option. All the other cards at that resolution are dropping below 60 FPS for various durations, which brings their average below 60 FPS. The multi-GPU solutions do not exhibit this behavior, with every single one exceeding the 60 FPS average except the 3870X2 at 1680. The same is true for Assassin's Creed, although the difference is less pronounced. It can't be explained simply as multi-GPU ignoring Vsync in either case, since the single-GPU results are far too close to a 60 FPS cap while multi-GPU is able to exceed it.

And I don't know when Oblivion became frame capped as I don't own it, but it clearly is capped in AT's review.

Based on your response to what you quoted I don't think you understand what you quoted. I also don't think you understand what micro-stutter is or even how multi-GPU scales.

Pre-rendered frames is purely a function of the driver. If the game tick is limiting frames to begin with pre-rendering won't cause the tick to be lowered.
If the game is limiting frames without a true CPU bottleneck, then I don't see why the driver couldn't queue more pre-rendered frames.

You've made one sweeping generalization and each time we have this discussion you include more and more games. CPU bottlenecking? Frame capping? In Oblivion at 2560x1600? Please tell me you're joking.
When did I say anything about 2560x1600? Are you denying that there is frame capping and/or CPU bottlenecking in Witcher, Oblivion, Crysis and Assassin's Creed at resolutions up to 1920, based on the data in Anandtech's review? That is 4 out of 7 games tested in 2 out of 3 resolutions; you can either acknowledge it skews actual performance or you can focus on the other 3 out of 7 games and the resolutions that clearly aren't CPU bottlenecked.

Also I just thought you were telling us Oblivion has a 60 FPS cap but now it's CPU bottlenecking? And in the next post no doubt you'll deny you ever mentioned Oblivion and claim I'm making up scenarios. :roll:
I don't own the game and I wasn't there when it was tested, so I can't say for sure, but it's obviously one or the other. You can either acknowledge it or you can focus on semantics. I'm sure you'll focus on the latter. That's what you do in the absence of substance.

I provided several benchmarks that debunked your claim and until you provide relevant benchmarks to the contrary do not bring up this topic again.
Debunked my claim of what? I never explicitly stated UT3 as an example of frame capping or CPU bottlenecking; I merely used it as an example that games are employing frame capping methods that the end-user may or may not know about. I asked if it was disabled by default, and you still have not answered, so I'm going to assume that you must still disable it in the .INI. I also provided two other popular titles that use some type of frame capping or performance smoothing based on CPU speed, namely Witcher and Assassin's Creed.

No, I think your appraisal of the situation is overblown. I also think you constantly shift the goal-posts from vsync to capping to CPU limitations to multi-GPU whenever it suits your agenda, without ever actually providing any evidence to back your claims.
So do you think AT's review of the GTX 280, 4870 and other multi-GPU configurations is an accurate portrayal of performance or not? It's plainly obvious that 4 of the 7 titles are bottlenecked or capped in 2 out of the 3 resolutions tested. The cause, be it Vsync, CPU, GPU, obscure settings, etc., is irrelevant. Is it accurate based on the data?

It might be, but then the situations you describe aren't the ones people are drawing inferences from.
Judging from the 200+ responses to the AT 4870 article, I would say they are.....

Yep, absolutely. This is what you said:

Well, I'd say it's a bit premature to say GT200 is a flop; if you look at this latest round of reviews I think you'll see that there's quite a bit of CPU bottlenecking and frame capping going on, even at higher resolutions like 16x12 and 19x12. That's not to say 4870 isn't a great part, it is, but clearly a large part of the reason it's so close to GTX 280 is because of CPU bottlenecking.

For example, quoted from the AT article:
See the highlight, your claim of CPU bottlenecking? Now let's see what you quoted from Anandtech to "back" that claim:

Performance of the Radeon HD 4870 continues to be strong, but because of the frame rate cap we're not able to see if the GTX 280 could stretch its legs further and eventually outperform the 4870. In actual gameplay, the 4870 and GTX 280 appear to be equals.
Nowhere in that quote does it mention CPU bottlenecking.
Rofl, right, I forgot to mention frame rate capping in the 2nd instance. I guess next time I should properly reference everything and add my sources in an appendix as well, assuming someone focused more on semantics than substance, such as yourself, will read it? LMAO.

Now let's look at the missing section of the quote, the one you conveniently left off when using it as "evidence" for your claims of CPU bottlenecking:

Assassin's Creed is capped at near 60 fps, which is why we see most cards reaching but not significantly exceeding that marker.
This is why people don't take you seriously. You chop and change whenever it suits your agenda, misquote, and then claim you never made such claims when called out.
Rofl, and you think you come off any better when you focus on semantics instead of substance? I used the terms interchangeably because it is unclear which limit is at play, but there's obviously something going on, which you still fail to acknowledge at all. Jarred also thought there was CPU bottlenecking occurring, so I guess he's misquoting AT as well?

Here's the quote from you: "Case in point is the GTX+ that needed a clock speed boost to push 4850 back into mediocrity."

You're at the stage now of denying things that you said in the past.
Nice out-of-context quote. Its mediocrity wasn't in comparison to the paper-launched GTX+; it was in comparison to the 7-8 month old G92 parts. Mentioning the GTX+ was to illustrate that a simple clock speed bump on old parts was all it took to show how mediocre the 4850 actually was.

Actually it looks like half of those quotes refute what you've been claiming. You were claiming game caps in Assassin's Creed when Jarred points out CPU limitations. You were claiming Anandtech don't force off vsync when in actual fact they don't force it off in the driver (but do so in the game; again, selective quoting on your part). I could go on but honestly it's a waste of time with you.
What? Jarred wrote that original AC article, and on further inspection he does run into frame caps/CPU limits as well. The FPS averages higher than 60 are for lower settings. But yes, Jarred's comments are proof that you will focus on semantics when your arguments lack substance.

And when did I say AT doesn't turn off Vsync in-game? ROFL. I pointed out why they don't force it off in the driver, going back to the Crysis Tri-SLI review where Derek found forcing it off in the driver led to worse performance. Are you saying AT is so incompetent that a distinction would need to be made between in-game and driver Vsync? LMAO. Sorry, if I had to choose whom to place confidence in, it'd be Anand and Derek over you.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Originally posted by: nRollo
Originally posted by: bryanW1995
<---ignoring chizow. hmmm, I might make that into my sig. Seriously, how many of us other than bfg are even trying to reason with chizow any more? A lot of people wanted 3870 to beat 8800gt but they went away as more and more benchies proved the g92 superiority. Guess what? A lot of people wanted gt200 to beat r(v)7xx, but the value is so much greater with ati this time around that only the blind fanboys are even debating the issue at this point. Nvidia will be back, they'll probably stomp the crap out of ati next round or at least be much more competitive, but for now ati is the card to own. period.

Disagree.

While for gaming the 4870 is a great deal at $100 less MSRP than the GTX260, it lacks:

1. 384MB of RAM the GTX260 has.

2. PhysX that will be integrated into 16 games this year, double that next.

3. MultiGPU flexibility. When AT's review says they're adding support for popular games released last Fall after launch, you have to wonder about support for games that aren't reviewed.

If AMD is truly going to go the multi-GPU route for its high end parts, it needs to enable more consistent support for CF across the board - regardless of whether or not we feature those games in our reviews.

They have since started talking about how the era of the large, "monolithic" GPU is over. I think that's hogwash...... Big chips don't suffer from the quirks of multi-GPU implementations, which never seem to have profiles for newly released games just as you'd want to be playing them......

4. CUDA

The 4870s don't really even compete with GTX280s for the most part.


That said, the 48XX series are a big step forward for AMD and a good value. (just not a "ZOMG! these are teh only cards that exist!" type value you're asserting)

Disagree.

For gaming the GTX260 is a bad deal at $100 more MSRP than the HD4870, because:

1. Even though it has 384MB more RAM than the 4870, it still performs on average 5% slower.

2. ATI signed with Havok.
45 PC titles already released, 18 more coming. Over 70 developers use Havok.

3. PhysX that will be integrated into 16 games this year, runs on ATI cards.

4. MultiGPU future. Nvidia pissed Intel off and didn't allow them to build SLI chipsets, so only CrossFire will run on future Intel motherboards with next-generation CPUs like Nehalem. Most hardcore gamers will be using Intel Nehalem CPUs next year since the performance will be so much higher than Core 2 or anything AMD may have at the time.

5. AMD Brook+

The GTX260 doesn't really even compete with the HD4870 for the most part, especially when you consider that it costs $100 more and offers less performance.

Anand:"For now, the Radeon HD 4870 and 4850 are both solid values and cards we would absolutely recommend to readers looking for hardware at the $200 and $300 price points. The fact of the matter is that by NVIDIA's standards, the 4870 should be priced at $400 and the 4850 should be around $250. You can either look at it as AMD giving you a bargain or NVIDIA charging too much"
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,230
2
0
Well, Diablo 3 was just announced and it's a Havok game, so more points for the 4870! Not that it will come out during the 48xx series' lifespan, but it seems that the deal Blizzard made with AMD is already at work... Wouldn't be surprised to see D3 being DX 10.1 too
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: JPB
I just snagged an HD 4870 from Newegg. The Sapphire model.

As soon as I ordered it, and got the confirmation, it went out of stock again. They must have had ONE :shocked:

I finally get to go back to ATI

But I can say this much: a GTX 260 will be going in my wife's computer. Both my PC and hers will have IDENTICAL specs.

Should make for some nice benchmark comparisons :thumbsup:

Once both cards are received, here is the list of games to be benchmarked.

Oblivion
Prey
Area 51
Call Of Duty 2
Call Of Duty 4
Half Life 2
Orange Box
Battlefield 2142
Doom 3
Quake 4
S.T.A.L.K.E.R
Need For Speed Carbon
Need For Speed Most Wanted
Bioshock
Frontlines Fuel Of War
Unreal 3
Age Of Empires 3
Splinter Cell Chaos Theory

I know a lot of these games are *older*, but since most review sites only test *newer* games, I figured this would help out the guys/gals who play these games

wait until 4870x2 comes out, you should see a nice price drop on gtx 260 by then.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I am glad you see the way I go about this, and you are correct, I try to only put out info I can back up. When you're me (read: under much scrutiny due to your connections) you have to be pretty careful about what you post, there's never a shortage of people wanting to tell me I'm wrong.

For that matter, I want them to if I am, so I don't mind being held accountable.

Agreed on your points, but I'm a solo act, not a "post like this to help NVIDIA" teacher. Although the thought of "Rollo's Posting Boot Camp" cracks me up.

I think that a better way to put it would be "post like this if you intend to have a reasonable and/or intelligent discussion regarding the merits of a particular point of view". I would cringe if chizow was espousing my point of view b/c I know that he'd end up making us all look like idiots.

but enough about that. as virge quoted the other day: great people talk about ideas, average people talk about things, small people talk about other people.

let's instead talk about nvidia's way forward to compete in the gpu segment against amd's sneak attack. what are your thoughts? Will they be able to significantly increase the clocks on gtx 260/280 by going to 55nm? will that be enough to command the price premium that they need to make $$? how soon will they do a "midrange gt 200" on 55nm to truly take the fight back to amd?
 

dingetje

Member
Nov 12, 2005
187
0
0
Originally posted by: chewietobbacca
Whered you hear the rumor? All that will likely happen is a BIOS fix to enable powerplay, but even drivers might do that.

from this article:

http://techgage.com/article/asus_eah4850_512mb/12

quote:

"This is the reason we don't see pre-overclocked cards right now. I've been told by two companies that they are unsure if they will even be able to release the cards they want to, because the overclocks they want to hit, just can't be done without overheating. <Rumor has it that AMD will be releasing a revision of the core, however, which should allow higher clocks to be put in place. Those might even happen as early as three weeks to a month, so stay tuned."



^ I would be really miffed if I owned a hot-running 4850 rev.1 when that happens

 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: dreddfunk
Keys - as my post in reply to Rollo indicates, I have no problem with people pointing out facts, least of all you. But, as I discovered early in my graduate career, selective pointing out of facts is used to mislead people as often as it is to *help* people make informed decisions. So I was merely pointing out that incessantly telescoping in on particular facts that make the GTX260 look good can be every bit as misleading as ignoring those facts altogether. Point them out, by all means, but be honest about where those 'trees' stand in relation to the 'forest'.

With regard to one of those 'trees', PhysX: of course it's been of general interest to gamers for quite some time. But we've yet to see how the on-GPU implementation will impact gameplay. As someone previously mentioned, GPU resources are almost always stretched to the maximum by the continual evolution of games. Will the GTX260 have enough excess GPU power to implement PhysX features without severely impacting other aspects of gameplay? If it does, then I'll be in complete and total agreement with you.

Until we can see that verified in the real world, however, it remains only an advantage in theory. The 8800GTS 640MB that you benchmarked so thoroughly is DX10 compatible, but that doesn't mean it can run Crysis in DX10 mode effectively. Touting DX10 functionality before proving that such functionality could be accessed with reasonable performance made as little sense to me then as touting PhysX compatibility does now. Like saying, "Our cars are equipped with rockets and can jump over traffic jams," without also saying, "and land you safely." Features are only useful when their implementations bring value to the user. PhysX's implementation of physics has yet to demonstrate that value to me.

That said, realistic physics, when it arrives in a way that also allows good general 3D performance, will be *huge* for the industry. You've got my agreement there.

Right now, as I've said, the benefit of having it on the GTX260 remains in doubt.


CUDA is very important to some people--and not at all important to many others. It's like having a pickup truck: some people would use the bed, others wouldn't. Depending on what your needs are, CUDA could be of critical importance or not at all. But let's be realistic about the size of the relative markets. There is a reason NVIDIA markets the GTX line first as GPUs and not as GPGPUs.

I must say that is a spot-on interpretation of physics and CUDA both. I would rank dx10.1 functionality as greater than physics and cuda put together for gamers. otoh, I would rank cuda as a huge part of nvidia's long-term strategy to unseat intel.

Frankly, I think that nvidia could have very easily gotten more GPU performance out of gt 200 if they had so desired, but they underestimated 48x0's competitiveness. AMD almost certainly encouraged this with disinformation tactics such as the "480 shaders" misinformation and "32 texture units" rumors. AMD has done a great job this round in playing the underdog role, but nvidia isn't going to underestimate them again imho.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: bryanW1995
I am glad you see the way I go about this, and you are correct, I try to only put out info I can back up. When you're me (read: under much scrutiny due to your connections) you have to be pretty careful about what you post, there's never a shortage of people wanting to tell me I'm wrong.

For that matter, I want them to if I am, so I don't mind being held accountable.

Agreed on your points, but I'm a solo act, not a "post like this to help NVIDIA" teacher. Although the thought of "Rollo's Posting Boot Camp" cracks me up.

I think that a better way to put it would be "post like this if you intend to have a reasonable and/or intelligent discussion regarding the merits of a particular point of view". I would cringe if chizow was espousing my point of view b/c I know that he'd end up making us all look like idiots.

but enough about that. as virge quoted the other day: great people talk about ideas, average people talk about things, small people talk about other people.

let's instead talk about nvidia's way forward to compete in the gpu segment against amd's sneak attack. what are your thoughts? Will they be able to significantly increase the clocks on gtx 260/280 by going to 55nm? will that be enough to command the price premium that they need to make $$? how soon will they do a "midrange gt 200" on 55nm to truly take the fight back to amd?

Yes! Knew that college stuff would pay off in the long run!

In any case, tough to speculate on the rest, not an engineer. We've seen what the 9800GTX+ increases are with the drop to 55nm, but the GTX2XX are much more complex designs, so my guess is they're not going to drop to 55nm and say "Haha! Now we crank the core clocks 250MHz FTW!". Unless they're going to have the GTX290 Peltier Edition.

 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: nRollo
Originally posted by: bryanW1995
I am glad you see the way I go about this, and you are correct, I try to only put out info I can back up. When you're me (read: under much scrutiny due to your connections) you have to be pretty careful about what you post, there's never a shortage of people wanting to tell me I'm wrong.

For that matter, I want them to if I am, so I don't mind being held accountable.

Agreed on your points, but I'm a solo act, not a "post like this to help NVIDIA" teacher. Although the thought of "Rollo's Posting Boot Camp" cracks me up.

I think that a better way to put it would be "post like this if you intend to have a reasonable and/or intelligent discussion regarding the merits of a particular point of view". I would cringe if chizow was espousing my point of view b/c I know that he'd end up making us all look like idiots.

but enough about that. as virge quoted the other day: great people talk about ideas, average people talk about things, small people talk about other people.

let's instead talk about nvidia's way forward to compete in the gpu segment against amd's sneak attack. what are your thoughts? Will they be able to significantly increase the clocks on gtx 260/280 by going to 55nm? will that be enough to command the price premium that they need to make $$? how soon will they do a "midrange gt 200" on 55nm to truly take the fight back to amd?

Yes! Knew that college stuff would pay off in the long run!

In any case, tough to speculate on the rest, not an engineer. We've seen what the 9800GTX+ increases are with the drop to 55nm, but the GTX2XX are much more complex designs, so my guess is they're not going to drop to 55nm and say "Haha! Now we crank the core clocks 250MHz FTW!". Unless they're going to have the GTX290 Peltier Edition.

If a GT200b could hit similar clocks to G92, then it would be fine. The problem right now is that GT200 is incapable of clocking like G92 could... in fact, when it comes to the shaders, it is worse than G80 on 90nm.

nVidia needs to get their shader clocks up.... I can only hit a 1458MHz shader domain on my card, whereas I was able to hit 1891MHz with my 8800GTS G92.

I really think nVidia just missed the target with GT200 clocks, that is the problem that we are seeing. Once they get clocks where they should be, whether by a 55nm shrink or by another 65nm revision, then GT200 should do well.
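
Rough numbers to show why the shader clock matters so much here (using NVIDIA's usual peak figure of 3 FLOPs per SP per clock, MAD + MUL):

GTX 280 stock: 240 SPs x 3 x 1.296 GHz = ~933 GFLOPS
GT200 at my 1458 MHz ceiling: 240 x 3 x 1.458 = ~1050 GFLOPS
G92 8800 GTS at 1891 MHz: 128 x 3 x 1.891 = ~726 GFLOPS

So even with the lower clock ceiling, the wider GT200 shader array stays ahead on paper, but every 100 MHz on the shader domain is worth roughly 72 GFLOPS on GT200 versus about 38 on G92, which is why a 55nm respin that restores G92-like clocks would pay off so much.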
 

Hunt3rj2

Member
Jun 23, 2008
84
0
0
Originally posted by: nRollo
In any case, tough to speculate on the rest, not an engineer. We've seen what the 9800GTX+ increases are with the drop to 55nm, but the GTX2XX are much more complex designs, so my guess is they're not going to drop to 55nm and say "Haha! Now we crank the core clocks 250MHz FTW!". Unless they're going to have the GTX290 Peltier Edition.

Actually it'd be the GTX290 Liquid Nitrogen edition.


All joking aside, it really looks like Nvidia needs to revise the GTX 200 cores to have higher clocks and to be more efficient. I don't know many people who would like a 200 dollar power bill from leaving their computer on all day.


Wait.. is the ATI 48x0 a revision of R600 or is it R700? I'm led to believe that it is R700 due to the core being called "RV770". I intend to get a 4850 with a VF1000 when they get pretty cheap and plentiful.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Hunt3rj2
Originally posted by: nRollo
In any case, tough to speculate on the rest, not an engineer. We've seen what the 9800GTX+ increases are with the drop to 55nm, but the GTX2XX are much more complex designs, so my guess is they're not going to drop to 55nm and say "Haha! Now we crank the core clocks 250MHz FTW!". Unless they're going to have the GTX290 Peltier Edition.

Actually it'd be the GTX290 Liquid Nitrogen edition.


All joking aside, it really looks like Nvidia needs to revise the GTX 200 cores to have higher clocks and to be more efficient. I don't know many people who would like a 200 dollar power bill from leaving their computer on all day.


Wait.. is the ATI 48x0 a revision of R600 or is it R700? I'm led to believe that it is R700 due to the core being called "RV770". I intend to get a 4850 with a VF1000 when they get pretty cheap and plentiful.

The GT200s idle at like 25W, dude.

Linkified
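
Quick math on that power bill claim (assuming roughly $0.10/kWh): 25W idle x 24h x 365 days = ~219 kWh, or about $22 a year. Even pegging a GTX 280 at its ~236W TDP around the clock only comes to roughly 2,070 kWh, about $207 a year -- so the $200 bill only happens if the card is under full 3D load 24/7, not from leaving the PC idling.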
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
Why are the Nvidia focus group members focusing so much of their energy on the 4870/4850 review thread?

Anyways, with games like:
Fallout 3
Alan Wake
StarCraft II
Diablo III

supporting Havok, I think Intel has won this one, because these are games that I am really amped to play.
 

chewietobbacca

Senior member
Jun 10, 2007
291
0
0
D3 is also supporting DX10.1.... and SC2 supposedly might also.

And yeah, Fallout 3, SC2, and D3 are going to sell millions if not tens of millions of copies each... ATI and Intel must be very happy right now.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Trust me, SC2 alone will sell at least 10 million; I hope they release it this year. The original was my favorite strategy game of all time.

About the Havok physics: I don't think it's possible to run all the physics stuff on a GPU, but at least the parts that can will run very well on RV7xx GPUs. Hurry up, ATI, and release the drivers.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Kuzi
Trust me, SC2 alone will sell at least 10 million; I hope they release it this year. The original was my favorite strategy game of all time.

About the Havok physics: I don't think it's possible to run all the physics stuff on a GPU, but at least the parts that can will run very well on RV7xx GPUs. Hurry up, ATI, and release the drivers.

Havok runs on the CPU only. At least right now.
 

KeithTalent

Elite Member | Administrator | No Lifer
Administrator
Nov 30, 2005
50,235
117
116
Originally posted by: JPB
Originally posted by: pcgamer321
How much would a 2900xt sell for?

Well, in my opinion, since an HD 3870 is around $155 on average right now, and it would be used... about $120.

Of course, I'd be willing to give you that for it :thumbsup:

Funny. I was thinking $200 for my pair. Considering how amazing the prices are for these new cards I'd be very happy to get that.

I swear I paid close to $500 for my X1900XT only a couple of years back and I never imagined cards would actually go down in price over the years. Crazy.

KT
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
Well, I agree that the thread needs to be kept clean (i.e. re 4850/4870), but since it's brought up I'd like to clear up a few things about PhysX and others.

1. AFAIK, PhysX isn't limited to GT200 based cards? It should work with G80/G92 as well.
2. So does CUDA.
3. How does PhysX work, other than in 3DMark? Does it work off some unused part of the GPU silicon? Does it sacrifice some of the GPU's main purpose (i.e. rendering) to work simultaneously? Or does it require a dedicated (separated) GPU?
 