ATi 4870 X2 (R700) thread


taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: nRollo
Any way you cut it, this card will be the fastest single-slot card at launch, probably by a good margin, and that has value in itself. Microstutter, RAM use, etc. can't change that.

People with single slot motherboards need options for performance as well.

True, assuming it is $500. My point about RAM usage and all the rest is that I am beginning to doubt the $500 price tag.


The $260 GTX 260 is a very interesting price; at that point it becomes more attractive than the 4870.

The only problem with that is that it is after MIR; before the rebate it is the same price as before. MIRs do pay, but it means about an hour's worth of work filling out the forms, printing them, sending them, tracking/calling, and eventually depositing the money. And I am lazy.

P.S. Where did you see a GTX 280 for under $500?
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Originally posted by: nRollo

Hmm.

I don't see NVIDIA's ass as kicked.

dreddfunk: [tongue-in-cheek] Too busy admiring NVIDIA's pretty face to see what's happening behind, eh?

...

Where I'm sitting I doubt they'll even lose much market share with current bang for buck.

dreddfunk: [tongue-in-cheek] I'd say that from 'where you're sitting' NVIDIA *can't* lose any market share. Your 'seat' is getting free parts.




Cheers as always!

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: nRollo
Originally posted by: AmberClad
Originally posted by: nRollo
With GTX260s at $260 and GTX280s at $460, any excitement ATi had going for it is largely over.
If you're talking about that MSI fire sale that a couple of other ATers and I discussed at length a few days ago, both of them went back up in price.

Maybe some of the other manufacturers will follow suit -- those prices seemed to be pretty attractive.

Those prices are still in stock and in effect at ZZF and Newegg.

Competition brought better deals for all.

$460 for a GTX 280? Where?


Originally posted by: dreddfunk
Seriously, though, at least both companies are executing for a change.

I agree, the price drops and the performance boosts are phenomenal. I am salivating here.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: dreddfunk
Originally posted by: nRollo

Hmm.

I don't see NVIDIA's ass as kicked.

dreddfunk: [tongue-in-cheek] Too busy admiring NVIDIA's pretty face to see what's happening behind, eh?

...

Where I'm sitting I doubt they'll even lose much market share with current bang for buck.

dreddfunk: [tongue-in-cheek] I'd say that from 'where you're sitting' NVIDIA *can't* lose any market share. Your 'seat' is getting free parts.




Cheers as always!


I don't want to go any further off topic in this thread; I'd be happy to discuss this with you in another if you'd like to start one.

I'm interested to see the 4870X2, and, as taltamir points out, the facts behind the pre-launch info.

Worst case scenario (another 3870X2, just with 4870s), it will still be a high-performing part and give people more options.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
True, even if it doesn't deliver anything that wasn't already there in the 3870X2, it is still two RV770 cores in one slot. So we can expect it to be the most power per single slot out there for a while (until nvidia gets a 55nm G200 out, or even later).
 

nib95

Senior member
Jan 31, 2006
997
0
0
Can I just ask, why is Rollo allowed to post when he is an Nvidia focus group member and does nothing but cheerlead for Nvidia?

Can anyone really expect an unbiased response from him when he gets the kind of gear (free) he does from Nvidia... seriously...
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
Originally posted by: toslat
I don't think you need to get rid of AFR (don't know what you meant by ARP) to fix this, as the offset problem can be ameliorated by introducing a delay/buffer in the path of the GPUs. The delay would have to be adaptive in a practical scenario, as the correct value depends on FPS and all you really want to get rid of is the bias.
The delay solution was discussed at length in the micro-stutter thread we had a while back. Yes, in theory it would work, but in practice it would have to be extremely adaptive, so it might cause more harm than good if it didn't adapt properly.

That and it won't address the issue of input lag caused by AFR.
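
To make that concrete, here's a throwaway Python sketch of the idea (the 40 ms frame time, the 5 ms offset and the 20 ms target are all invented numbers, and the pacing function below uses a fixed target, which is exactly the part that would have to be adaptive in practice):

# Toy simulation of AFR micro-stutter and a naive fixed output delay.
# Not how any real driver works -- purely to illustrate the discussion above.

def afr_present_times(n_frames, gpu_frame_time, bias):
    """Each GPU finishes a frame every gpu_frame_time ms, but GPU1 starts
    'bias' ms after GPU0 instead of the ideal gpu_frame_time/2 offset."""
    times = []
    for i in range(n_frames):
        gpu = i % 2
        start = (i // 2) * gpu_frame_time + (bias if gpu == 1 else 0.0)
        times.append(start + gpu_frame_time)      # completion == presentation
    return times

def intervals(times):
    return [b - a for a, b in zip(times, times[1:])]

def pace(times, target):
    """Naive pacing: never present a frame earlier than 'target' ms after
    the previous one. Smooths the short gaps, at the cost of extra lag."""
    paced = [times[0]]
    for t in times[1:]:
        paced.append(max(t, paced[-1] + target))
    return paced

raw = afr_present_times(20, gpu_frame_time=40.0, bias=5.0)
print("raw intervals  :", [round(d, 1) for d in intervals(raw)])
# alternates 5.0 / 35.0 ms -- feels like ~28 fps even though the average says 50 fps

smooth = pace(raw, target=20.0)      # 20 ms happens to be the average interval here
print("paced intervals:", [round(d, 1) for d in intervals(smooth)])
# steady 20 ms gaps, but some frames now show up ~15 ms late (the input lag issue),
# and if the frame rate changes while the target stays fixed, the cure gets worse
# than the disease -- hence "extremely adaptive".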
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: nRollo
Originally posted by: AmberClad
Originally posted by: nRollo
With GTX260s at $260 and GTX280s at $460, any excitement ATi had going for it is largely over.
If you're talking about that MSI fire sale that a couple of other ATers and I discussed at length a few days ago, both of them went back up in price.

Maybe some of the other manufacturers will follow suit -- those prices seemed to be pretty attractive.

Those prices are still in stock and in effect at ZZF and Newegg.

Competition brought better deals for all.

You call this competition? Where were NV's competitive prices before the Radeon 4800 series? This seems to me like a cheap stunt rather than competition. You know what's fair competition? When the products have similar configurations, perform close to each other, and are offered at the same price, so the user can make decisions based on reliability. This isn't the case... you've got a 1.4 billion transistor GPU performing only 10% better than the 4870. Just a few weeks back NV was downplaying DX10.1 even though it resolves issues related to DX10, and what do we have now? Nothing... they are just going to bring another card in Q4 with DX10.1 and GDDR5 and rip people's wallets. I think NV and all its supporters are just a bunch of jerks telling people what they want to hear.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Aberforth
You call this competition? Where were NV's competitive prices before the Radeon 4800 series? This seems to me like a cheap stunt rather than competition. You know what's fair competition? When the products have similar configurations, perform close to each other, and are offered at the same price, so the user can make decisions based on reliability. This isn't the case... you've got a 1.4 billion transistor GPU performing only 10% better than the 4870. Just a few weeks back NV was downplaying DX10.1 even though it resolves issues related to DX10, and what do we have now? Nothing... they are just going to bring another card in Q4 with DX10.1 and GDDR5 and rip people's wallets. I think NV and all its supporters are just a bunch of jerks telling people what they want to hear.

8800GT? 9600GT? These were some of the recent launches that provided quite a bit of value for a lot of people.

You forget how the HD 3870s were priced.

What issues related to DX10? These aren't issues. The things that DX10.1 brings to the table really are trivial. Nobody is downplaying it, but rather asking how useful DX10.1 really is and whether incorporating the required hardware has any benefit at all from a design/cost standpoint. How many DX10.1 titles are out there as of today? Exactly. By the time DX10.1 picks up some steam, newer and faster parts (probably cheaper too) will be out, making today's generation of DX10.1 parts meaningless.

Competition is never fair (since when was it fair?!), especially when you add business and politics into the mix. The world isn't full of dandelions and graceful animals wandering about with birds singing. I think most people know this by now.
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: Cookie Monster
Originally posted by: Aberforth
You call this competition? Where were NV's competitive prices before the Radeon 4800 series? This seems to me like a cheap stunt rather than competition. You know what's fair competition? When the products have similar configurations, perform close to each other, and are offered at the same price, so the user can make decisions based on reliability. This isn't the case... you've got a 1.4 billion transistor GPU performing only 10% better than the 4870. Just a few weeks back NV was downplaying DX10.1 even though it resolves issues related to DX10, and what do we have now? Nothing... they are just going to bring another card in Q4 with DX10.1 and GDDR5 and rip people's wallets. I think NV and all its supporters are just a bunch of jerks telling people what they want to hear.

What issues related to DX10? These aren't issues. The things that DX10.1 brings to the table really are trivial. Nobody is downplaying it, but rather asking how useful DX10.1 really is and whether incorporating the required hardware has any benefit at all from a design/cost standpoint. How many DX10.1 titles are out there as of today? Exactly. By the time DX10.1 picks up some steam, newer and faster parts (probably cheaper too) will be out, making today's generation of DX10.1 parts meaningless.

The geometry shader in first-generation DX10 cards is definitely buggy, mainly because there is an issue with the DX10 API itself; technically it is supposed to perform better than DX9 by routing the calls to the GPU, but something went wrong, and this was addressed in DX10.1 along with more features like cube mapping and mandatory AA without a performance drop. If you work with the DX10 SDK you can't understand why all your apps run faster with the 10.1 API, even in software emulation mode. Also, since most DX10 games are heavily dynamic and shader dependent, the shader processors in the GPU didn't have enough juice to make them run faster, and developers didn't understand how to balance DX10 shaders. That's why there is hesitation among game developers about DX10.1: it is different and tricky, and the performance figures can only come when it is properly implemented. But nevertheless we are going to see StarCraft 2, HL3, and Diablo using it.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: BFG10K
Originally posted by: toslat
I don't think you need to get rid of AFR (don't know what you meant by ARP) to fix this, as the offset problem can be ameliorated by introducing a delay/buffer in the path of the GPUs. The delay would have to be adaptive in a practical scenario, as the correct value depends on FPS and all you really want to get rid of is the bias.
The delay solution was discussed at length in the micro-stutter thread we had a while back. Yes, in theory it would work, but in practice it would have to be extremely adaptive, so it might cause more harm than good if it didn't adapt properly.

That and it won't address the issue of input lag caused by AFR.

Could you (or anyone else) please provide a link to the micro-stutter thread? Did some searches but couldn't come up with anything that I thought fit the bill. Thanks.
 
NullSubroutine

Apr 27, 2004
32
0
0
Originally posted by: chizow
Originally posted by: toslat
On being CPU limited:
A simple test using two quads at two different clocks (all else same) should put this to rest
I've been linking them throughout, but it's *very* obvious in any bench that uses 4GHz and some of the exotic configs out there, like Tri-SLI or CrossFireX. Anyways, here are two really good examples from Tweaktown:

4870 Crossfire @ 3GHz

4870 Crossfire @ 4GHz

Summary: Up to 1920, there is very little difference in performance between 4870CF and 4850CF and much less scaling compared to a single 4870 to 4870CF vs 4850 to 4850CF. Scaling at 2560 is less consistent, but you still see big gains meaning you are not completely GPU bottlenecked. Tweaktown also has 4850CF @ 3GHz and 4GHz and GTX 280 in SLI/Tri-SLI that echoes similar results. What you will notice is that the GTX 280 also scales very well as a single-GPU depending on CPU speed, meaning it is also CPU bottlenecked up to 1920. I'd expect similar from a faster 4870 variant up until it capped out about the same as the GTX 280.

On PLX 'bridge' and shared PCIe:
The PLX chips on the x870 X2 cards do not split lanes, i.e. a switched 16x is not the same as a fixed 8x:8x. It is wrong to use the 8x:8x performance of 4850 CF to predict the performance of the 4870X2.

IIRC the 3870X2 has a 48 lane 3 port v1.1 PEX8547 switch (not a bridge) and thus each GPU gets full 16x access when it is granted and not 8x. The advantage of this is that since the PCIe connection to the north bridge carries bursty traffic (actual traffic pattern depends on the game), the impact of latency is reduced and bandwidth preserved, as opposed to the bandwidth bottlenecking that would occur with a fixed 8x:8x split.

In the 4870X2, the new switch (possibly PEX8648) will support v2.0 which will double the available data rate (though with slight latency increase).

Also, it is rumored that inter-GPU traffic will improve. Not sure if inter-GPU traffic went through the NB (and memory) in the 3870X2, since the switch could functionally route traffic between the GPUs. So I expect, at the least, that inter-GPU traffic is switched at the PEX chip, and if possible they might have a common/duplicate memory area.

So there is no basis yet to state that the 4870X2 will be bottlenecked by the 'bridge' unless you can provide stats that show that the traffic pattern on the bus is sustained at >50% for a 4870 (single or CF), and even then show that (2x individual data - common data) exceeds 100% of 16x in CF.

That does look to be true about the PLX switch, but that inherently assumes a normal PCIe card will be using the bus at less than 50% utilization, plus you add latency into the equation over a straight split. Here's a pretty good write-up on the 3870 and the switch, with a diagram (about 1/2 way down): Digital-Daily 3870X2 Bridge Chip. It actually seems like there's more overhead with a switch rather than a splitter, regardless of available bandwidth.
From the first article, where the 4870 CF only gets ~100 FPS:
Drivers: Forceware 175.16 (9 Series) 177.34 (GTX 260/280) & Catalyst 8.6
From the 2nd article, where the 4870 CF gets ~120 FPS:
Drivers: June Hot Fix Catalyst Driver (HD 4800 Series), Catalyst 8.6, Forceware 175.16 (9 Series) 177.34 (GTX 280)

Kinda hard to say that it's a 100% CPU bottleneck when they used different drivers, especially when the AMP GTX 280 only changed from ~93 to ~95 FPS when going from 3GHz to 4GHz. It is much more probable that the 4870 increases were due to driver improvements.
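
For what it's worth, here's the back-of-the-envelope math behind the switched-16x vs 8x/8x part quoted above, in quick Python (the per-lane rates are the standard PCIe figures; the 60%/30% utilization numbers are made up purely to illustrate toslat's condition):

# Rough PCIe numbers for the switch-vs-split discussion. Per-lane rates are the
# usual usable figures (250 MB/s for v1.1, 500 MB/s for v2.0, per direction);
# the utilization percentages below are invented for illustration only.

MB_PER_LANE = {"1.1": 250, "2.0": 500}    # MB/s per lane, one direction

def link_bw(gen, lanes):
    return MB_PER_LANE[gen] * lanes

upstream_x16_v11 = link_bw("1.1", 16)     # what the PLX switch shares upstream
fixed_x8_v11     = link_bw("1.1", 8)      # what each GPU gets with a hard 8x/8x split
upstream_x16_v20 = link_bw("2.0", 16)     # rumored v2.0 switch on the 4870X2

print("x16 v1.1 upstream :", upstream_x16_v11, "MB/s")
print("x8  v1.1 per GPU  :", fixed_x8_v11, "MB/s")
print("x16 v2.0 upstream :", upstream_x16_v20, "MB/s")

# toslat's condition: the shared x16 only becomes the bottleneck if the two GPUs
# together (minus whatever data they have in common) need more than the full x16.
per_gpu_demand = 0.6 * upstream_x16_v11   # assume each GPU bursts to 60% of x16
common_data    = 0.3 * upstream_x16_v11   # assume 30% of x16 worth of shared data
needed = 2 * per_gpu_demand - common_data
print("combined demand   :", int(needed), "MB/s",
      "(bottlenecked)" if needed > upstream_x16_v11 else "(fits in x16)")

Which is just a long way of saying the shared link only hurts if sustained traffic really does exceed roughly half of a 16x, and nobody has posted numbers showing that it does.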

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: NullSubroutine
Originally posted by: chizow
Originally posted by: toslat
On being CPU limited:
A simple test using two quads at two different clocks (all else same) should put this to rest
I've been linking them throughout, but it's *very* obvious in any bench that uses 4GHz and some of the exotic configs out there, like Tri-SLI or CrossFireX. Anyways, here are two really good examples from Tweaktown:

4870 Crossfire @ 3GHz

4870 Crossfire @ 4GHz

Summary: Up to 1920, there is very little difference in performance between 4870CF and 4850CF and much less scaling compared to a single 4870 to 4870CF vs 4850 to 4850CF. Scaling at 2560 is less consistent, but you still see big gains meaning you are not completely GPU bottlenecked. Tweaktown also has 4850CF @ 3GHz and 4GHz and GTX 280 in SLI/Tri-SLI that echoes similar results. What you will notice is that the GTX 280 also scales very well as a single-GPU depending on CPU speed, meaning it is also CPU bottlenecked up to 1920. I'd expect similar from a faster 4870 variant up until it capped out about the same as the GTX 280.

On PLX 'bridge' and shared PCIe:
The PLX chips on the x870 X2 cards do not split lanes, i.e. a switched 16x is not the same as a fixed 8x:8x. It is wrong to use the 8x:8x performance of 4850 CF to predict the performance of the 4870X2.

IIRC the 3870X2 has a 48 lane 3 port v1.1 PEX8547 switch (not a bridge) and thus each GPU gets full 16x access when it is granted and not 8x. The advantage of this is that since the PCIe connection to the north bridge carries bursty traffic (actual traffic pattern depends on the game), the impact of latency is reduced and bandwidth preserved, as opposed to the bandwidth bottlenecking that would occur with a fixed 8x:8x split.

In the 4870X2, the new switch (possibly PEX8648) will support v2.0 which will double the available data rate (though with slight latency increase).

Also, it is rumored that inter-GPU traffic will improve. Not sure if inter-GPU traffic went through the NB (and memory) in the 3870X2, since the switch could functionally route traffic between the GPUs. So I expect, at the least, that inter-GPU traffic is switched at the PEX chip, and if possible they might have a common/duplicate memory area.

So there is no basis yet to state that the 4870X2 will be bottlenecked by the 'bridge' unless you can provide stats that show that the traffic pattern on the bus is sustained at >50% for a 4870 (single or CF), and even then show that (2x individual data - common data) exceeds 100% of 16x in CF.

That does look to be true about the PLX switch, but that inherently assumes a normal PCIe card will be using the bus at less than 50% utilization, plus you add latency into the equation over a straight split. Here's a pretty good write-up on the 3870 and the switch, with a diagram (about 1/2 way down): Digital-Daily 3870X2 Bridge Chip. It actually seems like there's more overhead with a switch rather than a splitter, regardless of available bandwidth.
From the first article, where the 4870 CF only gets ~100 FPS:
Drivers: Forceware 175.16 (9 Series) 177.34 (GTX 260/280) & Catalyst 8.6
From the 2nd article, where the 4870 CF gets ~120 FPS:
Drivers: June Hot Fix Catalyst Driver (HD 4800 Series), Catalyst 8.6, Forceware 175.16 (9 Series) 177.34 (GTX 280)

Kinda hard to say that it's a 100% CPU bottleneck when they used different drivers, especially when the AMP GTX 280 only changed from ~93 to ~95 FPS when going from 3GHz to 4GHz. It is much more probable that the 4870 increases were due to driver improvements.

Supposedly, the 4GHz machine had 4GB of RAM and the 3GHz machine had 2GB of RAM.

So:
2 more gigs of RAM
newer drivers
an overclocked CPU

All those things contribute to the higher FPS; hard to say how much each does (but my guess is that it is mostly the RAM).
 
NullSubroutine

Apr 27, 2004
32
0
0
Originally posted by: taltamir
Originally posted by: NullSubroutine
Originally posted by: chizow
Originally posted by: toslat
On being CPU limited:
A simple test using two quads at two different clocks (all else same) should put this to rest
I've been linking them throughout, but it's *very* obvious in any bench that uses 4GHz and some of the exotic configs out there, like Tri-SLI or CrossFireX. Anyways, here are two really good examples from Tweaktown:

4870 Crossfire @ 3GHz

4870 Crossfire @ 4GHz

Summary: Up to 1920, there is very little difference in performance between 4870CF and 4850CF and much less scaling compared to a single 4870 to 4870CF vs 4850 to 4850CF. Scaling at 2560 is less consistent, but you still see big gains meaning you are not completely GPU bottlenecked. Tweaktown also has 4850CF @ 3GHz and 4GHz and GTX 280 in SLI/Tri-SLI that echoes similar results. What you will notice is that the GTX 280 also scales very well as a single-GPU depending on CPU speed, meaning it is also CPU bottlenecked up to 1920. I'd expect similar from a faster 4870 variant up until it capped out about the same as the GTX 280.

On PLX 'bridge' and shared PCIe:
The PLX chips on the x870 X2 cards do not split lanes, i.e. a switched 16x is not the same as a fixed 8x:8x. It is wrong to use the 8x:8x performance of 4850 CF to predict the performance of the 4870X2.

IIRC the 3870X2 has a 48 lane 3 port v1.1 PEX8547 switch (not a bridge) and thus each GPU gets full 16x access when it is granted and not 8x. The advantage of this is that since the PCIe connection to the north bridge carries bursty traffic (actual traffic pattern depends on the game), the impact of latency is reduced and bandwidth preserved, as opposed to the bandwidth bottlenecking that would occur with a fixed 8x:8x split.

In the 4870X2, the new switch (possibly PEX8648) will support v2.0 which will double the available data rate (though with slight latency increase).

Also, it is rumored that inter-GPU traffic will improve. Not sure if inter-GPU traffic went through the NB (and memory) in the 3870X2, since the switch could functionally route traffic between the GPUs. So I expect, at the least, that inter-GPU traffic is switched at the PEX chip, and if possible they might have a common/duplicate memory area.

So there is no basis yet to state that the 4870X2 will be bottlenecked by the 'bridge' unless you can provide stats that show that the traffic pattern on the bus is sustained at >50% for a 4870 (single or CF), and even then show that (2x individual data - common data) exceeds 100% of 16x in CF.

That does look to be true about the PLX switch, but that inherently assumes a normal PCIe card will be using the bus at less than 50% utilization, plus you add latency into the equation over a straight split. Here's a pretty good write-up on the 3870 and the switch, with a diagram (about 1/2 way down): Digital-Daily 3870X2 Bridge Chip. It actually seems like there's more overhead with a switch rather than a splitter, regardless of available bandwidth.
From the first article, where the 4870 CF only gets ~100 FPS:
Drivers: Forceware 175.16 (9 Series) 177.34 (GTX 260/280) & Catalyst 8.6
From the 2nd article, where the 4870 CF gets ~120 FPS:
Drivers: June Hot Fix Catalyst Driver (HD 4800 Series), Catalyst 8.6, Forceware 175.16 (9 Series) 177.34 (GTX 280)

Kinda hard to say that it's a 100% CPU bottleneck when they used different drivers, especially when the AMP GTX 280 only changed from ~93 to ~95 FPS when going from 3GHz to 4GHz. It is much more probable that the 4870 increases were due to driver improvements.

Supposedly, the 4GHz machine had 4GB of RAM and the 3GHz machine had 2GB of RAM.

So:
2 more gigs of RAM
newer drivers
an overclocked CPU

All those things contribute to the higher FPS; hard to say how much each does (but my guess is that it is mostly the RAM).

Then why would the GTX 280 AMP only gain ~2-3 FPS at 90+ if it was just RAM? I can't think of any reason, unless CF uses more RAM than a single Nvidia card. Even the 4850 CF saw great gains except at 2560, which could be indicative of video memory bandwidth (as opposed to what one would think is happening, such as a frame buffer limit of 512MB).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I am pretty sure I have heard in many places that multi-GPU setups take more system RAM than a single card, for both companies.

Besides, the video bandwidth did not change between those two tests. The cards were the same, the drivers were newer, there was more system RAM, and there was an OCed CPU. That is it; same video bandwidth.
 
NullSubroutine

Apr 27, 2004
32
0
0
Originally posted by: taltamir
I am pretty sure I have heard in many places that multi-GPU setups take more system RAM than a single card, for both companies.

Besides, the video bandwidth did not change between those two tests. The cards were the same, the drivers were newer, there was more system RAM, and there was an OCed CPU. That is it; same video bandwidth.

4850 scores

test 1
100
112
115

test 2
100
131
148

What I am saying is that the drivers improved the speed. The 4870 saw an increase at all resolutions, but the 4850 saw none in the 2560 test while seeing a good amount in the other two tests (1280 would get both the driver and the CPU increase). I am saying the reason the 4870 would see a boost at 2560 when the 4850 might not is because of video bandwidth restrictions. I don't think 2GB of RAM is going to make the 4870 perform faster @ 2560 but not the 4850.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Are you comparing the CF or single card results?

Same speed at 2560 makes sense; entirely GPU limited.
Faster at lower resolutions again makes sense; you have a 33% OC on the CPU. At the lowest settings it went up from 115 to 148, about 29% (close to that 33%), and the lowest resolutions are known for being CPU limited most of the time (and at insanely high FPS).
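
Putting rough numbers on that, using the 4850 CF scores posted above (which score maps to which resolution is my reading of this discussion, not something stated in the articles):

# Percentage gains in the 4850 CF scores posted above
# (test 1 = 3 GHz / 2 GB / old driver, test 2 = 4 GHz / 4 GB / hotfix driver).
test1 = [100, 112, 115]
test2 = [100, 131, 148]
cpu_oc = (4.0 - 3.0) / 3.0                 # 3 GHz -> 4 GHz is a ~33% clock bump

for a, b in zip(test1, test2):
    print("{:>3} -> {:>3} FPS : {:+.0%}".format(a, b, (b - a) / a))
print("CPU overclock      : {:+.0%}".format(cpu_oc))
# 100 -> 100: +0%  (presumably the 2560 run -- fully GPU limited)
# 112 -> 131: +17%, 115 -> 148: +29% -- close to, but short of, the 33% CPU bump,
# so some mix of CPU clock, the extra RAM and the newer driver is plausible.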
 
Apr 27, 2004
32
0
0
Those scores are 4850 CF vs 4850 CF, and yeah, the lower resolution most certainly gains an FPS boost because of the overclock, but also from the driver. I was just trying to figure out (which I think I possibly did) why the 4850 CF saw no lift in performance at 2560 when the 4870 CF did, while taking into consideration that the boost in performance was mostly due to the driver change (and not the CPU @ 2560, because the GTX 280 AMP only saw a 3 FPS increase).
 

Hauk

Platinum Member
Nov 22, 2001
2,808
0
0
Originally posted by: Aberforth
You call this competition? Where were NV's competitive prices before the Radeon 4800 series? This seems to me like a cheap stunt rather than competition. You know what's fair competition? When the products have similar configurations, perform close to each other, and are offered at the same price, so the user can make decisions based on reliability. This isn't the case... you've got a 1.4 billion transistor GPU performing only 10% better than the 4870. Just a few weeks back NV was downplaying DX10.1 even though it resolves issues related to DX10, and what do we have now? Nothing... they are just going to bring another card in Q4 with DX10.1 and GDDR5 and rip people's wallets. I think NV and all its supporters are just a bunch of jerks telling people what they want to hear.

Those are the key statements. I think nV's error this cycle was believing the masses would just buy whatever it was they were selling. Somewhat arrogantly, they downplayed DX10.1. ATI has been supporting 10.1 for some time now, and it's been known for some time that they intended to use a 256-bit GDDR5 solution.

Did nV think the majority would blindly continue purchasing their product without considering ATI's offering using the latest technology? Perhaps if R7xx hadn't been such a little scrapper, that would have been the case. But given its performance, and the enhanced future-proofing perception it offers, it's a solid choice, and sales are proving that.

I wouldn't say, as one did, that nV got their asses kicked, but I would say their nose got bloodied. Lessons in humility are often the toughest to learn.
 

toslat

Senior member
Jul 26, 2007
216
0
76

Thanks for the link (I had actually read the thread before).
Read through the thread, and though it was a nice read, it was disappointing that there was more bickering about the existence and importance of micro-stutter than suggestions on fixes.

I still feel a delay would ameliorate things without additional lag being added. Closer to optimal would be a predictor that varies the time at which the processing of a frame is started.
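
Roughly what I have in mind, as a toy Python sketch (the smoothing constant and the structure are placeholders, nothing measured or taken from a real driver):

# Sketch of the 'predictor' idea: instead of delaying finished frames, delay the
# *start* of a frame so that its predicted completion lands on an even spacing.
# Hypothetical -- a real driver would have to hook the game's present calls.

class FrameStartPacer:
    def __init__(self, alpha=0.2):
        self.est = None            # smoothed estimate of this GPU's render time
        self.alpha = alpha
        self.last_present = None   # when the previous frame (either GPU) was shown

    def schedule_start(self, now, target_interval):
        """When should this GPU start its next frame? target_interval would be
        half the per-GPU frame time in a two-GPU AFR setup."""
        if self.est is None or self.last_present is None:
            return now                                   # no history yet
        ideal_present = self.last_present + target_interval
        return max(now, ideal_present - self.est)        # start just early enough

    def record(self, start, present):
        dt = present - start
        self.est = dt if self.est is None else (1 - self.alpha) * self.est + self.alpha * dt
        self.last_present = present

If the prediction is decent you get the even spacing without stacking an extra buffer of delay on top of frames that are already late.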
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Originally posted by: nRollo
Originally posted by: dreddfunk
Originally posted by: nRollo

Hmm.

I don't see NVIDIA's ass as kicked.

dreddfunk: [tongue-in-cheek] Too busy admiring NVIDIA's pretty face to see what's happening behind, eh?

...

Where I'm sitting I doubt they'll even lose much market share with current bang for buck.

dreddfunk: [tongue-in-cheek] I'd say that from 'where you're sitting' NVIDIA *can't* lose any market share. Your 'seat' is getting free parts.




Cheers as always!


I don't want to go any further off topic in this thread; I'd be happy to discuss this with you in another if you'd like to start one.

I'm interested to see the 4870X2, and, as taltamir points out, the facts behind the pre-launch info.

Worst case scenario (another 3870X2, just with 4870s), it will still be a high-performing part and give people more options.

My post was meant completely playfully. I hoped to generate a chuckle from you, nothing more. Still, when the current roles were reversed (i.e. NVIDIA releasing products that made AMD drastically cut prices), I suspect that you had a different view. At any rate, you can PM me if you are interested in discussing the merits of NVIDIA's case; as you point out, it doesn't--and never did--belong in this thread about the 4870X2.



I'm curious about the 4870X2 primarily because of the manufacturing premise--e.g. start with the performance/enthusiast segment and scale from there, with multi-chip solutions if necessary.

In some ways, I hope the dual-card (or dual-chip on a single card) issues are resolved, for the simple reason that it gives both NVIDIA and AMD more design flexibility, and consumers more purchasing flexibility.

I think it will give both companies the ability to leverage existing R&D better. Then again, I'd be curious to hear from someone who knows more about the manufacturing process. It could be that designing and fabbing a chip isn't any more difficult than designing and fabbing a PCB.


 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: dreddfunk
Originally posted by: nRollo
Originally posted by: dreddfunk
Originally posted by: nRollo

Hmm.

I don't see NVIDIA's ass as kicked.

dreddfunk: [tongue-in-cheek] Too busy admiring NVIDIA's pretty face to see what's happening behind, eh?

...

Where I'm sitting I doubt they'll even lose much market share with current bang for buck.

dreddfunk: [tongue-in-cheek] I'd say that from 'where you're sitting' NVIDIA *can't* lose any market share. Your 'seat' is getting free parts.




Cheers as always!


I don't want to go any further off topic in this thread; I'd be happy to discuss this with you in another if you'd like to start one.

I'm interested to see the 4870X2, and, as taltamir points out, the facts behind the pre-launch info.

Worst case scenario (another 3870X2, just with 4870s), it will still be a high-performing part and give people more options.

My post was meant completely playfully. I hoped to generate a chuckle from you, nothing more. Still, when the current roles were reversed (i.e. NVIDIA releasing products that made AMD drastically cut prices), I suspect that you had a different view. At any rate, you can PM me if you are interested in discussing the merits of NVIDIA's case; as you point out, it doesn't--and never did--belong in this thread about the 4870X2.



I'm curious about the 4870X2 primarily because of the manufacturing premise--e.g. start with the performance/enthusiast segment and scale from there, with multi-chip solutions if necessary.

In some ways, I hope the dual-card (or dual-chip on a single card) issues are resolved, for the simple reason that it gives both NVIDIA and AMD more design flexibility, and consumers more purchasing flexibility.

I think it will give both companies the ability to leverage existing R&D better. Then again, I'd be curious to hear from someone who knows more about the manufacturing process. It could be that designing and fabbing a chip isn't any more difficult than designing and fabbing a PCB.

That would change the world as we know it, and be a welcome change.

You're right, when you get NVIDIA's best for free, you're not likely to go out and buy the competition too often. (Although I did up to the 1800 series, and you never know.)
 