ATI 4xxx Series Thread


chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: ArchAngel777
Originally posted by: bryanW1995
If TMUs were identified as a crippling bottleneck with RV670, simply doubling them when everything else is doubled does nothing for your overall performance

Huh?? If TMUs are your crippling bottleneck and you double them along with everything else, wouldn't you double your overall performance?

I don't think that is what he meant. I think what he meant was that if the TMU was the problem with R600, then doubling everything would still leave it TMU bound, regardless of the improved performance and would therefore leave doubts that the 'only' problem with the R6XX series was a lack of TMU performance.

Yes, exactly, thanks for the clarification. I think the rest of what I wrote conveyed that, but saying "does nothing for overall performance" is incorrect; it would certainly increase performance, just not by 2x imo.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: chizow
Originally posted by: ArchAngel777
Originally posted by: bryanW1995
If TMUs were identified as a crippling bottleneck with RV670, simply doubling them when everything else is doubled does nothing for your overall performance

Huh?? If TMUs are your crippling bottleneck and you double them along with everything else, wouldn't you double your overall performance?

I don't think that is what he meant. I think what he meant was that if the TMU was the problem with R600, then doubling everything would still leave it TMU bound, regardless of the improved performance and would therefore leave doubts that the 'only' problem with the R6XX series was a lack of TMU performance.

Yes, exactly, thanks for the clarification. I think the rest of what I wrote conveyed that, but saying "does nothing for overall performance" is incorrect; it would certainly increase performance, just not by 2x imo.

I'm still wondering why you think that doubling (in the case of TMUs, MORE than doubling) everything will not produce 2x RV670 performance. What makes you think that 2x shader performance, 2.2x texture performance, 1.72x memory bandwidth, and 10% more pixel performance will not result in 2x total performance? I've already shown that ROPs are not a bottleneck for current cards, especially with the high clockspeeds of RV770.

Already with the 3870 X2 we see 2x 3870 performance in many cases, despite the overhead of Crossfire. And it's also quite possible that RV770 will be faster per-clock than R600 (judging by past releases, a new generation is almost always faster per clock). Certainly we've seen that before in R300 -> R420, GeForce 6 -> GeForce 7, and so on.
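As a rough sanity check on those ratios, here is a minimal Python sketch using the rumored RV770 figures quoted later in this thread (480 SPs at 1050 MHz, 32 TMUs at 850 MHz, 3880 MT/s memory on a 256-bit bus, versus 320 SPs / 16 TMUs / 2250 MT/s for the 3870). These are rumored numbers, not confirmed specs.

```python
# Rough sanity check (rumored specs, not confirmed):
# per-unit scaling of the HD 4870 over the HD 3870.

specs = {
    # stream processors, shader clock (GHz), TMUs, core clock (GHz),
    # memory transfer rate (MT/s), bus width (bits)
    "HD 4870 (rumored)": dict(sp=480, sclk=1.050, tmu=32, cclk=0.850, mem=3880, bus=256),
    "HD 3870":           dict(sp=320, sclk=0.775, tmu=16, cclk=0.775, mem=2250, bus=256),
}

def metrics(s):
    gflops = s["sp"] * 2 * s["sclk"]          # 2 FLOPs (MADD) per SP per clock -> GFLOPS
    gtexels = s["tmu"] * s["cclk"]            # GTexels/s
    gbps = s["mem"] * (s["bus"] / 8) / 1000   # GB/s
    return gflops, gtexels, gbps

new, old = metrics(specs["HD 4870 (rumored)"]), metrics(specs["HD 3870"])
for label, n, o in zip(("shader GFLOPS", "texture GTex/s", "bandwidth GB/s"), new, old):
    print(f"{label}: {n:.1f} vs {o:.1f} -> {n / o:.2f}x")
# shader ~2.03x, texture ~2.19x, bandwidth ~1.73x -- the ratios cited above
```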
 

superbooga

Senior member
Jun 16, 2001
333
0
0
Originally posted by: Extelleron
I'm still wondering why you think that doubling (in the case of TMUs, MORE than doubling) everything will not produce 2x RV670 performance. What makes you think that 2x shader performance, 2.2x texture performance, 1.72x memory bandwidth, and 10% more pixel performance will not result in 2x total performance? I've already shown that ROPs are not a bottleneck for current cards, especially with the high clockspeeds of RV770.

The reason is that GPUs are so much more than shaders, TMUs, ROPs and memory bandwidth. Increasing all of those doesn't help if utilization stays low. The GPU itself is like an OS kernel; it needs to share resources efficiently and prevent stalls. Things like thread scheduling, thread granularity, data structures, etc. all have an effect on efficiency. Optimizing the driver to keep all the units busy is also a challenge. There's a lot of complexity that goes far, far beyond what you mentioned.
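To illustrate that point in the simplest possible terms, here is a toy Amdahl-style model. The per-frame numbers are made up purely for illustration and are not measurements of any real GPU.

```python
# Toy Amdahl-style model (hypothetical numbers): only the portion of frame time
# where the execution units are actually busy shrinks when you add more units;
# stalls and scheduling/driver overhead do not.

def frame_time_ms(busy_ms, overhead_ms, unit_scale):
    return busy_ms / unit_scale + overhead_ms

busy, overhead = 20.0, 5.0                      # ms per frame, made up for illustration
base = frame_time_ms(busy, overhead, 1.0)       # 25 ms
doubled = frame_time_ms(busy, overhead, 2.0)    # 15 ms
print(f"speedup from doubling every unit: {base / doubled:.2f}x")   # ~1.67x, not 2x
```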
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: superbooga
Originally posted by: Extelleron
I'm still wondering why you think that doubling (in the case of TMUs, MORE than doubling) everything will not produce 2x RV670 performance. What makes you think that 2x shader performance, 2.2x texture performance, 1.72x memory bandwidth, and 10% more pixel performance will not result in 2x total performance? I've already shown that ROPs are not a bottleneck for current cards, especially with the high clockspeeds of RV770.

The reason is that GPUs are so much more than shaders, TMUs, ROPs and memory bandwidth. Increasing all of those doesn't help if utilization stays low. The GPU itself is like an OS kernel; it needs to share resources efficiently and prevent stalls. Things like thread scheduling, thread granularity, data structures, etc. all have an effect on efficiency. Optimizing the driver to keep all the units busy is also a challenge. There's a lot of complexity that goes far, far beyond what you mentioned.

Are you implying that the utilization of the R6XX series is subpar? I thought that for what it was working with (handicapped by a low TMU count) it has done quite a fair job.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: superbooga
Originally posted by: Extelleron
I'm still wondering why you think that doubling (in the case of TMUs, MORE than doubling) everything will not produce 2x RV670 performance. What makes you think that 2x shader performance, 2.2x texture performance, 1.72x memory bandwidth, and 10% more pixel performance will not result in 2x total performance? I've already shown that ROPs are not a bottleneck for current cards, especially with the high clockspeeds of RV770.

The reason is that GPUs are so much more than shaders, TMUs, ROPs and memory bandwidth. Increasing all of those doesn't help if utilization stays low. The GPU itself is like an OS kernel; it needs to share resources efficiently and prevent stalls. Things like thread scheduling, thread granularity, data structures, etc. all have an effect on efficiency. Optimizing the driver to keep all the units busy is also a challenge. There's a lot of complexity that goes far, far beyond what you mentioned.

If you double the execution resources (shaders, TMUs, etc.) you get double the performance unless another bottleneck exists.

GPUs are extremely parallel and are not like CPUs, where you have to worry about all resources (cores) being utilized. Whatever is there is utilized, unless there is a bottleneck in the design. If you take a full R600 design and give it 10 GB/s of memory bandwidth, then it won't be able to take advantage of all its shading resources. If you give R600 4 TMUs instead of 16, then it won't be able to take advantage of its shading resources and pixel power.

But if you give the card double the memory bandwidth, double the texture performance, and double the shading performance, with increased pixel performance, then you will see a great increase in performance (2x). Look at R600 vs RV630 or RV610; performance scales very linearly with execution resources.

There is certainly more to a GPU than the units we are talking about, but for all intents and purposes they are what define performance. Certainly buffer sizes/cache inside the video card affect things, and as you said there is a lot more to a GPU than TMUs/SPs/ROPs. But once you have that base design of a GPU, you add on execution units. You double the execution units, you double performance.
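That argument can be expressed as a simple min-of-throughputs bottleneck model. The per-frame workload numbers below are hypothetical, and the model deliberately ignores everything outside the "big" units (triangle setup, CPU, driver), which is exactly the part of the picture the other posters are questioning.

```python
# Idealized bottleneck model (hypothetical per-frame workload): frame rate is set by
# whichever unit saturates first, so scaling every modeled unit by 2x doubles the
# result -- as long as nothing outside the model (setup, CPU, driver) takes over.

def fps(throughput, per_frame):
    return min(throughput[k] / per_frame[k] for k in per_frame)

per_frame = {"gflop": 8.0, "gtexel": 0.15, "gbyte": 0.9}          # made-up workload
rv670 = {"gflop": 496.0, "gtexel": 12.4, "gbyte": 72.0}           # RV670-ish throughput
rv770 = {k: 2 * v for k, v in rv670.items()}                      # "2x everything"

print(f"{fps(rv670, per_frame):.0f} fps -> {fps(rv770, per_frame):.0f} fps")  # exactly 2x here
```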
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
@ allies Compared to G80/G9x I'd say yes, especially since all R6x0 architectures are based on a VLIW design with vec5-like shaders. This means that increasing or sustaining acceptable utilization requires optimizing at a driver level (since the compiler will be the bottleneck in this case) for each individual game. While ATi's architecture has massive peak floating-point numbers, it made life a whole lot harder to reach those figures, since games out now and from the past all use varying shaders, channel lengths, register counts, etc.

What superbooga said is correct. There are other important aspects of a GPU architecture (beyond the big 4, i.e. ALUs, TMUs, ROPs, memory bandwidth) that can act as serious bottlenecks. For example, it's almost a fact that G80 and its variants out now are bottlenecked by triangle setup more than anything else. So simply doubling the big 4 doesn't equate to doubling total performance, because there are other underlying parts of the GPU that can hinder its overall efficiency.

At least AMD/ATi has finally realized that shader usage isn't as exponential as they've marketed it to be, which is what led to their architectures having a very high ALU-to-TMU ratio (R580 was 3:1, R600 was 4:1). Since RV770 is based on RV670, they've gone to a 3:1 ALU:TMU ratio (480 ALUs packed into groups of 5 is 96 "vec5"-like shaders). Also, if they can speed up AA resolve on the shaders, I think AMD/ATi can be very competitive with their parts.
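For reference, the ratio arithmetic in that last paragraph works out as follows; the unit counts are the rumored ones discussed in this thread.

```python
# Ratio arithmetic from the paragraph above (rumored unit counts).
def alu_tmu_ratio(alus, tmus, slots_per_unit=5):
    vec_units = alus // slots_per_unit        # ALUs packed into "vec5"-like units
    return vec_units, vec_units // tmus

for name, alus, tmus in (("R600/RV670", 320, 16), ("RV770 (rumored)", 480, 32)):
    units, ratio = alu_tmu_ratio(alus, tmus)
    print(f"{name}: {units} vec5 units vs {tmus} TMUs -> {ratio}:1")
# R600/RV670: 64 vs 16 -> 4:1; RV770: 96 vs 32 -> 3:1
```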
 

superbooga

Senior member
Jun 16, 2001
333
0
0
Originally posted by: Extelleron
GPUs are extremely parallel and are not like CPUs where you have to worry about all resources (cores) being utilized.

Being extremely parallel actually exacerbates the problem.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: ArchAngel777
Originally posted by: bryanW1995
No, he said that doubling EVERYTHING does nothing for your overall performance. He didn't say that it did nothing to help you find out if the TMUs were the bottleneck.

That is what he said, but that isn't what he meant. This much is obvious. We're human; we make mistakes and do not always type out properly what we mean to convey.

I wasn't trying to nitpick what he said, it's just that starting off a long post with a statement like that could easily confuse some of our n00bs...leaving them vulnerable to getting "rollo'd"
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
@ allies Compared to G80/G9x I'd say yes, especially since all R6x0 architectures are based on a VLIW design with vec5-like shaders. This means that increasing or sustaining acceptable utilization requires optimizing at a driver level (since the compiler will be the bottleneck in this case) for each individual game. While ATi's architecture has massive peak floating-point numbers, it made life a whole lot harder to reach those figures, since games out now and from the past all use varying shaders, channel lengths, register counts, etc.

Didn't ATi put a Thread Scheduler on the chip to offload the work of optimizing and maximizing GPU utilization from the driver, hence lower CPU overhead? That's why the magic driver that was supposed to push the performance of this architecture beyond 8800 GTX levels never made it; it seems the TMU bottleneck is the main cause, and probably the Thread Scheduler isn't working as efficiently as it did with the R5X0 architecture. Also, considering the TMU handicap, the VLIW architecture and anti-aliasing resolve on shaders, the card isn't doing that badly after all, but it definitely needs enhancements at the architectural level.

 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
I wouldn't get your hopes up for either the new Nvidia or ATI cards; the last lot were a big disappointment.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: evolucion8
Originally posted by: Cookie Monster
@ allies Compared to G80/G9x I'd say yes, especially since all R6x0 architectures are based on a VLIW design with vec5-like shaders. This means that increasing or sustaining acceptable utilization requires optimizing at a driver level (since the compiler will be the bottleneck in this case) for each individual game. While ATi's architecture has massive peak floating-point numbers, it made life a whole lot harder to reach those figures, since games out now and from the past all use varying shaders, channel lengths, register counts, etc.

Didn't ATi put a Thread Scheduler on the chip to offload the work of optimizing and maximizing GPU utilization from the driver, hence lower CPU overhead? That's why the magic driver that was supposed to push the performance of this architecture beyond 8800 GTX levels never made it; it seems the TMU bottleneck is the main cause, and probably the Thread Scheduler isn't working as efficiently as it did with the R5X0 architecture. Also, considering the TMU handicap, the VLIW architecture and anti-aliasing resolve on shaders, the card isn't doing that badly after all, but it definitely needs enhancements at the architectural level.

No. I think you are misunderstanding the difference between a thread scheduler and the compiler. Threads are simply streams of execution, and the thread scheduler simply prioritizes these to maintain instruction/data throughput for the entire chip. I guess it's another factor in utilization, but not as big a factor as the compiler.

So the thread scheduler is there to feed the shader core efficiently. However, the compiler (you could even call it the assembler, since it "packs" instructions) does all the work of making sure that each of these "Vec5"-like shader units is kept busy all the time. This is where it gets tricky, because games use shaders of all shapes and sizes. That's why software engineers have to optimize at a driver level; there are more gains to be had than by simply leaving it to the general algorithm that handles all the compiling.
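A toy sketch of that packing problem, not ATI's actual compiler: if the ops in a shader mostly depend on one another, a greedy packer can only fill a fraction of each 5-wide bundle, which is why peak VLIW throughput is so hard to reach without per-game tuning.

```python
# Toy "packing" sketch (purely illustrative, not ATI's real compiler): independent
# scalar ops can share one vec5 bundle, but an op that depends on a result produced
# in the same bundle has to wait for the next one. Mostly-serial shaders therefore
# leave most of the 5 slots empty no matter how many ALUs the chip has.

def pack_vec5(ops, width=5):
    """ops: list of (op_id, depends_on_id or None). Greedy packing into bundles."""
    bundles, current, completed = [], [], set()
    for op, dep in ops:
        # start a new bundle if this one is full or the dependency hasn't retired yet
        if len(current) == width or (dep is not None and dep not in completed):
            completed.update(current)
            bundles.append(current)
            current = []
        current.append(op)
    bundles.append(current)
    return bundles

serial = [(i, i - 1 if i > 0 else None) for i in range(10)]   # each op needs the previous result
parallel = [(i, None) for i in range(10)]                     # fully independent ops

for name, ops in (("serial shader", serial), ("parallel shader", parallel)):
    b = pack_vec5(ops)
    print(f"{name}: {len(b)} bundles, {len(ops) / (len(b) * 5):.0%} of peak slot usage")
# serial shader: 10 bundles, 20% of peak; parallel shader: 2 bundles, 100% of peak
```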
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
I'll assume, without reading this entire thread, that nearly everything is, thus far, speculation and rumor? Someone bump this thread and edit the title, or post a new thread, when solid information is obtained, k?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
Originally posted by: evolucion8
Originally posted by: Cookie Monster
@ allies Compared to G80/G9x I'd say yes, especially since all R6x0 architectures are based on a VLIW design with vec5-like shaders. This means that increasing or sustaining acceptable utilization requires optimizing at a driver level (since the compiler will be the bottleneck in this case) for each individual game. While ATi's architecture has massive peak floating-point numbers, it made life a whole lot harder to reach those figures, since games out now and from the past all use varying shaders, channel lengths, register counts, etc.

Didn't ATi put a Thread Scheduler on the chip to offload the work of optimizing and maximizing GPU utilization from the driver, hence lower CPU overhead? That's why the magic driver that was supposed to push the performance of this architecture beyond 8800 GTX levels never made it; it seems the TMU bottleneck is the main cause, and probably the Thread Scheduler isn't working as efficiently as it did with the R5X0 architecture. Also, considering the TMU handicap, the VLIW architecture and anti-aliasing resolve on shaders, the card isn't doing that badly after all, but it definitely needs enhancements at the architectural level.


No. I think you are misunderstanding the difference between a thread scheduler and the compiler. Threads are simply streams of execution, and the thread scheduler simply prioritizes these to maintain instruction/data throughput for the entire chip. I guess it's another factor in utilization, but not as big a factor as the compiler.

So the thread scheduler is there to feed the shader core efficiently. However, the compiler (you could even call it the assembler, since it "packs" instructions) does all the work of making sure that each of these "Vec5"-like shader units is kept busy all the time. This is where it gets tricky, because games use shaders of all shapes and sizes. That's why software engineers have to optimize at a driver level; there are more gains to be had than by simply leaving it to the general algorithm that handles all the compiling.

My bad, I meant the Ultra Thread Dispatcher. I saw in many R600 architecture reviews that it is quite handy for offloading optimization work from the driver, etc. Unless I misunderstood it, that's what was stated.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
Ultra thread dispatcher's just a fancy term for the thread scheduler (thread arbiter + sequencer).

They're running out of thread-based names, like Hyper-Threading, Ultra Thread Dispatcher, GigaThread... now what? Mega Thread, TeraThread, PetaThread, super thread, Supercalifragilisticexpialidocious thread :laugh:
 

SinfulWeeper

Diamond Member
Sep 2, 2000
4,567
11
81
Originally posted by: Extelleron
Originally posted by: chizow
After seeing more firm specs on the RV770, I don't think there's too much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink or SLI-on-a-card solution of their own, or both. All while maintaining a comfortable lead at the high-end with a $2000 GT200 Tri-SLI solution.

As for the 4870, I don't think it'll be much faster than the 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. The jump from 16 to 32 TMUs seems to be the biggest gain here, and TMUs were specifically mentioned as a major bottleneck for ATi's R600 parts. Still, that only puts ATI's texture fill-rate equivalent to a 9600GT, not counting any advantages from different vendor design. The rest of the specs seem rather unspectacular with questionable gains, although shaders may also scale well, as that seemed to be another weak point of R600. Going from 64 to 96 real shaders, or 320 to 480 superscalar, along with unlinked shader clocks, should help close any gaps in shader performance in unoptimized games where NV held a lead previously.

This part would've been a great answer to G80/G92 6 months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late, mostly competing with G80/G92 and made obsolete again when NV fires back with GT200 later this quarter.

If you look at the specifications vs. the performance of the 3870, then you are dead wrong.

Looking at pure numbers, the HD 4870 is a solid ~2X improvement in just about every area of the GPU over HD 3870. The only area where performance hasn't been improved much is the ROP area; the ROPs are not much of a bottleneck, and with a faster core speed, ATI already has a significant advantage in that area over nVidia.

Looking at the numbers to back up what I said:

In terms of shader performance:
HD 4870 (480 * 2 * 1.050) = 1008
HD 3870 (320 * 2 * 0.775) = 496
4870 = 2.03X 3870

In terms of texture performance:
HD 4870 (32 * 0.850) = 27.2
HD 3870 (16 * 0.775) = 12.4
4870 = 2.19X 3870

In terms of memory bandwidth:
HD 4870 (3880 * 0.032) = 124.2 GB/s
HD 3870 (2250 * 0.032) = 72.0 GB/s
4870 = 1.725X 3870

The 4870 improves in every aspect significantly, and it is more balanced than the current design. Shader performance remains strong, but now the texture performance is there to back it up. The GPU has plenty of power and also plenty of memory bandwidth to keep it fed.

Looking at 3870 reviews... there is no situation that I can find where doubling the 3870's performance does not equal better performance than the 8800 Ultra. In many cases the gain is very significant.

The 4870 X2 should definitely exceed the performance of the 9800GX2 by a wide margin. Obviously GT200 is another story. But how powerful is GT200 really going to be? Considering the current die size of G92, which is huge as it is, how much room does nVidia have to expand on it? I cannot imagine that GT200 would be anything more than 40-50% faster than 9800GX2 if it is a single GPU, and from what I see that would make it slightly faster than the 4870 X2, if that.

The problem with nVidia is of course die size, as I mentioned. From what I have seen, RV770 should be much smaller than nVidia's G92 and exceed its performance greatly. Even nVidia moving to a 55nm process would likely make G92 around equal to RV770 in die size. Considering rumors point to GT200 on 65nm, it would likely be a chip like G80, in the range of ~500mm^2. That's not a GPU that any company wants to produce; it is a lot easier and cheaper to fab (2) 250mm^2 chips than to fab a single 500mm^2 chip. If GT200 is really single GPU, then that will likely be the situation.

As for nVidia responding with a die shrink, they are usually way behind in moving to a new process. AMD moved to 55nm in Nov 07, and nVidia does not have a single 55nm GPU out 5 months later. By the end of this year, RV770 could be shrunk to 45nm if TSMC's process is ready in time.

Might possibly make sense from a business perspective. Or in Bill Gates' ultra-efficient home. But in real-world, Joe Blow terms, die size is no argument if there is another product that performs better. 55nm tech can blow me, since 65nm technology is performing better and, as of late, the only thing AMD can do is lower prices.

If this comes out and performs better, I might just update this response. Till then, Nvidia all the way. They're taking 2 companies on and doing very well on both fronts.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I personally don't care to get into who's the king, NV or ATI. But I prefer ATI cards to NV. I just like ATI's IQ, even though NV's is really good.

But if the new 4000 series improves performance by 50%, I will be more than happy with an ATI 4870 X2. It will do me great till Larrabee. Then I will switch to Intel, not for frame rates but for what other tasks Larrabee should be able to perform.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Nemesis 1
I personally don't care to get into who's the king, NV or ATI. But I prefer ATI cards to NV. I just like ATI's IQ, even though NV's is really good.

But if the new 4000 series improves performance by 50%, I will be more than happy with an ATI 4870 X2. It will do me great till Larrabee. Then I will switch to Intel, not for frame rates but for what other tasks Larrabee should be able to perform.

This is OT for a second, but will the Larrabee platform still allow for a discrete graphics card add-in? Because Larrabee will most likely not offer anywhere near the performance of competing AMD/Nvidia cards. And then we have Intel's drivers to deal with. Who knows how that will go.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I would say yes. I just don't think they'd commit suicide like that.

How many years was Intel pushing mobos without an AGP slot due to their integrated graphics? Don't rule out them trying to force nV out of a good chunk of the market by not supporting an interface nV can use. Since they started offering a graphics-capable add-in slot across their product line, Intel's market share has fallen sharply, and I'm sure they have noticed this.
 

Dkcode

Senior member
May 1, 2005
995
0
0
Originally posted by: BenSkywalker
I would say yes. I just don't think they'd commit suicide like that.

How many years was Intel pushing mobos without an AGP slot due to their integrated graphics? Don't rule out them trying to force nV out of a good chunk of the market by not supporting an interface nV can use. Since they started offering a graphics-capable add-in slot across their product line, Intel's market share has fallen sharply, and I'm sure they have noticed this.

Which would force enthusiasts and hardcore gamers into buying Nvidia based motherboards.

Does not seem like a good way to make more money.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Has anyone confirmed that NV has a license to produce a chipset for any of the Nehalem sockets? If not, we may soon be looking at a choice of Intel CPU + feeble graphics vs. feeble AMD CPU + enthusiast graphics. Much worse than having to choose between SLI and a decent chipset today. Could be just the thing to give AMD the boost they need, if they can last that long.

Of course, it might not be long until AMD pulls a similar stunt. They don't need NV constantly eating their cake either; I'm sure they'd love to slow down the pace of graphics R&D. And if Intel's best enthusiast solution is low-end, they could leverage this to move enthusiasts to their platform.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: SinfulWeeper
Originally posted by: Extelleron
Originally posted by: chizow
After seeing more firm specs on the RV770, I don't think there's too much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink or SLI-on-a-card solution of their own, or both. All while maintaining a comfortable lead at the high-end with a $2000 GT200 Tri-SLI solution.

As for the 4870, I don't think it'll be much faster than the 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. The jump from 16 to 32 TMUs seems to be the biggest gain here, and TMUs were specifically mentioned as a major bottleneck for ATi's R600 parts. Still, that only puts ATI's texture fill-rate equivalent to a 9600GT, not counting any advantages from different vendor design. The rest of the specs seem rather unspectacular with questionable gains, although shaders may also scale well, as that seemed to be another weak point of R600. Going from 64 to 96 real shaders, or 320 to 480 superscalar, along with unlinked shader clocks, should help close any gaps in shader performance in unoptimized games where NV held a lead previously.

This part would've been a great answer to G80/G92 6 months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late, mostly competing with G80/G92 and made obsolete again when NV fires back with GT200 later this quarter.

If you look at the specifications vs. the performance of the 3870, then you are dead wrong.

Looking at pure numbers, the HD 4870 is a solid ~2X improvement in just about every area of the GPU over HD 3870. The only area where performance hasn't been improved much is the ROP area; the ROPs are not much of a bottleneck, and with a faster core speed, ATI already has a significant advantage in that area over nVidia.

Looking at the numbers to back up what I said:

In terms of shader performance:
HD 4870 (480 * 2 * 1.050) = 1008
HD 3870 (320 * 2 * 0.775) = 496
4870 = 2.03X 3870

In terms of texture performance:
HD 4870 (32 * 0.850) = 27.2
HD 3870 (16 * 0.775) = 12.4
4870 = 2.19X 3870

In terms of memory bandwidth:
HD 4870 (3880 * 0.032) = 124.2 GB/s
HD 3870 (2250 * 0.032) = 72.0 GB/s
4870 = 1.725X 3870

The 4870 improves in every aspect significantly, and it is more balanced than the current design. Shader performance remains strong, but now the texture performance is there to back it up. The GPU has plenty of power and also plenty of memory bandwidth to keep it fed.

Looking at 3870 reviews... there is no situation that I can find where doubling the 3870's performance does not equal better performance than the 8800 Ultra. In many cases the gain is very significant.

The 4870 X2 should definitely exceed the performance of the 9800GX2 by a wide margin. Obviously GT200 is another story. But how powerful is GT200 really going to be? Considering the current die size of G92, which is huge as it is, how much room does nVidia have to expand on it? I cannot imagine that GT200 would be anything more than 40-50% faster than 9800GX2 if it is a single GPU, and from what I see that would make it slightly faster than the 4870 X2, if that.

The problem with nVidia is of course die size, as I mentioned. From what I have seen, RV770 should be much smaller than nVidia's G92 and exceed its performance greatly. Even nVidia moving to a 55nm process would likely make G92 around equal to RV770 in die size. Considering rumors point to GT200 on 65nm, it would likely be a chip like G80, in the range of ~500mm^2. That's not a GPU that any company wants to produce; it is a lot easier and cheaper to fab (2) 250mm^2 chips than to fab a single 500mm^2 chip. If GT200 is really single GPU, then that will likely be the situation.

As for nVidia responding with a die shrink, they are usually way behind in moving to a new process. AMD moved to 55nm in Nov 07, and nVidia does not have a single 55nm GPU out 5 months later. By the end of this year, RV770 could be shrunk to 45nm if TSMC's process is ready in time.

Might possibly make sense from a business perspective. Or in Bill Gates' ultra-efficient home. But in real-world, Joe Blow terms, die size is no argument if there is another product that performs better. 55nm tech can blow me, since 65nm technology is performing better and, as of late, the only thing AMD can do is lower prices.

If this comes out and performs better, I might just update this response. Till then, Nvidia all the way. They're taking 2 companies on and doing very well on both fronts.

It matters a lot to the business people, and the business people run nVidia/AMD and decide what kind of products will be produced. Die size and yield make a big difference to them... will they go with a huge chip with low yield, or two smaller chips with higher yield?

Performance of ATI's cards versus the 8/9 series cards has nothing to do with process technology; 55nm tech outperforms 65nm tech in terms of die size and power consumption, but it doesn't have an effect on transistor performance, as it is just a half-node shrink of 65nm.
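The yield side of that argument can be sketched with the standard Poisson defect model; the defect density below is an assumed round number for illustration, not an actual TSMC figure.

```python
# Back-of-envelope yield sketch using a simple Poisson defect model
# (yield ~ exp(-defect_density * die_area)). The defect density is an assumed
# round number for illustration only, not a real TSMC figure.
import math

def poisson_yield(area_mm2, defects_per_cm2=0.5):
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

big, small = poisson_yield(500), poisson_yield(250)   # ~GT200-class vs ~RV770-class die
print(f"500 mm^2 die: {big:.0%} yield, 250 mm^2 die: {small:.0%} yield")
# Even needing two small dice per X2 board, the good-silicon-per-wafer math
# favors the smaller chip when the yield gap is this large.
```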
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: v8envy
Has anyone confirmed that NV has a license to produce a chipset for any of the Nehalem sockets? If not, we may soon be looking at a choice of Intel CPU + feeble graphics vs. feeble AMD CPU + enthusiast graphics. Much worse than having to choose between SLI and a decent chipset today. Could be just the thing to give AMD the boost they need, if they can last that long.

Of course, it might not be long until AMD pulls a similar stunt. They don't need NV constantly eating their cake either; I'm sure they'd love to slow down the pace of graphics R&D. And if Intel's best enthusiast solution is low-end, they could leverage this to move enthusiasts to their platform.

I've been told twice by NVIDIA (once in a conference call, and once in an email) that their cross licensing agreement with Intel covers Nehalem and any Intel CPU for that matter.

In AMD's current financial position, I'm skeptical they would consider limiting their CPU sales by taking nForce motherboards out of the equation.

(I also think they have a similar cross licensing agreement, although this I haven't directly asked)

 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Here's an interesting question: what does nVidia do if AMD does go under? Intel could scoop up ATI on the cheap and just open up their own can of whoop-ass on the house Jen-Hsun built. I think that nVidia would be forced to aggressively pursue a deal with IBM/VIA/AMD or SOMEBODY just to keep Intel from freezing them completely out of the market. Basically, I'm saying that nVidia is probably better off with a weak but still alive DAAMIT.

Of course, Intel would run into some serious anti-trust problems if AMD just went away, so maybe this little 3-way that we've seen lately will last for a little while...

whoops, forgot to post my link:

http://www.nordichardware.com/news,7682.html
 