"AMD’s next-generation family of high-performance graphics cards is expected to ship


GaiaHunter

Diamond Member
Jul 13, 2008
3,649
216
106
Perhaps where you live, you are unfortunate enough to have local vendors price-gouging NV products?

Europe (EU) is indeed a small market...

What happens is that outside the US you don't see the amazing deals you see at places like Fry's, MC, or Newegg.

In Europe, prices are closer to MSRP, and they seem to come down for AMD and NVIDIA in similar chunks.

In Europe, the cheapest GTX 460s I've seen have never been cheaper than the 6850. In fact, it isn't easy to find the GTX 460 1GB at all (at least in Portugal and the UK).

In Europe, while the average 6970 is indeed more expensive than the average GTX 570, you can find a 6970 barely more expensive than the cheapest GTX 570. The same goes for the 6950 1GB/2GB vs. the GTX 560 Ti.

So while in the US market the 6970 might not make much sense when you have a GTX570, in Europe the 6970 and the GTX570 are pretty much the same price.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Well, AMD and Nvidia want to sell CF/SLI. And with Kepler/SI supposedly being 60+% faster again, I don't see the point in gaming at 100+ fps.

True, even 4xSGSSAA (which is enough) costs roughly 50% of your performance, but it looks so good. Crysis 2 has FXAA, not SGSSAA. Have you ever played Just Cause 2? It just looks horrible without some kind of SSAA: massive shimmering on textures and shaders almost everywhere. I don't want to play without SGSSAA anymore, no way. And more choices are always good.

The problem with MSAA is that it doesn't combat shader and texture aliasing. TSSAA only works on certain textures, i.e. transparent ones. I played Crysis with MSAA and it did almost nothing to smooth the image. Turn on TSSAA and you get a huge performance hit anyway, so why not turn on SGSSAA after all?
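
To make the distinction concrete, here's a toy 1-D sketch in plain Python (not GPU code; the framebuffer width and shader frequency are made-up illustrative numbers) of why MSAA leaves shader aliasing alone while SGSSAA suppresses it:

```python
# Toy model: a "pixel shader" that returns a high-frequency value (think
# shimmering specular or a noisy detail texture), resolved two ways.
import math

WIDTH = 16   # pixels in a toy framebuffer row (illustrative)
FREQ = 5.3   # shader-output frequency, above the pixel rate -> aliasing

def pixel_shader(x):
    """High-frequency shader output at scene coordinate x in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * FREQ * x)

def msaa_resolve(px):
    # MSAA: coverage/depth are tested per sample, but the shader runs
    # only ONCE per pixel (at the pixel centre), so shader and texture
    # aliasing pass straight through to the final image.
    return pixel_shader((px + 0.5) / WIDTH)

def sgssaa_resolve(px, samples=4):
    # SGSSAA: the full shader runs at EVERY sample position and the
    # results are averaged -- this is what suppresses shader aliasing.
    return sum(pixel_shader((px + (s + 0.5) / samples) / WIDTH)
               for s in range(samples)) / samples

for px in range(WIDTH):
    print(f"pixel {px:2d}: msaa={msaa_resolve(px):.2f} "
          f"sgssaa={sgssaa_resolve(px):.2f}")
```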

AMD surprised everyone when they reintroduced SGSSAA in their HD5000 Series. And now look where we are today:

  • No SGSSAA in DX10/11 or OpenGL
  • No AAA in DX10/11 or OpenGL unless the dev specifically implements it
  • Blurry "slow" MLAA
  • No downsampling (aka OGSSAA)
It's not only about speed, it's also about the result that you get on your screen. Some people don't care if it shimmers and crawls like an anthill. Others do. Some progress in this area would be greatly appreciated, especially by the enthusiasts they want to sell their highest-end cards and CF to.

Imho,

It didn't surprise everyone, because they already had a super-sampled ability with their earlier Super-AA methods and some ideal capabilities. The hybrid modes actually offered a virtually free super-sampled component.

3D rendering is hitting walls -- it's painfully obvious. I noticed this trend with an X1900 XTX CrossFire platform and could enjoy virtually any title maxed out, with levels of IQ added to boot. Without features to take advantage of GPUs, they would become irrelevant and integrated --- this is why both nVidia and AMD have been trying to improve the gaming experience: adding super-sampling, multi-monitor gaming, GPU physics, GPU processing, stereo 3D, transparency AA, and higher levels of enhancements.

Somewhat ignore the whiners who whine about performance hits --- because it's those performance hits that improve immersion and the gaming experience, and make GPUs more relevant.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Nope. They're keeping the small die strategy. It's worked fine for them since the HD 4000 series: they're able to get the same performance as NVIDIA in all but the top card, while costing a lot less to manufacture, being a lot easier to manufacture, consuming less power, and also being comparatively cheaper to buy. The Cayman die is 389mm2, while GF110 is 529mm2. Clearly a huge difference; GF110 is 36% bigger. For all that, it enabled them to get 10-15% higher performance comparing the top cards based on both architectures, so clearly NVIDIA isn't "smacking them around". If anything, it's AMD's engineers that were much smarter. Making large monolithic dies means higher manufacturing costs, very high power consumption, lower yields, and not a whole lot more performance.
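
A back-of-the-envelope sketch of why that die-size gap matters for cost: the die areas are the ones quoted above, while the wafer price and defect density are purely illustrative assumptions, with a simple Poisson yield model:

```python
# Rough cost-per-die comparison for Cayman (389 mm^2) vs GF110 (529 mm^2).
# WAFER_COST and DEFECT_DENSITY are assumptions for illustration only.
import math

WAFER_DIAMETER_MM = 300
WAFER_COST = 5000        # assumed price of a 40nm wafer in USD (illustrative)
DEFECT_DENSITY = 0.004   # assumed defects per mm^2 (illustrative)

def dies_per_wafer(area_mm2):
    """Classic approximation: gross dies minus edge losses."""
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r ** 2 / area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area_mm2))

def poisson_yield(area_mm2):
    """Bigger dies are more likely to catch a defect."""
    return math.exp(-DEFECT_DENSITY * area_mm2)

for name, area in (("Cayman", 389), ("GF110", 529)):
    good_dies = dies_per_wafer(area) * poisson_yield(area)
    print(f"{name}: {dies_per_wafer(area)} dies/wafer, "
          f"yield {poisson_yield(area):.0%}, "
          f"~${WAFER_COST / good_dies:.0f} per good die")
```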

This gets old! Have you seen nVidia's margins?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I think you need to factor in why NV's GPU division keeps making a huge profit. Their brand recognition is huge. The GTX 580 may be a bad buy for a lot of users, but because it's still the top dog, people flock to buy low-end and mid-range NV GPUs. That, and their HPC sector dominance.

28nm will be huge; there's pent-up demand from several generations on 40nm. They will be going all out or going home, IMO. The small die strategy has given them a platform and a support (market share) base to enable a shot at the crown.

Sales have been indicating that high-end discrete graphics are a shrinking market, due to the influx of IGPs and low-end parts (which most users buy). What NVIDIA and AMD typically do is sell the low-end parts at low margins, while the biggest profit margins are on high-end discrete cards. And the typical user is not willing to spend $400-500 on a GPU for their computer.

This is going to be a questionable market going forward; sales have been declining continuously. So it does make sense to make smaller parts with more performance per watt, because that's where the market is unfortunately heading. Many board makers are hesitant to even commit to the highest-end 28nm parts, which definitely wasn't the case with previous-generation GPUs.

http://www.digitimes.com/news/a20111026PD214.html

"Since high-end discrete graphics cards are currently seeing weakening sales, while demand for mid-range and entry-level graphics cards is gradually being replaced by AMD and Intel's GPU-integrated processors, demand for discrete graphics cards is only expected to become weaker, the sources noted."
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I agree about MLAA/FXAA; these are trash AA modes and I don't use them. I understand the rationale of enabling AA at a small performance hit, and that with so many games using deferred shading, it's about the only solution that doesn't kill framerates. But it looks like s**t! It blurs your entire screen and reduces the quality of textures.

Don't tell that to HardOCP. They think MLAA/FXAA are superior in performance and image quality to traditional MSAA. :hmm:

Look at Crysis 2 -- the game looks significantly worse with AA on! Blur fest.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I think you need to factor in why NV's GPU division keeps making a huge profit. Their brand recognition is huge. The GTX 580 may be a bad buy for a lot of users, but because it's still the top dog, people flock to buy low-end and mid-range NV GPUs. That, and their HPC sector dominance.

28nm will be huge; there's pent-up demand from several generations on 40nm. They will be going all out or going home, IMO. The small die strategy has given them a platform and a support (market share) base to enable a shot at the crown.

I tend to look at things differently. When I think of the higher-end chip, I think of its flexibility for SKUs, price points, and three families --- GeForce, Quadro, and Tesla. What kind of revenue and margins can the hardware architecture and software bring in?

The small die strategy leaves performance on the table and involves too many trade-offs, sacrificing growth, added revenue, and margins.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Don't tell that to HardOCP. They think MLAA/FXAA are superior in performance and image quality to traditional MSAA. :hmm:

Look at Crysis 2 -- the game looks significantly worse with AA on! Blur fest.

Maybe their view is based more on 2560x1600 and surround gaming, where the combination of higher resolution and MLAA/FXAA helps -- considering MSAA doesn't touch texture artifacts, while MLAA/FXAA do help a bit with alpha, specular, and shader aliasing.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Sales have been indicating that high-end discrete graphics are a shrinking market, due to the influx of IGPs and low-end parts (which most users buy). What NVIDIA and AMD typically do is sell the low-end parts at low margins, while the biggest profit margins are on high-end discrete cards. And the typical user is not willing to spend $400-500 on a GPU for their computer.

This is going to be a questionable market going forward; sales have been declining continuously. So it does make sense to make smaller parts with more performance per watt, because that's where the market is unfortunately heading. Many board makers are hesitant to even commit to the highest-end 28nm parts, which definitely wasn't the case with previous-generation GPUs.

http://www.digitimes.com/news/a20111026PD214.html

"Since high-end discrete graphics cards are currently seeing weakening sales, while demand for mid-range and entry-level graphics cards is gradually being replaced by AMD and Intel's GPU-integrated processors, demand for discrete graphics cards is only expected to become weaker, the sources noted."

The key is attach rates -- watch for when those go down.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Maybe their view is based more on 2560x1600 and surround gaming, where the combination of higher resolution and MLAA/FXAA helps -- considering MSAA doesn't touch texture artifacts, while MLAA/FXAA do help a bit with alpha, specular, and shader aliasing.

OK, but if you have blurred textures, all the benefits of AA are eliminated. The point of AA is to improve the sharpness of a game's objects by reducing aliasing, not to reduce sharpness by blurring textures. It's like applying an AA filter that works against your AF on textures.

Also, MLAA often produces a worse performance hit than MSAA, Edge Detect, or EQAA modes.
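
A crude 1-D stand-in for what these post filters do (the threshold is an assumed illustrative value; real MLAA/FXAA are considerably smarter) that shows why the blur is unavoidable: the filter only sees the final image, so a sharp texture detail and a polygon edge look identical to it:

```python
# Post-process AA operates on the finished frame: any sharp luma step gets
# blended, whether it is a geometric edge (good) or texture detail (blur).
EDGE_THRESHOLD = 0.25  # assumed luma-difference threshold (illustrative)

def post_aa(row):
    """Blend every pixel whose neighbours differ sharply in luma."""
    out = list(row)
    for i in range(1, len(row) - 1):
        if (abs(row[i] - row[i - 1]) > EDGE_THRESHOLD or
                abs(row[i] - row[i + 1]) > EDGE_THRESHOLD):
            # No geometry information here, so edge and texture get the
            # same treatment -- hence the blurred textures.
            out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out

polygon_edge   = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # jaggy we WANT smoothed
texture_detail = [0.2, 0.9, 0.2, 0.9, 0.2, 0.9]   # crisp texel pattern

print("edge:   ", [round(v, 2) for v in post_aa(polygon_edge)])
print("texture:", [round(v, 2) for v in post_aa(texture_detail)])
```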

True, even 4xSGSSAA (which is enough) costs roughly 50% of your performance, but it looks so good. ...so why not turn on SGSSAA after all?

As far as I am aware, Super Sampling AA doesn't work in DX10 and DX11 modes.

The 4x Edge Detect and/or 4xEQAA mode is often a good compromise too.

This "The Upgrade Challenge - Know Your 3D Options" article highlights the differences well.


This is going to be a questionable market going forward; sales have been declining continuously. So it does make sense to make smaller parts with more performance per watt, because that's where the market is unfortunately heading. Many board makers are hesitant to even commit to the highest-end 28nm parts, which definitely wasn't the case with previous-generation GPUs.

If anything, it's AMD's engineers that were much smarter. Making large monolithic dies means higher manufacturing costs, very high power consumption, lower yields, and not a whole lot more performance.

Maybe for AMD the smaller die strategy works, since their GPUs don't make a lot of $ for them outside of gaming applications. As a result, the cost of R&D cannot be allocated across various profitable product offerings, as is the case for NV. Also, AMD isn't selling their top-end HD 6970 for $400-500, are they? Why aren't they interested in making more $$$ and increasing their ASP, in turn improving profitability?

NV's engineers are no less capable. The difference is NV's business strategy -- their GPUs are also strong sellers in GPGPU/professional segments. NV makes a lot of $ selling their cards in Quadro and Tesla lines. NV's strategy has definitely worked better than AMD's in the last 5+ years. They are making way more $ by selling their GPUs in markets outside of gaming and have far higher profit margins as a result. On top of that, NV has something like a 59% desktop discrete market share in gaming too. NV's engineers weren't stupid when they made Fermi a GPGPU beast. About a year ago, I hypothesized that it was rather intentional/strategic.

The large die size is simply the result of a far more well-rounded GPU that can be used for general-purpose computing (i.e., scientific and financial tasks, CUDA apps, etc.) and excels outside of gaming. OTOH, AMD primarily focuses on a lean gaming GPU. We'll have to see if GCN changes this. But so far, AMD has made no headway whatsoever in the high-end workstation, scientific, or financial communities, because their GPUs/drivers are far less adept at general-purpose computing. Overall, then, NV's engineers have actually done a far better job. They were able to produce a GPU that excels in both gaming and GPGPU tasks, something AMD's Cayman/Cypress cannot do.

I agree with you that the performance/watt part is critical though.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I believe AMD's GCN will have lower performance per die size than Evergreen (5800) and Cayman (6900).

The transistor count could be double that of Cayman, but the SP (streaming processor) count will not double.

By going the GPGPU route like NV, they will sacrifice gaming performance per die size. That means a GCN-architecture graphics chip at the same die size as Cayman will have lower gaming performance than an Evergreen/Cayman-architecture chip at 28nm, and that will make them even less competitive against NV's single 500mm2+ Kepler chip if they aim for a small die this time.

The GCN architecture could give a significant jump in DX11 tessellation performance due to more tessellation units.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
OK, but if you have blurred textures, all the benefits of AA are eliminated. The point of AA is to improve the sharpness of a game's objects by reducing aliasing, not to reduce sharpness by blurring textures. It's like applying an AA filter that works against your AF on textures.

Also, MLAA often produces a worse performance hit than MSAA, Edge Detect, or EQAA modes.

That may depend on the title, though, and higher resolutions make games clearer. AF isn't just about offering texture detail but about removing texture artifacts, too.

Trade-offs -- and in HardOCP's subjective view, they like them.


As far as I am aware, Super Sampling AA doesn't work in DX10 and DX11 modes.

The 4x Edge Detect and/or 4xEQAA mode is often a good compromise too.

This "The Upgrade Challenge - Know Your 3D Options" article highlights the differences well.

nVidia offers transparency AA in DirectX 10 and 11, and also super-sampling in DirectX 10 and 11.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
By going the GPGPU route like NV, they will sacrifice gaming performance per die size. That means a GCN-architecture graphics chip at the same die size as Cayman will have lower gaming performance than an Evergreen/Cayman-architecture chip at 28nm, and that will make them even less competitive against NV's single 500mm2+ Kepler chip if they aim for a small die this time.

Not necessarily. Quoting some of the write-up from AnandTech's GCN pre-review:

For starters, each GCN Compute Unit will be able to do more work. Also, one of the weaknesses of VLIW is that it’s statically scheduled ahead of time by the compiler. As a result, if any dependencies crop up while code is being executed, there is no deviation from the schedule and VLIW slots go unused. With a GCN Compute Unit, scheduling is moved from the compiler to the hardware. It is the CU that is now scheduling execution within its domain. With these hardware changes, GCN will be able to concurrently work on tasks with relative ease. Overall the scalar unit can execute one instruction per cycle, which means it can complete 4 instructions over the period of time it takes for one wavefront to be completed on a SIMD (VLIW-4 design).

"Cayman’s dual graphics engines have been replaced with multiple primitive pipelines, which will serve the same general purpose of geometry and fixed-function processing. Primative pipelines will be responsible for tessellation, geometry, and high-order surface processing among other things. Whereas Cayman was limited to 2 such units, GCN will be fully scalable, so AMD will be able to handle incredibly large amounts of geometry if necessary."
- AnandTech
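
For what it's worth, the arithmetic behind that scalar-unit claim falls out of the wavefront and SIMD widths in AMD's public GCN material:

```python
# One GCN wavefront is 64 work-items; a vector SIMD is 16 lanes wide, so a
# wavefront instruction occupies the SIMD for 64 / 16 = 4 cycles. A scalar
# unit issuing one instruction per cycle thus completes 4 instructions in
# the same window.
WAVEFRONT_SIZE = 64   # work-items per wavefront
SIMD_WIDTH = 16       # lanes per vector SIMD

cycles_per_wavefront_op = WAVEFRONT_SIZE // SIMD_WIDTH
print(f"{cycles_per_wavefront_op} cycles per wavefront instruction -> "
      f"{cycles_per_wavefront_op} scalar instructions complete in that time")
```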


Thanks for reminding me to re-read about the GCN architecture. It got me more excited about the HD 7900 series.

Also, we are assuming AMD doesn't have more tricks up its sleeve. There are still other ways to improve performance. AMD can include virtual memory space (which will allow CPUs and GPUs to use the "unified" virtual memory), pre-emption, enhance the ability of GPU to autonomously process the data without the help of CPU and so on.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
That's great news because GPU processing will be utilized more and tessellation will be used more. I'm not pleased with the amount of tessellation used today and desire much, much more to improve realism and immersion.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Maybe for AMD the smaller die strategy works, since their GPUs don't make a lot of $ for them outside of gaming applications. As a result, the cost of R&D cannot be allocated across various profitable product offerings, as is the case for NV. Also, AMD isn't selling their top-end HD 6970 for $400-500, are they? Why aren't they interested in making more $$$ and increasing their ASP, in turn improving profitability?

NV's engineers are no less capable. The difference is NV's business strategy -- their GPUs are also strong sellers in GPGPU/professional segments. NV makes a lot of $ selling their cards in Quadro and Tesla lines. NV's strategy has definitely worked better than AMD's in the last 5+ years. They are making way more $ by selling their GPUs in markets outside of gaming and have far higher profit margins as a result. On top of that, NV has something like a 59% desktop discrete market share in gaming too. NV's engineers weren't stupid when they made Fermi a GPGPU beast. About a year ago, I hypothesized that it was rather intentional/strategic.

The large die size is simply the result of a far more well-rounded GPU that can be used for general-purpose computing (i.e., scientific and financial tasks, CUDA apps, etc.) and excels outside of gaming. OTOH, AMD primarily focuses on a lean gaming GPU. We'll have to see if GCN changes this. But so far, AMD has made no headway whatsoever in the high-end workstation, scientific, or financial communities, because their GPUs/drivers are far less adept at general-purpose computing. Overall, then, NV's engineers have actually done a far better job. They were able to produce a GPU that excels in both gaming and GPGPU tasks, something AMD's Cayman/Cypress cannot do.

I agree with you that the performance/watt part is critical though.

AMD has never really been more successful than its competitors in the enterprise/professional market. That's probably the reason they get lower margins. NVIDIA's strategy has definitely been better, but that doesn't mean the GPUs themselves have been better... for consumers anyway, since that's what I'm referring to. NVIDIA definitely makes better general-purpose GPUs, but AMD's small die strategy is better for consumers. NVIDIA should find a way to slim down consumer GPUs. I imagine it would cost a lot, but what about going two separate ways: make one GPU for the GeForce line and another for the Tesla and Quadro lines, instead of making one [GF110] and tweaking it a bit to perform better at one thing or the other? Wouldn't that allow much smaller dies for each line, while dedicating performance solely to what each was meant to do?

I don't think AMD GPUs are gonna gain traction in enterprise for now, even if GCN turns out to be good. NVIDIA has completely taken that market by storm, and they'll make sure they keep their lead there.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
One good thing: in Q3 2010, AMD did take discrete market-share leadership. They had the ball. However, with a more mature, complete Fermi, nVidia dramatically reversed this trend, specifically in discrete mobile, and stayed the course in discrete desktop.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
OK, but if you have blurred textures, all the benefits of AA are eliminated. The point of AA is to improve the sharpness of a game's objects by reducing aliasing, not to reduce sharpness by blurring textures. It's like applying an AA filter that works against your AF on textures.

Also, MLAA often produces a worse performance hit than MSAA, Edge Detect, or EQAA modes.

As far as I am aware, Super Sampling AA doesn't work in DX10 and DX11 modes.

The 4x Edge Detect and/or 4xEQAA mode is often a good compromise too.

This "The Upgrade Challenge - Know Your 3D Options" article highlights the differences well.

Maybe for AMD the smaller die strategy works, since their GPUs don't make a lot of $ for them outside of gaming applications. As a result, the cost of R&D cannot be allocated across various profitable product offerings, as is the case for NV. Also, AMD isn't selling their top-end HD 6970 for $400-500, are they? Why aren't they interested in making more $$$ and increasing their ASP, in turn improving profitability?

NV's engineers are no less capable. The difference is NV's business strategy -- their GPUs are also strong sellers in GPGPU/professional segments. NV makes a lot of $ selling their cards in Quadro and Tesla lines. NV's strategy has definitely worked better than AMD's in the last 5+ years. They are making way more $ by selling their GPUs in markets outside of gaming and have far higher profit margins as a result. On top of that, NV has something like a 59% desktop discrete market share in gaming too. NV's engineers weren't stupid when they made Fermi a GPGPU beast. About a year ago, I hypothesized that it was rather intentional/strategic.

The large die size is simply the result of a far more well-rounded GPU that can be used for general-purpose computing (i.e., scientific and financial tasks, CUDA apps, etc.) and excels outside of gaming. OTOH, AMD primarily focuses on a lean gaming GPU. We'll have to see if GCN changes this. But so far, AMD has made no headway whatsoever in the high-end workstation, scientific, or financial communities, because their GPUs/drivers are far less adept at general-purpose computing. Overall, then, NV's engineers have actually done a far better job. They were able to produce a GPU that excels in both gaming and GPGPU tasks, something AMD's Cayman/Cypress cannot do.

I agree with you that the performance/watt part is critical though.

But ATI was at the forefront of GPU processing and GPU physics at the same time as nVidia -- ATI wouldn't STFU about it, just like nVidia.

Something happened or changed when AMD took over; the focus may have changed, and resources may have been spent on Fusion.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Not necessarily. Quoting some of the write-up from AnandTech's GCN pre-review:

For starters, each GCN Compute Unit will be able to do more work. Also, one of the weaknesses of VLIW is that it’s statically scheduled ahead of time by the compiler. As a result, if any dependencies crop up while code is being executed, there is no deviation from the schedule and VLIW slots go unused. With a GCN Compute Unit, scheduling is moved from the compiler to the hardware. It is the CU that is now scheduling execution within its domain. With these hardware changes, GCN will be able to concurrently work on tasks with relative ease. Overall the scalar unit can execute one instruction per cycle, which means it can complete 4 instructions over the period of time it takes for one wavefront to be completed on a SIMD (VLIW-4 design).

VLIW-5/4 is much better suited to graphics, and scalar (NVIDIA) is better for GPGPU.
For the scheduling to be performed in hardware, more logic is needed, thus more transistors, thus a bigger die.

"Cayman’s dual graphics engines have been replaced with multiple primitive pipelines, which will serve the same general purpose of geometry and fixed-function processing. Primative pipelines will be responsible for tessellation, geometry, and high-order surface processing among other things. Whereas Cayman was limited to 2 such units, GCN will be fully scalable, so AMD will be able to handle incredibly large amounts of geometry if necessary."
- AnandTech
These are the same design principles as NVIDIA's Fermi architecture, which I'm sure will continue with Kepler.
More primitive pipelines will balloon the transistor count and die size (the same happened with Cayman); to keep the architecture balanced, more primitives will need more cores, more cores will need more memory/ROPs, etc., and all of that makes for a bigger chip.
Thanks for reminding me to re-read about the GCN architecture. It got me more excited about the HD 7900 series.


Also, we are assuming AMD doesn't have more tricks up its sleeve. There are still other ways to improve performance. AMD can include virtual memory space (which will allow CPUs and GPUs to use the "unified" virtual memory), pre-emption, enhance the ability of GPU to autonomously process the data without the help of CPU and so on.
I have no idea if they will implement this in the next architecture (7xxx).
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
but what about going two separate ways: make one GPU for the GeForce line and another for the Tesla and Quadro lines, instead of making one [GF110] and tweaking it a bit to perform better at one thing or the other? Wouldn't that allow much smaller dies for each line, while dedicating performance solely to what each was meant to do?

Both AMD and NVIDIA are doing that already.

AMD's 68xx series and GF114 (GTX 460 and 560/Ti) are made for higher gaming performance per die size and per watt, while the 69xx and GF110 were meant for DX11 tessellation and GPGPU (although GF114 has 64-bit FP).

Maybe they will both do the same again at 28nm: the 78xx series could be a revised HD 69xx with no 64-bit FP and a simpler memory subsystem, in order to maximize performance for games, with no GPGPU characteristics.

NV will most certainly do the same with the 660 (?), although NVIDIA keeps pushing GPGPU across their entire range of GPUs.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
It's a tough one for AMD.
Either they:
a) stick to graphics only. This allows a smaller, more efficient chip, but it'll only sell in consumer graphics markets, which is just about break-even (for the 5 and 6 series this was the case).

b) go for a GPU-compute combo chip. This will lead to a larger, less efficient chip due to all the extra logic, which means that if it is only sold in the consumer graphics market it will lose AMD money. It has to be successful in the higher-margin GPU-compute and professional markets too. However, that's a big ask, as nvidia are so far ahead -- not just in hardware (effectively this will be nvidia's 3rd-gen GPU-compute chip) but in software.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,109
1,260
126
Don't tell that to HardOCP. They think MLAA/FXAA are superior in performance and image quality to traditional MSAA. :hmm:

Look at Crysis 2 -- the game looks significantly worse with AA on! Blur fest.

MLAA/FXAA are the welfare of AA modes. They are absolutely horrible. You turn it on and it looks like another layer of the screen image has been superimposed on top of itself, but just a little off. Blur city. It is just awful.

You can use SGSSAA in Crysis 2 with some tweaks in nvidiainspector; it improves some aspects of the image, but not much, and it hammers your framerate. I played C2 with FXAA/MLAA turned off; the blur from that AA in C2 is even worse than it is in BF3.

All these games using deferred shading are costing us major IQ in terms of proper AA. Blizzard is doing it as well, first with SC2, and now Diablo 3 will use the same thing. You can't run AA in the D3 beta right now either, but again, they will release retail with a DX11 path to allow AA.
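
To put a rough number on why deferred renderers resist MSAA: the whole G-buffer must be stored per sample. A sketch assuming a typical four-target RGBA8 G-buffer layout (an assumption, not any specific game's format):

```python
# G-buffer memory scales linearly with the MSAA sample count, which is why
# deferred-shading games so often fall back on post-process AA instead.
GBUFFER_BYTES_PER_PIXEL = 4 * 4   # four RGBA8 render targets (assumed)
WIDTH, HEIGHT = 1920, 1080

for samples in (1, 2, 4):
    mib = WIDTH * HEIGHT * GBUFFER_BYTES_PER_PIXEL * samples / 2 ** 20
    print(f"{samples}x MSAA: ~{mib:.0f} MiB of G-buffer")
```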
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Imho,

You wanna hear something that helps with aliasing? Stereo 3D. Crysis 2 is littered with aliasing, but use stereo 3D and the aliasing is curbed, and you gain depth and pop-out. It's like gaining a 2x super-sampled ability as a bonus.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
There is so much OT going on in this thread, I have no idea how a Mod hasn't come in here and lost his mind.

Anyway, who sells more for less and makes higher profits is for the most part irrelevant to SI shipping for revenue in December. Prices go up and down relative to each other all the time. Way too many words are being wasted on it.

On topic: "shipping for revenue later this year", unfortunately, doesn't mean you or I will be able to buy one this year. If you are a betting man, wager lots the other way, IMO. I do still believe AMD will get their next gen out months before the competition, though. Assuming they do, let's see what they do with the lead this time. They did jack with it with Evergreen and actually let it evaporate by the time Cayman rolled around. New CEO -- maybe they'll do better this time?
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
They did do a lot -- making profits in the graphics division and garnering market-share leadership -- and certainly enjoyed the fruits of an execution advantage.
 
Feb 19, 2009
10,457
10
76
Don't tell that to HardOCP. They think MLAA/FXAA are superior in performance and image quality to traditional MSAA. :hmm:

Look at Crysis 2 -- the game looks significantly worse with AA on! Blur fest.

Sorry, but you are wrong. The MSAA implementation in BF3 is nothing like traditional MSAA. It's not even like traditional deferred MSAA. They use mesh edge detection that only removes aliasing from a small part of the scene, and it affects texture quality, particularly near texture edges, giving them a blurred look.

Post AA in BF3 on medium generates the best IQ at 1080p and up: a good overall mix of AA and relatively little texture blurring.
 