[Eurogamer] Deep Dive on the PS4 PRO GPU (Polaris + Vega features!)


Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
AMD didn't screw over anyone; this is an absurd statement.

PC Polaris has fewer features than PS4 Polaris, and at the same time AMD delayed Vega until after the PS4 Pro launch.

Show me the error in my reasoning. (Saying it's a custom APU isn't valid, and neither is blaming it on DX11/12.)
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
PC Polaris has fewer features than PS4 Polaris, and at the same time AMD delayed Vega until after the PS4 Pro launch.

Show me the error in my reasoning. (Saying it's a custom APU isn't valid, and neither is blaming it on DX11/12.)
Primarily because a graphics card is advertised to perform a certain way at a certain price. That deal didn't change. And can you provide any proof Vega was delayed? You actually think AMD would purposely delay its next major graphics chip for 2% of consumers' "perceived" value in the PS4 Pro?
 

brandonmatic

Member
Jul 13, 2013
199
21
81
PC Polaris has fewer features than PS4 Polaris, and at the same time AMD delayed Vega until after the PS4 Pro launch.

Show me the error in my reasoning. (Saying it's a custom APU isn't valid, and neither is blaming it on DX11/12.)

That's easy. There are a bunch of errors in your reasoning but the most obvious is your presumption that AMD owed you a duty to sell you a product with the same features as in the PS4 Pro - which may or may not have been ready at the time Polaris launched. The duty AMD owed you is to accurately describe the price, features, and performance of the product it sold you - which it did. So, no, you weren't screwed over.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
The ID buffer seems like it could be a game changer. A pretty obvious innovation in hindsight.

The extra 1GB of DDR3 was a bit of a surprise as well, but does a good job of explaining why the Pro has more usable memory for games.

This GPU is going to punch well above its weight class, certainly above a 480, despite a modest deficit in raw numbers. Console devs will make use of FP16 everywhere they can get away with it, given time. If Vega has the ID buffer as well, I could see it besting the new Titan given proper software support. Of course, I imagine most rendering techniques would be incompatible with other GPUs, making any comparison necessarily apples-to-oranges.

I wonder if AMD could use the ID buffer in Vega to implement something similar to ASW for non-VR games. That would be absolutely massive, as it would effectively double the framerate. Of course, it does come with some caveats that mean it's primarily useful only if you're running a relatively high base framerate in the first place (45 fps or higher).
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
So AMD screwed us, the PC gamers, by selling us a product inferior to what they sold to consoles AND by delaying products superior to the consoles'. Thank you again, and thank you to everyone who recommended buying an RX 480 because it's the same GPU used in the PS4 Pro, which it isn't either.

It's for console games, not PC, and it was requested by Sony, not AMD. If the PC version can't use these features, why should AMD add them?
Also, reported.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Vega and what has been discussed give at least some hope that this will be the first true next-generation GPU from either vendor (Polaris is just slightly tuned Tonga on a 14 nm process, and Pascal is slightly tuned Maxwell on a 16 nm process). If I have to use an analogy, Vega's change of architecture will be like Maxwell was to Kepler, or Pascal GP100 to Maxwell: tuned, but much different.

Pure nonsense. I've been hearing this for every uarch since Kepler came out. The supposition being that Fermi was the last "real" uarch and the rest are minor tweaks on new nodes. The same BS has been claimed (invariably by the same cast of people) with regard to AMD.

If you don't understand something, just claim the current GPU is a tweak of an older GPU on a new node. An old trick, but it seemingly never goes out of fashion for the ignorant. Spend some time at B3D and educate yourself.
 
Last edited:
Reactions: Headfoot

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Awesome write-up by Digital Foundry and great level of detail by Cerny. Unlike MS, which wants to abandon console generations, Cerny outright hints he wants a PS5 as a clean-slate design, even if it means breaking BC with the PS4/Pro. I am looking forward to that. I hope we see a PS5 with 7-8 Tflops by 2020 at the latest, preferably by Fall 2019. Given how great The Order 1886, Uncharted 4, Driveclub, and InFamous First Light look on a 1.8 Tflop PS4, I am pretty excited for a PS5 by the end of this decade. Horizon Zero Dawn looks incredible considering the rather mediocre low-end specs of the PS4. I am not particularly excited about PS4 Pro since Sony will not allow any exclusives. Seems like a nice mid-cycle $399 console for those who haven't purchased the original PS4 yet. Still, the prospects of a 2019-2020 PS5 with a 6-8 core Zen, a Navi+ era GPU and 16GB of HBM2 would provide for a great generational leap.


Well said. With specs like that, native 4K doesn't just become a possibility but almost a mandate; 1080p would be wasted. Herein lies an interesting debate. Even if 4K TVs are selling quite well, the vast majority of TVs are still 1080p. Backwards compatibility would have to be rephrased: the current understanding is that of a previous generation of consoles, but it would then shift to mean a previous generation of TVs with a lower resolution. The bottleneck goes from being your console to your TV.

The same goes for HDR support. Having such a beastly console forced to serve a substantial portion of 1080p TVs as its market means that either games will be held back, or the 1080p version will compensate with effects that don't exist in the 4K version and/or higher framerates.

Either way, it will also be interesting to see what kind of new VR headset would be sold with PS5.
 
Reactions: RussianSensation

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
Pure nonsense. I've been hearing this for every uarch since Kepler came out. The supposition being that Fermi was the last "real" uarch and the rest are minor tweaks on new nodes. The same BS has been claimed (invariably by the same cast of people) with regard to AMD.

If you don't understand something, just claim the current GPU is a tweak of an older GPU on a new node. An old trick, but it seemingly never goes out of fashion for the ignorant. Spend some time at B3D and educate yourself.
Well, if you define a new architecture by how much it gains clock-for-clock over previous generations, then yes, consumer Pascal cards are not a new architecture, but GP100 is.

Nvidia achieves its performance increases by feeding a smaller number of cores, compared to the previous-generation architecture, with the same amount of resources. The resources shared by 192 cores in Kepler and by 128 cores in Maxwell are the same. That's why Nvidia claimed that 128 Maxwell cores deliver 90% of the performance of 192 Kepler cores, which Anandtech reported. How many resources are available to each block of 128 cores in consumer Pascal GPUs? Exactly the same as for 128 Maxwell cores. And how many cores get the same amount of resources as 128 Maxwell cores in Pascal GP100? 64. The differences in Pascal compared to previous generations are the addition of dynamic scheduling, next-generation memory compression, and a redesigned core count per SM. All of the performance/watt gains come from the process itself.

Unfortunately, this is the same architecture as Maxwell with a few tweaks. And remember, Maxwell was supposed to be a 20 nm architecture first, but because that process failed it had to be ported back to 28 nm, with some features excluded from the uarch.
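To make the ratios in that argument concrete, here's a small sketch. The per-SM register-file sizes and FP32 core counts are the figures from the public whitepapers, and the register file is used as a stand-in for "resources" in general; schedulers, shared memory and caches obviously matter too, so treat this as illustrative rather than a complete model.

```python
# Register file per FP32 core as a rough proxy for "resources per core".
# Per-SM figures from the public whitepapers; illustrative only.
sms = {
    "Kepler SMX":      {"cores": 192, "regfile_kb": 256},
    "Maxwell SMM":     {"cores": 128, "regfile_kb": 256},
    "Pascal GP104 SM": {"cores": 128, "regfile_kb": 256},
    "Pascal GP100 SM": {"cores": 64,  "regfile_kb": 256},
}
for name, sm in sms.items():
    per_core = sm["regfile_kb"] * 1024 / sm["cores"]
    print(f"{name}: {per_core:.0f} bytes of register file per core")
# Kepler ~1365 B/core; Maxwell and consumer Pascal ~2048 B/core;
# GP100 ~4096 B/core. That is the "fewer cores per slice of resources"
# progression the post is describing.
```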
 
May 11, 2008
20,055
1,290
126
This sure is interesting about GCN :

https://forum.beyond3d.com/threads/gcn-and-mixed-wavefronts.59610/
One GCN CU can have up to 40 waves running concurrently (10 per SIMD). It doesn't matter where each wave has originated. There can be any mix of pixel/vertex/geometry/hull/domain/compute shader waves in flight at the same time (from any amount of queues). Instructions from these 40 waves are scheduled to the CU SIMDs in a round robin manner. If some of these 40 waves is waiting for memory, the GPU simply jumps over it in the round robin scheduling.

There is no need to store wave's data in off-chip memory. Each CU has enough on-chip storage for the metadata of these waves. Waves are grouped as thread groups. A single thread group needs to execute on a single CU (this is true for all GPUs, including Intel and Nvidia). This is because threads in the same thread group can use barrier synchronization and share data through LDS (64 KB on-chip buffer on each CU). All GPU architectures use static register allocation. The maximum count of registers used during a shader life time (even if some branch was never taken) needs to be allocated for the wave. Simplified: The GPU scheduler keeps track of available resources (free registers, free LDS, free waves) on each CU. When there's enough registers on some CU for a new thread group (thread group = 1 to 16 waves), the scheduler spawns a thread group for that CU. Each GCN CU has 256 KB of registers, 40 wave slots and 64 KB of LDS. There is no need to context swap kernels (*). Each thread group is guaranteed to finish execution once started. GPU programming model doesn't support thread groups waiting for other thread groups (atomics are supported, but the programmer is not allowed to write spinning locks). Nvidia GPUs work similarly. Intel has per wave (they call them threads) register files. Their register allocation works completely differently (shader compiler outputs different SIMD widths based on register count).

(*) Exceptional cases might need context switch (= store GPU state to memory and restore later). Normal flow of execution doesn't.

That beyond3d member sebbbi sure knows his stuff. Very interesting to read his (or her?) posts.

16 waves * 64 threads/wave = 1024 threads. That's the biggest thread group size supported (required) by DirectX. 64 KB register file / 1024 threads = 64 registers/thread. If you need more than 64 registers per thread, the compiler is going to spill to memory (or LDS). A CU can run two 1024 thread (16 wave) thread groups at the same time, if your shader needs 32 or less VGPRs.

The image printed on wall of every console graphics programmer:

1024 threads (16 waves) is not an optimal thread group size for GCN. It doesn't evenly divide the 2560 thread CU maximum concurrent thread count (40 waves). 1024 thread group cannot achieve max occupancy (max latency hiding).

Good GCN thread group sizes (max occupancy): 64, 128, 256, 320, 512, 640.

But maximum occupancy is only achievable in shaders with no more than 256 KB / (2560 threads * 4 bytes/register) = 25 registers/thread.

Sometimes larger thread groups (than 640) are the best choice, as it allows sharing data between larger groups of threads. Max occupancy is not always equal to best performance.
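If anyone wants to plug in their own shader numbers, here's a quick back-of-the-envelope occupancy calculator built from the per-CU figures sebbbi quotes above (256 KB of VGPRs, 64 KB of LDS, 40 wave slots, 64-wide waves). It deliberately ignores SGPRs and other second-order limits, so treat it as a sketch rather than a real tool.

```python
import math

# Per-CU GCN limits as quoted above: 40 wave slots, 256 KB of vector
# registers, 64 KB of LDS, 64 threads per wave. SGPRs and other
# second-order limits are ignored; this is the back-of-the-envelope version.
WAVE_SLOTS = 40
VGPR_BYTES = 256 * 1024
LDS_BYTES = 64 * 1024
WAVE_SIZE = 64

def waves_per_cu(group_threads, vgprs_per_thread, lds_per_group=0):
    """Rough count of concurrently resident waves on one CU."""
    waves_per_group = math.ceil(group_threads / WAVE_SIZE)
    # How many whole thread groups fit under each limit.
    groups_by_slots = WAVE_SLOTS // waves_per_group
    groups_by_vgprs = VGPR_BYTES // (group_threads * vgprs_per_thread * 4)
    groups_by_lds = LDS_BYTES // lds_per_group if lds_per_group else groups_by_slots
    return min(groups_by_slots, groups_by_vgprs, groups_by_lds) * waves_per_group

print(waves_per_cu(1024, 32))  # 32 -> two 16-wave groups, matching the quote
print(waves_per_cu(256, 25))   # 40 -> full occupancy at 25 VGPRs or less
print(waves_per_cu(256, 64))   # 16 -> register pressure cuts occupancy
```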


Sure is interesting :

Heterogeneous loads are a big performance win in many cases on GCN. There are many real world examples available (look at recent GDC/SIGGRAPH presentations).

Common examples are:
- One kernel is sampler bound and the other isn't. Example: parallax mapping (N trilinear/aniso taps for root finding), anisotropic filtering in general (multiple textures), bicubic kernels (3x3 bilinear taps per pixel), blur filters (N bilinear taps), etc. A pure compute task (no tex filter instructions) can fill the execution gaps nicely.
- One kernel uses lots of LDS (thread group shared memory) and this limits occupation. Other kernel with no LDS usage increases occupancy.
- One kernel is heavily memory (and L1 cache) bound, while the other is mostly math crunching (ALU instructions using registers and/or LDS).
- Tail of another kernel. Work finishes at wave granularity (after last barrier). Waiting for the last wave to finish means that processing cycles are lost on each CU (every time kernel changes).
- Resource bottlenecks (waves/registers/LDS). Kernel allocates some CU resources heavily, while other resources are left unused. It is better to schedule thread groups from multiple kernels (different shader) to utilize the CU resources better.
- Uneven split of resources. There are cases where a single kernel doesn't evenly divide all CU resources (waves/registers/LDS) between thread groups (see my post above). The remainder is left unused. Better to schedule multiple kernels (with different resource counts) to use all the resources.

The most common heterogeneous task is executing vertex and pixel shader on the same CU. Vertex + hull + domain shader is another example. GCN is not limited to running multiple compute shaders concurrently on the same CU. You can have a mix of different graphics kernels and compute kernels.
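As a toy illustration of the second bullet above (an LDS-heavy kernel limiting occupancy, with an LDS-free kernel filling the gap), here's a made-up example using the same per-CU limits. The two kernels' resource numbers are invented purely to show the effect.

```python
# Toy model of the "LDS-heavy kernel + LDS-free kernel" case, using the
# per-CU GCN limits quoted earlier (40 wave slots, 256 KB of VGPRs,
# 64 KB of LDS). Both kernels are invented for illustration.
WAVE_SLOTS, VGPR_BYTES, LDS_BYTES = 40, 256 * 1024, 64 * 1024

# (waves per thread group, VGPR bytes per group, LDS bytes per group)
kernel_a = (4, 256 * 16 * 4, 32 * 1024)  # 256 threads, 16 VGPRs, 32 KB LDS
kernel_b = (4, 256 * 25 * 4, 0)          # 256 threads, 25 VGPRs, no LDS

def fits(groups_a, groups_b):
    """Does this mix of resident thread groups fit on one CU?"""
    waves = groups_a * kernel_a[0] + groups_b * kernel_b[0]
    vgprs = groups_a * kernel_a[1] + groups_b * kernel_b[1]
    lds   = groups_a * kernel_a[2] + groups_b * kernel_b[2]
    return waves <= WAVE_SLOTS and vgprs <= VGPR_BYTES and lds <= LDS_BYTES

a_alone = max(g for g in range(40) if fits(g, 0))
b_extra = max(g for g in range(40) if fits(a_alone, g))
print(a_alone * 4, "waves resident with kernel A alone (LDS-bound)")      # 8
print(a_alone * 4 + b_extra * 4, "waves resident with kernel B mixed in")  # 40
```

In this made-up case the LDS-bound kernel only keeps 8 of 40 wave slots busy on its own, while mixing in the LDS-free kernel fills the CU, which is exactly the kind of win the list is describing.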


I wonder how Vega and the GPU in the PS4 Pro differ with respect to this explanation.
Anybody have some ideas about that?

EDIT : Added some more quotes.
 
Last edited:

hrga225

Member
Jan 15, 2016
81
6
11
That beyond3d member sebbbi sure knows his stuff. Very interesting to read his (or her?) posts.
Sebbbi was a lead console engine programmer at Ubisoft (on one of their engines, I don't know which one). He is now a lead for next-gen Unigine. He definitely knows his stuff.
Also, when laymen like us can understand the complex stuff he is talking about, that is a clear sign that he really knows what he is talking about.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Sebbbi was a lead console engine programmer at Ubisoft (on one of their engines, I don't know which one). He is now a lead for next-gen Unigine. He definitely knows his stuff.
Also, when laymen like us can understand the complex stuff he is talking about, that is a clear sign that he really knows what he is talking about.

Yeah, he was the lead graphics programmer at RedLynx (the guys who make Trials). In fact, the latest engine he worked on before he left still hasn't shipped in any released game yet. Still very excited to see what it can do.

But he's actually indie now, and co-founded his own company. So, not Unigine (or Unity, if that's what you meant).
 

hrga225

Member
Jan 15, 2016
81
6
11
Yeah, he was the lead graphics programmer at RedLynx (the guys who make Trials). In fact, the latest engine he worked on before he left still hasn't shipped in any released game yet. Still very excited to see what it can do.

But he's actually indie now, and co-founded his own company. So, not Unigine (or Unity, if that's what you meant).
Yeah, I meant Unity.
Guess I mixed him up with someone else. Nevertheless, he is a top-class programmer.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
PC Polaris has fewer features than PS4 Polaris, and at the same time AMD delayed Vega until after the PS4 Pro launch.

Show me the error in my reasoning. (Saying it's a custom APU isn't valid, and neither is blaming it on DX11/12.)

1. Vega supposedly comes with HBM2, which may be supply constrained and/or may have been too expensive to manufacture in volume during most of 2016.

2. It's also possible AMD needs to hit specific yields & profitability targets when considering die size vs. wafer cost. Almost no one will purchase a Vega 10 single chip card for $899-$1200. AMD doesn't have a large userbase of PC gamers willing to throw whatever $ it takes to have the latest. Historically, ATI/AMD flagship cards priced above $549 don't sell well, and Fury X is no exception as it didn't sell well at all at $649.

The first two points are probably critical since AMD finished the design of Vega 10 a long time ago.

3. Process node optimization for higher GPU clocks - this is probably another huge factor that has delayed Vega 10's launch. If only the RX 480 had launched with 1400-1450 MHz clocks, it would have easily outperformed the GTX 1060. Vega 10 would be a lot more competitive for AMD if it launches with 1400-1450 MHz clocks as opposed to 1266 MHz. Hopefully AMD has learned something from Polaris 10/11's launch and can increase GPU clocks for Vega.

As far as Polaris 10 not having some features that PS4 Pro's GPU does, do you realize that RX 480's specs are way more powerful than the GPU in PS4 Pro? Polaris 10 launched way before PS4 Pro and if AMD had incorporated those features into RX 480, it could have made the die size larger and delayed the launch. Sony requires certain things that RX 480 users will never care for -- like 4K checkered upscaling. Since the CPU in current gen consoles is so underpowered, the GPUs have to do more heavy lifting which isn't required for the RX 480. Your comment is strange because the GPU in Xbox 360 was more advanced than the X1800/X1900 series of the time too. Are you suggesting ATI screwed over PC gamers during X1900 era as well? It's common for some GPUs inside consoles to be more advanced than the consumer variants since MS/Sony/Nintendo can and do pay for custom design.

Considering NV launched the FE GTX 1060 for $299 less than 6 months ago and it's already possible to buy an RX 480 4GB for $155-180, AMD delivered huge price/performance for PC gamers. I don't understand how AMD screwed PC gamers, even more so when we see that in the latest titles, such as Infinite Warfare and Titanfall 2, the RX 480 is beating the GTX 1060.

Unfortunately, this is the same architecture as Maxwell with a few tweaks. And remember, Maxwell was supposed to be a 20 nm architecture first, but because that process failed it had to be ported back to 28 nm, with some features excluded from the uarch.

Paxwell can do some concurrent async compute + graphics, though, and Maxwell/Kepler cannot.

Compare AIB 970 vs. AIB 780Ti vs. AIB 1070 in 3DMark Fire Strike Extreme vs. TimeSpy:

FireStrike Extreme
MSI Gaming 780Ti is 5.5% faster than MSI Gaming 970
Asus Strix 1070 is 65.9% faster than MSI Gaming 970

TimeSpy
MSI Gaming 780Ti is 0.6% slower than MSI Gaming 970
Asus Strix 1070 is now 74.7% faster than MSI Gaming 970

G1 Gaming 1080 is 27.7% faster than EVGA 980Ti SC+ under Fire Strike Extreme but that grows to 36.8% under TimeSpy. 1070 AIB is almost 15% faster under TimeSpy than 980Ti AIB is.
http://www.hardware.fr/articles/952-13/benchmark-3dmark-fire-strike-time-spy.html
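Putting those percentages side by side (a quick back-of-the-envelope using only the numbers quoted above):

```python
# Ratios vs. the MSI Gaming 970 in each test, taken from the figures above,
# and the relative shift when moving from Fire Strike Extreme to Time Spy.
cards = {
    "MSI Gaming 780 Ti": (1.055, 0.994),  # +5.5% FSE, -0.6% Time Spy
    "Asus Strix 1070":   (1.659, 1.747),  # +65.9% FSE, +74.7% Time Spy
}
for name, (fse, timespy) in cards.items():
    shift = (timespy / fse - 1) * 100
    print(f"{name}: {shift:+.1f}% shift vs. the 970 going FSE -> Time Spy")
# 780 Ti: roughly -5.8% (Kepler loses ground under Time Spy);
# 1070: roughly +5.3% (Pascal gains ground), which is the point being made.
```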

It's more reasonable to call Pascal Maxwell+ since it's theoretically better at handling DX12/Async compute. The fact that NV introduced dynamic scheduling into Pascal shows that if they were to scale their CUDA cores, they may also run into shader under-utilization as GCN has. It's probably why NV is building the ability to perform more parallel concurrency into their future GPU architectures since they know DX12 is the future and DX11 serial workloads will be outdated at some point. It seems logical that the more shaders there are, the more chance of shader underutilization.

It's logical that Vega's architecture for DX12 will be much more advanced than Pascal's, since rumours have it AMD will shrink Vega 10 in 2018 and it'll remain in the line-up as some RX 500/600 series card. It means AMD designed Vega to compete not only with 2017 Pascal cards but also with some 2018 Volta cards. For AMD's plan to work, though, it would mean even more DX12 games with async compute, even more AMD GE titles, and counting on the shrunk Vega's higher GPU clocks being enough to compensate for Volta's newer architecture + GPU clocks (the risk is that Volta may be even better suited for DX12).

I think AMD would benefit even more if they released faster RX 465/475/485 (or RX 560/570/580) cards rather than a $550-650 Vega, and got back into making competitive mobile dGPUs. All that effort on Vega is for 2-3% of the entire dGPU market in AMD's case, considering most $500+ buyers simply buy NV anyway. They are better off figuring out a way to make more power-efficient GCN SKUs for the mobile market. I personally feel the market for flagship $500+ cards is so unrealistically biased in favour of the competitor that Vega's true benefit will only come into play once it's shrunk in 2018 and priced at $299-399. The HD 7870, HD 7950, R9 290 and RX 480 are far more beneficial for AMD than a $600 flagship.
 
Last edited:
Reactions: Mondozei

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's more reasonable to call Pascal Maxwell+ since it's theoretically better at handling DX12/Async compute. The fact that NV introduced dynamic scheduling into Pascal shows that if they were to scale their CUDA cores, they may also run into shader under-utilization as GCN has. It's probably why NV is building the ability to perform more parallel concurrency into their future GPU architectures since they know DX12 is the future and DX11 serial workloads will be outdated at some point. It seems logical that the more shaders there are, the more chance of shader underutilization

From messing around with asynchronous compute in GoW4 and Time Spy, it seems I get around a 4-5% increase in the former and a 6-7% increase in the latter. Nothing to write home about really, and it doesn't suggest that asynchronous compute has a large effect on shader utilization for Pascal. What I find most useful about asynchronous compute is that it seems to maintain higher framerates. In other words, from my experience you're much more likely to have higher minimums than with asynchronous compute disabled. This makes sense, because processing graphics and compute serially requires context switching, which takes time and increases idleness, unlike parallel or concurrent processing, which takes no extra time and reduces idleness.

With Fiji, the gains from asynchronous compute are highest because the architecture itself is inefficient and has a much higher degree of shader underutilization, not simply by virtue of having more shaders. Polaris, by comparison, typically sees lower gains from asynchronous compute because it's more efficient than Fiji.

So basically, the more efficient an architecture, the less likely it is to gain significantly from asynchronous compute. That's NOW. But I do agree that as developers start to use DX12 and Vulkan more effectively and leverage more parallel workloads, the performance impact of asynchronous compute will likely increase.
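A toy timing model of that serial-vs-concurrent point, with completely made-up numbers, just to show where the higher-minimums benefit would come from:

```python
# Toy frame-time model: graphics and compute either run back to back with a
# context switch between them, or the compute work overlaps graphics work it
# doesn't contend with. All numbers are invented for illustration.
def serial_ms(gfx_ms, compute_ms, ctx_switch_ms=0.1):
    # Switch to the compute work and back again each frame.
    return gfx_ms + ctx_switch_ms + compute_ms + ctx_switch_ms

def concurrent_ms(gfx_ms, compute_ms, overlap_fraction=0.7):
    # Only the non-overlapped share of the compute work extends the frame.
    return gfx_ms + compute_ms * (1.0 - overlap_fraction)

gfx, compute = 14.0, 1.0  # milliseconds, made up
s, c = serial_ms(gfx, compute), concurrent_ms(gfx, compute)
print(f"serial: {s:.1f} ms ({1000 / s:.0f} fps), concurrent: {c:.1f} ms ({1000 / c:.0f} fps)")
# serial: 15.2 ms (66 fps), concurrent: 14.3 ms (70 fps). A mid-single-digit
# average uplift; the steadier minimums come from avoiding the switches.
```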
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Sony requires certain things that RX 480 users will never care for -- like 4K checkered upscaling.

4K checkered upscaling isn't a hardware feature per se. The hardware feature is the ID buffer, which enables techniques like Sony's checkerboard rendering. And the ID buffer is definitely something I'd love to see in PC graphics, since it opens up many possibilities in addition to upscaling. Eventually I'm sure the feature will become ubiquitous, but in the meantime AMD could use it to give Nvidia a taste of their own medicine, e.g. vendor-locked settings.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
4K checkered upscaling isn't a hardware feature per se. The hardware feature is the ID buffer, which enables techniques like Sony's checkerboard rendering. And the ID buffer is definitely something I'd love to see in PC graphics, since it opens up many possibilities in addition to upscaling. Eventually I'm sure the feature will become ubiquitous, but in the meantime AMD could use it to give Nvidia a taste of their own medicine, e.g. vendor-locked settings.

I don't think the id buffer is required for checkerboard really. It just improves the effectiveness.

Also, I believe Nvidia might already be using id buffers as well.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
I don't think the id buffer is required for checkerboard really. It just improves the effectiveness.

It's required, unless you were to replicate the ID buffer in software, which isn't practical. You can't just do it "less quickly" because the algorithm requires information from the ID buffer to function. You either have the information and you can apply the technique, or you don't and you can't. There's no in between.

Another thing to keep in mind is that Sony's implementation might not even be the best out there. From the sounds of it, several developers have created their own 4K reconstruction implementations (which is probably a better term for what's going on than "upscaling," which has certain connotations) utilising the ID buffer that might be superior. As time goes on, I'm sure we'll come up with even better implementations.

Also, I believe Nvidia might already be using id buffers as well.

If Nvidia has it, it would already (or will very soon) be used in proprietary middleware. "Temporal AA-Works" and such. Why else would they build in such a feature? There's an outside chance that something about Nvidia's architecture lets them emulate an ID buffer at some more or less reasonable performance cost, though that's beyond my knowledge.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
It's required, unless you were to replicate the ID buffer in software, which isn't practical. You can't just do it "less quickly" because the algorithm requires information from the ID buffer to function. You either have the information and you can apply the technique, or you don't and you can't. There's no in between.

Well, I don't know for sure. Rainbow Six used a custom checkerboard implementation. No idea if they used an ID buffer or not.

If Nvidia has it, it would already (or will very soon) be used in proprietary middleware. "Temporal AA-Works" and such. Why else would they build in such a feature? There's an outside chance that something about Nvidia's architecture lets them emulate an ID buffer at some more or less reasonable performance cost, though that's beyond my knowledge.

I think someone said it was required for tiled rasterization. Probably on B3D somewhere.
Don't remember exactly, might've been something else.
 
Last edited:

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
The ID buffer is a really cool new feature since it's ideal for detecting the boundaries of a mesh. It would be nice of Microsoft to expose that hardware functionality in DX12 once their Project Scorpio hits the shelves with the same features ...
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Well, I don't know for sure. Rainbow Six used a custom checkerboard implementation. No idea if they used an ID buffer or not.

I'm assuming not, unless they did so in software which seems unlikely. Frame reconstruction in a checkerboard pattern is about as broad a description as post-process anti-aliasing. E.g. there are a lot of ways you can go about it. Another example, temporal AA has been around forever, but better detection of mesh edges via the ID buffer enables higher fidelity implementations.

I think someone said it was required for tiled rasterization. Probably on B3D somewhere.
Don't remember exactly, might've been something else.

I'm not an expert on the subject, but I know there's some speculation about exactly how tile based Maxwell and Pascal are. Nvidia tends to hold their cards a lot closer to their chest when it comes to technical documentation.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Ubisoft did implement checkerboard techniques in Rainbow Six: Siege. Page 48

Frame reconstruction in a checkerboard pattern is about as broad a description as post-process anti-aliasing. E.g. there are a lot of ways you can go about it. Another example, temporal AA has been around forever, but better detection of mesh edges via the ID buffer enables higher fidelity implementations.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I'm assuming not, unless they did so in software which seems unlikely...Another example, temporal AA has been around forever, but better detection of mesh edges via the ID buffer enables higher fidelity implementations.

I don't get it. I thought you said an ID buffer was required.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Let me spell it out. Obviously you can do some kind of frame reconstruction or upscaling utilising a checkerboard pattern without an ID buffer. You could do that with no information other than the pixels on the untouched portions of the checkerboard if you really wanted to, although the results wouldn't be very good and the checkerboard pattern would be obvious. Like I said, it's a broad term that any number of techniques can fall under, much like post-process AA could mean TAA, FXAA, DLAA, SMAA, MLAA, NFAA etc., or any of the various proprietary implementations. Despite falling under the same general category, these are distinct and unique.

Don't conflate Rainbow Six's checkerboard technique with Sony's. They are distinct from one another, and unless Ubisoft has implemented something like the ID buffer in software, it can't physically run Sony's implementation because it doesn't have access to as much information. Likewise, the ID buffer is extremely useful for any number of post-AA techniques, but that doesn't mean you can conflate previous post-AA techniques like the ones I mentioned with new techniques that require an ID buffer to function.
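For what it's worth, here's a heavily simplified sketch of the general idea being discussed: reconstruct the missing half of a checkerboard and use per-pixel object IDs to decide whether a reprojected previous-frame sample can be trusted. Nobody outside Sony knows the exact algorithm, so every detail here (single-pixel checkerboard, the neighbour fallback, the reprojection happening elsewhere) is an assumption for illustration only.

```python
# Toy checkerboard reconstruction using an ID buffer. Half the pixels are
# rendered this frame; for each missing pixel we reuse the reprojected
# previous-frame colour only if its object ID matches the ID rasterized for
# the current frame (the ID buffer is assumed to exist at full resolution),
# otherwise we blend the rendered neighbours. Not Sony's actual algorithm.

def reconstruct(cur_color, cur_id, prev_color, prev_id, width, height):
    out = dict(cur_color)  # pixels rendered this frame pass through untouched
    for y in range(height):
        for x in range(width):
            if (x + y) % 2 == 0:          # rendered in this frame's half
                continue
            if prev_id.get((x, y)) == cur_id[(x, y)]:
                # Same object as last frame: temporal reuse is reasonably safe.
                out[(x, y)] = prev_color[(x, y)]
            else:
                # Disocclusion or edge: fall back to the rendered neighbours.
                neigh = [cur_color[p] for p in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                         if p in cur_color]
                out[(x, y)] = tuple(sum(c) / len(neigh) for c in zip(*neigh))
    return out
```

The point of the ID buffer in this sketch is simply that the "same object or not" test is available at full resolution essentially for free, which is exactly the part that's hard to replicate in software.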
 
Last edited:
Reactions: Headfoot

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
Awesome article. It's sweet that they've implemented hardware that allows for a rendering version of the "pixel shifting" some home theater projectors use to take a 1080p image and turn it into a 4K image. It gets you to about 90% of real 4K quality, but at a much lower cost. Unless you are able to A/B the image, you won't know the difference.

I also find it really interesting that, because AMD is so flexible with their semi-custom designs, they get to tap the skills and ideas of their partners without having them on the payroll. Those ideas later get folded into the PC hardware. Really smart of them.
 