You are assuming that Oxide decides which API and which features are picked by default for each vendor's hardware platform.
Hint: That is not how it works. That's why both vendors are sending engineers over, to ensure that the game runs as well as possible on their hardware...
That's not a benchmark. Not at all. It's a challenge to see how many graphics effects you can squeeze into 4 kB of program code. Not 4K resolution, but a 4 kB binary.
And it just happens to be loosely fitted to the GCN architecture, with little to no regard for the common best-practice guides on how...
Careful about that. The "doubling the performance" part might be true for *some* Pascal chips, especially in combination with increased FP64 throughput and the introduction of FP16 and improved 8-bit integer arithmetic, but there's a chance that these Pascal cards are not going to be GPUs...