[DX12] Fable Legends Beta Benchmarks


3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Also let's be even more real: we bought GCN cards that couldn't be fully utilized in DX11 and I don't see massive outrage from anyone about that. Yet a huge deal is made about Maxwell now. This is being made into a big deal because there are fanboys on both sides who look for one thing their favorite vendor does better and then blow it out of proportion. At the end of the day, no matter what choice you make now, you're making a compromise. Neither vendor has a straight-up 100% winner at the moment.

Who's blowing the performance benefit out of proportion? Seems like even trying to mention it gets people labeled by those that want to sweep it under the rug.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
First bold: oddly, if you ask me this is all stemming from AMD's marketing side. And I'm not even mad that AMD is finally playing dirty. They are using their hardware advantage to muddy the waters. They sponsored a game that puts their version of AS (i.e. hardware driven) in an extremely positive light. Kudos AMD, throw down the gauntlet!

Second bold: WTF!? Let's talk about concepts. I guess Bulldozer is not really two cores per module, because... well, I won't even get into it, since things would just get all kinds of messy.

The red: Ay dios mio. It's probably as useful as the second 4GB of VRAM on the 390/390X. I.e., you'll probably run out of GPU processing power before you can use it all up. "But what about mGPU!!!" But 4GB is good enough for Fiji for 4K!!!!!

I already own two games which can exceed 3.5GB even at 1080p, and VRAM requirements are not going down...
 

tential

Diamond Member
May 13, 2008
7,355
642
121
This whole debate is fueled by Nvidia marketing. It's eerily similar to the 4GB of RAM on the GTX 970. Sure, it's "there", but is all of it useful?

It wouldn't be a big deal if they hadn't advertised Async as a feature for Maxwell cards.

So can you show, across 3-5 titles, that AMD is faster than Nvidia in DX12 games due to Async?

So the GTX 980 Ti is still the best card.
The performance difference between the other cards is 1-2 frames....

Are we seriously making a massive deal over the fact that the R9 390X is now faster than a GTX 980? It's been faster than a GTX 980 in some titles before...
It's faster in a DX12 title though, and now ASYNC is king, and if you aren't GCN your performance is horrible!!!

Great, I'm so happy I have an R9 290. I can't wait to play all of the DX12 titles that will be released during the lifespan of my card!

Unless you're playing one of the VERY FEW DX12 titles being released soon, and unless that title takes advantage of Async heavily, it won't matter what card you get...
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
So can you show, across 3-5 titles, that AMD is faster than Nvidia in DX12 games due to Async?

So the GTX 980 Ti is still the best card.
The performance difference between the other cards is 1-2 frames....

Are we seriously making a massive deal over the fact that the R9 390X is now faster than a GTX 980? It's been faster than a GTX 980 in some titles before...
It's faster in a DX12 title though, and now ASYNC is king, and if you aren't GCN your performance is horrible!!!

Great, I'm so happy I have an R9 290. I can't wait to play all of the DX12 titles that will be released during the lifespan of my card!

Unless you're playing one of the VERY FEW DX12 titles being released soon, and unless that title takes advantage of Async heavily, it won't matter what card you get...

You're putting words in my mouth. Did I ever complain about frame rates for Nvidia vs. AMD?

No, my complaint is about the "support" they advertised for a feature that was intended to be done in hardware.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
You're putting words in my mouth. Did I ever complain about frame rates for Nvidia vs. AMD?

No, my complaint is about the "support" they advertised for a feature that was intended to be done in hardware.

Well, if you care about support, then you're right to be mad.

Don't buy the cards based on that principle of Nvidia lying. I'm doing the SAME thing. I'm not buying a GTX 970 because Nvidia lied. I'm not buying a Fury X because AMD opened their mouth and called it an OCer's dream when it CLEARLY was not.

But otherwise, for anyone purchasing a card.... this is a nonissue. Purchase whatever card plays the games you want now, and realize that future games may benefit a different vendor, but the competitor's card in the same price bracket will still be within 10% or so of your card.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I already own two games which can exceed 3.5GB even at 1080p, and VRAM requirements are not going down...

Just because a game uses X amount of vram, does not mean it REQUIRES that much. There have been many tests that show that the Fury is not memory limited even at 4K.

Please name these titles that REQUIRE that much vram at 1080P. Because I cannot think of a single one.
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Just because a game uses X amount of vram, does not mean it REQUIRES that much. There have been many tests that show that the Fury is not memory limited even at 4K.

Please name these titles that REQUIRE that much vram at 1080P. Because I cannot think of a single one.

Custom textures can do it for Oblivion, but that's a 3rd-party modification.

Average frame rates won't necessarily reveal VRAM limitations; minimums (99th percentile frame times) will. An on-rails benchmark probably won't reveal them either, and quickly spinning the camera around isn't something benchmarks often capture.
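To illustrate (the frame-time numbers below are made up, just to show the arithmetic): the average can look fine while the 99th-percentile frame time exposes the hitching a VRAM-limited card produces.

import statistics

# Hypothetical per-frame render times in milliseconds (the kind of log FRAPS dumps)
frame_times_ms = [16.7] * 95 + [60.0] * 5   # mostly smooth, with a few texture-swap hitches

avg_fps = 1000 / statistics.mean(frame_times_ms)                       # the average hides the hitches
p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]   # 99th-percentile frame time

print(f"average: {avg_fps:.0f} fps")                                   # ~53 fps, looks fine
print(f"99th percentile: {p99_ms:.0f} ms (~{1000 / p99_ms:.0f} fps '1% low')")   # 60 ms, ~17 fps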
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
So can you show, across 3-5 titles, that AMD is faster than Nvidia in DX12 games due to Async?

So the GTX 980 Ti is still the best card.
The performance difference between the other cards is 1-2 frames....

Are we seriously making a massive deal over the fact that the R9 390X is now faster than a GTX 980? It's been faster than a GTX 980 in some titles before...
It's faster in a DX12 title though, and now ASYNC is king, and if you aren't GCN your performance is horrible!!!

Great, I'm so happy I have an R9 290. I can't wait to play all of the DX12 titles that will be released during the lifespan of my card!

Unless you're playing one of the VERY FEW DX12 titles being released soon, and unless that title takes advantage of Async heavily, it won't matter what card you get...

We don't know that. Just as you're skeptical because we've had few examples, for the same reason we don't know what other differences might emerge.

For me though, if I had to wager which architecture would likely support DX12 better and perform better overall, I'd choose GCN. YMMV.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But otherwise, for anyone purchasing a card.... this is a nonissue. Purchase whatever card plays the games you want now, and realize that future games may benefit a different vendor, but the competitor's card in the same price bracket will still be within 10% or so of your card.

1) But the differences under DX12 thus far are more than 10%.

In Ashes, 390 beats 970 by 16% at 1080P high preset.

In Fable, 290X beats 970 by 13%.



So 13-16% advantage for 390/290X that compete against 970. 16% difference is nearly as much as between a $330 GTX970 and $550 GTX980 at launch.

2) You aren't considering the entire context.

For example, at 1080P the R9 380 is about 8% faster than the GTX 960 4GB. If DX12 games give GCN a 13-16% benefit on average, that 8% turns into 1.08 × 1.13 to 1.08 × 1.16, i.e. a 22% to 25% advantage! That's HUGE.
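A quick sanity check on that compounding, using only the numbers above (the snippet is illustrative arithmetic, nothing more):

# R9 380 assumed ~8% faster than the GTX 960 4GB at 1080P today (per the post)
baseline = 1.08
# assumed 13-16% GCN benefit under DX12
for dx12_boost in (1.13, 1.16):
    total = baseline * dx12_boost
    print(f"{(total - 1) * 100:.0f}% total advantage")   # prints 22% and 25%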

3) If you only compare cards in similar price brackets, you miss the other aspect - price/performance.

For example, a $450 GTX 980 is 12% faster at 1080P and 8% faster at 1440P compared to a $280-300 R9 390. All of a sudden, a 13-16% boost under DX12 games for the 390 means that someone who bought a 980 just threw $150-170 into the toilet.

Similarly, the 970 becomes very difficult to recommend against the 390, since a 13-16% benefit under DX12 ensures the 390 wins in both raw GPU performance and VRAM. That's a key battleground in Q4 2015/Q1 2016, since the 970/980 and 390 will still be sold for the next 3-5 months before 16nm GPUs launch.

And of course we have the R9 280/285/280X/290 vs. the 950/960 2GB/960 4GB cards. 13-16% more performance for the GCN cards makes it practically impossible to recommend the 950/960 series for gaming to anyone who intends to keep their card for 2-3 years and will play DX12 games. Do we know for sure that the 13-16% advantage is an accurate estimate? No, but it's the trend emerging in 2/2 DX12 games so far.

4) You are not taking into account how much DX12.1 was touted as a key selling feature for Maxwell over GCN in marketing documents and online. It was plastered all over by the media/NV. How many gamers were on the fence between AMD/NV cards and picked Maxwell because they thought that, over the next 2-3 years of ownership, it was safer to go with a 'better' DX12-compatible and newer Maxwell architecture? Don't you think this type of marketing is misleading to the consumer?

5) Also, you could argue that DX12 performance on current cards doesn't matter because gamers will need to upgrade to future GPUs for next-generation DX12 games anyway, and you would be mostly right. However, not everyone times the market, and gamers who will buy GPUs over the next 3-5 months now have extra information to make a more informed decision. What if the DX12 benchmarks we've seen don't even use AC extensively, and thus the GCN architecture isn't even showing its full potential? We don't know that either, but looking at the GTX470/580 vs. HD5850/5870 and seeing how the lack of good tessellation performance leveled the HD5850/5870 cards, I'd rather recommend the safer architecture that supports more next-gen features and performs better with them. Will 13-16% be enough to play 2016-2018 DX12 games? No, but it's a free bonus on GCN, at least from the two benchmarks we have.

6) Fable Legends is a UE4-based title. In the DX11 UE4 benchmarks we have, Maxwell walked all over GCN. The fact that a 925MHz 7970 pummels the 680, the 285 destroys the 960, and the 290X beats the 970 in this UE4 title is an eye-opener. This could mean that under a more brand-agnostic DX12 game engine, GCN would show even greater performance advantages. Given how well UE4 games have generally run on Maxwell cards, and the possible lack of wide use of AC (as a percentage of the total game engine code) in Fable Legends, things could be even worse for Maxwell than it seems. The counter to that point is that NV could strengthen its efforts on the GameWorks front even more and influence the use of AC in DX12 games to diminish any advantage GCN may have. We'll have to see what happens over the next 6-12 months with DX12 games to get a better picture.

So while the current recommendation for the $650 980 Ti vs. the $650 Fury X doesn't change, the rest of NV's line-up looks far worse now than it did prior to these DX12 scores, based on what we have seen thus far. Luckily for NV, if Maxwell really does take a huge performance hit from AC + graphics context switching under DX12, it's already October 2015, which means they only have to coast for ~6 months before they can close the chapter on Maxwell as the hype for Pascal takes over.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Just because a game uses X amount of vram, does not mean it REQUIRES that much. There have been many tests that show that the Fury is not memory limited even at 4K.

Please name these titles that REQUIRE that much vram at 1080P. Because I cannot think of a single one.

Well of course you can lower settings. I don't see how that's relevant to the point though. We also know that Fiji is pulling some voodoo magic with that HBM and isn't as affected by memory limits as GDDR5 cards.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
1) But the differences under DX12 thus far are more than 10%.

In Ashes, 390 beats 970 by 16% at 1080P high preset.

In Fable, 290X beats 970 by 13%.

...

So 13-16% advantage for 390/290X that compete against 970. 16% difference is nearly as much as between a $330 GTX970 and $550 GTX980 at launch.

Perhaps worth mentioning that a 390 is normally about the same speed as a 970 at 1080P, whilst the 290X is slightly slower (about 2.5% slower), so it's actually a 16% performance boost for both cards relative to the 970.

The 285 is roughly 24% faster than a 960, whereas normally it would only be about 3% faster, so roughly a 20% relative performance boost.

So all in all it would appear that, at least for Ashes and Fable, AMD sees a performance boost of 15-20% relative to Nvidia. The notable exception to this is of course the various Fiji GPUs, which appear to be at roughly status quo relative to GM200.
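For what it's worth, the arithmetic behind those relative-boost figures, using the numbers quoted above (purely illustrative):

# DX12 result vs. the usual DX11 standing, both expressed relative to the Nvidia card
cases = {
    "290X vs 970": (1.13, 0.975),   # 13% ahead in Fable, normally ~2.5% behind
    "285 vs 960":  (1.24, 1.03),    # 24% ahead in Fable, normally ~3% ahead
}
for name, (dx12, dx11) in cases.items():
    print(f"{name}: ~{(dx12 / dx11 - 1) * 100:.0f}% relative boost")   # ~16% and ~20%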
 

Spanners

Senior member
Mar 16, 2014
325
1
0
I love it when people conclude things from beta benchmarks.

Seems a pretty reasonable thing to do when it's the only information available. Especially when it's qualified by the posters already as being preliminary and not definitive.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I missed it running in AOTS. There's a lot of back and forth about Maxwell and Async compute but not much of substance. All I read are the "don't worry, be happy, it doesn't matter" responses. So, where is AOTS running Async compute on Maxwell?

Isn't that why Nvidia requested they turn it off, because it blew hardcore on their hardware?

I.e. a GCN-style application of AS is bad for NV hardware since they aren't using hardware for the switching.

Yet, if there is AS in the Fable benchmark, it didn't crush NV like it did in AOTS.

Again, I'm not saying that Maxwell 2 has a better implementation. For all we know it's the bare minimum to fit the requirements and claim support. But just because it blows doesn't mean it isn't supported.

I already own two games which can exceed 3.5GB even at 1080p, and VRAM requirements are not going down...

And I've got games using up 6GB of VRAM; I don't see your point.

If it fits your agenda, sure. It's why I think this whole thing is absurd. Buy the GPU you want; these games are not out and are still being worked on.....

As was said to me, something along the lines of "you act like people upgrade every time a new card comes out." Haha.

So they've got to make their sales for their team. Have you seen that AMD stock? Woof!
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Semi off-topic but kinda not: how firm are the rumors about Spring 2016 for 16nm cards? After all the delays with 14nm and 16nm at Intel, Samsung and TSMC, I'm skeptical that TSMC will keep their schedule here for GPUs... Especially with Apple apparently buying up a ton of their fab capacity on the node.

I'm thinking the pre-16nm DX12 battle may last longer than 6 months. I hope it doesn't, but I'm afraid it might.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
People act like once they purchase a card they're married to it. If by next year I can sell my $300 card for $250 and pick up a new $300 card that's far better suited for my games, that's far more worth it than just sitting on a card that doesn't handle my games just because.

It's just plain insane to judge DX12 performance off a sample size of 2 beta benchmarks.... Might as well judge the whole of DX11 based off AC Unity and Project Cars.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Semi off-topic but kinda not: how firm are the rumors about Spring 2016 for 16nm cards? After all the delays with 14nm and 16nm at Intel, Samsung and TSMC, I'm skeptical that TSMC will keep their schedule here for GPUs... Especially with Apple apparently buying up a ton of their fab capacity on the node.

I'm thinking the pre-16nm DX12 battle may last longer than 6 months. I hope it doesn't, but I'm afraid it might.
Could last till December 2016 for all I care.

We're judging DX12 performance based off frame rates that are a joke for actual gameplay, in beta games.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Semi off-topic but kinda not: how firm are the rumors about Spring 2016 for 16nm cards? After all the delays with 14nm and 16nm at Intel, Samsung and TSMC, I'm skeptical that TSMC will keep their schedule here for GPUs... Especially with Apple apparently buying up a ton of their fab capacity on the node.

I'm thinking the pre-16nm DX12 battle may last longer than 6 months. I hope it doesn't, but I'm afraid it might.


Yep, I don't see a shrink on the near horizon either. That, plus the shortage of HBM1 (let alone HBM2), will work to delay releases even more.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I'm hoping for a summer release; that would give me a full year with my 980 Ti, and if I do decide to upgrade, I won't feel like I didn't get my money's worth.

Either way, the 980 Ti goes to the GF, her Lightning gets offered to family/friends first at a going-out-of-sale price, and otherwise it ends up on eBay/Craigslist/the closet.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
People act like once they purchase a card they're married to it. If by next year I can sell my $300 card for $250 and pick up a new $300 card that's far better suited for my games, that's far more worth it than just sitting on a card that doesn't handle my games just because.

I have said this before, but don't expect a better GPU at $200/$300 in 2016 than what you may buy today.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I have said this before, but don't expect a better GPU at $200/$300 in 2016 than what you may buy today.

At the same price point? Or a $200/$300 card better than current flagships?

I'd hope that a new node (whether with HBM or not) can increase performance at least at the same price point.

(Also, I hope you aren't using heavily discounted 290/290X prices as the basis of the argument.)
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I have said this before, but don't expect a better GPU at $200/$300 in 2016 than what you may buy today.
My price range is up to $1000.
I'm PC gaming; if I wanted to be cheap I'd get a console. If AMD and Nvidia don't provide a massive increase in performance next gen, they're both jokes of companies in my eyes.
If the Arctic Islands dual-GPU setup isn't glorious I'll be upset beyond belief. Fury X was already a massive disappointment and I will be extremely upset if it happens again. Same goes for Nvidia, but I'm not buying Nvidia until G-Sync is on a 65-inch+ screen like FreeSync.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Let's assume you buy the R9 390 at $300 today; you will not be able to get a much faster GPU at $300 next year. Only perhaps +10% or +15%, at half the power.
 