superstition
Platinum Member
- Feb 2, 2008
"So it will be 'unfair' to use modern video codecs, renderers, or mathematical workloads/benchmarks that implement AVX2 and newer, just because Zen doesn't have competitive AVX2 performance? It's time to stop having double standards and stop treating AMD like a disabled child."

I didn't post that argument. What I spoke to was taking a niche context and extrapolating it into general performance. Shenanigans.
If, let's say, ABD (a hypothetical instruction superset) results in a 25% performance boost in 4% of the typical workload an enthusiast gamer deals with, and ABD performance is used to construct the backbone of a benchmark (say, 60% of its total score), don't you think it's rather incorrect to make that benchmark the community's de facto standard for general CPU performance comparisons?
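To put rough numbers on that intuition, here is a minimal sketch using an Amdahl's-law-style estimate with the hypothetical figures from the post (the "ABD" superset, the 4% workload share, the 60% benchmark weight are all illustrative, not real measurements):

```python
# Sketch: how benchmark weighting can inflate the apparent benefit of a
# niche instruction set ("ABD"). All figures are the post's hypotheticals.

def overall_speedup(accelerated_fraction: float, local_speedup: float) -> float:
    """Amdahl's law: overall speedup when only a fraction of the work accelerates."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / local_speedup)

# Real enthusiast-gamer workload: ABD touches 4% of the work, 1.25x faster there.
real = overall_speedup(0.04, 1.25)

# Benchmark that weights ABD-heavy code at 60% of its total score.
benchmark = overall_speedup(0.60, 1.25)

print(f"real-world gain: {(real - 1) * 100:.1f}%")       # ~0.8%
print(f"benchmark gain:  {(benchmark - 1) * 100:.1f}%")  # ~13.6%
```

Under these assumptions the benchmark reports a roughly 13.6% advantage for a chip whose real-world gain is under 1%, which is exactly the extrapolation problem being objected to.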
Or we could ask ourselves whether measuring the FPU performance of a four-FPU, eight-integer-core design is equivalent to measuring that design's overall performance for the purpose of comparing CPUs.
Also, still wondering about this: If AMD were to use the maximum possible chip size for its 14nm process and four Zen CPU cores, what is the maximum iGPU it could squeeze in?
"That's why I asked The Stilt that question. It would be interesting to know the maximum AMD could achieve with its tech (assuming it wanted to go the route of a big chip) in order to put things like a 7770 into context."

Problem is, the HD 7770 is already marginal for 1080p gaming except in older or less demanding games. And by the time Zen APUs come out, there will be an entire new generation of 14/16 nm dGPUs in the hundred-dollar range offering much better performance than the ancient HD 7770. And when making bandwidth comparisons with a dGPU, one must also consider that the already limited bandwidth (and thermal budget) must be shared with the CPU.
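The bandwidth point can be made concrete with back-of-the-envelope numbers. The 72 GB/s figure is the stock HD 7770's 128-bit GDDR5 spec; the DDR4-2400 dual-channel configuration is an assumption about a hypothetical Zen APU, so treat the comparison as illustrative:

```python
# Rough sketch of the shared-bandwidth argument. HD 7770: dedicated 128-bit
# GDDR5 at ~72 GB/s. Hypothetical Zen APU: dual-channel DDR4-2400 (assumed),
# and that pool is split between the CPU cores and the iGPU.

def dram_bandwidth_gbs(mt_per_s: int, channels: int, bus_bits: int = 64) -> float:
    """Peak theoretical bandwidth in GB/s for commodity DRAM."""
    return mt_per_s * channels * (bus_bits // 8) / 1000

hd7770_gbs = 72.0                                # all of it available to the GPU
apu_gbs = dram_bandwidth_gbs(2400, channels=2)   # 38.4 GB/s, shared CPU + iGPU

print(f"HD 7770 dedicated GDDR5:     {hd7770_gbs} GB/s")
print(f"APU dual-channel DDR4-2400:  {apu_gbs} GB/s (shared with CPU)")
```

Even before the CPU takes its share, the APU's memory pool is roughly half the old card's dedicated bandwidth, which is why feeding an HD 7770-class iGPU from system DRAM is the harder problem.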