> Current RDNA 3 and Ada feature parity is what Qualcomm needs to aim for with X Elite 2, compute-wise.

I am doubtful that can be achieved in a single generation.
> I am doubtful that can be achieved in a single generation.

It can be done. Apple did it, and in some ways surpassed NV, with M2 -> M3.
> If mobile Zen5 cores are nerfed compared to desktop Zen5 cores (as David Huang found), then this result is expected.

Either way you cut it, it's not a progression. I see he was comparing results to the 7840U, which isn't quite like-for-like (though I suppose normalizing for clock speed helps remove a bit of bias there). Just seems underwhelming. Maybe battery life will benefit?
> Are you saying that GCN is the only viable design for heavy compute GPU?

Yeah, which is why everyone copied it.
> Current RDNA 3 and Ada feature parity is what Qualcomm needs to aim for with X Elite 2, compute-wise.

They don't lack features, their uarch is just ass for modern big boy games.
> They don't lack features, their uarch is just ass for modern big boy games.

It also sucks at 3D: Blender, Cinebench, Octane. Nada.
It looks like the top X Elite SKU is faster than the top Strix Point SKU, in Geekbench.
> Yeah, which is why everyone copied it.

Wasn't Nvidia the first, with Fermi?
Graphics Core Next (GCN) throws out the Terascale playbook with a focus on predictable performance for general purpose compute. Terascale’s 64-wide wavefront stays around, but GCN is otherwise so different that it isn’t even a distant relative. GCN’s instruction set looks like that of a typical CPU, or Nvidia’s Fermi.
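Not from the article, but to make the "looks like a typical CPU" point concrete: below is a trivial CUDA SAXPY kernel (CUDA used purely as notation) with comments sketching how a GCN-style compiler typically splits it into scalar work on values that are uniform across a wavefront and vector work on per-lane values. The exact instruction selection is compiler-dependent, so treat the mapping as a rough illustration.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative sketch: a trivial SAXPY kernel, annotated with how a GCN-style
// compiler would generally split work between the scalar and vector pipes.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    // n, a, x, y and blockIdx/blockDim are identical for every lane in a
    // wavefront. On GCN these live in scalar registers (SGPRs), and the math
    // on them is issued once per wave on the scalar unit -- the part of the
    // ISA that reads like a conventional CPU's.
    int base = blockIdx.x * blockDim.x;

    // threadIdx.x differs per lane, so everything derived from it becomes
    // per-lane (vector) work: VGPRs, vector loads, and a vector FMA.
    int i = base + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 16;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 5.0)\n", y[0]);  // 3*1 + 2
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```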
> It also sucks at 3D: Blender, Cinebench, Octane. Nada.

Yeah, but that's not the relevant workload set.
> Wasn't Nvidia the first, with Fermi?

No, Fermi still has a rudimentary memory model; NV had no fence-based atomics up until A100, IIRC.
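For anyone wondering what "fence-based" buys you in practice, here is a minimal CUDA sketch, patterned after the classic threadfence-reduction idiom: each block publishes a partial sum, and the `__threadfence()` before the atomic counter bump is what guarantees the partial is visible to whichever block ends up finishing last. Kernel and variable names are my own.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each block reduces its slice, publishes a partial sum, then bumps a global
// counter. The __threadfence() guarantees the partial-sum store is visible
// device-wide before the counter increment, so the last block can safely sum
// everyone's partials. Plain atomics alone don't give you that ordering.
__device__ unsigned int blocksDone = 0;  // assumes a single kernel launch

__global__ void fencedReduce(const float *in, float *partial, float *out, int n) {
    extern __shared__ float sdata[];
    const int tid = threadIdx.x;
    const int i = blockIdx.x * blockDim.x + tid;

    sdata[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Ordinary shared-memory tree reduction (blockDim.x must be a power of 2).
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    __shared__ bool isLast;
    if (tid == 0) {
        partial[blockIdx.x] = sdata[0];
        __threadfence();                                 // publish partial before...
        unsigned int done = atomicAdd(&blocksDone, 1u);  // ...signaling completion
        isLast = (done == gridDim.x - 1);
    }
    __syncthreads();

    if (isLast) {  // the last block to finish folds all partials
        float sum = 0.0f;
        for (int b = tid; b < (int)gridDim.x; b += blockDim.x) sum += partial[b];
        sdata[tid] = sum;
        __syncthreads();
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0) *out = sdata[0];
    }
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int blocks = (n + threads - 1) / threads;
    float *in, *partial, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&partial, blocks * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    fencedReduce<<<blocks, threads, threads * sizeof(float)>>>(in, partial, out, n);
    cudaDeviceSynchronize();
    printf("sum = %.0f (expect %d)\n", *out, n);
    return 0;
}
```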
Rasterized graphics continued to dominate gaming in the early to mid 2010s. AMD scaled out GCN’s work distribution hardware in Hawaii, but Nvidia countered with huge gains in Maxwell and Pascal. GCN still struggled to match them in both performance and power efficiency.
> But GCN’s design is vindicated by modern trends, even if that’s of little comfort to AMD in 2012. Fixed function graphics hardware continues to be important, but games have gradually trended towards using more compute. Raytracing is a well publicized example. It’s basically a compute workload and doesn’t use the rasterizer. But even without raytracing, compute shaders are quietly playing a larger role in modern games. Modern designs have adopted elements of GCN’s design. RDNA keeps the scalar datapath and uses a similar instruction set. Nvidia added a scalar path (called the uniform datapath) to their Turing architecture and kept it in subsequent designs.

Well, the good news for Adreno is that Nvidia and AMD have shown the way. It's easier to play catch-up than to push the limits of technology to new heights.
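To ground the "compute shaders quietly doing more work" point, here is a toy full-screen tonemapping pass written as a CUDA kernel (illustrative only; in a game it would be an HLSL/GLSL compute shader). The whole pass is plain compute over the framebuffer, with no rasterizer or other fixed-function hardware involved.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Toy post-processing pass: Reinhard tonemapping of an HDR color buffer.
// This is the kind of work that used to be a full-screen pixel-shader pass
// and is now routinely dispatched as a compute shader.
__global__ void tonemap(const float3 *hdr, uchar4 *ldr, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float3 c = hdr[idx];

    // Reinhard operator: c / (1 + c), applied per channel.
    float r = c.x / (1.0f + c.x);
    float g = c.y / (1.0f + c.y);
    float b = c.z / (1.0f + c.z);

    ldr[idx] = make_uchar4((unsigned char)(r * 255.0f),
                           (unsigned char)(g * 255.0f),
                           (unsigned char)(b * 255.0f), 255);
}

int main() {
    const int w = 1920, h = 1080;
    float3 *hdr;
    uchar4 *ldr;
    cudaMallocManaged(&hdr, w * h * sizeof(float3));
    cudaMallocManaged(&ldr, w * h * sizeof(uchar4));
    for (int i = 0; i < w * h; ++i) hdr[i] = make_float3(4.0f, 1.0f, 0.25f);

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    tonemap<<<grid, block>>>(hdr, ldr, w, h);
    cudaDeviceSynchronize();

    printf("pixel 0: %u %u %u\n",
           (unsigned)ldr[0].x, (unsigned)ldr[0].y, (unsigned)ldr[0].z);
    return 0;
}
```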
> It also sucks at 3D: Blender, Cinebench, Octane. Nada.

*3D is a huge umbrella, covering everything from early VFX in Andromeda Strain or Futureworld to UE5.
> Well, the good news for Adreno is that Nvidia and AMD have shown the way. It's easier to play catch-up than to push the limits of technology to new heights.

They wouldn't be the first in the mobile GPU arena.
> They were?

I wouldn't take any synthetic benchmark as a good indicator.
The ARM Immortalis GPU in the D9300 does perform better in 3DMark SNL (which leverages compute significantly), but it doesn't seem to be a huge advantage vs Adreno.
| Year | Snapdragon | Node | Oryon | Adreno | Memory |
|------|------------|------|-------|--------|--------|
| 2024 | 8 Gen 4 | N3E | V1 | 830 | LPDDR5X 9600 |
| 2025 | 8 Gen 5 | N3P | V2 | 840 | LPDDR6 10667 |
| 2026 | 8 Gen 6 | N2P | V2+ | 850 | LPDDR6 12800 |
| 2027 | 8 Gen 7 | A16 | V3 | 930 | LPDDR6 14400 |
| 2028 | 8 Gen 8 | A14 | V4 | 940 | LPDDR6X 17066 |
| 2029 | 8 Gen 9 | A14P | V4+ | 950 | LPDDR6X 19200 |
| 2030 | 8 Gen 10 | A10 | V5 | 1030 | LPDDR7 21600 |
> AMD's own slide at their tech day admits that the Snapdragon X Elite 84 is faster than their top dog Strix Point SKU in CB2024 1T.

SDXE looks even better when comparing Geekbench INT subtest scores. I compared good-scoring SDXE and STX runs, and SDXE has a ~10% lead in INT.