Adreno 700 compute performance is weak
Not Adreno 700, no.
As per leaks, the next generation Adreno 800 series architecture brings 3 new technologies:
Good to hear, the rumors are that the next gen will also be a redesign.
Though I must say I'm not nearly as hopeful about their driver department at the moment. Qualcomm just doesn't seem to take the PC market anywhere near seriously enough (being stuck on ARM v8 while court-battling ARM also doesn't help on the CPU side).
I consider MTL as such. It wasn't fit for release but they released it anyway to dupe paying customers with an NPU that doesn't even meet M$'s minimum requirements. Until they develop some sort of fix to get the whole SoC to meet M$'s Copilot+ requirement, it's beta as far as I'm concerned.
155H is beta silicon?
So... Qualcomm is in a very bad position right now.
What happened to the X1 Elite SKUs? I cannot seem to find any review using the 4.3 GHz chip.
Even the 3.4 GHz chips seem to not have that M1 "competitive" efficiency. Something is amiss.
Additionally, the CPU seems to do well in CB24 and GB6, but in things like native 7-Zip, WinRAR, Blender, and others it seems to take a beating.
This slide was from ~5 years ago; it is insane how fast time flies and how long these projects take to come to market.
The fact that they depicted Zen 2 getting beaten soundly but have to face off against Zen 5 at launch is nuts.
I was fairly confident their Adreno is garbage. Having used it on Linux (professionally, somewhere), it is lacking many HW/SW features that are commonly present on AMD/Intel platforms with upstream Linux (even 2-CU RDNA2 has a richer feature set than Adreno). Can't say more than that.
The reviews show exactly that.
Their NPU was a rebrand of the age-old Hexagon DSP with some upgrades, not a ground-up, purpose-built design. Hexagon was OK, not great, not terrible, nothing groundbreaking.
It will be interesting to see how this Hexagon NPU is going to look vs XDNA2. XDNA2, with its roots in reprogrammable arrays, could be an interesting case study on how to rearrange the logic arrays to fit popular frameworks. Kind of like how we reprogram the old CPLDs on the assembly line once the logic function is finalized after manufacture of the devices.
Yes, Meteor Lake was created primarily to test a lot of new features like LP E-cores, the new tiled architecture with a separated media unit and memory controller, the new Intel 4 node and packaging, Thread Director, the new NPU unit, etc. Having all those features work is already a big achievement.
I consider MTL as such. It wasn't fit for release but they released it anyway to dupe paying customers with an NPU that doesn't even meet M$'s minimum requirements. Until they develop some sort of fix to get the whole SoC to meet M$'s Copilot+ requirement, it's beta as far as I'm concerned.
I hope Qualcomm will continue working on the second generation of their platform. It makes sense for them.
In short, their IP stack is weak and has many problems. They'll have to invest significantly and work hard to fix everything. Will they move forward with their quest for the PC market, or will they throw in the towel?
I also want to see the marketing meeting rooms working on the slide presentations for Strix and Lunar. The giddiness, the laughter, so many jokes being cracked.
Would love to see the finger-pointing going on at QC HQ at this very moment.
Nvidia's GPU drivers are pretty much a given, but the CPU performance?
Well, looks like this was kind of a dud, thanks for the entertainment y'all. See you back here next year for the Nvidia ARM SoC launch? Maybe they'll even have working graphics drivers.
nah, what we really want to know though is: do you still want the marble or not?
I'd like to put up this quote from the Lord of the Rings:
No.
nah, what we really want to know though is: do you still want the marble or not?
Or if MS Windows is not correctly tuned. Power management and scheduling are OS tasks. And I'm not sure MS properly optimized that for Snapdragon yet.
This is a good neutral review; web browsing battery life still appears to be less than the M3's. I wonder if it's due to the lack of efficiency cores or if the Oryon core itself is noticeably less efficient.
So... Qualcomm is in a very bad position right now.
Their GPU is lacking in feature set and is complemented by subpar drivers.
We still have the higher-end GPU to contend with in the 84 / 00 SKUs ("4.6 TFLOPS"), which I've not yet seen benchmarked. The 80/78/Plus are, according to Qualcomm, just "3.8 TFLOPS".
Qcom isn't core binning the GPU. Only frequency binning.
Assuming this is all just a single die, does it make sense for QCOM to disable such a percentage of the die - in such a large percentage of dies, or is the higher rated SKU just running at higher power limits?
If the top bin is less than 5% of dies, then they are disabling a big chunk of the die in 95% of dies...
So to achieve the full performance of the X Elite chip in CB2024, we need to run it in a performance mode with fans blasting?
Maybe a translation error: it seems like nT performance does drop unplugged, even in the lowest "Whisper Mode", if you use ASUS' default energy efficiency profile.
Snapdragon X Elite vs. Intel Core and AMD Ryzen in review
How good is Windows 11 on Arm with the Qualcomm Snapdragon X Elite? ComputerBase tested it with the Asus Vivobook S 15.
www.computerbase.de
This is presumably the 1 GHz underclock Windows Central mentioned; it's quite underclocked, so the power targets become a little irrelevant:
View attachment 101465
It seems like the Windows Power Plans control the peak frequencies, while the ASUS app profiles control the power limit. This may be an ASUS-specific thing: we'll need more data points.
//
ComputerBase also thinks they can extract the clocks per ASUS profile (and after disabling the claimed 1 GHz Energy Efficiency Power Plan). ASUS provided NBC the SoC+RAM power targets per profile, so...
Whisper Mode: 20W for SoC+RAM, 2.2 GHz CPU peak
Default / Standard: 35W for SoC+RAM, 2.7 GHz CPU peak
Performance: 45W for SoC+RAM, 3.0 GHz CPU peak
Full Speed: 50W for SoC+RAM, 3.2 GHz CPU peak
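If that ~1 GHz behaviour really comes from the Windows Power Plan rather than ASUS's own profiles, it should show up in the plan's processor settings. Here is a minimal sketch of how one might inspect (and, hypothetically, clear) such a cap with powercfg; SUB_PROCESSOR and PROCFREQMAX are the standard powercfg aliases, but whether this particular plan caps frequency via that setting rather than a max-processor-state percentage is an assumption:

```python
# Sketch (Windows only): inspect the active power plan's CPU settings and clear a
# "maximum processor frequency" cap. SUB_PROCESSOR / PROCFREQMAX are standard powercfg
# aliases; a value of 0 means "no cap".
import subprocess

def powercfg(*args: str) -> str:
    return subprocess.run(["powercfg", *args], capture_output=True, text=True, check=True).stdout

def dump_processor_settings() -> None:
    # Prints every processor-power setting of the active plan (AC and DC values).
    print(powercfg("/query", "SCHEME_CURRENT", "SUB_PROCESSOR"))

def clear_frequency_cap() -> None:
    # Hypothetical fix for a ~1 GHz cap: set the max frequency to 0 (unlimited) on AC and DC.
    powercfg("/setacvalueindex", "SCHEME_CURRENT", "SUB_PROCESSOR", "PROCFREQMAX", "0")
    powercfg("/setdcvalueindex", "SCHEME_CURRENT", "SUB_PROCESSOR", "PROCFREQMAX", "0")
    powercfg("/setactive", "SCHEME_CURRENT")  # re-apply the plan so the change takes effect

if __name__ == "__main__":
    dump_processor_settings()
```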
Qcom isn't core binning the GPU. Only frequency binning.
4.6 TFLOPS = 1.5 GHz
3.8 TFLOPS = 1.25 GHz
The 4.6 TFLOPS version consumes 50% more power than the 3.8 TFLOPS version according to Qualcomm's own graphs.
If this is segmentation, it would be dumb to segment such a large percentage of dies into lower performance.
If only ~5% or less of the dies can run the iGPU at that clock speed, that is quite bad...
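As a sanity check on those two figures: peak FP32 TFLOPS is just 2 (FMA) × ALU lanes × clock, so both numbers are consistent with the same GPU at the two clocks (roughly 1536 FP32 lanes, inferred from the quoted TFLOPS rather than stated by Qualcomm), and the top bin buys about 20% more throughput for the quoted 50% extra power. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the Adreno X1 TFLOPS figures quoted above.
# Assumes 2 FP32 ops per lane per clock (FMA); the lane count is inferred, not official.
def tflops(lanes: int, clock_ghz: float) -> float:
    return 2 * lanes * clock_ghz * 1e9 / 1e12

LANES = 1536  # inferred: 4.6e12 / (2 * 1.5e9) ≈ 1533, so a 1536-lane config fits

full = tflops(LANES, 1.50)     # ~4.6 TFLOPS (84 / 00 SKUs)
binned = tflops(LANES, 1.25)   # ~3.8 TFLOPS (80 / 78 / Plus SKUs)

# Qualcomm's graphs reportedly show ~50% more power for the faster bin:
perf_gain = full / binned - 1          # ~0.20 -> ~20% more throughput
perf_per_watt = (full / binned) / 1.5  # ~0.80 -> ~20% worse perf/W at the top bin

print(f"{full:.2f} vs {binned:.2f} TFLOPS, +{perf_gain:.0%} perf for +50% power "
      f"({perf_per_watt:.0%} relative perf/W)")
```

So purely on those numbers, the 4.6 TFLOPS bin is roughly 20% worse perf/W, which fits the "frequency binning at higher power" reading above.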
Almost as if that was marketing hype from years in advance that nobody should have taken seriously (but many did).
Given the X1E-78-100 gets 1805 pts (NotebookCheck's numbers) and this was the original target:
View attachment 101447
Yeah I think that's a safe assumption.
Not like M1 was the second coming of Christ. It just had much better marketing. Seeding hardware only to friendly/safe media that won't run many tests and just stuff reviews with vague, impressionable stuff about how it's "faster than anything" and has weeks if not months of battery life... praising the binary translation and not looking for faults that were or still are there. Never measuring actual power consumption, so that the wrong idea that the TDP is like 5W could spread...
So... Qualcomm is in a very bad position right now.
They were hyping it up to be an M1 moment, but it has fallen far short of that.
Their CPU has failed to reach their power/performance targets.
Their GPU is lacking in feature set and is complemented by subpar drivers.
Their NPU is lacking in feature set, and programmable software cannot be run on it.
In short, their IP stack is weak and has many problems. They'll have to invest significantly and work hard to fix everything. Will they move forward with their quest for the PC market, or will they throw in the towel?
I'd like to put up this quote from the Lord of the Rings:
View attachment 101456
Yeah, that reminds me of AMD apologists waiting for Zen 5, ready to believe anything as long as it fits their love.
Not like M1 was the second coming of Christ. It just had much better marketing. Seeding hardware only to friendly media that won't run many tests and just say impressionable stuff about how it's "faster than anything" and has weeks if not months of battery life... praising the binary translation and not looking for faults (surely they will be fixed soon!).
I think there's a bigger difference in the scrutiny the products got, and the carefully crafted messaging done to preempt it, than in the actual technical merits of the SoCs.
While PCMark 10 is old, I've confirmed the Office Battery Life sub-test supports Arm (I assume that means native Arm binaries, and not just Prism, lol).
ComputerBase usually tests the battery life of notebooks by streaming a YouTube video and with the PCMark 10 battery test. In both cases, the screen is normalized to a brightness of 200 cd/m² in the center of the display (measured at 100 percent white content) and all energy-saving settings (display dimming, adaptive brightness, adaptive contrast, etc.) are deactivated.
That was also the plan here, but PCMark 10 refused: the software recognizes the Arm platform and declines to run the test because it is not yet available as a native Arm app, so the result would not be representative. For "non-Arm64 apps", of course, it still would be.
Battery tests don't work: Our lab folks have been attempting to run battery tests on a variety of Snapdragon laptops, including the Surface Pro, and of three different tests, all failed to run. First, our own Battery Informant test, which is written in C# .NET and surfs the web using Edge, failed to open at all. Then they tried PCMark10's battery test and it refused to run. Finally, the Procyon battery test failed after an hour because it requires you to install Outlook, which we don't have for Arm (perhaps someone can get it, but it's not even part of our Office 365 package).
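On the native-Arm vs. Prism point: a test harness (or a curious reviewer) can check at runtime whether it is actually running natively on the Arm64 host or under emulation. Below is a minimal sketch using the Win32 IsWow64Process2 call from Python via ctypes (Windows only; the helper name and output format are mine):

```python
# Sketch (Windows only): report what the host machine really is, regardless of whether
# this Python build is Arm64-native or an x64/x86 binary running under emulation (Prism).
# Uses kernel32!IsWow64Process2; the constants are PE IMAGE_FILE_MACHINE_* values.
import ctypes
from ctypes import wintypes
import platform

MACHINES = {0x014C: "x86", 0x8664: "x64", 0xAA64: "ARM64"}

def native_host_machine() -> str:
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    kernel32.IsWow64Process2.argtypes = [wintypes.HANDLE,
                                         ctypes.POINTER(wintypes.USHORT),
                                         ctypes.POINTER(wintypes.USHORT)]
    kernel32.IsWow64Process2.restype = wintypes.BOOL
    process_machine = wintypes.USHORT()  # 0 if the process is not a classic WOW64 process
    native_machine = wintypes.USHORT()   # the real host architecture
    if not kernel32.IsWow64Process2(kernel32.GetCurrentProcess(),
                                    ctypes.byref(process_machine),
                                    ctypes.byref(native_machine)):
        raise ctypes.WinError(ctypes.get_last_error())
    return MACHINES.get(native_machine.value, hex(native_machine.value))

if __name__ == "__main__":
    # platform.machine() reports what the interpreter was built for (e.g. "AMD64" under Prism),
    # while IsWow64Process2's native machine reports what the host really is (e.g. "ARM64").
    print(f"interpreter: {platform.machine()}, host: {native_host_machine()}")
```

Presumably the benchmark suites do something equivalent, which is how PCMark can recognize the Arm platform and refuse to run while still offering a "non-Arm64 apps" result.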