> Are Zen4 GB6 ST speeds valid?

Read the thread again, it's all been said.
> Read the thread again, it's all been said.

Um... so what's your point? x86 Object Detection is valid but not M4?
Zen 4 is still only on par with its competition, Raptor Lake, in that subtest; both are using 256-bit vector operations.
SVL for M4 is 128 bits and the ZA size is 128x128 bits, and it can do that in far fewer cycles. So it ends up at twice the score of its competition overall. Cool, but much more niche.
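For concreteness, here's a back-of-the-envelope sketch of why an outer-product unit pulls ahead per instruction even at a modest vector length. It takes the figures quoted in this thread as assumptions (SVL = 128 bits on M4, 256-bit vectors on Raptor Lake / Zen 4) and uses the Arm SME definition that one FP32 FMOPA accumulates an (SVL/32) x (SVL/32) tile of multiply-accumulates, versus (vector_bits/32) for a packed SIMD FMA:

```python
# Rough per-instruction FP32 throughput comparison. The SVL and vector
# widths below are the figures claimed in the thread, not verified specs.

def fma_macs(vector_bits: int) -> int:
    """FP32 multiply-accumulates per packed SIMD FMA instruction."""
    return vector_bits // 32

def fmopa_macs(svl_bits: int) -> int:
    """FP32 multiply-accumulates per SME FMOPA (vector outer product):
    an (SVL/32) x (SVL/32) tile is accumulated into ZA."""
    lanes = svl_bits // 32
    return lanes * lanes

print(fma_macs(256))    # 8 MACs per 256-bit FMA
print(fmopa_macs(128))  # 16 MACs per FMOPA at SVL = 128
```

So even with a 128-bit SVL, the outer-product form does twice the FP32 work per instruction of a 256-bit FMA, and the gap grows quadratically with SVL.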
> Um... so what's your point? x86 Object Detection is valid but not M4?

Object detection makes almost no difference in the comparison between Raptor Lake / Zen 4 / M3. They're all on par.
> Object detection makes almost no difference in the comparison between Raptor Lake / Zen 4 / M3. They're all on par.

So basically, M4's score is so high that GB6 is no longer valid.
M4's object detection score is twice that of all of them. So now it's an issue, because now it makes a difference, and it is implemented using an even less applicable extension (where 80% of the instructions are for workloads most software will run on the NPU anyway).
And people have been complaining about GB6 since its release, for numerous reasons. But I'm sure this is all lost on you. GB6 says M4 is 50% faster than M2 in ST. This is good, Apple is good, so GB6 must be good. Never mind that it doesn't even agree with Apple's own claims.
> So basically, M4's score is so high that GB6 is no longer valid.

Maybe GB6 should properly optimize their tests for Intel/AMD SoCs too, since those have their dedicated NPUs?
> Cinebench R23?

Cinebench 2024 is better than GB6, even though it's a renderer benchmark. Just avoid GB6. Looking at SPEC, M4 is not the major leap that GB6 indicates.
> Cinebench 2024 is better than GB6, even though it's a renderer benchmark. Just avoid GB6. Looking at SPEC, M4 is not the major leap that GB6 indicates.

That's a weird statement, since Cinebench has never been known to correlate with anything. Even among renderer tasks, it's a niche. Use Blender benchmarks instead.
> Maybe GB6 should properly optimize their tests for Intel/AMD SoCs too, since those have their dedicated NPUs?

GB has a separate NPU benchmark.
> Maybe GB6 should properly optimize their tests for Intel/AMD SoCs too, since those have their dedicated NPUs?

How is that proper optimisation? The NPUs of the ARM world have never been used for the GB6 CPU test, so why should Intel/AMD's be?
Are you complaining that they didn't, at their own expense, develop DX12 drivers for their GPU?
I'm not sure if it is still in force, but Microsoft had an exclusive deal with Qualcomm for Windows on ARM. So both of those companies share the blame that you can't buy a MacBook and natively boot Windows on it.
If Apple really wanted to go walled garden on the Mac, they would have locked the bootloader to prevent booting other operating systems. Something some Windows PCs have done - do you complain about them, or do you not consider that a "walled garden" because all you care about is running Windows?
> How is that proper optimisation? The NPUs of the ARM world have never been used for the GB6 CPU test, so why should Intel/AMD's be?

How can we be sure that Apple's SME implementation isn't tapping into its NPU?
> Loving the denials here.

Don't disagree with any of that. But it's mostly meh for me, since I'm not a data scientist, not an artist, not an animator, not an AI junkie, not a musician, not someone trying to look cool in public, etc. I'm just a computer enthusiast who does a lot of browsing, gaming, and media consumption, and for those use cases, despite all the advantages of power efficiency on its side, Apple devices make no financial sense for me.
Apple's M series CPUs have always had higher perf/watt, raw performance, and IPC than AMD & Intel. They're generations ahead regardless of the node.
> How can we be sure that Apple's SME implementation isn't tapping into its NPU?

Probably because that'd be much slower, since you'd be moving data off the CPU caches?
> How can we be sure that Apple's SME implementation isn't tapping into its NPU?

Is that your actual concern here? That the NPU is taking over? If that's the case, maybe Intel and AMD should take notes. In any case, I'm not sure what proper optimisation could be done on Primate Labs' end if that's just how things are set up to work on a given SoC.
> I just don't get it why you worry about Apple running some CPU code on an NPU but disregard the possibility of, say, Intel running some CPU code on the GPU. It's the same thing after all.

Is Intel running some GB CPU benchmark code on the GPU? Any link/URL? I just want to know the real reason for the Object Detection test's "anomalous" result. Apple didn't announce SME support. GB6 updated their test suite with SME support just before the M4 reveal. So everyone is assuming that Apple is using SME. If they are, what kind of acceleration is GB using for Intel/AMD SoCs? If the test is properly accelerated for one CPU and not others, is that fair?
> In any case, I'm not sure what proper optimisation could be done on Primate Labs' end if that's just how things are set up to work on a given SoC.

Shouldn't it be their responsibility to ensure that all CPUs are being used to the best of their capabilities? Or should they just take "under the table" gifts to include acceleration for one CPU, and then wait for gifts to arrive from the other CPU vendors before bothering to include relevant acceleration for their CPUs?
> Is Intel running some GB CPU benchmark code on the GPU? Any link/URL? I just want to know the real reason for the Object Detection test's "anomalous" result. Apple didn't announce SME support. GB6 updated their test suite with SME support just before the M4 reveal. So everyone is assuming that Apple is using SME. If they are, what kind of acceleration is GB using for Intel/AMD SoCs? If the test is properly accelerated for one CPU and not others, is that fair?

I previously posted a link comparing two Intel chips of the same generation, one with AMX, one without. The Object Detection speedup was already present. So in a way one could argue GB favored Intel back then, especially given that it's guaranteed far fewer people will be using Intel chips with AMX than Apple chips with AMX/SME.
> I also don't really see why it is fair to penalize Apple, who give you a state-of-the-art matrix coprocessor in a tablet, just because Intel has decided to cut AVX-512 from their consumer CPUs.

I'm not saying Apple should be penalized. I'm just interested in the technical details of how they are achieving a better score. AMD still has AVX-512 enabled. The GB6 internals document has not been updated to say whether SME is being used. It also says that MobileNetV1 is being used, which I suppose is an outdated CNN-based detector? And that brings me to the problem: it's too early to say the M4 is incredible based on one possibly outdated test result. Come on back down to reality, folks, from whatever plane of existence you've transcended to in your euphoria-induced excitement.
> So in a way one could argue GB favored Intel back then

Excellent point. So if there is evidence that GB is favoring, or has favored, certain vendors in certain tests, is it a relevant benchmark, and should people be excited over the score of a version bump released not too long ago?