Didn't see the comparison between 9950X and 7950X? That uplift isn't from a generational IPC bump.
AVX-512 is very much being used.
I read the blog post for it and they do support using OpenVINO as the framework, which I know supports AVX-512, but that's still not a guarantee that they are actually using it.
AVX2 < AVX512_CORE < AVX512_CORE_VNNI < AVX512_CORE_BF16 < AVX512_CORE_FP16 < AVX512_CORE_AMX < AVX512_CORE_AMX_FP16
New Apple AI benchmark released
Download Geekbench AI (www.geekbench.com)
No idea if this is a good or bad score for a 16-core Zen 5.
[Attachment 105414] Geekbench result for an ASUS System Product Name with an AMD Eng Sample 100-000001277-60_Y processor (browser.geekbench.com)
Can't find a single 13900K / 14900K result. Is it crashing all these systems? 🤣
DDR5-5600 with horrible timings and the inter-thread latency issue of the 9950X probably has something to do with it.
Computerbase included GB ML inference in their MT tests, but apparently only at 32-bit precision. Curiously, the 9700X sits at the top of the charts, above the 9950X.
Ryzen 9 9950X & 9900X review: benchmarks in new application tests / Intel instability and AGESA updates are the (new) order of the day (www.computerbase.de)
Noticed something in the recent GB AI results?
GB ML is the older version of GB AI. Said another way, GB AI is the new version of GB ML.
We called prior preview releases for our machine learning benchmark “Geekbench ML.” But in recent years, companies have coalesced around the term “AI” for these workloads (and their related marketing). To ensure that everyone, from engineers to performance enthusiasts, understands what this benchmark does and how it works, it was time for an update.
It'd be nice if some owner could run this latency benchmark (either the one Dom used or the one CopeframeX created) on a non-Windows OS, or on Win10 instead of Win11 (which all reviewers seem to use).
PS: Here's the latency chart for the 7950X3D. The inter-chiplet access time increased almost 3x, which is bewildering, to put it mildly, considering that the I/O die is the same and the IFOP links probably didn't change either, at least at the physical level.
[Attachment 105378: 7950X3D core-to-core latency chart]
Actually GB AI is nothing new, just built out of GB ML.
Geekbench AI 1.0 - Geekbench Blog (www.geekbench.com)
That's not the rumor mill, that's MLID. Drama is his schtick. The MLID recipe is 20% actual leaks, 20% made-up info, and 60% Jerry Springer-show drama nonsense.
Like hitman said, it's an updated version of GB ML. This benchmark has scores for fp32, fp16, and int8.
Actually GB AI is nothing new, just built out of GB ML.
DDR5-5600 with horrible timings and the inter-thread latency issue of the 9950X probably has something to do with it.
Slightly related: Phoronix made comparisons of various apps with and without AVX-512, including OpenVINO.
The title of the executable had AVX2 in it. It's possible that it still uses AVX-512 under the hood, but like I said, I was going by the name. Zen 5 doesn't need AVX-512 to get a big boost over Zen 4 in certain workloads, but it certainly helps.
I read the blog post for it, and they do support using OpenVINO as the framework, which I know supports AVX-512, but that's still not a guarantee that they are actually using it.
You cannot cite inter-thread latency as the reason behind every issue. Also, since ML workloads AFAIK (I don't work in the field) are basically matrix multiplications, they don't call for a lot of communication between threads. Here you can find OpenVINO tests from Phoronix: https://www.phoronix.com/review/amd-ryzen-9950x-9900x/13 (scroll down past TensorFlow). Dual-CCD parts are leading the pack.
DDR5-5600 with horrible timings and the inter-thread latency issue of the 9950X probably has something to do with it.
Yep! AVX-512 is being used. I've run my 7950X3D (stock) with AVX-512 enabled and disabled in the BIOS, and it makes a difference.
Looking at the Zen 4/5 results against RPL, it does seem like AVX-512 is being used, despite the name. I can't think of another explanation for why it would perform so much better otherwise.
I don't agree on any delays.
AMD's response to this inquiry will be very telling. I doubt they'll admit what is really going on. Is it a regression that was necessary due to architectural design choices, a result of halting design at a specific point to meet an internal launch-date goal, or a silicon-level bug that might or might not be fixable by a new stepping or microcode?
More and more, it looks like desktop Zen 5 should have just been delayed, even if it meant a six-month-plus delay, to get this and other performance anomalies ironed out. I can't wait to see the core latencies on 3 nm Zen 5C Turin, which is rumored to have the fabled 16-core CCX.