- Mar 3, 2017
> Surely, you've noticed that multiple countries have begun spending hundreds of billions of dollars to ensure that they have angstrom-scale fabs inside their borders. Each of those countries will massively increase spending, and several other nations will join them.

No, spending on this will at best stagnate. They will all have more pressing issues to solve, even if we consider only their supply-chain issues and ignore everything else that's mounting.
> Zen 5.5* 🤣

Seriously though, is Zen "5.5" one of the targets they are working on until the launch of Turin? Which, at this time, is still communicated as "in 2H 2024" AFAIK. (3,638 hours left.)
I am afraid it's about a bad marketing message and miscommunication. The materials mentioned that the decoders are statically partitioned in SMT mode. Traditionally, when you wanted to turn off SMT, you went into the BIOS and disabled it. The question now is whether the partitioning is static whenever SMT is enabled [i.e., if SMT is on in the BIOS, is the core always in SMT mode?] or dynamic, as the interviews are leading us to believe.
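Note that on Linux there is also a kernel-level SMT switch, separate from the BIOS one, and you can at least query it. A minimal sketch, assuming the usual Linux sysfs layout (returns "unknown" where sysfs isn't available):

```python
from pathlib import Path

# Kernel-level SMT state lives in sysfs on Linux; typical values are
# "on", "off", "forceoff", "notsupported", "notimplemented". This is
# independent of (and in addition to) the BIOS-level SMT switch.
SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def smt_state() -> str:
    """Return the kernel's SMT state, or 'unknown' if sysfs is absent."""
    try:
        return SMT_CONTROL.read_text().strip()
    except OSError:
        return "unknown"

print(smt_state())
```

Whether the decoder partitioning reacts to this runtime switch, to the BIOS switch only, or to actual thread occupancy is exactly the open question above.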
It can be seen that AMD intentionally limited the performance of the processor front end and some instruction combinations in the previous microcode.
> From a technical perspective, for the Strix mobile parts, the front end could be curtailed in the interest of efficiency if the memory subsystem cannot sustain the BW needed (e.g. smaller L3, slower memory).

The classic cores in Strix have the same amount of L3 per core as desktop tho, 4MB.
In which case, if true, it really supports the argument that the fabric and the memory/cache hierarchy from L2 onwards need a big improvement.
> Seriously though, is Zen "5.5" one of the targets they are working on until the launch of Turin? Which, at this time, is still communicated as "in 2H 2024" AFAIK. (3,638 hours left.)

You think they'd release the client lineup without enabling all of the core's features because they aren't ready? Seems highly unlikely; they'd rather postpone the launch, I think. They don't need early reviews to be less positive than they could be.
What I get from this: zen32% still possible 😎
I am a straight-up bubble huffer. I say that AI functions will be the main reason for sales of computing devices by 2040. I still find it bizarre that so many people on various text-site forums don't see that AI is THE future of computing in society.
> What I get from this: zen32% still possible 😎

1 decode cluster = 16% IPC
/s
> Hmm why does Zen5C have higher FP IPC than vanilla Zen5? (performance/GHz)
> View attachment 104342
> Something's fishy with the clockspeeds I guess

Memory bottleneck?
I wonder how AMD/Intel will perform in FP if AVX512 is used in relation to Apple?
> Hmm why does Zen5C have higher FP IPC than vanilla Zen5? (performance/GHz)
> View attachment 104342
> Something's fishy with the clockspeeds I guess

To do a per-core test, you need to pin the test to the core. The thing is, they are running under WSL2, so they are running Linux inside a Hyper-V virtual machine [that's what WSL2 is]. They can pin all they want; the hypervisor won't care... https://github.com/Microsoft/WSL/issues/3827 ...unless M$ fixed that, but the issue is still open on GitHub. In other words, they may think they are running the test on the Z5c core, but in reality it might be running on Z5, or Z5c, or both of them.
> The idea of having to actually disable SMT to get a 1t uplift on say a 9950x doesn't seem too bad, I mean that would work well for my use cases at least

Where'd you get this idea from?
> Hmm why does Zen5C have higher FP IPC than vanilla Zen5? (performance/GHz)

Remember what "IPC" is.
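Worth spelling out: what's being compared here is score per GHz, a proxy, not instructions per cycle from hardware counters. A quick illustration (with made-up numbers) of how a misread clock inflates the "IPC" figure:

```python
def ipc_proxy(score: float, clock_ghz: float) -> float:
    """Score-per-GHz proxy for 'IPC', as used in the comparison above."""
    return score / clock_ghz

# Hypothetical numbers: both cores produce the same benchmark score,
# but the dense core's clock is under-read (e.g. a tool reporting a
# base or idle clock instead of the actual clock during the run).
z5_score, z5_clock = 100.0, 5.0              # classic core, clock read correctly
z5c_score, z5c_clock_reported = 100.0, 4.0   # dense core, clock under-read

print(ipc_proxy(z5_score, z5_clock))             # 20.0
print(ipc_proxy(z5c_score, z5c_clock_reported))  # 25.0 -> looks like higher "IPC"
```

Which is exactly why "something's fishy with the clockspeeds" would fully explain the Z5c result without any real IPC difference.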
> With Zen5c-256 being denser and targeted at lower clock speeds, are they achieving lower internal cache access latencies as compared to full Zen5-256?

L2$ and L3$ densities of the two CCXs are the same, except that the 5c cluster has more "stops" on the L3$'s bus.
> 1 decode cluster = 16% IPC
> 2 clusters = 16% * 2 = 32%
> /s

Honestly, I think it's the only logical conclusion. /s
> - removed stoopid joke

Hey, bring that back. This thread can't become any more of a dumpster fire than it already is.
> I've no idea if AMD tried any of this, or just left Zen5c as just a denser Zen5 with no other notable changes.

There are at least AnandTech's core-to-core latency measurements, in which latency (in nanoseconds, not necessarily in cycle counts) between dense-cluster members is larger than between classic-cluster members. Granted, the classic cluster has only half the number of cores attached to its internal bus as the dense cluster has; IOW the classic cluster's bus is topologically smaller.