> Case in point - I wouldn't be surprised if the PS6 uses N3. Yes... for a console that might be coming out no earlier than the end of 2028.

It's holiday 2027 afaik, and probably using chiplets, aka multiple nodes.
> The GPU die would depend on where AMD/NVIDIA is with their design libraries.

There is none.
> The compute die would likely be trailing the leading edge by at least a partial node, so maybe a vanilla N2, or N3P.

Vanilla N2 doesn't exist when N2P ramps.
> Vanilla N2 doesn't exist when N2P ramps.

Meaning that the N2 production line will be converted to N2P?
> Meaning that the N2 production line will be converted to N2P?

Yeah, nodelets roll over just like that.
Isn't this what RISC-V (with its pros and cons) is all about?
I said client.
It takes years to design/develop/manufacture a CPU. Zen 5 development started 5 years ago, for comparison.

What RISCV are you talking about? There is such a proliferation out there you can call it almost anything you want. Everyone in the RISCV world was patting themselves on the back for the RVA23 profile that's supposed to solve all their problems by specifying a more realistic set of what is mandatory, and it does - but it also has tons of extensions. You thought ARM had a lot of optional stuff? You ain't seen nothing yet!
I haven't dug into it much because frankly I just don't give a crap about RISCV, and it will depend on exactly what extensions get adopted, but I'm pretty sure those complex addressing modes are in there. So sure, if you use a stripped-down RISCV you can claim it is a pure RISC that has only simple loads/stores. But those early pure RISCs made multiplication support optional - and so does RISCV if you choose the most stripped-down version. If RISCV ever makes it into Android phones in any number, I'll bet it will be with something that looks very much like ARM in terms of complex addressing modes. Because it would be dumb to do otherwise.
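For a concrete sense of what "complex addressing modes" buys you, here is a small C illustration of my own (not from the post); the instruction sequences in the comments are schematic rather than actual compiler output:

```c
/* Indexed array access: AArch64 can fold a scaled register index into the
 * load itself, while base RV64I only has base + 12-bit-immediate addressing,
 * so the same access needs extra shift/add instructions (the Zba extension's
 * sh3add narrows the gap again). */
#include <stdint.h>
#include <stddef.h>

int64_t sum(const int64_t *a, size_t n) {
    int64_t s = 0;
    for (size_t i = 0; i < n; i++) {
        /* AArch64:      ldr  x3, [x0, x2, lsl #3]
         * Base RV64I:   slli t0, a2, 3
         *               add  t0, a0, t0
         *               ld   t1, 0(t0)
         * RV64 + Zba:   sh3add t0, a2, a0
         *               ld     t1, 0(t0)          */
        s += a[i];
    }
    return s;
}
```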
> Zen 5 development started 5 years ago, for comparison.

In H1 2018 Mr. Clark reported that AMD had already been working on Zen 5.
Btw there are AMD mobile 2025-2026 roadmaps leaked. They list no Zen 6 in 2026, just refreshes. So the theory of H2 2026/2027 fits.
> Qualcomm has a good opportunity to go all out and not mess up in 2026.

Qualcomm had a great opportunity this year:
- Only silicon vendor for Windows-on-ARM
- First to Microsoft's Copilot+ Initiative
- Beginning of the great upgrade cycle that will be stimulated by Microsoft phasing out support for Windows 10 in 2025.

Yet they didn't go all out (example: not investing in a sufficiently large GPU for X Elite), and they messed up a few things (such as the Dev Kit cancellation).

> Qualcomm had a great opportunity this year; ...

At this rate only Nvidia can save WoA; if they fail, then this platform has no chance of ever taking off.
> Interesting. They're actually going to refresh Strix and Kraken? Wonder if those will wind up being like Rembrandt.

The refreshes are a must, since AMD has to wave *something* around at CES in January 2026. Let's hope they won't just overclock the AI engine again but do something meaningful, like Rembrandt.
I just don't buy Windows on ARM ever having more than at most 10% of the Windows market, and even that's probably optimistic. Sure, in reality it is fine for most people, but they will hear tales from people for whom it is a problem. All it takes to make someone a naysayer about ARM PCs is one application, one game, or one driver for an 8-year-old printer they're still using that won't run properly, and they'll tell others to stay away.

Honestly, smartphones are a bigger threat to x86 than ARM PCs, because a lot of younger people grow up used to only using a smartphone for their own "personal computing" needs. Their only exposure to a PC is at school and at work, so it carries the baggage of that negative association with being forced to do stuff they don't want to do in a place they don't want to be. They will only buy a PC for personal use if they have absolutely no choice.
> What about Bald Eagle Point?
>
> AMD Bald Eagle Point leaks as Zen 5 APU lineup with major cache upgrade
> AMD could be preparing new laptop APUs to follow up the Zen 5-based Strix Point chips. Named "Bald Eagle Point", the APUs also reportedly utilize Zen 5/5c cores like Strix Point alongside an RDNA 3.5 iGPU. However, the APUs apparently bring a major addition to the cache structure. (www.notebookcheck.net)
>
> Or was that a meme?

Strix refresh has a different name.
It's a shrink of Zen 1 with improved clock rates that took 13 months to follow Zen.
> The crazy wide execution of AVX512 is arguably not the biggest advantage AVX512 brought to the table for x64, but people focus on that stupid number because it is in the name, and Intel in its infinite wisdom mandated that 512b execution must be supported, while 256b and 128b are optional extensions.

Granted that Intel "got rid of it" because E-cores couldn't do it, but I believe this was a die-size issue for BOTH E and P cores.
Intel got rid of it because E-cores implementing AVX512 according to the specification would either be terribly slow, or the area cost would grow to the point that E-cores could no longer be spammed as easily and in the quantities Intel needed to spam them. Of course, they got rid of the support in the very next release cycle after introducing AVX512 on client, to make software developers happier... AVX10 is the effort to get the benefits back without having to pay the cost of 512b shuffle units.
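To illustrate the "benefits without the 512b cost" point, here is a small sketch of my own (not from the post): with AVX-512VL the per-lane masking works on ordinary 256-bit registers, independent of any 512-bit datapath, which is roughly the idea AVX10's 256-bit level standardizes. The AVX2 version has to fake predication with a compare plus AND.

```c
/* Sketch: AVX-512 masking on 256-bit vectors (AVX-512VL) vs. the AVX2
 * workaround. Build with e.g. -mavx2 -mavx512f -mavx512vl on gcc/clang. */
#include <immintrin.h>

/* AVX2: no predication, so emulate it with a full-width compare and an AND. */
__m256i conditional_add_avx2(__m256i acc, __m256i x, __m256i cond) {
    __m256i m = _mm256_cmpgt_epi32(cond, _mm256_setzero_si256());
    return _mm256_add_epi32(acc, _mm256_and_si256(x, m));
}

/* AVX-512VL: the predicate lives in a k-register and the add itself is
 * masked; same 256-bit vectors, no 512-bit execution units required. */
__m256i conditional_add_avx512vl(__m256i acc, __m256i x, __m256i cond) {
    __mmask8 k = _mm256_cmpgt_epi32_mask(cond, _mm256_setzero_si256());
    return _mm256_mask_add_epi32(acc, k, acc, x);
}
```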
I also don't understand the comment about compiler support. Two out of three of the biggest C/C++ compilers are open source, and Intel has no power to force them to do anything. Not to mention you could fork them and add the support back if you needed to. If anything, AMD is so successful partly due to Intel's work to add AVX512 support to those compilers. And Intel's ICX (a clang fork; AOCC is AMD's fork) still compiles code fine for Zen; for example, Y-cruncher uses it for its Zen 5 optimized binaries, since in Mystical's evaluation it still does a better job than upstream clang. So while I understand Intel's CPU execution leaves a lot to be desired, their software efforts in the open source domain are more prominent than AMD's own.
But yes, that's probably off-topic here
> Z5 had a few regressions in the 128b SSE int add and 256b AVX2 int add throughput/latencies. Like the throughput is halved. I suppose it is something to do with the new changes for unifying int scheduling.

Not quite. First of all, the INT scheduler has nothing to do with SSE and AVX2 integer operations; the FP/SIMD scheduler is responsible for those. The throughput is not halved for those operations, but if you max the schedulers out, you will get one extra cycle of delay on one-cycle instructions. Since SIMD integer adds are natively 1 cycle, they take the latency hit. Throughput stays the same: 4 int adds at whatever SIMD width you want. Speaking of desktop Zen 5 here.
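If anyone wants to sanity-check numbers like these themselves, below is a minimal C sketch of my own (function names are just illustrative) of the usual way latency and throughput are separated in a microbenchmark: a single dependent chain exposes latency, several independent chains expose throughput. Timing and iteration plumbing are omitted; tools like llvm-mca or nanoBench do this properly.

```c
/* Latency vs. throughput of a SIMD integer add (AVX2 _mm256_add_epi32).
 * Timing harness omitted; this only shows the dependency structure. */
#include <immintrin.h>

/* Latency: every add depends on the previous one, so the loop runs at
 * one add per 'latency' cycles. */
__m256i latency_chain(__m256i v, long iters) {
    for (long i = 0; i < iters; i++)
        v = _mm256_add_epi32(v, v);
    return v;
}

/* Throughput: four independent chains can fill all the vector ALU ports,
 * so the loop runs at the machine's adds-per-cycle rate instead. */
__m256i throughput_chains(__m256i v, long iters) {
    __m256i a = v, b = v, c = v, d = v;
    for (long i = 0; i < iters; i++) {
        a = _mm256_add_epi32(a, v);
        b = _mm256_add_epi32(b, v);
        c = _mm256_add_epi32(c, v);
        d = _mm256_add_epi32(d, v);
    }
    return _mm256_add_epi32(_mm256_add_epi32(a, b), _mm256_add_epi32(c, d));
}
```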