> Properly multithreaded games are the future and always will be.
Shifting more work to GPU is easier - it inherently forces you to be parallel and gives far bigger scale than puny CPU cores.
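A minimal illustration of the structural point being made here (plain C++, a hypothetical brighten-a-framebuffer task, not taken from any engine discussed in this thread): on the CPU the programmer has to carve the work into threads by hand, whereas the GPU-style formulation is just a per-element kernel, which is why offloading "forces" a parallel shape.

// Hypothetical example: brighten a framebuffer, CPU-multithreaded style.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Per-pixel "kernel": on a GPU this body is what a compute shader or CUDA
// kernel would run once per pixel across thousands of hardware threads.
inline std::uint8_t brighten(std::uint8_t p) {
    return static_cast<std::uint8_t>(std::min(255, p + 10));
}

// CPU path: the programmer must slice the range and manage threads explicitly.
void brighten_cpu_mt(std::vector<std::uint8_t>& pixels, unsigned n_threads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = pixels.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == n_threads) ? pixels.size() : begin + chunk;
        workers.emplace_back([&pixels, begin, end] {
            for (std::size_t i = begin; i < end; ++i) pixels[i] = brighten(pixels[i]);
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<std::uint8_t> framebuffer(1920 * 1080, 100);
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    brighten_cpu_mt(framebuffer, n);
    return 0;
}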
> Properly multithreaded games are the future and always will be.
id Tech games are literally here, now, and you can play them.
> PhDs can seldom write working MT code
Well it's not their job.
> Shifting more work to GPU is easier - it inherently forces you to be parallel and gives far bigger scale than puny CPU cores.
GPUs are poo. If you can get away with not touching the GPU, you should.
> id Tech games are literally here, now, and you can play them.
Yes, this was the one example that did come to mind. They seem to have put a lot of effort into optimising it, either for insane frame rates on good hardware or acceptable frame rates on mediocre hardware - something Valve was once a big advocate of with the Source engine, but seems to have slipped from with their Steam/SteamOS/VR etc. pursuits.
> GPUs are poo. If you can get away with not touching the GPU, you should.
Far better-paying job if you are good at programming GPUs...
> I'll keep a ghetto x64 machine for old games, but future spending is lamentably going to locked-down Apple crap because AMD and Intel simply cannot compete (or refuse to do so, I can't tell the difference).
Only possible if all you care about is expensive productivity/creativity software, browsing, Unix-y or Apple software development, and AI.
> but seems to have slipped from with their Steam/SteamOS/VR etc. pursuits.
Hell no, HL: Alyx still runs like butter.
> Far better-paying job if you are good at programming GPUs...
Not really?
> I'm curious to see if ARM Ltd, AMD, or Intel will pick up on Blueshift Memory's new Cambridge architecture.
It's vague enough to be hyperbole and probably meant to be a trap for gullible investors.
> It's vague enough to be hyperbole and probably meant to be a trap for gullible investors.
Certainly possible - they did announce a RISC-V-based reference design back in November, and they seem to have some partnerships in the works, including something HBM-based, according to this article...
Anyone remember BitBoys? Someone's revolutionary point-cloud rendering technology that was gonna destroy DirectX-compatible GPUs by making graphics photorealistic with infinite detail overnight? Soft Machines' reverse HT?
Still waiting...
> What's with this "clock speed" thing about ARM? Before, it was that ARM couldn't clock high and that's why they had high IPC; now it's that they are close to x86 frequency, so there is little margin left.
Presumably we won't end up with a shorter-pipeline, higher-IPC ARM chip clocking higher than x64 parts. That would be too embarrassing. So I assume they'll hit a frequency wall like x64 did this year.
We don't know if Apple stopped seeing much IPC increase because of a hard wall, a lack of talent, or because it was easier to just get gains via clock bumps, but the ARM players have had higher year-over-year performance. Until it slows down, let's wait before saying the wall is here.
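One framing this back-and-forth keeps circling, with illustrative numbers only (not a claim about any specific core): 1T performance is roughly IPC × clock, so e.g. 30% higher IPC at 13% lower clock still nets about 1.30 × 0.87 ≈ 1.13, i.e. ~13% ahead. The disagreement is really about which of the two factors still has headroom.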
Blueshift-based HBM and a non-von Neumann architecture. That means ARM and x86 need not apply. I'll probably never see this go mainstream.
Startup Blueshift Memory launches low-power RISC-V processor design: "RISC-V core design with smart memory controller accelerates computation by up to 50 times and saves up to 65% energy." (www.eenewseurope.com)
The market is slowing down - people can't drop a grand and a half every year on a new toy that isn't much different from the one they've got, purchase cycles have lengthened, and the "AI" stuff isn't working yet to get people buying.
Yeah, it's amazing what one can do when selling 200 million+ premium devices per year - personally, I'd prefer it if they got the battery life of my iPhone to 7 days.
Well, it had some games. Then Apple killed 32-bit apps.
Apple plays fast and loose with platform de-featuring, so it's never, ever gonna be a viable gaming platform.
> Presumably we won't end up with a shorter-pipeline, higher-IPC ARM chip clocking higher than x64 parts. That would be too embarrassing. So I assume they'll hit a frequency wall like x64 did this year.
It's strange that all AMD mobile parts excluding Fire Range (that's a desktop part) are clocked at 5.1 GHz. This might be because of N4. Zen 6 client is more focused on mobile and is on N3E, so we may see improvements there.
> Apple's gaming revenue is higher than the entire console market (PS5+Xbox+everything else) and also higher than the entire PC market. It isn't all that far off from being higher than both combined.
Gacha casinos aren't gaming. Next.
Oh no, I don't like the casino. Woe is me. So why should they try to gain a few scraps off the PC gaming market they are already much bigger than? Apple has plenty of games, just not the games you personally like on the platform you want to play them on.
> It's strange
Cost.
> excluding Fire Range
You can ignore cost.
> This might be because of N4
Because money.
> Zen 6 client is more focused on mobile
Who said that?
> Does anyone know specifically what was removed from Zen 5 architecturally that had been present in Zen 4 that he may have been referring to?
Some notable regressions and removals for Zen 5: no no-op fusion, no loop buffers, and many AVX/SSE instructions have an additional cycle of latency (was this deliberate and planned for? Who knows).
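If anyone wants to check the extra-cycle-of-latency claim themselves, the usual approach is a dependent-chain micro-benchmark. A rough sketch (GCC/Clang on x86, illustrative only, not the methodology behind the numbers above; rdtsc counts reference cycles, so pin the thread and fix clocks for anything rigorous):

// Rough latency probe for a dependent chain of SSE adds.
// Build with: g++ -O2 latency.cpp
#include <cstdio>
#include <immintrin.h>
#include <x86intrin.h>

int main() {
    const long long iters = 200000000LL;
    __m128 v   = _mm_set1_ps(1.0f);
    __m128 inc = _mm_set1_ps(1e-9f);

    unsigned long long t0 = __rdtsc();
    for (long long i = 0; i < iters; ++i)
        v = _mm_add_ps(v, inc);              // loop-carried dependency: the adds serialize
    unsigned long long t1 = __rdtsc();

    volatile float sink = _mm_cvtss_f32(v);  // keep the chain from being optimized away
    (void)sink;
    std::printf("~%.2f reference cycles per dependent ADDPS\n",
                double(t1 - t0) / double(iters));
    return 0;
}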
> But less immediately when their performance per watt is 3x AMD and Intel. How else did Apple go from merely matching Zen 3 with M1 to eclipsing Zen 5 by 15% (or more) with M4? It's easier for them to boost clock rates than it is for AMD/Intel to find IPC improvements.
Zen 6, and Zen 5 before it, target real-world application performance in server first, then client second. AMD is currently leading most major application benchmarks over both Intel and ARM with this approach. I find it impossible to believe this will change in 2025, and if it does, it will be Intel in laptop with Panther Lake. I still don't see how Clearwater Forest dethrones Turin D in DC, but that one at least I can envision.
> Go plot the GB6 1T or SPECint scores of the A and X series. It really isn't slowing down like x64. ARM might suck at doing it in a decent area, but their "partners" are doing better.
I don't think 1T is that much of a design focus for Zen 6, as it has greatly diminishing returns in real-world PC usage.
Zen 6 is a 10% generation, so I'm pretty sure AMD won't even catch up to M4 by 2026. And by that fall, Cortex-X will be ahead of them in performance and performance per watt (probably not on area, however, which matters for server).
> They can't keep going forever. They will hit a plateau sooner or later, but yes, there is a good chance that they may leave the x86 players quite behind, and then those players will spend quite some time catching up. It's actually good for x86, because ARM is showing them that more performance is within reach. Lunar Lake's performance is miraculous compared to Meteor Lake, and we probably wouldn't have seen it materialize without M1.
All processor designs make decisions to best meet the demands of their intended markets. Zen 6 will very likely be a big commercial success in the markets it is designed to serve.
> Yep. They'll probably use that in the next few years. But that's a 13% clock rate increase combined with an already 30% higher SPECint 1T. Do you think Zen 6 stands a chance in 1T?
There is no magic pixie dust in core design, and all processor designers are held to the limits of the fab technology their designs are built on. The fab technology is limited by the equipment it uses, and that equipment is limited by physics and cost. Notice how no one here has a 10 GHz Tejas processor?
> Okay. Well, it doesn't. I'll give you a prediction: the fastest ARM laptop CPU (probably from Apple) will have about a 42% higher SPECint 2017 1T score than a Zen 6 laptop (whenever it launches).
And when businesses start using SPECint for their workflow, this will be very important 😁
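For what it's worth, if the two figures quoted a couple of posts up (a 13% clock bump on top of an existing ~30% SPECint 1T lead) simply compounded, that would be about 1.13 × 1.30 ≈ 1.47, i.e. ~47%; the 42% prediction here sits a bit below that.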
> I don't think 1T is that much of a design focus for Zen 6, as it has greatly diminishing returns in real-world PC usage.
It may not be a design focus of Zen 6, but 1T integer performance yields the most benefit to the most users.
> There is no magic pixie dust in core design, and all processor designers are held to the limits of the fab technology their designs are built on. The fab technology is limited by the equipment it uses, and that equipment is limited by physics and cost. Notice how no one here has a 10 GHz Tejas processor?
Magic has nothing to do with it. Zen 6 may need magic to push clock rates to even more absurd heights, but Apple doesn't. They have been catching up with AMD's clock rates for the last 4 years. They're not far off Strix Point in clock rate (-600 MHz), yet it's still 13.6 vs 9.9 SPECint 2017 1T, with a similarly absurd performance ratio in GB6 1T and CB R24 1T.
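For concreteness, the quoted scores work out to 13.6 / 9.9 ≈ 1.37 - roughly a 37% 1T lead despite the ~600 MHz clock deficit mentioned above.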
> It may not be a design focus of Zen 6, but 1T integer performance yields the most benefit to the most users.
And outside of a few people here, we're buying interactive workstations or consumer PCs. Zen 6 isn't going to be good for us; it'll be behind the ARM competition for both workstation and laptop. In certain areas of the server space (core spam), it's likely to lose to Darkmont and its derivatives.