You only need to read page 2, which contains the summary, and then click on Multithreading, for example. It's a well-structured document.
Here's the text:
And here's a shorter version in case 3 paragraphs is too much, emphasis mine:
This assumption is basically the result of a near-universal set of beliefs that all culminate in the following -

This is a very good approximation of how current phones work, but not of how many people use their laptops/desktops, especially if you multitask.
A lot of software has components that can run in parallel independently and ones that cannot. This can usually be seen as "a wall" in scaling, beyond which such software no longer scales with more cores/threads.
One example is code compilation, where you can split the work across many processes but eventually have to link everything together. Even though linking was historically a serial process, it has since been parallelized to some degree, so it is not a purely single-threaded step. Still, when comparing 64+ core CPUs, one can see compile times no longer scale much (depending on source code size).
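The "wall" described above is just Amdahl's law. A quick sketch with a made-up serial fraction (the 5% linking share is an illustrative assumption, not a measurement):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical build where linking is 5% of the single-threaded wall time:
for cores in (8, 16, 64, 128):
    print(cores, round(amdahl_speedup(0.05, cores), 1))
```

Even with only 5% serial work, going from 64 to 128 cores gains little - the serial part dominates, which is exactly the plateau you see comparing 64+ core CPUs.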
What am I trying to get at? There are *a lot* of different MT behaviors. Scaling varies from software to software, and it also depends on the data. Modeling just one of them and claiming "only this one represents client workloads" makes the benchmark pretty useless in my eyes.
Speaking of code compilation, GB6 has a subtest that compiles 96 files in parallel. That is not a lot of parallelism compared to, for example, compiling the Linux kernel, Chromium, or LLVM itself. The subtest does appear to work on independent rather than shared data, but the flaw is that it is pretty small. Long compile times are usually complained about when working on big codebases, and reducing the time there is more important than making the 96-file case faster (of course there is a parallelism limit there as well, depending on the source code).
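A rough intuition for why a fixed pool of 96 files caps MT scaling (assuming, as a simplification, roughly equal compile time per file):

```python
import math

def waves(tasks: int, threads: int) -> int:
    """Sequential 'waves' of work when each thread compiles one file at a time."""
    return math.ceil(tasks / threads)

# 96 files: 32 -> 48 threads still helps (3 waves -> 2 waves),
# but 96 and 128 threads finish in the same single wave.
for threads in (32, 48, 96, 128):
    print(threads, waves(96, threads))
```

Past 96 threads the extra hardware simply sits idle, so the subtest cannot distinguish high-core-count parts the way a kernel or LLVM build would.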
I agree that people with strong sentiments about GB viability as a consumer MT benchmark should probably find and use the GB thread.
Going back to Zen 5, have there been any recent hints as to when we might expect the first consumer products with the new arch?
Ask our all-things-Zen overlord @adroc_thurston. This was discussed briefly recently. I said March-April, but I know nothing. He replied "~April'24 or therein. That's probably the desktop one, mobile will be later." So hey, maybe we aren't all that far off!
That's settled it then - Cinebench/Blender/SPEC_rate are all useless. Only Geekbench 6 is valid.
Or watch the latest GN video and despair at how bad the Intel parts are for power efficiency:
View attachment 90738
Zen 5 barely has to iterate to dominate. Intel needs to step it up.
Is that Meteor Lake?
Ah yes, a normal adult with things to do plays Cyberpunk for 8 hours every day.
I'm genuinely shocked that some people manage to do that. Even at the peak of my "yolo" and "fck it, I don't have any responsibilities" phase (aka senior year of HS), I only managed to play Valorant like 6 hours a day max, spread out between the afternoon and like 3 AM, plus weekends, while grinding to Diamond.
No, it was a comparison of desktop chips. Someone bet that a 14700K was more efficient than a 7800X3D. It didn't go well.
It was very weird and quirky but idk why someone who has as much "journalistic integrity" as Steve from Gamers Nexus can't just present data for the sake of presenting data. It's also strange how defensive and "extra" these tech youtubers are on twitter (not just GN) ¯\_(ツ)_/¯
The extra cost? Meh. The extra heat, unbearable.

And I remember being in high school, certainly not playing games for 8 hours a day, but still warming up my room about 10 degrees compared to the others. It was annoying.

I live in the southern part of the US and game on a 12900H; the extra-heat part is def very relatable in the already pretty hot summers lol
I also don't see what is so "defensive" about disputing a nonsense claim. Personally, I like GN - even more so after the LTT debacle.
It's petty. Disputing a nonsense claim is fine, but bringing it up multiple times in a video "in response" to a troll post is a bit much. I mean, it's good entertainment, but... srsly?

But perhaps the most egregious example of this is most of HWUB's Twitter interactions. Responding to trolls is not something that helps their image, IMO.
I actually prefer Intel being dragged through the mud over their power consumption. It should help motivate them.
Maybe they'll have a chip called Ice Lake that sips power soon! /s

You misunderstand. Ice Lake is designed to take a glacier or ice cap and terraform it into a lake.
This is now pure desperation from Intel's side. From AMD being "left out in a rear view mirror" to these pathetic performance and perf/watt numbers from the "game changing" MTL architecture, they are losing the plot.

Won't help when this is what Intel thinks (yes, this is from an official Intel slide):

View attachment 90771