Question Geekbench 6 released and calibrated against Core i7-12700


okoroezenwa

Member
Dec 22, 2020
66
71
61
I don't know if AMD has AMX support in its CPUs.
Dunno either, though they got similar uplifts in the object identification subtest as shown in this previous thread by @poke01. Maybe due to AMX? 👀

He could've held off the SME changes until the end of the year when M4, Zen 5 and Lunar Lake will all be available.
Why? Only one of those is Arm-based so it won't matter. Also I have a very hard time believing people like you would deal with that in any sensible way.

If there were multiple CPUs with SME being released, I would get that and it wouldn't seem "suspicious".
lol, you'd just claim it was some Arm bias anyway.
 

Hitman928

Diamond Member
Apr 15, 2012
5,622
8,847
136
Dunno either, though they got similar uplifts in the object identification subtest as shown in this previous thread by @poke01. Maybe due to AMX? 👀


Why? Only one of those is Arm-based so it won't matter. Also I have a very hard time believing people like you would deal with that in any sensible way.


lol, you'd just claim it was some Arm bias anyway.

AMD CPUs don't support AMX. They get an uplift as object identification has AVX-VNNI support (AVX512-VNNI is also supported on applicable CPUs). AMX isn't supported on client Intel CPUs either, it's a special accelerator that Intel only puts in Xeons.
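For reference, the core AVX-VNNI instruction (`vpdpbusd`) fuses the unsigned-8-bit × signed-8-bit multiply-accumulate at the heart of int8 inference into a single step per 32-bit lane. A rough pure-Python model of what one lane does (the function name and shape here are purely illustrative, not any real API):

```python
def vpdpbusd_lane(acc: int, a_bytes, b_bytes) -> int:
    """Model one 32-bit lane of vpdpbusd: acc += sum of four u8*s8 products."""
    assert len(a_bytes) == len(b_bytes) == 4
    for u, s in zip(a_bytes, b_bytes):
        assert 0 <= u <= 255 and -128 <= s <= 127  # u8 and s8 operand ranges
        acc += u * s
    return acc

# Four products folded into the accumulator in one "instruction":
# 1*10 + 2*(-10) + 3*10 + 4*(-10) = -20
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 10, -10]))
```

Without VNNI this takes separate multiply, widen, and add steps, which is why an int8-heavy workload like object detection speeds up on CPUs that have it.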
 

poke01

Golden Member
Mar 8, 2022
1,455
1,683
106
It has a USEFUL output. It's not just a useless benchmark. The developer is passionate about optimization. He cares about Pi digits and that's what his optimizations make possible in less and less time.
This is such a double standard, because it undermines your speculation about why SME was added to GB before retail release. So it's okay for one developer to add optimisations before release, but not the other?

Let’s call it Zen-cruncher from now on then.
 
Reactions: Orfosaurio
Jul 27, 2020
18,015
11,740
116
This is such a double standard, because it undermines your speculation about why SME was added to GB before retail release. So it's okay for one developer to add optimisations before release, but not the other?
One is a benchmark developer; the other is an application developer. The first can be bought, and that misleads his users. The second, even if bought, still benefits his users, because their work finishes in LESS time. That's the difference.

A simple "I was sponsored by Apple to include SME in this benchmark" on his blog post would've been enough.

Or if that were not the case, he could put a disclaimer saying that he was NOT sponsored and did it because he always wanted to add SME.

Right now, we have no idea why SME was added so close to M4.
 

TwistedAndy

Member
May 23, 2024
139
104
71
I always felt like Geekbench was trash. Especially GB5, which was insanely, absurdly memory-bound. I remember just tuning my memory increased my GB5 score by 40%, lol.

Geekbench 6 is not that different here either. Let's take a closer look at the technical details of the tests included in Geekbench 6:

1. File Compression
Almost useless. Instead of LZ4 and ZSTD, it makes sense to use deflate (gzip/zip), which is used everywhere on the web and system-wide.
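For what it's worth, a deflate-style workload of the kind suggested here is trivial to sketch with Python's stdlib (`zlib` implements deflate, the codec behind gzip/zip). Timing is omitted; this only shows the round trip a real test would measure:

```python
import zlib

# Deflate round trip: the codec the post argues is more representative
# of web and system-wide use than LZ4/ZSTD.
data = b"the quick brown fox jumps over the lazy dog " * 1000
compressed = zlib.compress(data, level=6)  # typical mid-range setting
restored = zlib.decompress(compressed)

assert restored == data
print(len(data), "->", len(compressed), "bytes")
```

A benchmark version would loop this over a representative corpus and report MB/s for both directions.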

2. Navigation
Useless. Neither Google Maps nor Apple Maps usually performs those calculations on the device. Even in the case of offline navigation, the test isn't representative, because different apps are used.

3. HTML5 Browser
It's not representative. Instead of using an actual headless browser or Node.js, Geekbench uses some libraries to parse and render HTML/CSS. Surprisingly, the most compute-heavy part (JS) is not included.

4. PDF Render
The first useful test!

5. Photo Library
It's a very weird test. Instead of measuring useful things like JPG/PNG/WebP scaling, compression, and decompression, it adds a lot of useless extra steps, like running an image-classification model and storing tags in an SQL database.

6. Developer workloads

- Clang. It's not representative, because Clang is used mostly on Apple OSes and some Unix systems. Technically, it's possible to use Clang on Windows or Linux, but it's not a common scenario: Windows uses MSVC by default, and Linux defaults to GCC.

In general, this benchmark represents compile performance in Xcode on Apple devices and is not relevant to other platforms.

- Text Processing. It looks like a valid benchmark, but it would make more sense to process all those files using Node.js, the JVM, or PHP. That's a more common scenario.

- Asset Compression. I don't work with 3D assets, but it looks valid.

7. Machine Learning Workloads

Despite having ML in the name, it's a useless benchmark. The actual apps that detect objects, blur backgrounds, remove objects, etc., usually use the GPU or NPU for that. These kinds of workloads are also very sensitive to code optimization and to the libraries and frameworks used.

8. Horizon Detection, Photo Filter, and HDR

These tasks also heavily depend on code optimization and the libraries used. The performance in different apps may be very different from the results in this section.

9. Ray Tracer

Useless. In most cases, the GPU handles the ray-tracing calculations and scene rendering. There are cases where it makes sense to do that on the CPU, but they are the exception.

10. Structure from Motion

It looks like a valid benchmark, but I'm not sure how frequently this workload is used.

Summary

Geekbench 6 is not the best benchmark available. Many tests heavily depend on the actual implementation and platform optimization, not to mention SME, AVX-512, etc.

Instead of using open and widely used tools, libraries, and apps like Node.js, Electron, Blender, JVM, etc., it uses some custom implementations for very questionable tasks.

Cinebench R23 and 2024, on the other hand, measure one specific task, but it's based on real commercial software.
 
Reactions: igor_kavinski

poke01

Golden Member
Mar 8, 2022
1,455
1,683
106
A simple "I was sponsored by Apple to include SME in this benchmark" on his blog post would've been enough.
That's the thing, they weren't. Geekbench also supports SVE, which Apple doesn't use. SME was created by ARM, not Apple.
Qualcomm/ARM also use Geekbench and actively promote it; it could have been them giving feedback to GB.
 
Reactions: Orfosaurio

poke01

Golden Member
Mar 8, 2022
1,455
1,683
106
Instead of using open and widely used tools, libraries, and apps like Node.js, Electron, Blender, JVM, etc., it uses some custom implementations for very questionable tasks.
Thing is, Blender also uses AVX-512 in the Cycles test. So yeah, you cannot have a popular benchmark that doesn't use CPU extensions.

What I don't get, @TwistedAndy, is that you knock GB for using CPU extensions, but the benchmarks you prefer extensively use AVX2, AVX-512, SSE, etc.
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,508
4,113
136
Cinebench R23 and 2024, on the other hand, measure one specific task, but it's based on real commercial software.

So what if it's based on commercial software? The one task it measures is something most PC users (let alone smartphone users) never do.

If Microsoft released a benchmark that was based on what Excel does when you recalculate a big spreadsheet would you think that's also great "because it is based on real commercial software"? I bet you'd find some objections to it if it didn't show the results you want to see.
 

Doug S

Platinum Member
Feb 8, 2020
2,508
4,113
136
It has a USEFUL output. It's not just a useless benchmark. The developer is passionate about optimization. He cares about Pi digits and that's what his optimizations make possible in less and less time.

In what world is the digits of pi "useful" output? You can download the already calculated value of pi to far more digits than your PC could calculate in a year, or your lifetime for that matter.
 
Jul 27, 2020
18,015
11,740
116
Qualcomm/ARM also use geekbench and actively promotes it, it could have been them and they gave feedback to GB.
Could be. At best, this could be the reason and the dev was too naive to think that people might blame him for being in cahoots with Apple if he released it early. Or he didn't know when Apple was gonna release M4. But for me (coz I'm the conspiracy theorist type), the simplest explanation is what seems to be the obvious one to me: Apple's dirty money made someone happy and eager to please.
 
Jul 27, 2020
18,015
11,740
116
In what world is the digits of pi "useful" output? You can download the already calculated value of pi to far more digits than your PC could calculate in a year, or your lifetime for that matter.
And how would the user know for sure that the downloaded output is correct? He would generate it himself to be sure.
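Generating the leading digits locally is actually cheap. A short sketch using a well-known unbounded spigot (after Gibbons' 2006 streaming-algorithm paper) is enough to spot-check the prefix of a downloaded file:

```python
from itertools import islice

def pi_digit_stream():
    """Yield decimal digits of pi one at a time (Gibbons-style spigot)."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)  # next digit
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u, j + 1)

# First ten digits, computed locally rather than trusted from a download.
digits = "".join(str(d) for d in islice(pi_digit_stream(), 10))
print(digits)  # 3141592653
```

Of course, verifying millions of digits this way is exactly the compute-heavy task y-cruncher exists for; the sketch only shows that independent verification is possible in principle.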
 
Jul 27, 2020
18,015
11,740
116
If Microsoft released a benchmark that was based on what Excel does when you recalculate a big spreadsheet would you think that's also great "because it is based on real commercial software"? I bet you'd find some objections to it if it didn't show the results you want to see.
Not Microsoft but I use a benchmark sheet from overclock.net forums. It's good because it engages the calculation engine of a real spreadsheet that I installed on my PC.
 

Doug S

Platinum Member
Feb 8, 2020
2,508
4,113
136
Could be. At best, this could be the reason and the dev was too naive to think that people might blame him for being in cahoots with Apple if he released it early. Or he didn't know when Apple was gonna release M4. But for me (coz I'm the conspiracy theorist type), the simplest explanation is what seems to be the obvious one to me: Apple's dirty money made someone happy and eager to please.

Your fixation with and hatred of Apple has made you lose touch with reality. You think Apple is paying off the developer of a benchmark to make M4 look better? If Apple was paying for it, why wouldn't they get some mileage out of that by highlighting M4's GB6 results when they announced iPad Pro?

I suppose you have an excuse for that too, but you're like a flat earther who will concoct ever more wild fantasy explanations - which is what happens when you start with a conclusion and choose your "facts" to get you there.
 
Jul 27, 2020
18,015
11,740
116
why wouldn't they get some mileage out of that by highlighting M4's GB6 results when they announced iPad Pro?
Because then there would be too much finger pointing: "Oh look, Apple touting the GB6 score of a version that just released a month ago! I bet they made it happen with their wads of cash". By remaining silent, they avoid implicating themselves in a potential scandal.

I suppose you have an excuse for that too, but you're like a flat earther who will concoct ever more wild fantasy explanations - which is what happens when you start with a conclusion and choose your "facts" to get you there.
It's not my final conclusion. It's a possible conclusion. Pointing out a possibility shouldn't be met with cries of "oh but you are wrong!".
 

poke01

Golden Member
Mar 8, 2022
1,455
1,683
106
Could be. At best, this could be the reason and the dev was too naive to think that people might blame him for being in cahoots with Apple if he released it early. Or he didn't know when Apple was gonna release M4. But for me (coz I'm the conspiracy theorist type), the simplest explanation is what seems to be the obvious one to me: Apple's dirty money made someone happy and eager to please.
I can prove this theory wrong: GB also added SVE support, something only CPUs from ARM support so far.

ARM also recently published an SME article on their blog. Apple never uses Geekbench to present performance improvements, so until proven otherwise I don't think Apple paid Primate Labs.


Looking at the 6.1 release, did ARM pay Primate Labs to add SVE support, and did AMD pay them too for AVX512-FP16?
It's funny: when something benefits Apple, you immediately assume that Apple paid, but when something benefits AMD or ARM, @TwistedAndy and you stay quiet and don't bring it up.

  • Upgrade to Clang 16: Geekbench 6.1 is built with Clang 16 on all platforms. Geekbench 6.1 also improves the optimization switches used when building Geekbench.
  • Increase workload gap: Geekbench 6.1 increases the workload gap (the pause between workloads) from two seconds to five seconds. The increased workload gap minimizes thermal throttling and reduces run-to-run variability on newer smartphones such as the Samsung Galaxy S23.
  • Introduce support for SVE instructions: Geekbench 6.1 includes SVE implementations of several image processing and machine learning functions.
  • Introduce support for AVX512-FP16 instructions: Geekbench 6.1 includes AVX512-FP16 implementations of several image processing functions.
  • Introduce support for fixed-point math: Geekbench 6.1 introduces fixed-point implementations of several image processing functions. Geekbench uses fixed-point math to implement some image processing functions on systems without FP16 instructions.
  • Improve Multi-Core Performance: Geekbench 6.1 improves the multi-core implementations of the Background Blur and Horizon Detection workloads, especially on high-end desktop processors such as 12- and 16-core AMD Ryzens, AMD Threadrippers, and Intel Xeons.

“Thanks to these changes, Geekbench 6.1 single-core scores are up to 5% higher, and multi-core scores are up to 10% higher than Geekbench 6.0 scores. As a result of these methodological differences, which have a non-trivial impact on scores, we recommend users not compare Geekbench 6.1 scores against Geekbench 6.0 scores.”

Since these improvements also helped AMD and ARM CPUs, I guess it's fine, right? But as soon as Primate Labs increases Apple's score with the SME extension, it's bad and Geekbench is corrupt.
 
Last edited:
Reactions: Orfosaurio