Question Geekbench 6 released and calibrated against Core i7-12700


TwistedAndy

Member
May 23, 2024
68
47
46
Why are you guys arguing with him? He clearly wants a benchmark to tell him what he wants to hear: that Intel CPUs are the best. Any benchmarks that show ARM or AMD CPUs beating Intel are biased. The ones that show Intel winning are legit. You won't convince someone like that with logic; it is a waste of time.

Which benchmarks should we use to compare the CPU performance for the software development stack (Node, PHP, SQL, etc.)?

I want to see how ARM crushes Intel while running PHP. PHPBench, for example, shows pretty clear results. Maybe Python results are different? Nope. What about Ruby? Same story.

There's no correlation with Geekbench scores.
 
Last edited:

xiewe3wq

Junior Member
May 31, 2024
8
16
36
Which benchmarks should we use to compare the CPU performance for the software development stack (Node, PHP, SQL, etc.)?

I want to see how ARM crushes Intel while running PHP. PHPBench, for example, shows pretty clear results. Maybe Python results are different? Nope. What about Ruby? Same story.

There's no correlation with Geekbench scores.
Geekbench 6 is a general-purpose consumer benchmark. Software development is one of the subtests that get taken into account. If you are only interested in software development, then you can simply look at the Clang subtest.

And if you claim that the clang subtest has no correlation with software development then it's clear that you are arguing in bad faith and there is no point in continuing this discussion.
 

TwistedAndy

Member
May 23, 2024
68
47
46
Geekbench 6 is a general-purpose consumer benchmark. Software development is one of the subtests that get taken into account. If you are only interested in software development, then you can simply look at the Clang subtest.

And if you claim that the clang subtest has no correlation with software development then it's clear that you are arguing in bad faith and there is no point in continuing this discussion.

The Clang test represents the performance of the Clang compiler, which is used for C, C++, etc.

However, development is not limited to compiled C-family languages. Actually, most of the web uses PHP, Ruby, Python, Node, etc. The performance there is very different from Clang's.

For example, the Core i7-12700H is 56% faster than the Apple M2 in single-core PHPBench (1,278,571 vs 816,455). In Python (PyBench), the situation is the same. But if we take Clang, the Apple M2 is 51% faster than the 12700H (3372 vs 2233).

We get opposite results for the same CPUs: in one benchmark, the Apple M2 is about 50% faster; in the other two, it is about 50% slower.

If we take the Node.js V8 Web Tooling Benchmark, it will show a similar performance between Apple M2 and 12700H, which also does not correlate with a 50% difference in Clang scores.
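
For reference, here's a quick sanity check of those ratios in Python (a minimal sketch; the inputs are just the single-core figures quoted above):

Python:
```
# Sanity check of the single-core figures quoted above (higher is better).
scores = {
    "PHPBench":  {"i7-12700H": 1_278_571, "Apple M2": 816_455},
    "GB6 Clang": {"i7-12700H": 2_233,     "Apple M2": 3_372},
}

for bench, result in scores.items():
    fast = max(result, key=result.get)
    slow = min(result, key=result.get)
    lead = (result[fast] / result[slow] - 1) * 100
    print(f"{bench}: {fast} leads {slow} by {lead:.1f}%")

# PHPBench:  i7-12700H leads Apple M2 by 56.6%
# GB6 Clang: Apple M2 leads i7-12700H by 51.0%
```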

So why should we consider Geekbench to be a good test suite for developers?
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,527
8,601
136
The Clang test represents the performance of the Clang compiler, which is used for C, C++, etc.

However, development is not limited to compiled C-family languages. Actually, most of the web uses PHP, Ruby, Python, Node, etc. The performance there is very different from Clang's.

For example, the Core i7-12700H is 56% faster than the Apple M2 in single-core PHPBench (1,278,571 vs 816,455). In Python (PyBench), the situation is the same. But if we take Clang, the Apple M2 is 51% faster than the 12700H (3372 vs 2233).

We get opposite results for the same CPUs: in one benchmark, the Apple M2 is about 50% faster; in the other two, it is about 50% slower.

If we take the Node.js V8 Web Tooling Benchmark, it will show a similar performance between Apple M2 and 12700H, which also does not correlate with a 50% difference in Clang scores.

So why should we consider Geekbench to be a good test suite for developers?

Are you getting the M2 scores from openbenchmarking.org?
 

TwistedAndy

Member
May 23, 2024
68
47
46
Are you getting the M2 scores from openbenchmarking.org?
In the case of PHPBench, PyBench, Optcarrot, and the Node.js V8 Web Tooling Benchmark, I used the data from the openbenchmarking.org database. You may find links in my previous posts.

There are 4 compatible public PHPBench results for Apple M2 with an average of 818,994 +/- 18,810 and 21 results for 12700H with 1,278,571 +/- 34,956. The dispersion is pretty small.

Those numbers are in line with the ones in the Phoronix reviews on Apple M1 and M2. For example, Apple M1 has achieved 595,000 in PHPBench (link). Apple M2 is 37% faster than M1.

If we take Geekbench, the Clang results are among the weirdest:



It looks like Apple has pushed some additional optimizations to Clang to make it work fast on their chips. Actually, they use that for Xcode and it makes sense.

So, ultimately, the Clang test uses highly optimized code for Apple and the regular one for Intel. That's why we have 50% more performance.

That's totally fine. Intel also did that with many other libraries, including their C/C++ compiler. So, if you develop in C/C++ on the x86 platform, it makes sense to use the Intel compiler because it's much faster.

In the case of PHP, Ruby, Python, and many others, we have the reverse situation when we compare code optimized for Intel and AMD with generic code for ARM and Apple Silicon. As a result, Intel and AMD show much better performance. And that's fine.

So, if you are a developer, it makes sense to check the actual benchmarks for the stack that you're using. Geekbench does not represent the actual performance here.
 
Last edited:
Reactions: Hitman928

Doug S

Platinum Member
Feb 8, 2020
2,430
3,932
136
Wow the copium you have to inject into your veins to come up with an excuse why Intel really is faster than Apple despite all the facts being against you is hilarious. Intel contributes massively to open source compilers, but don't let that get in the way of the story you have to tell yourself.
 

TwistedAndy

Member
May 23, 2024
68
47
46
Wow the copium you have to inject into your veins to come up with an excuse why Intel really is faster than Apple despite all the facts being against you is hilarious. Intel contributes massively to open source compilers, but don't let that get in the way of the story you have to tell yourself.

Could you provide some facts to prove that Apple is faster than AMD and Intel in PHP, Python, Ruby, etc.?

Now, we have the case of Geekbench v6 using the Clang compiler, which was optimized by Apple for Xcode. Clang was developed and maintained mostly by Apple. And now it's a part of Geekbench.

So, it's no surprise that it shows significantly better results on Apple Silicon. We literally have 51% more performance compared to the 12% average difference.

Intel has a compiler called Intel C++ Compiler. I don't see any reason why Intel should contribute to Clang, the main compiler of macOS. Apple is not using Intel CPUs.
 
Last edited:

SpudLobby

Senior member
May 18, 2022
926
623
106
Wow the copium you have to inject into your veins to come up with an excuse why Intel really is faster than Apple despite all the facts being against you is hilarious. Intel contributes massively to open source compilers, but don't let that get in the way of the story you have to tell yourself.
Andy is completely dishonest about this. Like, can’t sleep at night thinking about Apple’s performance.
 

TwistedAndy

Member
May 23, 2024
68
47
46
Andy is completely dishonest about this. Like, can’t sleep at night thinking about Apple’s performance.
And where exactly am I dishonest?

Apple created Clang and actively maintains it because it is used in their products. Intel is working on its own compiler and stopped contributing to Clang a long time ago, because Apple switched to Arm. As for Windows, it has MSVC.

And now we have that test in Geekbench to measure performance for the development tasks. Obviously, those results do not correlate with benchmarks for other development stacks. And the difference is huge.

Actually, it's the same problem we have with Cinebench R23, but with even more misleading results.
 

poke01

Golden Member
Mar 8, 2022
1,243
1,412
106
Could you provide some facts to prove that Apple is faster than AMD and Intel in PHP, Python, Ruby, etc.?
Sure, here's something not from Openbenchmarking.org or Phoronix but from real-world users.
From here:
Rust:
AMD 7950X (16 P cores), 16 threads (SMT off) and Linux:

cargo build: `9.74s`
cargo build --release: `12.18s`

M3 Max, 16 cores (12 P + 4 E), 12 threads:

cargo build: `12.50s`
cargo build --release: `15.10s`

See, Apple does an excellent job. Keep in mind the 7950X has much higher clocks and uses a lot more power.

You asked for Python, so I will show a real-world test:
Both the M1 Air and the 12600K are on Python 3.11 in VS Code. The i5 system is on Windows.

26.1 sec for M1
48.4 sec for 12600k

This is why I don't like phoronix tests, they are not the only source of truth.

Now, we have the case of Geekbench v6 using the Clang compiler, which was optimized by Apple for Xcode. Actually, Clang was developed by Apple.
You can optimize Clang for different operating systems. Apple does it for macOS, because they own the OS.
Now, we have the case of Geekbench v6 using the Clang compiler, which was optimized by Apple for Xcode. Clang was developed and maintained mostly by Apple. And now it's a part of Geekbench.
No, it's not mostly maintained by Apple. Are you even a developer?

You have no idea of Clang's history, do you? Where it's used and such? Android uses Clang as its only compiler, and I am 100% sure Google contributes to Clang. Firefox and Chrome are built using Clang on Windows. There is a reason it's part of Geekbench. OpenBSD uses Clang as its default compiler on x86-64.
Intel has a compiler called Intel C++ Compiler. I don't see any reason why Intel should contribute to Clang, the main compiler of macOS. Apple is not using Intel CPUs.
Intel does contribute to Clang. So what if Apple moved on from using Intel CPUs? Intel still needs to support Clang, and it is also used heavily in non-Apple operating systems.


You can't admit that Apple cores have the highest IPC right now, something the rest of the industry, be it Qualcomm, Intel, or AMD, agrees with. None of them would show a 1T benchmark normalised for clocks against Apple, because that would reveal Apple's massive lead, unless they could beat Apple.
 

poke01

Golden Member
Mar 8, 2022
1,243
1,412
106
Apple created Clang and actively maintains it because it is used in their products. Intel is working on its own compiler and stopped contributing to Clang a long time ago, because Apple switched to Arm. As for Windows, it has MSVC.
Why do you dig your own hole?


"The LLVM open source project is a collection of modular and reusable compiler and toolchain technologies supporting multiple processor architectures and programming languages. The Clang open source project provides a C/C++ frontend supporting the latest language standards for the LLVM project. LLVM, including Clang, are maintained by a large and very active development community.

There are many benefits in adopting LLVM, but let’s start with faster build times. Clang is fast. We all can appreciate that! We measured a 14% reduction in build times when using the Intel C/C++ compiler included in the Intel oneAPI 2021.3 toolkits. In addition to helping reduce build times, adoption of Clang has allowed us to contribute to, and benefit from, community efforts to support the latest C++ language standards."
 

TwistedAndy

Member
May 23, 2024
68
47
46
Intel contributes to their own oneAPI compiler based on LLVM. Obviously, the generic Clang/LLVM compiler does not include optimizations made within the oneAPI project.

As for Google, they withdrew from active participation in the Clang project a few years ago after the infamous ABI vote.

As for the development tests, it's hard to do the measurements properly and get consistent results. That's why we have packages like PHPBench, PyBench, and others. Also, that's why the openbenchmarking.org website provides a basic measurement analysis with dispersion calculation and data filtering, so those results look less ugly than Geekbench's.

For example, in the case of PHP you need to ensure that you are running the tests with a normalized PHP configuration; that's one of the things PHPBench does. If you just run a random script on two machines, the results may be meaningless simply because of a configuration difference: for example, JIT and OPcache may be enabled on one machine and disabled on the other, and that alone is enough to throw all your results in the trash.
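
To make that concrete, here is a small sketch of the kind of check I mean (my own illustration, not part of PHPBench; it assumes the `php` CLI is on the PATH and uses the standard OPcache ini keys):

Python:
```
# Hypothetical helper: dump the PHP settings that most affect benchmark
# results, so two machines can be verified to have the same configuration.
import subprocess

keys = ["opcache.enable", "opcache.enable_cli", "opcache.jit", "opcache.jit_buffer_size"]

for key in keys:
    out = subprocess.run(
        ["php", "-r", f"var_export(ini_get('{key}'));"],  # assumes php is installed
        capture_output=True, text=True, check=True,
    )
    print(f"{key} = {out.stdout.strip()}")
```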

And yes, you can't extrapolate results for PHP or Python to compiled languages like Rust, C, etc. They are entirely different.

If we take Rust, there are also a few different compiler options with a plethora of different flags and settings.

In general, cross-platform benchmarks are very hard to perform correctly. It's a big challenge to get consistent results on different machines on the same platform, especially if you want to measure the performance of PHP, Python, Node, etc. You need to ensure that the settings are correct, that the caches are cleared, and even check the way you measure time.

And here we have Geekbench with Clang optimized for certain platforms, issues with multi-core benchmarks, issues with SME, AVX-VNNI, AMX, etc., which do not make the comparison more objective.

Actually, if you install PHP, Composer, and Laravel, then open a browser and measure TTFB a dozen times, you will get much more realistic performance results than Geekbench, just because the code is the same, the caches are empty, and the PHP settings are at their defaults.
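
And if you roll your own measurement, the minimum discipline looks something like this (a sketch with a placeholder workload, not any particular benchmark suite): warm up, repeat, report a median with dispersion, and record the environment next to the numbers.

Python:
```
# Minimal measurement discipline: warmup runs, repeated samples,
# median plus dispersion, and the environment recorded with the result.
import platform
import statistics
import time

def workload():
    # Placeholder CPU-bound task; substitute whatever you actually want to measure.
    return sum(i * i for i in range(200_000))

def bench(fn, warmup=3, repeats=10):
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution clock
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), statistics.stdev(samples)

median, spread = bench(workload)
print(platform.python_version(), platform.system(), platform.machine())
print(f"median {median * 1000:.2f} ms, stdev {spread * 1000:.2f} ms")
```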
 
Last edited:

poke01

Golden Member
Mar 8, 2022
1,243
1,412
106
And here we have Geekbench with Clang optimized for certain platforms, issues with multi-core benchmarks, issues with SME, AVX-VNNI, AMX, etc., which do not make the comparison more objective.
No one is disputing that. It's just that you can't pick Cinebench R23 as your preferred benchmark when it's worse than Geekbench 6 for judging the performance of a core.

At least you can look at the subtests for Geekbench to make a proper evaluation.


In general, cross-platform benchmarks are very hard to perform correctly. It's a big challenge to get consistent results on different machines on the same platform, especially if you want to measure the performance of PHP, Python, Node, etc. You need to ensure that the settings are correct, that the caches are cleared, and even check the way you measure time.
The industry settled on the SPEC suite as a standard to measure performance. It's far from perfect, but it's a much better representation than most tests on Openbenchmarking.org, because those are not optimised for AArch64.

That's why we have packages like PHPBench, PyBench, and others. Also, that's why the openbenchmarking.org website provides a basic measurement analysis with dispersion calculation and data filtering, so those results look less ugly than Geekbench's.

“Overall it was a very interesting benchmark battle... In some workloads the Apple M2 on Asahi Linux fell behind due to possibly not being as well optimized for AArch64, but in other cases the outcome was incredible.”

This is what Phoronix had to say about the PyBench result.
That's why I say the Openbenchmarking.org and Phoronix tests are not properly optimised for ARM and are not an indicator of the performance of Apple/ARM cores.
 

Nothingness

Platinum Member
Jul 3, 2013
2,668
1,226
136
Searching for "apple python benchmark" shows hits where Apple is faster than Intel for Python benchmarks. That doesn't mean Python is as good on Arm platforms as on x86, but it surely has improved since early benchmarks went out (see this Arm blog post).

So basically we have the typical case which demonstrates why OpenBenchmarking is not reliable: results often rely on old versions of software which were not properly optimised for AArch64 some years ago.
 

TwistedAndy

Member
May 23, 2024
68
47
46
So basically we have the typical case which demonstrates why OpenBenchmarking is not reliable: results often rely on old versions of software which were not properly optimised for AArch64 some years ago.

In the case of Python, it heavily depends on the environment and settings.

Here's a good recent example: M1 Max vs Intel Core i9 Python Race | XPS 15 2022. Apple M1 Max finished the Python test in 35 seconds, but Dell XPS 15 finished the same test in 57 seconds on Windows, 40 seconds on WSL, and 29 seconds on Linux.

As for Geekbench, it also depends on the operating system. Usually, you can get 5-10% better results on Linux/Unix than on Windows.

And feel free to choose the results you like.
 

TwistedAndy

Member
May 23, 2024
68
47
46
I've found an interesting statement from Intel regarding LLVM support in the official GitHub repo made a few years ago:

DISCONTINUATION OF PROJECT

This project will no longer be maintained by Intel.

Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.

Intel no longer accepts patches to this project.

If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

As for AMD, the last patch for Zen 4 support was sent to LLVM two years ago.

It looks like both companies (Intel and AMD) are focused mostly on GCC and MSVC for their CPUs. That makes sense because those are the default compilers for many popular Linux distros and Windows.

LLVM is supported by Apple, Qualcomm, Google, and other companies producing ARM chips.
 

Nothingness

Platinum Member
Jul 3, 2013
2,668
1,226
136
I've found an interesting statement from Intel regarding LLVM support in the official GitHub repo made a few years ago:



As for AMD, the last patch for Zen 4 support was sent to LLVM two years ago.

It looks like both companies (Intel and AMD) are focused mostly on GCC and MSVC for their CPUs. That makes sense because those are the default compilers for many popular Linux distros and Windows.

LLVM is supported by Apple, Qualcomm, Google, and other companies producing ARM chips.
Intel's own compiler is based on LLVM. This means either they will change it (doubtful) or they'll keep their changes internal. Get ready to look at suspect fluctuations in official SPEC results.

Or more simply: they send patches directly to the official project

EDIT: Yeah that's likely. They contribute directly to mainline, it's their project that is closed (and has been so for 2 years it seems).
 

TwistedAndy

Member
May 23, 2024
68
47
46
Or more simply: they send patches directly to the official project

Intel decided to go a different route. They forked the official LLVM repository, pulled changes from there, implemented all the customizations, and called it the oneAPI DPC++ compiler. From time to time, an Intel employee commits back to LLVM, but those contributions are very small and are probably a private initiative.

Most of the C/C++ software I use on Windows is compiled using MSVC; on Linux it is GCC. Technically, it's possible to use Clang, but usually it's not worth it.

That explains why Intel and AMD continue to actively support GCC. They want to sell servers.

As for Geekbench 6, it was compiled using Clang for all the platforms. Unfortunately, there's no information about the optimization options which were used. Also, unlike Geekbench 5, there is no information on which target architecture was used for the Clang test. They probably switched from ARM to a platform-specific architecture, which makes cross-platform comparison less objective.

In general, I think that Geekbench 5 is a far more objective test than version 6. At least there's a correlation between SPEC2017 and Geekbench 5.
 

Hitman928

Diamond Member
Apr 15, 2012
5,527
8,601
136
Intel's own compiler is based on LLVM. This means either they will change it (doubtful) or they'll keep their changes internal. Get ready to look at suspect fluctuations in official SPEC results.

Or more simply: they send patches directly to the official project

EDIT: Yeah that's likely. They contribute directly to mainline, it's their project that is closed (and has been so for 2 years it seems).

AMD’s compiler is now LLVM based as well.
 
Reactions: Nothingness

Nothingness

Platinum Member
Jul 3, 2013
2,668
1,226
136
Intel decided to go a different route. They forked the official LLVM repository, pulled changes from there, implemented all the customizations, and called it the oneAPI DPC++ compiler. From time to time, an Intel employee commits back to LLVM, but those contributions are very small and are probably a private initiative.
In May and June of this year there are 109 commits by 20 different people from Intel. You should look at the dev branch which is where development occurs.

Most of the C/C++ software I use on Windows is compiled using MSVC; on Linux it is GCC. Technically, it's possible to use Clang, but usually it's not worth it.

That explains why Intel and AMD continue to actively support GCC. They want to sell servers.
Then they should work harder as only 8 people from Intel pushed 89 commits in May and June.

Of course the number of commits doesn't tell the whole story.
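
For anyone who wants to check figures like these themselves, here is a rough sketch of how such a count can be done (my own heuristic, based on @intel.com author addresses in a local llvm-project clone and my reading of "May and June" as 2024; not necessarily how the numbers above were produced):

Python:
```
# Count commits whose author e-mail ends in @intel.com over a date range.
# Assumes git is installed and ./llvm-project is a local clone.
import subprocess

emails = subprocess.run(
    ["git", "-C", "llvm-project", "log",
     "--since=2024-05-01", "--until=2024-07-01",
     "--pretty=%ae"],                      # one author e-mail per commit
    capture_output=True, text=True, check=True,
).stdout.split()

intel = [addr for addr in emails if addr.endswith("@intel.com")]
print(f"{len(intel)} commits by {len(set(intel))} distinct Intel addresses")
```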

As for Geekbench 6, it was compiled using Clang for all the platforms. Unfortunately, there's no information about the optimization options which were used. Also, unlike Geekbench 5, there is no information on which target architecture was used for the Clang test. They probably switched from ARM to a platform-specific architecture, which makes cross-platform comparison less objective.

In general, I think that Geekbench 5 is a far more objective test than version 6. At least there's a correlation between SPEC2017 and Geekbench 5.
You'll never learn.
 

TwistedAndy

Member
May 23, 2024
68
47
46
In May and June of this year there are 109 commits by 20 different people from Intel. You should look at the dev branch which is where development occurs.

There's no dev branch in the LLVM repository. They use one main branch and the release branches. Maybe we are talking about completely different repositories.

You'll never learn.

Yep, people use Geekbench 6 for fun, not to get some objective measurements.

AMD’s compiler is now LLVM based as well.

It's even less popular than Intel's.
 
Last edited: