Why are desktop CPUs so slow at improving?


AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
again, you need to know and understand exactly what Geekbench tests, and how, before you can determine what exactly, if anything, Intel is deficient at. Apparently the previous version of Geekbench included SHA1 as a large part of the integer testing; all you need for a huge score there is a small fixed-function unit. Apple had one, Intel didn't. Also, it seems a lot of the tests were extremely cache friendly, so they didn't test the greater memory subsystem at all.
On JavaScript benchmarks like Octane and AnTuTu HTML5, the A11 clearly gave Intel i5s, and even i7s, a run for their money. I tested it personally against my laptop, which runs a Skylake i5-U.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Don't get me wrong, I'll take higher clock speeds, as long as the CPU stays at a 65W TDP and I don't have to give up extra cores to get it, or pay a higher price.
 

JackTheBear

Member
Sep 29, 2016
46
12
41
On JavaScript benchmarks like Octane and AnTuTu HTML5, the A11 clearly gave Intel i5s, and even i7s, a run for their money. I tested it personally against my laptop, which runs a Skylake i5-U.

Running a JavaScript benchmark on your iPhone is not directly comparable to running the same JavaScript benchmark on your laptop. The benchmark is running under two different operating systems that each have their own overhead, and Windows' overhead is MUCH HIGHER than that of iOS or Android.

If you try to run Windows 10 on the A11, it's not going to perform like an i5, and definitely not like a high-end i7. A Snapdragon 835 running Windows 10 gives about the same performance as a 6W Celeron. http://www.pcgamer.com/leaked-benchmarks-of-windows-10-on-arm-are-not-impressive/

The A11 is faster, so maybe it creeps up into low-end i3 performance. Maybe. The point is, you can't easily compare benchmark scores from one platform to another, especially from a very lightweight phone operating system to a very robust desktop/enterprise operating system like Windows.
 

AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
ARM is much more suited to low power solutions and isn't dragging around a 30+ year old architecture, so it's going to have an edge in this niche. That Intel does as well as it actually does in those market segments is a testament to their engineers.

Also, please find me some posts from people who are claiming that SoC performance would stop. I seriously doubt these exist in any quantity or are coming from anyone that the regulars here treat as an expert.
Exactly. Many of the circuits in an x86 CPU are there just to maintain backward compatibility. Even if no user today ever touches them, they are still required to support OSes and code from eras like Windows 98.
Running a JavaScript benchmark on your iPhone is not directly comparable to running the same JavaScript benchmark on your laptop. The benchmark is running under two different operating systems that each have their own overhead, and Windows' overhead is MUCH HIGHER than that of iOS or Android.

If you try to run Windows 10 on the A11, it's not going to perform like an i5, and definitely not like a high-end i7. A Snapdragon 835 running Windows 10 gives about the same performance as a 6W Celeron. http://www.pcgamer.com/leaked-benchmarks-of-windows-10-on-arm-are-not-impressive/

The A11 is faster, so maybe it creeps up into low-end i3 performance. Maybe. The point is, you can't easily compare benchmark scores from one platform to another, especially from a very lightweight phone operating system to a very robust desktop/enterprise operating system like Windows.
Alright, if you wish to compare them like that, we can eliminate the software from the equation altogether and directly compare FLOPS between the two processors. But again, some people will be dissatisfied and claim that FLOPS hardly translate into 100% real-life performance (which is true, even though in hardware terms more FLOPS does mean faster computation per second).

Now, do you want to test hardware by the floating-point operations a processor can perform per second, or do you want to compare processors by how well they run a particular piece of software, regardless of whether it is well coded for a given platform? The latter yields real-world performance, even if it's biased, while the former is an unbiased metric that means little in real-world usage.
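As a rough illustration of the unbiased-metric side of this trade-off, theoretical peak FLOPS follows directly from the hardware specs. The sketch below uses illustrative numbers (a hypothetical 4-core chip with AVX2 FMA), not any specific product:

```python
# Theoretical peak FLOPS = cores * clock * FLOPs per cycle per core.
# FLOPs per cycle depends on SIMD width and FMA support; two 256-bit
# AVX2 FMA units retire 16 double-precision FLOPs per cycle.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

# Hypothetical 4-core, 3.0 GHz CPU with AVX2 FMA:
print(peak_gflops(4, 3.0, 16))  # 192.0 GFLOPS
```

Real code rarely approaches this ceiling, which is exactly the gap between the two metrics described above.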
 
Reactions: whm1974

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
OK, I keep reading that either 5nm, or even as low as 3nm, is the lowest we can go using silicon for semiconductor devices. If 3nm is true, how much more performance per watt, and how many more cores, can we expect from x86?
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
If 3nm is true, how much more performance per watt, and how many more cores, can we expect from x86?
Performance per watt only improves at the sweet spot, so it's only relevant to portable devices; it doesn't affect top (as in unlocked, high-clocked CPUs) performance.

As for the second part of the question: as AMD just showed us, the only limit on how many cores a company can fit into a CPU is how desperate they are.
 
Reactions: frozentundra123456

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Performance per watt only improves at the sweet spot, so it's only relevant to portable devices; it doesn't affect top (as in unlocked, high-clocked CPUs) performance.

As for the second part of the question: as AMD just showed us, the only limit on how many cores a company can fit into a CPU is how desperate they are.
I was thinking of an 8-core CPU at 3.6GHz with a 65W TDP.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If you try to run Windows 10 on the A11, it's not going to perform like an i5, and definitely not like a high-end i7. A Snapdragon 835 running Windows 10 gives about the same performance as a 6W Celeron. http://www.pcgamer.com/leaked-benchmarks-of-windows-10-on-arm-are-not-impressive/

A 1200 single-thread Geekbench score is slower than 2013's Atom, Bay Trail. The existing Atom already scores over 1600, and the just-announced next-generation one gets over 2000.

The low score in this case is due to x86 translation overhead. Native ARM Geekbench gets 2200 on the 835 that was tested.

There is no magic ISA. If it were possible to make a RISC CPU at 35W that performed the same as an 8700K at 91W, it would have been done.

It's possible, if the 91W CPU is made by a team that's less capable than the team making the 35W one. Also, power falls off much faster than performance, so sacrificing just 10-15% of performance may take you from 91W to 35W.

Even if Moore's Law scaling dies completely, there's still the effect of good execution by level-headed management and realistic goal setting.
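For a back-of-the-envelope sense of how power falls faster than performance, dynamic power is commonly approximated as scaling with the cube of frequency (P ∝ f·V², with voltage tracking frequency). The numbers below are purely illustrative:

```python
# Under the cubic approximation, power at a reduced clock is roughly
# base_power * freq_ratio ** 3.
def scaled_power(base_power_w, freq_ratio):
    return base_power_w * freq_ratio ** 3

# A 15% clock reduction from a 91 W operating point:
print(round(scaled_power(91, 0.85), 1))  # 55.9 W
```

Under this crude approximation alone, a 15% cut lands near 56W rather than 35W, so getting all the way down would also lean on voltage headroom, binning, and design choices.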
 

Boris Morozov

Member
Jun 11, 2007
170
13
81
Meh, I’d say my 5930K still has many, many years left in it.

What about GPUs? And video game graphics? Why does Crysis 3, a game from 2013, still look better than modern-day titles? Most rigs will struggle to run that game above 1080p, for god’s sake.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Geekbench scores:

A11 (6c/6t @ 2.4 GHz):
Single-core: 4,200
Multi-core: 10,000

i7-8650U (4c/8t @ 2.11 GHz):
Single-core: 3,900
Multi-core: 13,000

Geekbench alone = chips are the same? Really? I thought everyone knew by now that single artificial benchmark comparisons are junk.
 
Reactions: whm1974

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,425
8,388
126
It's possible, if the 91W CPU is made by a team that's less capable than the team making the 35W one. Also, power falls off much faster than performance, so sacrificing just 10-15% of performance may take you from 91W to 35W.

Even if Moore's Law scaling dies completely, there's still the effect of good execution by level-headed management and realistic goal setting.
but that's not an ISA discussion.


On JavaScript benchmarks like Octane and AnTuTu HTML5, the A11 clearly gave Intel i5s, and even i7s, a run for their money. I tested it personally against my laptop, which runs a Skylake i5-U.
with Apple and its custom ARM cores, I think we have to be especially careful to separate what the general execution units in the cores are capable of from the benefit of vertical integration in a computer company. Apple almost certainly profiles iOS and its workloads carefully and builds the processor with that in mind (and vice versa). I wouldn't be shocked to find that Apple's chips have more fixed-function units in them than anyone else's, in order to accelerate common tasks with a minimum of power consumption.
 
Last edited:

rvborgh

Member
Apr 16, 2014
195
94
101
As I understand it...

CPU performance isn't improving at a great rate these days because there is only so much ILP that can be extracted from the existing code base, no matter how many transistors you throw at the problem, and the marginal cost (in power and transistors) of each additional percent of improvement keeps climbing. For new software, better compilers help processors extract more ILP, but the basic paradigm for a lot of programming is still single-threaded in nature.

Hardware/frequency-wise, power goes up and up, and we can't extract heat fast enough.

So given the above, programmers can take advantage of multiple cores (thread pools, etc.), but it's also more expensive to write and debug multithreaded software in most cases (race conditions, etc.), and there is an upper limit to performance gains along that axis of freedom as well (you cannot remove all single-threadedness, due to true logic dependencies).
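That upper limit is usually formalized as Amdahl's law: if only a fraction p of the work can run in parallel, n cores can never speed the program up by more than 1 / ((1 - p) + p/n). A quick sketch:

```python
# Amdahl's law: best-case speedup on n cores when a fraction p of the
# work is parallelizable and the rest is inherently serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 16 cores give nowhere near 16x:
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14
```

The serial fraction dominates quickly, which is why "just add cores" runs out of road for single-threaded-style code.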
 
Last edited:

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
All-core turbo is 3.3GHz. So it runs all cores, without modification, at 3.3GHz at or under 65W, assuming proper cooling.
I doubt TDP is calculated with all the cores at turbo clocks.

Intel says base clocks and AMD is very vague.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
I doubt TDP is calculated with all the cores at turbo clocks.

Intel says base clocks and AMD is very vague.

The tests I have seen show 100% load on a 1700 at or below 60W. Intel's numbers are based on a full AVX2 load, which will basically only run at base clocks. So a 7700, for example, running at all-core turbo is probably under TDP.

TDP isn't power usage; it's a design target for the cooling requirement. But on desktops it's going to be pretty damn close to power usage. I know the 1800X can be on the other side of it, though, pushing 95-100W at 100% CPU usage.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
The tests I have seen show 100% load on a 1700 at or below 60W. Intel's numbers are based on a full AVX2 load, which will basically only run at base clocks. So a 7700, for example, running at all-core turbo is probably under TDP.

TDP isn't power usage; it's a design target for the cooling requirement. But on desktops it's going to be pretty damn close to power usage. I know the 1800X can be on the other side of it, though, pushing 95-100W at 100% CPU usage.
What are the clocks at 100% load for the 1700? I have seen the power figures somewhere, but I do not recall the site listing the clock speeds. Tom's shows 82 watts for the torture loop but doesn't give the clocks. Reviewers don't seem to list the clocks at a given power-consumption level for the desktop chips they review, or whether the clocks are sustained.

Does the i5-8400 actually drop to base clocks with an AVX2 load, for example?

That seems to be a mobile-chip thing.
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
Thermal design power
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload.
The TDP is typically not the largest amount of heat the CPU could ever generate (peak power), such as by running a power virus, but rather the maximum amount of heat it would generate when running "real applications." This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, and without requiring a cooling system sized for the maximum theoretical power (which would cost more, in exchange for extra headroom for processing power).

Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared. For example, while a processor with a TDP of 100 W will almost certainly use more power at full load than a processor with a 10 W TDP from the same manufacturer, it may or may not use more power than a processor from a different manufacturer that has a 90 W TDP. Additionally, TDPs are often specified for families of processors, with the low-end models usually using significantly less power than those at the high end of the family.

The dynamic power consumed by a switching circuit is approximately proportional to the square of the voltage:[7]

P = C × f × V²

where C is capacitance, f is frequency, and V is voltage.
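To make the quadratic voltage dependence concrete, here is a small sketch with illustrative (not measured) values:

```python
# Dynamic switching power: P = C * f * V^2
# (capacitance in farads, frequency in hertz, voltage in volts).
def dynamic_power(c, f, v):
    return c * f * v ** 2

# Halving the voltage at a fixed clock cuts dynamic power by 4x:
p_full = dynamic_power(1e-9, 3.6e9, 1.2)
p_half = dynamic_power(1e-9, 3.6e9, 0.6)
print(round(p_full / p_half, 6))  # 4.0
```

This is why undervolting is such an effective lever for mobile parts: frequency buys power savings linearly, voltage quadratically.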

There is no set standard for TDP, and it depends entirely on what the manufacturer decides constitutes a XXW TDP workload.

Intel's definition of TDP from its product specification:
"Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload."
Key points:
Average Power (not max power)
Dissipated at BASE FREQUENCY (not turbo frequency)
and the most relevant, INTEL DEFINED WORKLOAD
Note: high-complexity doesn't involve AVX workloads!
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
What are the clocks at 100% load for the 1700? I have seen the power figures somewhere, but I do not recall the site listing the clock speeds. Tom's shows 82 watts for the torture loop, but doesn't give the clocks. Reviewers don't seem to list the clocks at a given power consumption level for the desktop chips they review, or whether the clocks are sustained.
Does the I5-8400 actually drop to base clocks with an AVX2 load, for example?

That seems to be a mobile chip thing.

One Review I saw from Guru had Package power at 57w for 3.2 clock on a 98% load.

The other problems I have with reviewers is I don't know how they are isolating CPU power usage. Does the Tomshardware stuff leave motherboard and therefore southbridge power possibly even NVME power usage. They are small but they add up. But I haven't seen anything definitive that states that a 1700 uses more than 65w on the regular. Whereas I believe the 1800x does use more than 90 (but not the 1700x).
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Thermal design power


There is no set standard for TDP, and it depends entirely on what the manufacturer decides constitutes a XXW TDP workload.

Intel's definition of TDP from its product specification:
"Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload."
Key points:
Average Power (not max power)
Dissipated at BASE FREQUENCY (not turbo frequency)
and the most relevant, INTEL DEFINED WORKLOAD
Note: high-complexity doesn't involve AVX workloads!
Yes, we have discussed the definitions of TDP used by AMD and Intel many times.

I can't ever recall seeing Intel desktop chips run all cores at base frequency.

We don't know what loads AMD or Intel use to set the TDP.
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
Yes, we have discussed the definitions of TDP used by AMD and Intel many times.

If Intel desktop chips ever run all cores at base frequency, I can't ever recall seeing it happen.

We don't know what loads AMD or Intel use to set the TDP.

It would be easier to measure power consumption per application, which can be done at the EPS connector. Here are a few examples from GamersNexus.
Note: the i9s are delidded, with liquid metal, using a Kraken X62 rad.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
but that's not an ISA discussion.

Of course that statement is right.

The real world is not a vacuum. It's not a benchmark where you can normalize things. It's never x86 versus ARM, but Intel versus ARM. Things will never be compared as equally as people expect, even in a hypothetical scenario where ARM "wins".
 

gdansk

Platinum Member
Feb 8, 2011
2,492
3,397
136
One contributing factor is that there's less competition. There are only two real x86 core designers, Intel and AMD, and only one with any real R&D expenditure. There is a boatload of ARM core designers: ARM, Apple, Cavium, Nvidia, Samsung, Qualcomm, and more, and a lot of them are investing significant amounts of money in core designs.

Some caveats: prefer SPECint/fp to Geekbench. And I'm not seeing anyone measuring the actual power draw of the A11 during those tests.
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
One contributing factor is that there's less competition. There are only two real x86 core designers, Intel and AMD, and only one with any real R&D expenditure. There is a boatload of ARM core designers: ARM, Apple, Cavium, Nvidia, Samsung, Qualcomm, and more, and a lot of them are investing significant amounts of money in core designs.

Some caveats: prefer SPECint/fp to Geekbench. And I'm not seeing anyone measuring the actual power draw of the A11 during those tests.

I look at it like consumer electronics. How many TV makers are there out there? Black Friday had 55" 4K smart TVs with options galore on sale for $289! Intel cut the throat of its only competition, AMD, with anti-competitive practices, and was only slapped on the wrist. We as a nation didn't and don't care enough to say or do anything about it. Ten years of quad cores and ~4 generations of rebranded CPUs, and they still cost the same as they did 10 years ago, despite massive cost savings from shrinking the process node. We are blind fools.
 
Reactions: trparky