It's GlobalFoundries that claimed the 10% improvement.
These CPUs (Ryzen / Coffee Lake / SKL-X) are all running at their limits. Process changes won't result in much gain, if any. The 10% claim may be realistic for lower-frequency parts such as server or mobile.
The CPU is a solid improvement. Nice job by AMD.
And thanks @The Stilt for the benchmarks. I think you are doing a better job than most review sites.
Their 7nm is essentially an IBM node that came along with the package. IBM has a long history of doing high clock speeds, and doing them well. Of course we won't know for sure until the parts are actually on the market, but it is more than plausible.

Makes me wonder about those 5+ GHz claims on 7nm by GloFo, if it ever shows up in H2 2018/Q1 2019.
A good 2600K was able to do 4.8-5 GHz+ on water cooling. I had one that could bench single-threaded at 5.2 GHz on the stock dinky heatsink. Frequency-wise, Intel has gone nowhere since 32nm.

The IBM chips have very high TDPs to reach those clocks, though.
And max conventional-cooling overclocks only went up by 200 MHz between 14nm+ (KBL) and 14nm++ (CFL).
Sandy Bridge at 32nm could do 4.5 GHz overclocks. The 14nm++ transistors in Coffee Lake are 50-80% better performing, yet that resulted in less than a 15% frequency improvement. The base clock gains are higher, but Intel has just been eating into the overclocking headroom to give us those base increases.
NetBurst chips with a ridiculously high number of pipeline stages were cancelled because at very high clocks the heat would be so concentrated that engineers said the chips would have the thermal density of our sun. Back then, the point where scaling became drastically harder was 4-5 GHz. Nothing has changed.
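The arithmetic behind that wall is the usual first-order CMOS dynamic power relation, P ≈ C·V²·f: clock increases demand voltage increases, so power (and heat concentrated in the same die area) grows much faster than frequency. A minimal sketch, with the effective capacitance and the voltage/frequency pairs invented purely for illustration:

```python
# First-order CMOS dynamic power: P = C_eff * V^2 * f.
# All numbers here are illustrative, not measurements.

C_EFF = 2e-8  # effective switched capacitance in farads (made up)

def dynamic_power(voltage_v: float, freq_hz: float) -> float:
    """Dynamic power in watts for a given core voltage and clock."""
    return C_EFF * voltage_v**2 * freq_hz

# Hypothetical voltage needed at each clock (assumed V/f curve).
for volts, freq in [(1.10, 4.0e9), (1.25, 4.5e9), (1.40, 5.0e9)]:
    print(f"{freq/1e9:.1f} GHz @ {volts:.2f} V -> {dynamic_power(volts, freq):.0f} W")
# ~97 W -> ~141 W -> ~196 W: a 25% clock increase costs roughly 2x
# the power, which is why heat density caps frequency scaling.
```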
So has overclocking become a bug or a feature now? If CPUs have enough sensors to manage themselves more intelligently than manual tweaking can, this is a new era. If someone is buying a CPU *just* to overclock it, when they could get another that does not need to be (but could be), is that a misguided purchase?
Intel's product stack feels so meh and blah after AMD decided to leave all CPUs unlocked and let a mid-tier chipset overclock them.
But if we are Fmax-limited, how much wider can we go? Or rather, how wide are we already that we can't benefit from going wider? (A toy model of this is sketched at the end of this post.)
For desktop use, day-to-day computing... not for specialized workloads.
Have we eliminated all stalls? Can we still gain by reducing latencies? (Although this is not strictly core-related.)
Is it time to switch to new materials? Or would we gain by going VLIW-style?
Or go direct to quantum? We are still some ways off from quantum, though.
The CPU space needed innovation; at least for now we are scaling core-wise. We will hit that wall soon here too.
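On the width question, here is a toy model of issue width versus the instruction-level parallelism the code actually exposes; the ILP histogram below is invented purely for illustration, not taken from any real workload:

```python
# Toy model: IPC achieved by an N-wide issue engine when the program
# only exposes limited instruction-level parallelism per cycle.
# The ILP distribution is invented for illustration.

# fraction of cycles in which N independent ops are available
ilp_distribution = {1: 0.40, 2: 0.30, 3: 0.15, 4: 0.10, 6: 0.05}

def achieved_ipc(issue_width: int) -> float:
    """Average IPC: each cycle retires min(available ILP, width) ops."""
    return sum(frac * min(ilp, issue_width)
               for ilp, frac in ilp_distribution.items())

for width in (2, 4, 6, 8):
    print(f"{width}-wide: IPC = {achieved_ipc(width):.2f}")
# 2-wide: 1.60, 4-wide: 2.05, 6-wide: 2.15, 8-wide: 2.15 --
# once the machine is wider than the ILP the code exposes,
# extra width buys nothing for typical desktop workloads.
```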
We are waiting for gallium nitride chips, perhaps...
There have been many people in the field touting this as the next step.
Alex Lidow claims it is a matter of time.
But I am sure there are many hurdles to overcome.
Right now, GaN RF power transmitters in the GHz range are very common, but that is totally different from billions of tiny MOSFETs crammed together to form a modern CPU.
https://venturebeat.com/2015/04/02/move-over-silicon-gallium-nitride-chips-are-taking-over/
The advantage is that current manufacturing technologies can be used, though.
But it is wait and see...
I do designs on GaN from time to time (RFIC) and I don't see it taking over anytime soon (or ever) in the consumer digital world (i.e. where x86 CPUs and most ARM CPUs live). The Fmax of GaN isn't even really higher than Si at the same node, but it has much higher voltage/thermal tolerances, can have much higher power density, and is more efficient. Of course there are downsides to GaN as well, not the least of which is cost. It is interesting to read his claim that modern GaN-on-Si is cheaper than a standard CMOS process once packaging is included. That has not been my experience at all, at least not when comparing against a standard CMOS process at the same node.
He is probably comparing it against power MOSFETs (e.g. LDMOS), which largely takes the economy of scale away from the silicon side; that economy of scale is a big part of why GaN is more expensive than standard CMOS in the first place, so it is not really a valid comparison for digital processors. The other reason GaN is more expensive is that it is simply difficult to make compared to standard Si (again, a same-node comparison), and yields have only recently (the last 10 years or so, with another corner turned in the last 5) started to look good enough to consider using GaN in lower-cost applications (e.g. automotive).

Even then, I think it will still cost more, but it will be used where GaN's advantages are worth it and where trying to get Si to perform as well would actually cost more, like Lidow says. Additionally, enhancement-mode GaN is even less mature and less supported, and I imagine it would be required for traditional digital design. In the end, GaN can have some great advantages in things such as RF PAs and power converters like he mentions, but a GaN-based CPU is probably not going to happen.
TL;DR GaN is great for high power and power management devices and is successfully expanding into those areas, but is not suitable at this time for CPUs compared to other options and will probably never be.
Interesting.
As an alternative, IBM is doing serious research into carbon sheets. What do you think will happen there?
Usually when IBM takes a direction, we can be sure that the rest of the tech world is going in that direction as well.
IBM Research is way ahead in modelling and in the theoretical understanding of how atoms really behave.
But the thing is, they use huge laboratory equipment to line up atoms into defect-free crystal lattices to get a specific behavior. That is, of course, for theoretical research.
Doesn't ASUS have an option for auto OC on the C7H, at 4.5 GHz for ST and 4.3 GHz for MT?
@The Stilt, you said you had a problem with the TDP rating?
Der8auer measured power per MHz together with voltage, and he found that at 4050 MHz on all cores in CB R15 it will use around 105 W (rough scaling sketch below).
Ryzen is a very power-efficient CPU; I don't know how people conclude that the R7 2700X uses something like 2x more power than the i7 8700K.
Saying that the i5 8400 has a 65 W TDP and uses the same power as an i5 7600 is also a stretch. I locked an i5 8400 to its 65 W TDP on one PC, and even in non-AVX loads it started to throttle to ~3.5 GHz... yeah, you heard right! With AVX at full load it will go way down to 3.2 GHz.
The biggest stretch is expecting an i7 8700 to run at 4.3 GHz within a 65 W TDP. I mainly don't care about power, but a lot of people do, at least in the GPU section. There are a lot of people who will argue about perf/W, but then they run an i7 7700K or i7 8700K at 5 GHz+. Kinda weird.
Back to Ryzen and TDP. My Ryzen goes above its TDP out of the box too. At pure stock it won't, exactly, but with the features the motherboard (C6H) enables and RAM overclocked to 3200 MHz, it uses around ~75 W (HWiNFO64) while clocked at 3.2 GHz (AIDA64 cache/FPU/core load).
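As a rough sanity check on that der8auer figure, you can scale his ~105 W at 4050 MHz all-core point with the first-order P ∝ f·V² rule. Only the 105 W anchor comes from his measurement; the reference voltage and the voltage/frequency pairs below are values I assumed for illustration:

```python
# Scale der8auer's ~105 W @ 4050 MHz all-core (CB R15) data point
# with the first-order rule P ~ f * V^2. The reference voltage and
# the V/f pairs below are assumptions for illustration only.

P_REF, F_REF, V_REF = 105.0, 4050.0, 1.20  # watts, MHz, volts (V assumed)

def scaled_power(freq_mhz: float, volts: float) -> float:
    """Estimated package power relative to the reference point."""
    return P_REF * (freq_mhz / F_REF) * (volts / V_REF) ** 2

for f, v in [(3200, 1.00), (3700, 1.10), (4300, 1.35)]:
    print(f"{f} MHz @ {v:.2f} V -> ~{scaled_power(f, v):.0f} W")
# ~58 W at 3.2 GHz vs ~141 W at 4.3 GHz from the same silicon --
# consistent with stock Ryzen looking efficient while maxed-out
# all-core clocks blow well past the rated TDP.
```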
EDIT:
Anyway, here are the results for the TDP readings (HWiNFO64/AIDA64).

R7 1700 (definitely not a golden sample) - ASUS C6H results (HWiNFO64 readings)

Auto BIOS - Performance Boost ON (3.2 GHz all-core)
Prime95 small FFT:
- 65 W package power
- 68 W SoC+CPU
AIDA64 (cache/FPU/core):
- 75 W package power
- 78 W SoC+CPU

BIOS - Performance Boost OFF (3.0 GHz all-core)
Prime95 small FFT:
- 55 W package power
- 51 W SoC+CPU
AIDA64 (cache/FPU/core):
- 65 W package power
- 50 W SoC+cores(?)
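A quick calc on the package-power readings above (Performance Boost ON = 3.2 GHz all-core, OFF = 3.0 GHz) shows how disproportionately those last 200 MHz cost:

```python
# Package-power readings copied from the R7 1700 / C6H list above.
# Boost ON = 3.2 GHz all-core, boost OFF = 3.0 GHz all-core.
readings = {
    "Prime95 small FFT": {"on": 65.0, "off": 55.0},
    "AIDA64 cache/FPU/core": {"on": 75.0, "off": 65.0},
}

clock_gain = (3.2 - 3.0) / 3.0 * 100  # +6.7% clock

for test, w in readings.items():
    power_gain = (w["on"] - w["off"]) / w["off"] * 100
    print(f"{test}: +{clock_gain:.1f}% clock costs +{power_gain:.1f}% power")
# Prime95: +6.7% clock for +18.2% power; AIDA64: +6.7% for +15.4% --
# power rises much faster than frequency even at these modest clocks.
```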
I would say impressive. Can you compare this to R7 2700X?
I am waiting for 7nm. AMD made a good improvement in one year; as you can see, they are NOT late. They are improving where they should, and I like it.
I don't know how (or with what) der8auer measured those readings, however if they're SMU reported figures then they're not comparable.
My measurements are based on controller telemetry (DCR).
Based on the tests I made today on different samples, the power consumption seems to vary quite a lot (up to 12%) between the specimens.
So power consumption is based on the quality of the silicon?
SIDD (static leakage) mostly, it seems.
With the release of the Polaris GPUs from AMD, GPU-Z gained an option to report ASIC quality.
Since both Ryzen and Polaris are made on the same process, that got me wondering:
Is there such a number for Ryzen as well?
I never really understood what ASIC quality meant.
Was it not: the higher the ASIC quality number, the lower the leakage and the lower the maximum overclock?
Or was it that more leakage (static power consumption) means more overclock?
Maybe I have it mixed up. I am not sure.
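For what it's worth, the usual intuition can be put into a tiny model: total power is a static leakage term plus a dynamic switching term, and a high-leakage die typically has faster transistors, so it clocks higher but pays a standing power cost. All constants below are invented for illustration, not measured Ryzen/Polaris data:

```python
# Toy model of the leakage vs. overclock tradeoff. All constants
# are invented for illustration, not measured Ryzen/Polaris data.
# total power = static (V * I_leak) + dynamic (C_eff * V^2 * f)

C_EFF = 2e-8  # effective switched capacitance, farads (made up)

def total_power(volts: float, freq_hz: float, i_leak_amps: float) -> float:
    static = volts * i_leak_amps          # leakage burns power at idle too
    dynamic = C_EFF * volts**2 * freq_hz  # switching power
    return static + dynamic

# Low-leakage die: efficient, but its transistors top out lower.
# High-leakage die: wasteful at idle, but clocks higher.
print(f"low-leakage die  @ 3.9 GHz: {total_power(1.30, 3.9e9, 5.0):.0f} W")
print(f"high-leakage die @ 4.2 GHz: {total_power(1.30, 4.2e9, 15.0):.0f} W")
# ~138 W vs ~161 W: the leaky die reaches a higher clock but pays a
# standing cost, which is why reports disagree on whether a high or
# low ASIC-quality number is "better" for overclocking.
```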