AMD Ryzen Gen 2 Set For Q2 2018


itsmydamnation

Platinum Member
Feb 6, 2011
2,868
3,419
136
I think it's looking pretty good overall. Slight improvement in single-core and a massive improvement in multi-core scores, probably due to reduced L3 and inter-CCX latencies. That should transfer directly to gaming results.

I would just look at the int and FP scores; crypto and memory just dilute the meaningful results. In that case we are looking at:

int 3808 vs 4009
FP 4025 vs 4188

So we are looking at ~5% "IPC" based on
https://browser.geekbench.com/v4/cpu/compare/7144831?baseline=6690641

Add a 200 MHz clock bump (to base and XFR) and we are at ~10% perf improvement...
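As a sanity check, the percentages above can be reproduced with a few lines of Python (a sketch; the 3.6 GHz base clock used for the 200 MHz bump is my assumption, not something from the Geekbench page):

```python
# Geekbench 4 scores from the linked comparison.
int_old, int_new = 3808, 4009
fp_old, fp_new = 4025, 4188

int_gain = int_new / int_old - 1   # ~5.3%
fp_gain = fp_new / fp_old - 1      # ~4.0%

# Hypothetical 200 MHz bump on an assumed 3.6 GHz base clock.
clock_gain = 0.2 / 3.6             # ~5.6%

# Treat the "IPC" gain and clock gain as multiplicative for total perf.
total_gain = (1 + int_gain) * (1 + clock_gain) - 1
print(f"int: {int_gain:.1%}, FP: {fp_gain:.1%}, with clock bump: {total_gain:.1%}")
```

With those assumptions the combined figure lands a little above 10%, consistent with the rough estimate.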

Ryzen 2 made at TSMC. 64 Core Epyc processor.

I don't even need to watch the video to know that this is going to be wrong...
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I don't even need to watch the video to know that this is going to be wrong...

Exactly. Gary Patton, CTO of GF, confirmed that 7LP will be ramping in early 2019 with a couple of key partners. It's very clear AMD is one of those partners and the first product will be 7nm Rome in H1 2019. Ryzen 3000 will follow next and will arrive towards mid-2019. The way I see it, TSMC will manufacture 7nm Vega and Navi, and GF will manufacture 7nm Rome and Ryzen in 2019 and 7nm Ryzen APUs in H1 2020.
 

Hitman928

Diamond Member
Apr 15, 2012
5,622
8,847
136
For structure. Can't leave those spaces empty, then expect the lid to stay straight when the cooler is fastened down.

That's what it is in Threadripper. In the video I think he's saying that next-gen Epyc would have a die dedicated to the IO/memory controller, and then the other 4 dies would just have computational units and cache. I think it's crazy, but I'm also not entirely sure that's what he was saying.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
He's essentially saying AMD will go further in assembling chips from multiple dies that are no longer self-contained CPUs the way the current Zeppelin die is. Essentially similar to what Intel announced wanting to do with EMIB: mixing and matching different IPs.

 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
He's essentially saying AMD will go further in assembling chips from multiple dies that are no longer self-contained CPUs the way the current Zeppelin die is. Essentially similar to what Intel announced wanting to do with EMIB: mixing and matching different IPs.


Two things. First, if Intel is able to pull off EMIB (and Kaby-G isn't the EMIB pipecleaner that Intel has been suggesting), it will be the better solution in the long run, so I hope AMD is heading towards something like that. Second, I don't see Zen 2 being what Adored suggested. Personally it seems way too soon for that kind of departure. On top of that it would be really awkward for desktop usage. On Epyc, adding the control chip would cost 25% more on top of the "core chips". Not bad. On Ryzen 3k it would mean doubling the cost of production. Also, they would probably have to design a control chip for each version of Ryzen. They could probably get away with just disabling features from the top down, but then your highest-volume option is a chip that is 2-4 times larger than it needs to be.

To me, if AMD is going to do any 5-chip Epyc, Adored is on the right track but seriously off. Instead of a control chip I could see AMD doing an off-die L4, specifically for Epyc. So a 5th chip with something like 128-256 MB that would be similar in die size to the other 4 dies. Something they wouldn't need to provide for Threadripper and Ryzen.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Two things. First, if Intel is able to pull off EMIB (and Kaby-G isn't the EMIB pipecleaner that Intel has been suggesting), it will be the better solution in the long run, so I hope AMD is heading towards something like that. Second, I don't see Zen 2 being what Adored suggested. Personally it seems way too soon for that kind of departure. On top of that it would be really awkward for desktop usage. On Epyc, adding the control chip would cost 25% more on top of the "core chips". Not bad. On Ryzen 3k it would mean doubling the cost of production. Also, they would probably have to design a control chip for each version of Ryzen. They could probably get away with just disabling features from the top down, but then your highest-volume option is a chip that is 2-4 times larger than it needs to be.

To me, if AMD is going to do any 5-chip Epyc, Adored is on the right track but seriously off. Instead of a control chip I could see AMD doing an off-die L4, specifically for Epyc. So a 5th chip with something like 128-256 MB that would be similar in die size to the other 4 dies. Something they wouldn't need to provide for Threadripper and Ryzen.
Not that I'm agreeing with Adored's speculation but why do you say this?

Fabbing 5 chips does not mean a 25% increase in costs, especially if you're eliminating duplicated circuitry. Each of the core dies will be smaller and the total silicon area should be smaller, leading to even greater yields, which might be essential for 7nm. You will have to fab 2 separate designs and this will increase costs, but without details such as sales volumes, etc., no one here can accurately predict the final composite cost.

What am I missing?
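The yield side of this argument can be illustrated with the classic Poisson die-yield model, Y = exp(-D·A). The defect density and die areas below are made-up figures, just to show the direction of the effect:

```python
from math import exp

def die_yield(area_mm2, defects_per_cm2):
    """Poisson die-yield model: Y = exp(-D * A)."""
    return exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.5  # defects per cm^2 -- hypothetical, e.g. an immature 7nm node
big = die_yield(200, D)    # one 200 mm^2 die holding everything
small = die_yield(100, D)  # a 100 mm^2 die holding half the logic

print(f"200 mm^2 die: {big:.1%} yield")
print(f"100 mm^2 die: {small:.1%} yield")
# The small dies can be tested individually, so a defect scraps only
# half as much silicon as a defect in the big die would.
```

Per-die yield rises steeply as dies shrink, which is the point being made; the open question is whether the duplicated interface circuitry and extra interconnects eat those savings.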
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
The cost for a theoretical split-up Zen 2 wholly depends on the balance between 7nm yield and the vastly increased complexity of interconnects that an MCM consisting of many relatively tiny parts will require. If yield is high enough, AMD's current approach of having two dies covering the whole product range, and matching partially faulty dies with the respective lower-end offerings, will be the safe option. To me the switch to more aggressive MCMs is an optimization they may target for Zen 3 or beyond, where significant process node improvements are not yet a given.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Not that I'm agreeing with Adored's speculation but why do you say this?

Fabbing 5 chips does not mean 25% increase in costs especially if you're eliminating duplicated circuitry. Each of the core die will be smaller and the total silicon area should be smaller, leading to even greater yields, which might be essential for 7nm. You will have to fab 2 separate designs and this will increase costs, but without details such as sales volumes, etc,` no one here can accurately predict the final composite cost.

What am I missing?

Simple: they will have 5 dies of roughly the same size. The actual size of the die makes no difference here. If, like Epyc 7K, you only had 4 of them, adding a 5th would be a 25% increase in silicon cost compared to using 4. This might, as a whole, be smaller than the cost of splitting everything from the control chip into the 4 other dies. I'm just saying that adding a 5th piece of the same-size die is a 25% increase. Again, this might not matter in Epyc, because the 25% can be well worth it compared to the possible size of the 4 dies with the control features split between them.

This is more about Ryzen and, to a smaller extent, Threadripper. Now you have a single die, so AMD has two choices. One is to use the same trick from Zeppelin and reuse the same control die (since now the CPU dies don't have the important components). So now you have two dies that are roughly the same size, and one of them has 128 lanes of PCIe, 8 memory channels, and a host of other stuff that the desktop CPU doesn't need and can't use. Those die savings are lost: while you have great yields on both chips, the effective yield is much smaller compared to a normal single-die, single-chip desktop part, because you basically halve the production numbers (having to use every other wafer for an over-sized control chip). The other possibility is for AMD to design a control chip for each version of the CPU, which is not unreasonable (a control chip for Ryzen 3k would be so much smaller than the Epyc one, which would increase yields even more), but it gets AMD farther away from their original goal with the Zen arch, which is one die fits all. Now they have to balance control chip production with die binning, and it gets them farther and farther from the flexible environment Zeppelin lets them stay in.

That is why I think a 5th chip might be not just Epyc-specific but SKU-specific. A very large L4 cache chip for certain server markets, like database systems and HPC, makes much more sense than increasing the number of different die productions and sacrificing production margin on their volume CPU sales. So it's not that Adored is wrong on the 5th-chip rumors, he's just way off the mark, basically becoming a Redditor ("what if AMD is really doing this") thinking of amazing ways AMD could use their tech if they were 100 times larger than they are.
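The silicon-count argument in the two posts above can be made explicit with a known-good-die cost sketch (all areas and yields below are hypothetical, chosen only to show the ratios):

```python
def silicon_per_package(die_areas_mm2, yields):
    """Expected raw silicon consumed per good package, assuming dies are
    tested before assembly so only known-good dies are combined."""
    return sum(area / y for area, y in zip(die_areas_mm2, yields))

# Four 100 mm^2 core dies at 80% yield...
four_die = silicon_per_package([100] * 4, [0.8] * 4)
# ...versus the same four plus a same-size control die.
five_die = silicon_per_package([100] * 5, [0.8] * 5)
print(f"5-die package uses {five_die / four_die - 1:.0%} more silicon")

# For a single-core-die desktop part, adding the control die doubles it:
ryzen = silicon_per_package([100], [0.8])
ryzen_ctrl = silicon_per_package([100, 100], [0.8, 0.8])
print(f"1 core die + control die: {ryzen_ctrl / ryzen:.1f}x the silicon")
```

With equal die sizes the 5th die is a flat +25% on Epyc but a 2x on a single-die desktop part, which is exactly the asymmetry being argued.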
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
The cost for a theoretical split-up Zen 2 wholly depends on the balance between 7nm yield and the vastly increased complexity of interconnects that an MCM consisting of many relatively tiny parts will require. If yield is high enough, AMD's current approach of having two dies covering the whole product range, and matching partially faulty dies with the respective lower-end offerings, will be the safe option. To me the switch to more aggressive MCMs is an optimization they may target for Zen 3 or beyond, where significant process node improvements are not yet a given.

That's my belief. I could see this being the eventual plan for AMD somewhere out there. It's possible that Zen 3 might be a good place to start; personally I think it might be after that, with 5nm, where yields could be a huge crapshoot and they wouldn't want to be stuck in the boat Intel is in. His point about taking the current XCC die size, applying Intel's probable 10nm defect rate, and coming up with 2 usable dies was a very good one. I am sure Intel is beyond that now, but it's still eye-popping when someone actually shows it to you. It completely explains why Intel wants people to think that Kabylake-G uses EMIB: they need the public to think they have a solution for what is probably a long ramp-up before they can start to make the larger server dies on 10nm. So I get the idea of making the dies smaller, and I get the idea of moving services off to smaller dies in an MCM package to increase the usable wafer space. I just can't see AMD making that extreme a move unless Glo-Flo or TSMC's 7nm is having the same issue. But if that were the case they would have to do it as soon as Zen 2, because Zen 3 would have better yields anyway, and the current MCM solution for Epyc, even with bad-ish yields, would still be a strong contender against whatever Intel has on 10nm. Especially if it comes at the cost of designing multiple control chips, or throwing away all the die savings on Ryzen CPUs, which will be their volume product.
 

bsp2020

Member
Dec 29, 2015
105
116
116
If AMD were to use 5 dies, as speculated (leaked?), I think it makes sense to have 1 inter-socket communication die that takes care of all the communication between chips in different sockets. Looking at the current Zeppelin die, it seems to have too many resources dedicated to implementing IF for communication. There are a total of 6 IF IO ports (4 IFOP and 2 IFIS/PCIe) and they will have to add more if they want to build 4/8-socket systems. Most of those are used only in the server/HEDT versions of the chip. I'd build one large IFOP per chip and use a bridge/router chip (with L4/directory cache?) to take care of inter-socket communication. That way, you don't waste as many transistors on consumer dies, and you support 4/8-socket systems without increasing worst-case latency or increasing pins on the socket.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
If AMD were to use 5 dies, as speculated (leaked?), I think it makes sense to have 1 inter-socket communication die that takes care of all the communication between chips in different sockets. Looking at the current Zeppelin die, it seems to have too many resources dedicated to implementing IF for communication. There are a total of 6 IF IO ports (4 IFOP and 2 IFIS/PCIe) and they will have to add more if they want to build 4/8-socket systems. Most of those are used only in the server/HEDT versions of the chip. I'd build one large IFOP per chip and use a bridge/router chip (with L4/directory cache?) to take care of inter-socket communication. That way, you don't waste as many transistors on consumer dies, and you support 4/8-socket systems without increasing worst-case latency or increasing pins on the socket.
Two points:
1) Everything seemingly superfluous and a "waste of transistors" actually works as built-in redundancy: if that part is faulty, the die can still be used in products that don't use it.
2) If everything is split up, all the routing complexity is moved from silicon to interconnects. The latter are more coarse, so it's hard to map all the routing 1:1 there. In many parts there will be interface logic added to the silicon just to standardize the interconnects, which at some point may not be worth the area overhead. Note that Epyc/Zeppelin has an area overhead of 10% for the added interfaces enabling the 4-die chips / 2-socket systems. The smaller the parts, the bigger the relative area overhead needed just to connect them all.
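That scaling can be sketched with a toy model where each die pays a roughly fixed interface area (PHYs don't shrink with the logic slice). The 800 mm² and 20 mm² figures are illustrative, picked so the 4-die case lands near Zeppelin's ~10%:

```python
def interface_fraction(total_logic_mm2, n_dies, iface_mm2_per_die):
    """Toy model: fraction of each die spent on inter-die interfaces,
    assuming a fixed interface area per die regardless of logic slice."""
    logic_per_die = total_logic_mm2 / n_dies
    return iface_mm2_per_die / (logic_per_die + iface_mm2_per_die)

for n in (4, 8, 16):
    f = interface_fraction(800, n, 20)
    print(f"{n:2d} dies: {f:.0%} of each die is interface area")
# The relative overhead grows quickly as the logic slice shrinks,
# which is the argument against splitting into many tiny dies.
```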
 

bsp2020

Member
Dec 29, 2015
105
116
116
Two points:
1) Everything seemingly superfluous and a "waste of transistors" actually works as built-in redundancy: if that part is faulty, the die can still be used in products that don't use it.
2) If everything is split up, all the routing complexity is moved from silicon to interconnects. The latter are more coarse, so it's hard to map all the routing 1:1 there. In many parts there will be interface logic added to the silicon just to standardize the interconnects, which at some point may not be worth the area overhead. Note that Epyc/Zeppelin has an area overhead of 10% for the added interfaces enabling the 4-die chips / 2-socket systems. The smaller the parts, the bigger the relative area overhead needed just to connect them all.

1) Not in this case, since you can't move the IO pins around. To have redundant IO you would need a different package substrate, and I don't know of any case that uses a different package substrate to utilize IO redundancy.
2) No response since I did not understand your comment.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,689
1,224
136
There are in-progress studies on replacing wired interconnects with wireless interconnects. Might be preferred for HPC-bound chips.

Blurb;
The reduction in average packet latency and average packet energy for the wireless multichip system varies between applications due to the variation in off-chip traffic patterns from different memory access patterns. However, for all application-specific traffic patterns considered here, the performance of the wireless multichip system is better than the interposer based wireline configuration. The average reduction in packet latency and packet energy for the wireless multichip system is 54% and 45% compared to the interposer based system. This is due to the energy efficient single-hop wireless links connecting processing chips and memory stacks. It is worth noting that these performance benefits can be achieved with negligible active area overhead of 0.3 mm squared per transceiver.
--
Computing modules with multiple smaller processing chips with in-package memory stacks are becoming prevalent in platform based and HPC systems due to their performance benefits and cost-effectiveness. In this work, we explore the advantages possible if the chip-to-chip and memory-to-chip communication in multichip systems can be realized with state-of-the-art mm-wave wireless links operating in the 60GHz band. The wireless links are capable of establishing direct communication channels between cores in different chips and memory stacks via on-chip embedded transceivers and antennas. Such integration mechanism results in significant gains in performance and energy consumption in data communications in a multichip environment with negligible overhead of 0.3mm squared per transceiver.

Custom Design for 5G Wireless Base Stations: The company’s application-specific integrated circuit (ASIC) design systems (FX-14 and FX-7) enable optimized 5G solutions (functional modules) by supporting wireless infrastructure protocols on high-speed SerDes, solutions to integrate advanced packaging, monolithic, ADC/DAC and programmable logic. The 5G solution includes 32G BP and 32G SR SerDes to support CPRI, JESD204C standards. It also includes advanced packaging solutions such as 2.5D and MCM, with mmWave capable ADC/DAC data converters and digital front end (DFE). FX-14 is available to customers today while volume production is expected in 2019 for FX-7.
FX-14 supports 12LP.

So, ideally one is looking at ~10-20 mm squared of space for something that can provide 2 terabyte-per-second interconnects, with no interposer or wires, while offering picosecond-scale latency and low energy for the bandwidth provided.

Extra:
https://dl.acm.org/citation.cfm?id=2206822
Heterogeneous System Architectures (HSA) that integrate cores of different architectures (CPU, GPU, etc.) on single chip are gaining significance for many class of applications to achieve high performance. Networks-on-Chip (NoCs) in HSA are monopolized by high volume GPU traffic, penalizing CPU application performance. In addition, building efficient interfaces between systems of different specifications while achieving optimal performance is a demanding task. Homogeneous NoCs, widely used for many core systems, fall short in meeting these communication requirements. To achieve high performance interconnection in HSA, we propose HyWin topology using mm-wave wireless links. The proposed topology implements sandboxed heterogeneous sub-networks, each designed to match needs of a processing subsystem, which are then interconnected at second level using wireless network. The sandboxed sub-networks avoid conflict of network requirements, while providing optimal performance for their respective subsystems. The long range wireless links provide low latency and low energy inter-subsystem network to provide easy access to memory controllers, lower level caches across the entire system. By implementing proposed topology for CPU/GPU HSA, we show that it improves application performance by 29 percent and reduces latency by 50 percent, while reducing energy consumption by 64.5 percent and area by 17.39 percent as compared to baseline mesh.
https://www.computer.org/csdl/trans/tc/2017/07/07795230.html
 

LightningZ71

Golden Member
Mar 10, 2017
1,661
1,946
136
I speculate that breaking up the individual die tasks differently on the EPYC package will enable AMD to leverage the existing 12LP node production capacity with the new 7nm capacity. For example, high speed DRAM controllers don't need to be made at the bleeding edge node to be effective. The same for I/O controllers.

With that in mind, why not address the biggest weakness of Epyc, which is inter-CCX communication latency? Build one slightly larger 7nm die that can house 8-12 CCX units linked by an on-chip IF mesh with 5 high-bandwidth serial I/O links. Those links go to 4 on-package memory, I/O and L4 cache dies that link to the on-board PCIe slots, DRAM, etc., with the 5th link reserved for a direct connection to the 5th die on the second CPU.

The advantage here is that only the CPU core die needs to be 7nm. The memory controller dies can easily be made on the mature 12LP process without hampering performance. They can also easily hold 32-128 MB of L4 cache each; such an arrangement could net you an Epyc processor with 128-512 MB of L4 cache. The second advantage of such an arrangement is future-proofing. Need to change the DRAM controllers to DDR5? Just change the I/O dies. Need to support PCIe 5.0? Just change the I/O die. Want to support a large install base of servers with a new core design, or a newer node that clocks faster? Change the core die. Finally get 7nm production to a very high yield ratio with extra capacity? Move the I/O dies to it and quadruple the L4 cache amounts.

The issue is supporting so many different dies. I also don't expect that making an 8-12 CCX 7nm die will be at all easy.
 

french toast

Senior member
Feb 22, 2017
988
825
136
I speculate that breaking up the individual die tasks differently on the EPYC package will enable AMD to leverage the existing 12LP node production capacity with the new 7nm capacity. For example, high speed DRAM controllers don't need to be made at the bleeding edge node to be effective. The same for I/O controllers.

With that in mind, why not address the biggest weakness of Epyc, which is inter-CCX communication latency? Build one slightly larger 7nm die that can house 8-12 CCX units linked by an on-chip IF mesh with 5 high-bandwidth serial I/O links. Those links go to 4 on-package memory, I/O and L4 cache dies that link to the on-board PCIe slots, DRAM, etc., with the 5th link reserved for a direct connection to the 5th die on the second CPU.

The advantage here is that only the CPU core die needs to be 7nm. The memory controller dies can easily be made on the mature 12LP process without hampering performance. They can also easily hold 32-128 MB of L4 cache each; such an arrangement could net you an Epyc processor with 128-512 MB of L4 cache. The second advantage of such an arrangement is future-proofing. Need to change the DRAM controllers to DDR5? Just change the I/O dies. Need to support PCIe 5.0? Just change the I/O die. Want to support a large install base of servers with a new core design, or a newer node that clocks faster? Change the core die. Finally get 7nm production to a very high yield ratio with extra capacity? Move the I/O dies to it and quadruple the L4 cache amounts.

The issue is supporting so many different dies. I also don't expect that making an 8-12 CCX 7nm die will be at all easy.
This sounds amazing, but seems too optimistic for the time frame imo.
That's really innovative if they can pull it off.

Now if they could do something similar with Navi...could be exciting.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I speculate that breaking up the individual die tasks differently on the EPYC package will enable AMD to leverage the existing 12LP node production capacity with the new 7nm capacity. For example, high speed DRAM controllers don't need to be made at the bleeding edge node to be effective. The same for I/O controllers.

With that in mind, why not address the biggest weakness of Epyc, which is inter-CCX communication latency? Build one slightly larger 7nm die that can house 8-12 CCX units linked by an on-chip IF mesh with 5 high-bandwidth serial I/O links. Those links go to 4 on-package memory, I/O and L4 cache dies that link to the on-board PCIe slots, DRAM, etc., with the 5th link reserved for a direct connection to the 5th die on the second CPU.

The advantage here is that only the CPU core die needs to be 7nm. The memory controller dies can easily be made on the mature 12LP process without hampering performance. They can also easily hold 32-128 MB of L4 cache each; such an arrangement could net you an Epyc processor with 128-512 MB of L4 cache. The second advantage of such an arrangement is future-proofing. Need to change the DRAM controllers to DDR5? Just change the I/O dies. Need to support PCIe 5.0? Just change the I/O die. Want to support a large install base of servers with a new core design, or a newer node that clocks faster? Change the core die. Finally get 7nm production to a very high yield ratio with extra capacity? Move the I/O dies to it and quadruple the L4 cache amounts.

The issue is supporting so many different dies. I also don't expect that making an 8-12 CCX 7nm die will be at all easy.

I think this is unlikely in the extreme.

One of the big reasons AMD went to MCM for Epyc/TR is that they could leverage doing only a single die and using it in multiple markets.

What you suggest here is creating multiple dies for a single market.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Interesting. IPC for Pinnacle Ridge wasn't supposed to be much, if any, better than Summit Ridge. I'm intrigued.
Memory latency has been a real issue with Ryzen in many tasks. AMD is on record saying this is one of the things they targeted when making Pinnacle, so it would make sense that some workloads see a major improvement. Cache latencies too, and I wouldn't be surprised to see improvements in the CCX to CCX latency. There's likely some optimization work on the branch predictor too. You can see how the multithreaded results benefitted more, but even in single threaded there's some improvement.

Do you remember Vishera (Piledriver)? Remember the large improvements over Bulldozer? What would you say if I told you that Piledriver was no different in scope to Zen+?
It had the same die size and (near-same) transistor count as Zambezi, and in CPU-Z it was listed as a base-layer revision of the same Orochi die. AMD called it a different architecture for the sake of marketing and getting away from the trashed Bulldozer name. "Hey look! We know that Bulldozer was crappy, but we've got yearly architectural updates lined up! Don't worry!". The first one that could be called something like Zen 2 would be Steamroller.

Pinnacle Ridge will outperform people's expectations from what I've been seeing on the web, as long as it breaks the 4GHz Fmax wall. If it still can't break that wall the IPC improvements will somewhat make up for expectations on overall performance, but it won't outperform expectations.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Pinnacle Ridge will outperform people's expectations from what I've been seeing on the web, as long as it breaks the 4GHz Fmax wall. If it still can't break that wall the IPC improvements will somewhat make up for expectations on overall performance, but it won't outperform expectations.

https://www.computerbase.de/2018-01/amd-ryzen-threadripper-2000/

I fully expect Pinnacle Ridge to blow past even 4.6 GHz. AMD gave hints at the CES tech day that GF 12nm is very close to a competitor's 14+ process; it's obvious the competitor is Intel's 14nm+ process. KBL clocked 5 GHz easily. I think AMD will get very close to KBL with around 4.8 GHz for max clocks. In fact I think everybody is going to be surprised by the max clocks of PR.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
https://www.computerbase.de/2018-01/amd-ryzen-threadripper-2000/

I fully expect Pinnacle Ridge to blow past even 4.6 GHz. AMD gave hints at the CES tech day that GF 12nm is very close to a competitor's 14+ process; it's obvious the competitor is Intel's 14nm+ process. KBL clocked 5 GHz easily. I think AMD will get very close to KBL with around 4.8 GHz for max clocks. In fact I think everybody is going to be surprised by the max clocks of PR.
The matching 14+ figure is for efficiency, not drive current or Fmax or anything of the sort that would tell you absolute speed.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The matching 14+ figure is for efficiency, not drive current or Fmax or anything of the sort that would tell you absolute speed.

I am confident you will see at launch that PR closes the gap completely in terms of single-core turbo. CFL will have an advantage in max OC, as it can hit 5.2-5.4 GHz, which will not happen on 12LP. I have done my analysis and think AMD has gone for max clocks with a small die size increase. I think Zen+ will use 12LP (MMP=56nm, CPP=84nm) 10.5T libraries for max frequency.
 

LightningZ71

Golden Member
Mar 10, 2017
1,661
1,946
136
I think this is unlikely in the extreme.

One of the big reasons AMD went to MCM for Epyc/TR is that they could leverage doing only a single die and using it in multiple markets.

What you suggest here is creating multiple dies for a single market.

Would that were the case. However, AMD currently has at least 4 different dies in various stages of production:
The original Summit Ridge die in desktop Ryzen 1xxx
A B-revision of Summit Ridge as seen in Epyc and TR
Raven Ridge, just entering full-volume distribution
Pinnacle Ridge, now ramping for FRP

This excludes the existing Bristol Ridge product, the two SoCs for the Xbox One and PS4, and at least one other legacy embedded product.

We know that EPYC isn't scheduled to see the Pinnacle Ridge die from earlier interviews and releases, so there is still substantial lead time here. We also know that AMD touted IF as a magic glue for chips, the likes of which would enable the above proposal.

What I propose is a two-chip solution that can comfortably serve both the Epyc and TR product stacks. TR could use the same package and dies, drawing on recovered Epyc packages as well as purpose-built ones as needed. It could enable a wider variety of products for the TR stack with increased core counts and, with properly planned-in chip and MCM substrate flexibility, it could even fully leverage the second pair of IO links from the core die.

This would move EPYC and TR from a single high commonality die to a pair of unique dies that just share design elements with the high volume ryzen dies. The difference now is that AMD has healthy revenue once again.
 

exquisitechar

Senior member
Apr 18, 2017
666
904
136
I am confident you will see at launch that PR closes the gap completely in terms of single-core turbo. CFL will have an advantage in max OC, as it can hit 5.2-5.4 GHz, which will not happen on 12LP. I have done my analysis and think AMD has gone for max clocks with a small die size increase. I think Zen+ will use 12LP (MMP=56nm, CPP=84nm) 10.5T libraries for max frequency.
As much as I would love to see that, even AMD's own figures for the performance improvement aren't quite near that (a 10% improvement was mentioned by multiple reliable sources IIRC). They could be rather conservative, and probably are, but I'm expecting a 15% improvement in general, significantly better in some cases and worse in others. Still, I hope that your prediction about the 12LP process they will be using is correct.
 