64 core EPYC Rome (Zen2) Architecture Overview?


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Yet there's only one reason to use an interposer and chiplet design: to reduce cost versus an integrated circuit. The last such CPU was Intel's Clarkdale; the cheap CPUs got the chiplet design while the pricier versions (Nehalem) got a unified silicon design.

WUT?
 

naukkis

Senior member
Jun 5, 2002
768
634
136

Clarkdale used a 32nm CPU chiplet with a 45nm memory-controller/iGPU chiplet within the same CPU package. Memory latency wasn't good, but that was more about the FSB interface between the chips than the design itself.

Decoupling the CPU and northbridge isn't a new thing, and neither is putting them together in the CPU package.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Clarkdale used a 32nm CPU chiplet with a 45nm memory-controller/iGPU chiplet within the same CPU package. Memory latency wasn't good, but that was more about the FSB interface between the chips than the design itself.

Decoupling the CPU and northbridge isn't a new thing, and neither is putting them together in the CPU package.

Except this was a case of moving to more integration, not the other way around.

The first Nehalems didn't have any GPU in the package.

Westmere (Clarkdale/Arrandale) had a separate GPU die in the package. The memory controller was in the GPU part, likely because that is where they thought it would be more sensitive.

Sandy Bridge integrated the GPU into the die.

We have been moving toward more integration all along. This was why AMD/ATI merged.

Since Ryzen/Threadripper/Epyc, people have been over-hyping multi-chip packages.
 

naukkis

Senior member
Jun 5, 2002
768
634
136
Westmere (Clarkdale/Arrandale) had a separate GPU die in the package. The memory controller was in the GPU part, likely because that is where they thought it would be more sensitive.

Because it was cheaper to produce the very small CPU cores on the capacity-limited state-of-the-art node and the other parts of the CPU on a mature older process. As the 32nm process matured, they switched to a unified die design. It's pretty much the same situation now with 7nm: make the chips as small as possible to keep production costs down.

Memory latency will suffer when the memory controller and CPU are on separate chips, but with an optimized solution that handicap can be as small as 1ns, which is tolerable. A bigger difference comes from the fact that a 7nm memory controller could probably be more advanced and higher clocked than, e.g., a 14nm controller, but that difference is also pretty small.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Yes they could. I'm still sceptical they will do it with the first generation. Doing APUs with chiplets immediately complicates things considerably:

1. Building an interposer and at least 2-3 different chiplets is expensive. You can't really sell that anywhere near the price of a 2200G or a 2500U, which is the price level most OEMs are willing to pay for AMD models still.

2. APUs are mostly for low-power (mobile). So in addition to all the new architectural problems they face, they also need to tackle all the battery-life problems. There is definitely lots of room for improvement even on the relatively simple Raven Ridge (idle power, LPDDR support). Making a multi-module chip sip power at low-intensity tasks (e.g. web browsing) while crossing chiplet boundaries is definitely way harder. My guess would be that they would do it later, perhaps with Zen 3, once they've already mastered the current chiplet design.

I have no doubt AMD will eventually do it; after all, the benefits of such modularity could be massive. E.g. they could release new CPU and GPU chiplets out of sync: release a processor with a 12nm Vega GPU (as Navi isn't ready), then update it to Navi later without changing any other chiplets, do configurable models with/without HBM, etc.

I'm not yet convinced they'll do it this generation. I wouldn't mind being wrong though.

There are a lot of myths regarding chiplets and interposers.

Small passive interposers are very cheap to produce. ~$2/100mm^2 some time ago.
Having 3-4 CPU & GPU chiplets to be combined in many ways is cheaper than producing several larger monolithic designs.
Impedance across a silicon interposer is no higher than an on-die interconnect (per AMD).
Latency hit is ~ 1ns across the interposer (per Xilinx).

The benefits are smaller chiplets having better yields (a rough sketch of this follows below).
Upgrades are possible without completely new designs.
Flexibility to produce a wider range of products with lower overall design costs.
Easier inventory control (fewer unique designs) with the flexibility to respond quickly to varying sales. Only the final assembly stage produces the desired product.
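
To put a rough number on that yield point, here is a minimal sketch using a simple Poisson defect model; the defect density and die sizes are illustrative assumptions, not foundry figures:

```python
# Rough illustration of why small chiplets yield better than one big die.
# Poisson model: yield = exp(-die_area * defect_density). The 0.2 def/cm^2
# defect density and the die areas are assumptions for illustration only.
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.2  # assumed defect density

def gross_dies(die_area_mm2, wafer_diameter_mm=WAFER_DIAMETER_MM):
    """Approximate gross dies per wafer, with a simple edge-loss correction."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, d0_per_cm2=DEFECTS_PER_CM2):
    """Fraction of dies that catch zero random defects."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

for label, area in [("monolithic ~500mm^2 die", 500.0), ("~64mm^2 chiplet", 64.0)]:
    g = gross_dies(area)
    y = poisson_yield(area)
    print(f"{label}: {g} gross dies, {y:.0%} yield, ~{g * y:.0f} good dies per wafer")
```

With those assumed numbers the small chiplets lose only about 12% of dies to defects, versus well over half for the big monolithic die, before even counting the extra dies gained from better wafer-edge packing.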
 

scannall

Golden Member
Jan 1, 2012
1,947
1,638
136
I'm not buying this chiplet argument. Too much added cost. My bet is on the 64 core part using 8 dies, with 4 of them leech dies like the 2990WX uses. Maintains socket compatibility, and less validation time and cost compared to the more exotic ideas floating around.
 
Last edited:
Reactions: yuri69

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
I'm not buying this chiplet argument. Too much added cost. My bet is on the 64 bit part using 8 dies, with 4 of them leech dies like the 2990WX uses. Maintains socket compatibility, and less validation time and cost compared to the more exotic ideas floating around.
I suppose you wrote that too quickly. Care to explain? 64 bit part?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Except this was a case of moving to more integration, not the other way around.

We have been moving toward more integration all along. This was why AMD/ATI merged.

Since Ryzen/Threadripper/Epyc, people have been over-hyping multi-chip packages.

Yup.

Chiplets and MCM packages are becoming more common because process scaling isn't giving us the benefits we're used to, and in fact less and less every generation, all while being more expensive than ever.

Clarkdale used a 32nm CPU chiplet with a 45nm memory-controller/iGPU chiplet within the same CPU package. Memory latency wasn't good, but that was more about the FSB interface between the chips than the design itself.

And they brought total integration with the Sandy Bridge Tock.

By the way, Clarkdale used QPI to connect the two chips together. Latency is because of the extra physical distance due to the memory controller being on the other die.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Yet there's only one reason to use an interposer and chiplet design: to reduce cost versus an integrated circuit. The last such CPU was Intel's Clarkdale; the cheap CPUs got the chiplet design while the pricier versions (Nehalem) got a unified silicon design.

For AMD the chiplet design is certainly the cheaper alternative, but as Clarkdale shows, it usually comes with inferior performance versus an integrated silicon version.
Time to market and flexibility. Being able to mix and match IP quickly to meet customer demands. It would be great for AMD's semi-custom division if any of the Big 7 decided they wanted a custom I/O chip for their EPYC for whatever reason.
 

naukkis

Senior member
Jun 5, 2002
768
634
136
By the way, Clarkdale used QPI to connect the two chips together. Latency is because of the extra physical distance due to the memory controller being on the other die.

No it isn't. Clarkdale's memory latency was worse than with motherboard-installed memory controllers. For a chiplet design the additional latency can be as small as 1ns, which is low enough not to be an issue. Intel's Crystal Well uses a chiplet design for the L4 cache and its latency is much, much lower than DRAM behind a silicon-integrated memory controller.
 

Manabu

Junior Member
Jun 25, 2008
9
10
81
7. There are seemingly many packaging options:

(a) Organic MCM. Since the connection between the CPU die and SC die are now very short and direct (at most 2-3 mm), and are located at the edge of the dies, the drivers can be very low power. Organic MCM might be sufficient for the job. This would be the cheapest option.

(b) Passive interposer. Similar to (a) but using a passive silicon interposer in place of the organic substrate. It would offer better performance than organic MCM but is also much more expensive. Interposer size would exceed reticle limit and require stitching. In this case, the SC die cannot be made too large as it would simply make the interposer even bigger. This means a meaningfully large L4 cache may not be practical. Seems like paying a high price but not getting a commensurate payback in performance. I think this option is overkill.

(c) Active interposer. In this case, the CPU dies will be stacked on top of an active interposer which is also the SC die. The interposer will be large but will not exceed reticle limit. Normally there is no need to use 14nm node to make this interposer. But if the rumor that the SC die uses 14nm is true, then you might as well make full use of the area available by adding a large L4 eDRAM cache. The result would be a monster! The L4 cache would mitigate the increased memory latency resulting from moving the memory controller off-die and the limited bandwidth of 8ch DDR4.

(d) EMIB. Intel's EMIB looks like the perfect packaging option for connecting the CPU dies to the SC die. But obviously AMD can't use EMIB. I consulted someone in the packaging business and he told me that there are currently no commercially available alternatives to EMIB. Even though Intel claims that EMIB is theoretically able to accommodate up to 8 bridges per die, in practice it is very difficult to achieve perfect alignment with more than a couple of bridges; yields will be very bad. Interestingly, AMD has a patent that describes an alternative to EMIB: https://patents.google.com/patent/US20180102338A1/en?oq=20180102338 …. But it is not clear if this is what they will use.

I really like the idea of active interposers. They offer lower latency and thus higher performance than passive interposers, due to the shorter signal paths. Also, the possibilities with 3D layout of the function blocks (butter donuts and whatnot) are very exciting. The heat dissipation is troubling but can probably be managed by limiting the power usage of the SC and leaving some gaps between the chiplets and around the more active border for a direct heat path from the SC.

One idea is to put at least half of the L3$ data blocks down in the active interposer, leaving all the performance-critical and more power-hungry L3$ tags and most of the L3$ control in the 7nm chiplet. This fixes the imbalance between the size of the interposer and the sum of the chiplets' area (the SC die would only have 270mm² of active area versus 8x64mm² = 512+mm² for the 32MB L3$ 8c16t chiplets) while not adding much heat to the SC. If the impedance across the interposer is indeed identical to conventional on-chip interconnects, this could even improve the latency by making things physically closer. Despite being big, the yields on the SC would still be good with all the redundancy possible, with bad ones reused for lower-end Rome and Threadripper.

For Matisse another trick would be needed to fix the imbalance. There we have the opposite (better) problem: the SC die is naturally much bigger than the 8c16t chiplet, especially if we add the misc fixed-function graphics stuff (display, codecs, etc.) in the SC, while just a 20mm² 7nm Vega/Navi compute chiplet would already be at the saturation point of DDR4 bandwidth (something you overlooked in your analysis).

On the other hand, if we add 80mm² of Vega/Navi chiplets and a 90mm² HBM2 memory on top for RX580-level performance (for notebooks, PS5, etc), the SC die becomes too small for an interposer. Maybe AMD can somehow mount the HBM2 memory on the border of the SC (still on top) and another slab of dumb silicon or another material to support the hanging side of the HBM2 die. Kinda similar to Intel's EMIB, so I don't know if they can do that.

So Matisse will probably have only 3~9 CUs in the SC itself and live with the imbalance (which helps with the heat dissipation from the SC), and another design would be the high-end graphics APU.

In the end, for Matisse standard passive silicon interposers might be the better way, but then a CPU chiplet designed to sit on top of active silicon on Rome couldn't be re-used for Matisse, forcing Rome to also use passive silicon interposers like your design. If I had to bet, AMD will use all passive interposers this time around, including for Matisse, so we have just a single tapeout at 7nm and the smallest manufacturing costs as well, as the cost per transistor at 7nm is actually a little higher than at 14nm AFAIK.
 
Last edited:

DisEnchantment

Golden Member
Mar 3, 2017
1,682
6,197
136
Regarding heat, I think AMD put some thought into it (see 20170200672)

[0057] In the described embodiments, chiplets can be mounted on interposer 100. Generally, in "mounting" chiplets on interposer 100, the chiplets are fixedly/permanently or removably/temporarily coupled to interposer 100 in such a way that chiplets are enabled to perform one or more operations using, for, and/or with interposer 100. For example, a mounted chiplet may use signal routes in interposer 100 to exchange signals, data, etc. with other chiplets mounted on interposer 100, with interposer 100 itself (e.g., to circuit elements in interposer 100), and/or with an external device coupled to interposer 100. As another example, a mounted heat sink chiplet may absorb heat from interposer 100, from other chiplets mounted on interposer 100, and/or from a device external to interposer 100.



However, in a newer patent (20180102338), AMD is thinking of only carving out holes in the substrate to insert an interconnect chiplet, instead of having a full-sized interposer with empty spaces when not fully configured.

10 Year long Lurker enticed by AMD's chiplets.
Also this concept is kind of novel to me as well.
 

Glo.

Diamond Member
Apr 25, 2015
5,759
4,666
136
For Matisse another trick would be needed to fix the imbalance. There we have the opposite (better) problem: the SC die is naturally much bigger than the 8c16t chiplet, especially if we add the misc fixed-function graphics stuff (display, codecs, etc.) in the SC, while just a 20mm² 7nm Vega/Navi compute chiplet would already be at the saturation point of DDR4 bandwidth (something you overlooked in your analysis).
Matisse has to be capable of fitting an 8C/16T CPU with a 3GHz all-core boost clock into a 25W TDP, because of Epyc's 250W TDP target. Heat will not be that big of a problem.

The only thing that pops question marks over my head is the matter of soldering the IHS with an interposer and chiplets underneath. Is it even possible to do, or are we going to get toothpaste TIM?
 

Manabu

Junior Member
Jun 25, 2008
9
10
81
Regarding heat, I think AMD put some thought into it (see 20170200672)
Well, in my view the "heatsink chiplet" would likely be of a more complex shape than the rectangles pictured, with holes in it for the other actual chiplets to fit in. If fitted first, it could even aid the correct positioning of the other chiplets. For my non-HBM2 Matisse idea, though, it could be a simple rectangular slab.

However, in a newer patent (20180102338), AMD is thinking of only carving out holes in the substrate to insert an interconnect chiplet, instead of having a full-sized interposer with empty spaces when not fully configured.

10 Year long Lurker enticed by AMD's chiplets.
Also this concept is kind of novel to me as well.
In that patent I didn't see anything about active interposers. And is the difference from Intel's EMIB only the fact that the AMD interconnect chiplet has connections on the bottom too? I don't like reading patents.

Anyway, that would work well for a Matisse with beefy graphics and the HBM2 memory by the side, with a thin strip of high density connections on top of the SC and the rest featureless, supported by the package substrate, like in Intel's implementation.

The 8C/16T and Navi chiplets could be fully on top, as there is area for that. I don't know enough about GPU architecture to know what low-power component could be put in the SC below the chiplet in this case, but I guess the data banks for the caches would again be the choice. The thing is, there are many more individual caches in the GPU and I don't know if the pin density between the two chips would support such a thing meaningfully.

Matisse has to be capable of fitting an 8C/16T CPU with a 3GHz all-core boost clock into a 25W TDP, because of Epyc's 250W TDP target. Heat will not be that big of a problem.
On a desktop you will want to clock that 8C/16T chiplet to 5GHz using much more than 25W, while on a thin laptop 25W will already be a lot of heat to dissipate. That 25W number by itself is meaningless.

The real problem is the path of heat dissipation from the SC. The 8C/16T chiplet has a cool 30~50ºC heatsink on top of it, while the SC will have the hot 50~80ºC 8C/16T chiplet on top, which additionally is not designed to conduct heat away from it and will heat it further.

Thus the necessity, I think, of some empty areas on top of the SC in the areas where the SC is the hottest (like GPU for Matisse or the cache coherence stuff for Rome), with possibly some heatsink chiplets as in that AMD patent.

The only thing that pops question marks over my head is the matter of soldering the IHS with an interposer and chiplets underneath. Is it even possible to do, or are we going to get toothpaste TIM?
Epyc/Threadripper use a kind of chiplet architecture and have a solder IHS, so I don't see why not. On the other hand, it seems that Vega64 uses paste, and that is the example with an actual interposer design.
 

Manabu

Junior Member
Jun 25, 2008
9
10
81
It was basically a repetition of what he already said here. But I want to comment on one point I forgot:
7. There are seemingly many packaging options:
(b) Passive interposer. Similar to (a) but using a passive silicon interposer in place of the organic substrate. It would offer better performance than organic MCM but is also much more expensive. Interposer size would exceed reticle limit and require stitching. In this case, the SC die cannot be made too large as it would simply make the interposer even bigger. This means a meaningfully large L4 cache may not be practical. Seems like paying a high price but not getting a commensurate payback in performance. I think this option is overkill.
I don't see why stitching would be needed. You only need to print connections in the center part of the interposer, up to 2~3mm from each side of the borders of the SC die. Then the patterning machine could make a big jump to the center of the next interposer. With the EMIB-like patent (I'm still not sure of the difference), not even those dumb unpatterned borders are needed.

If the reticle limit for the passive interposer is 800mm², the SC die could be up to 25x25=625mm². If we can exclude the borders, then each passive interposer would cost about $10 from a $600 300mm wafer. If not, then about $16 per passive interposer. Compared to the $60~$100 for the 625mm² 14nm SC plus ~$100 for the 8x64mm² 7nm chiplets, it isn't much. For a 300mm² 14nm SC, the SC would be $35, the full passive interposer $8, and the chiplets would continue to be $100 for 64 cores.

I don't know about the packaging costs and yields versus MCM, but I think the higher performance and lower energy consumption for a $1,000~10,000 part justify the extra $8~16 cost of the silicon interposer, if that is the only extra cost over the MCM solution. For a $100~500 Matisse part the silicon interposer would be $3.
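
As a quick sanity check on those figures, here is a sketch that just divides the $600 300mm wafer assumed above by an approximate gross die count; dicing, bumping, test and imperfect wafer utilisation (none of which are modelled here) would push each piece up toward the $8~$16 range quoted above:

```python
# Back-of-the-envelope passive interposer cost: wafer cost split over an
# approximate gross die count. The $600 processed-wafer cost comes from the
# post above; yield is ignored since passive interposers yield very well.
import math

def interposer_cost(area_mm2, wafer_cost_usd=600.0, wafer_diameter_mm=300):
    d = wafer_diameter_mm
    gross = (math.pi * (d / 2) ** 2 / area_mm2          # wafer area / die area
             - math.pi * d / math.sqrt(2 * area_mm2))   # edge-loss correction
    return wafer_cost_usd / gross

for area in (800, 500, 300):  # reticle-sized down to a smaller interposer
    print(f"{area}mm^2 passive interposer: ~${interposer_cost(area):.0f} each")
```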
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Well, in my view the "heatsink chiplet" would likely be of a more complex shape than the rectangles pictured, with holes in it for the other actual chiplets to fit in. If fitted first, it could even aid the correct positioning of the other chiplets. For my non-HBM2 Matisse idea, though, it could be a simple rectangular slab.


In that patent I didn't see anything about active interposers. And is the difference from Intel's EMIB only the fact that the AMD interconnect chiplet has connections on the bottom too? I don't like reading patents.

Anyway, that would work well for a Matisse with beefy graphics and the HBM2 memory by the side, with a thin strip of high density connections on top of the SC and the rest featureless, supported by the package substrate, like in Intel's implementation.

The 8C/16T and Navi chiplets could be fully on top, as there is area for that. I don't know enough about GPU architecture to know what low-power component could be put in the SC below the chiplet in this case, but I guess the data banks for the caches would again be the choice. The thing is, there are many more individual caches in the GPU and I don't know if the pin density between the two chips would support such a thing meaningfully.


On a desktop you will want to clock that 8C/16T chiplet to 5GHz using much more than 25W, while on a thin laptop 25W will already be a lot of heat to dissipate. That 25W number by itself is meaningless.

The real problem is the path of heat dissipation from the SC. The 8C/16T chiplet has a cool 30~50ºC heatsink on top of it, while the SC will have the hot 50~80ºC 8C/16T chiplet on top, which additionally is not designed to conduct heat away from it and will heat it further.

Thus the necessity, I think, of some empty areas on top of the SC in the areas where the SC is the hottest (like GPU for Matisse or the cache coherence stuff for Rome), with possibly some heatsink chiplets as in that AMD patent.


Epyc/Threadripper use a kind of chiplet architecture and have a solder IHS, so I don't see why not. On the other hand, it seems that Vega64 uses paste, and that is the example with an actual interposer design.
I think the reason for paste in Vega might be due to the fact that the HBM modules have a different height to the GPU, which will not be the case for this CPU.
 

coercitiv

Diamond Member
Jan 24, 2014
6,369
12,746
136
I think the reason for paste in Vega might be due to the fact that the HBM modules have a different height to the GPU, which will not be the case for this CPU.
Maybe I'm too tired and my memory is playing tricks on me, but what Vega & HBM based product comes with a heatspreader?
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Maybe I'm too tired and my memory is playing tricks on me, but what Vega & HBM based product comes with a heatspreader?
Correct, no heatspreader for GPU. Just replying to this in post #365.

"Epyc/Threadripper use a kind of chiplet architecture and have a solder IHS, so I don't see why not. On the other hand, it seems that Vega64 uses paste and it is the example of actual interposer design".
 

Zapetu

Member
Nov 6, 2018
94
165
66
I've been reading this thread for a while now and wanted to finally contribute with a couple of images I made today of some of the package layouts suggested here. As a non-native speaker I apologize beforehand if some of the sentences are a bit weird. To start with, here's the basic layout of the current AMD Epyc (Naples) package:

Thermal characteristics of the current EPYC processors seem rather good, maybe partly because the MCM chiplets are located relatively far apart and not all bundled together. While the 8 CPU chiplets and 1 central chiplet rumor seems to be true and will probably soon be confirmed, it's much harder to find an optimal layout for 9 or more different chiplets than for just four. Some of the things I'm presenting here may not be possible to manufacture just yet and may need more time to be mass-production ready (maybe Zen 3).

I really liked the idea presented in AMD's patent #20180102338, Circuit Board with Bridge Chiplets, and if I understood it correctly, the embedded chiplets can be of any material, including a fully active silicon die. Others have suggested this before, but here are some examples where the system controller die (I/O chiplet) is embedded inside the organic substrate:

The first one uses ~75 mm2 square dies for "maximum" yields. As far as I know, and I might be a little mistaken, the reticle limit for current steppers is about 33 mm x 26 mm. Here's another layout with chiplets having about the same size (~75mm2) but different dimensions:

This might be a better layout for thermal performance but still requires the I/O chiplet to be embedded inside the package substrate. If this actually works, there are some benefits to this method:
  • CPU chiplets (7 nm) are directly connected to the I/O chiplet (14 nm) using so-called microbumps, and there is no need for a separate silicon interposer, which adds complexity and probably also more latency. I can't think of any better solution (with a shorter passive signal path) than to connect all the CPU chiplets directly to the I/O chiplet
  • All power wiring for all chiplets can be routed using normal solder bumps and the organic substrate. This is also described in the previously mentioned patent, where it was OK to use normal solder bumps for topside chiplets outside of the embedded chiplets.
It might still be too hard to manufacture any of the previous layouts but AMD might also do something like Intel has done with EMIB:

This design requires additional bridge chiplets that can be small passive silicon interposers or some other material entirely, as long as they provide higher-density signal paths than the base organic substrate allows. Signal paths are quite short but still relatively much longer than connecting the dies together directly. At least something like the above can work today, because Intel and AMD have used EMIB for Kaby Lake G. AMD probably can't use EMIB directly but maybe something similar.

The last option is in my opinion the worst because it requires a passive silicon interposer much larger than the reticle limit (33 mm x 26 mm) and therefore makes things like solder bump placement quite hard (because only the center part can be used). There are better layouts, as kokhua has presented, but here is mine anyway:

I really like the first two, where the system controller chiplet is embedded and the CPU chiplets connect directly to it, partly on top of it. Organic substrate wiring would then be mostly dedicated to power delivery, system memory and PCIe/IFIS. Might not be possible though, at least not yet.

All the images in this post are free to use and can be used to make better layouts later on. Each 10 x 10 pixel square corresponds to a 1 mm x 1 mm area IRL.

Let's see what AMD tells us today. Also big thanks to kokhua for all the hard work with different ideas and diagrams/schemas. Very interesting speculation indeed.
 

kokhua

Member
Sep 27, 2018
86
47
91
Let's see what AMD tells us today. Also big thanks to kokhua for all the hard work with different ideas and diagrams/schemas. Very interesting speculation indeed.

Really cool pics! I hope you are right.
 
Mar 11, 2004
23,155
5,623
146
Matisse has to be capable of fitting an 8C/16T CPU with a 3GHz all-core boost clock into a 25W TDP, because of Epyc's 250W TDP target. Heat will not be that big of a problem.

The only thing that pops question marks over my head is the matter of soldering the IHS with an interposer and chiplets underneath. Is it even possible to do, or are we going to get toothpaste TIM?

Has 64 core EPYC been confirmed for that TDP? (I really don't know, so am asking if you're basing that off the current EPYC TDP or if AMD has actually said this).

I have to wonder if we aren't getting ahead of ourselves. I believe AMD has only said that EPYC will offer 64 core/128 thread capability at some point, but they did not say that happens now with Zen 2 on 7nm. From what I've gathered from AMD themselves, we should only be expecting 48 core/96 thread EPYC chips for Zen 2 for a while (I don't believe they have said 64 core EPYC will be Zen 2, so it could be Zen 2+ or even Zen 3). That means AMD has time and resources to build toward that situation (people have said AMD would be limited in what they could do with these speculated 64 core EPYC designs because of time and resources, but if that's not happening for a while, that gives them more time and resources, ones that don't necessarily have to be tied to how they operated before).

I personally would expect that 64 core EPYC will have more than 8 memory channels, unless DDR5 really brings about a significant increase in memory bandwidth (it should bring some, but will it bring enough to mitigate sticking with the same number of memory channels they had at 32 cores?). And yes, there are other ways to help with the memory issues. Speaking of compression, is it possible that's not a purely bandwidth-related thing, and that there are other benefits (like encrypting the data for security during compression)?
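
For what it's worth, here is the rough per-core arithmetic behind that worry; 8 channels matches the current EPYC platform, while the DDR4-3200 and DDR5-6400 data rates are just assumed values for illustration:

```python
# Peak theoretical memory bandwidth per core for a few hypothetical configs.
# 8 channels x 64-bit bus; the transfer rates below are assumptions.
def peak_bw_gbs(channels, mt_per_s, bus_bytes=8):
    """Channels * transfer rate * 8-byte bus width, in GB/s."""
    return channels * mt_per_s * bus_bytes / 1000

for label, channels, rate, cores in [
    ("32 cores, 8ch DDR4-3200", 8, 3200, 32),
    ("64 cores, 8ch DDR4-3200", 8, 3200, 64),
    ("64 cores, 8ch DDR5-6400 (assumed)", 8, 6400, 64),
]:
    bw = peak_bw_gbs(channels, rate)
    print(f"{label}: {bw:.0f} GB/s total, {bw / cores:.1f} GB/s per core")
```

Under those assumptions, roughly doubling the data rate restores the per-core bandwidth of a 32-core part without adding any channels.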

As for the interface, would a new layout lead to a new interface design? Why not use the empty space (in the chiplet patent image posted earlier) as spacers (like on the older chips when they had bare dies)? Would thermal paste on bare dice offer better thermals than an IHS, even with solder? The people assembling servers are not average consumers, so I'd think they could get by with ditching the IHS to enable a better thermal interface. Or they come up with some other solution (would it be possible to make the area under the IHS a vapor chamber?).
 