Intel Chips With “Vega Inside” Coming Soon?


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
There are three pieces of actual info that say the plan does exist (or used to at least). Out of those, two are public and not "debunked": the BenchLife leak, and the NUC roadmap that matches it.

The BenchLife stuff looks fake and doesn't actually indicate anything at all about AMD.

People read way too much into the rumors. The reason Intel reportedly signed a deal with AMD on the graphics had to do with an expiring deal with Nvidia.

Except Intel never signed such a deal. Both AMD and Intel denied it. They can refuse to comment on things, but they can't outright lie in public about big deals or they would face shareholder lawsuits.

The very premise for the rumored deal was also flawed. Intel signed a deal covering all of NVidia's GPU patents, for the entire life of those patents. That did not expire.

What expired was Intel's payments to NVidia, and the addition of new patents to the deal in exchange for those payments. Any NVidia-patented tech Intel is using in its GPUs, they can keep using until those patents expire.

Intel does not need a new GPU IP deal, and has apparently not signed one.
 
Reactions: Jan Olšan

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Except Intel never signed such a deal. Both AMD and Intel denied it. They can refuse to comment on things, but they can't outright lie in public about big deals or they would face shareholder lawsuits.

The very premise for the rumored deal was also flawed. Intel signed a deal covering all of NVidia's GPU patents, for the entire life of those patents. That did not expire.

What expired was Intel's payments to NVidia, and the addition of new patents to the deal in exchange for those payments. Any NVidia-patented tech Intel is using in its GPUs, they can keep using until those patents expire.

Intel does not need a new GPU IP deal, and has apparently not signed one.

They can lie. I don't get where you think they can't. They can. All companies lie except where they have a legal obligation not to, like conference calls or advertising. A press outlet calling and asking whether you signed a contract is not one of those occasions.

But the point wasn't even about whether the deal was real. The deal, if there was one, was about IP, not chip design or chip supply. Same with the Nvidia one. Also, just because the deal with Nvidia didn't expire doesn't mean they didn't, for whatever reason, also seek out a deal with AMD. Remember, they already have a cross-licensing deal with AMD. Asking them to throw in a couple of GPU-specific patents for a development change they want to make isn't going to be a big thing.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Maybe only for Apple? Perhaps Apple asked both Intel and AMD to make such an MCM.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Maybe only for Apple? Perhaps Apple asked both Intel and AMD to make such an MCM.
That's always been the rumor with the Kaby Lake G theory. Apple would be the only company that could pull this off, and they have always liked AMD GPUs over Nvidia in the past. But then the question would be what kind of system this would go into. It would be too power hungry for anything but a system built around a CPU and GPU specifically designed to work with each other, so this isn't a MacBook Air option. They can always go dGPU in just about every other piece of equipment easily. I just don't see the market vs. the performance. It's a performance selection that would be so easy to beat with two separate parts in anything that matters.

It would be completely different if this was an APU (AMD or Intel) with HBM, because you could rein in the power, and the increased power usage of HBM over DDR4 wouldn't be significant in comparison to the packaging advantage. But for Intel to pull it off they would need to actually license, integrate, and produce Intel versions of AMD's GPU. Which means, one, years of development after signing the contract, and two, way too generous a licensing policy. It's one thing to license out a patent. It's another thing to actually sell your design away. A third issue is that whatever that was, even if it was completely based on the Vega design, it wouldn't be Vega, and Intel would call it something completely different. Remember, they still call x86-64/AMD64 "EM64T" even though it was a complete adoption of AMD's IP.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
They can lie. I don't get where you think they can't. They can. All companies lie except where they have a legal obligation not to, like conference calls or advertising. A press outlet calling and asking whether you signed a contract is not one of those occasions.

http://www.marketwatch.com/story/intel-and-amd-license-rumors-should-finally-be-dead-2017-05-22

The AMD CEO made denials at a conference, and that resulted in declines in the stock price. If she lied, anyone that lost money could sue AMD.

With denials from both companies, people still clinging to the idea that the deal is happening aren't basing that belief on reality.

Belief in the original rumor is what fed the "market" for more AMD-related rumors after that.

None of this ever made any sense to me. Sure, if Intel suddenly had no IP cover, it would likely need to go to either NVidia or AMD or face being sued, and this does seem to be what the first rumors were based on. Someone misunderstood the end of the Intel-NVidia deal and started a rumor based on that faulty assumption.

But without that faulty assumption, there is nothing but a house of cards here.

Creating an Intel MCM package with an AMD GPU/HBM inside would be a significant engineering effort (cost) for what would be a VERY expensive, specialized part. What is the long-term point here? You are married to a competitor's product inside your CPUs?

For the end product you could just do a standard laptop CPU + dGPU design, and it would be less costly, and less specialized. This really doesn't make sense, especially when using competitor parts.

If Intel is going down the MCM road long term, they are going to do it with their own GPU. It is really hard to judge what Intel could do with their GPUs now, with a lot more transistors and memory bandwidth (and access to all of NVidia's GPU patents, which they still have). Intel hasn't been pushing the iGPU much, likely because OEMs haven't shown much interest in more expensive parts with better iGPUs. The current iGPU is likely viewed as "good enough" for many use cases, with dGPUs taking over after that.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Didn't know about the denial during a conference call, just the denial. But yeah everything else is pretty much what I was suggesting. There is room for a patent agreement if Intel wants/needs it badly enough. But nothing about the AMD video in an Intel CPU package makes any sense.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Didn't know about the denial during a conference call, just the denial. But yeah everything else is pretty much what I was suggesting. There is room for a patent agreement if Intel wants/needs it badly enough. But nothing about the AMD video in an Intel CPU package makes any sense.

Sure, if Intel needs GPU patents they could approach AMD, but they arguably already have better ones that they got from NVidia. NVidia seems to have tile-based rendering that actually works, more efficient use of memory bandwidth, etc.

But the main point was, that the original rumor was almost certainly false, based on the assumption that Intel was losing access to NVidia patents. It may have even been an attempt at market manipulation.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Sure, if Intel needs GPU patents they could approach AMD, but they arguably already have better ones that they got from NVidia. NVidia seems to have tile-based rendering that actually works, more efficient use of memory bandwidth, etc.

But the main point was, that the original rumor was almost certainly false, based on the assumption that Intel was losing access to NVidia patents. It may have even been an attempt at market manipulation.

I get that. I don't know if it really matters much, outside the market manipulation, whether it was true or false, because the end result means nothing. The only thing feeding this is everyone thinking that KBL-G, or whatever -G iteration may ever see the light of day, is some kind of melding of a GPU from someone other than Intel onto an Intel CPU, which isn't going to happen (unless it's some embedded product with no iGPU and an Adreno or PowerVR chip).
 

jpiniero

Lifer
Oct 1, 2010
14,839
5,456
136
Creating an Intel MCM package with an AMD GPU/HBM inside would be a significant engineering effort (cost) for what would be a VERY expensive, specialized part. What is the long-term point here? You are married to a competitor's product inside your CPUs?

Apple tends to get what Apple wants. Plus it gives them a decent-volume product to test EMIB.

Long term, Intel could lay off their IGP department and just use RTG tiles. Or maybe even use nVidia if they give Intel a better deal.
 
Reactions: beginner99
Mar 10, 2006
11,715
2,012
126
Apple tends to get what Apple wants. Plus it gives them a decent-volume product to test EMIB.

Long term, Intel could lay off their IGP department and just use RTG tiles. Or maybe even use nVidia if they give Intel a better deal.

Yeah, ceding control of such an important IP to a direct competitor in the CPU market or to a company that's disrupting its data center business makes tons of sense.

Intel should also just fab AMD chips, saves them the R&D! /s
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
It doesn't have to make sense if it saves money in the short term.

It wouldn't save them money in any term. Intel has owned the vast majority of the CPU/GPU market with their iGPU chips for years.

They own it and they share nothing. Killing their own iGPUs that the market already considers good enough, and creating dependence on competitors, while paying a part of their revenue to those competitors is utterly absurd.
 
Mar 10, 2006
11,715
2,012
126
It doesn't have to make sense if it saves money in the short term.

Graphics is a technology that has a very high ROI for Intel, far exceeding the cost of developing the technology.

If Intel is trying to save $ in the short term, cutting here isn't smart.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,689
1,224
136
It could be a Cannonlake GFX dGPU.

10nm would push a 160-192 EU product into sub-200 mm². (This is assuming that they don't use super dense 3T-5T tracks)
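
Rough back-of-the-envelope math on that die-size claim (every number below is my own assumption for illustration, not a published figure):

# Back-of-the-envelope area estimate for a 160-192 EU part on 10nm.
# All inputs are assumptions for illustration, not published Intel figures.
eu_area_14nm_mm2 = 1.6     # assumed mm^2 per EU at 14nm, incl. shared slice logic
scaling_14_to_10 = 2.4     # assumed logic-area scaling from 14nm to 10nm
uncore_mm2 = 40.0          # assumed memory PHY, display, media, misc.

for eus in (160, 192):
    area = eus * eu_area_14nm_mm2 / scaling_14_to_10 + uncore_mm2
    print(f"{eus} EUs -> roughly {area:.0f} mm^2")
# Prints roughly 147 mm^2 and 168 mm^2, i.e. comfortably under 200 mm^2
# if those assumptions are anywhere near right.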
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
The AMD CEO made denials at a conference, and that resulted in declines in the stock price. If she lied, anyone that lost money could sue AMD.

Not that I buy into the rumor, but technically they only denied licensing deals, which, as was previously mentioned, would not prevent AMD from selling Intel finished GPU dies that Intel then integrates into its own product. I doubt this, but there could be some reasons:

- pipe-cleaner for emib

It's a low-volume product and not a huge issue if it fails. It's a way to gain actual experience with EMIB in a real product on the market. That has value too, although of course the chips sold will be of tiny quantity and won't recoup the R&D costs directly.

- For apple

The product is for Apple and hence has high enough volume to make it worth it. To further increase sales, they would also be put into NUCs.

But yeah, it's grasping at straws.
 
Reactions: Jan Olšan

cbn

Lifer
Mar 27, 2009
12,968
221
106
Apple tends to get what Apple wants. Plus it gives them a decent-volume product to test EMIB.

Apple packs a lot in a small space compared to other laptops I have seen. Below is the Motherboard of the most recent 15" MacBook Pro with Radeon graphics I could find:

https://www.ifixit.com/Teardown/MacBook+Pro+15-Inch+Touch+Bar+Teardown/73395#s148859



  • (outlined in red) Intel Core i7-6700HQ 2.6 GHz (up to 3.5 GHz) quad-core processor

  • (outlined in orange) Micron MT52L1G32D4PG-093 4 GB LPDDR3 (four chips for 16 GB total)

  • (outlined in yellow) AMD Radeon Pro 450

  • (outlined in green) Elpida (Micron) EDW4032BABG-70-F 512 MB GDDR5 RAM (four chips for 2 GB total)

  • (outlined in light blue) Intel JHL 6540 Thunderbolt 3 controller (one for each set of USB-C ports)

  • (outlined in dark blue) Intel SR2NH (likely a platform controller hub)

  • (outlined in pink) Texas Instruments CD3215C00 69AV2TW (labeled as Apple's T1 chip in their keynote)

The Samsung K9PHGY8 NAND Flash is located on the front and back of the motherboard. 2 packages each side (4 total packages on this model).

So I'm thinking they are going to want even more integration than that in the future. (I'm guessing the LPDDR, GDDR and NAND all eventually get replaced with some type of stacked 3D XPoint. Maybe even on the GPU die?)
 
Reactions: ZGR

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
- pipe-cleaner for emib

It's a low-volume product and not a huge issue if it fails. It's a way to gain actual experience with EMIB in a real product on the market. That has value too, although of course the chips sold will be of tiny quantity and won't recoup the R&D costs directly.

EMIB requires the chip to be designed for EMIB, and use a compatible interface. I can't see AMD custom designing a GPU with EMIB.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
- pipe-cleaner for emib

It's a low-volume product and not a huge issue if it fails. It's a way to gain actual experience with EMIB in a real product on the market. That has value too, although of course the chips sold will be of tiny quantity and won't recoup the R&D costs directly.

EMIB requires the chip to be designed for EMIB, and use a compatible interface. I can't see AMD custom designing a GPU with EMIB.

What about using multiple smaller dies (EMIB together) to make one GPU that is larger than is normally possible with a monolithic die or with multiple dies on an interposer?
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
What about using multiple smaller dies (EMIB together) to make one GPU that is larger than is normally possible with a monolithic die or with multiple dies on an interposer?

The problem with multiple GPU dies is that you still kind of have the SLI/CF problem: memory waste (duplicate texture buffers) and software to partition the work; even today certain games have problems with it. Those problems don't just disappear because the dies are connected on an interposer.
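
To put rough numbers on that memory waste (the sizes below are made up purely to illustrate):

# Why duplicated texture buffers waste memory on a multi-die GPU.
# All sizes are hypothetical, chosen only to illustrate the point.
dies = 2
mem_per_die_gb = 8.0     # local memory pool attached to each die
textures_gb = 5.0        # texture/asset set every die needs to read
other_per_die_gb = 1.0   # per-die framebuffer, geometry, scratch

# SLI/CF-style duplication: every die keeps its own copy of the textures.
used = dies * (textures_gb + other_per_die_gb)
redundant = (dies - 1) * textures_gb
total = dies * mem_per_die_gb
print(f"duplicated pools: {used:.0f} of {total:.0f} GB used, "
      f"{redundant:.0f} GB of that is redundant texture copies")

# Treating it as one shared pool stores the textures once, but every access
# from the far die now crosses the inter-die link (latency + contention).
print(f"shared pool: {textures_gb + dies * other_per_die_gb:.0f} GB used")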
 
Reactions: godihatework

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
EMIB requires the chip to be designed for EMIB, and use a compatible interface. I can't see AMD custom designing a GPU with EMIB.

OK. Then we can scratch that possibility, and the only reasonable thing would be a chip for Apple. But yeah, I have my doubts. A NUC with a PCIe x16 slot makes the most sense.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The problem with multiple GPU dies is that you still kind of have the SLI/CF problem: memory waste (duplicate texture buffers) and software to partition the work; even today certain games have problems with it. Those problems don't just disappear because the dies are connected on an interposer.

But what if it is built from the ground up as one big GPU using common building blocks? (Separate GPU cores tied together with one SYS + I/O die?)


http://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

MCM-GPU: Multi-Chip-Module GPUs for Continued Performance Scalability
Historically, improvements in GPU-based high performance computing have been tightly coupled to transistor scaling. As Moore's law slows down, and the number of transistors per die no longer grows at historical rates, the performance curve of single monolithic GPUs will ultimately plateau. However, the need for higher performing GPUs continues to exist in many domains. To address this need, in this paper we demonstrate that package-level integration of multiple GPU modules to build larger logical GPUs can enable continuous performance scaling beyond Moore's law. Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies. We lay out the details and evaluate the feasibility of a basic Multi-Chip-Module GPU (MCM-GPU) design. We then propose three architectural optimizations that significantly improve GPM data locality and minimize the sensitivity on inter-GPM bandwidth. Our evaluation shows that the optimized MCM-GPU achieves 22.8% speedup and 5x inter-GPM bandwidth reduction when compared to the basic MCM-GPU architecture. Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU. Lastly we show that our optimized MCM-GPU is 26.8% faster than an equally equipped Multi-GPU system with the same total number of SMs and DRAM bandwidth.
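
For what it's worth, the percentages in that abstract can all be put on one scale by normalizing to the largest buildable monolithic GPU (this is just arithmetic on the quoted numbers; "within 10%" is read here as "at least 90% of"):

# Normalize the speedups quoted in the MCM-GPU abstract.
# Baseline: largest implementable monolithic GPU = 1.00.
opt_mcm = 1.455               # "45.5% faster than the largest implementable monolithic GPU"
basic_mcm = opt_mcm / 1.228   # optimized is a "22.8% speedup" over the basic MCM-GPU
multi_gpu = opt_mcm / 1.268   # optimized is "26.8% faster" than the equal multi-GPU system
hypo_mono = opt_mcm / 0.90    # upper bound, reading "within 10%" as >= 90% of it

for name, perf in [("basic MCM-GPU", basic_mcm),
                   ("equivalent multi-GPU", multi_gpu),
                   ("optimized MCM-GPU", opt_mcm),
                   ("hypothetical monolithic (max)", hypo_mono)]:
    print(f"{name:<30} ~{perf:.2f}x")
# So per their numbers, the multi-die approach lands between a plain multi-GPU
# setup (~1.15x) and the unbuildable ideal monolithic die (~1.62x).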

http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf




2) Modular Chiplet Design: The performance requirements of the exascale node require a large amount of compute and memory to be integrated into a single package. Rather than build a single, monolithic system on chip (SOC), we propose to leverage advanced die-stacking technologies to decompose the EHP into smaller components consisting of active interposers and chiplets. Each chiplet houses either multiple GPU compute units or CPU cores. The chiplet approach differs from conventional multi-chip module (MCM) designs in that each individual chiplet is not a complete SOC. For example, the CPU chiplet contains CPU cores and caches, but lacks memory interfaces and external I/O. There are multiple benefits to this decompositional approach to SOC construction:

Die Yield: Building a single monolithic SOC that provides the equivalent capabilities of what we propose for the EHP would result in an impractically large chip with prohibitive costs. Smaller chiplets have higher yield rates due to their size, and when combined with known-good-die (KGD) testing techniques, can be assembled into larger systems at reasonable cost. This approach has already started garnering interest in both academia [9] and industry [8], [19].

Process Optimization: A monolithic SOC imposes a single process technology choice on all components in the system. With chiplets and interposers, each discrete piece of silicon can be optimized for its own functions. For example, the CPU chiplets can use performance-optimized devices and metal layers, while the GPU chiplets use density-optimized devices and metal. The interposer layers can use a more mature (i.e., less expensive) process technology node as the I/O components likely do not need transistors in the cutting-edge technology node nor as many metal routing layers as the compute chiplets.

Re-usability: A single, large HPC-optimized APU would be great for HPC markets, but may be less appropriate for others. The decomposition of the EHP into smaller pieces enables silicon-level reuse. For example, one or more of the CPU clusters could be packaged together to create a conventional CPU-only server processor.
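
The die-yield point in that excerpt is easy to see with a standard defect-density model (a sketch; the defect density and die areas below are assumed, not taken from the paper):

import math

# Simple Poisson yield model: yield = exp(-D * A),
# where D is defect density (defects/mm^2) and A is die area (mm^2).
D = 0.002                 # assumed defect density
monolithic_mm2 = 700.0    # assumed single big SOC
chiplet_mm2 = 175.0       # assumed: one of four chiplets covering the same logic

def die_yield(area_mm2, defect_density=D):
    return math.exp(-defect_density * area_mm2)

print(f"700 mm^2 monolithic die yield: {die_yield(monolithic_mm2):.0%}")   # ~25%
print(f"175 mm^2 chiplet yield:        {die_yield(chiplet_mm2):.0%}")      # ~70%
# With known-good-die testing you only throw away bad 175 mm^2 chiplets,
# not whole 700 mm^2 dies, which is the cost argument the paper is making.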
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
OK. Then we can scratch that possibility, and the only reasonable thing would be a chip for Apple. But yeah, I have my doubts. A NUC with a PCIe x16 slot makes the most sense.

I don't think *Intel* would agree to making it for Apple only. Look at the countless derivatives they make of their chips. While there were claims that the Iris/Iris Pro lines were originally an idea floated by Apple, they sold them to others as well.

Apple is a high-margin customer for sure. But they are not high volume, especially not for a chip of KBL-G's class, which is high end and expensive even for Apple computers. Intel would want to proliferate KBL-G to others as well, not just Apple. They are an independent entity, and it would have to make financial sense for them.

- pipe-cleaner for emib

Aside from the reasons others are pointing out, there is already a pipe-cleaner for EMIB. Intel FPGAs.

But yeah, I have my doubts. A NUC with a PCIe x16 slot makes the most sense.

This is what I think too. NUC with x16 slots for really small discrete GPUs.

The other thing is the story, courtesy of HardOCP, about RTG not liking being part of AMD. If that story is true and Murthy really expects RTG to be a significant part of Intel, then KBL-G would be a good start.

Whether or not that story is true, we know a storm is brewing.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
But what if it is built from the ground up as one big GPU using common building blocks? (Separate GPU cores tied together with one SYS + I/O die?)


http://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

I read the NVidia paper a few months ago, and IIRC they were applying this to the compute space, not graphics, and a major focus of the research was limiting the communication between the modules.

So you do still have the memory duplication issue for graphics textures with this approach. Each GPU tile has its own memory controller, and they all likely need much the same textures. So either you duplicate the textures in each chip's memory pool (thus wasting memory) or you treat it as one big pool, but with a lot more latency and huge contention issues given the huge appetite for texture memory.

I am not convinced having a SYS + I/O chip solves the need for any SW involvement either. If that were the case, why couldn't a standard GPU be built with a slightly more robust SYS + I/O section, switchable between master and slave, to do that kind of thing on dual-GPU cards, where one chip's SYS + I/O runs the GPU portion of both chips? Instead, dual-GPU cards always ended up requiring CF/SLI software and were just as problematic as dual cards.

Anyway, they certainly wouldn't bother doing this in a CPU + GPU package. There would be no need for so much GPU power that they would need multiple GPU dies, and they would have a hard time supplying the power and cooling such a beast would need.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I wonder how long it will take for GF and TSMC to come out with their own versions of EMIB. Who will have the open standard first?
 