https://www.anandtech.com/show/1201...ete-gpus-hires-raja-koduri-as-chief-architect
Can we get an Intel subforum?
isn't there like a cooling period before he can do that?
Koduri will officially start in his new role at Intel in early December.
oh boy! there will be war between NV & Intel instead of NV & AMD
Which means AMD gets left even further in the dust than they already are in the GPU race.
Actually it leaves NVidia without a dance partner in laptop GPUs, and the trend is that people are increasingly buying laptops.
Both Intel and AMD will have better and better "good enough" laptop GPUs integrated on the CPU die. NVidia is locked out at this level.
Both AMD and Intel will also have dGPUs that will be put in special packages tightly integrated with something like EMIB. NVidia will have a hard time integrating its dGPUs at this level, so NVidia will mostly be locked out at this level of integration too.
Intel may have packaged up an EMIB commercial proof of concept using a Radeon dGPU chip, but you can bet that future integrated designs from Intel will be all Intel once they get their dGPU ready, and AMD will be packaging all-AMD solutions.
Where does that leave NVidia? Mostly on the outside without a CPU partner for laptops.
NVidia dGPUs will be pushed to the niche of hardcore gamer laptops that run a GTX 1070 or above.
So the real loser here is NVidia, not AMD.
Intel has twice tried to enter the discrete GPU market and failed. Given Vega's obsolete perf/w vs. Nvidia under Koduri's direction, at best Intel will not be entering the consumer space anytime soon.
If it all pans out, Intel wins and everyone loses. But what I said a few posts above is more relevant than ever, and also people have been saying Nvidia is dead since 2009 because the high end was dying and iGPUs would take over. Each year since then Nvidia has made more money than the previous. They're doing better than ever now.
Ignoring AMD, especially when they are on the rise again, is plain stupidity.
I doubt Intel's goals are necessarily targeting the consumer space anyway. This seems to be a response to Nvidia's success in the deep learning and compute space, where Intel has probably finally realized that their current Xeon Phi efforts are not going to be enough to compete in the long run.
Yeah I doubt AMD and Nvidia will be sitting still and both have way more experience than Intel does in this field.
Ignoring AMD, especially when they are on the rise again, is plain stupidity.
AMD has APUs and has more engineering capability in delivering hardware (SoCs) for those purposes than anyone else in the industry. Nvidia does not have x86 CPUs; Intel and AMD do.
It's also funny how Raja went from being the incompetent fool running the Radeon Technologies Group to a star of GPU engineering just by joining Intel.
AMD and Nvidia have 4-5 years of breathing room. Any work that Raja does at Intel will come to fruition with Intel's next-generation architecture. In that time span, A LOT can change for both of those companies.
Yeah I doubt AMD and Nvidia will be sitting still and both have way more experience than Intel does in this field.
Intel has twice tried to enter the discrete GPU market and failed.
It has been my understanding that even high end GPUs don't take full advantage of the bandwidth of a PCIe v3 x16 slot and so there's still plenty of headroom left there. It is basically the data center and the like that needs PCIe version 4 and 5.
Don't think about this in the traditional way. There are some potential problems in the future. Even though PCIe 4.0 and 5.0 are coming, a GPU will need a more robust connection to the CPU. AMD had a lot of ideas for this, like the Greenland project, which was basically an MCM design with a Zen CPU connected to a Vega GPU with four GMIs. Intel wants to pursue this concept. They don't really care about the legacy PCIe connection; it will be too slow even with PCIe 5.0.
I can't talk about it too much, but Microsoft is working on some huge structural change for Windows. It won't come soon, but Intel probably knows about it, and it will be a perfect time to attack this market, because a superfast, low-latency connection between the CPU and the GPU will be necessary.
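To put some rough numbers on the "legacy PCIe is too slow" argument, here is a back-of-the-envelope sketch comparing the theoretical x16 bandwidth of PCIe 3.0/4.0/5.0 against on-package GPU memory. The ~484 GB/s figure is an illustrative assumption (roughly Vega 64's HBM2), not something from the posts above.

```python
# Back-of-the-envelope comparison of PCIe x16 host links vs. on-package GPU memory.
# Illustrative figures only: 484 GB/s is roughly a Vega 64's HBM2 bandwidth.

PCIE_GTS = {"3.0": 8, "4.0": 16, "5.0": 32}  # per-lane transfer rate in GT/s
LANES = 16
ENCODING = 128 / 130   # PCIe 3.0+ uses 128b/130b encoding
HBM2_GBPS = 484        # assumed on-package memory bandwidth, GB/s

for gen, gts in PCIE_GTS.items():
    # GT/s * lanes * encoding efficiency / 8 bits per byte -> GB/s per direction
    link_gbps = gts * LANES * ENCODING / 8
    print(f"PCIe {gen} x16: {link_gbps:5.1f} GB/s per direction, "
          f"about {HBM2_GBPS / link_gbps:.0f}x slower than ~{HBM2_GBPS} GB/s HBM2")
```

With those assumptions, PCIe 3.0 x16 works out to roughly 15.75 GB/s per direction, 4.0 to about 31.5 GB/s, and 5.0 to about 63 GB/s, i.e. even PCIe 5.0 is still nearly an order of magnitude slower than the GPU's local memory, which is the gap links like GMI or EMIB-style integration are meant to close.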
It has been my understanding that even high end GPUs don't take full advantage of the bandwidth of a PCIe v3 x16 slot and so there's still plenty of headroom left there. It is basically the data center and the like that needs PCIe version 4 and 5.
Your understanding is wrong.
It has been my understanding that even high end GPUs don't take full advantage of the bandwidth of a PCIe v3 x16 slot and so there's still plenty of headroom left there. It is basically the data center and the like that needs PCIe version 4 and 5.
If you have to take the data out of the GPU's memory (and you do; it's obviously not going to just sit there in compute applications), then PCIe places severe restrictions on attainable memory bandwidth and, indirectly, on actual GFLOP/s.
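A minimal roofline-style sketch of that restriction, under assumed numbers (a SAXPY-like streaming kernel and a ~12.5 TFLOP/s peak, roughly a Vega 64 class part): if the operands have to stream across the host link instead of sitting in HBM, the attainable FLOP rate collapses.

```python
# Rough roofline-style ceiling for a streaming kernel whose data must cross the
# host link. SAXPY (y = a*x + y, single precision) is the assumed example:
# 2 FLOPs and 12 bytes of traffic (read x, read y, write y) per element.

FLOPS_PER_ELEM = 2
BYTES_PER_ELEM = 12
PEAK_GFLOPS = 12_500   # assumed peak throughput, roughly a Vega 64 class GPU

def bandwidth_ceiling_gflops(bandwidth_gbps):
    """Attainable GFLOP/s when the kernel is bound by this bandwidth."""
    return bandwidth_gbps * FLOPS_PER_ELEM / BYTES_PER_ELEM

for name, bw_gbps in [("PCIe 3.0 x16 (~15.75 GB/s)", 15.75),
                      ("HBM2 (~484 GB/s)", 484.0)]:
    ceiling = bandwidth_ceiling_gflops(bw_gbps)
    print(f"{name}: ~{ceiling:.1f} GFLOP/s ceiling vs {PEAK_GFLOPS} GFLOP/s peak")
```

Under these assumptions the kernel is nowhere near peak in either case, but streaming the operands over PCIe (~2.6 GFLOP/s ceiling) is roughly 30x worse than keeping them in HBM (~80 GFLOP/s ceiling), which is the kind of gap the post above is pointing at.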
Any predictions about the death of high end were clearly wrong.
But they certainly have lost much of the laptop business, such that something like 60% of the GPUs sold in 2016 are Intel IGPs.
NVidia making more money? Sure. In 2009, NVidia's high-end GPU (GTX 295, dual GPU) was ~$500; today it's a $1200 Titan. I wonder why they make more money every year.
So far IGPs/APUs have only soaked up low-end GPUs aimed at essentially non-gamers.
The EMIB/interposer + dGPU approach will lock NVidia out of the mid-range as well, and that is going to sting.
NVidia saw this coming which is why they pushed so hard into super computing.
isn't there like a cooling period before he can do that?
California does not enforce non-competes, so, no.