ultimatebob
Lifer
- Jul 1, 2001
Sadly, the 3D graphics drivers are so bad in some Linux distributions for some newer cards that you're actually better off using the integrated graphics.
Uhm, memory bandwidth? Or are you assuming HBM / stacked DRAM already?
The point is that underlying process advantages mean nothing without architectural information to back it up, along with real-world performance. You keep talking about theory and things that don't exist yet as if they're reality.

Oh, wait, you're saying "in graphics performance". Sorry, I thought your graph compared CPU FLOPS vs GPU FLOPS; I misunderstood the graph.
The difference between a 2006 part (7100GS) and today's flagship GTX 780 Ti is a lot more than 75x. For example, the texturing fillrate alone is 140x higher (1.4 GTexels/s vs. 196 GTexels/s). When the 7100GS runs out of VRAM, the reduced performance would be hundreds of times slower, for example at 4K.

Intel claims a 75x gaming improvement since 2006, outpacing both Moore's Law and your graph:
I think? "I think" is not evidence of anything. Please show us gaming benchmarks of these non-existent parts you keep referring to as fact.

They're not falling behind further, and certainly not exponentially. And if you want to compare FLOPS, I think even a theoretical Gen7 IGP with 72-144 EUs can give you a decent understanding of how Intel will catch up in the coming 1-2 years.
Again, manufacturing doesn't mean anything without details of the underlying hardware and how it actually performs in the real world. We were told amazing things about Larrabee and look how that turned out.

I already told you. Intel will improve its microarchitecture so that it isn't much behind anymore, and because its manufacturing lead is expanding, Intel will be able to get much better IGPs than what would have been possible without this 2-3 node advantage. Just look at how good or bad GPUs were 2-3 nodes (or 4-6 years) ago.
You keep talking about things that don't exist while I keep talking about things that have already happened.

Why do you have to refer to the situation multiple years ago? Why is that relevant? Roadmaps change, plans change, targets change, all sorts of things change. If you're going to refer to the past, when Intel was much more behind, you're always going to come to the conclusion that Intel will never catch up, obviously.
Very likely.

A GTX Titan? I think 10nm is very likely: 10nm is 5x more dense than 28nm, so your massive 550mm² GTX Titan is reduced to 110mm². Add the CPU and your APU is about the size of Ivy Bridge/Haswell. I don't know how high it will be able to clock, but note that Intel will use germanium at 10nm, which could potentially reduce power consumption and improve performance quite dramatically.
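For what it's worth, here is a back-of-the-envelope check of that scaling claim, taking the 5x density figure and the 550mm² Titan die at face value (both numbers come from the post above, not measurements, and the CPU/uncore budget below is just a placeholder):

# Rough die-area scaling sketch in Python; all inputs are the post's assumed figures.
titan_area_28nm_mm2 = 550.0          # Titan-class die size, figure from the post
density_gain_28nm_to_10nm = 5.0      # assumed flat 5x density gain, ignores SRAM scaling worse than logic

gpu_area_10nm_mm2 = titan_area_28nm_mm2 / density_gain_28nm_to_10nm
print(f"Titan-class GPU shrunk to 10nm: ~{gpu_area_10nm_mm2:.0f} mm^2")   # ~110 mm^2

cpu_and_uncore_mm2 = 70.0            # hypothetical quad-core CPU + uncore budget
print(f"Hypothetical APU total: ~{gpu_area_10nm_mm2 + cpu_and_uncore_mm2:.0f} mm^2")

Roughly 180mm² is indeed in the same ballpark as an Ivy Bridge/Haswell quad-core die, so the arithmetic hangs together, even if real density and SRAM scaling would be less generous.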
Better process = higher density -> more transistors available + lower price; better performance; lower power consumption.

The point is that underlying process advantages mean nothing without architectural information to back it up, along with real-world performance. You keep talking about theory and things that don't exist yet as if they're reality.
What I say are facts: there really will be Gen8 and Gen9 graphics for Broadwell and Skylake. Broadwell and, depending on how you interpret the rumor, Skylake will have more EUs/GT. There will be a GT3 and GT4 SKU with many more EUs than the 20 we have now.

I think? "I think" is not evidence of anything. Please show us gaming benchmarks of these non-existent parts you keep referring to as fact.
You keep talking about things that don't exist while I keep talking about things that have already happened.
Very likely.
Potentially.
I think.
Again, show us facts, not your opinion and/or musings.
This is what we do differently. You extrapolate the past; I use the information I have about the Gen8/9 SKUs and manufacturing advances.

If/when any of this happens, dGPUs will have advanced by an order of magnitude (again, based on past history, which has actually happened), so that a Titan will be completely obsolete, just like the 7100GS is now.
Why would Intel invest the same amount of money into their GPU tech when there isn't a payoff? iGPU is a free goodie and nothing you can monetize.
Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure blooded CPU company, and the GPU industry wasn’t interesting enough at the time. Intel’s GPU leadership needed another approach.
[...]
Pure economics and an unwillingness to invest in older fabs made the GPU a first class citizen in Intel silicon terms, but Intel management still didn’t have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.
But there is. CPU-wise, Intel hasn't brought any noticeable improvement since Sandy Bridge; GPU-wise, they have. Intel's most valuable consumer chips today are those with Iris Pro graphics...
I don't know Intel's motivations. A CPU with Iris Pro will obviously be more expensive (not a free goodie). Maybe they want consumers to buy their CPUs with good IGPs instead of GPUs. I think mobile is probably important for them. They can recycle the IGP in phones, tablets, laptops and desktops, so it has value over a good range of products, and for desktops they can simply add EUs (Gen7 is modular with slices).
Edit, from the Iris Pro review (the passage quoted above):
It came from Apple, but with Brian Krzanich as CEO, I think he sees more value in IGPs than Otellini: If it computes, it does it best with Intel.
Why would Intel invest the same amount of money into their GPU tech when there isn't a payoff? iGPU is a free goodie and nothing you can monetize.
Eh, I'd argue all day that architecture trumps process any day.
See: NVIDIA Tesla vs. Xeon Phi (Knights Corner)
As far as I know, DDR4 is basically going to bring about a peak of 100 GB/s. So to get into discrete territory, the iGPU is going to need either quite a large cache or a lot of stacked memory for the iGPU part of the chip. Either will do the job, but both are going to increase the cost of the CPU quite a lot.
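If it helps, here is the napkin math behind that ~100 GB/s figure, assuming a quad-channel DDR4-3200 platform (an assumption; dual-channel desktop parts land far lower), compared against a big GDDR5 card:

# Peak theoretical bandwidth = channels * transfer rate * bus width (64 bits per channel).
def peak_gb_s(channels, mt_per_s, bus_width_bits=64):
    return channels * mt_per_s * (bus_width_bits / 8) / 1000

print(peak_gb_s(4, 3200))                      # ~102 GB/s, quad-channel DDR4-3200
print(peak_gb_s(2, 2133))                      # ~34 GB/s, dual-channel DDR4-2133
print(peak_gb_s(1, 7000, bus_width_bits=384))  # ~336 GB/s, 384-bit 7 GT/s GDDR5 (780 Ti class)

So even the optimistic quad-channel case is roughly a third of what a high-end discrete card gets, which is why the large cache or stacked memory comes up.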
There really isn't much a modern video card is missing to be able to run solo by itself. They're basically self-contained computers in and of themselves, designed for highly parallel processing tasks, quite a far cry from the rasterizers of old and even the original GeForce (apparently the first GPU), and they are capable of more than just graphics processing. So in a way, Linus was right.

With the Phi, does this mean that the dCPU is also going away? It seems like the dGPU and dCPU are being merged. Actually, I wouldn't even call it that. People are just realizing GPUs are pretty damn good at functioning as the dedicated device of the entire system. I believe Xeon Phi can run without a CPU, and I think Nvidia's Maxwell will be the first GPU capable of doing so as well.
I think in the future we will still be able to custom build desktops. However, we won't be buying a CPU and GPU separately; we'll be buying all-in-one devices. The closest equivalent today would be buying a Celeron + crappy iGP, or a Xeon Phi to run everything on your system.
Memory bandwidth and bus widths will keep discrete GPUs around forever.

Only if you don't believe the "good enough for task X" stuff people claim.
Like how CPUs are fast enough for any daily task for the mainstream user, so building big 200-watt CPUs isn't needed; thus Intel went for perf/watt instead.
At some point, unless eye candy in games accelerates at an unheard-of pace, iGPUs will reach a "good enough" point.
A jump in memory technology, like the Hybrid Memory Cube, could really change things.
Once HMC takes off, prices will go down and it will become commonplace on motherboards (this technology will likely kill off GDDR/DDR4, etc.).
And we'll start to see motherboards with 15-20x as much memory bandwidth on them, enough to feed the iGPUs.
Once you have 150-200 GB/s of memory bandwidth on a motherboard for the CPU (rough numbers sketched at the end of this post), the iGPU will be fast enough for the "mainstream" user to not need a discrete GPU (if Intel & AMD make a beefy iGPU).
The only people who will keep using discrete GPUs will be guys who need beyond-normal graphics horsepower, and with the shrinking market, the prices will skyrocket.
Basically, discrete GPUs will end up being for guys who build servers like render farms, etc.
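Rough numbers behind that 150-200 GB/s target, against what a typical dual-channel DDR3 desktop delivers today (peak theoretical figures; real sustained bandwidth is lower, and the target values are the ones assumed above):

dual_channel_ddr3_1600_gb_s = 2 * 1600 * 8 / 1000    # ~25.6 GB/s on today's desktops
target_low_gb_s, target_high_gb_s = 150, 200          # figures from this post

print(f"Jump needed: {target_low_gb_s / dual_channel_ddr3_1600_gb_s:.1f}x "
      f"to {target_high_gb_s / dual_channel_ddr3_1600_gb_s:.1f}x")       # ~5.9x to ~7.8x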
Let's not forget there's a reason, other than just better performance/watt and price, for moving the iGPU onto the CPU.
With HSA, or an Intel-made technology like it, at some point you can see massive improvements in computational performance when doing GPGPU workloads. It also becomes a million times easier to code for (see the rough transfer-overhead sketch at the end of this post).
IT MAKES SENSE TO HAVE THE GPU ON THE SAME CHIP AND INTEGRATED INTO THE CPU.
There's massive amounts of performance for GPGPU workloads to be found there.
A discrete GPU won't be able to compete in terms of GPGPU workloads soon.
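A rough illustration of why the shared-memory/HSA angle matters for GPGPU: a discrete card has to copy its working set over PCIe in both directions, while an on-die GPU can (in the ideal case) work on the CPU's buffers in place. The 16 GB/s PCIe 3.0 x16 figure and the 1 GB working set below are illustrative assumptions, not measurements:

working_set_gb = 1.0     # hypothetical data set handed to the GPU
pcie_gb_s = 16.0         # ~PCIe 3.0 x16 peak, one direction

copy_overhead_s = 2 * working_set_gb / pcie_gb_s     # copy data in, copy results back
print(f"PCIe copy overhead per pass: ~{copy_overhead_s * 1000:.0f} ms")  # ~125 ms

# For a kernel that itself finishes in a few milliseconds, the copy dominates;
# with unified memory that overhead (and the explicit copy code) largely disappears.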
I see unexpected roadblocks to IGPs displacing discretes, and any IGP advance will also apply to add-in boards, negating the advantage.