> you're talking about an order of magnitude of difference

Power consumption and space consumption are going to make a big difference here. If I can get 10x as many GPUs for the same price, but also the same performance as a single one from the other side, there's no contest.
> you're talking about an order of magnitude of difference

Indeed. If the Tensor cores are really as awesome and applicable as NV claims they are. Obviously 10x MI25 should be more powerful in some cases than 1x Tesla V100. However, if in many important use-cases the Tensor cores are a 9x speedup, there's no way any big buyers will go MI25 in those applications, even if you can get 10 of them for the same price.
From the link:
and from https://devblogs.nvidia.com/parallelforall/inside-volta/?ncid=so-twi-vt-13918
> Figure 6: Tesla V100 Tensor Cores and CUDA 9 deliver up to 9x higher performance for GEMM operations. (Measured on pre-production Tesla V100 using pre-release CUDA 9 software.)
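The back-of-the-envelope economics being argued above can be sketched numerically. Everything here is an illustrative assumption: the 10:1 price ratio and the 9x figure come from the thread, not from any benchmark.

```python
# Toy comparison of the scenario in the thread: many cheaper GPUs vs.
# one GPU with a large speedup on specific workloads. All numbers are
# illustrative assumptions, not measurements.

def effective_throughput(per_gpu_units, count, speedup=1.0):
    """Relative throughput: per-GPU baseline times GPU count times speedup."""
    return per_gpu_units * count * speedup

# Assumption: 10x MI25 for the price of 1x V100, MI25 baseline = 1.0 unit.
mi25_cluster = effective_throughput(1.0, 10)        # 10.0 units
v100_plain   = effective_throughput(1.0, 1)         # 1.0 unit (non-GEMM work)
v100_tensor  = effective_throughput(1.0, 1, 9.0)    # 9.0 units (claimed 9x GEMM)

print(mi25_cluster, v100_plain, v100_tensor)
```

On these assumed numbers the 10-GPU cluster still barely edges out the single card even at 9x, though linear scaling across 10 GPUs is itself an optimistic assumption.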
Hmmm, Vega must be very, very cheap to even be relevant against V100 at machine-learning workloads. Tensor cores are the kiss of death.
> I wonder if the FP32 performance of GV100 is constrained by memory bandwidth? They only increased memory bandwidth by 25% from GP100.

They bolstered the cache for that.
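A rough way to sanity-check the bandwidth question is a FLOP-per-byte ratio from spec-sheet numbers. The figures below are approximate public peak numbers, assumed here for illustration only:

```python
# Rough roofline-style ratio: peak FLOPs per byte of memory bandwidth.
# Spec-sheet figures below are approximate public numbers, assumed for
# illustration (FP32 TFLOPS, memory bandwidth in GB/s).

def flops_per_byte(tflops, gbs):
    """Peak FLOPs the chip can issue per byte of DRAM traffic."""
    return (tflops * 1e12) / (gbs * 1e9)

gp100 = flops_per_byte(10.6, 720)   # ~14.7
gv100 = flops_per_byte(15.0, 900)   # ~16.7

# A higher ratio means more on-chip data reuse is needed to keep the
# ALUs fed, consistent with the bolstered cache mentioned above.
print(round(gp100, 1), round(gv100, 1))
```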
Watching the Nvidia presentation - is there a case to be made that it isn't just about comparing speeds?
Maybe Nvidia is overplaying its current position and capabilities, but they seemed to make a strong case for improving the groundwork of software and general support for developers using Volta, in particular the AI/deep-learning angle.
> C'mon guys, stop derailing this thread

Yeah, this hasn't been Vega/Navi talk for several pages now.
> Yeah, this hasn't been Vega/Navi talk for several pages now.

Probably because there's nothing to talk about. C'mon AMD!
You started off with a false statement that a Polaris CU is equal to a Tahiti CU, which was proven wrong by the computerbase comparison of Tahiti vs Polaris at the same SP count, same clocks, same bandwidth, same ROPs and same TMUs. Now that you were proven wrong, you have shifted the goalposts, saying it's 3% per year. You should at least not continue the argument and just accept that you were wrong. Vega will be the first major architectural change in half a decade for AMD and GCN. Let's wait and see what they have come up with before writing them off.
You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3% then go right ahead. But in the real world, any gpu company that puts out 3% annual gpu improvements should expect to be out of business by the 3rd year. Just imagine where Qualcomm would be if they tried to pull that crap...
> You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3% then go right ahead. But in the real world, any gpu company that puts out 3% annual gpu improvements should expect to be out of business by the 3rd year. Just imagine where Qualcomm would be if they tried to pull that crap...

The main drivers of GPU performance improvements are node shrinks and MOAR COARS. The effect of "IPC" improvements (whatever we put into that) pales in comparison.
> You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing.

I never said anything like that. You are just trying to divert attention from your false statement that a Polaris CU is the same as a Tahiti CU. Anyway, I don't think you are going to accept that you made a false statement, so I am not going to discuss this further.
> You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3% then go right ahead. But in the real world, any gpu company that puts out 3% annual gpu improvements should expect to be out of business by the 3rd year. Just imagine where Qualcomm would be if they tried to pull that crap...

Say this to Nvidia, whose GPUs, clock for clock and core for core, have improved 0% since 2014 and the release of the Maxwell architecture.
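As an aside, the 0%-vs-3% distinction being argued here does compound over time; a quick arithmetic sketch, with no GPU data assumed:

```python
# Compounding a fixed annual improvement rate. Pure arithmetic,
# no GPU figures assumed.

def cumulative_gain(rate, years):
    """Total improvement factor after compounding `rate` for `years`."""
    return (1 + rate) ** years

print(round(cumulative_gain(0.03, 3), 3))   # ~1.093 after 3 years
print(round(cumulative_gain(0.03, 5), 3))   # ~1.159 after 5 years
print(cumulative_gain(0.00, 5))             # 1.0, i.e. flat
```

So 3% per year is roughly +16% over five years: small next to a node shrink, but not literally zero.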
Sorry if this has already been posted, saw these rumors here - http://digiworthy.com/2017/05/14/amd-rx-vega-lineup-fastest-vega-nova/
> You do know that Google has its own tensor chip? Intel bought Nervana, which has something similar? And it can be combined with Xeon Phi? It's not like the V100 is first in this, or best. All we have seen is some PPT slides, most of which are best-of-the-best case. Real-life speedup will probably be around 1/3 of that 9x increase, because even with machine learning you aren't doing 100% tensor workflows.

I didn't know that, and I was just responding to the particular point made by that poster. I see that on average it's only about 40-50% over the P100.
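The "real-life speedup will probably be around 1/3" estimate quoted above is essentially Amdahl's law. A minimal sketch, with the accelerated fraction purely assumed:

```python
# Amdahl's law: overall speedup when only part of the work accelerates.
# The 70% Tensor-Core-friendly fraction below is purely an assumption.

def overall_speedup(accel_fraction, accel_factor):
    """Whole-job speedup when `accel_fraction` of it runs `accel_factor`x faster."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

# 70% of a training run at the claimed 9x:
print(round(overall_speedup(0.7, 9.0), 2))   # ~2.65x, roughly 1/3 of 9x
```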
If AMD can come out with a water-cooled card clocked to the hilt, why doesn't nVidia? Is there something preventing nVidia or its partners from selling cards clocked to their maximum?
You make it sound like nVidia not selling you the fastest possible card is some kind of noble thing to admire.