Vega/Navi Rumors (Updated)


SpaceBeer

Senior member
Apr 2, 2016
307
100
116
AI is much more than ANNs, and HPC is much more than AI. So there are still many users who just need regular HPC chips (accelerators), and if the GV100 is too expensive (and it is), they might choose Pascal, Vega, Xeon Phi or some FPGA instead.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Power consumption and space consumption are going to make a big difference here. If I can get 10x as many GPUs for the same price, but together they only match the performance of a single card from the other side, there's no contest.
you're talking about an order of magnitude of difference
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
you're talking about an order of magnitude of difference
Indeed, assuming the Tensor cores are really as awesome and applicable as NV claims they are. Obviously 10x MI25 should be more powerful in some cases than 1x Tesla V100. However, if the Tensor cores deliver a 9x speedup in many important use cases, there's no way any big buyer will go MI25 in those applications, even if you can get 10 of them for the same price.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,867
3,418
136
From the link:

and from https://devblogs.nvidia.com/parallelforall/inside-volta/?ncid=so-twi-vt-13918


Figure 6: Tesla V100 Tensor Cores and CUDA 9 deliver up to 9x higher performance for GEMM operations. (Measured on pre-production Tesla V100 using pre-release CUDA 9 software.)


Hmmm, Vega must be very, very cheap to even be relevant against V100 in machine learning workloads. Tensor cores are the kiss of death.

I want to point out a few things.
1. At FP32, GV100 and Vega look about equal.
2. The second graph is very ambiguous because GPUs don't like to (and often can't) change precision within an operation, so a 9x speedup over something that was dog slow before doesn't really mean much.

3. So 9 times faster, but only 1.4 times faster, you say...

Just think of it this way: if this 4x4x4 matrix multiply-accumulate were the be-all and end-all of AI, then GPUs wouldn't stand a chance against ASICs, but look where we are. If you look at even the high-level GV100 SM, it's obvious there is far more to the workloads they are targeting, given the way they have split FP64, FP32 and INT32 into separate units. That says to me there are a lot of different types of operations in flight at the same time.
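For reference, the operation being talked about is a fused D = A*B + C with FP16 inputs and an FP32 accumulator, which CUDA 9 exposes per warp through the WMMA intrinsics. A minimal single-tile sketch (the 16x16x16 fragment size is what the API works in; the trivial one-warp kernel is just for illustration, not how a real GEMM would be structured):

Code:
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of D = A*B + C on the Tensor Cores.
// A and B are FP16, the accumulator is FP32 (the mixed-precision mode
// the 9x GEMM figure refers to).
__global__ void wmma_tile(const half *a, const half *b, const float *c, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::load_matrix_sync(a_frag, a, 16);                        // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::load_matrix_sync(acc_frag, c, 16, wmma::mem_row_major);

    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);           // acc = A*B + acc
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}
// Launched with a single warp, e.g. wmma_tile<<<1, 32>>>(a, b, c, d);

Anything in a workload that doesn't reduce to that shape still has to run on the ordinary FP32/FP64/INT32 pipes.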

It's obvious that GV100 is a beast and it's going to outperform Vega, but let's not get carried away here; how about we look at all the data NV has provided in an analytical way first of all...
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
I wonder if the FP32 performance of GV100 is constrained by memory bandwidth? They only increased memory bandwidth by 25% from GP100.
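Roughly, taking the published peak figures (which I recall as ~10.6 TFLOPS FP32 and ~730 GB/s for GP100 vs ~15 TFLOPS and 900 GB/s for V100), the FLOPs-per-byte you need to stay compute-bound does creep up:

$$\frac{10.6\ \text{TFLOPS}}{730\ \text{GB/s}} \approx 14.5\ \text{FLOP/byte} \qquad \text{vs} \qquad \frac{15\ \text{TFLOPS}}{900\ \text{GB/s}} \approx 16.7\ \text{FLOP/byte}$$

so anything that was already bandwidth-bound on GP100 would only see roughly that 25% on V100.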
 

t0mt0m

Member
Apr 21, 2015
35
2
36
Watching the Nvidia presentation, is there a case to be made that it isn't just about comparing speeds?
Maybe Nvidia is overplaying its current position and capabilities, but they seemed strong on laying the groundwork for software and general developer support around Volta, in particular the AI / deep learning angle.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Watching the Nvidia presentation, is there a case to be made that it isn't just about comparing speeds?
Maybe Nvidia is overplaying its current position and capabilities, but they seemed strong on laying the groundwork for software and general developer support around Volta, in particular the AI / deep learning angle.

I think you posted in the wrong thread by mistake.

This is probably the one you were looking for:
https://forums.anandtech.com/threads/nvidia-volta-rumor-thread.2499125/page-2
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
Indeed, assuming the Tensor cores are really as awesome and applicable as NV claims they are. Obviously 10x MI25 should be more powerful in some cases than 1x Tesla V100. However, if the Tensor cores deliver a 9x speedup in many important use cases, there's no way any big buyer will go MI25 in those applications, even if you can get 10 of them for the same price.

You do know that Google has its own tensor chip? And that Intel bought Nervana, which has something similar and can be combined with Xeon Phi? It's not like the V100 is the first or the best at this. All we have seen are some PPT slides, most of them best-of-the-best-case. Real-life speedup will probably be around 1/3 of that 9x increase, because even with machine learning you aren't doing 100% tensor workflows.
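As a rough Amdahl's-law sanity check, if you assume (purely for illustration) that 80% of the training time sits in Tensor-Core-eligible GEMM and that part gets the full 9x:

$$S = \frac{1}{(1-f) + f/9}, \qquad f = 0.8 \;\Rightarrow\; S \approx 3.5\times$$

which lands right around a third of the headline 9x.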
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
You started off with a false statement that a Polaris CU is equal to a Tahiti CU, which was proven wrong by ComputerBase's comparison of Tahiti vs Polaris at the same SP count, same clocks, same bandwidth, same ROPs and same TMUs. Now that you were proven wrong, you have shifted the goalposts to saying it's 3% per year. You should at least not continue the argument and just accept that you were wrong. Vega will be the first major architectural change in half a decade for AMD and GCN. Let's wait and see what they have come up with before writing them off.

You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3%, then go right ahead. But in the real world, any GPU company that puts out 3% annual GPU improvements should expect to be out of business by the third year. Just imagine where Qualcomm would be if they tried to pull that crap...
 

exquisitechar

Senior member
Apr 18, 2017
666
904
136
You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3%, then go right ahead. But in the real world, any GPU company that puts out 3% annual GPU improvements should expect to be out of business by the third year. Just imagine where Qualcomm would be if they tried to pull that crap...

Except you're being called out for saying a Polaris CU is pretty much the same as a Tahiti CU, which is blatantly wrong, not over annual improvements or whatever. Just admit you were wrong and move on.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3%, then go right ahead. But in the real world, any GPU company that puts out 3% annual GPU improvements should expect to be out of business by the third year. Just imagine where Qualcomm would be if they tried to pull that crap...
The main drivers of GPU performance improvements are node shrinks and MOAR COARS. The effect of "IPC" improvements (whatever we put into that) pales in comparison.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing.
I never said anything like that. You are just trying to divert attention from your false statement that a Polaris CU is the same as a Tahiti CU. Anyway, I don't think you are going to accept that you made a false statement, so I am not going to discuss this further.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,667
136
You clearly lack a basic understanding that 3% annual improvement and 0% are the same thing. If you want to mentally masturbate over the difference between 0% and 3%, then go right ahead. But in the real world, any GPU company that puts out 3% annual GPU improvements should expect to be out of business by the third year. Just imagine where Qualcomm would be if they tried to pull that crap...
Say this to Nvidia, whose GPUs have improved 0% clock for clock, core for core since 2014 and the release of the Maxwell architecture.
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
You do know that Google has its own tensor chip? And that Intel bought Nervana, which has something similar and can be combined with Xeon Phi? It's not like the V100 is the first or the best at this. All we have seen are some PPT slides, most of them best-of-the-best-case. Real-life speedup will probably be around 1/3 of that 9x increase, because even with machine learning you aren't doing 100% tensor workflows.
I didn't know that, and I was just responding to the particular point made by that poster. I see that on average it's only about 40-50% over the P100.
 

guachi

Senior member
Nov 16, 2010
761
415
136
At least the names and prices of the cards aren't completely stupid, even if they are fake.

I suspect, though, that we will get numbers and not names for the cards.

If I can get a 1080 or 1080 Ti equivalent card for the same price nVidia sells their stuff for, I'll consider it a big win. FreeSync is just so much cheaper than G-Sync that even if the cards cost the same, I'm saving money overall. And if I'm spending that much on a card, I'd be a fool not to get a *sync monitor.
 

guachi

Senior member
Nov 16, 2010
761
415
136
If AMD can come out with a water-cooled card clocked to the hilt, why doesn't nVidia? Is there something preventing nVidia or its partners from selling cards clocked to their maximum?

You make it sound like nVidia not selling you the fastest possible card is some kind of noble thing to admire.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
Because they don't need to, I guess? It adds complexity and cost to a design that doesn't require it to beat the competition. Maxwell and Pascal are very heat-efficient chips.
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
If AMD can come out with a water-cooled card clocked to the hilt, why doesn't nVidia? Is there something preventing nVidia or its partners from selling cards clocked to their maximum?

You make it sound like nVidia not selling you the fastest possible card is some kind of noble thing to admire.

Nvidia is selling products at a particular performance/TDP envelope, whereas AMD is simply chasing Nvidia on the performance metric alone. Nvidia is, at the moment, selling a product with more features. This is why you've got a GTX 1080 that is 85% faster than AMD's RX 480 while not using that much more power. If you don't understand why Nvidia does this, let me give you a hint: the majority of these cards will be put in OEM systems, and OEMs are very interested in TDP because it eats into their bottom line, with their custom mobos possibly requiring higher power delivery specs or more expensive PSUs.





Also, people aren't dumb, and those looking for more performance can buy an Nvidia card regardless of the stock clock speeds, because they know there's more performance left under the hood.
 