GTX 780 rumors


tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
So what is everyone's MSRP guess if the GTX 780 is based on GK110? I am guessing $599. I think the first lower-binned part will be called the GTX 770 Ti and will be $499. GK114's top model will be the GTX 760 Ti at around $399. I think 4-6 months after the GTX 780 arrives we'll get a higher-performing (fewer fused-off SMXs, higher clocks) GK110 part, a la a GTX 785.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Where do you get this from?

The GTX 680 has more shaders than the 670 and is only 3% faster. The 2GB vs. 3GB of VRAM won't make much difference in anything but the biggest resolutions.

No process shrink either, so that's out of the question.

Basically, all you get is a few more of what you have now, which means about 14% better performance. No way you will get 30%.

He gets it from basing the GTX 780 on GK110 rather than GK104/114. His guesstimates are accurate, even conservative IMO.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So what is everyone's MSRP guess if the GTX 780 is based on GK110? I am guessing $599. I think the first lower-binned part will be called the GTX 770 Ti and will be $499. GK114's top model will be the GTX 760 Ti at around $399. I think 4-6 months after the GTX 780 arrives we'll get a higher-performing (fewer fused-off SMXs, higher clocks) GK110 part, a la a GTX 785.

Why not $499? It's a perfect price for nVidia because AMD can't really hurt them with their lineup.
The high prices of the GTX 260 and GTX 280 were the reason AMD was able to force nVidia to cut prices after four weeks.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
He gets it from basing the GTX 780 on GK110 rather than GK104/114. His guesstimates are accurate, even conservative IMO.

Lol. You must be joking.

So an HPC GPU designed for Tesla cards is going to beat Nvidia's top-of-the-line desktop gaming GPU? LOL.

The die area is 2x as big as GK104's and the power consumption is massive, while offering no tangible performance benefit in 3D gaming.

There is a very good reason why Nvidia stripped most of the HPC hardware out of its latest desktop GPUs. What makes you think putting it back in is going to improve things for 3D gaming?

How much do you think it's going to cost to make GPUs 2x as big as they are now? 2x as much, that's how much.

30% is dreaming.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,602
5
81
Where do you get this from?

The GTX 680 has more shaders than the 670 and is only 3% faster. The 2GB vs. 3GB of VRAM won't make much difference in anything but the biggest resolutions.

No process shrink either, so that's out of the question.

Basically, all you get is a few more of what you have now, which means about 14% better performance. No way you will get 30%.

K20X has close to 30% more computational power than the GTX 680 (14/8 SMX × 0.732 GHz/1.006 GHz ≈ 1.27). Accounting for GK104's turbo, let's say 25%. It also has 30% more bandwidth.

The 680 is up to 10% faster than the 670. If your game/setting is shader-bound it will be more; if it is bandwidth-bound it will be less. You cannot give a card more compute power without bandwidth and expect it to perform significantly better - and vice versa.

So if you increase compute power and bandwidth both by 25%, you will get 25% higher performance. That is not including higher clocks for the GTX 780 and possibly a full 15-SMX SKU. As I said - 30% is guaranteed for a GK110 SKU, make no mistake there.
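
To make the scaling arithmetic concrete, here is a minimal back-of-the-envelope sketch (Python) using the commonly cited Tesla K20X and GTX 680 figures. The eventual GeForce clocks and bandwidth are unknown, so treat it as an illustration of the reasoning above rather than a prediction.

```python
# Back-of-the-envelope scaling estimate for a GK110 GeForce vs. the GTX 680.
# Figures are the commonly cited specs; the result is a rough bound, not a benchmark.

k20x = {"smx": 14, "clock_ghz": 0.732, "bandwidth_gbs": 250.0}   # Tesla K20X
gtx680 = {"smx": 8, "clock_ghz": 1.006, "bandwidth_gbs": 192.3}  # GTX 680 (base clock)

compute_ratio = (k20x["smx"] * k20x["clock_ghz"]) / (gtx680["smx"] * gtx680["clock_ghz"])
bandwidth_ratio = k20x["bandwidth_gbs"] / gtx680["bandwidth_gbs"]

print(f"Compute:   +{(compute_ratio - 1) * 100:.0f}%")    # ~+27% (less once GK104 turbo is counted)
print(f"Bandwidth: +{(bandwidth_ratio - 1) * 100:.0f}%")  # ~+30%

# If gaming performance tracks the smaller of the two increases (the "balanced
# chip" argument above), roughly +25% is the conservative estimate.
print(f"Estimated gaming uplift: ~+{(min(compute_ratio, bandwidth_ratio) - 1) * 100:.0f}%")
```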

Lol. You must be joking.

So an HPC GPU designed for Tesla cards is going to beat Nvidia's top-of-the-line desktop gaming GPU? LOL.

The die area is 2x as big as GK104's and the power consumption is massive, while offering no tangible performance benefit in 3D gaming.

There is a very good reason why Nvidia stripped most of the HPC hardware out of its latest desktop GPUs. What makes you think putting it back in is going to improve things for 3D gaming?

How much do you think it's going to cost to make GPUs 2x as big as they are now? 2x as much, that's how much.

30% is dreaming.

No. Firstly, the GTX 690 packs two GK104s with 4GB of memory (roughly 7bn transistors) within a TDP of 300W. GK110 has about 7bn transistors too and will clock significantly lower, which means lower voltage and thus lower power consumption. Secondly, look at GF114 vs. GF110. The GTX 580 is 40% faster than the GTX 560 Ti with 50% higher bandwidth, 25% higher computational power and LESS fillrate. The comparison is even more favorable for GK110 on Kepler, because it has more of everything, especially bandwidth, which is very important. Do you think an increase in raw power means nothing? GK110 has everything it needs to be fast in graphics.
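
As a quick sanity check on the GF110/GF114 precedent cited above, a short sketch with the published GTX 580 and GTX 560 Ti specs reproduces the quoted ratios; the ~40% real-world gap comes from reviews and cannot be derived from specs alone.

```python
# Fermi precedent: GTX 580 (GF110) vs. GTX 560 Ti (GF114), published specs.

gtx580 = {"cores": 512, "shader_mhz": 1544, "tex_units": 64, "core_mhz": 772, "bw_gbs": 192.4}
gtx560ti = {"cores": 384, "shader_mhz": 1645, "tex_units": 64, "core_mhz": 822, "bw_gbs": 128.3}

compute = (gtx580["cores"] * gtx580["shader_mhz"]) / (gtx560ti["cores"] * gtx560ti["shader_mhz"])
bandwidth = gtx580["bw_gbs"] / gtx560ti["bw_gbs"]
tex_fill = (gtx580["tex_units"] * gtx580["core_mhz"]) / (gtx560ti["tex_units"] * gtx560ti["core_mhz"])

print(f"Compute:          +{(compute - 1) * 100:.0f}%")    # ~+25%
print(f"Bandwidth:        +{(bandwidth - 1) * 100:.0f}%")  # ~+50%
print(f"Texture fillrate: {(tex_fill - 1) * 100:+.0f}%")   # ~-6%, i.e. LESS
```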

Now the only point I would concede is the following:
If Nvidia were to build a GPU that mirrored GK110 but with all the hardware that is non-essential for 3D taken out, it would be faster than GK110 and have better perf/W. But that doesn't change the fact that GK110 itself will be a very capable gaming GPU.
 
Last edited:

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
K20X has close to 30% more computational power than the GTX 680 (14/8 SMX × 0.732 GHz/1.006 GHz ≈ 1.27). Accounting for GK104's turbo, let's say 25%. It also has 30% more bandwidth.

The 680 is up to 10% faster than the 670. If your game/setting is shader-bound it will be more; if it is bandwidth-bound it will be less. You cannot give a card more compute power without bandwidth and expect it to perform significantly better - and vice versa.

So if you increase compute power and bandwidth both by 25%, you will get 25% higher performance. That is not including higher clocks for the GTX 780 and possibly a full 15-SMX SKU. As I said - 30% is guaranteed for a GK110 SKU, make no mistake there.

Are you trying to baffle me with bullshit or something?

GK110 has most of its die area taken up with HPC. How will HPC help drive more frames in Crysis 3? They took that stuff OUT of these GPUs!

AMD already has 3GB and a 384-bit memory bus, and it doesn't make much difference in today's titles. A GTX 670 beats a 7950 every time.

At best you might get 20% better FPS, and it's more likely to be 10-15%. Both GPUs will be 28nm and just more of the same we have now. Higher boost and a few more of everything.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Fx1,
read nVidia's PDF papers about GK110. It has the same architecture as GK104. It's easy to predict where a GeForce product will land.
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,602
5
81
Are you trying to baffle me with bullshit or something?

GK110 has most of its die area taken up with HPC. How will HPC help drive more frames in Crysis 3? They took that stuff OUT of these GPUs!

AMD already has 3GB and a 384-bit memory bus, and it doesn't make much difference in today's titles. A GTX 670 beats a 7950 every time.

At best you might get 20% better FPS, and it's more likely to be 10-15%. Both GPUs will be 28nm and just more of the same we have now. Higher boost and a few more of everything.

1. Be polite.
2. "GK110 has most of its die area taken up with HPC" = complete fantasy. As sontin said, the structure of GK110 is known and it doesn't differ much from GK104's.
3. I'm sure you cannot even explain what exactly you mean by "HPC" and how it would impede gaming performance. The ALUs, TMUs and ROPs are there, the bandwidth is there - end of story.
4. AMD's memory bandwidth can make quite a difference if the game/setting is right. I've seen as much as a 30% performance boost for the GHz Edition vs. the 680. But as I said - you need to couple both computational power and bandwidth. An unbalanced chip is of no use.
5. AMD uses raw power less effectively than Nvidia. The 7970 GHz Edition has 33% more computational power and 50% more bandwidth, but is not 33+% faster on average. You cannot compare architectures the way you did.

Why not $499? It's a perfect price for nVidia because AMD can't really hurt them with their lineup.
The high prices of the GTX 260 and GTX 280 were the reason AMD was able to force nVidia to cut prices after four weeks.

You're dreaming, unfortunately. If AMD wanted $549 for its ~370mm² GPU, Nvidia is not going to take less for a GK110-based GTX 780. Don't forget that the 4870 was dirt cheap; the 7970 GHz isn't. The price level for AMD products is much higher than back then, so if Nvidia launches first, it will just do what AMD did with the 7970 and charge a hefty premium for the fastest single-GPU card.
 
Last edited:

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Fx1,
read nVidia's PDF papers about GK110. It has the same architecture as GK104. It's easy to predict where a GeForce product will land.

Yes, apart from the fact that GK110 is 2x as big as today's GPUs, on which Nvidia already has poor 28nm yields; make the GPU bigger and yields get even worse while the cost per GPU doubles. There is no commercial sense in using a less profitable GPU. The only way they would do this is if they have a few million Tesla GK110 reject cores sitting around with parts of the GPU turned off.

The GPU has 7bn transistors and is apparently a fabrication nightmare. The last time Nvidia tried this, on 40nm, they started out with 2% yields and the best they ever got was about 20%.

There is a ton of information about this around the web. A GTX 780 will be a faulty core, because they won't sell a perfect GK110 to anyone other than Tesla customers.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Look in the past to see the future.

I guess that helps more than a discussion about GF110 and why it arrived on the consumer market earlier than on the HPC market.
 

ninocam

Junior Member
Nov 6, 2012
2
0
0
Stop bickering about details; just be excited that we might possibly be getting some new graphics cards.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Look in the past to see the future.

I guess that helps more than a discussion about GF110 and why it arrived on the consumer market earlier than on the HPC market.

It's not on the consumer market. It's on the HPC market as those Tesla cards.

It's a monster GPU that uses 350W of power. The heat will be insane and so will the associated costs.

That's OK when you're selling to enterprise. Not so good when it's inside your gaming rig.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,602
5
81
Yes, apart from the fact that GK110 is 2x as big as today's GPUs, on which Nvidia already has poor 28nm yields; make the GPU bigger and yields get even worse while the cost per GPU doubles. There is no commercial sense in using a less profitable GPU. The only way they would do this is if they have a few million Tesla GK110 reject cores sitting around with parts of the GPU turned off.

The GPU has 7bn transistors and is apparently a fabrication nightmare. The last time Nvidia tried this, on 40nm, they started out with 2% yields and the best they ever got was about 20%.

There is a ton of information about this around the web. A GTX 780 will be a faulty core, because they won't sell a perfect GK110 to anyone other than Tesla customers.

You don't know the yields for Nvidia's GPUs, 40nm or 28nm. I guess you are basing your claims on Charlie Demerjian from SemiAccurate, who is a known Nvidia-phobe. There is simply no proof for what you are saying.

As far as my sources go, a GK110-based GTX 780 was not profitable to launch in September 2012. It is 4 months later now and the launch might not take place for another 2-3 months. Surely improvements can be made in that timeframe.

Then:
There is no 15-SMX Tesla to begin with. The professional market uses far fewer cards than the gaming market. While demand for the K20(X) seems to be high, it will be fulfilled sooner or later. Wafer output also increased substantially in Q3 2012 and I would expect it to increase further towards Q2 2013.

Finally:
Nvidia needs a strong GPU to compete with the HD 8970. They cannot do that with a GK114 or GK204 or any other GK104 derivative, since those have too little bandwidth. If they were to build a GK104 derivative with a 384-bit memory bus, more units, etc., they would basically have a GK110.

To develop and tape out a GPU costs a great deal of money. GK110 exists; no further development costs are needed. An additional "HPC-less" GK110 would incur additional, unnecessary costs, AND, given the limited volume of professional solutions, GK110's R&D costs would probably never be recouped. So you see, it seems to be a bad idea not to use GK110.

It's not on the consumer market. It's on the HPC market as those Tesla cards.

It's a monster GPU that uses 350W of power. The heat will be insane and so will the associated costs.

That's OK when you're selling to enterprise. Not so good when it's inside your gaming rig.

Please stop inventing stuff, stop lying. GF110 came in the GTX 580 and the GTX 570 - have you already forgotten?
Nvidia has never released a single-GPU card with a 350W TDP in its entire history, and it will not start now. Your claims are completely uninformed and very poorly thought out.
 
Last edited:

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
You don't know the yields for Nvidia's GPUs, 40nm or 28nm. I guess you are basing your claims on Charlie Demerjian from SemiAccurate, who is a known Nvidia-phobe. There is simply no proof for what you are saying.

As far as my sources go, a GK110-based GTX 780 was not profitable to launch in September 2012. It is 4 months later now and the launch might not take place for another 2-3 months. Surely improvements can be made in that timeframe.

Then:
There is no 15-SMX Tesla to begin with. The professional market uses far fewer cards than the gaming market. While demand for the K20(X) seems to be high, it will be fulfilled sooner or later. Wafer output also increased substantially in Q3 2012 and I would expect it to increase further towards Q2 2013.

Finally:
Nvidia needs a strong GPU to compete with the HD 8970. They cannot do that with a GK114 or GK204 or any other GK104 derivative, since those have too little bandwidth. If they were to build a GK104 derivative with a 384-bit memory bus, more units, etc., they would basically have a GK110.

To develop and tape out a GPU costs a great deal of money. GK110 exists; no further development costs are needed. An additional "HPC-less" GK110 would incur additional, unnecessary costs, AND, given the limited volume of professional solutions, GK110's R&D costs would probably never be recouped. So you see, it seems to be a bad idea not to use GK110.

There was the same stuff all over the net about the GTX 685, which never happened. It was supposed to be based on GK110.

The only reason they might use that GPU is that they have a bunch of broken Tesla cores they can't sell. That isn't exactly a good thing for the consumer either. Neither is a 350W GPU.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Please stop inventing stuff, stop lying. GF110 came in the GTX 580 and the GTX 570 - have you already forgotten?
Nvidia has never released a single-GPU card with a 350W TDP in its entire history, and it will not start now. Your claims are completely uninformed and very poorly thought out.

Who is talking about GF110? The 580? Who the heck mentioned those cards?
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,602
5
81
There was the same stuff all over the net about the GTX 685, which never happened. It was supposed to be based on GK110.

The only reason they might use that GPU is that they have a bunch of broken Tesla cores they can't sell. That isn't exactly a good thing for the consumer either. Neither is a 350W GPU.

So we take rumors as reliable sources of information now?
I have explained why they probably will use GK110; read it again or ask if you didn't understand it.
There is no 350W GPU - the K20 has a TDP of 225W and the K20X 235W, a far cry from your imagined 350W. Stop lying please. :thumbsdown:

Who is talking about GF110? The 580? Who the heck mentioned those cards?

Sontin was talking about GF110 and you answered. GT200, GT200b, GF100 and GF110 were all released in the professional market AND in the consumer market as GeForce. That is an undeniable fact, and it makes it likely that the same will happen with GK110. The only difference this generation is that Nvidia postponed its large GPU to a point in time when it could be made with little trouble.

Edit:
I'm done, I will not humor you further. Do yourself a favor and prepare better before posting here.
 
Last edited:

chimaxi83

Diamond Member
May 18, 2003
5,456
61
101
Look in the past to see the future.

I guess that helps more than a discussion about GF110 and why it arrived on the consumer market earlier than on the HPC market.
It's not on the consumer market. It's on the HPC market as those Tesla cards.

It's a monster GPU that uses 350W of power. The heat will be insane and so will the associated costs.

That's OK when you're selling to enterprise. Not so good when it's inside your gaming rig.
Who is talking about GF110? The 580? Who the heck mentioned those cards?

Reread the posts above ^^ You should probably Google things before you spout nonsense. Then again, why would you start now?

Anyway, I'm hoping for beast GPUs from both companies. I've never cared about power usage, and I'm not going to be a Balla and pretend to start now lol.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
So we take rumors as reliable sources of information now?
I have explained why they probably will use GK110; read it again or ask if you didn't understand it.
There is no 350W GPU - the K20 has a TDP of 225W and the K20X 235W, a far cry from your imagined 350W. Stop lying please. :thumbsdown:



Sontin was talking about GF110 and you answered. GT200, GT200b, GF100 and GF110 were all released in the professional market AND in the consumer market as GeForce. That is an undeniable fact, and it makes it likely that the same will happen with GK110. The only difference this generation is that Nvidia postponed its large GPU to a point in time when it could be made with little trouble.

You have no reliable information either, so please don't call me out for looking at internet rumours.

The fact is that a GK110 Tesla has an MSRP of $3,200, which is what makes huge GPUs possible.

So if Nvidia uses broken GK110s as desktop GPUs, then they will not actually be GK110-specced GPUs any more. Those chips will be lesser GPUs of unknown specification. So what exactly are you getting?

I still think you're more likely to see a different GPU, slightly bigger and faster than the current GK104.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,109
136
Well, if GK114 has a 384-bit bus, I'll save my money for that. I want a larger monitor and the larger bus will help at higher pixel counts. Also, if GK114 is 5B transistors, then I doubt NV would put out a consumer GK110. They'll get better yields and profits off the smaller dice.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Well, if GK114 has a 384-bit bus, I'll save my money for that. I want a larger monitor and the larger bus will help at higher pixel counts. Also, if GK114 is 5B transistors, then I doubt NV would put out a consumer GK110. They'll get better yields and profits off the smaller dice.

Makes far more sense, doesn't it?

Unless NV has a mountain of useless GK110 GPUs sitting around somewhere.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,602
5
81
Except that any chip will have to have the appropriate cache system to use that bandwidth effectively. Just a wider bus doesn't cut it. And whoops, we're closer to GK110 already.

Does anyone in their right mind think that Nvidia develops and tapes out GK110 for $200m (my guesstimate - anyone got info on how much that roughly costs?) and then sells maybe 100k units only? That just doesn't make sense. As far as I know, the professional market is NOT enough to cover the R&D cost of a large GPU on its own. That is why Nvidia released ALL their big GPUs in the consumer market too - to break even and make a profit.
Nvidia has 7,000 employees. If only 1,000 worked on GK110 for an average salary of $60k/year for 3 years, that is 180 million dollars right there. And Kepler's development began 7 years ago (of course not at full force - according to Nvidia, development got really manpower-intensive 3 years ago). That doesn't include utilities, internal and external expenses, or process development costs at TSMC. And I'm probably too conservative with the number of employees working on Kepler and their salary.
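
For what it's worth, that back-of-the-envelope estimate can be reproduced in a few lines; the headcount, salary, and the two unit volumes below are assumptions for illustration only, not disclosed figures.

```python
# Rough reconstruction of the R&D cost estimate above. All inputs are the
# post's own assumptions (headcount, salary, duration), not disclosed figures.

engineers = 1000          # assumed share of Nvidia's ~7,000 employees on GK110/Kepler
salary_per_year = 60_000  # assumed average salary in USD
years = 3                 # assumed period of manpower-intensive development

labor_cost = engineers * salary_per_year * years
print(f"Labor alone: ${labor_cost / 1e6:.0f}M")  # $180M, before tools, masks, tape-outs, TSMC costs

# Spread over a hypothetical professional-only volume vs. a hypothetical consumer volume:
for units in (100_000, 2_000_000):
    print(f"R&D per unit at {units:>9,} units sold: ${labor_cost / units:,.0f}")
```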

Look at Intel:
They invest a shitload of money into their R&D and fabs and therefore need volume sales to make a buck. If Intel only sold Xeons, they couldn't afford their state-of-the-art fabs and their expensive research. Volume is the key word here, just as with GK110.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I give Fx1 an award for making the most uninformed, blatantly wrong posts in a row.

Back on topic, if GK110 doesn't come to the GeForce lineup then quite simply there has to be a Kepler chip between GK104 and GK110 that has not been released yet (GK112?). But I seriously doubt that.
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Except that any chip will have to have the appropriate cache system to use that bandwidth effectively. Just a wider bus doesn't cut it. And whoops, we're closer to GK110 already.

Does anyone in their right mind think that Nvidia develops and tapes out GK110 for $200m (my guesstimate - anyone got info on how much that roughly costs?) and then sells maybe 100k units only? That just doesn't make sense. As far as I know, the professional market is NOT enough to cover the R&D cost of a large GPU on its own. That is why Nvidia released ALL their big GPUs in the consumer market too - to break even and make a profit.
Nvidia has 7,000 employees. If only 1,000 worked on GK110 for an average salary of $60k/year for 3 years, that is 180 million dollars right there. And Kepler's development began 7 years ago (of course not at full force - according to Nvidia, development got really manpower-intensive 3 years ago). That doesn't include utilities, internal and external expenses, or process development costs at TSMC.

Look at Intel:
They invest a shitload of money into their R&D and fabs and therefore need volume sales to make a buck. If Intel only sold Xeons, they couldn't afford their state-of-the-art fabs and their expensive research. Volume is the key word here, just as with GK110.

I don't know why, but you seem to want to back up your views with irrelevant calculations plucked from obscurity.

The only calculation you need to worry about is price per GPU. A 7bn-transistor GPU costs BIG. That's why one Tesla is $3,200 EACH!

So why would they put one in a $500 desktop card, when they can put a GPU half its size, and probably a fraction of the price, on the card?

The only thing we know for sure is that yields on 7bn-transistor GPUs are going to be terrible.
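
To put the die-cost argument in perspective, here is a minimal sketch of how cost per good die scales with area under a simple Poisson yield model. The wafer price and defect density are purely illustrative assumptions; the die areas are the commonly reported approximate figures for GK104 and GK110.

```python
import math

# Illustrative per-die cost scaling for a big die vs. GK104. Wafer price and
# defect density are assumptions, not known 28nm figures.

WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 5000        # assumed 28nm wafer price
DEFECT_DENSITY = 0.25 / 100  # assumed defects per mm^2 (0.25 per cm^2)

def dies_per_wafer(area_mm2):
    # Standard approximation: usable wafer area minus edge losses.
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2))

def cost_per_good_die(area_mm2):
    yield_rate = math.exp(-DEFECT_DENSITY * area_mm2)  # simple Poisson yield model
    good_dies = dies_per_wafer(area_mm2) * yield_rate
    return WAFER_COST_USD / good_dies

for name, area in (("GK104 (~294 mm^2)", 294), ("GK110 (~550 mm^2)", 550)):
    print(f"{name}: ~${cost_per_good_die(area):.0f} per good die")
```

Under these illustrative inputs the bigger die costs roughly three to four times as much per good die; the remaining distance to a $3,200 Tesla sticker price is product segmentation and margin rather than raw silicon cost.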
 