NVIDIA Kepler GPU Speculation thread


Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Why does Nvidia need to get Kepler out ASAP?
AMD's latest card is 15-20% faster than the GTX 580 and costs a lot more. DX11.1 is irrelevant since there won't be any DX11.1 games for at least a year.

Bragging rights and enthusiast mindshare. Also, people get tired of waiting, and buyers who generally lean toward Nvidia but don't necessarily dislike AMD will eventually buy the shiny new card.

The 7970 is anywhere from 5-40% faster than the GTX 580 1.5GB, yet costs ~10% more, and is cheaper than the GTX 580 3GB. So how does the 7970 "cost a lot more"?


shompa said:
The most interesting thing about AMD's graphics cards is that they have finally broken the Dual Link DVI limit of ~8 Gbit/s of bandwidth per monitor. We can finally go higher than 1920x1080 @ 120Hz / 2560x1600 @ 60Hz. The 2560x1600 monitor was released in August 2004!

This high-speed HDMI is something Nvidia needs to deliver when higher-resolution displays start manufacturing in Q2. Nvidia will release mid-market Kepler for this market in time.

Never even thought about that. Good point.
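
For anyone curious, here's a rough back-of-the-envelope check of that DL-DVI ceiling. This is just a sketch: it assumes 24bpp and a ~20% blanking overhead, and real timings vary by monitor.

```python
# Back-of-the-envelope check of the Dual Link DVI ceiling mentioned above.
# DL-DVI carries 2 links x 165 MHz pixel clock x 24 bpp = 7.92 Gbit/s.

DL_DVI_GBPS = 2 * 165e6 * 24 / 1e9  # 7.92 Gbit/s
BLANKING = 1.20                      # assumed ~20% timing overhead

def needed_gbps(width, height, hz, bpp=24):
    """Approximate link rate needed for an uncompressed signal."""
    return width * height * hz * bpp * BLANKING / 1e9

for mode in [(1920, 1080, 120), (2560, 1600, 60), (2560, 1600, 120)]:
    need = needed_gbps(*mode)
    print(f"{mode[0]}x{mode[1]}@{mode[2]}Hz needs ~{need:.1f} Gbit/s "
          f"-> {'fits' if need <= DL_DVI_GBPS else 'exceeds'} DL-DVI")
```

1080p120 and 1600p60 both squeeze under 7.92 Gbit/s; anything past that needs a fatter link, which is exactly the quoted point.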
 

MrTeal

Diamond Member
Dec 7, 2003
3,584
1,743
136
In their contract, I'm sure there is language pertaining to the percentage of good dies. Also, TSMC is not the only game in town, and poor performance on their part would encourage clients to go elsewhere, or perhaps even invest in their own fabs.

I don't think it's within NVIDIA's chequebook to build a 28nm or 20nm fab. TSMC's Fab 15 cost them US$13B. Even if they built a smaller one, it would cost more than NVIDIA is worth. And they have no experience running fabs; imagine their teething pains if even TSMC is having this much trouble.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
What, the memory bandwidth? That would only be true if GF110 performance was being gated exclusively by memory. Even then, 384-bit @ 4GHz gives 192GB/sec. If NVIDIA could get the memory frequency up to 5.5GHz like AMD can, a 512-bit bus would give 352GB/sec. It's not a doubling, but it's pretty close.

The GTX 480 has ~10% less bandwidth than a GTX 580, which is ~15% faster.
The same way, a 7970 has ~30% more bandwidth and is ~25% faster.

So yes, they would need double the bandwidth. Keep in mind 5.5GHz on a 512-bit bus is insane.
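
For reference, all the figures being thrown around fall out of one formula: bandwidth = bus width / 8 x effective data rate. A quick sketch (the clocks below are the commonly cited ones; treat them as approximations):

```python
# Quick check of the bandwidth figures thrown around above.
# Bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s).

def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gtps

cards = {
    "GTX 480 (384-bit @ 3.7 GT/s)":    (384, 3.7),
    "GTX 580 (384-bit @ 4.0 GT/s)":    (384, 4.0),
    "HD 7970 (384-bit @ 5.5 GT/s)":    (384, 5.5),
    "Hypothetical 512-bit @ 5.5 GT/s": (512, 5.5),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
```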
 

MrTeal

Diamond Member
Dec 7, 2003
3,584
1,743
136
The GTX 480 has ~10% less bandwidth than a GTX 580, which is ~15% faster.
The same way, a 7970 has ~30% more bandwidth and is ~25% faster.

So yes, they would need double the bandwidth. Keep in mind 5.5GHz on a 512-bit bus is insane.

If you leave the core clock on your card the same and OC the memory 20%, do you see a 20% improvement in frame rates of synthetic benches?

5.5GHz on a 512-bit bus would be impressive, but HardwareCanucks managed to get AMD's 384-bit bus up to 6.2GHz, so while 5.5GHz might not happen, I don't think it's outside the realm of possibility, and 5GHz could be doable.
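
What those overclocks work out to, using the same bus-width-times-data-rate arithmetic as above (figures approximate):

```python
# Peak bandwidth implied by the overclocks discussed above.
def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(384, 6.2))  # HardwareCanucks' 7970 OC: ~297.6 GB/s
print(bandwidth_gbs(512, 5.0))  # 5 GT/s on a 512-bit bus: 320 GB/s
print(bandwidth_gbs(512, 5.5))  # the "insane" case: 352 GB/s
```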
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
I don't think it's within NVIDIA's chequebook to build a 28nm or 20nm fab. TSMC's Fab 15 cost them US$13B. Even if they built a smaller one, it would cost more than NVIDIA is worth. And they have no experience running fabs; imagine their teething pains if even TSMC is having this much trouble.

I don't mean to imply that NVidia would build their own fabs, but if yields were consistently poor, an entity looking to get into the business could try to secure an unhappy client (Nvidia) to gain support from potential investors.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Bragging rights and enthusiast mindshare. Also, people get tired of waiting, and buyers who generally lean toward Nvidia but don't necessarily dislike AMD will eventually buy the shiny new card.

The 7970 is anywhere from 5-40% faster than the GTX 580 1.5GB, yet costs ~10% more, and is cheaper than the GTX 580 3GB. So how does the 7970 "cost a lot more"?




Never even thought about that. Good point.

Nobody cares about HDMI as far as computer displays in the professional world. DisplayPort will be the de facto computer >> monitor connection in the future, for several reasons:

- More bandwidth
- Multiple monitors on a single connection
- More colors and higher resolution
- Wireless display and packet data transmission
- 120Hz (not supported by HDMI)
- Backwards compatibility

HDMI cables come in multiple variations, and one suitable for 1080p will not do 1600p; 99% of HDMI ports do not support greater than 1080p. It should be said again: color reproduction is more accurate with DisplayPort - this is why professionals who do AutoCAD / photo rendering use IPS panels with DP connections. HDMI just doesn't cut it.

Basically, DisplayPort is going to be the de facto standard for computer > monitor connections going forward, while HDMI is for those using big screen TVs and Xbox 360s. So if you want 3D Vision or Eyefinity, DisplayPort is the way to do it. Long story short, HDMI will be there for legacy purposes only, but it is definitely NOT the preferred method for computer > monitor connections.
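
A rough link-budget comparison puts some numbers on the bandwidth point. This is a sketch using spec-level effective rates (after TMDS/8b-10b coding) with an assumed ~20% blanking overhead; whether a given port or cable actually implements these rates is a separate question, which is really the argument above:

```python
# Rough link budgets (effective data rates after 8b/10b coding;
# spec-level numbers, not what any given port/cable actually supports).

LINKS_GBPS = {
    "HDMI 165MHz (1080p-era)":  3.96,   # 165 MHz x 24 bit
    "HDMI 1.4 (340MHz)":        8.16,   # 340 MHz x 24 bit
    "DisplayPort 1.2 (HBR2)":  17.28,   # 4 lanes x 5.4 Gbit/s x 0.8
}

def needed_gbps(w, h, hz, bpp=24, blanking=1.20):  # blanking assumed ~20%
    return w * h * hz * bpp * blanking / 1e9

need = needed_gbps(2560, 1600, 60)
for link, cap in LINKS_GBPS.items():
    print(f"{link}: {cap:5.2f} Gbit/s -> 1600p60 ({need:.1f} Gbit/s) "
          f"{'OK' if cap >= need else 'no'}")
```

A 1080p-era HDMI link genuinely can't carry 1600p60, while DP 1.2 has headroom for 1600p at high refresh or multiple daisy-chained panels.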
 

dakU7

Senior member
Sep 15, 2010
515
0
76
Why does Nvidia need to get Kepler out ASAP?
AMD's latest card is 15-20% faster than the GTX 580 and costs a lot more. DX11.1 is irrelevant since there won't be any DX11.1 games for at least a year.

Umm, the lowest GTX580 3GB on the egg right now is $589.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Nobody cares about HDMI as far as computer displays in the professional world. DisplayPort will be the de facto computer >> monitor connection in the future, for several reasons:

- More bandwidth
- Multiple monitors on a single connection
- More colors and higher resolution
- Wireless display and packet data transmission
- 120Hz (not supported by HDMI)
- Backwards compatibility

HDMI cables come in multiple variations, and one suitable for 1080p will not do 1600p; 99% of HDMI ports do not support greater than 1080p. It should be said again: color reproduction is more accurate with DisplayPort - this is why professionals who do AutoCAD / photo rendering use IPS panels with DP connections. HDMI just doesn't cut it.

Basically, DisplayPort is going to be the de facto standard for computer > monitor connections going forward, while HDMI is for those using big screen TVs and Xbox 360s. So if you want 3D Vision or Eyefinity, DisplayPort is the way to do it. Long story short, HDMI will be there for legacy purposes only, but it is definitely NOT the preferred method for computer > monitor connections.

I service an architectural firm - they do large projects like stadiums - and they do not use DP.

Many professional graphic artists and photographers use HDMI. I know some.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Assuming 1375MHz on the RAM, around 350 GB/s.



^ This.

It's not possible for them to double the memory bandwidth with just a 512-bit bus, so they'll lose out on performance due to memory if they made a chip that big.

Then there is the matter of TSMC's ~1.6x (optimistic) scaling gain.
That figure is from a TSMC spokesperson, giving an "optimistic" outlook on gains.

520mm^2 on 40nm -----> 520/1.6 = 325mm^2 on 28nm.

Since a 520mm^2 chip has 512 cores, a 1024-core chip would likely be twice as big.

Okay.... 325mm^2 doubled up = 650mm^2.

Then you add in new features that might take up more space, etc..... yeah...
A 1024-core Fermi for the 680 is definitely out of the picture.

Your math is not at all applicable. Compare the size of GT200b to GF100/GF110, and compare core counts: cores more than doubled, while die size went up by only 9.5%. Your math says die size should have more than doubled, then decreased by the node shrink (55nm to 40nm is 27%), ending up with a 750mm^2 die for Fermi. Also compare GF104/GF114 to GT200b: cores went up by 60%, and die size was drastically reduced.

TL;DR: Doubling the number of cores does not double the size of the chip on the same process node.
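
To make the disagreement concrete, here's the quoted math replayed against what actually shipped. Die sizes below are the commonly cited approximations, not official figures, and the post's own percentages imply slightly different numbers:

```python
# Replaying the quoted post's arithmetic against what actually shipped.
# Die sizes are commonly cited approximations, not official figures.

gt200b_mm2, gt200b_cores = 470, 240   # GT200b, 55nm
gf110_mm2, gf110_cores   = 520, 512   # GF110, 40nm

# "Cores more than doubled, die size went up by only ~10%"
print(f"core ratio: {gf110_cores / gt200b_cores:.2f}x, "
      f"die growth: {(gf110_mm2 / gt200b_mm2 - 1) * 100:.1f}%")

# Naive model: scale area by core count, then apply the quoted 27% shrink.
naive = gt200b_mm2 * (gf110_cores / gt200b_cores) * (1 - 0.27)
print(f"naive prediction: ~{naive:.0f} mm^2 vs actual {gf110_mm2} mm^2")
# The naive model lands around 730-750 mm^2, far above the real die,
# because cores are only one slice of total die area.
```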
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
GK100 = GTX590 performance
GK104 = ~7970 performance

Seriously, if the highest-end Kepler part only has GTX590 performance, it will be a fail in my book. The GTX590 was entirely constrained by its heat AND power draw, thus it was never able to be the fully realized product it could have been.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Seriously, if the highest-end Kepler part only has GTX590 performance, it will be a fail in my book.
Sometimes the 7970 is at the heels of the HD 6990 and GTX 590, so if Kepler is only as fast as the GTX 590 it won't be a blow-away victory for nVidia. I hope this is not the case...
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I service an architectural firm - they do large projects like stadiums - and they do not use DP.

Many professional graphic artists and photographers use HDMI. I know some.

Professional graphic artists using 1080p? What? And before you mention that HDMI 1.4 supports greater than 1080p: 99% of hardware doesn't. No graphics cards support 2560x1600 over HDMI that I'm aware of - my own MSI Lightning 580s only support 1080p via HDMI. Further, cables are not all interchangeable, so most cables don't support greater than 1080p. I find it incredibly hard to believe that any professional would use HDMI at 1080p for work, unless they seriously use a 23" crappy TN panel? I know of approximately zero professionals who *don't* use DP. The advantages of DP are not in dispute.

2560x1600 is the de facto standard for professional work. Aside from the fact that most graphics hardware doesn't support 1600p via HDMI -- for that resolution and triple monitor, DisplayPort is the preferred connectivity option. Hell, most IPS panels don't even *have* HDMI connections - there are HP ZR2740W IPS panels at my workplace and they *do not have* HDMI -- because their intended audience (professionals) doesn't use HDMI.

HDMI is for Xbox 360 kiddies and big screens. Big boys use DP (or DVI-D for legacy monitors). So long story short, I applaud the 2 DisplayPorts on the 7970. If you really care that much you can get a $5 converter anyway.
 

wlee15

Senior member
Jan 7, 2009
313
31
91
Your math is not at all applicable. Compare the size of GT200b to GF100/GF110, and compare core counts: cores more than doubled, while die size went up by only 9.5%. Your math says die size should have more than doubled, then decreased by the node shrink (55nm to 40nm is 27%), ending up with a 750mm^2 die for Fermi. Also compare GF104/GF114 to GT200b: cores went up by 60%, and die size was drastically reduced.

TL;DR: Doubling the number of cores does not double the size of the chip on the same process node.

GF100 also dropped 32 texture units and moved from a gigantic 512-bit GDDR3 bus to a smaller 384-bit GDDR5 bus.
 

Joseph F

Diamond Member
Jul 12, 2010
3,523
2
0
Professional graphic artists using 1080p? What? And before you mention that HDMI 1.4 supports greater than 1080p: 99% of hardware doesn't. No graphics cards support 2560x1600 over HDMI that I'm aware of - my own MSI Lightning 580s only support 1080p via HDMI. Further, cables are not all interchangeable, so most cables don't support greater than 1080p. I find it incredibly hard to believe that any professional would use HDMI at 1080p for work, unless they seriously use a 23" crappy TN panel? I know of approximately zero professionals who *don't* use DP. The advantages of DP are not in dispute.

Chances are, professional cards like the Quadro and FirePro series support >1080p over HDMI.
(This is just speculation. I don't know much about pro cards.)
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Chances are, professional cards like the Quadro and FirePro series support >1080p over HDMI.
The Quadro cards I've seen have 2-3 DisplayPorts, a DVI port, and no HDMI port. Some have 4 DisplayPorts and nothing else. Never seen one with an HDMI port.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Really? That quick?

I think you misread my post.

"His point was that a lot of people on our forum have already written-off Kepler because it's "late", expect AMD to refresh HD7970 shortly without problems (like Q2), while expecting 28nm Kepler to be delayed FAR into 2012, with performance increase. My view is not in-line with that sentiment."

If the above road map is correct, it looks like we have some time to wait for some shiny high-end NVidia parts. For myself, I won't be upgrading till NVidia shows their cards or a 7970 refresh comes out. I will be recommending the 7970 as an upgrade for those running 5870s who want more performance now (and have the cash).

Ya, it looks like unless there is more concrete information, Kepler's HD7970 competitor might still be a ways off, perhaps 2-3 quarters away. Too hard to say since NV is completely tight-lipped and sources are all over the place, some saying Q1-Q2, others Q3-Q4.

The GTX 480 has ~10% less bandwidth than a GTX 580, which is ~15% faster. The same way, a 7970 has ~30% more bandwidth and is ~25% faster.

To double the performance, they will have to double the bandwidth;
it's impossible even using a 512-bit bus.

That's a poor example since it assumes the same is true for other videocards.

HD6970 is nearly 75-100% faster than HD4870/HD4890 in demanding games and it barely has 35-40% more bandwidth. We can also see the same situation for NV. GTX580 only has 35% more memory bandwidth than a GTX280 (192 vs. 142), but is 70-100% faster, depending on the game.

You can't just assume that to double the performance, you have to double the memory bandwidth.

It depends on the specific architecture and in fact specific bottlenecks on an individual card basis. For example, HD4870 series had more than enough bandwidth (in fact too much). Your argument also assumes that GTX580 uses all of its 192 GB/sec bandwidth efficiently. It could also be the case that GTX580 has way too much bandwidth given its specs, much like the HD4890 and HD5830 were.
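
The ratios in those counter-examples are easy to verify (peak-bandwidth figures below are approximate):

```python
# Bandwidth growth vs performance growth for the generational jumps
# cited above (approximate peak-bandwidth figures).

pairs = [
    # (old card, old GB/s, new card, new GB/s, rough perf gain)
    ("HD 4890", 124.8, "HD 6970", 176.0, "75-100%"),
    ("GTX 280", 142.0, "GTX 580", 192.4, "70-100%"),
]
for old, old_bw, new, new_bw, perf in pairs:
    print(f"{new} vs {old}: +{(new_bw / old_bw - 1) * 100:.0f}% bandwidth, "
          f"~{perf} more performance")
# Performance can outgrow bandwidth when the older part wasn't
# bandwidth-limited in the first place.
```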

Bragging rights and enthusiast mindshare. Also, people get tired of waiting, and buyers who generally lean toward Nvidia but don't necessarily dislike AMD will eventually buy the shiny new card.

Has a lot less value than you think. If you look at the consumer market segment for graphics cards, you'll realize this quickly.



The market share for graphics cards above $199 is only 14%. That means the real battle takes place at $199 and below. For now the HD7970 is only important to enthusiasts. Obviously, for our forum that's a very large fraction of buyers in % terms, but across the entire market, the share for a $550 graphics card is likely 2-3% of the overall graphics segment.

Does AMD have any desktop HD7000 cards under $200 launched yet? The $149-199 market segment, occupied by the GTX560 and GTX560 Ti, is an astounding 51% of desktop discrete graphics market share. Last time I checked, the GTX560 and GTX560 Ti are very competitive vs. the HD6870 and HD6950.

There is no need to launch ASAP. Graphics cards are sold all over the world. Just because you announce the launch of a product, you still have to consider the following factors:

1) Ability to deliver large volumes of that product to the market quickly (are there going to be supply constraints in the channel, manufacturing problems, etc.);

2) Ability to distribute inventory across the world in a timely manner (you might have plenty of volume in the US and Canada but little to no volume in Europe and Asia for months);

3) The global economy and market timing of launching in Q1 (the least important quarter of the year for technology). With Europe struggling between little to no growth and a full-fledged recession, having a lead in the $450+ market segment isn't really a game changer; and,

4) Consumers are going to be all spent out after the holidays. So expect the first 1-3 months of the year to be pretty slow. That's why it's FAR more critical to launch leading graphics cards in the $0-199 price segment, not in the $300+ segment.

We have already seen this exact scenario with HD5xxx/HD6xxx vs. Fermi as the most recent example. Despite AMD launching 6 months ahead (and on top of that, launching prior to the holiday season -- that's a substantial advantage!!), it still did almost nothing to dent NV's desktop market share, which quickly recovered to almost 60%.

The launch of the HD7970 has done little to change anything other than for people who buy $500+ graphics cards, especially since you can't even buy it yet and we have no idea how large the volumes will be. For enthusiasts, this card is great. However, NV has competitive cards at every price level up to $400. When AMD launches a full-scale top-to-bottom HD7000 line-up across the $0-$550 price range with better cards than NV and in ample supply, only then would NV need to start sweating.

You realize NV has record mobile design wins with Kepler too compared to Fermi? Even without any silicon, they are already doing better than they were doing with Fermi prior to launch.

It's the same situation every generation. People think that whoever launches the fastest card first has some "magical advantage". What matters is the product composition of your next generation line-up and the ability to deliver a top-to-bottom next generation lineup quickly. $550+ cards might matter on the forums, but not in the real world. Generations are fought over a 1.5-2 year period now, not over 2-3 months. Also, you aren't even taking into account the possibility of a price cut. Drop GTX580 to $380 and suddenly a $550 HD7970 doesn't look so great.

This post isn't meant to tarnish the launch of HD7000 series in any way. If NV launches first and AMD doom and gloom posts appeared, I would have responded in the exact same way. Just replace NV with AMD letters in my post.

I'll even provide a counter-example in favour of AMD. NV has had the fastest single GPU from March 2010 (GTX480) to now (GTX580). Did that stop AMD from having an excellent line-up in the form of the HD5000 and HD6000 series? No, it did not. If you think that just because the HD7970 smokes the GTX580 people are suddenly going to stop buying NV cards at $299 and below, you are strongly mistaken.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Professional graphic artists using 1080p? What? And before you mention that HDMI 1.4 supports greater than 1080p: 99% of hardware doesn't. No graphics cards support 2560x1600 over HDMI that I'm aware of - my own MSI Lightning 580s only support 1080p via HDMI. Further, cables are not all interchangeable, so most cables don't support greater than 1080p. I find it incredibly hard to believe that any professional would use HDMI at 1080p for work, unless they seriously use a 23" crappy TN panel? I know of approximately zero professionals who *don't* use DP. The advantages of DP are not in dispute.

2560x1600 is the de facto standard for professional work. Aside from the fact that most graphics hardware doesn't support 1600p via HDMI -- for that resolution and triple monitor, DisplayPort is the preferred connectivity option. Hell, most IPS panels don't even *have* HDMI connections - there are HP ZR2740W IPS panels at my workplace and they *do not have* HDMI -- because their intended audience (professionals) doesn't use HDMI.

HDMI is for Xbox 360 kiddies and big screens. Big boys use DP (or DVI-D for legacy monitors). So long story short, I applaud the 2 DisplayPorts on the 7970. If you really care that much you can get a $5 converter anyway.

Sorry, I meant to say DVI rather than HDMI. My original comment was primarily directed at the elimination of the DL-DVI limit.

But so much of this is perspective and opinion. I would not consider the HP ZR2740W a professional monitor. LED backlighting limits the dynamic range at the extreme white/black, and the ZR2740W uses WLED, not the best choice for color fidelity, even after calibration.

I wouldn't turn down a ZR2740W, though.
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
Seriously, if the highest-end Kepler part only has GTX590 performance, it will be a fail in my book. The GTX590 was entirely constrained by its heat AND power draw, thus it was never able to be the fully realized product it could have been.

Why would it be a failure? Nvidia's flagship single GPU has for several generations matched the previous generation's dual-GPU card.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Has a lot less value than you think. If you look at the consumer market segment for graphics cards, you'll realize this quickly.

<nice chart>

Mindshare can take years to build or erode, and that's why AMD's "glory days" of the HD5xx0 didn't gain as much traction as they might have. We've seen this with CPUs as well. Even when the original Athlon was released and stood toe-to-toe with the P3, even when the Athlon XP and Athlon 64 were all-around better choices than the Pentium 4, enthusiasts I knew personally nearly always pushed Intel, and would fabricate reasons to do so. I don't socialize as much with them now; the industry is far too boring. :\

I'm thinking, though, that with attention spans ever-shrinking, mindshare is more fluid; that's why I think Nvidia needs Kepler sooner rather than later. But maybe I'm wrong.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
The GTX590 was entirely constrained by its heat AND power draw, thus it was never able to be the fully realized product it could have been.

What... heat and power draw are a result of the design and knowledge of the fab process. Whenever it bumps its head against the ceiling, that is the fully realized product.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Hmm, not sure if anyone knows for certain. I know I've read some articles that thought they had a per-good-die arrangement, at least at some point.

Speculation: GTX 760 comes out first around March/April and competes with the 7870/7950, depending on what clocks they hit. Hope this leads to a mid-range price war... otherwise we are looking at only the same 20-40% performance gain per $ we see with the 7970. =S

TSMC does not care. They charge per wafer, not per good die.

But I do wonder how fast 28nm will ramp up and if they will have problems like they did with 40nm. In Q4 less than 3% of TSMC's revenue was from 28nm.

We probably have Nvidia/Apple and AMD fighting for 28nm wafers. If TSMC scored the A6 contract, they need to deliver an insane number of wafers to Apple. (Apple needs about 20 million A6 per month once the iPad/iPhone uses it, and if Apple releases the rumored ARM laptop, at least 25 million/month.)
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Why would it be a failure? Nvidia's flagship single GPU has for several generations matched the previous generation's dual-GPU card.

It would barely be faster than the HD7970, would be a larger chip, and wouldn't be out until some time after the HD7970. It would hardly be exciting, unless it was priced ridiculously low.
 