My take on the next 45 days


wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
The next-generation consoles and games will affect us majorly. Look at BF4, for example: there are graphs popping up suggesting your idea of a GTX 670 over a 7950 is a bad idea. At 1200p with no AA it's using 2GB of VRAM, and the game suggests 3GB.
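As a rough sanity check on those VRAM numbers, here's a back-of-the-envelope sketch (buffer counts are illustrative assumptions, not BF4's actual render pipeline): the full-screen render targets at 1200p only account for tens of MB, so it's texture and asset streaming that pushes usage past 2GB.

```python
# Rough framebuffer math at 1920x1200 (illustrative buffer counts,
# not BF4's actual render pipeline).
WIDTH, HEIGHT = 1920, 1200
BYTES_PER_PIXEL = 4  # RGBA8

def buffers_mib(count: int) -> float:
    """Memory used by `count` full-screen RGBA8 buffers, in MiB."""
    return count * WIDTH * HEIGHT * BYTES_PER_PIXEL / 2**20

# Say ~8 full-screen targets: front/back buffers, depth, G-buffers.
print(f"{buffers_mib(8):.0f} MiB")  # ~70 MiB; the gigabytes are assets
```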

I doubt Titan performance with 6GB of VRAM is gonna be pennies; perhaps it will sit in the $349 range two years from now, but don't quote me on that.

I think it'll be well below $350 in two years; I'd say 1.5 years may even be possible. The Titan range *(big) might* be available in October of this year for $600 (the 9970), and is already basically available at $680ish with some 780s.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I think it'll be well below $350 in two years; I'd say 1.5 years may even be possible. The Titan range *(big) might* be available in October of this year for $600 (the 9970), and is already basically available at $680ish with some 780s.

Hopefully AMD puts the smack down and brings us a 9970 that just puts the Titan into submission at $600 or less.

Is the 9970 officially confirmed for an October release, or at least a 2013 release, or is it still pure speculation?
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Hopefully AMD puts the smack down and brings us a 9970 that just puts the Titan into submission at $600 or less.

Is the 9970 officially confirmed for an October release, or at least a 2013 release, or is it still pure speculation?

I'm pretty sure *EVERY* bit of info out there right now is speculation. I'm so bored with the current hardware that I've been putting together 2 rigs in my spare time to do a fun comparison.

Basically the point of my comparison is to see who had the best forward-thinking first-gen DX10 hardware. I'm going to do a few older games and a few newer ones. Thinking about these: COD4, BioShock, Far Cry 2, Metro 2033 and Last Light, Batman AA and AC, and Battlefield 3. Thinking about a few others.

The two rigs consist of similar Core 2 Duos, DDR2-800, and either a pair of HD 2900 XTs or a pair of 8800 GTXs (wish I had another Ultra :wub: ). I may take it one step further and compare a 9800 GX2 and a 3870 X2 afterwards.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
It may just be speculation, but I am inclined to believe it at this point. I don't remember the most trustworthy source I've seen, but I don't believe anything has been specifically promised by AMD. They did claim they won't release anything above the 7970 this year; hence Titan = $1k, since that let Nvidia know they won't compete.
 

vanillatech

Member
Aug 10, 2013
30
0
0
The two rigs consist of similar Core 2 Duos, DDR2-800, and either a pair of HD 2900 XTs or a pair of 8800 GTXs

Should be great fun! I wonder if the advancement of drivers through the years has treated the mighty 2900 XTs well or not. Specs to drool over, driver performance to make your toes curl. Let us know how it works out!!
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The two rigs consist of similar Core 2 Duos, DDR2-800, and either a pair of HD 2900 XTs or a pair of 8800 GTXs (wish I had another Ultra :wub: ). I may take it one step further and compare a 9800 GX2 and a 3870 X2 afterwards.

In the former case, you are wasting time, since the 2900 XT competed with the 8800 GTS 320MB/640MB, not the 8800 GTX. In the latter case, the 4850 competed against the 9800 GTX, where NV later had to release the 9800 GTX+/GTS 250 to compete against the 4850 after the 9800 GTX lost.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
In the former case, you are wasting time, since the 2900 XT competed with the 8800 GTS 320MB/640MB, not the 8800 GTX. In the latter case, the 4850 competed against the 9800 GTX, where NV later had to release the 9800 GTX+/GTS 250 to compete against the 4850 after the 9800 GTX lost.

If it were a matter of wasting my time or not, I wouldn't be taking the time to fiddle with dinosaurs at all, nor would I play video games at all. Just saying...
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Tahiti is less efficient than Pitcairn at perf/watt.

There are a few reasons for this. The first is what Balla alludes to, i.e. clocks (on the GE, the last 50MHz of the turbo are probably just not worth it), and the second is why the original Tahiti was so bad: it was the first 28nm GPU, and its clocks, voltages, etc. were set on pre-production wafers.

AMD hasn't released a high end card in well over a year, and the difference between 28nm now and then is quite marked. Time is the #1 factor in why GK110 is more efficient than GK104, so imagine what AMD will do with even more time.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
There are a few reasons for this. The first is what Balla alludes to, i.e. clocks (on the GE, the last 50MHz of the turbo are probably just not worth it), and the second is why the original Tahiti was so bad: it was the first 28nm GPU, and its clocks, voltages, etc. were set on pre-production wafers.

AMD hasn't released a high end card in well over a year, and the difference between 28nm now and then is quite marked. Time is the #1 factor in why GK110 is more efficient than GK104, so imagine what AMD will do with even more time.
Hope you're right on that.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
That's because GK104 was clocked higher than it should have been :|

I don't necessarily buy into that notion, and even if it was clocked 50MHz too high, that would not have made any significant difference in the efficiency of the chip. There is still plenty of headroom above the stock 1050MHz, as many reach 1250MHz or higher WITHOUT voltage modification and still beat Tahiti in performance per watt.
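For what it's worth, the usual dynamic-power rule of thumb backs this up: P scales roughly with f·V², so a 50MHz bump at constant voltage moves power by only a few percent. A minimal sketch, with an assumed (not measured) GK104 baseline voltage:

```python
# Rule-of-thumb dynamic power: P ~ f * V^2 (capacitance constant).
# The 1.175 V baseline is an assumed figure, not a measured one.
def rel_power(f_mhz: float, volts: float,
              f0_mhz: float = 1050.0, v0: float = 1.175) -> float:
    """Power relative to the assumed 1050 MHz / 1.175 V baseline."""
    return (f_mhz / f0_mhz) * (volts / v0) ** 2

print(f"{rel_power(1100, 1.175):.2f}x")  # +50 MHz at same voltage: ~1.05x
```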
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Time is the #1 factor in why GK110 is more efficient than GK104, so imagine what AMD will do with even more time.

You do realize that GK110 came out 5 months after GK104, right? You think 5 months makes the biggest difference in why one chip can be noticeably more efficient than another chip on the same node?
 

Elfear

Diamond Member
May 30, 2004
7,116
695
126
You do realize that GK110 came out 5 months after GK104, right? You think 5 months makes the biggest difference in why one chip can be noticeably more efficient than another chip on the same node?

GK110 was released as Tesla cards in November of 2012, almost 8 months after GK104 launched in March. Consumer GK110 cards didn't launch until February of 2013, almost a year after GK104.

Are the consumer cards and the Tesla cards using the same revision silicon? I'd say 8-11 months is definitely a factor for a more mature 28nm process.
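The launch dates are public record, so the gap is easy to check; a quick sketch:

```python
from datetime import date

# Launch dates (public record): GTX 680 in March 2012, Tesla K20
# in November 2012, consumer Titan in February 2013.
def months_between(a: date, b: date) -> int:
    return (b.year - a.year) * 12 + (b.month - a.month)

gk104 = date(2012, 3, 22)
print(months_between(gk104, date(2012, 11, 12)))  # 8 (Tesla K20)
print(months_between(gk104, date(2013, 2, 19)))   # 11 (consumer Titan)
```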
 
Feb 19, 2009
10,457
10
76
GK110 is more efficient than GK104 in perf/watt. Tahiti is less efficient than Pitcairn at perf/watt. This makes my point all the more valid. Thank you.

I was responding to your quote, since you worry so much about potential power use from the bigger die of Hawaii... bigger does not necessarily mean worse efficiency, as we see from GK104 to GK110.

Also from your quote: "I just don't see how AMD will keep power consumption down with a 430-440mm^2 die size unless they have significantly overhauled GCN"

GCN 2 is not GCN 1, obviously. We already see with GCN 1.1 that they have some nice efficiency gains.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I don't necessarily buy into that notion, and even if it was clocked 50MHz too high, that would not have made any significant difference in the efficiency of the chip. There is still plenty of headroom above the stock 1050MHz, as many reach 1250MHz or higher WITHOUT voltage modification and still beat Tahiti in performance per watt.


It isn't a precisely balanced card and it's clocked too high; look at the 670 reference card.

GK104 is ROP- and bus-limited while having more shader power than it can use.

The 7970 is the same way against the 7950; all the extra shader power accounts for less than a 5% increase in clock-for-clock performance.
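The shader-count math behind that claim, as a quick sketch (the unit counts are the cards' published specs; the <5% figure is from this post):

```python
# 7970 vs 7950: ~14% more shaders (2048 vs 1792, published specs)
# for <5% more clock-for-clock performance, per the figure above.
shaders_7970, shaders_7950 = 2048, 1792
unit_gain = shaders_7970 / shaders_7950 - 1  # ~0.143
perf_gain = 0.05                             # upper bound from the post
print(f"scaling efficiency <= {perf_gain / unit_gain:.0%}")  # ~35%
```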



Titan vs 780 is a different story: Titan actually has better perf/W than the stock reference 780; however, aftermarket 780s beat Titan in that aspect.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
It isn't a precisely balanced card and it's clocked too high; look at the 670 reference card.

GK104 is ROP- and bus-limited while having more shader power than it can use.

GK104 has exceptional perf/sq mm and very good perf/watt. The outstanding perf/sq mm comes from removing double precision and not wasting die space on a 384-bit memory bus. GK104's 256-bit memory controller is very good, as it can extract the best possible perf at the lowest possible die size and power.

The 7970 is the same way against the 7950; all the extra shader power accounts for less than a 5% increase in clock-for-clock performance.
HD 7900 cards are over-provisioned in terms of memory bandwidth but poorly balanced in terms of front-end resources and ROPs.

Titan vs 780 is a different story: Titan actually has better perf/W than the stock reference 780; however, aftermarket 780s beat Titan in that aspect.
GK110 has very good perf/watt, but its perf/sq mm is woeful. GK110 is 90% larger than GK104 in die size, but Titan is 35-40% faster than the GTX 770 on a clock-for-clock basis. That's the cost of a compute-heavy design with a Tesla/Quadro focus.
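Putting numbers on that, a quick check using the post's own figures:

```python
# GK110 perf per unit area relative to GK104, from the post's own
# figures: ~1.9x the die area for 1.35-1.40x the performance.
area_ratio = 1.90
for perf_ratio in (1.35, 1.40):
    print(f"{perf_ratio / area_ratio:.2f}x GK104's perf/sq mm")
# -> 0.71x and 0.74x, i.e. roughly 25-30% worse perf per mm^2
```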
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Doesn't GK110, specifically Titan, completely destroy the idea that DP units, either active or fused off (780), cause a loss of perf/W?

If not, what exactly would? GK110, specifically aftermarket 780s, are at the top of the pecking order when it comes to perf/W this generation.

As far as perf/sq mm goes, who cares? Really... It's a useless metric.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Doesn't GK110, specifically Titan, completely destroy the idea that DP units, either active or fused off (780), cause a loss of perf/W?

Clocks are reduced when you flip the DP switch on Titan, and I've never seen power usage measured for it. Have you? I don't know what the perf/W is at the same clocks with DP on and off on Titan. From your post I assume you have that measurement? I don't know if anyone has tested it, probably because DP is worthless outside of very specialized tasks, and those users don't OC because absolute stability is more important.

Are the "DP Units" fused off on the 780, or is it just software crippled?

If not, what exactly would? GK110, specifically aftermarket 780s, are at the top of the pecking order when it comes to perf/W this generation.

As far as perf/sq mm goes, who cares? Really... It's a useless metric.

Perf/mm² directly affects the cost of the chip. It's very important.
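To make that concrete, here's a hedged sketch of why die area drives cost, using the standard dies-per-wafer estimate and a simple Poisson yield model. The defect density is an illustrative assumption, not a real TSMC 28nm figure; the die sizes are GK104's and GK110's published ~294 and ~561 mm².

```python
import math

def dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300.0) -> float:
    """Standard gross dies-per-wafer approximation for a round wafer."""
    r = wafer_d_mm / 2
    return (math.pi * r**2 / die_mm2
            - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def poisson_yield(die_mm2: float, d0_per_mm2: float = 0.002) -> float:
    """Toy Poisson yield model; the defect density is illustrative."""
    return math.exp(-d0_per_mm2 * die_mm2)

for die_mm2 in (294, 561):  # ~GK104 vs ~GK110 die area, mm^2
    good = dies_per_wafer(die_mm2) * poisson_yield(die_mm2)
    print(f"{die_mm2} mm^2: ~{good:.0f} good dies per wafer")
# The ~1.9x larger die yields roughly 3-4x fewer good dies,
# so each one costs several times more to make.
```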
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Doesn't GK110, specifically Titan, completely destroy the idea that DP units, either active or fused off (780), cause a loss of perf/W?

If not, what exactly would? GK110, specifically aftermarket 780s, are at the top of the pecking order when it comes to perf/W this generation.

As far as perf/sq mm goes, who cares? Really... It's a useless metric.

The consumer cares, especially when he has to pay 60-70% more money for a GTX 780 compared to a GTX 770 for 20% better stock performance and 30% better max OC perf. So yeah, perf/sq mm is very relevant. And that's where the role of competition is so important. Hawaii should bring competition to GK110 and better prices all round.
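A quick perf-per-dollar check on those numbers (using the era's rough $399/$649 launch MSRPs for the 770/780, for illustration):

```python
# Perf per dollar of the GTX 780 vs the GTX 770, using the post's
# 20% (stock) and 30% (max OC) deltas and rough launch MSRPs.
price_770, price_780 = 399, 649  # approximate USD launch prices
for perf_gain in (0.20, 0.30):
    rel = (1 + perf_gain) / (price_780 / price_770)
    print(f"{perf_gain:.0%} faster -> {rel:.2f}x the perf per dollar")
# -> ~0.74x and ~0.80x: less perf/$ either way
```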
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
It isn't a precisely balanced card and it's clocked too high; look at the 670 reference card.

GK104 is ROP- and bus-limited while having more shader power than it can use.

That is more of a design flaw than being "clocked too high." I think Nvidia did not anticipate GK104 improving as much as it did over GF114, OR they were OK with the ROP and bus bottleneck given the die size and performance expectations they had in making the chip. But you bring up a good point with regard to how Nvidia will handle GM104. I don't know if a 256-bit bus will be capable of serving a chip in this market segment, especially if Nvidia does not adopt GDDR6 with gen-1 Maxwell cards. GM104 could end up with a 320-bit bus, 40 ROPs, and 7.0GHz GDDR5 VRAM.
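The bandwidth arithmetic behind that speculation, as a quick sketch (the GM104 config is the post's guess, not a confirmed spec):

```python
# Peak memory bandwidth = (bus width in bytes) * effective data rate.
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 6.0))  # 192.0 GB/s -- GTX 680-class
print(bandwidth_gb_s(320, 7.0))  # 280.0 GB/s -- the speculated GM104
```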
 