Wouldn't your "table" mean this:
HD7970/7970Ghz (GCN 1.0) vs. GTX680/770 (GK104)
R9 290/290X (GCN 1.1) vs. GTX780/780Ti (GK110)
R9 285 (GCN 1.2) vs GTX 750/750Ti (GM107)
??? vs. GTX970/980/Titan (GM204/GM200)
Then it would look more like this.
GM107 was developed primarily as a midrange notebook part/low-end desktop chip. They are not even close to competitors: the 285 performs significantly better, costs significantly more (~$180 vs. $120), and uses significantly more power. Completely different market segments.
The only people shifting goalposts are the ones comparing the GTX 750 Ti to the 285 (LOL?) and the GTX 960 to the 280X/290/290X (LOL?), and they know who they are.
The R9 285 has a slight advantage in the overall FPS average, but it falls behind the GeForce GTX 960 in our time-sensitive 99th-percentile metric. We've seen the reasons why the R9 285 falls behind in the preceding pages. I'd say the 99th-percentile result is a better indicator of overall performance, and the GTX 960 leads slightly in that case. That makes the GTX 960 a good card to buy, and for a lot of folks, that will be all they need to know.
In virtually every case, you'll pay more for the Radeon than for the competing GeForce in other ways—whether it be on your electric bill, in terms of PSU requirements, or in the amount of heat and noise produced by your PC. The difference between the R9 285 and the GeForce GTX 960 on this front is pretty dramatic.
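The average-FPS vs. 99th-percentile distinction quoted above can be sketched as follows. The frame-time samples are invented for illustration (a hypothetical card with occasional 50 ms spikes vs. a steady one), not real benchmark data:

```python
import statistics

def summarize(frame_times_ms):
    """Return (average FPS, 99th-percentile frame time in ms).

    Uses the simple nearest-rank percentile definition.
    """
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    p99 = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
    return avg_fps, p99

# Hypothetical card A: faster on average, but with occasional long frames.
card_a = [14.0] * 98 + [50.0] * 2
# Hypothetical card B: slightly slower on average, but perfectly consistent.
card_b = [15.0] * 100

fps_a, p99_a = summarize(card_a)   # higher average FPS, but 50 ms p99
fps_b, p99_b = summarize(card_b)   # lower average FPS, but 15 ms p99
```

Card A "wins" the FPS average while card B delivers smoother frames, which is why a time-sensitive percentile metric can rank the cards differently than raw averages.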
Are you trying to argue that the 285 is competing with GM200 and GM204? If you are, wow. If you're not, stop moving goalposts.
The only people shifting goalposts are the ones comparing the GTX 750 Ti to the 285 (LOL?) and the GTX 960 to the 280X/290/290X (LOL?), and they know who they are.
But the reality is, AMD didn't show up to the fight, and NVIDIA won by default. I think it was AMD's own people who stressed how important showing up to the fight is:
http://www.anandtech.com/show/2937
The market has big bulges and you had better deliver at those bulges. Having product ready for the Q4 holiday season, or lining up with major DirectX or Windows releases, these are important bulges in the market. OEM notebook design cycles are also very important to align your products with. You have to deliver at these bulges. ATI's Eric Demers (now the CTO of AMD's graphics group) put it best: if you don't show up to the fight, by default, you lose. ATI was going to stop not showing up to the fight.
Yep, and project Skybridge
http://www.extremetech.com/computin...new-arm-and-x86-chips-that-are-pin-compatible
Dude, I haven't bought an AMD card for my own use in years, but if 390X had been in the market at the same time as the Titan X, and the 390X was 25% faster and $200 cheaper, then I'd have gone with the R9 390X.
Maxwell embarrasses AMD in performance per watt, and AMD had to cut the 285 from $249 to $199 to match the GTX 960's price, losing money because their chip is much bigger and requires more expensive power components and a more expensive PCB.
So we have two R9 300 cards which show signs of much better efficiency than current cards. That must come from some place, and who knows what it is. Whatever it is, it must be good, since TweakTown said yesterday that the R9 300 cards will have something secret that no rumor has mentioned yet, which doesn't involve HBM but is something spec-wise.
The comparison was meant to align the respective architectures (GCN 1.2 vs. GM107) from a generational/technological standpoint, not the cards specifically. Hence Maxwell being vs. ???, due to AMD being late to reply to it. The point wasn't what was competing with what in the market at a given price.
TweakTown said yesterday that the R9 300 cards will have something secret that no rumor has mentioned yet, which doesn't involve HBM but is something spec-wise.
Rumors have been all over the place with just about everything speculated.
Forgive me if this has already been posted:
http://www.redgamingtech.com/r9-390x-fiji-xt-comes-with-8gb-hbm-17-tflop-r9-395-coming/
[...]and yet again was 2.5-9 months late with Kepler top-to-bottom roll-out,
The sad part about this is that with less market share, AMD has lower earnings, less ability to invest in future GPU designs, and thus less chance of making impressive products. Ultimately that leads to higher prices and weaker competition once NV is in full control. The end result is consumers paying $550 for a mid-range GTX 960 that NV can now call a 980.
That's because nobody knows anything about Fiji, and everyone is copying every single rumor on the internet.
What's interesting is how consistent the rumours have been for six months now.
nVidia sold every Kepler card. They weren't late to the game; 28nm supply was just limited.
The GTX 960 launched for less than the R9 285. The GTX 970 cost less than the 290. nVidia shaped up the price/performance ratio this time.
You praise AMD for price/performance and yet ignore the fact that nVidia forced them to drop their prices. Without Maxwell, AMD would still be selling a 290 for $399 and an R9 285 for $249.
Stop rewriting history.
Wasn't there an article stating that 20nm was a bust because, even though it's a smaller node, it can't reach the same clocks as 28nm and also runs hotter when trying to do so? I can't recall where I read it, but it was a credible source; I'll have to use some google-fu later and find it.
It's all about transistor/design cost. Oracle has no problem making a 3.6GHz+ 700mm² 20nm die on the same process.
Sub-28nm requires double patterning, for example. And I can guarantee you that nVidia and AMD will avoid going below 28nm as long as they possibly can.