Curious that it's still DX11.0-only like its predecessors...
More like... annoying
Not entirely sure about thinking about performance/watt in percentage terms.
Certainly if I was building something I'd start with a given performance target - say 1080/30/sane settings - and then see how low I could rationally get the power draw to do that.
(Or cost, of course, depending on what I felt like optimising for. I suspect it'll be power for me next time.)
The 750s seem to do that quite comfortably, although what native next-gen console games will need could be a question. I'd only consider more performance relevant if I were getting something tangible for it, like 60FPS, much higher settings or (especially) higher resolution.
One other minority sport, I guess, is those tiny pico PSUs which do ~160W total. You could easily get a 750, a low-power Haswell chip etc. into that budget, which could make for some very small machines indeed.
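As a rough sanity check on that ~160W budget, here's a minimal sketch; all the wattages are ballpark assumptions (750 Ti at its 60W TDP, a Haswell 'T' part at 35W), not measurements:

```python
# Rough power budget for a ~160W pico-PSU build.
# All wattages are ballpark assumptions, not measurements.

PSU_LIMIT_W = 160  # typical pico-PSU continuous rating

parts = {
    "GTX 750 Ti (board power)": 60,  # 60W TDP, no PCIe power connector needed
    "low-power Haswell CPU":    35,  # e.g. a 35W 'T' series part
    "motherboard + RAM":        20,  # rough estimate
    "SSD + fans":               10,  # rough estimate
}

total = sum(parts.values())
for name, watts in parts.items():
    print(f"{name:26s} {watts:4d} W")
print(f"{'total':26s} {total:4d} W  (headroom: {PSU_LIMIT_W - total} W)")
```

Even with generous estimates that lands around 125W, so there's real headroom left under a 160W pico PSU.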
To be fair, the R7 265 is running quite a bit cooler than the 750 Ti in the review because Ryan's using the numbers for the R7 265 with the Sapphire Dual-X cooler: http://www.anandtech.com/show/7754/the-amd-radeon-r7-265-r7-260-review-feat-sapphire-asus/2
Those comparisons are moot. Actually, the most powerful card will likely max out the CPU, and hence inflate the system power draw because of the CPU, not only because of the GPU...
Tom's Hardware saw this, though:
http://media.bestofmicro.com/2/W/422600/original/01-GTX-750-Ti-Complete-Gaming-Loop-170-seconds.png
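To make that concrete, here's a minimal sketch of why wall-socket measurements can overstate GPU differences; the wattages and the CPU-load model are invented for illustration, not measured:

```python
# Why system power draw exaggerates GPU differences: a faster GPU
# pushes more frames, so the CPU also works harder. Numbers invented.

def system_draw(gpu_w, fps, cpu_idle_w=30, cpu_per_fps_w=0.5):
    """Wall draw = GPU + CPU, where CPU load scales with frame rate."""
    return gpu_w + cpu_idle_w + cpu_per_fps_w * fps

slow_card = system_draw(gpu_w=60, fps=40)    # e.g. a 750 Ti-class card
fast_card = system_draw(gpu_w=110, fps=70)   # e.g. a faster card

print(f"slow card system draw: {slow_card:.0f} W")
print(f"fast card system draw: {fast_card:.0f} W")
# The wall-socket gap (65W here) is bigger than the 50W GPU-only gap,
# because the CPU burns extra watts feeding the faster card.
print(f"gap at the wall: {fast_card - slow_card:.0f} W vs GPU-only gap: 50 W")
```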
Which would be great if it wasn't an old HP office build on a unique form factor that isn't ATX-compliant, and if you weren't just continuing to balloon the budget for a guy who really didn't want to spend more than $60 on a card. He was frustrated when I had to explain to him that, because of cryptomining, the GDDR5 version of the 6670 that my little brother got 1 to 1.5 years ago for $65 (after rebate), and which has served him marvelously for WoW, now starts around $80-$85 for the crappy DDR3 version. To be frank, there are smarter options than that.
http://www.newegg.com/Product/Produc...82E16817139049
$30, and he'd never have to worry about powering most single-chip graphics cards ever again.
Yes, this is my standard entry PSU when I make recommendations to new builders. What an awesome deal that is, btw - note that it comes with a 6+2 pin PCI-E connector; that should tell you how confident Corsair is in its ability to handle most graphics cards. I've sold a ton of these in the past and never had one come back yet.
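For anyone wanting to sanity-check that kind of recommendation: the exact model behind the truncated link isn't visible, so the 430W rating and load figures below are hypothetical; the point is just the headroom arithmetic:

```python
# Quick PSU headroom check. The linked model's exact rating isn't
# visible from the truncated URL, so PSU_RATED_W is hypothetical.

PSU_RATED_W = 430          # hypothetical budget-PSU rating
SAFETY_MARGIN = 0.80       # don't plan to run a PSU past ~80% load

def fits(cpu_tdp_w, gpu_board_w, other_w=40):
    """True if the build stays inside the derated PSU budget."""
    load = cpu_tdp_w + gpu_board_w + other_w
    return load <= PSU_RATED_W * SAFETY_MARGIN, load

ok, load = fits(cpu_tdp_w=84, gpu_board_w=150)  # mid-range CPU + GPU, assumed
print(f"estimated load {load} W, within budget: {ok}")
```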
I don't understand why you people use so much energy trying to make certain people change their minds:
snip.
My main concern is actually the big difference between benchmarks across sites. On TPU the 750 Ti gets more fps than the 7870 and GTX 660 in Crysis 3. On AT it's slower than the R7 265, which itself is slower than the GTX 660.
Other weird results on TPU:
7790/R7 260X
- Batman Arkham Asylum: slower than 7770
750 Ti
- Bioshock Infinite: faster than 7850/GTX 650 Ti Boost
- Crysis 3: faster than 7870/GTX 660
- Far Cry 3: faster than 7850
- Splinter Cell Blacklist: faster than 7870
Also, strange things can be seen if you compare TPU perf/W charts: the 750 Ti chart shows the GTX 690 as more efficient than the 7750 (relative to the 750 Ti). But if you look at the GTX 690 review (http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_690/29.html) you see the 7750 being a lot more efficient than the GTX 690.
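I suspect that flip is mostly normalization: each review's perf/W chart is presumably relative to the card under review and averaged over that review's game suite, so changing the suite can reorder two *other* cards even with consistent measurements. A minimal sketch with invented numbers:

```python
# Relative perf/W charts: efficiency normalized to the review card,
# averaged over that review's game suite. All numbers invented.

def relative_perf_per_watt(cards, baseline):
    base = cards[baseline][0] / cards[baseline][1]
    return {name: round((p / w) / base, 2) for name, (p, w) in cards.items()}

# (average fps over the suite, average board power in W) - invented
suite_750ti_review = {"7750": (20, 45), "GTX 690": (130, 260), "750 Ti": (40, 60)}
suite_690_review   = {"7750": (30, 45), "GTX 690": (95, 260),  "750 Ti": (40, 60)}

print(relative_perf_per_watt(suite_750ti_review, "750 Ti"))  # GTX 690 > 7750
print(relative_perf_per_watt(suite_690_review, "GTX 690"))   # 7750 > GTX 690
```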
Can the 750s SLI? Or has that functionality been removed too?
Now can we summarize how the efficiency gains have been achieved:
- L2 cache increased, enabling more shaders to operate on a narrow bus with slow memory (see the sketch below).
- No SLI logic.
- Only one GPC (GPC communication logic totally removed).
- FP64 even more anemic than before.
Am I missing something?
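Here's a back-of-the-envelope sketch of that first point: if a fraction of shader memory traffic hits in the big L2, the 128-bit bus only sees the misses. The hit rates are assumptions for illustration; only the 86.4 GB/s figure is the 750 Ti's actual spec (128-bit at 5.4 Gbps), and latency effects are ignored:

```python
# Back-of-envelope: how a bigger L2 stretches a narrow memory bus.
# If a fraction `hit` of traffic is served from L2, DRAM only sees
# the misses, so effective bandwidth ~= bus_bw / (1 - hit).
# Hit rates are assumptions; 86.4 GB/s is the 750 Ti's spec.

BUS_GBPS = 86.4  # 128-bit @ 5.4 Gbps GDDR5

def effective_bandwidth(bus_gbps, l2_hit_rate):
    return bus_gbps / (1.0 - l2_hit_rate)

for hit in (0.3, 0.5, 0.7):
    print(f"L2 hit rate {hit:.0%}: ~{effective_bandwidth(BUS_GBPS, hit):.0f} GB/s effective")
```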
That is also what allows the 1400MHz clocks, right?
I don't even see the point of trying to go out on a limb against an industry-wide understanding of what these cards are and can be.
W1zzard really needs to get his act together on his benchmarking runs.
We are trying to work out how much of an improvement Maxwell really is.
It has 1350MHz memory.
It delivers twice the performance of the GK107 while using the same power on the same 28nm process.
I don't understand what else you need "to know".
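That claim is one line of arithmetic. Using the published TDPs (GTX 650/GK107: 64W, GTX 750 Ti/GM107: 60W) and the rough 2x performance figure from the reviews:

```python
# The perf/W claim as arithmetic. TDPs are the published figures
# (GTX 650 / GK107: 64 W, GTX 750 Ti / GM107: 60 W); the ~2x
# performance ratio is the rough consensus from reviews.

gk107_perf, gk107_w = 1.0, 64   # GTX 650 as the baseline
gm107_perf, gm107_w = 2.0, 60   # GTX 750 Ti, roughly twice as fast

gain = (gm107_perf / gm107_w) / (gk107_perf / gk107_w)
print(f"perf/W improvement: ~{gain:.1f}x on the same 28nm process")
```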
I'm talking about architectures, not only about a cut-down GPU die.
So your claim is that previous 28nm stuff from Nvidia had SLI logic and FP64 units that were burning a truckload of watts even when not used?
Really sad to see people hell-bent on talking down what is an impressive product from Nvidia. Nvidia has been focused on perf/power since the Fermi fiasco days, and it is really showing.
I didn't say GM107 was not impressive. A lot of people should stop with the stupid defensive attitude. You are NOT Nvidia.
Having said that, I still hold that there are a lot of tiny tricks that contribute to the efficiency of this particular die.
The L2 cache might be the biggest contributor, and maybe even tweaks over the mature 28nm process - maybe the same tweaks that helped Bonaire remain efficient despite having too many shaders for a 128-bit bus.
Is it safe to assume that Nvidia is going to be competitive with AMD when it comes to OpenCL performance?