Might explain AMD adding the feature to HDMI?
The timing of all of it seems to make a lot of sense for this purpose. FreeSync over HDMI for TV <-> Console would be a great feature.
And why not? If the PS4 Neo includes a Polaris 10-class GPU, which was shown to be comparable to a Fury X, then it can definitely do some games in 4K. The main point here is that the comparison was not optimized: we are discussing Polaris 10 at Fury X speeds in a desktop Windows environment. With much closer-to-the-metal programming and impending DX12 performance tweaks, this new chipset could absolutely run games at 4K with medium-ish settings.
Yes, smart move to just keep the CPU but with a ~30% clock speed bump. With console optimizations this should continue to be sufficient in most applications.
Double the core count, increase clocks for CPU, GPU, RAM while TDP stays the same. Moore's law has clearly taken refuge in the lost art of 28nm kung fu.

A PS4 pulls 130-140W at the wall while gaming, so doubling the CU count at 28nm while staying at the same power consumption seems unlikely.
Double the core count, increase clocks for CPU, GPU, RAM while TDP stays the same. Moore's law has clearly taken refuge in the lost art of 28nm kung fu.
Your expectations of Polaris seem to have reached new heights.
The PS4 today already struggles at 1080p. Adding twice the power won't do much if you then have to render at four times the resolution.
Doesn't make a lot of sense to backport Polaris to 28nm when it's 14nm-native, and the economics of consoles already dictate going to 14nm for dies-per-wafer and power consumption reasons. I doubt it will be 28nm, considering how touchy they are about thermal budgets after the PS360-generation fiascos.
Fiji Nano even with GDDR5 would have higher perf/watt than Hawaii at 4K.
Shintai, regardless of my personal projections, you have to at least agree there will be a per-CU or per-SP performance increase. We are already doubling the SP count with this proposed chipset, so it will definitely end up at more than double the performance.
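Napkin math on the raw throughput (GCN is 64 shaders per CU at 2 FLOPs per clock; the Neo figures of 36 CUs @ 911 MHz are from the leaks and should be treated as assumptions, not confirmed specs):

```python
# Rough GCN single-precision throughput: CUs * shaders/CU * 2 FLOPs/clock * clock (GHz) = GFLOPS
def gcn_gflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz

ps4 = gcn_gflops(18, 0.800)   # launch PS4: 18 CUs @ 800 MHz -> ~1843 GFLOPS
neo = gcn_gflops(36, 0.911)   # rumored Neo: 36 CUs @ 911 MHz -> ~4198 GFLOPS (assumption)
print(f"PS4 ~{ps4:.0f} GFLOPS, Neo ~{neo:.0f} GFLOPS, {neo / ps4:.2f}x")
# Doubling the SPs alone is 2.0x; the clock bump takes raw throughput to ~2.28x,
# before counting any per-CU architectural gains.
```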
Yes, exactly: the PS4 has far more GPU power, yet devs still manage it. I forgot to mention that, thanks.
The problem is Sony would never be able to afford such a huge and complex die, even on 28nm. It makes way more sense to use Polaris 10 on 14nm and enjoy the smaller die size, lower power consumption, and DX12 opportunities.
But they don't. Whether it's MSFT money-hatting or devs not bothering, the performance difference between the Xbone and PS4 is substantial (almost a whole GPU tier on the PC side), yet AAA console games often perform the same on both consoles.
In situations where the PS4 is given a little leeway, it often bites them in the ass due to frame pacing issues.
Just look at this recent example:
https://www.youtube.com/watch?v=YECC8u0nPO8
In that example the superior PS4 hardware is deemed inferior to the Xbone performance due to frame pacing issues.
And this example:
https://www.youtube.com/watch?v=UPoGhr-3DS8
The Xbone version is deemed "identical" to the PS4 version.
Now imagine a higher performance console thrown into the mix. Will it get hamstrung by the weaker consoles, or will it be used as the base with the weaker consoles getting shafted?
And now with rumors of the Nintendo NX being faster/stronger than the PS4, it will be interesting to see if MSFT hurries something out to compete.
Console buyers aren't regulars on the hardware-upgrade treadmill. People who keep using the mobile-phone example aren't factoring in that phone upgrades are often baked into service payments. If Sony or MSFT wants to follow that structure, it would make more sense: pay for Live/PSN+ and get the console "free."
But you are not doubling bandwidth for example.
There are plenty of examples where the difference in visuals between the PS4 and Xbone is jarring, and where the PS4 runs games much faster.
Sony does what devs want.
Ultimately, the market decides. And the market reaction will be interesting to watch.
Also, an OCed Jaguar makes sense. Some devs were saying they fine-tune games for consoles, aiming at specifics such as CPU cache latency and capacity, and claimed a change in those would degrade performance due to broken optimizations, regardless of a generally faster CPU architecture. Keeping the core and clocking it higher ensures the optimizations keep working and improves performance in a linear fashion.
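A toy sketch of that linear scaling, assuming a purely CPU-bound frame (the 2.1 GHz Neo clock is the rumored figure, assumed here; the frame cost is made up for illustration):

```python
# Same core at a higher clock: CPU-bound time scales inversely with frequency,
# while cache sizes and latencies-in-cycles stay identical, so tuned code still fits.
base_ghz, neo_ghz = 1.6, 2.1   # launch PS4 Jaguar clock; ~30% bump (rumored, assumed)
cpu_frame_ms = 30.0            # hypothetical CPU cost of one frame at the old clock

print(f"{cpu_frame_ms} ms -> {cpu_frame_ms * base_ghz / neo_ghz:.1f} ms per frame")
# 30.0 ms -> ~22.9 ms: the speedup tracks the clock ratio almost exactly
# because nothing about the core's behavior per cycle changed.
```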
That's the issue. It should be ALL, not some.
EDIT: And when it was PS3 vs. 360, the "journalists" and "reviewers" had no issue pointing out when the 360 version of a game was superior. When the PS4 and Xbone launched, these same talking heads spun it that resolution/IQ didn't matter, "just the gameplay."
At the end of the day I feel Sony is risking the popularity they regained. As someone said here, they need to PR it just right. MSFT pretty much blew away all their cred with their always-online shtick (reading forums like NeoGAF, it seems those fools already expected it, yet threw a fit when MSFT blatantly said this was the future).
Where MSFT fumbled was execution. Sony can easily piss off the consolers (if you think the flame wars here are bad, woof, swing by any NPD thread over at GAF) and suddenly find themselves back in the PS3 situation. Even more so if they expect to charge more per game (which I wouldn't put past them).
How is this relevant? Fiji @ 28nm achieves a TDP of 175W while the entire PS4 uses 130-150W while gaming.
How do you expect to get more than 2x perf/watt improvement on 28nm (double core count, increased clocks, better cores) when it takes 14nm to get 2.5x perf/watt with Polaris vs. Hawaii?
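Napkin math on why that's hard, using the classic dynamic-power relation P ~ units x V^2 x f (a sketch only: it ignores static leakage, and the 5% voltage bump is an assumption):

```python
# Relative dynamic power: P ~ active_units * voltage^2 * frequency
def rel_power(units, volts, freq):
    return units * volts ** 2 * freq

baseline = rel_power(1.0, 1.00, 1.00)
# Double the CUs, ~14% higher clock (800 -> 911 MHz), small assumed voltage bump:
doubled = rel_power(2.0, 1.05, 1.14)
print(f"~{doubled / baseline:.2f}x the power on the same process")  # ~2.51x
# On 28nm there is no process shrink to claw that back, hence the skepticism.
```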
Except Fiji obtains better perf/watt by increasing core count and decreasing clocks (HBM is just the cherry on top). Meanwhile, if the specs are real, the new PS4 increases core count and increases clocks as well.

Fiji is a ~40% bigger die than Hawaii, with ~45% more shaders. Fiji Nano at a 175W TDP is 10-20% faster (or more with DX12) at 4K than Hawaii at a 250W TDP. That makes it almost 2x the perf/watt of Hawaii on the same 28nm.

So essentially we could have a bigger die with 50% or more shaders than the PS4, at a lower TDP, that is still faster at 4K and still manufactured at 28nm.
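Plugging the quoted numbers in (TDPs as a rough power proxy; the +30% case is an assumed DX12 upper end, not a measured figure):

```python
# perf/watt ratio = relative performance / relative board power (TDP as a proxy)
hawaii_tdp, nano_tdp = 250.0, 175.0
for perf in (1.10, 1.20, 1.30):   # +10%, +20%, and an assumed DX12 upper end
    print(f"+{perf - 1:.0%} perf -> {perf / (nano_tdp / hawaii_tdp):.2f}x perf/watt")
# +10% -> 1.57x, +20% -> 1.71x, +30% -> 1.86x: approaching 2x on the same 28nm.
```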
Anandtech's overclocking results indicate that GM204 isn't bottlenecked; pushing the core clock higher gives substantial benefits even without overclocking the RAM.
This is a terrible way forward for consoles. Split dev priorities, compatibility issues, frequent upgrade cycles, etc. You can argue second rate isn't obsolete, but it's still second rate, and every PS4 currently in homes and stores just became it.
By itself, comparing memory OC to core OC does not tell you if a part is well balanced or bottlenecked.
I don't think the "new" consoles will be rendering games at 4K...
Assuming they don't use GDDR5X, no. But even with standard GDDR5, there's decent room for improvement. The current PS4 APU runs with a memory clock of 1375 MHz, which provides 176 GB/sec of bandwidth. The leaks indicate that bandwidth will be going up to 218 GB/sec, which would correspond to a memory clock of 1700 MHz - well within reason, since Nvidia has had Maxwell cards running at that rate for 2 years already.
And don't forget that the existing PS4 is GCN 1.1, which has no memory compression at all. AMD has had some form of memory compression tech since Tonga, and with Polaris it's likely that it will be at least as efficient as Maxwell's. The GTX 980 shows how far it's possible to go with just a 256-bit bus. And Anandtech's overclocking results indicate that GM204 isn't bottlenecked; pushing the core clock higher gives substantial benefits even without overclocking the RAM.
Assuming that the PS4 NEO APU is a Polaris-based product on 14LPP, and that existing leaks/rumors about number of shaders, bus width, and memory clocks are roughly accurate, I think we should expect the new console's GPU power to be about equivalent to a R9 290X. That's pretty impressive, considering that this is still a solid mid-range discrete card today. That should be good news for PC gamers, since there will no longer be the need to cripple everything for the sake of the consoles. I expect that PS4 NEO users will be given a choice between 1080p@60Hz or 4K@30Hz for most games going forward, and legacy PS4 users will get 1080p@30Hz.
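For reference, the bandwidth arithmetic behind the 176 and 218 GB/sec figures above (GDDR5 moves 4 bits per pin per memory-clock cycle, and the PS4's bus is 256-bit):

```python
# GDDR5 bandwidth: memory clock (MHz) * 4 transfers/clock * bus width (bits) / 8 bits per byte
def gddr5_gbs(mem_clock_mhz, bus_bits=256):
    return mem_clock_mhz * 4 * bus_bits / 8 / 1000   # GB/s

print(f"{gddr5_gbs(1375):.0f} GB/s")   # current PS4 -> 176 GB/s
print(f"{gddr5_gbs(1700):.0f} GB/s")   # leaked Neo figure -> ~218 GB/s
```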