ShintaiDK
Lifer
- Apr 22, 2012
I was thinking 275 W.
So with 1500-1600 MHz you mean 1150-1250 MHz?
FUDZilla, VideoCardz and WCCFTech claim that a 12 TFLOPs / 1.5 GHz, 4096 GCN-core chip has a 225 W TBP.
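For what it's worth, the quoted TFLOPs figure is internally consistent with those specs. A quick check, assuming the usual convention of 2 FP32 ops (one FMA) per core per clock:

```python
# Sanity-check: peak FP32 throughput = cores x 2 ops/clock (FMA) x clock.
# The 4096 cores and 1.5 GHz are the figures quoted above.
cores = 4096
clock_hz = 1.5e9
tflops = cores * 2 * clock_hz / 1e12
print(f"{tflops:.3f} TFLOPs")  # 12.288 TFLOPs
```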
That's why it needs to be at TSMC.
So in essence, you are claiming that they are wrong because that is what you believe? Do you have better sources or information than they have?
And they have never been wrong before.
What if the issue isn't the node? It's the same excuses as with Zen. Node, node, node, but it's just as much, if not more, the uarch.
What are you basing this on? If you were really paying attention to the Zen thread, and actually interested in the topic rather than trying to prove your agenda that AMD sucks/is rubbish, you would know that BOTH the microarchitecture and the process come into play for efficiency and performance, in equal measure. A great microarchitecture can be let down by the process, and the opposite is true as well.
So with 1500-1600 MHz you mean 1150-1250 MHz?
You've already been shown an RX 480 running @ 1475 MHz in this very thread. It's not impossible that AMD have improved their arch, and GloFo have improved their process (or AMD are using Samsung), and they have Vega running at 1500 MHz.
Is it 100% certain? No. Is it likely? I don't know. However, it is not impossible. Nobody is saying that it's a fact, but all you are doing is thread-crapping. You weren't so skeptical when people talked about a 2.4 GHz 1080 factory OC (are there any factory 1080s that are even over 2 GHz OOTB? I think I saw ~1990 MHz). I'm not saying that being skeptical is bad, but you are basically thread-crapping and derailing the thread.
I don't think it was necessarily tweaking Pascal's design for higher clocks but something more akin to 4870->4890 clockspeed bump. AMD are long overdue that kind of clockspeed bump for a process.
Given the already significant one-off benefits of such a large jump in the voltage/frequency curve, for Pascal NVIDIA has decided to fully embrace the idea and run up the clocks as much as is reasonably possible. At an architectural level this meant going through the design to identify bottlenecks in the critical paths – logic sections that couldn’t run at as high a frequency as NVIDIA would have liked – and reworking them to operate at higher frequencies. As GPUs typically were (and still are) relatively low clocked, there’s not as much of a need to optimize critical paths in this manner, but with NVIDIA’s loftier clockspeed goals for Pascal, this changed things.
I'm not sure if they mean that Pascal's arrangement differing from Maxwell is the reason for its higher clockspeeds. So I don't think that GCN has to undergo a complete overhaul either.
That is Nvidia's marketing at its best, which idiots believe.
It was even explained on AnandTech:
http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/6
And that's why R&D funding is so critical. From 1325 MHz to 1733 MHz on the same node.
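The size of that jump is easy to put a number on (the 1325 MHz and 1733 MHz boost figures are the ones quoted above):

```python
# Relative clock increase from 1325 MHz to 1733 MHz on the same node.
old_mhz, new_mhz = 1325, 1733
increase = (new_mhz - old_mhz) / old_mhz
print(f"{increase:.1%}")  # ~30.8%
```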
That is Nvidia's marketing at its best, which idiots believe.
You can look at the video at ~9:40. The video states 133 W average and a peak of 149 W when benchmarking.
Somehow I missed this. If the water-cooled GPU has lower clocks and higher power draw than the one in the XFX GPU, then we are looking at the best-binned chips, or a new revision of them.
Come on. This kind of statement is beneath you.
I do apologize. But pay attention to how JHH can spin the most obvious things as achievements, and people are able to believe him!
It's at the actual circuit level that these changes were done. High level uArch of Pascal relative to Maxwell didn't change dramatically, but the circuit designs are probably all new.
I'm not saying that being skeptical is bad, but you are basically thread crapping and derailing the thread.
We have very strict rules on the AT forums - he's constantly thread-crapping/trolling on AMD-related threads and still doesn't get infractions/warnings for that.
Who is the mod? And how is it addressed?
From my perspective, it's the best proof that some members here have more rights than others.
No, it was running at roughly 1.05 V stock and 1.18 V overclocked, thus lower than the release voltage.
Power was ~100 W gaming, ~133 W overclocked [1475 MHz], and ~90 W in a Heaven loop.
A design revision, an improved process, or both; if true, that bodes very well for Vega hitting high performance at competitive wattage.
If this is truly indicative of Polaris perf/watt, then all of the previous arguments about Vega's inability to compete have been rendered worthless.
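As a rough plausibility check on those numbers: dynamic power scales roughly with f·V². The ~1.05 V / ~1.18 V figures and the ~100 W gaming draw come from the posts above; the 1266 MHz stock boost clock for the RX 480 is my assumption, not stated in the thread:

```python
# First-order dynamic power model: P ~ f * V^2 (leakage ignored).
# 100 W gaming at stock and the two voltages come from the posts above;
# the 1266 MHz stock boost clock is an assumption for illustration.
p_stock_w = 100.0
f_stock_mhz, f_oc_mhz = 1266.0, 1475.0
v_stock, v_oc = 1.05, 1.18
p_oc_w = p_stock_w * (f_oc_mhz / f_stock_mhz) * (v_oc / v_stock) ** 2
print(f"{p_oc_w:.0f} W predicted vs ~133 W reported")
```

The model actually over-predicts (~147 W vs the ~133 W reported), so the reported draw is at least consistent with a well-binned chip rather than with the numbers being physically impossible.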
When there is an exception to the rule, then there is no rule.
This is likely more the exception than the rule. Come on, one piece of evidence is enough to invalidate a wall of evidence? That's highly unscientific.
The only thing this shows evidence for is bias on the part of the person doing the research.
When there is an extreme data point outside the norm, then there is likely an anomaly. It is not the norm, and it's why the two extremes in a data set are often thrown out.
We don't keep the single extreme and throw out the hundred other data points. Extremes can be explained by more plausible explanations than the ones you gave.
There was likely something wrong with the test bench, or it was a cherry-picked sample, or something is up with JayzTwoCents.
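The "throw out the two extremes" approach mentioned above can be sketched as a simple trimmed mean (the wattage readings here are made-up illustration values, not data from the thread):

```python
def trimmed_mean(samples):
    """Average after discarding the single lowest and highest values."""
    if len(samples) < 3:
        raise ValueError("need at least 3 samples to trim both extremes")
    trimmed = sorted(samples)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical power readings (W); 95 and 210 get treated as outliers.
readings = [150, 155, 148, 152, 95, 153, 210]
print(trimmed_mean(readings))  # 151.6
```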
The thing is, this is huge publicity for XFX and will help them sell thousands more video cards compared to their competitors. No other Polaris card has overclocked this well while using this little power.
A power circuitry redesign won't save 120 watts of power. That's just impossible when this represents a 50% savings in power.
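Making that arithmetic explicit (reading the "50%" as a fraction of the card's baseline draw, which is my interpretation of the post):

```python
# If saving 120 W corresponded to a 50% reduction, the implied
# baseline power draw would be 240 W - far above any RX 480 rating.
saving_w = 120.0
saving_fraction = 0.50
implied_baseline_w = saving_w / saving_fraction
print(f"{implied_baseline_w:.0f} W")  # 240 W
```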
When there is an exception to the rule, then there is no rule.
I suggest watching the linked video in the first place.