VulgarDisplay
Diamond Member
- Apr 3, 2009
I don't think any of these sites should be given hardware for free. It automatically creates a situation where they feel beholden to the company providing them said free crap.
What is also interesting is that the same card can behave very differently on different systems.

Coil whine is sometimes very hard to predict. It is often not connected to your main frequencies (your GPU clock) but to secondary periodic events (your GPU moving between power stages within a single frame, for example). Those large load swings will impart momentum to any coil in your system.
Now, whether this creates an audible whine or not is influenced by the weight of said coil and how firmly it is fixed to the board. Simply improving the fixation of a singing coil (preferably with something that has no electric or magnetic influence, like hot glue) can be a permanent fix.
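To illustrate why the whine tracks these secondary periodic events rather than the core clock, here is a rough back-of-the-envelope sketch (all figures hypothetical) that estimates the coil excitation frequency from the frame rate and the number of power-state transitions per frame, then checks it against the audible band:

```python
# Human hearing roughly spans 20 Hz to 20 kHz.
AUDIBLE_MIN_HZ = 20.0
AUDIBLE_MAX_HZ = 20_000.0

def coil_excitation_hz(fps: float, transitions_per_frame: int) -> float:
    """Rate at which load steps hit the coil: the power-state
    transitions repeat once per rendered frame."""
    return fps * transitions_per_frame

def is_audible(freq_hz: float) -> bool:
    return AUDIBLE_MIN_HZ <= freq_hz <= AUDIBLE_MAX_HZ

# An uncapped menu screen at 1500 FPS with 4 hypothetical power-state
# transitions per frame excites the coils at 6 kHz, squarely audible.
menu = coil_excitation_hz(1500, 4)
print(menu, is_audible(menu))  # 6000.0 True

# The same card vsynced to 60 FPS drops the excitation to 240 Hz,
# with far smaller and less frequent load steps.
capped = coil_excitation_hz(60, 4)
print(capped, is_audible(capped))  # 240.0 True
```

The point of the sketch is only that the excitation frequency scales with frame rate, which matches the observations in this thread that whine appears at very high FPS and often quiets down with vsync.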
These have been my findings too.

One thing on the coil whine is that, from what I've read and from direct experience, it begins at very high frame rates.
For me turning on vsync (which I always do) has been the easiest solution on cards that have coil whine.
All the above is BS.
1. AMD certainly does include reviewer guides with their review units. Do you need me to post one?
.......
This is from pg. 1 of the thread; do you have a sample?
Ryan Smith replied later in the thread to someone else (I think) that he does not receive any.
Well, just glanced briefly at TPU's Nano review and if anyone wonders why AMD gets annoyed with certain review sites it should be clear here.
1) We are back to having a retail card in the comparison along with a review sample, something reserved only for AMD. Never have I seen TPU (or anyone else) do this with nVidia cards to see if retail samples boost the same as the review samples.
2) Alright, we have another new metric thrown in as well, another way of testing only AMD cards. It's called clock frequency analysis. Obviously info we don't need for any other card.
3) And making its return, especially for the Nano, we have thermal-camera and fan-noise recording. It was used in the past but went away, and has been resurrected specially for the Nano.
More information isn't a bad thing, assuming the information is accurate.
Press card outperforms retail card. Sounds like a rerun.
Oh, and coil whine again. Both cards.
Almost every card will have coil whine if it generates a high enough framerate. Some are louder than others. Some need a higher FPS to cause it. But virtually every card I've ever used had it - from both camps.
Obviously a combination of the CPU + game + settings plays a role, but if you cap the framerate to a reasonable value, you should never have the problem.
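For anyone curious what a frame cap does mechanically, a minimal sleep-based limiter (an illustrative sketch, not any driver's actual implementation) just sleeps away the remainder of each frame budget:

```python
import time

def frame_limiter(target_fps: float):
    """Generator that paces a loop to at most target_fps: yield once
    per frame, then sleep away whatever is left of the frame budget."""
    frame_budget = 1.0 / target_fps
    next_deadline = time.perf_counter()
    while True:
        yield
        next_deadline += frame_budget
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)
        else:
            # Missed the deadline (a slow frame); resync instead of
            # trying to catch up with a burst of fast frames.
            next_deadline = time.perf_counter()

# "Render" 10 empty frames capped at 100 FPS: should take >= ~0.09 s.
limiter = frame_limiter(100)
start = time.perf_counter()
for _ in range(10):
    next(limiter)
elapsed = time.perf_counter() - start
print(f"10 frames in {elapsed:.3f}s")
```

By bounding the frame rate, the cap also bounds how fast the VRM cycles through load steps, which is why it tames whine on most cards.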
It's actually remarkable that AMD matched NV's perf/watt on more or less the same underlying GPU architecture that underpins the nearly 4-year-old HD7970. It took NV three distinct GPU architectures (Fermi, then Kepler, then Maxwell) to actually beat AMD's GCN in perf/watt, but with HBM1 AMD caught up on the fundamentally re-balanced (8 ACEs) 2011 architecture.
So the positive point for the future is that AMD has closed the perf/watt gap with NV, and they can now focus on using the advantages of HBM2, GCN 2.0 and the 16nm node shrink to stay competitive. For us PC gamers who desire competition, this can only be viewed as a great development after all the trash talking online by AMD haters who wish for nothing but AMD to disappear.
It is not solved by any stretch of the imagination.
A repair tech told me that they started to use paraffin and hot glue (and some other stuff whose name I don't recall) and dip the coils in it to mitigate the issue.
It does help, but it doesn't eliminate the issue.
He also went on to say that you can use the most expensive parts out there and it can still whine, so there is no direct correlation between cost and whine: you can have bottom-of-the-barrel parts that don't whine, yet a $1200 video card that does.
There are tons of cards out there (heck, it isn't limited to video cards either), from all camps, that have whine; some are much more pronounced than others, but it is still there.
Some people just don't like facts, they get in the way of their agenda.

This is a very solid and fair post that describes the issue of coil whine very accurately, because it's not vendor specific. Thank you for that! I am honestly disappointed that some people on this forum believe the first thing they read and have actually started correlating coil whine with choke quality. One can manufacture a solid-core choke with the exact same component quality as a traditional choke. Whether a card has coil whine or not doesn't mean there is a scientific correlation between the quality of the actual choke and the noise it emits.
...
Therefore, we have to question the purpose of such propaganda (a member who is paid by a competitor to spread negativity? A person who is a delusional fan of the competitor?) and further question whether there is any correlation between coil whine and quality.
Is coil whine annoying? It sure is, but if a GPU's coils whine, does that mean it's made from lower-grade materials? If so, is there scientific data that backs this up?
now this would piss me off
http://forums.evga.com/Titan-X-Coil-wine-m2326262-p2.aspx post #37
Same story with my Titan X! I live in Aus, paid $1600 for it, and the noise drives me nuts. I'm going to get it watercooled, and I'm worried the noise will be more annoying.
It's a GPU with very aggressive power algorithms and a number of other caveats, so saying that it beat Maxwell is disingenuous at best. The Fury Nano underperforms the 980Ti (a cut-down GM200) at approximately the same die size (so perf/mm^2 takes a hit), using highly binned, not cut-down, dies on a product optimized for perf/W above all else. The equivalent GM200 comparison would be a downclocked full GM200 with power optimizations (i.e. sort of like a mobile-optimized card).
Your link shows the Nano barely more power efficient than the R9 Fury. The Nano uses about 9% less power than the R9 Fury for slightly less performance.
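That arithmetic is easy to sanity-check: perf/watt is just frame rate over board power, so 9% less power for slightly less performance only nets a small efficiency edge. A quick sketch with hypothetical numbers (the 5% performance deficit is assumed purely for illustration, not measured data):

```python
def perf_per_watt(avg_fps: float, avg_power_w: float) -> float:
    """Efficiency metric used in the reviews: frames per second per watt."""
    return avg_fps / avg_power_w

# Hypothetical figures, chosen only to make the arithmetic concrete;
# these are not review numbers.
fury_fps, fury_w = 100.0, 250.0
nano_fps = fury_fps * 0.95   # assume 5% less performance
nano_w = fury_w * 0.91       # 9% less power, per the post above

gain = perf_per_watt(nano_fps, nano_w) / perf_per_watt(fury_fps, fury_w) - 1
print(f"Nano perf/watt advantage: {gain:.1%}")  # about 4.4%
```

So "barely more power efficient" is about right: under these assumptions, the Nano's perf/watt edge over the Fury works out to only around 4%.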
Why are you throwing Fermi in there? Fermi was never designed to compete with GCN. Double-check your eyes and your charts again. Where are Kepler and previous GCN cards on the chart you posted? Oh yeah, better than or equivalent to GCN in terms of perf/W.
Fury might have been an engineering milestone for AMD, but from a financial perspective it really is looking like a complete disaster.
Does it matter how AMD achieved it? Nothing precludes NV from binning GTX980Ti chips and making a GTX980Ti-mini Nano competitor. That's not the point of my previous post: clearly some people ruled out any possibility of AMD matching Maxwell in perf/watt on 28nm, regardless of HBM1, binning, you name it. Don't tell me you missed 4 years of posts claiming AMD was done for in perf/watt and would never come back on 28nm.
Which is fine as a practicality check. But insufficient to make claims about the architecture.

1. NV currently doesn't sell such a product, so your comparison, while valid, is not practical. Out of the box, the Nano matches the 980Ti's perf/watt at TPU and beats it at TechSpot. I personally couldn't care less about the perf/watt metric in a vacuum, but all the FUD I've been reading online for the last 3-4 years about how AMD would never match Maxwell in perf/watt has been proven wrong. Now where are all those people who kept making those statements? Oh right, they are now discussing the Nano's poor price/performance and coil whine. Figures. They are only interested in hating anything about an AMD GPU, with 0 intention of ever buying an AMD card in the first place. If the Nano beat the 980Ti in every metric and had HDMI 2.0, the discussion would shift solely to 6GB >>>> 4GB. Want to bet on it?
There are plenty of competitors. They simply don't have the same performance. You try telling people looking at the Nano that a 970 mini with 1.6x the perf/$ isn't a valid competitor for most users (unless money is no object).

2. For its intended function, the smallest mini-ITX cases, NV has no competitor to the Nano at any price. That reason alone makes the comparison of the Nano to the 980Ti irrelevant. Why? Because anyone who wants top performance and overclocking is buying the 980Ti anyway. It seems people are having the hardest time in the world understanding that the types of gamers who buy a GTX980Ti and a Nano do not cross-shop these 2 videocards. This should be common sense, because even the Fury X outperforms the Nano too.
Their product matches, yes. Their architecture may or may not, but this hasn't been tested on a level playing field.

My link shows the Nano tying the GTX980Ti in perf/watt at TPU at 4K and beating the 980Ti at TechSpot. I already explained above that perf/watt isn't something I care about in a vacuum, but everyone who trash talked that AMD would never match Maxwell in perf/watt on the 28nm node, and that HBM1 had no advantage over GDDR5 (aka AMD should have just made a larger 290X with GDDR5), has been proven wrong.
Which has absolutely nothing to do with the statement that it took NVIDIA 3 gens to match GCN. That statement is flat out false. Fermi was never the direct competitor to GCN (it competed only because of delays) and Kepler had better efficiency.

But it did in reality. It took NV a whopping 2.5-9 months to roll out the entire GT620->690 line-up of cards.
Did you forget how HD7750/7770/7850/7870 wiped the floor with Fermi cards for months before sub-GTX670/680 cards ever showed up?
This is a surprise. You would think it would be the other way around.

Oh what a surprise: a crippled architecture, with crippled compute, horrible double-precision compute and gimped VRAM, is outperforming GCN in perf/watt.
This game is a POS and invalidates any argument that is being made with it.

That's awesome; and now let me know how Kepler's GTX670/680/770/780/780Ti are doing in modern games.
I know. Look at that 6GB Titan jump ahead... or not.

Let's try high textures at 1080P. Oh, what's that? It's impossible on cards with less than 3GB of VRAM.
You don't have to be a genius to see that AMD's board of directors is incompetent.

Easy to be an armchair CEO. So let's say you are AMD's management, with limited financial resources that have to be allocated between the CPU, APU and GPU divisions. You also have limited time; you cannot just produce a brand-new GPU architecture, which takes 3-4 years to design (NV has cited on various occasions that that's how long it takes their teams to design a new architecture). What would you have done in AMD's shoes? There sure are A LOT of engineers / expert managers here who think they could have designed a far better GPU than the Fury with HBM1. I am ready to hear suggestions.
AMD doesn't need to mass produce HBM GPUs to gain experience. They can do all that R&D in the lab cheaply (perhaps not quite as well, but close) without incurring the costs of production.

Furthermore, what makes you think AMD will not use what they have learned from HBM1 and the Fury/Fury X/Nano in their future GPU designs? Do you honestly think AMD will scrap everything they have learned from the Fiji chip and HBM1?
What prevents AMD from shrinking Fiji to the 16nm node and using HBM2, so that cards like the Fury/Fury X and Nano become next generation's mid-range $300-400 cards?
While I think you have a point, the 960 and its recommendations have nothing to do with the Nano. <snip>
You don't have to want this product to point out the inconsistencies of the 'reviewers' based on what product they are "reviewing", and how they change their tune depending on whether it is AMD or nvidia.

This product wasn't for everyone, so why people who have no interest in the builds this is being used in are attempting to bash this product, and the reviews associated with it (from both sides of the field), is beyond me. Move on to something you care about.
I assume you meant freesync?

I wish Nvidia would release an alternative and support Gsync, so that I'd have options, but I'm happy with AN OPTION, rather than nothing.
Nano looked to be the best thing from the announcement of the Fury line, and, it could have been that, but, instead, it is reserved for a specific crowd, and thus, isn't a mainstream product that we were all hoping for.

Personally, I like the Nano.
It's just not time in my upgrade cycle to pick up such a card, and AMD handled the release too poorly for me to give them money. I love business, and when running a business you have to do a better job than that. I'm too much of a capitalist to give them money after they handled it in that manner, but that's another story.
I don't get why so many people are posting about this card like they ACTUALLY cared about it.
Nano looked to be the best thing from the announcement of the Fury line, and, it could have been that, but, instead, it is reserved for a specific crowd, and thus, isn't a mainstream product that we were all hoping for.
AMD may very well be making lots of profit off of these launches, which is good for them, but, as a mainstream consumer, these cards just don't cut it.
Which is why the 380X could very well be THE card people have been waiting for.
You're not getting it. Why don't we see retail samples of nVidia cards to check whether they boost like the review cards (and anyone who thinks a 1% difference over 22 games proves anything needs to go to school for statistical analysis)? Why don't we see TPU give us frequency charts for cards whose clocks we know reduce over time, like the Titan, 780/Ti, Titan X and 980/Ti, if they are solely interested in giving us more accurate info? They changed up their review routine purely to get a gotcha on the Nano.

More information isn't a bad thing, assuming the information is accurate.
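On the statistics point, the question is whether a 1% average gap across 22 games is distinguishable from run-to-run noise. A quick stdlib-only sketch with made-up numbers (the 1% offset and 3% noise level are assumptions, not measured data) computes the paired t statistic for such a comparison:

```python
import math
import random
import statistics

random.seed(0)
N_GAMES = 22

# Hypothetical per-game FPS: the retail card averages 1% lower than
# the press card, but each result also carries ~3% run-to-run noise.
press = [random.uniform(40, 120) for _ in range(N_GAMES)]
retail = [fps * 0.99 * random.gauss(1.0, 0.03) for fps in press]

# Paired percentage differences, retail relative to press.
diffs = [(r - p) / p * 100 for p, r in zip(press, retail)]
mean = statistics.mean(diffs)
sem = statistics.stdev(diffs) / math.sqrt(len(diffs))
t_stat = mean / sem  # paired t statistic with 21 degrees of freedom

print(f"mean diff = {mean:.2f}%, t = {t_stat:.2f}")
# Compare |t| with ~2.08, the two-sided 5% critical value for 21 d.o.f.;
# with ~3% per-game noise, a 1% average gap often fails to clear it.
```

The takeaway is that a 1% press-vs-retail delta is only meaningful if the per-game run-to-run variance is much smaller than 1%, which reviews rarely establish.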
You are getting it. If you are going to compare products in what is basically a competition, you use the same playing field.

True, but it would be better if the same tests were run on everything, as opposed to picking and choosing.
Agreed. People hating the $650 price need to ask themselves: what's the alternative? Price it lower and continue the epic-fail bang-for-buck strategy? What will that achieve? How will that help AMD?

Also, when AMD basically wiped the floor with NV in price/performance for 5 consecutive generations (HD4000 -> R9 290) and still failed to gain market share or make substantial profits, any new CEO would try a new strategy. Will the new strategy work? Maybe, maybe not, but we know the old AMD GPU strategies more or less failed long-term.
The fly in the ointment is that these cards might just be DX12 titans, and once the product is out they can't very well raise the price even if they turn out faster than the 980Tis; but this very well might be another "overclocker's dream" type of thing. Hard to prove something when there isn't anything out yet that uses DX12. On paper, it seems feasible that the GCN architecture is better suited for DX12.

Even if the Fury X were $500, it would still be a questionable buy. An after-market 980Ti is 22% faster out of the box at 1440P, and also comes with a bonus 50% more VRAM. And the 980Ti can be overclocked even further, which extends the lead over a stock Fury X to 29% in some games. Realistically, that means even at $500 the Fury X is still too expensive, isn't it?
Sure would love to be a fly on the wall to hear why the full Tonga never came to be before... It had GCN 1.2 as well, so something just doesn't sit well with why they never went full steam with Tonga.

At least in the US, it's still possible to purchase an after-market R9 290 for $240. As long as that card is still for sale, a 380X doesn't seem compelling. AMD should have launched the 380X as the 285X in September of last year. They also should have launched the R9 390/390X in January of 2015 alongside the R9 290/290X and just let all these chips duke it out.