This thread's existence makes me laugh. Arguing about the accuracy of a rumor after the fact? Makes me think of this.
If the rumours are generally pretty accurate (it seems like, for the most part, they actually are), then as a source he is not terrible.
It's not discussing the accuracy of rumours so much as ascertaining the value of S|A as a source of information for future reference, which is done by working out whether his past claims have a reasonable degree of accuracy.
The other option is to continue as usual and have people argue that he's always full of shit while others say he's reporting facts.
Or do you not like testing your sources for accuracy?
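If anyone actually wants to run that test instead of arguing about it, it's just bookkeeping. A minimal sketch of the idea in Python; the claims and verdicts below are made-up placeholders, not S|A's actual record:

```python
# Toy track-record tally for a rumour source. The claims and verdicts
# here are hypothetical placeholders, not S|A's actual history.
claims = [
    ("Fermi will slip roughly six months", True),
    ("Top bin ships with all 512 shaders", False),
    ("GTX480 runs hot",                    True),
]

hits = sum(1 for _, correct in claims if correct)
print(f"{hits}/{len(claims)} correct ({hits / len(claims):.0%})")
```

Keep a list like that for a year and the "always full of shit" vs "reporting facts" argument settles itself.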
So, what I'm getting from both sides, is that:
1. Anything can mean anything.
2. Anything is up for interpretation.
3. Nothing is indisputable and therefore a resolution can never be achieved.
4. We should all be gaming.
If you read, or have read, Charlie's articles, you can see very clearly what he means.
There is also a difference, and a reason Charlie runs a rumour site and not a "respectable site": he takes more chances, meaning he will be wrong sometimes, or even most of the time, but on the flip side he can also break stories before most others.
Actually, I bet that if I removed all the NVIDIA-hate sentences from Charlie's articles and toned down the titles, your opinion would be different.
For example "Nvidia's Fermi GTX480 is broken and unfixable" -> "Nvidia's is having problems getting out a GTX480 with 512 shaders and target TDP subtitle: Low yields and hot".
Means the same but with less emotion - less love and less hate.
That only depends how much your green-tinted sunglasses and blinders block out. I knew months ago that Fermi was going to be unimpressive, largely from rumor sites like Semi-Accurate. I then waited for a good deal on another high-end setup and settled on 5850 CF, which was verified as a major home run after Fermi released.

So, like I said. For example, what might be as clear as glass to you won't be for another. Open to all interpretation, nothing is indisputable, and nothing gets achieved.
Wow, so you "knew" for a certainty that Fermi would turn out the way it has? Just by putting on your "red-tinted" glasses and reading an anti-NV slanted website?
Amazing precognition there
Of course, some say that hindsight is 20/20 vision!
The rumors were there for months; where were you? On the surface, everyone should have immediately known something was wrong when it was delayed for six months. Kyle at HardOCP said last October that we should expect a March release. A product isn't delayed that long "to increase the awesome factor"; it's delayed that long because something is seriously wrong with it. So the mission then was to find out what was so wrong with it.
Turns out I was right, but more than that, this is a victory for the "rumor sites", as they were pretty spot-on this time around.
Looking back over everything Charlie said, did he get a single thing right? If he did, I must have missed it.
What did he get wrong?
Go reread what I wrote again because you didn't understand a thing. He could have stated NVIDIA's new part was called "the X1Z FTW WAHHOOOOO" and you would post "see, he's wrong, it's called the GTX480, see there, that's another thing he got wrong" and I would still be laughing in your face like I am now.
448 shaders for top bin part
600MHz for top bin part
Top bin part will have 5K units total and no more will ever be produced
Performance level of the 480 was wrong (margin was 2x-3x higher than his lies)
Initial tape out was wrong (which actually makes nV look worse, but yet another thing he was just telling lies about)
Pretty much he just spewed a bunch of crap, going everywhere from 512SPs at 750MHz to 448SPs at 500MHz clocks, and figured one of his fabrications would end up being right.
Where's your link?

Read his latest article. Turns out that the 5870 is slow and that its yields were so low it was not worth launching (AMD had stated that the 58xx parts were getting 40% yields at launch; 62.5% is barely worth launching in Charlie's reality). Take him as accurate if you will; AMD was stupid to launch its slow parts according to Charlie too. To say he is a fool is putting it mildly, but if you think of him as insightful, keep in mind that he is stating on the record that AMD is stupid and slow now.
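To put those two yield figures side by side, here is a back-of-envelope sketch. The 160 die candidates per wafer is an assumed round number for illustration; the real count depends on die size and wafer edge losses:

```python
# Good dies per wafer at the two quoted yields. 160 candidates per
# wafer is an assumed round number, purely for illustration.
candidates = 160

for label, y in [("AMD's stated launch yield", 0.40),
                 ("Charlie's 'barely worth launching'", 0.625)]:
    print(f"{label}: {y:.1%} -> {candidates * y:.0f} good dies per wafer")
```

Point being: if 62.5% is "barely worth launching", then by Charlie's own yardstick AMD launching at 40% was madness, yet the 5870 did fine.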
I believe MK6. Anyone who followed the GPU game closely over the last few years could have seen it coming. Nvidia was increasing the chip size every generation; ATI broke down, went for a smaller design and a dual-GPU card as its top end, and called it quits chasing the single-GPU halo. That turned out to be a smart decision in the long run. People were calling the G80 and GT200 big and hungry.
LOL - I like to stick with facts, not rumors, so these types of sites are just fluff. Some of you take this nonsense waaaay too seriously.

When a manufacturer/designer doesn't volunteer the information and everyone's under an NDA (i.e. they can neither confirm nor deny), then everything is a rumour by definition, genius.
Actually, the physical chip size is about the same as Nvidia's last new architecture, the G80. So your argument leads to exactly the opposite conclusion: the G80 was probably the most successful GPU of all time, so Fermi should be too.
As for Charlie: he was mostly wrong in pretty well all of his "dramatic" assertions, in that he took some information which had a grain of truth, then made a load of rubbish up off the back of it. E.g. the EOL of the GTX 275/285 meant Nvidia was leaving the high-end market. The TSMC problems meant Fermi was "unmanufacturable". The 280W power usage, which he guesstimated from the 448 shader part (not 512 as people keep claiming - read his article) and a 600MHz core clock. The extra functionality for GPU compute, which he said meant Nvidia was abandoning gamers - remember how Fermi wasn't meant to have any hardware tessellation? The only one that really panned out was how late Fermi was, which, going by how far out everything else was, seems more luck than judgement.
The 280W power usage, which he guesstimated from the 448 shader part (not 512 as people keep claiming - read his article) and a 600MHz core clock.
The rest however is Nvidia's fault. It designed a chip that was more or less unmanufacturable, and we have been saying as much for more than six months now. It is big, 530mm^2 at the minimum, likely 10+mm^2 more, and way too hot. The 448 SP version is listed at 225W TDP, 190W 'typical', and that is with two of the 16 shader clusters fused off. With them on, it would likely be well above 250W, far too hot for a single GPU card.
The raw manufacturing cost of each GF100 to Nvidia is more than double that of ATI's Cypress. If the target product with 512 shaders is real, the recently reported 40 percent yield rates don't seem to be obtainable. It won't hit half of that based on Nvidia's current 40nm product yields, likely far, far less.
Cost aside, the next problem is power. The demo cards at CES were pulling 280W for a single GPU which is perilously close to the 300W max for PCIe cards. Nvidia can choose to break that cap, but it would not be able to call the cards PCIe. OEMs really frown on such things. Knowingly selling out of spec parts puts a huge liability burden on their shoulders, and OEMs avoid that at all costs.
280W and 550mm^2 means Nvidia is maxed out on both power use and reticle area for any product from TSMC. There is precious little room to grow on either constraint. The competition on the other hand can grow its part by 60 percent in die area and over 50 percent in power draw while staying below what Nvidia is offering. That puts an upper bound on Nvidia's pricing in a fairly immutable way, barring a massive performance win. If you don't feel like reading to the end, the short story is that it didn't get that win.
Fixing these problems requires Nvidia to do what ATI did for Evergreen, that is, double up on the vias and also change the circuits in a non-trivial way. This process requires a lot of engineering time, a base layer respin, and probably at least one metal spin on top of that. If everything goes perfectly, it will still be more than six months before it can bring a fix to market.
While this is bad for Nvidia, and likely terminal for Fermi GF100 as an economically viable chip, it does actually get worse. The chip is big and hot. Insiders have told SemiAccurate that the chips shown at CES consumed 280W. Nvidia knew that the GPU would consume a lot of power long before the chip ever taped out, but it probably thought it would be around the 225W mark claimed for the compute boards.
To combat this, Nvidia engineers tell SemiAccurate that the decision was made to run the chip at a very low voltage, 1.05V versus 1.15V for ATI's Cypress. Since ATI draws less power for Cypress, 188W TDP vs 225W TDP for the Fermi GF100, every time Nvidia needs to tweak the voltage of its card, it takes on roughly 50 percent more amperage for every .01V the core is raised by. While this is an oversimplification, the take-home message is that Nvidia made choices that result in more power added than ATI if the voltages need to be upped.
If you recall, the official story is that the card, in its cut-down and underclocked version, pulls 225W. That number, along with stunningly poor performance, has led to some notable backpedaling. If that isn't bad enough, some sources at GDC told SemiAccurate that Nvidia jacked up the TDP by 50W last week without warning.
We will be the first to admit we were wrong about the TDPs of the cards. At CES we said the GTX480s shown there were pulling 280W, something Nvidia vehemently denied. Engineers beavering away at the things Dear Leader thinks are important, like the style of the wheels on his Ferrari, have been pulled off to work on cards for some unfathomable reason. Working hard, they have managed to reduce the TDP of the cards by 5W, to 275W. Yeah, Nvidia finally admitted that the card is the burning pig anyone who has used one knows it is.
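The arithmetic behind those excerpts is easy to check. A rough sketch using the numbers quoted above; the Cypress die area is an approximate public figure I've assumed, and everything else is naive linear scaling, so treat the output as illustration rather than measurement:

```python
# Numbers pulled from the excerpts above, plus one assumption:
gf100_area   = 530.0   # mm^2, "530mm^2 at the minimum"
cypress_area = 334.0   # mm^2, approximate public figure (assumption)
tdp_448sp    = 225.0   # W, 448 SP board with 2 of 16 clusters fused off

# Die candidates per wafer scale roughly with 1/area, so before yield
# even enters, each GF100 costs ~1.6x a Cypress in wafer real estate.
# The "more than double" cost claim needs the yield gap on top of this.
print(f"area ratio: {gf100_area / cypress_area:.2f}")          # ~1.59

# Naive linear scaling of the 448 SP TDP up to all 16 clusters,
# matching the "well above 250W" estimate:
print(f"full-chip TDP estimate: {tdp_448sp * 16 / 14:.0f} W")  # ~257 W

# ATI's headroom: +60% area and +50% power still lands at roughly
# Fermi's quoted figures:
print(f"Cypress +60% area:  {cypress_area * 1.6:.0f} mm^2")    # ~534 mm^2
print(f"Cypress +50% power: {188 * 1.5:.0f} W")                # 282 W

# And the voltage point: current I = P / V, so at a lower core
# voltage the GF100 is already pushing far more amps than Cypress.
print(f"GF100   ~{tdp_448sp / 1.05:.0f} A at 1.05 V")          # ~214 A
print(f"Cypress ~{188 / 1.15:.0f} A at 1.15 V")                # ~163 A
```

None of this proves Charlie's conclusions, but the individual figures in the excerpts are at least internally consistent with each other.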
Go reread what I wrote again because you didn't understand a thing.
448 shaders/600MHz: who knows. People other than Charlie have said that Nvidia was changing the final specs right up until launch.
He was off on tape out, but why do you care? I'd figure you'd be happy that he expected NV to do that well.
As for what he has gotten right: are you retarded?
Where's your link?
...had a yield of 62.5 percent, give or take a little, and that yield was considered so low that it was almost not worth launching.
GTX480 is slow, barely faster than an ATI HD5870.
Taiwan Semiconductor Manufacturing Company (TSMC), the world's largest dedicated independent semiconductor foundry, recently confirmed it has run into new issues with their 40nm process technology that have sent yield rates down to 40%.
http://www.techspot.com/news/36781-tsmc-40nm-yield-issues-to-affect-amd-and-nvidia.html
According to Charlie the 5870 is slow and was unlaunchable- given your praise of him I can only assume that you agree.