You accused different reviewers of being payed by Intel without evidence only because their results don't match your obvious bias.
What does the word "payed" mean? I've never heard of it. I can promise you that I've never in my entire life used that word.
My bias? Of the two of us, I'm the one who will most likely be buying a Skylake. I've already said so in this thread, and as far as I know, you haven't. I'm always looking for a good excuse to buy new hardware. Hell, I've been salivating since the second I first saw what the Z170 chipset had to offer. I'd never buy a Skylake just to be able to use Z170 if it had a large performance regression, but as always with Intel, their latest is even better than its predecessor, which was better than its predecessor, ad infinitum. Haha, well, back to the P4, at least.
So now all reviewers that show >10% gains per clock for Skylake in CPU limited games are cheating?
No, just the ones that are. For instance, ones that overclock one CPU but not the CPU they're comparing it to:
http://www.eurogamer.net/articles/digitalfoundry-2015-intel-skylake-core-i5-6600k-review
Or ones that vastly improve the performance of one of the CPUs by not only using the fastest DDR4 available at any price, but also running a heavily overclocked BCLK on top of their CPU overclock, but only on the Skylake:
https://www.youtube.com/watch?v=lWsEYGA1JbY&feature=youtu.be&t=19m47s
Skewing results in these ways is dishonest. There's really nothing else to say about it, in my opinion. And yes, I do realize that they were probably asked to review these CPUs this way, or else Ian Cutress/AnandTech wouldn't be the only one forcing Skylake to compete on its own merits, which it does just fine, btw. It just isn't nearly as dramatic as Intel wanted it to be, I'm guessing.
These people will rather trust Crucial
Speaking of Crucial, please post a link to someone at Crucial posting what you say they did in your pretend quote. Link to the faked quote:
http://forums.anandtech.com/showpost.php?p=37639451&postcount=3949
You faked a quote in your last response to me that said:
Crucial said:
DDR3-1600 CL8/CL9 DDR3 (used in most Haswell tests) is superior to DDR4-2133 CL15 DDR4 in true latency (used in AnandTech's Skylake review).
than a random poster trying to put Skylake in the worst light possible:
Yes, I guess I must absolutely hate Skylake. That's why you and I have talked about nothing other than DRAM in the vast majority of our posts, and why I've stated in this thread that I most likely will be buying a Skylake. You seriously aren't very good at drawing conclusions, or making arguments, are you?
You need much faster DDR4 to match the real latency of a simple DDR3-1600 CL9 kit
Yes, if 166 MHz/14.3% is your idea of "much faster". Isn't that the third time you've said more or less that exact thing? I've answered it the same way every time...
Let me help, AnandTech's calculation might be too complex for you which is why you're only making a fool of yourself.
DDR3-1866/CL9 = 207
DDR4-2133/CL15 = 142.2
DDR4-2666/CL15 = 177.73
lmfao, kid, Ian uses simplified estimations like the ones you keep quoting because he wants people who couldn't otherwise follow a latency comparison between two types of DRAM to still be able to grasp the point he's making. It has zero to do with him not knowing the proper way to find the true latency of DRAM that has already been speed-rated by the manufacturer, I can promise you. Seriously, did you not see where he calls it an approximation? Feel free to contact him about it for verification. BTW, here he is proving that he does in fact know how to compute actual latency:
Ian Cutress @ AnandTech said:
So here we have the values for True Latency:
DDR3-1600 C11: 13.75 nanoseconds
DDR4-2133 C15: 14.06 nanoseconds
BTW, you'll notice that he used the identical formula I've been using all along, the same formula everyone on Earth uses when they don't have access to a $100,000 or $500,000 machine (or however much it costs) that can spec out a bare DRAM chip by itself, before it has been matched with other similar-speed chips and turned into a DIMM with published specifications: (11 / 800 MHz) × 1,000 = 13.75 ns, and (15 / 1,066.5 MHz) × 1,000 = 14.06 ns. The divisor is the memory clock, i.e. half the DDR data rate: DDR3-1600 runs at 800 MHz, DDR4-2133 at 1,066.5 MHz.
http://anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/7
To those of you actually reading this, the way to find a DIMM's true latency is as follows:
(CAS / Frequency (MHz)) × 1000 = X ns, where Frequency is the memory clock, i.e. half the DDR data rate
Example:
(7 / 667) × 1000 = 10.49475 ns
https://en.wikipedia.org/wiki/DDR3_SDRAM
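And since we're all just plugging numbers into that formula anyway, here's a quick sketch in plain Python that reproduces both sets of figures from this thread: the true latencies Ian worked out above, and the simplified index numbers quoted earlier, which, as far as I can tell, are just the data rate divided by the CAS number (so higher is better, and they are not nanoseconds). The helper names are my own, nothing official:

```python
# Quick sketch: true latency vs. the simplified "index" figures quoted above.
# Helper names (true_latency_ns, performance_index) are mine, not AnandTech's.

def true_latency_ns(cas, data_rate_mts):
    """CAS cycles / memory clock (MHz), x1000 to get nanoseconds.
    The memory clock is half the DDR data rate."""
    return cas / (data_rate_mts / 2) * 1000

def performance_index(cas, data_rate_mts):
    """Simplified estimation: data rate / CAS. Unitless; higher is better."""
    return data_rate_mts / cas

kits = [
    ("DDR3-1333 CL7",  7, 1333),   # the Wikipedia example: ~10.5 ns
    ("DDR3-1600 CL11", 11, 1600),  # Ian's example: 13.75 ns
    ("DDR3-1866 CL9",  9, 1866),   # index ~207
    ("DDR4-2133 CL15", 15, 2133),  # Ian's example: ~14.06 ns; index 142.2
    ("DDR4-2666 CL15", 15, 2666),  # index ~177.7
]

for name, cas, rate in kits:
    print(f"{name}: {true_latency_ns(cas, rate):5.2f} ns "
          f"(index {performance_index(cas, rate):.1f})")
```

For what it's worth, that last kit (DDR4-2666 CL15) lands at ~11.25 ns, the same true latency as a DDR3-1600 CL9 kit (9 / 800 × 1000 = 11.25 ns), which is exactly the comparison being argued about above.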
Skylake would still be at a slight disadvantage with that DDR4-2666 CL15 kit according to AnandTech
Please link where any of the handful of AnandTech employees said that.
Different people in this thread provided actual benchmarks results where higher clocked memory doesn't necessarily beat lower clocked memory with lower latency
Right, one guy posted some results comparing one type of DRAM to itself, in a completely different type of software than you and I have ever discussed, and even in that single program, using the largest, most mismatched case, he came up with a maximum of 12.5% difference... in the wrong type of software. Remind me what that had to do with gaming performance?
and none of what you posted proved otherwise
Maybe not to you, but luckily, something you posted makes my point exactly:
So, thanks for doing the legwork on that one. I do appreciate it. (That is not sarcasm, btw.)
basically busting your bullshit of pro-Skylake reviews.
See the above tables for what it looks like when someone uses the fastest RAM available. Those are exactly the percentage gains someone who isn't trying to be very honest with their review can add to Skylake's performance. And yes, I think that if you can't afford to spend $100-200 on some decent DDR3-2400 for the DDR3 CPUs in your review, then you shouldn't be using ultra-expensive DDR4-3600, either.
BTW, here is the performance difference when using the identical CPU with DDR3-1866 vs DDR4-2133. As can easily be seen, the slowest DDR4 available still outperforms low-latency DDR3, even if not by much.
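In case anyone wonders how the slowest DDR4 can come out ahead despite losing on true latency: bandwidth. Here's a rough sketch of the raw numbers, same Python as above (peak theoretical bandwidth is just the data rate times 8 bytes per transfer on a 64-bit channel; peak_bandwidth_gbs is my own throwaway helper):

```python
# Rough numbers behind the DDR3-1866 CL9 vs. DDR4-2133 CL15 comparison.
# Peak theoretical bandwidth: data rate (MT/s) x 8 bytes, per 64-bit channel.

def peak_bandwidth_gbs(data_rate_mts):
    return data_rate_mts * 8 / 1000  # GB/s

# DDR3-1866 CL9:  ~9.6 ns true latency (9 / 933 x 1000),     ~14.9 GB/s peak
# DDR4-2133 CL15: ~14.1 ns true latency (15 / 1066.5 x 1000), ~17.1 GB/s peak
print(f"DDR3-1866: {peak_bandwidth_gbs(1866):.1f} GB/s")
print(f"DDR4-2133: {peak_bandwidth_gbs(2133):.1f} GB/s")
```

So DDR4-2133 gives up roughly 4.5 ns of true latency but gains roughly 2 GB/s of peak bandwidth per channel, which would explain a small overall win.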
Since you obviously, for whatever reason, have something against Ian's/AnandTech's opinion of DDR3 vs DDR4, here's more or less the identical opinion from one of the better-known memory manufacturers:
Corsair.com said:
Ultimately that's kind of the takeaway here: DDR4 starts at very high speeds with room to scale higher, and at those entry level speeds, it's faster and more capable than its predecessor in almost every test. Mainstream DDR4 actually winds up with lower overall latency and higher bandwidth than mainstream DDR3.
http://www.corsair.com/en-us/blog/2014/september/ddr3_vs_ddr4_synthetic
Reported for posting plain lies, never said that.
Are you absolutely certain of that? We're talking thousands of posts ago in this thread alone, probably ~300 of them by you. You were awfully excited, if you'll remember. I'm more or less certain it was you, since you have been the only person in this entire thread with whom I've communicated about CPUs. I'll see if I can find the post.
I do think Core i7 6700K is the best gaming CPU at stock. We could also call it the best chip for most games (overall, stock or OCed) due to relatively poor scaling with more cores in most titles today,
Who is "we"? Anyway, I agree, not that it's actually available worldwide yet. It's definitely what I'll be recommending for the near future, once Intel gets their production/supply issues resolved.
but Haswell-E's extra cores might make up for the slightly worse IPC/clocks in future titles, which is why I bought one instead of waiting for Skylake in the first place.
Haha, it took me forever to decide on this 4790k instead of a 5820k. Like, multiple months. I had also considered trying to wait until Skylake was available, but I really needed an upgrade, and waiting another 9-12 months to see how Skylake turned out just didn't seem like the best idea, at least for me.
Having decided on the 4790k late last year, I think I made the right decision. Had I known about siliconlottery.com at the time, I would definitely have considered a 5820k even more seriously, although I'm still not sure I could stand the additional heat of the two extra 22nm cores. Not the extra heat during gaming, which I'm sure would be more or less identical, but the extra heat from transcoding video. I had close to 2,000 DVDs & Blu-Rays to transcode, not just a couple.