The numbers in the demo were system power draw. Both systems had the same graphics card, the same RAM, and as close to the same motherboard as they could get. You seem to be upset that they are doing well, which is puzzling to me.

My point was that a small difference in Cinebench (so not 5% but 10+%) still makes a big difference in UserBenchmark; that's what I was responding to.
I never said that the 9900K was being gimped, but around its release we saw how huge a difference the motherboard makes when it comes to power draw (and, to a lesser degree, performance).
Where is your proof that they selected the best possible combo?
Anecdotal, but still: properly configured at 4.7 GHz running Cinebench, the 9900K uses 131 W, versus 158 W at stock.
Cinebench R15.038: 131 W @ 1.12 V (158 W @ 1.216 V stock)
https://www.computerbase.de/forum/threads/9900k-undervolt.1832939/
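For what it's worth, the savings implied by those undervolt numbers work out like this (a quick sketch; the wattages are the forum poster's reported figures, not independently verified):

```python
# Power savings from undervolting a 9900K in Cinebench R15,
# using the wattages reported in the linked forum post.
stock_w = 158.0      # stock settings, 1.216 V
undervolt_w = 131.0  # 4.7 GHz all-core at 1.12 V

savings_pct = (stock_w - undervolt_w) / stock_w * 100
print(f"Undervolt saves {savings_pct:.1f}% power")  # ~17.1%
```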
These are the details regarding the test configuration. Where is your proof that they selected the best possible combo?
The 9900K is running in a default, out-of-the-box setup. It's debatable whether this is the best possible combination, though I fail to see what extra one could add to this setup to make it look better with regard to Cinebench scores.

Testing performed at the AMD CES 2019 Keynote. In Cinebench R15 nT, the 3rd Gen AMD Ryzen desktop engineering sample processor achieved a score of 2057, better than the Intel Core i9-9900K score of 2040. During testing, system wall power was measured at 134 W for the AMD system and 191 W for the Intel system, for a difference of (191 - 134) / 191 = 0.298, or ~30% lower power consumption.
System configurations:
AMD: engineering sample silicon, Noctua NH-D15S thermal solution, AMD reference motherboard, 16 GB (2x8) DDR4-2666 memory, 512 GB Samsung 850 PRO SSD, AMD Radeon RX Vega 64 GPU, graphics driver 18.30.19.01 (Adrenalin 18.9.3), Microsoft Windows 10 Pro (1809).
Intel: Core i9-9900K, Noctua NH-D15S thermal solution, Gigabyte Z390 Aorus, 16 GB (2x8) DDR4-2666 memory, 512 GB Samsung 850 PRO SSD, AMD Radeon RX Vega 64 GPU, graphics driver 18.30.19.01 (Adrenalin 18.9.3), Microsoft Windows 10 Pro (1809).
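AMD's footnote arithmetic above can be checked directly (the wattages are AMD's own measurements from the keynote footnote, not independent data):

```python
# Reproducing AMD's wall-power comparison from the CES 2019 footnote.
intel_w = 191.0  # measured system wall power, i9-9900K rig
amd_w = 134.0    # measured system wall power, Zen 2 engineering sample rig

diff = (intel_w - amd_w) / intel_w
print(f"AMD system drew {diff:.1%} less wall power")  # ~29.8%, rounded to 30%
```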
We are comparing a new node to a node that has gone through multiple iterations with improvements in power and efficiency. And calling Zen 2 a new architecture... an improved architecture, yeah, but "new"? :/
The 9900K is pushed well past the ragged edge of efficiency, and has a silly TDP as a result.
I'm excited to see gaming performance on Zen2, that's really the only arena that's still a big unknown at this point.
There is a reason why AMD chose this configuration for the demo: performance is more predictable, and there are fewer worries about threads jumping between chiplets.
They could always hire Principled Technologies to set everything up and run the benchmarks.
Jumping chiplets? The demo chip only has one chiplet. We don't even know whether each chiplet has two CCXs connected by internal IF like on Zen/Zen+, or whether it's an 8-core monolithic design.
Let's hope Intel doesn't add a "disable half the cores" switch and then call it "gaming mode", so no one can take advantage of that.
I as well. I'm long overdue for an upgrade, and I've waited this long. 12 cores minimum for my next CPU, though I would prefer a 16-core since I do distributed computing.

All I know is that I want to see a 12-core AM4 part this year.
Intel's out-of-the-box settings are entirely random depending on which motherboard vendor you buy from.
I don't have an express need for it, but I'd like one because I do occasional rendering (CAD-type stuff), Handbrake encodes, and video editing, and would like to be able to do heavier multitasking. I'm still on Bulldozer, so there are plenty of options that would be a big upgrade, but I've held off this long, so if I can get a 16-core for a reasonable price, all the better. I might end up getting an 8-core and then giving it to my nephew, who wants to get into PC gaming (I want to teach him how to build his own computer, and learn some things).
The 1950X is getting very affordable at $550, and some motherboards that support it are around $200. If you don't get the spendy RAM, it's not too bad to upgrade, has upgrade potential, and is faster (quad-channel RAM) than a comparable dual-channel system. I am building Ripper number 7 next week; parts are on the way.

I'll probably do something similar if AMD doesn't plan to release a CPU with that second chiplet initially. Upgrade my main rig at home first, and then move it to my HTPC when the 16-core is available. I got itchy to upgrade a few times, but then reality hit when I saw RAM and video card prices. Now that prices are dropping to more sane-ish levels, I don't think I'm going to be able to defer that upgrade itch again.
Are you using the same motherboard with all the builds?
I have one cheapo ASUS that was on sale; I'm not happy with it, but it works. The 2990WX is on the MSI board with something like a 19-phase VRM (not sure of the exact number), and it works OK, but all the rest are X399 Taichi. I love that board!
Yea, I got number 6, a 1950X, for $420 before Thanksgiving. It hasn't been close to that price since, so I said screw it and got the 2970WX.

Unfortunately, here in Canada we don't get nearly the same deals as our southern neighbours. Amazon.ca had a pretty good deal on a 1920X at $385, but I sadly missed out on it. It's never been that price since.
It would be a huge upgrade for consumers if a 3rd-gen Ryzen 8C/16T part could deliver 90% (or more) of Core i9-9900K gaming performance at a 65 W TDP and half the price.
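Taking that wish list at face value, the value proposition can be sketched like this (all inputs here are the comment's speculation plus an assumed ~$500 9900K street price, not real product specs or benchmarks):

```python
# Hypothetical value comparison: an 8C/16T Zen 2 part at 90% of 9900K
# gaming performance, 65 W TDP, half the price. The 9900K's 95 W TDP is
# Intel's spec; the $500 price and all Zen 2 figures are assumptions.
i9_perf, i9_tdp, i9_price = 100.0, 95.0, 500.0
zen2_perf = 0.90 * i9_perf
zen2_tdp = 65.0
zen2_price = i9_price / 2

perf_per_watt_gain = (zen2_perf / zen2_tdp) / (i9_perf / i9_tdp)
perf_per_dollar_gain = (zen2_perf / zen2_price) / (i9_perf / i9_price)
print(f"perf/W:  {perf_per_watt_gain:.2f}x")   # ~1.32x
print(f"perf/$:  {perf_per_dollar_gain:.2f}x") # ~1.80x
```

Note that TDP is only a rough proxy for actual gaming power draw, so the perf/W figure is an upper-level sketch at best.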
Yep, it's 14nm.