If you're specifically targeting the "vast majority of PC users" and you're arguing that professional applications are a very small section of the market, then you're deliberately ignoring the only real place in the market where "performance" has any bearing at all.
Gaming, collectively, is a bigger subset of desktop apps than any single professional application out there. How many millions of copies of individual PC gaming titles are installed on PCs around the world? Your guess is as good as mine.
The performance appetite of gaming is insatiable. Even casual PC gamers are always looking to max out everything where possible, and they always fall short.
Gaming is also among the few desktop applications that consume the most system power, since it can stress every system component, including graphics and network.
So, given the above, the advent of e-sports, and, especially, the sheer number of people engaged in the activity on the desktop and their insatiable appetite for more performance, it's a bit myopic to overlook the importance of gaming performance in the desktop environment.
If you ARE ignoring performance that only exists in corner cases, such as bleeding-edge games with $1000+ video cards, then performance doesn't matter at all beyond making YouTube and web browsers responsive and capable of sifting through hundreds of open tabs, which, let's all admit, even a Ryzen 3 3100 or a Core i3 of any recent vintage can handle with ease.
Those are not corner cases at all. That's the problem with your argument. The 3900X and 3950X, though very good at multithreading (after all, these are HEDT chips in their own right, sporting more cores than any other desktop chip), are unimpressive in many desktop apps for those same reasons. Like many HEDT chips, they need multithreaded apps to unleash their impressive power. The desktop environment doesn't work that way, as you rightly pointed out with the 3100/i3 analogy.
How is it that a $750 16-core halo chip loses, in gaming, to a 6-core, mid-range chip from the competition? How come it loses in many single/few-threaded apps, which are the mainstay of the desktop? How come it burns significantly more power at idle, where desktop systems spend most of their time?
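To put the idle-power point in perspective, here is a rough back-of-the-envelope sketch. All the figures are illustrative assumptions (a hypothetical 15 W idle gap, 20 idle hours a day, $0.13/kWh), not measured data for any specific chip:

```python
# Hypothetical figures: the idle-power gap, daily idle hours, and
# electricity price below are illustrative assumptions, not measurements.
idle_gap_watts = 15        # assumed extra idle draw of one system vs another
idle_hours_per_day = 20    # desktop systems spend most of their time idle
price_per_kwh = 0.13       # assumed electricity price in $/kWh

kwh_per_year = idle_gap_watts * idle_hours_per_day * 365 / 1000
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year, ${cost_per_year:.2f}/year")
```

It's not a fortune per machine, but across a fleet of office desktops that sit idle most of the day, the gap adds up.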
The 10900k dominates in everything that is not fully multithreaded, and only loses to chips with higher core counts in highly multithreaded workloads, even winning some of them. That's what a halo chip is supposed to do!
So, what remains? Power draw under load? That doesn't look so good for Intel's latest. Purchase cost? Well, it's either a wash, or it tips in AMD's favor once you include the coolers needed for the high-end parts.
So, what is it you are pushing here?
Nice qualifier! A chip with horrible idle power consumption is not suitable for the desktop. It belongs in a production environment... how about... HEDT?
I'm disputing those who are saying these chips are "bad." They are not. Most of these chips are APUs, meaning individuals and organizations wouldn't need to invest in video cards for office use. They idle lower and perform basic office and classic/legacy desktop computing tasks as well as, if not faster than, the competition.
Nobody's shifting anything. You've got a history of blatantly misrepresenting data, and you're doing it again. Not that you're the only one on this forum who has tried to prove the 10900k to be better than it actually is by using Phoronix test data.
Okay, you don't like Phoronix's Linux benchmarking suite. Fine. Link the one you like then.
And you still ignored the circle plot from Phoronix showing that the 3900x was significantly better bang/buck.
So, you do like something in there after all! Alright, the 3900x is now cheaper, but we didn't need Phoronix to tell us that.
There is a big reason why Phoronix's geometric mean doesn't square with Anandtech, Tom's Hardware, Hardware Unboxed, Gamers Nexus, or anyone else's productivity benchmarks for the 10900k vs 3900x/3950x. Those "other" sites test with a narrower range of applications, chosen by the reviewers to inform their readers based on what said reviewers believe the readers want or need to see.
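This is exactly what a geometric mean does: because it's the nth root of the product of n scores, adding or dropping a few heavily multithreaded tests can swing the overall figure across the break-even line. A minimal sketch, using made-up relative scores (not real benchmark data) just to show the mechanism:

```python
import math

def geomean(values):
    """Geometric mean: exp of the mean of the logs (nth root of the product)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical relative scores (competitor = 1.0 baseline), purely illustrative.
scores_narrow = [0.95, 0.92, 0.97]            # a few lightly threaded tests
scores_broad = scores_narrow + [1.40, 1.35]   # plus two multithreaded tests

print(round(geomean(scores_narrow), 3))  # below 1.0
print(round(geomean(scores_broad), 3))   # above 1.0
```

Same chip, same individual results; the suite composition alone decides which side of 1.0 the headline number lands on.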
Where's the poll? I'd like to add my vote.
Most of the workloads are selected to be as heavily-threaded as possible, and in the case of unexpected outliers (like the Sea Slug simulation in Anandtech's review), they'll go out of their way to explain to you why results are the way they are (either because the test is sparsely-threaded or because it's more bottlenecked by something only vaguely related to IPC + clockspeed).
So, naturally, the 'threadrippers' do very well in the multithreaded tests. Sounds like a foregone conclusion, if you ask me.
The only reason why you (and one other user here - you know who you are!) are now quoting Phoronix is that it's the ONLY site you can find where they have any data that appears to make the 10900k look good as a productivity processor - if you're ignorant enough to quote it out of context. And in three days none of these mental gymnastics will be at all relevant, since the 3900x is on its way out early.
Uh oh, no need to get mad over this. Much more data has been posted in this forum. You just choose to dismiss it and accuse the posters of having "an agenda."