Yeah, I think it comes from what Adroc says below:

> Not sure why the distaste for Cinebench. Sure, it's kinda overused, but it's not like it's completely artificial - it's derived from Cinema 4D after all, is it not? An actual app people use for work. And the performance it measures translates fairly well to performance in other similar apps like V-Ray or Corona.
Actual Cinema 4D is GPU-accelerated.
Yes, and you need a capable GPU to use that, which most people don't have, so they stick to CPU rendering.

B0 step has been with us for a while; it just PRQ'd early January.
They can't do anything because of the 1T delta.
> B0 step has been with us for a while; it just PRQ'd early January.

Sorry, but this statement definitely deserves a lol.
> On occasion I just can't resist calling out blatant fabrications presented as fact.

?
> Yes, and you need a capable GPU to use that, which most people don't have, so they stick to CPU rendering.

No, even a pretty poverty GPU accelerates renderers well.
> No, even a pretty poverty GPU accelerates renderers well.

I have personal experience there, so I dare to disagree. Not to mention the VRAM limit, which you don't suffer from when using a CPU renderer.
> I have personal experience there, so I dare to disagree. Not to mention the VRAM limit, which you don't suffer from when using a CPU renderer.

Yeah, but if you're doing DRAM-heavy rendering, it's probably getting offloaded to a farm.
> Just how many client consumers are out there constantly making "content"? Not many is my guess.

It's a cope metric to say that they're faster at Cinememe, which, either way, won't help much.
That is fooking gorgeous.
> Yeah, but if you're doing DRAM-heavy rendering, it's probably getting offloaded to a farm.

Rendering something that requires more than the 8GB a poverty GPU would have is not DRAM-heavy rendering.
> Rendering something that requires more than the 8GB a poverty GPU would have is not DRAM-heavy rendering.

Well, if you are stuck with an 8GB VRAM GPU, you can blame Nvidia for that. We even have a whole topic for it. Come join us!
> Just how many client consumers are out there constantly making "content"? Not many is my guess.

Working on largeish codebases or dealing with a containerized landscape locally is quite common these days. 8-16 cores are fine, but the trend is clear.
> Working on largeish codebases or dealing with a containerized landscape locally is quite common these days.

No, it's not; it's the tiniest possible subset of the DIY CPU market.
> No, it's not; it's the tiniest possible subset of the DIY CPU market.

To use your style: No, nobody mentioned the poor peasant DIY market, lol. OEMs supply the pros.
> To use your style: No, nobody mentioned the poor peasant DIY market, lol. OEMs supply the pros.

From what I understand, this entire discussion is about the mainstream consumer market. Your original answer was to @Saylick, who was specifically talking about the client PC space (see "client consumers"), so we're discussing pros or semi-pros who try to get more value from mainstream hardware, or hobbyists / beginners.
> Indeed. If he did have profiling statistics from GNR-B0 silicon, Intel would very much like to see them.

I don't know how to tell you this, but Intel/AMD/Nvidia always know what each other are doing well in advance of anyone else. It would actually be more of a surprise if they didn't have a solid understanding of what Zen 5 is capable of by now.
> I need to preface this by saying that I don't have a clue, but core count in client has always seemed like a cope metric to prop up nT numbers in benchmarks for products that lack 1T, so that they don't look completely pointless (yet they do for most people).

In the client space there was a massive need for higher core counts in the pre-Zen era. The jump from 4 to 8 cores was much needed, but it also came with a bit of overprovisioning, in the sense that 6 cores cover most of the client space's needs. Slowly but surely this will become 8 cores, and then 12 cores in the next few years, but this movement is slow because the mainstream market needs ST performance more than it needs MT.
I can't believe people here are hellbent on justifying AMD's decision to limit desktop core count to 16. I understand AMD is a business and wants to maximize profit, and I also understand there could be technical issues as well. But why on earth do we as end users need to defend that? Our concern should be value for money, not how much profit businesses make.
> If perf/core was stagnating as well, like it was during the Sandy -> Skylake++++ era, then I could understand the issue, but we are getting significant perf/core increases gen on gen, so it is not like AMD are sitting still.

The difference is that AMD is capable of more cores on desktop; they already have 96/128 cores in server. Intel, on the other hand, hit a wall with Skylake and was unable to make any significant progress. Therefore it is not too much to ask for 24/32 cores on desktop. I know it is not profitable for them, but that is a different story.
> From what I understand, this entire discussion is about the mainstream consumer market. Your original answer was to @Saylick, who was specifically talking about the client PC space (see "client consumers"), so we're discussing pros or semi-pros who try to get more value from mainstream hardware, or hobbyists / beginners.

Duh, what is the definition of the client PC space? For example, is an i7-based Lenovo ThinkStation a client PC?