Yes, it is live preview in 3D.

The Cinema thing actually interests me, or better said, I wonder if the same speed-up applies to 3ds Max. By "viewport" you mean more performance/FPS when zooming/panning the camera around the modeling space, not referring to rendering speed, right?
Regarding me being wrong, I see it more as a difference in what one considers "significant". You say it's easily 10-20 percent faster; I say that's for the most part not a perceivable speed-up (10 percent definitely isn't, 20 percent is borderline), so it doesn't justify dropping 700 euros on the new CPU - to me.
Like, for example, I am using a 4090 for work with Octane render, and the only upgrade path for me is the 5090 (assuming it's again about 2x as fast as the 4090 for this particular task, like the 4090 was compared to the 3090 and the 3090 compared to the 2080 Ti). A hypothetical 5080 that's about 10 percent faster than the 4090 does nothing for me; if Nvidia won't sell a 5090 at all and the 5080 becomes the new top card, I am not paying 1200 for it while fooling myself that 10 percent of additional performance is "significant".
The way a game is run (in Windows) on dual-CCD SKUs (more specifically on dual-CCD X3Ds and the 99xxX) depends in the end on whether or not the game is included in the so-called KGL list. This is a file called "KnownGameList.bin" that can be found at this path: "<user_account>\AppData\Local\Microsoft\GameDVR"

The dual-CCD SKUs should behave exactly the same as single-CCD ones due to the X3D driver straight up disabling the other CCD.
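If you want to check whether a given game is on that list, here's a minimal sketch. The KnownGameList.bin format isn't publicly documented, so this just scans the raw bytes for the executable name encoded as UTF-16-LE (the usual Windows string encoding); the exe name below is only an example, and this is Windows-only:

```python
import os
from pathlib import Path

# Heuristic check for a game in KnownGameList.bin: the format is
# undocumented, so we scan the raw bytes for the exe name as UTF-16-LE.
# bytes.lower() only affects ASCII bytes, which is fine for ASCII names.
def game_in_kgl(exe_name: str) -> bool:
    kgl = (Path(os.environ["LOCALAPPDATA"])
           / "Microsoft" / "GameDVR" / "KnownGameList.bin")
    if not kgl.exists():
        return False
    return exe_name.lower().encode("utf-16-le") in kgl.read_bytes().lower()

print(game_in_kgl("witcher3.exe"))  # example executable name
```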
I believe quite a few people would disagree that 20% faster isn't significant. Gamers certainly would, as would anyone rendering.
But if I have an encoding job running in the background, that would either have to be stopped then, or it would absolutely cripple my game on the first CCD.

I don't have one, but from what I've read the second CCD would wake up as soon as you alt-tab out of the game.
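For what it's worth, if you don't want to stop the encoding job, you can in principle pin it to the second CCD yourself. A rough sketch with psutil; the process name and the core numbering are assumptions (logical CPUs 16-31 are often the second CCD on a 16-core part with SMT, but check your own topology first):

```python
import psutil  # third-party: pip install psutil

# Hypothetical workaround: pin a background encoder to the second CCD so it
# can't fight the game for the V-cache CCD. The core range is an assumption,
# valid for a 16-core dual-CCD chip with SMT where logical CPUs 16-31 map to
# the second CCD.
SECOND_CCD = list(range(16, 32))

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "ffmpeg.exe":  # example encoder process name
        proc.cpu_affinity(SECOND_CCD)  # supported on Windows and Linux
        print(f"Pinned PID {proc.pid} to the second CCD")
```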
Sorry, you are right, I missed the word "sustained".

So what? What exactly in my post do you not agree with?
I think the poster meant "if you don't have an urgent need, wait for the 9950X3D".

So, new Chiphell leak about the 9800X3D:
[Attached image: screenshot of the Chiphell leak]
"Frequency is quite high". lol OK. That should rule out anything under 5.4GHz, as the entire Zen 5 desktop lineup bottoms at 5.4GHz. Lower than the lowest is not "quite high". I still maintain that listed frequency iso with 9700X is impossible, so if this guy is for real, then its about a 98% chance 5.4GHz "max boost" will be on the box.
Why the leaker suggests waiting for the 9950X3D because of 9800X3D pricing is beyond my imagination, though. We've seen the MSI leaks showing lower fps for the 9950X3D than the 9800X3D - why suggest waiting for it because the 9800X3D will be expensive, when the 9950X3D will obviously be even more expensive and slower in gaming?
Mods, this is all in fun, remove if you need, but I saw this on 3DCenter_org's Xitter feed and it made me lol. It really hits home, I wish I'd thought of it.
EDIT: I just noticed the reds and blues are wrong, lol.
[Attached image: the 3DCenter meme]
Yah, my 9700X goes below 5.0 GHz during Cinebench runs (at default 65 W).

I wonder what the official boost clock will be.
5.2 GHz with all cores loaded in Cinebench is very good, but boost clocks typically apply to only 1 or 2 cores.
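If anyone wants to watch this on their own machine, a quick sketch with psutil logs the clocks once a second, so you can see the all-core clock settle below the advertised boost during a Cinebench run (note that psutil can't report true per-core frequencies on every platform, many Windows setups included):

```python
import time

import psutil  # third-party: pip install psutil

# Print CPU frequency once a second for ten seconds. Start a Cinebench nT
# run in parallel and watch the sustained all-core clock land below the
# single-core boost figure on the box. On platforms without per-core
# readings, psutil returns a single package-wide frequency instead.
for _ in range(10):
    freqs = psutil.cpu_freq(percpu=True) or [psutil.cpu_freq()]
    print(", ".join(f"{f.current:.0f} MHz" for f in freqs))
    time.sleep(1)
```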
This is the future.
I have a blast from the past that fits that pic:

Insert inappropriate rear view mirror joke here....
So, to paraphrase Pat Gelsinger, AMD won't be in the rear view mirror, and they won't be in the windshield either: they'll be riding shotgun, giving Intel instructions on how to drive, or else...
Will have some use...
This wouldn't be complete without Lisa Su's "enigmatic" smile...
Check your messages, we need proper colors for the Intel t-shirt. You can replace the one above though; it can still be the "last one".

Fixed. Last one, I promise!
Since a few people were talking about SMT uplift…
It's readily apparent to me that Zen 5 is a server architecture through and through... In some cases the SMT uplift is almost double.
Intel never got more than 10% for some reason. Their design must be garbage.

Which is why I am wondering about the wisdom of Lion Cove and Skymont dropping it.
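For reference, the uplift numbers being thrown around here are just the ratio of the same all-core benchmark run with SMT on versus SMT off; the scores in the example below are made up, purely for illustration:

```python
# SMT uplift: extra multithreaded throughput from the second hardware
# thread per core = score_with_smt / score_without_smt - 1.
def smt_uplift(score_smt_on: float, score_smt_off: float) -> float:
    return score_smt_on / score_smt_off - 1.0

# Hypothetical Cinebench-style nT scores, not real measurements.
print(f"{smt_uplift(41000, 29500):.0%}")  # -> 39%
```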