On a serious note are games going to use more cores

Aug 11, 2017
30
9
41
So I have a 1700 at 3.9 GHz with SLI 980 Tis.

When Nvidia drops something better than those two combined, I'll be going single card.

Is my 1700 going to be "future proof" for gaming?

My CPU upgrade options in the future are Ryzen 2/3, if there's a good jump in clocks and IPC.

Or back to Intel, which I would rather not do.
 

scannall

Golden Member
Jan 1, 2012
1,948
1,640
136
So I have a 1700 at 3.9 GHz with SLI 980 Tis.

When Nvidia drops something better than those two combined, I'll be going single card.

Is my 1700 going to be "future proof" for gaming?

My CPU upgrade options in the future are Ryzen 2/3, if there's a good jump in clocks and IPC.

Or back to Intel, which I would rather not do.
You should be fine for quite a while. The trend is that games slowly use more cores as time goes on. No need to go back to Intel, as you'll have a couple of generations of just dropping in a new CPU. I wouldn't bother until 7nm, though.
 
Aug 11, 2017
30
9
41
You should be fine for quite a while. The trend is that games slowly use more cores as time goes on. No need to go back to Intel, as you'll have a couple of generations of just dropping in a new CPU. I wouldn't bother until 7nm, though.
What I'm holding out for, and why I got an expensive board, is that Ryzen 3 may have Kaby Lake IPC and clock speeds with good temps and 8 cores.

Hopefully
 

Lodix

Senior member
Jun 24, 2016
340
116
116
What I'm holding out for, and why I got an expensive board, is that Ryzen 3 may have Kaby Lake IPC and clock speeds with good temps and 8 cores.

Hopefully
Zen 2 in early 2019 should already provide or even surpass that. Zen is a new architecture with a lot of things to be ironed out in hardware and software; I would expect more than a 15% IPC increase in 2 years. Also, GF 7nm is optimized for high performance, and we could see CPUs reaching 5 GHz at stock turbo clocks.

And it is not just games that slowly use more cores; the rest of the software that runs in the background does too.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,828
872
126
I think with the i5s moving to 6-core CPUs we should see an even quicker move to better multi-threaded support. Core i5s make up the bulk of the desktop fleet, I suspect (most businesses would get i5s?).
 
Reactions: Velvet thunder

Jhatfie

Senior member
Jan 20, 2004
749
2
81
I think with the i5s moving to 6-core CPUs we should see an even quicker move to better multi-threaded support. Core i5s make up the bulk of the desktop fleet, I suspect (most businesses would get i5s?).

I would agree with this assessment. With low-end i3s and Ryzen 3s being quads now, and affordable hexa-cores available, I would hope that multi-threaded optimization gets more of a boost.
 
Reactions: Velvet thunder

Mopetar

Diamond Member
Jan 31, 2011
8,004
6,446
136
If you're playing at higher resolutions you're far more likely to be GPU bound than limited by CPU performance. It seems like that may apply to you given you're using 980 Ti SLI.
 

gammaray

Senior member
Jul 30, 2006
859
17
81
IMHO, good coding techniques for making efficient use of multi-core CPUs in games remain largely to be invented.
 

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
Build it and they will come... Programmers will use what is available if more cores catch on and the mainstream goes that route... Thanks to AMD, it looks like it is! Finally! I think you are probably OK, as most games will still run on the current 2-4 cores. Though they will eventually state on the box that 8 cores or higher are recommended.

Core Balancing!
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,863
3,413
136
Consoles will drive core usage; the next big core-usage push will come after the PS5/Xbox-whatever launches.
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
Games have progressively been using more cores as time wears on. Hell, even Bethesda's Fallout 4 will use eight cores (much better than 4 cores + 4 hyperthreads), albeit pretty poorly as it has over 100k thread locks per frame.
 
Reactions: Headfoot

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
We have had cheap 4-core CPUs since the Q6600 in 2007, yet you can still play everything with a dual-core i3 in 2017, and nothing really requires more than 4 cores for anything but bragging rights. I think core count is never going to be a problem for the 1700; individual core performance will make that processor obsolete much earlier.
 
Reactions: beginner99

dullard

Elite Member
May 21, 2001
25,203
3,617
126
I know several people at Treyarch, the developer of Call of Duty: Black Ops (https://en.wikipedia.org/wiki/Treyarch). They have no plans at all to go beyond 4 cores. The reasoning is simple:
(a) games don't benefit much from more cores (the gains in games are minimal compared to the amount of programming effort needed) and
(b) they will always program for the lowest common denominator over all systems (including both computers and consoles). With both AMD and Intel still producing 4 core processors with their latest batch (Ryzen 3 and even the soon to be released Intel Coffee Lake i3), those 4-core chips will be out in the wild for many years.

It isn't worth their effort to attempt to use more cores, even if it were possible to gain much from it, for several more years.

Now, that is just one company. But the other game developers probably are considering the same issues.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I know several people at Treyarch, the developer of Call of Duty: Black Ops (https://en.wikipedia.org/wiki/Treyarch). They have no plans at all to go beyond 4 cores. ....

I think it is more a case of not being able to really load more than 4 cores consistently, rather than not using more than four cores. Many games show some benefit moving from 4 to 8 cores, but it is a small one. Diminishing returns.

We are looking at Amdahl's law in action.



[Missing graph: Amdahl's law speedup curves vs. core count for different parallel fractions]

This is the theoretical best case.

There is no way something irregular like game code will be anywhere near fully parallel like rendering tasks. Game code will have fully parallel sections, like driving the GPU, but game logic itself is likely very stubbornly serial.

How far can you push a typical game? 60% parallel, 40% serial? That's the dark blue line, second from the bottom. Note how little the performance increases moving from 4 cores to 8 cores (10-15%?). It's barely noticeable. It gets swamped by things like faster core speed or lower cache latency (7700K vs. 8-core Ryzen). But run those same games on a 4-core Ryzen vs. an 8-core Ryzen at the same clock speed, and then you can see the minimal speedup.

So, even when the game is mostly parallel (60%), and the engine automatically increases threads to match core count, the benefits you see after 4 cores are minimal. They are using more than 4 cores; you just don't see much benefit.

I expect this will be the case for most games. There will be a few games that are more parallel: RTS games like Ashes of the Singularity, where you are moving thousands of units that interact, so you can give smaller bundles of units to more cores to process independently. Let's say this raises Ashes to 80% parallel. That's the yellow line on the graph, a more noticeable bump in performance, say 25%+. Nice, but it can still be irrelevant if your clock speed is 25% slower, and this is likely approaching the best case for games.

Bottom line: Don't expect a revolution in gaming performance from an increase in core count. 4 cores really is approaching a sweet spot for code that is 60% or less parallel, and I expect that means most games.
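The percentages above can be checked directly against Amdahl's formula. A quick sketch in Python (the 60% parallel fraction is the poster's estimate, not a measured value):

```python
# Amdahl's law: overall speedup on n cores for code whose parallel
# fraction is p (the remaining 1 - p stays strictly serial).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# The 60%-parallel case discussed above: going from 4 to 8 cores.
s4 = amdahl_speedup(0.6, 4)   # ~1.82x over a single core
s8 = amdahl_speedup(0.6, 8)   # ~2.11x over a single core
gain = s8 / s4 - 1.0          # ~0.16, i.e. roughly a 16% uplift
```

Doubling the core count in the 60%-parallel case buys only about 16%, which lines up with the "10-15%?" eyeball estimate from the graph.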
 

dullard

Elite Member
May 21, 2001
25,203
3,617
126
I think it is more a case of not being able to really load more than 4 cores consistently, rather than not using more than four cores. Many games show some benefit moving from 4 to 8 cores, but it is a small one. Diminishing returns.
Exactly. The tasks that are easily parallelizable are already done in parallel by the GPU. What you are left with as a game programmer are a bunch of different situations that are hard to consistently determine a proper use of the additional cores.

Consider the typical first-person shooter type game. Suppose a programmer dedicated threads to the AI of the enemies. More cores would then let you have more enemies with better AI. That is great for the end of a level, where there may be many, many enemies against you. But what do you do with the additional cores when there are just a few enemies, or none at all? The cores sit there doing nothing. So, the programmer would have to dynamically shift to create other threads to do something else to keep those cores active. But what? If the player isn't doing much, what do you do with the extra cores? You end up using the extra cores 1% of the time and they sit idle the rest. That isn't a good use of 8+ cores.

Even worse, you might have a situation where a user with a Ryzen 7 has a different game than a user with a Ryzen 3, since the AI intelligence would increase with core count. How do you explain to the user that their scores or their accomplishments depend on the type of CPU that they bought before the game was released? Want to be the top in the world? Play on a Celeron, as the AI will be bad. Have an 18-core 7980XE? The AI is now impossible to beat.

So instead, the programmer needs to be even smarter. He/she needs to determine a good use of the cores in all situations, one that can dynamically change as needed. Keeping those hungry cores fed is a lot of work. And when you can keep them fed, Amdahl's law kicks in, where those tasks might not really be as parallelizable as the programmer hoped. Even worse, if you overdo the threads, you risk making the main thread of player movement bog down, and that would be very detrimental to gameplay. That is a lot of money to pay programmers for very little user benefit, and it could possibly make things worse.
 
Last edited:
Reactions: PeterScott

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Games are nowhere near the limit yet. For instance, every sound being played is independent and could be rendered in a separate thread. AI in particular would benefit from better multithreading, not to speed up the pathetic AI we have today, but to enable far more realistic behaviour.

Or you could just render all the sounds in one thread, as they don't take a lot of CPU grunt, so it's not worth splitting them out (which is more complex and takes more performance). Good AI is all about interaction, so you can't have each person doing his own thing; it's all about how they interact with each other. It's not quite as simple to split them out as it sounds.

Fundamentally, games are pretty synchronous as they run in real time: they have to sync every frame, and they have a combined state to maintain (synced with the online server). There's only so much you can split out in 1/30th of a second, given that a single thread still needs to split the work up and then combine it at the end.

That doesn't mean it's impossible to do, but it's a lot of work and has really got to be worth it for a dev to bother. Making stuff work over more threads is hard, so it's just not worth their effort for a few top-end machines; they'll code for the number of threads the masses have.
 

Mr Evil

Senior member
Jul 24, 2015
464
187
116
mrevil.asvachin.com
Exactly. The tasks that are easily parallelizable are already done in parallel by the GPU...
Given that games tend to fully load the GPU while there are a bunch of CPU cores idle, it's beginning to make sense to offload more non-graphical work onto the CPU. If you use something like OpenCL, then you can even switch work dynamically between GPU and CPU depending on which has the most available resources.

...Suppose a programmer dedicated threads to AI of the enemies. More cores would then let you have more enemies with better AI...
Having one thread per AI is the obvious way to divide the work, but it is certainly not the only way, or the best way. Consider pathfinding, which is something that AI has to do a lot of in many types of games, and which can take a lot of CPU time. Each search is independent, so you could do each in a separate thread.
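That idea can be sketched in a few lines. This is a hypothetical illustration, not code from any real engine: `find_path` is a stub standing in for a real A*/BFS search over the level's nav mesh, and the pool size is arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor

def find_path(start, goal):
    # Stand-in for a real A*/BFS search over a nav mesh;
    # here we just return a straight-line stub path.
    return [start, goal]

def plan_all_paths(requests, workers=4):
    # Each (start, goal) search is independent, so the searches can
    # run concurrently on a worker pool; map() preserves request order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda req: find_path(*req), requests))
```

Because the searches share no mutable state, no locking is needed between them; only the results are handed back to the main simulation thread.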

...But what do you do with the additional cores when there are just a few enemies, or none at all? The cores sit there doing nothing. So, the programmer would have to dynamically shift to create other threads to do something else to keep those cores active...
You wouldn't try to render more stuff on the GPU just because the player is looking at the sky, so why worry if there is less CPU work to do at times?

...Even worse, you might have the situation of a user with a Ryzen 7 has a different game than a user with a Ryzen 3 since the AI intelligence would increase with core count. How do you explain to the user that their scores or their accomplishments matter based on the type of CPU that they bought before the game was released? Want to be the top in the world? Play on a Celeron as the AI will be bad. Have an 18 core 7980XE? The AI is now impossible to beat...
Players are used to having to turn graphical detail down if they have a cheap GPU, so it would be perfectly natural for a Celeron owner to have to turn the AI down to low, while a ThreadRipper owner can go all the way up to ultra.

Or you could just render all the sounds in 1 thread as they don't take a lot of cpu grunt so it's not worth splitting it (which is more complex and takes more performance)...
Audio doesn't take a lot of CPU at the moment because the current tech is potato-level. To get an idea of what could be accomplished with more processing power, listen to the binaural audio in Hellblade. That's pre-recorded with a binaural microphone, so it only works for the specific sounds recorded, but it could be calculated in real-time for arbitrary sounds, leading to a massive improvement in immersion.

...Good AI is all about interaction - so you can't have each person doing his own thing as it's all about how they interact with each other. It's not quite as simple to split them out as it sounds...
AIs don't need to know anything about other AIs; they only need to know about the physical state of the world, just like when you interact with a real human you don't have access to what's going on in their head. That means that if you had one AI per thread, they would not require any communication with each other beyond their effect on the world, which is the same as is already required if you have only one thread for all AI.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
I think they will, but not for another 10-20 years, until 8+ cores are mainstream. Like when the i3/R3 line hits 8 cores minimum.

Until then it's really all about multitasking. I bought an 8-core Ryzen so I could game as well as do other tasks, such as running encoding or batch filter jobs on my photos/videos while I game, while losing no gaming performance.
 
Reactions: JimKiler

dullard

Elite Member
May 21, 2001
25,203
3,617
126
Given that games tend to fully load the GPU while there are a bunch of CPU cores idle, it's beginning to make sense to offload more non-graphical work onto the CPU.
Your logic is getting quite circular here. If I take your post quite literally, it says we need more CPU cores because our CPU cores are idle. Yes, we can use more CPU cores, that is not in question. What is in question is whether or not it is worth the cost to the programmers.
Having one thread per AI is the obvious way to divide the work, but it is certainly not the only way, or the best way.
Which goes back to the cost point. Anything that is obvious to make use of more idle cores has already been done. Yes, they can do more. But it is non-obvious and exceedingly difficult. Thus it is exceedingly expensive, and game developers want to make money, not come up with reasons to use more cores.
You wouldn't try to render more stuff on the GPU just because the player is looking at the sky, so why worry if there is less CPU work to do at times?
Back to my first comment above, you seem to be arguing that we need more cores so that we can have them idle?
Players are used to having to turn graphical detail down if they have a cheap GPU, so it would be perfectly natural for a Celeron owner to have to turn the AI down to low, while a ThreadRipper owner can go all the way up to ultra.
Yes, having more or fewer bells and whistles based on hardware is known and acceptable to users. But having the entire gameplay change is quite another thing. Think about what would then likely happen: you can't get past a level until your antivirus kicks in and the AI has to give up a core?

Yes, games can use more cores. The game developers, for the most part, just won't invest in it until the lowest common denominator has more cores. That won't be until the next major revision in CPUs from both Intel and AMD. And even then, some developers won't find it worth the effort to pursue those "non-obvious" optimizations. At the very best, we'll get a few games soon that will throw in a few bells and whistles if you happen to have more cores.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
So instead, the programmer needs to be even smarter. He/she needs to determine a good use of the cores in all situations, one that can dynamically change as needed. Keeping those hungry cores fed is a lot of work. And when you can keep them fed, Amdahl's law kicks in, where those tasks might not really be as parallelizable as the programmer hoped. Even worse, if you overdo the threads, you risk making the main thread of player movement bog down, and that would be very detrimental to gameplay. That is a lot of money to pay programmers for very little user benefit, and it could possibly make things worse.

You definitely want to stick with smarter programming. The goal is good performance, not keeping cores busy. We definitely don't want to turn it into busy work that just punishes lower core counts, just to make higher core counts look better, when smart programming would have had good performance everywhere.

Too many times, that is how new tech is marketed. I remember Microsoft first pushing DX10 (and Vista) by funding DX10 modes in some games that made DX9 look worse in comparison, except it was all BS, and modders enabled the same features in DX9, where they looked and performed just as well.

Or HDR monitors that make SDR content look worse, so HDR seems better than it really is:
https://www.youtube.com/watch?v=cgBzpYTn_8c&index=3&list=LLAZUTUJGRCVV5Jvkk5k69wQ

Too often new features are oversold by purposefully sabotaging the current feature set. Thankfully there isn't much sign that this is happening for multi-cores.

In reality the proper approach to performance improvements is profiling code to see where the bottlenecks are and addressing those the best way possible, which won't always be throwing more cores at the problem.

Obviously you get it, but many don't get the simple truth of Amdahl's law (it's actually VERY simple from a math perspective), and that much of gaming won't benefit dramatically from big core counts.

There is also the assumption that games are hard-coded for 4 cores because we had 4 cores for a long time. This really isn't the case. Once you discover the portions of the code that are suitable for parallel coding, and do the work of making that section of code parallel, it is ready for any number of threads. No programmer worth their salt would write a section of parallel code and hard-code it to 4 threads. They would read the number of available cores from the OS and set thread counts accordingly, or use OS/language constructs that simply have the OS decide the appropriate number of threads to apply to the parallel construct, like Apple's GCD.
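Sizing a pool from the OS-reported core count looks like this in practice. A minimal Python sketch (the one-reserved-core heuristic is illustrative, not a rule from any real engine):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_worker_pool(reserved: int = 1) -> ThreadPoolExecutor:
    # Size the pool from what the OS reports instead of hard-coding
    # 4 threads; keep a core or so free for the main/render thread.
    cores = os.cpu_count() or 4   # cpu_count() may return None
    return ThreadPoolExecutor(max_workers=max(1, cores - reserved))
```

The same binary then scales its worker count automatically on a 4-core or a 16-core machine, which is exactly why parallel sections written for 4 cores pick up 8 cores for free.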

We aren't seeing big performance lifts in modern games because (A) the GPU is really the bottleneck, and (B) Amdahl's law means there will typically be very little performance uplift moving from 4 to 8 cores in a mixed serial/parallel software load. It has nothing to do with games targeting 4 cores.

Modern games written with parallel code to take advantage of 4 cores will automatically take advantage of 8 cores. The lack of a performance boost is simply Amdahl's law.
 

JimKiler

Diamond Member
Oct 10, 2002
3,559
205
106
I know several people at Treyarch, the developer of Call of Duty: Black Ops https://en.wikipedia.org/wiki/Treyarch They have no plans at all to go beyond 4 cores. The reasoning is simple:
(a) games don't benefit much from more cores (the gains in games are minimal compared to the amount of programming effort needed) and
(b) they will always program for the lowest common denominator over all systems (including both computers and consoles). With both AMD and Intel still producing 4 core processors with their latest batch (Ryzen 3 and even the soon to be released Intel Coffee Lake i3), those 4-core chips will be out in the wild for many years.

It isn't worth their effort to attempt to use more cores, even if it were possible to gain much from it, for several more years.

Now, that is just one company. But the other game developers probably are considering the same issues.

If someone finds a way to efficiently code for >4 cores, devs will use it. But until then you are probably right. As others have said, I never want a game at 100% CPU usage again. Once I got a dual core and Windows was still responsive, I have always wanted more cores than games will utilize.

Also, I remember when 4 cores came out, reading that it was way easier to support 2 cores than 4 or more cores in games and software, and now we have support for 4 cores in games. So give it time, and 4 cores will be the minimum spec for game boxes.
 