I was talking about maximum performance irrespective of resolution, but at maximum image quality.
For 1080p, there are already games today where the RTX 2060 is the bottleneck, because getting 120+ fps requires lowering the image quality settings (BF V, for example), so you need a faster GPU.
If you go with the RTX 2080 Ti because you want/need 144 fps at 1080p ultra, then you also need a CPU capable of driving the RTX 2080 Ti at those frame rates. And if you want to play at 160 or 200 fps at 1080p ultra, you will need an even faster card than the RTX 2080 Ti, which in turn requires an even faster CPU.
If you don't care about maximum image quality, then most $200 CPUs will get you to 120-144 fps with lower-performing GPUs like the RX 580 or GTX 1070 / RTX 2060, etc.
This isn't directed at you, just at the gaming-performance topic in general.
I'd love to see someone do an in-depth technical writeup with a performance profiler at the OS level, thread/CPU level, PCIe messaging level, RAM I/O, disk I/O, and GPU level, showing exactly what's occurring in this so-called 'bottlenecking'.
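A toy sketch of what such a writeup would start from. The numbers below are made up for illustration, but the core idea is real: given per-frame CPU-side and GPU-side times from a profiler, whichever side took longer is the limiter for that frame, and the longer of the two sets the frame-rate ceiling.

```python
# Hypothetical per-frame timings (ms) as a profiler might report them.
# The longer of cpu_ms / gpu_ms is the bottleneck for that frame.
frames = [
    {"cpu_ms": 6.1, "gpu_ms": 8.9},   # GPU-bound frame
    {"cpu_ms": 9.4, "gpu_ms": 7.2},   # CPU-bound frame
    {"cpu_ms": 5.8, "gpu_ms": 5.9},   # nearly balanced
]

cpu_bound = sum(f["cpu_ms"] > f["gpu_ms"] for f in frames)
print(f"{cpu_bound}/{len(frames)} frames were CPU-bound")

for f in frames:
    limiter = "CPU" if f["cpu_ms"] > f["gpu_ms"] else "GPU"
    cap_fps = 1000.0 / max(f["cpu_ms"], f["gpu_ms"])
    print(f"limiter={limiter}, frame-rate cap ~{cap_fps:.0f} fps")
```

The point being: "the CPU is the bottleneck" is only a meaningful statement per frame, per workload, with this kind of data in hand, not as a blanket claim.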
A 2700x can beat a 9900k in some games and a 9900k can beat a 2700x in others.
An 8-core 4.0 GHz processor vs. a 1.35 GHz GPU.
The CPU cores are not maxed out ...
This reflects code execution inefficiency and wildly variable structuring.
If you think these gaming companies are dedicating millions of dollars in staffing to ensure a video game is written super efficiently so it can run on the latest/greatest 8-core processor with a 2080 Ti, you're mistaken. In a number of cases that code looks like crap and runs as best it can on whatever hardware is thrown at it. Look at some of the performance increases in Battlefield 5 after updates: 20-30%. Same for plenty of other games in the past; 20-30% is common after a 'patch'. That's the bottleneck, not the processor.

At 4.0 GHz, a processor can do enough to make your eyes bleed; 120 Hz is nothing. However, if your code is written like crap and it's idle 50% of the time, that changes. And worst of all is PCIe communication latency: if you have a lot of chatter and inefficient memory access, that's your bottleneck.
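To put the "120 Hz is nothing" claim in rough numbers, here's a back-of-envelope frame budget. This deliberately ignores memory stalls, cache misses, and instructions-per-cycle, so it's an upper bound on available work, not a real throughput figure:

```python
# Back-of-envelope: cycles available per core per frame on a 4.0 GHz core.
# Optimistic simplification -- ignores IPC, memory stalls, scheduling, etc.
clock_hz = 4.0e9

for fps in (60, 120, 144, 240):
    frame_ms = 1000.0 / fps
    cycles_per_frame = clock_hz / fps
    print(f"{fps:>3} fps -> {frame_ms:6.2f} ms/frame, "
          f"~{cycles_per_frame / 1e6:.0f}M cycles per core per frame")
```

Even at 240 fps, each core has tens of millions of cycles per frame to spend; when a game still can't hit a frame-rate target, the budget is being wasted somewhere, which is the whole point above.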
It's funny that people think a modern 4.0 GHz processor is bottlenecking vidya games without any profiling data showing where the slowdown is coming from. This is not a pro-AMD post. This is an "if you make a claim, prove it in great detail" post. With all of the factors listed above, the CPU is only one piece of the puzzle.
Anyone remember this crap:
Most reviews are hot garbage... This is what happens when a mainstream phenomenon takes over a market instead of a highly informed and critical group of people. I'm really getting over this element of computing, because the more you dissect it, the less is actually there. Gaming is not the most compute-critical task in the world. A lot of the code is inefficient as heck and doesn't scale across cores. When you factor in all of the system elements, it's an absolute clown show to look at. If a company can release multiple software updates that result in 20-30% performance improvements (happens all the time in gaming), the software is the bottleneck, not the hardware. Then there's the Winblows scheduler and OS... Then there's the slew of posts I've already made on this topic regarding the physical limitations of your human vision...
The smallest “microsaccades” move the eye through only a few minutes of arc (one minute of arc equals one-sixtieth of one degree). They last about 20 milliseconds and have maximum velocities of about 10 degrees per second. The largest saccades (excluding the contributions of head movements) can be up to 100 degrees, with a duration of up to 300 milliseconds and a maximum velocity of about 500–700 degrees per second.
https://www.britannica.com/science/saccade
Oh, and no, you're not processing visual input during that time. Your brain masks it... AKA: dropped frames.
Meanwhile:
> WE WUZ Super human frame processors ... MUH FPS.
> Gamers have more significant compute requirements than anyone in the world.
Meanwhile, enterprise servers and supercomputers run on 2.0 GHz processors with far worse single-core performance. Fine, vidya game performance is vidya game performance. But don't make a mountain out of a molehill. It's a vidya game, and it's likely poorly written for high-end performance... And don't even get me started on the sheer number of boneheads who do reviews for profit.
In many parts of technology, we have created things that exceed human capability. Only gamers seem to think they are cyborgs with superhuman capabilities.
(A 4.0 GHz 8-core processor is too slow meng... It's holding up muh FPS)...
> Nah son !