Discussion Intel current and future Lakes & Rapids thread

Page 384

Mopetar

Diamond Member
Jan 31, 2011
8,000
6,433
136
Can someone bring me up-to-date on the state of gaming and CPU importance?

Unless you're specifically gaming at a lower resolution to alleviate GPU bottlenecks (which may legitimately be done for competitive reasons in an FPS like CS:GO or Overwatch), there isn't much difference in which CPU you use, since almost all games at 1440p or 4K will run into a GPU bottleneck at high or max settings. For some games, even 1080p at max settings is enough to cause a GPU bottleneck.

Take Red Dead Redemption 2 benchmarks from the recent AT review of the 11700K as an example.



It doesn't really matter which CPU you use, since performance is bound by the GPU at these settings. If, on the other hand, you drop the resolution to a comically low number that no one would ever use and turn the graphics down, you'll start to see a difference, along with some of the higher frame rate numbers you might be referring to.



Outside of a few titles, it really doesn't matter at the settings most people will actually be gaming on. Also, these results were from a 2080 Ti, which is a more powerful card than most people will be using, which means the average user will hit a GPU bottleneck even sooner and likely won't see much of a difference even when running lower resolutions or settings.
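The bottleneck argument above can be sketched as a toy model (all numbers are invented for illustration, not taken from the review): the frame rate you actually get is roughly whichever of the CPU and GPU rates is lower.

```python
# Toy model: delivered FPS is limited by whichever of CPU or GPU is slower.
# The numbers below are made up for illustration, not benchmark results.

def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frames per second actually delivered: the slower stage wins."""
    return min(cpu_fps, gpu_fps)

# At 4K/max settings the GPU is the limiter, so a faster CPU changes nothing:
print(effective_fps(cpu_fps=160, gpu_fps=70))   # 70
print(effective_fps(cpu_fps=120, gpu_fps=70))   # 70

# At 720p/low the GPU ceiling rises and CPU differences finally show:
print(effective_fps(cpu_fps=160, gpu_fps=300))  # 160
print(effective_fps(cpu_fps=120, gpu_fps=300))  # 120
```

This is why dropping resolution and settings in a review "uncaps" the CPU differences: it raises the GPU-side number until the CPU becomes the limiting stage.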
 

exquisitechar

Senior member
Apr 18, 2017
666
902
136
Assuming the same GPU, is a 5800X wildly better for gaming than a 10700K, or a 6-core Comet Lake for that matter?
No, the 5800X is about 6% ahead at stock at 1080p, and is often actually slower if you go all out with overclocking and tuning the core/memory/ring on the 10700K.
 
Reactions: scineram

Hulk

Diamond Member
Oct 9, 1999
4,367
2,232
136
Seems like a lot of (over) emphasis is put on gaming benchmarks when in reality most modern CPUs will run these games at more than playable frame rates, or am I missing something?

I guess part of it is using gaming benchmarks as a metric for overall CPU performance.
 

uzzi38

Platinum Member
Oct 16, 2019
2,698
6,393
146
New Hybrid CPU core changes for improved performance suggests it's a new core and not a simple Golden Cove refresh/renaming.
I disagree. It sounds to me like there are changes being made to the little cores: either a new core, or AVX512 support on the little cores so you don't have to pick between having them and having AVX512.

The thing that sounds more like a new core is the cache change on the following line, but we're talking a Sunny -> Willow type of change, without the huge 20+% improvement to clocks.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,794
4,075
136
Are you sure? It says "Improved CPU Cache for Gaming" on the box.

The only problem I have so far is Meteor coming after Raptor; seems like it may have quite the impact.

Maybe they can call it GameCache?

And I thought AMD's marketing was dumb (they are). Imagine trying to convince some middle-manager bozo that you need these new CPUs when they see "CPU Cache for Gaming"? I don't get it. These things aren't like an Xbox or PS. They are computers, meant to be able to do a whole host of things, gaming included.

I get it that at an enthusiast site gaming will come up a lot. It's annoying though when people assume that gaming is someone's reason for purchasing or upgrading a CPU.

That said, I wonder what they will do for this "gaming" cache. I can't imagine they can/would go back to an inclusive L3.
 

Mopetar

Diamond Member
Jan 31, 2011
8,000
6,433
136
Seems like a lot of (over) emphasis is put on gaming benchmarks when in reality most modern CPUs will run these games at more than playable frame rates, or am I missing something?

I guess part of it is using gaming benchmarks as a metric for overall CPU performance.

Kind of, but that's a recent phenomenon. If you look back at previous versions of Zen, there usually was a noticeable advantage for Intel in a lot of games at the kind of settings you would expect people to use, so there was an argument to be made. With Zen 3 that Intel advantage basically disappeared, and you're almost assuredly going to hit a GPU bottleneck with either brand of CPU.

Take the Ryzen 2700X when it first came out:



It's not massively behind the top Intel chip, only about 10%, and it was effectively equal to the previous Kaby Lake x700K chip.

Contrast this with the most recent Ryzen CPUs:



Note that the results aren't perfectly comparable since the graphics were VHigh in the first image and Max in the second, but the overall idea is that the AMD CPUs are basically at parity with the best Intel performance or possibly even slightly ahead.

So the reality is that it doesn't matter unless you really care about scraping out every last possible frame. However, that's a reality that we only recently arrived at. If you'd asked a year ago the answer would be different.
 
Reactions: Tlh97

dullard

Elite Member
May 21, 2001
25,184
3,608
126
Seems like a lot of (over) emphasis is put on gaming benchmarks when in reality most modern CPUs will run these games at more than playable frame rates, or am I missing something?

I guess part of it is using gaming benchmarks as a metric for overall CPU performance.
Part of it is tribal bickering. Both sides have a strong urge to show that their preferred company's CPU is better in any measurement you can find. For that, you are correct: it is over-emphasized.

But also, there is a lot of variety in people's video cards and how long they keep a CPU. You might have a 2080 Ti now and see AMD's lead only at high frame rates and an approximate tie at lower frame rates. But, that won't stay that way. You might have games with different needs later or a better video card later. Differences in gaming benchmarks at high frame rates help figure out what the situation might be in the future.
 
Reactions: Mopetar

Hulk

Diamond Member
Oct 9, 1999
4,367
2,232
136
Kind of, but that's a recent phenomenon. If you look back at previous versions of Zen, there usually was a noticeable advantage for Intel in a lot of games at the kind of settings you would expect people to use, so there was an argument to be made. With Zen 3 that Intel advantage basically disappeared, and you're almost assuredly going to hit a GPU bottleneck with either brand of CPU.

Take the Ryzen 2700X when it first came out:



It's not massively behind the top Intel chip, only about 10%, and it was effectively equal to the previous Kaby Lake x700K chip.

Contrast this with the most recent Ryzen CPUs:



Note that the results aren't perfectly comparable since the graphics were VHigh in the first image and Max in the second, but the overall idea is that the AMD CPUs are basically at parity with the best Intel performance or possibly even slightly ahead.

So the reality is that it doesn't matter unless you really care about scraping out every last possible frame. However, that's a reality that we only recently arrived at. If you'd asked a year ago the answer would be different.

I realize this is a subjective question but in Grand Theft Auto V what is a good average framerate for satisfying gameplay?
 

Mopetar

Diamond Member
Jan 31, 2011
8,000
6,433
136
I realize this is a subjective question but in Grand Theft Auto V what is a good average framerate for satisfying gameplay?

I only ever played it on console, which I believe capped out at 30 FPS, so anything over 60 FPS should feel great. For something like GTA, though, I'd look more at the 1% or 0.1% lows, or even try to find benchmarks that attempt to create situations that cause a massive dip, because even the ~30 FPS on console felt good enough, but any drop below that was obvious and a bit jarring. Looking at the AT review results, the graphics settings make a pretty big difference, since the 4K low results sit around 170 FPS for all CPUs while the 1080p max results are all around 95 FPS.

It seems like there's some room in the game to configure it to hit whatever frame rates you want. Frankly, it depends on the person: some people swear by 1080p at 100+ FPS, while others prefer moving up to higher resolutions like 4K as long as they can stay above 40 FPS.
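For what it's worth, the "1% low" metric mentioned above is usually derived from per-frame times rather than the average. A quick sketch of one common definition (the average FPS over the slowest 1% of frames; the frame-time list here is invented, and some tools instead report the 99th-percentile frame time converted to FPS):

```python
# Compute average FPS and a "1% low" FPS from per-frame render times (ms).
# Here "1% low" = average FPS over the slowest 1% of frames; tool
# definitions vary, so treat this as one common convention, not the only one.

def fps_stats(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, len(worst) // 100)                 # slowest 1% (at least one frame)
    low_1pct_fps = 1000 * n / sum(worst[:n])
    return avg_fps, low_1pct_fps

# Mostly-smooth 16.7 ms frames with a few 50 ms stutters mixed in:
times = [16.7] * 990 + [50.0] * 10
avg, low = fps_stats(times)
print(f"avg {avg:.0f} FPS, 1% low {low:.0f} FPS")  # avg 59 FPS, 1% low 20 FPS
```

Note how a handful of stutters barely moves the average but tanks the 1% low, which is exactly why the lows matter more than the average for a game like GTA.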
 

Mopetar

Diamond Member
Jan 31, 2011
8,000
6,433
136
The improvements for The Division 2 were rather impressive, but it looks like there was a regression in performance with The Witcher 3.
 

LightningZ71

Golden Member
Mar 10, 2017
1,657
1,939
136
My son has a Lenovo Flex with the i5-1135G7 that's replacing his previous Flex that had a 4700u that met an untimely end. That i5 is effectively just as fast in everything that he does, including the gaming he was doing, as the 4700u was.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
Please correct me if I'm wrong, but AFAIU the motherboard BIOS still doesn't get updated by Windows automatically right? In that case, I'd expect the average user not to update.

So given that a lot of motherboards already seem to have entered the retail channel, I think an older BIOS might actually be a fairly common user experience, especially the first few months. As such I feel that a lot of the arguing about BIOS versions feels a bit like splitting hairs. We'll likely get both user experiences.

Best we assume the average Ryzen user didn't update either, then, and B450/X470 motherboard owners buying Ryzen 5000 chips just had to RMA, due to no boot?
 

andermans

Member
Sep 11, 2020
151
153
76
Best we assume the average Ryzen user didn't update either, then, and B450/X470 motherboard owners buying Ryzen 5000 chips just had to RMA, due to no boot?

So I think the "no boot" case is a very strong signal that people should update, and they likely would; but otherwise, if people have a working but somewhat suboptimal setup, I think that's a reasonable assumption.

edit: to clarify I think there are a couple of reasons why someone would want to update the BIOS:

1) To scrape the bottom of the barrel wrt perf improvements.
2) If the user experiences a bug and suspects/hopes updating the BIOS might fix it.
3) A user is generally tech-savvy and likes to keep these parts up to date.
4) ...

I think 1 will result in consistent updates among a fairly small set of users. However I think 2 is a very strong impetus to either upgrade or RMA (look at e.g. the current AMD USB issues).

I believe most users in the DIY market are of the level "I can put the parts together" and not really in the performance enthusiast group that wants to scrape the bottom of the barrel (a bit of the same discussion why not a lot of people actually overclock).

As such, I actually think hardware-level motherboard compatibility for AM4 is overrated in most reviews. (AMD had a very confusing story about which chipsets support which CPUs, with the oldest boards not always supporting the newest CPUs; people don't switch every year; and the risk of needing to mess significantly with the BIOS is there.)

edit 2: The other consideration is that someone buying an AM4 motherboard now hopefully gets a reasonable BIOS, as the pace of BIOS updates should have slowed down a bunch after launch. It's especially at launch time that this gets messy.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,370
12,746
136
Discussing whether normal users would update the BIOS to get improved performance is a moot point:
  • it's extremely early in the cycle, so only a very limited number of users will end up with the initial BIOS in their new motherboards
  • even with the current BIOS, performance isn't a big problem; it's just mediocre and unsatisfactory for enthusiasts. Normal users who don't do BIOS updates probably wouldn't even care.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
So I think the "no boot" case is a very strong signal that people should update, and they likely would; but otherwise, if people have a working but somewhat suboptimal setup, I think that's a reasonable assumption.

edit: to clarify I think there are a couple of reasons why someone would want to update the BIOS:

1) To scrape the bottom of the barrel wrt perf improvements.
2) If the user experiences a bug and suspects/hopes updating the BIOS might fix it.
3) A user is generally tech-savvy and likes to keep these parts up to date.
4) ...

I think 1 will result in consistent updates among a fairly small set of users. However I think 2 is a very strong impetus to either upgrade or RMA (look at e.g. the current AMD USB issues).

I believe most users in the DIY market are of the level "I can put the parts together" and not really in the performance enthusiast group that wants to scrape the bottom of the barrel (a bit of the same discussion why not a lot of people actually overclock).

As such, I actually think hardware-level motherboard compatibility for AM4 is overrated in most reviews. (AMD had a very confusing story about which chipsets support which CPUs, with the oldest boards not always supporting the newest CPUs; people don't switch every year; and the risk of needing to mess significantly with the BIOS is there.)

edit 2: The other consideration is that someone buying an AM4 motherboard now hopefully gets a reasonable BIOS, as the pace of BIOS updates should have slowed down a bunch after launch. It's especially at launch time that this gets messy.

I was being sarcastic in my post. IMO the majority of folk building their own custom PC with high-end parts such as the Ryzen 5000 series or Intel 11th gen will be switched on enough to update their UEFI.

Of course there's always a bunch that won't, though I don't believe we should base a CPU's reviewed performance on a beta/old UEFI version just in case someone reading the review purchases the CPU and decides not to update.

Reviews, benchmarks, etc. should show the CPU with the release/live UEFI version. The same can be said of drivers for GPUs: most driver updates give trivial performance increases, though some are the exception. When AnandTech gets enough budget to manage a 3080 review, I doubt they'll be using pre-release drivers. I imagine they'll be using the latest drivers, which will contrast with performance from the capable, reliable review sites that managed a launch day/week review.
 

Hulk

Diamond Member
Oct 9, 1999
4,367
2,232
136
I have a feeling that if AMD's parts were topping out at 4GHz and were still at original-Zen performance levels, Intel would have already fully transitioned to 10nm. It wouldn't have mattered if clocks topped out in the low-4GHz range; as long as they held a small clock advantage and an IPC advantage, they could have enjoyed the profits from the denser 10nm process. Of course, this assumes they could achieve (or are achieving) decent yields at the proposed clocks.

But that damn AMD had to come along and challenge them on clocks and beat them on IPC, forcing them to stay on 14nm to mitigate the performance deficit with clock speed!
 
Reactions: Tlh97 and MangoX

coercitiv

Diamond Member
Jan 24, 2014
6,370
12,746
136
I have a feeling that if AMD's parts were topping out at 4GHz and were still at original-Zen performance levels, Intel would have already fully transitioned to 10nm. It wouldn't have mattered if clocks topped out in the low-4GHz range; as long as they held a small clock advantage and an IPC advantage, they could have enjoyed the profits from the denser 10nm process. Of course, this assumes they could achieve (or are achieving) decent yields at the proposed clocks.
Intel using 14nm instead of 10nm has nothing to do with clocks; we have data to back this up:
  • historically, Intel went with the denser node even if that meant clock stagnation or even regression in the early phase (see 2700K vs 3770K, 4790K vs 6700K)
  • current and soon-to-be-available 10nm parts are already close to or hitting 5GHz, which means 10nm 5GHz+ desktop parts would be possible if node fmax were the only problem
  • their competition used newer nodes and repeatedly managed to squeeze out higher and higher clocks, gradually approaching Intel's fmax through a combination of node and design optimizations
In conclusion, I don't believe desktop getting last-gen nodes has anything to do with max clocks, no matter who tries to spin this in the press (now Intel, tomorrow maybe even AMD). If that were the case, we would have gotten Ivy Bridge on 32nm++ with a 5GHz boost instead, and no later Intel desktop part would have dropped below 5GHz.

The issue is always cost and yields. A small fmax regression is nothing compared with what you can achieve by moving to a denser node; worst case, you keep the same transistor count and make the design more efficient.
 
Reactions: Tlh97

Hulk

Diamond Member
Oct 9, 1999
4,367
2,232
136
Intel using 14nm instead of 10nm has nothing to do with clocks; we have data to back this up:
  • historically, Intel went with the denser node even if that meant clock stagnation or even regression in the early phase (see 2700K vs 3770K, 4790K vs 6700K)
  • current and soon-to-be-available 10nm parts are already close to or hitting 5GHz, which means 10nm 5GHz+ desktop parts would be possible if node fmax were the only problem
  • their competition used newer nodes and repeatedly managed to squeeze out higher and higher clocks, gradually approaching Intel's fmax through a combination of node and design optimizations
In conclusion, I don't believe desktop getting last-gen nodes has anything to do with max clocks, no matter who tries to spin this in the press (now Intel, tomorrow maybe even AMD). If that were the case, we would have gotten Ivy Bridge on 32nm++ with a 5GHz boost instead, and no later Intel desktop part would have dropped below 5GHz.

The issue is always cost and yields. A small fmax regression is nothing compared with what you can achieve by moving to a denser node; worst case, you keep the same transistor count and make the design more efficient.

I like your analysis but I think there are a couple of factors not considered in your argument.

Regarding your point about going to denser nodes: yes, that is correct. The primary reason to move to a denser node was to add to the transistor budget while keeping die size the same or reducing it. Clocks were not a major concern.

But I think we have to consider two additional facts. First, until the transition to 22nm, the progression was relatively smooth; 14nm was the first time Intel encountered technical process "resistance" that actually caused significant production delays. In non-technical terms, the "tick" used to be easy.

Also, during those ticks Intel could have afforded a regression in fmax because AMD was so far behind, Apple had no or limited CPU design, and ARM wasn't a threat.

The tremendous process issues Intel has had for the last 5 years, coupled with the fact that AMD has caught up and passed them, have, I would argue, created a completely new dynamic, forcing them to remain on 14nm for that extra 10% of clock speed. While Tiger Lake clocks do seem quite high, we don't know whether, due to density issues (hotspots), they have only just now worked out how to get eight of these 10SF Willow Cove cores operating at high clocks in nT mode.

In summary, your argument has moved me, I will admit, but I'm still not convinced that Intel has remained at 14nm this long solely due to technical issues. I think part of it is AMD (and others) breathing down their necks, forcing the need for the highest clocks possible to remain competitive. I also think sufficient yield for the number of parts they need to ship is (was) part of it.
 
Reactions: Tlh97