Speculation: Ryzen 3000 series

Page 27

DrMrLordX

Lifer
Apr 27, 2000
No. AMD has always had latency issues compared with Intel. Look at Sandy Bridge vs. Phenom II and Bulldozer: a huge discrepancy in cache and memory latency. We'll see how they do with Zen 2; hopefully it's stellar and matches or beats Intel in gaming. I'm tempering my enthusiasm, unlike so many in this thread (like all AMD threads) who set unrealistic expectations and are then disappointed when the truth comes out.

Now you are digging into ancient history. If you want to go that far back, maybe you'd care to see how memory performance was on X2 machines vs. P4 rigs? None of what you are discussing has any relation to AMD's current latency trouble.

But I still find it hard to believe until it actually happens, since this time around they would be undercutting their own products, which haven't even been on the market for half a year.

If you look back at the PC industry in the 90s and aughts, Intel undercut themselves all the time. Hell, look at the original Pentium II (Klamath): fast, hot, locked to a 66 MHz FSB. Not too long after that, Intel revised the Pentium II (Deschutes) with a 100 MHz FSB, less heat, and higher clocks, PLUS the infamous Celeron 300A with its fast half-size L2 running at full clock speed. Anyone who bought a Klamath just before that product launch was screwed beyond belief. Intel did it anyway, and they made a ton of money and earned a ton of mindshare doing it. Who was their competition at the time? AMD and Cyrix had just been kicked off their platform, and Apple was still limping along with underperforming PowerPC chips. WinChip? Lol. Okay, the K6s were impressive in some non-FP tasks, but we all remember how they fared, don't we? Intel excelled for the sake of excellence. AMD needs to do the same today. If some 2950X owners feel slighted, too bad. At least we have our tech industry back, if only for a while.

Once they hit 3nm it's all over.
 

ozzy702

Golden Member
Nov 1, 2011
Now you are digging into ancient history. If you want to go that far back, maybe you'd care to see how memory performance was on X2 machines vs. P4 rigs? None of what you are discussing has any relation to AMD's current latency trouble.

I'm not the one who claimed it's a recent problem for AMD with Ryzen. I was just illustrating that Intel has always had a sizable lead in cache and memory latency, and that translates into superior gaming prowess.
 

dnavas

Senior member
Feb 25, 2017
But when you say you were annoyed by not being able to upgrade every year before, I understand your viewpoint. It is just vastly different from mine; I could not upgrade that often even if I wanted to, so I guess I tend to prize my purchases more, as I keep them longer.

We're not as different as you think. I didn't *actually* buy hardware every year; the point is that, in three years, I was able to quadruple my performance, instead of getting 12.5% or whatever it's been. I did buy hardware these past two years because the 8c just wasn't enough. The 16c is barely enough, and I'm looking forward to being able to run 32c, at which point I'll likely be able to stop. For a bit, anyway.

-Dave
 

Greyguy1948

Member
Nov 29, 2008
Regarding 7 nm vs. 14 nm and high clocks like 5 GHz:
Is cooling much more difficult at 7 nm?
Are you more limited to low voltages?
 

ub4ty

Senior member
Jun 21, 2017
It has been for well over a decade so...
I hope you've been around building for as long as he has and others because this Intel heckling is getting tired. Anyone whose been in this scene has rotated back and forth between Intel/AMD when the performance/value crown has traded places. At this point of performance and core count, there's nothing but heckling if someone continues to focus on details related to micro-architectures. I can almost guarantee you less then 1% of people who do this have a single clue or technical analysis doc exampling why there are performance differences between Intel/AMD beyond raw clock speed differences. Meanwhile, the fact that these vidya centric processors are still clown-bus (ringbus) . a dead methodology seems to evade these "experts".

Show me what happens to latency when you try to make a 16-core chip with a ring bus...

It is well known that games are tweaked for specific processors and optimized for specific video cards. Absolutely no one who matters cares about the extra 10 or so FPS you can get on Intel's processors vs. AMD's (if that). How well does a 2P 64-core compute monster run Crysis? No one cares. People care about raw computing. Throughput vs. latency: comp arch 101. You can't have both. If you want low latency, throughput is going to get sacrificed. If you want more cores, single-core performance will suffer and clocks will go down. People are off into the big-boy leagues of computing, and a vocal super-minority keeps ranting about single-core performance and vidya games... Come on.

This is the reason why I throw massive shade on the pseudo-deep-dive reviews that are all over the internet. Everyone suddenly pretends they have a grad degree in computer engineering and a decade of industry experience that allows them to comment on differences in micro-architecture. High-end gaming is an insignificant portion of computing; completely insignificant by the numbers. So, no, I don't want AMD to focus on how to make a silly hot-rod one-off processor like Intel does, with all sorts of micro-architectural hacks and shortcuts, in order to win yesteryear's advertising crown for best gaming CPU. Meanwhile this causes five hundred and eleven socket variants and micro-architectures. That's exactly why they're in the pickle they're in now. Look at the big picture.

I want AMD to focus, as they have, on building a robust all-around platform that I can build and scale out my compute on. A platform I can take from dual core all the way up to 128 cores with a familiar micro-architecture and core complexes. A platform that isn't a silly monolithic die that costs an arm and a leg. That is the future. Absolutely no one cares about the 1% of the market that wants a gutted drag-race car that slams frames to the GPU the fastest. No one runs this in enterprise computing. It's super inefficient, silly, and not scalable.

https://en.wikichip.org/wiki/intel/microarchitectures/coffee_lake



Ring interconnect... You can't scale this hot-garbage micro-architecture, which has been around forever, past 8 cores. So what are we talking about? You have low latency because you're on a ring bus. If AMD made a ring-bus CPU, they'd have low latency as well. The ring bus is dead.
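To put a rough number on why a ring doesn't scale, here's a toy back-of-envelope model (my own sketch, not from any vendor doc): the average hop count between two stops on a bidirectional ring grows roughly linearly (about n/4) as you add stops.

```python
def avg_ring_hops(n):
    """Mean shortest-path hop count between two distinct stops
    on a bidirectional ring with n stops."""
    total = pairs = 0
    for a in range(n):
        for b in range(n):
            if a != b:
                d = abs(a - b)
                total += min(d, n - d)  # go whichever way around is shorter
                pairs += 1
    return total / pairs

for stops in (4, 8, 16, 32):
    print(stops, round(avg_ring_hops(stops), 2))
# average hop count roughly doubles each time the stop count doubles
```

Real chips also add ring stops for cache slices and I/O agents, so the effect is worse than this toy model suggests.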

Every single processor Intel makes is obsolete. In 2020/2021, when they revise everything from the ground up with a long-term vision in mind, then we can talk about Intel. Until then, it's a pile of hot garbage, and don't even get me started on their insane mesh/tile interconnect micro-architecture, reminiscent of the utter disaster that was Xeon Phi:


Beyond a ridiculously complex architecture with tons of latency variation depending on which tile you're hitting, you have a huge power draw from the grout routing that runs between all of this. This is a mess. Lastly, this only works on a monolithic die: boom, costs have now doubled. There isn't a single thing Intel is doing right, and by next week AMD is going to bury them.

This isn't a drag-race contest of lowest latency. That was yesteryear's battle. We're talking about robust, scalable micro-architectures and long-lasting platforms. Intel can win all the latency contests they want. At the end of the day, that will result in a ridiculously expensive, dead, unscalable monolithic chip architecture that will lead to their bankruptcy.

If you want to be relevant in computing, you have to have a multi-die story going forward (period). Intel is going to have the same latency as AMD or worse once they get their heads out of their behinds and figure out how to design modern processors that are affordable and scalable.
 

TheGiant

Senior member
Jun 12, 2017
Power draw goes down for same performance. Cooling will be easier
Until we know the die area, don't be so sure about cooling. It's not total power but power per unit area that matters most.
I expect the 7 nm chips to have worse thermal performance.
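A quick illustration of the power/area point, with made-up but plausible numbers (these are not real die figures, just a sketch of the arithmetic):

```python
def power_density(watts, area_mm2):
    """W per mm^2 -- the number the cooler actually has to fight."""
    return watts / area_mm2

# Hypothetical: a 95 W, 200 mm^2 14nm die vs. the same design
# shrunk to roughly half the area and drawing ~60 W at 7nm.
d14 = power_density(95, 200)
d7 = power_density(60, 100)
print(f"14nm: {d14:.3f} W/mm^2, 7nm: {d7:.3f} W/mm^2")
```

Total power drops, but the heat is concentrated into a smaller spot, which is exactly why a shrink can be harder to cool despite drawing fewer watts.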
 

Insert_Nickname

Diamond Member
May 6, 2012
I have no clue what other people are doing with their processors, but my Ryzen 7's I/O is the bare minimum I can tolerate. 1 GPU / 1 NIC (all of my PCIe slots are gone), and then I have 1 NVMe for the main drive and some SATA. All of my I/O is gone.

That's how I feel as well. But I still like it. If, like me, you are not primarily a "gamer", Ryzen's I/O is sufficient. Not spectacular, but sufficient.

Besides, "sacrificing" 8 lanes from the GPU to add an additional NVMe drive doesn't matter too much to me. My CH6 allows me to have three PCIe x4 drives: two PCIe 3.0 x4 and one PCIe 2.0 x4. Which is currently sufficient.
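For anyone tallying along, here's the lane budget as I understand it (from memory, so double-check your board manual): an AM4 Ryzen exposes 24 PCIe 3.0 lanes from the CPU, and they're spoken for fast.

```python
# Rough AM4 lane budget from the CPU itself (chipset-provided lanes extra).
# Counts are my recollection, not an official spec sheet.
cpu_lanes = {
    "x16 GPU slot": 16,
    "NVMe drive": 4,
    "chipset link": 4,
}
total = sum(cpu_lanes.values())
for use, lanes in cpu_lanes.items():
    print(f"{use}: {lanes} lanes")
print(f"total: {total} lanes")
```

Once the GPU and one NVMe drive are in, everything else hangs off the chipset link, which is why boards share lanes between slots.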

But I'm still moving up to HEDT on my next upgrade. Because, why not?

Also, since everyone knew what was coming this year, there's nothing to be butthurt about. Lastly, stuff in computing gets halved all the time and depreciation is swift. Everyone should know this by now. Buy an NVMe drive for $240; 6 months later it's $120, which makes me go out and buy another and be thankful for the significant price drop.

Isn't progress wonderful? A few years ago, if anyone had said I'd be able to get a ~500GB (SATA, but still) SSD for a Ploughman (an old Danish expression for a 500 DKK note), I wouldn't have believed them. 128GB ones are pocket change now.

Coincidentally, that had the effect of replacing each and every internal HDD in sight. Again, why not?
 

FlanK3r

Senior member
Sep 15, 2009
Power draw goes down for same performance. Cooling will be easier


Could be, but all this -Redacted- on the internet is totally fake. You will soon see more legit info directly from AMD.

Profanity is not allowed in the tech forums.

Daveybrat
AT Moderator
 
Last edited by a moderator:

DrMrLordX

Lifer
Apr 27, 2000
Regarding 7 nm vs. 14 nm and high clocks like 5 GHz:
Is cooling much more difficult at 7 nm?
Are you more limited to low voltages?

Voltage limits are unknown. On high-core-count CPUs you should expect the current side of the power equation to go up, such that VRMs will matter more than ever. Overall power draw per unit of performance should go down, BUT there's an increased risk of hotspots.
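The current side of that equation is just Ohm's-law arithmetic; a quick illustration with made-up load figures (the 105 W and voltage numbers are hypothetical, not measured):

```python
def current_draw(power_w, vcore_v):
    """I = P / V: at a fixed power, a lower core voltage means more amps."""
    return power_w / vcore_v

# Hypothetical 105 W package load at two different core voltages.
print(f"{current_draw(105, 1.35):.1f} A at 1.35 V")  # ~77.8 A
print(f"{current_draw(105, 1.00):.1f} A at 1.00 V")  # 105.0 A
```

Those extra amps are what the VRMs have to deliver cleanly, which is why board power delivery matters more as voltages drop.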

Show me what happens to latency when you try to make a 16-core chip with a ring bus...


What happens is you wind up with multiple ring busses per chip and funky stuff like that. Pretty sure that's what happened on the higher core-count Broadwell Xeons, anyway.

Beyond a ridiculously complex architecture with tons of latency variation depending on which tile you're hitting, you have a huge power draw from the grout routing that runs between all of this. This is a mess. Lastly, this only works on a monolithic die: boom, costs have now doubled.

Technically they can go with a "tile of tiles" approach, which is what they're exploring with Cascade Lake-AP: two mesh-based CPUs "glued" together. I think it would work better if it were EMIB and not on 14nm++, but you know.

If you want to be relevant in computing, you have to have a multi-die story going forward (period). Intel is going to have the same latency as AMD or worse once they get their heads out of their behinds and figure out how to design modern processors that are affordable and scalable.

I would actually like to see what effects EMIB has on inter-die latency. It would be interesting to measure latency between CPU and GPU on Kabylake-G versus a "normal" Kabylake system with the GPU in a full-length PCIe slot.

Could be, but all this shit on the internet is totally fake. You will soon see more legit info directly from AMD.

Okay, I'm interested.
 

moinmoin

Diamond Member
Jun 1, 2017
Until we know the die area, don't be so sure about cooling. It's not total power but power per unit area that matters most.
I expect the 7 nm chips to have worse thermal performance.
Another reason why chiplets may be a good idea even in consumer chips: they add spacing and isolation between different areas, which are then easier to cool than if they were all part of the same monolithic die.
 

maddie

Diamond Member
Jul 18, 2010
I would actually like to see what effects EMIB has on inter-die latency. It would be interesting to measure latency between CPU and GPU on Kabylake-G versus a "normal" Kabylake system with the GPU in a full-length PCIe slot.

Why all these expectations for EMIB? After all, it's just a (maybe) cheaper mini-interposer. In fact, I see more deficiencies if you have more than one hop for a signal. Also, you can't have an active EMIB bridge, which will be the next big thing for assembling chiplets.
 

maddie

Diamond Member
Jul 18, 2010
Right now, it's all Intel has got. That and "glue".
Well, everyone has interposers. It seems to me just an attempt to create a proprietary tech. As ub4ty keeps harping on, that is what companies at their peak resort to doing.
 

jpiniero

Lifer
Oct 1, 2010
Why all these expectations for EMIB? After all, it's just a (maybe) cheaper mini-interposer. In fact, I see more deficiencies if you have more than one hop for a signal. Also, you can't have an active EMIB bridge, which will be the next big thing for assembling chiplets.

I imagine it would be better in general compared to how AMD is connecting the chiplets now. Which, by the way, I don't think there have been any solid rumors about: how the physical connection between the CPU chiplets and the I/O die actually works.
 

ozzy702

Golden Member
Nov 1, 2011
Lots of text that's not relevant to the comments regarding latency, more text, yup, some more text talking about workloads that fall outside the context of the original discussion.

In the context of gaming, which is what Moonbog asked about, I brought up latency because it does matter, and yes, AMD has had a serious deficit for, what, 13 years now, and it does kill their gaming performance. That deficit can be 30-40 fps, which is very relevant now, let alone down the line when faster GPUs are readily available. Yes, AMD has an overall superior strategy with IF and modular chip design, and yes, it lends itself very well to scaling. Yes, Intel will need to battle the same latency challenge that AMD is facing, and it looks like AMD has a sizable lead.

P.S. Don't be so emotional and defensive, this is just tech and keeping a cool head will serve your heart well down the line when you're an old duffer like some of us.

P.P.S. I like your overly long winded posts and your insights, keep it up. XOXO
 

ub4ty

Senior member
Jun 21, 2017
Until we know the die area, don't be so sure about cooling. It's not total power but power per unit area that matters most.
I expect the 7 nm chips to have worse thermal performance.
Agreed, but isn't that what those great heat spreaders are for?
There wasn't an issue going from 28nm to 14nm... why should there be one now?
AMD is going to release a product with heatsinks/fans that cool it sufficiently, just like they always have. I'm not worried about an issue that will obviously be resolved. If this hurts OCs, so be it... The automotive industry went through a similar maturation with the modder community: modern engines come stretched to the max from the factory with turbos, etc. But stock... I have zero worries with stock settings and cooling.
 

ub4ty

Senior member
Jun 21, 2017
In the context of gaming, which is what Moonbog asked about, I brought up latency because it does matter, and yes, AMD has had a serious deficit for, what, 13 years now, and it does kill their gaming performance.
Funny... I have been building for far longer than 13 years, and gaming performance today is mostly dictated by your graphics card, which, at the high end, currently runs $1,200 a pop and can be SLI'd towards $2,400. Now, how many "I need the most high-end, lowest-latency everything or the experience will be abysmal" gamers have $2,400 worth of vidya cards in their rig? But they're all about "performance" on the least significant piece of the puzzle... Yeah, sure. It's heckling, no more, no less, and no one cares about the past. I've switched between Intel and AMD so often I can't even recall. Same goes for ATI/Nvidia. Right now a 2700X is sufficient. Anyone who asks me for build advice, even gamers, I point to a 2700X, because it's sufficient and the platform is upgradeable when it becomes insufficient. The 9900K isn't a platform; it's a dead brick. In the world of PC building, in a year's time the most high-end processor of today is obsoleted by a cheaper, faster, less power-hungry mid-level processor. Unless you have some urgent business need that demands x, y, z, it's largely a waste of money to buy the highest of high-end anything. So, it's heckling through and through.

That deficit can be 30-40 fps which is very relevant now, let alone down the line when faster GPUs are readily available.
For the last and final time:
The 9900K is a ring-bus chip. That is the major reason there is better performance, and the ring bus is dead... there is no "down the line". The 9900K runs at 4700 MHz in benchmarks vs. 4000 MHz on the 2700X.
The 9900K costs double what the 2700X costs. And lastly, depending on the game, the deficit is 0 fps to 30 fps, reflecting that it has much more to do with game design/optimization than with the actual CPU. Is there CPU affinity tuning given the two 4-core domains on AMD? Are they intelligently minimizing cross-talk between the CPU complexes, vs. Intel's "flat" ring bus? All such details are part of modern CPUs. Intel won't be able to avoid this either. So, by bringing up the 9900K, you're having an argument about literally yesteryear's hardware. AMD is already entering their next iteration (something afforded by the "da latency is too high" design).

In some cases, the 2700X beats the 9900K by 20 fps... Yes, you heard that right. Then there's resolution, which impacts the deficit. The least of all problems relates to deep-down micro-architectural latency. And at the end of the day, you're laughably paying double for this cool-dude award ($540 9900K vs. $270 2700X). No matter what reasoning or gymnastics one commits on this topic, it ends up being foolish.
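Just to make the value math explicit, using the prices quoted above and a hypothetical title where the 9900K leads 150 fps to 120 fps (the fps figures are a made-up example, not benchmark results):

```python
def fps_per_dollar(fps, price_usd):
    """Frames per second delivered per dollar of CPU price."""
    return fps / price_usd

# Prices from the post above; fps numbers are illustrative only.
intel = fps_per_dollar(150, 540)  # 9900K
amd = fps_per_dollar(120, 270)    # 2700X
print(f"9900K: {intel:.3f} fps/$, 2700X: {amd:.3f} fps/$")
```

Even with a 30 fps lead, the cheaper part delivers well over 1.5x the frames per dollar in this example.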

Yes, AMD has an overall superior strategy with IF and modular chip design, and yes it lends itself very well to scaling. Yes, Intel will need to battle the same latency challenge that AMD is facing and it looks like AMD has a sizable lead.
That's all that matters... who has a winning strategy going forward, and who is busy trying to look relevant with yesteryear's dead micro-architecture. Intel won the (muh vidya fps) race (in some cases) in a gutted-out Civic with NOS... congrats.

P.S. Don't be so emotional and defensive, this is just tech and keeping a cool head will serve your heart well down the line when you're an old duffer like some of us.

P.P.S. I like your overly long winded posts and your insights, keep it up. XOXO
Zero emotion is involved, and there is no defensive posture on this matter. I try to present plain and obvious facts. When I feel it is obvious enough and there is no basis behind a certain position, I make sure to use colorful language to point out the absurdity with a little humor... it adds a little flavor and levity, IMO. It also urges the opposing viewpoint to bring their strongest case in rebuttal; you get to the meat of things quicker. I am happy you like my posts and insights! I enjoy your posts and others' as well. The only way you sometimes arrive at a higher understanding is if there is a challenge to a widely accepted belief. In this way, I give equal credit to the challenge of a viewpoint and the retort. The stronger the exchange, the better...
- ovoxo
 

ub4ty

Senior member
Jun 21, 2017
What happens is you wind up with multiple ring busses per chip and funky stuff like that. Pretty sure that's what happened on the higher core-count Broadwell Xeons, anyway.
Basically, a convoluted crap sandwich. Inject I/O into the equation and it becomes a glorified turd, which is partly why I/O is lacking with Intel.

Dat lopsided I/O... Intel went towards ("we wuz GPU") tiles because this was a dead-end approach with lots of downsides.

Technically they can go with a "tile of tiles" approach, which is what they're exploring with Cascade Lake-AP. It's two mesh-based CPUs "glued" together. I think it would work better if it were EMIB and not on 14nm++ but you know.
Nope... they'll essentially need a high-speed interconnect between tile clusters, which lands them in the same framework that AMD already has multiple generations of, except with a convoluted mess added on top. Are you referring to UPI? I see nothing regarding "tile of tiles": https://en.wikichip.org/wiki/intel/microarchitectures/cascade_lake


Lastly, in AMD fashion, they just took a dump on Intel's catch-up maneuver by introducing a centralized I/O chip... All core complexes now have equal access to I/O... meanwhile, Intel has only just caught up to AMD's old design.

I would actually like to see what effects EMIB has on inter-die latency. It would be interesting to measure latency between CPU and GPU on Kabylake-G versus a "normal" Kabylake system with the GPU in a full-length PCIe slot.

Okay, I'm interested.
I would and should say that I am too, but I have written them off. Their alphabet soup of nonsensical micro-architectures and the shenanigans therein is nauseating, and you're almost 100% guaranteed it will be overpriced. Plus they're still on 14nm... so what's the point? Then there's their IBM-like marketing with even more nauseating shenanigans. There's so much they have to do to get back on track that I've just decided to ignore them. They have Optane, but in usual fashion they're charging an arm and a leg for it. They have lots of great IP, but it's not translating into great value/performance products or a simpler marketing message.

Also, if I were to do one thing, I'd fire the guy who keeps naming their micro-architectures after lakes... I mean, give it a rest.
 