Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


inolvidable

Junior Member
Mar 30, 2009
5
2
81
Don't know what Google said, but there are reports that after AMD showed off Ryzen on the open-source Blender, Intel updated the code path to add AVX support. Yet AVX failed to give Intel any meaningful improvement. I never personally worked with AVX, because it's faster to use C++ AMP to offload calculations to the GPU.

Thanks. That is a useful piece of information.

Going by The Stilt's benchmarks, the gains in x265, for example, seem to be between 15% (Haswell) and 39% (Skylake) at the same fixed clock speed. After adding OC potential they should go a notch higher.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
The main problem I see is that we are talking about things that may or may not happen. I share most of your views but I fear I could be underestimating Intel. IMHO one of the most cost-effective ways to crush the 8C/16T Ryzen (the ones aimed at workstations) would be to reverse course and invest big money so the major players implement 256-bit AVX2 ASAP in some of the "must have" software that can benefit from it. This way, by the time Skylake-X and the X299 chipset arrive (in a few months), Intel would have a significant advantage over Zen out of nothing, I mean, just by tweaking software with no need to change the price scheme or anything at the hardware level.

As a side note, we saw big headlines last week about Google choosing Skylake CPUs for their servers due to the leap in performance that the AVX-512 instruction set offers for this segment. In Google's words, "Skylake's AVX-512 doubles the floating-point performance for the heaviest calculations" and is therefore being advanced as eminently suitable for "scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations."
I get that it is not an accurate comparison, because the server segment has little to do with the HEDT one, and because Naples was not there when this deal was made. However, I think it is a good example of the instruction-set strategy already being used by Intel with success. By the way, AVX-512 seems to be one of the most important selling points for the CannonLake Xeons (it seems like the Skylake "mainstream" Xeons would have it at the hardware level, but disabled for some reason).

I am neither a programmer nor an Intel strategist, and what I see as the most effective way to hurt Zen in the HEDT segment might well be wrong for a myriad of reasons. Unfortunately, recent events and the lack of knowledge on my part are advising me to wait until Intel makes some move, and in the meantime enjoy every new discovery about Ryzen and every new performance gain it gets through updates to microcode, BIOS, Windows or software in general.

All in all I am truly impressed with AMD, and I really hope they can offer a similar level of performance at half the price of their Intel counterparts for a long time, so they can regain some deserved market share and, along the way, force true innovation in CPUs at reasonable prices again.
I was under the impression that adding support for new instructions like AVX2 in software is difficult, and that this explains why adoption of such instructions has been slow and costly since the dawn of time. Real slow.
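A rough illustration of why: shipping an AVX2 path usually means writing it next to a scalar fallback and dispatching at runtime, since the same binary still has to run on CPUs without AVX2. A minimal sketch using GCC/Clang built-ins (the function names and toy workload are made up for the example, nothing here is from x265 or Blender):

```c
/* Sketch of the usual AVX2-adoption pattern: an AVX2 kernel next to a scalar
 * fallback, selected at runtime.  Real encoders like x265 hand-write much
 * larger assembly kernels per architecture; this is only the shape of it. */
#include <immintrin.h>
#include <stddef.h>

static void add_arrays_scalar(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}

__attribute__((target("avx2")))
static void add_arrays_avx2(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {                 /* 8 floats per 256-bit register */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)                           /* scalar tail */
        dst[i] = a[i] + b[i];
}

void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    /* The same binary must still run on pre-AVX2 CPUs, so dispatch at runtime. */
    if (__builtin_cpu_supports("avx2"))
        add_arrays_avx2(dst, a, b, n);
    else
        add_arrays_scalar(dst, a, b, n);
}
```

Keeping two (or more) code paths tested and tuned per architecture is the real cost, which is part of why adoption lags the hardware by years.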

With Google, Intel is probably fighting dedicated fixed-function hardware. It's a different market.

AMD is 10% the size of Intel, so I don't really see how one can underestimate Intel.

If you take the 1800X, clearly it's expensive enough that a 30% AVX2 deficit vs the 6850K is sizable. But it's still more or less the same cost and perf, even for AVX2.
Even more so for a 1700 vs a 6800K. Add that the 1700 is far leaner.
I think AVX2 is a non-issue because the brute FPU perf of Zen gives it similar AVX2 performance for the cost. It's that simple to me.

The issue, IMO, is: do you need the quad-channel memory bandwidth or the extra PCIe lanes? And if so, what is the perf value of it?
If yes, go Broadwell-E 6800K; if no, go 1700.

Software will also be updated for the Zen arch, and probably far more so. Heck, it just got here.
 
Reactions: inolvidable

Agent-47

Senior member
Jan 17, 2017
290
249
76
We aren't interested in Ryzen matching Intel in absolute performance - it's relative to its own performance in other applications.

Okay, so why are we discussing it?

It's the first version of Zen. Was Nehalem perfect? It took Intel the better part of the last 10 years to refine Nehalem to the state it is in today.

If you do not want to compare with Intel, all you are saying is that there are areas AMD can work on for the next half decade going forward, which I am sure they know. But it's good that there are obvious areas from which AMD can milk more performance.

If you do not want to compare with Intel, you should not say things like "These are all problems which are non-existent on Intel's L3 cache design."
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
What are the chances that Ryzen will have SMT working by the end of the year? If I can't emulate dual/tri-core Ryzen with Ryzen 7 for multiple gaming units on one CPU, I'm going to ignore Ryzen till Ryzen+ (Ryzen 2 or whatever).

I pray they fix it.
This was the most interesting hardware upgrade I was going to do

There's nothing wrong with it, it's just being mishandled by software.

You can already use Ryzen for what you want - but I'd be very wary about emulating across the CCXs. For emulation, I'd treat it like a dual-socket system with two quad-core CPUs.

If you can lock a couple VMs to four/eight threads on one CCX and share those cores you might have a better end result than limiting the total number of cores.

As in:

CCX ONE:
VM 1: Threads 0, 2, 4, 6
VM 2: Threads 1, 3, 5, 7

CCX TWO:
VM 3: Threads 8, 10, 12, 14
VM 4: Threads 9, 11, 13, 15
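If you want to enforce that split yourself rather than trust the scheduler, the same layout can be expressed with CPU affinity. A rough Linux sketch following the "VM 1" row above (the 0,2,4,6 numbering is an assumption; check the actual SMT-sibling layout with lscpu or /proc/cpuinfo first):

```c
/* Rough sketch: pin the calling process (e.g. a VM's host process) to the
 * logical CPUs of one CCX, matching the "VM 1" row above.  Linux-specific,
 * and the thread numbering is assumed, not guaranteed. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);

    const int vm1_threads[] = { 0, 2, 4, 6 };    /* one SMT thread of each core on CCX 0 */
    for (int i = 0; i < 4; ++i)
        CPU_SET(vm1_threads[i], &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* pid 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    puts("pinned to logical CPUs 0,2,4,6");
    return 0;
}
```

In practice taskset, or the hypervisor's own vCPU-pinning options, do the same job; the point is just to keep each VM's threads inside a single CCX so its working set stays in one L3.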
 
Reactions: Drazick

virpz

Junior Member
Sep 11, 2014
13
12
81
The difference we are talking about is fourfold. After ~4MB of L3 has been written on Ryzen by one core, it looks like that data is then no longer accessible anywhere near as quickly. This may be a result of its victim-cache policy, unseen cache contention from other cores, a CCX bug, L3 reservations, or even an AGESA/BIOS bug.

On Intel's 6900K, any core can access 16MB with near-perfect uniformity. Ryzen can only do about 4MB - and that 4MB won't be the same 4MB between context switches about 50% of the time, which makes it even worse, as cache lines are then flushed and all new data is fetched from RAM (with abysmal latency to boot).

There's a perfect storm of problems that impacts random-access memory operations negatively (such as fetching the data for AI computations in game, or organizing and issuing draw calls from that data...).

Random-access performance is only as expected in that ~4MB region. Cache-aware applications will treat Ryzen as having fast, uniform access to either 8 or 16MB of data, depending on whether they are NUMA-aware and have enough data to fill more than that. Many game engines are cache-aware simply because they can manage the data stream more efficiently... and try to keep computations in the caches.
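That behaviour is the kind of thing a pointer-chasing latency test exposes: link the cache lines of a buffer into a randomly shuffled cycle and time a long chain of dependent loads, sweeping the buffer size. A bare-bones sketch, with the sizes, hop count, and 64-byte line size purely illustrative (a real run would also pin itself to one core first):

```c
/* Bare-bones pointer-chase probe: build a randomly shuffled cycle of cache
 * lines in a buffer, then time how long each dependent load ("hop") takes. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINE 64                                       /* assume 64-byte cache lines */

static double ns_per_hop(size_t bytes, size_t hops)
{
    size_t n = bytes / LINE;
    size_t *perm = malloc(n * sizeof *perm);
    char *buf = malloc(n * LINE);

    for (size_t i = 0; i < n; ++i) perm[i] = i;
    for (size_t i = n - 1; i > 0; --i) {              /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < n; ++i)                    /* link the lines into one cycle */
        *(size_t *)(buf + perm[i] * LINE) = perm[(i + 1) % n];

    size_t cur = perm[0];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; ++i)                 /* every load depends on the last */
        cur = *(size_t *)(buf + cur * LINE);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = cur; (void)sink;           /* keep the chain alive */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(perm); free(buf);
    return ns / (double)hops;
}

int main(void)
{
    size_t sizes_mb[] = { 2, 4, 8, 16 };              /* straddles the ~4MB point above */
    for (int i = 0; i < 4; ++i)
        printf("%2zu MB buffer: %.1f ns per hop\n",
               sizes_mb[i], ns_per_hop(sizes_mb[i] << 20, 20000000u));
    return 0;
}
```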

Windows's scheduler is cache aware but does not treat the two L3 segments as anything special - even when it does accurately identify them. Cache locality is lost on non-pinned thread groups.

These are all problems which are non-existent on Intel's L3 cache design.

Hello.

I've been following this Ryzen CCX/threads/core/cache issue for some time now and something keeps nagging at me...

Can't it be that the coherency / affinity is not really working as it's supposed to, plus the Neural Prediction and/or Smart Prefetch are getting fooled by this issue and so are aggravating the problem?
 

inolvidable

Junior Member
Mar 30, 2009
5
2
81
I was under the impression that adding support for new instructions like AVX2 in software is difficult, and that this explains why adoption of such instructions has been slow and costly since the dawn of time. Real slow.

With Google, Intel is probably fighting dedicated fixed-function hardware. It's a different market.

AMD is 10% the size of Intel, so I don't really see how one can underestimate Intel.

If you take the 1800X, clearly it's expensive enough that a 30% AVX2 deficit vs the 6850K is sizable. But it's still more or less the same cost and perf, even for AVX2.
Even more so for a 1700 vs a 6800K. Add that the 1700 is far leaner.
I think AVX2 is a non-issue because the brute FPU perf of Zen gives it similar AVX2 performance for the cost. It's that simple to me.

The issue, IMO, is: do you need the quad-channel memory bandwidth or the extra PCIe lanes? And if so, what is the perf value of it?
If yes, go Broadwell-E 6800K; if no, go 1700.

Software will also be updated for the Zen arch, and probably far more so. Heck, it just got here.

Thanks. This has been very helpful for properly setting the value of Zen from a performance/cost perspective, even if Intel manages to spread the adoption of AVX2.
 
Reactions: krumme

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
Google Translate'd - PCGH.de - Zen 2 could appear as Pinnacle Ridge early 2018

Original

--------------------

As expected from what Mark Papermaster said a while back,

Papermaster confirmed the four-year lifespan and tapped the table in front of him: “We’re not going tick-tock,” he said. “Zen is going to be tock, tock, tock.”

it seems AMD will go for yearly updates to Zen. Pinnacle Ridge around this time in 2018 could mean a quick update like Bulldozer -> Piledriver, a healthy IPC increase from implementing/tweaking stuff that was left on the table for first generation Zen, plus a healthy increase in frequency headroom at the same/lower power.

Obviously, this will be a drop-in upgrade for AM4 boards.

The FX-8350 made the FX-8150 look like a sad joke, and that's being nice to the 8150. I don't expect Zen 2 to be that much better vs Zen 1, but who knows. It might easily clock above 4GHz at safe voltage on all eight cores while providing 10% better performance per clock... or be a 5% increase like Intel has us accustomed to, plus a meager frequency increase.

We'll see. AM4 is a great platform to be in these next 4 years into 2020-2021, lots of CPU upgrades down the road. Zen is certainly a decent base to iterate on, unlike Bulldozer.
 
Last edited:

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Google Translate'd - PCGH.de - Zen 2 could appear as Pinnacle Ridge early 2018

Original

--------------------

As expected from what Mark Papermaster said a while back,



it seems AMD will go for yearly updates to Zen. Pinnacle Ridge around this time in 2018 could mean a quick update like Bulldozer -> Piledriver, a healthy IPC increase from implementing/tweaking stuff that was left on the table for first generation Zen, plus a healthy increase in frequency headroom at the same/lower power.

Obviously, this will be a drop-in upgrade for AM4 boards.

By that time, there should be better software optimisations for the design too, so even more uplift over what we see now.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
I was under the impression that adding support for new instructions like AVX2 in software is difficult, and that this explains why adoption of such instructions has been slow and costly since the dawn of time. Real slow.

With Google, Intel is probably fighting dedicated fixed-function hardware. It's a different market.

AMD is 10% the size of Intel, so I don't really see how one can underestimate Intel.

If you take the 1800X, clearly it's expensive enough that a 30% AVX2 deficit vs the 6850K is sizable. But it's still more or less the same cost and perf, even for AVX2.
Even more so for a 1700 vs a 6800K. Add that the 1700 is far leaner.
I think AVX2 is a non-issue because the brute FPU perf of Zen gives it similar AVX2 performance for the cost. It's that simple to me.

The issue, IMO, is: do you need the quad-channel memory bandwidth or the extra PCIe lanes? And if so, what is the perf value of it?
If yes, go Broadwell-E 6800K; if no, go 1700.

Software will also be updated for the Zen arch, and probably far more so. Heck, it just got here.
If that's so... Intel would get murdered totally and royally... meanwhile, their Atom team would start to get desperate, since the octo-core VIA is about to appear and would impress from that side.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
It really doesn't matter if a game can run 200FPS or 300FPS... no one, at all, would say a game running at 200FPS is running poorly.

It's the conclusions drawn from the data that are problematic - and the idea that this somehow indicates future gaming performance... an idea which has been thoroughly debunked when comparing CPUs with different core counts.

However, if you were testing two quad/hex/octo core CPUs with SMT... suddenly these tests are fully relevant again.

If you were testing just how a single game scales with CPU performance, then it would be very relevant as well. It would set the floor for what people should expect from that game.

But comparing a quad-core at 4.5GHz+ to an octo-core at 3.7GHz at low resolutions, seeing a 10% difference in framerates that are well above what is needed, and then declaring the octo-core a bad gaming CPU is... well... dishonest.

In order for my 2600K to bottleneck my RX 480, I had to drop to its lowest possible resolution (1440x900, IIRC) and the lowest settings across the board (50% resolution scaling as well...). Then my RX 480 started to be used at only ~90% in a few peaks - therefore being held back by my CPU. Never mind that I was running 200FPS+ on average in multiplayer.

If Ryzen can only manage 180FPS in that scenario... guess what? I don't care - I don't play like that - I can't even make out people on the ground. As soon as I turn the game up to playable settings, I become GPU bottlenecked - and still push 120FPS @ 1080p.

As soon as settings reach that level of performance in a game - CPU performance is irrelevant for that game, move on.
The issue I have is that these benchmarks are not presenting the worst-case scenarios in any of these games. They show games which are known to dip into the 60s and lower as having over 150 FPS, because they are not benchmarking the spots which need a fast CPU. Using low resolutions to test is a quick and dirty way to see which CPU is faster, without having to create their own CPU benchmark that hits the CPU hard. It would appear they use the benchmark setups meant to test your GPU, which are very easy on the CPU, so they resort to low resolutions to create the CPU bottlenecks.

While the artificial CPU-bound scenarios created with low resolutions do give us an idea of what is faster, ideally we'd want to test those CPU-bound areas instead. It's possible those areas will push the CPU differently than the low-resolution tests do, even though I believe the low-res test is likely going to be close to accurate.

You see a lot of people hitting the forums complaining about sub-60 FPS in various games online and in various locations in single-player games. Their Afterburner data shows a clear CPU bottleneck with i7-6700Ks, but they have a hard time believing it, because the benchmarks shown are never the CPU-bound areas.
 
Reactions: CatMerc

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Okay, so why are we discussing it?

It's the first version of Zen. Was Nehalem perfect? It took Intel the better part of the last 10 years to refine Nehalem to the state it is in today.

If you do not want to compare with Intel, all you are saying is that there are areas AMD can work on for the next half decade going forward, which I am sure they know. But it's good that there are obvious areas from which AMD can milk more performance.

If you do not want to compare with Intel, you should not say things like "These are all problems which are non-existent on Intel's L3 cache design."

I was only saying that gaming performance relative to Intel isn't the problem with Ryzen - it's the gaming performance relative to how Ryzen performs in practically everything else.

The nature of that disparity gives us insight into what is likely to be causing it.

We only compare the relative performance that Intel gives in gaming versus other tasks to the same relative performance Ryzen gives in gaming.

Ryzen can match or beat the 6900K clock-for-clock, core-for-core, watt-for-watt... sometimes all three at the same time. One would then assume that gaming tasks would follow suit - the fact that they don't is what is of interest.

It basically comes down to gaming being unusually sensitive to the very areas where Ryzen is most different from, and weakest compared to, its Intel counterpart. Those weaknesses can, in theory, be fully overcome through software, but it will require more than just a simple kernel scheduler patch. Game patches, AGESA code updates, and system library updates are all going to need to become aware of how Ryzen behaves. Tall order, to be fair.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
AMD Pinnacle Ridge could very well launch in Q1 2018, roughly around this time next year. I am just curious to know whether AMD and GF have an improved process for their 2018 CPUs. 14LPP and the current Summit Ridge physical design are just not capable of 4+ GHz operation at safe voltages for 24x7 use. I would like to see a Zen+ design optimized for 4+ GHz which can easily overclock to 4.5+ GHz. I think AMD and GF need to significantly improve the process with custom tweaks and improve the physical design to make that happen.
 
Last edited:

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Well, I decided to order an ASRock Fatal1ty AB350 Gaming K4 board since it was in stock - as well as some more DDR4. I've maintained my Asus C6H order and will use it for my main build; the ASRock, with a Ryzen 5 quad, will be used to upgrade the wife's aging Phenom II X4 955.

I realized just how much finicky work I'd be doing if I needed to test on Windows 7 and Windows 10, with two different video cards and two different SSDs... and then I have three other systems I'm testing... I mean, geez... Setting up my wife's (currently unused) computer with the ASRock board, the RX 480, and its own SSD - with Windows 7 set up and running on an NVMe drive (lucky girl...) - and mine with the Asus C6H and the R9 Fury will allow me to only need to swap the CPU, which is a relatively simple affair (not that I'm particularly pleased with the idea of frequently removing and reinstalling a $400 CPU).

With any luck, I won't have to do too much swapping. I'll probably run all the Windows 7 tests I have planned this weekend - and I bet my Asus won't even be here before I'm done, so it shouldn't be too much of an issue. Then, when I'm done, I'll migrate all of my wife's data and get to keep tinkering with Ryzen until my board arrives and I can move my main rig over.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
There's nothing wrong with it, it's just being mishandled by software.

You can already use Ryzen for what you want - but I'd be very wary about emulating across the CCXs. For emulation, I'd treat it like a dual-socket system with two quad-core CPUs.

If you can lock a couple VMs to four/eight threads on one CCX and share those cores you might have a better end result than limiting the total number of cores.

As in:

CCX ONE:
VM 1: Threads 0, 2, 4, 6
VM 2: Threads 1, 3, 5, 7

CCX TWO:
VM 3: Threads 8, 10, 12, 14
VM 4: Threads 9, 11, 13, 15
Sucks for me, since I also wanted to use 2 cores for server tasks. I can either wait for a multi-socket mobo and hope it's cheap, wait to see if the server CPUs are affordable, or just run multiple servers. I'm very turned off by how this all turned out. I may just suck it up and use the HEDT platform from Intel. I need to see how Ryzen 5 and 3 are doing.

Where are the 6-, 4-, and 2-core gaming benchmarks already? Hurry up, reviewers. Lazy, I tell you.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
AMD Pinnacle Ridge could very well launch in Q1 2018, roughly around this time next year. I am just curious to know whether AMD and GF have an improved process for their 2018 CPUs. 14LPP and the current Summit Ridge physical design are just not capable of 4+ GHz operation at safe voltages for 24x7 use. I would like to see a Zen+ design optimized for 4+ GHz which can easily overclock to 4.5+ GHz. I think AMD and GF need to significantly improve the process with custom tweaks and improve the physical design to make that happen.
Mmm, I love it. AMD finds a way to save me money again. Till next year, AMD. Removed from my budget; I wonder what I'll spend this on... It would be really sad to combine my CPU and GPU budgets into 1080 Tis. That would be zero savings. Come on AMD, save me some money with Vega!
 

HutchinsonJC

Senior member
Apr 15, 2007
465
202
126
It really doesn't matter if a game can run 200FPS or 300FPS... no one, at all, would say a game running at 200FPS is running poorly.

It's the conclusions drawn from the data that are problematic - and the idea that this somehow indicates future gaming performance... an idea which has been thoroughly debunked when comparing CPUs with different core counts.

However, if you were testing two quad/hex/octo core CPUs with SMT... suddenly these tests are fully relevant again.

If you were testing just how a single game scales with CPU performance, then it would be very relevant as well. It would set the floor for what people should expect from that game.

But comparing a quad-core at 4.5GHz+ to an octo-core at 3.7GHz at low resolutions, seeing a 10% difference in framerates that are well above what is needed, and then declaring the octo-core a bad gaming CPU is... well... dishonest.

In order for my 2600K to bottleneck my RX 480, I had to drop to its lowest possible resolution (1440x900, IIRC) and the lowest settings across the board (50% resolution scaling as well...). Then my RX 480 started to be used at only ~90% in a few peaks - therefore being held back by my CPU. Never mind that I was running 200FPS+ on average in multiplayer.

If Ryzen can only manage 180FPS in that scenario... guess what? I don't care - I don't play like that - I can't even make out people on the ground. As soon as I turn the game up to playable settings, I become GPU bottlenecked - and still push 120FPS @ 1080p.

As soon as settings reach that level of performance in a game - CPU performance is irrelevant for that game, move on.
The bit about it not mattering if a game runs at 200 or 300: I don't disagree. Not sure why it was brought up. If you brought it up to suggest that 720p gaming at 200 or 300 or 500 fps doesn't matter one way or the other and that it's all fast enough, I'd agree if the focus was on *gaming*. Benchmarks at low resolutions historically were NEVER about actual realistic gaming performance. A good, logical writer might be able to draw some conclusions from the data and say that if GPUs gained x amount of power in the next year, or the next 2 or 3 years, your CPU would still be amazing in THAT game.

The conclusion drawn from the data etc.: yeah, we're saying the same thing. Not sure why you bring it up as if you and I were in debate about it. You go on about future gaming performance, and if you read what I've said, we're in the same boat in terms of how disgraceful some of the articles have been. That doesn't mean, however, that I think low-resolution benchmarks are the devil. And my target, as I said before, was the folks in the several pages before I even entered this thread who were saying how silly it was to bench a GTX 1080 at 1080p because it wasn't realistic. It's a CPU benchmark. It's not supposed to be representative of a realistic gaming scenario. The writers can sometimes, in various articles, do a lousy job of making it clear, but historically a low-resolution game benchmark was always about putting measuring points on a CPU. With enough points in an assortment of games, you can start to understand where one CPU shines and where one fails. But if the writer is going to cherry-pick 8 games that benefit primarily from single-core IPC/frequency, then the writer has an agenda, or was sent an agenda, or just did a lousy job of reporting, when we all know that the GHz war has long been over and things will expand to more and more threads.

Testing ONLY one game with a quad vs an octo and publishing an article about how much of a failure the octo is at gaming, or vice versa, we both agree would be dumb. But just because we both agree that would be dumb doesn't mean that I think the benchmark itself was dumb. It still tells us that one of those CPUs is stronger than the other in THAT game. It's one single performance measuring point. I like lots of measuring points. If PC reviewers did things my way, it'd probably include the top 8 games played on Steam, the 8 most recently released or releasing titles, and the 2 most hardware-demanding games in existence. If there was overlap at any of those points, then you'd just bench that many fewer games. Maybe I'm too demanding and maybe they shouldn't do it my way.

And in what you quoted me on, I specifically say that there should be multiple measuring points in various games, and that a writer might include verbiage that goes on to say that even though there are still two very real realities in gaming right now - 1) games that thrive on a single core with high IPC/high frequency and 2) games that thrive on several threads - only one of those has a real future.

I'm not interested in debating someone who is essentially saying and thinking the same thing as me but got lost somewhere along the way in the communication of it all.

My stance is that low-resolution benchmarks still have a place, that they are still useful. They have always been used to measure the CPU's capability while running that particular piece of game code. No one runs 7-Zip all by itself as a benchmark to determine how well a machine will function in the office; by itself, it may be a half-decent indicator. The same goes for gaming: not every game today is as well multi-threaded as the next. You need several points to establish the various aspects of a CPU's performance characteristics in gaming.

A smart review would include, as you quoted me saying, some games from both realities: 1) high IPC/high frequency and 2) very well multi-threaded. Then the same smart review would go on to say that while, in the current market of games, the option 1 reality still performs best in some games, the option 2 reality is the one with a real continued future. And the review might even say that the speed at which the option 2 reality will overshadow the option 1 reality just found a new kind of acceleration due in large part to the release of AMD's new CPUs, but that it has been in an accelerated state due to the PS4, Xbox One, and Wii U all being powered by highly threaded, low-frequency CPUs for the last few years.

If you still find conflict with what I'm saying, we'll have to agree to disagree, because I will not cease to find value in low-resolution gaming benchmarks, but nor am I blind to the increasing move toward more and more threads.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
AMD Naples...

128 PCIe lanes using four dice.

That means, much as I (and others) suspected, that Ryzen has 32 PCI-e lanes. I suspect they aren't accessible via the socket, though. I would *love* a socket pin diagram. Came in really handy in the Socket 7 days.



That's for 2 sockets; I got stuck on that too for a few minutes but found this slide.
Edit: On Win 7, when can we see some gaming with SMT on and off?
Edit 2: NM, need to look into it more. The usable 128 PCIe lanes are for 2 sockets, but it's unclear what they use in-package and to link the sockets, since that should be PCIe too.
Edit 3: Seems that each die does have 32 lanes and 128 are usable with 1 or 2 sockets.
 
Last edited:

AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
Check out Ryzen benchmarks on Mafia 3, which is an extremely unoptimized game on which not even a GTX 1080 can turn out a consistent 60 fps at max settings. Ryzen on that game completely outperforms the 7700K and all the other Intel CPUs. I believe there is a real optimization problem on the software side that's causing Ryzen to perform subpar in games: those which are well optimized show Intel with a clear advantage; those that are not show Ryzen on top.
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Check out Ryzen benchmarks on Mafia 3, which is an extremely unoptimized game on which not even a GTX 1080 can turn out a consistent 60 fps at max settings. Ryzen on that game completely outperforms the 7700K and all the other Intel CPUs. I believe there is a real optimization problem on the software side that's causing Ryzen to perform subpar in games: those which are well optimized show Intel with a clear advantage; those that are not show Ryzen on top.
Does Mafia 3 multi-thread the draw calls? If not, then it must be related to how well the code works with Ryzen. If it is, then it may show the advantages of the extra 4 cores, unless an i7 6900K was used in comparison.
 

AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
Does Mafia 3 multi-thread the draw calls? If not, then it must be related to how well the code works with Ryzen. If it is, then it may show the advantages of the extra 4 cores, unless an i7 6900K was used in comparison.
Mafia 3 favors hyperthreading; the 6900K performs much better than the 7700K. The Mafia benchmark is what I expected of Ryzen given all the hype; if only it delivered that in all games. :/
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Mafia 3 favors hyperthreading; the 6900K performs much better than the 7700K. The Mafia benchmark is what I expected of Ryzen given all the hype; if only it delivered that in all games. :/
So it sounds like it does better due to the extra cores. Perhaps Ryzen will do better core for core once games are optimized for it, but it sounds like Mafia 3 is performing better due to the added cores.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
That's for 2 sockets; I got stuck on that too for a few minutes but found this slide.
Edit: On Win 7, when can we see some gaming with SMT on and off?
Edit 2: NM, need to look into it more. The usable 128 PCIe lanes are for 2 sockets, but it's unclear what they use in-package and to link the sockets, since that should be PCIe too.

Ah, I see. Whoever made that chart sure made things nicely confusing, I'll tell you that much.

Total cores should be 64
Total PCIE should simply read 2x64 = 128.
Memory channels are fine at 16, but should read 2x8
...
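For what it's worth, the raw per-package math from the per-die numbers floated in this thread (8 cores, 32 PCIe lanes and 2 memory channels per die, four dice per package) works out as below; it's only arithmetic on those figures, not a spec sheet, and it deliberately leaves open how many lanes get consumed linking the sockets.

```c
/* Back-of-the-envelope totals from the per-die figures discussed in the thread.
 * How many lanes stay usable externally in a 2P system was still the open
 * question above - this is just the raw math. */
#include <stdio.h>

int main(void)
{
    const int sockets = 2, dice_per_socket = 4;
    const int cores_per_die = 8, lanes_per_die = 32, channels_per_die = 2;

    printf("cores           : %d\n", sockets * dice_per_socket * cores_per_die);    /* 64       */
    printf("raw lanes/socket: %d\n", dice_per_socket * lanes_per_die);              /* 128      */
    printf("memory channels : %d\n", sockets * dice_per_socket * channels_per_die); /* 16 = 2x8 */
    return 0;
}
```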

I guess it could make sense that AMD would use the x16 PCIe link on the die, plus the other x8/x16 we know/suspect to exist, as part of the data fabric between the sockets - so there's possibly a mesh connection of four PCIe x8 links between the sockets, or 32 PCIe 3.0 lanes of connectivity hard-linked between matching CCXs across the sockets, with the GMI link on-package in a ring configuration.

That would make for an interesting bandwidth and latency analysis...

But would 64GB/s really be enough bandwidth? Maybe that's supplementing some other magic AMD hasn't revealed (or I missed...).
 
Reactions: Drazick

imported_jjj

Senior member
Feb 14, 2009
660
430
136
The bit about it not mattering if a game runs at 200 or 300: I don't disagree. Not sure why it was brought up. If you brought it up to suggest that 720p gaming at 200 or 300 or 500 fps doesn't matter one way or the other and that it's all fast enough, I'd agree if the focus was on *gaming*. Benchmarks at low resolutions historically were NEVER about actual realistic gaming performance. A good, logical writer might be able to draw some conclusions from the data and say that if GPUs gained x amount of power in the next year, or the next 2 or 3 years, your CPU would still be amazing in THAT game.

The conclusion drawn from the data etc.: yeah, we're saying the same thing. Not sure why you bring it up as if you and I were in debate about it. You go on about future gaming performance, and if you read what I've said, we're in the same boat in terms of how disgraceful some of the articles have been. That doesn't mean, however, that I think low-resolution benchmarks are the devil. And my target, as I said before, was the folks in the several pages before I even entered this thread who were saying how silly it was to bench a GTX 1080 at 1080p because it wasn't realistic. It's a CPU benchmark. It's not supposed to be representative of a realistic gaming scenario. The writers can sometimes, in various articles, do a lousy job of making it clear, but historically a low-resolution game benchmark was always about putting measuring points on a CPU. With enough points in an assortment of games, you can start to understand where one CPU shines and where one fails. But if the writer is going to cherry-pick 8 games that benefit primarily from single-core IPC/frequency, then the writer has an agenda, or was sent an agenda, or just did a lousy job of reporting, when we all know that the GHz war has long been over and things will expand to more and more threads.

Testing ONLY one game with a quad vs an octo and publishing an article about how much of a failure the octo is at gaming, or vice versa, we both agree would be dumb. But just because we both agree that would be dumb doesn't mean that I think the benchmark itself was dumb. It still tells us that one of those CPUs is stronger than the other in THAT game. It's one single performance measuring point. I like lots of measuring points. If PC reviewers did things my way, it'd probably include the top 8 games played on Steam, the 8 most recently released or releasing titles, and the 2 most hardware-demanding games in existence. If there was overlap at any of those points, then you'd just bench that many fewer games. Maybe I'm too demanding and maybe they shouldn't do it my way.

And in what you quoted me on, I specifically say that there should be multiple measuring points in various games, and that a writer might include verbiage that goes on to say that even though there are still two very real realities in gaming right now - 1) games that thrive on a single core with high IPC/high frequency and 2) games that thrive on several threads - only one of those has a real future.

I'm not interested in debating someone who is essentially saying and thinking the same thing as me but got lost somewhere along the way in the communication of it all.

My stance is that low-resolution benchmarks still have a place, that they are still useful. They have always been used to measure the CPU's capability while running that particular piece of game code. No one runs 7-Zip all by itself as a benchmark to determine how well a machine will function in the office; by itself, it may be a half-decent indicator. The same goes for gaming: not every game today is as well multi-threaded as the next. You need several points to establish the various aspects of a CPU's performance characteristics in gaming.

A smart review would include, as you quoted me saying, some games from both realities: 1) high IPC/high frequency and 2) very well multi-threaded. Then the same smart review would go on to say that while, in the current market of games, the option 1 reality still performs best in some games, the option 2 reality is the one with a real continued future. And the review might even say that the speed at which the option 2 reality will overshadow the option 1 reality just found a new kind of acceleration due in large part to the release of AMD's new CPUs, but that it has been in an accelerated state due to the PS4, Xbox One, and Wii U all being powered by highly threaded, low-frequency CPUs for the last few years.

If you still find conflict with what I'm saying, we'll have to agree to disagree, because I will not cease to find value in low-resolution gaming benchmarks, but nor am I blind to the increasing move toward more and more threads.


The CPU load differs from low res to high res, especially when you have different numbers of cores.
With a GTX 1080 at 1080p, 7700K vs 6900K, you have the 7700K ahead by 9%. At 1440p it's 7%, but if you go to 4K the 6900K takes the lead.
Low res favors clocks over cores right now; that's the problem.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
Ah, I see. Whoever made that chart sure made things nicely confusing, I'll tell you that much.

Total cores should be 64
Total PCIE should simply read 2x64 = 128.
Memory channels are fine at 16, but should read 2x8
...

I guess it could make sense that AMD would use the x16 PCIe link on the die, plus the other x8/x16 we know/suspect to exist, as part of the data fabric between the sockets - so there's possibly a mesh connection of four PCIe x8 links between the sockets, or 32 PCIe 3.0 lanes of connectivity hard-linked between matching CCXs across the sockets, with the GMI link on-package in a ring configuration.

That would make for an interesting bandwidth and latency analysis...

But would 64GB/s really be enough bandwidth? Maybe that's supplementing some other magic AMD hasn't revealed (or I missed...).

I gave up on trying to guess and will just wait for more details.
I do want to ask again: when do you expect to have some Win 7 results with SMT on and off in games where SMT is a penalty under Win 10?
Since seeing that result for a single game, I am very eager to see more.
 