He did not. Most users will just go and look and see, "oh, the 1700 and 7700K perform the same." Most won't really register the 4 GHz thing. The 7700K in that chart can easily get 20% more performance from an OC.
EDIT: Joker might have some settings changed in the Nvidia control panel (w/e it's called), or you could be testing different areas. Either way, it doesn't really invalidate his results.
His 7700K underperforming invalidates his results in their entirety. Either he is incompetent (likely, considering the whole way he thought his 1080p runs were CPU-bottlenecked), he screwed up his 7700K overclock (unlikely), or he was dishonest (please no).
Because his 1080p Ultra benches were ALL GPU-limited 99% of the time. Reading anything further into them is just being dishonest about what you are measuring.
One of these has a 4K Ultra run of Shadow of Mordor as a CPU benchmark.
You cannot invalidate his results without duplicating them. Which has not been done, as far as I know.
Yah, he tested 4K? So what? There were 1080p benchmarks too, and 4K is infinitely more relevant than 720p anyway.
720p Low + enthusiast-card benchmarks are totally worthless. It's a draw-call benchmark that doesn't tell you anything about current or future gaming performance. Totally irrelevant drivel. You might as well say Cinebench is a good measure of gaming performance.
You can clearly see the 7700k bottlenecking and dropping frames in his review.
1080p with an enthusiast card = Extremely Niche
720p Low + enthusiast Card = Totally unrealistic and irrelevant
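To make the bottleneck argument running through this back-and-forth concrete, here is a toy model (all fps numbers invented for illustration; nothing here is from anyone's actual benchmarks): delivered fps is simply the minimum of what the CPU and GPU can each sustain, which is why a GPU-limited 1080p Ultra run can't separate two CPUs while a 720p Low run can.

```python
# Toy bottleneck model: the frame rate you observe is capped by
# whichever side is slower. All fps numbers below are invented.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Observed fps = min(frames the CPU can prepare, frames the GPU can render)."""
    return min(cpu_fps, gpu_fps)

cpu_fps = 120.0  # hypothetical CPU-side limit

# 1080p Ultra: a big GPU might manage ~90 fps -> GPU-limited,
# so two very different CPUs would post identical numbers here.
print(delivered_fps(cpu_fps, gpu_fps=90.0))   # 90.0

# 720p Low: the GPU might manage ~300 fps -> CPU-limited,
# so the benchmark now measures the CPU instead.
print(delivered_fps(cpu_fps, gpu_fps=300.0))  # 120.0
```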
Joker released this video on the 2nd with the 1800X vs 6800K, and while watching it I noticed he had to manually set the 1800X to run at 3.9 GHz. What's going on with boost and XFR on these CPUs? His results look pretty good, apart from not being able to get this first batch to OC easily. If they get things sorted out, I can see myself building off this platform next.
Boost on the 1800X, assuming it's working correctly (a lot of people say it isn't), only goes to 3.7 GHz with XFR. One or two cores are supposed to go to 4.1 GHz boost, but that won't happen if 3+ cores are stressed.
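A rough sketch of that behaviour (a simplification of the stock behaviour as described above, not AMD's actual boost algorithm):

```python
# Simplified model of stock 1800X clocks per the post above:
# 4.0 GHz boost + 100 MHz XFR on 1-2 loaded cores, ~3.7 GHz otherwise.

def r7_1800x_clock_mhz(loaded_cores: int) -> int:
    if loaded_cores <= 2:
        return 4100  # boost + XFR, light load only
    return 3700      # all-core boost once 3+ cores are stressed

for n in (1, 2, 4, 8):
    print(f"{n} cores loaded -> {r7_1800x_clock_mhz(n)} MHz")
```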
Remember... games are still not fully optimized for more than 8 cores....
The only things that are fully optimized are:
- Databases (SQL, Oracle SQL)
- Virtual Machines
- Video rendering software
- Java compilers
So it's on professional stuff where AMD shines a lot; Ryzen is pretty useful for those tasks.
And they need to open ALL their drivers if they want to fight Intel toe to toe.
At least they're back in the game.
@Aten Ra, forget 6-8 big-core APUs.
Even Intel can't deliver that.
Also, Raven Ridge is supposed to be 2- and 4-core APUs.
For mobile and for desktop.
Now for mobile (notebooks, not cellphones), AMD has already said they want to compete with Core M too (5 watts), but knowing AMD, hitting that 5-watt envelope is pretty much impossible. Still, considering they are at 14 nm, I expect to see the following names (this is my opinion and prediction):
RR7 - (FX): CPU 4 cores with SMT at:
- 1.75 GHz max (7.5 W) - Vega "Lite"-based GPU (256 SP) - 600 MHz
- 2.5 GHz max (15 W) - Vega-based GPU (512 SP) - 700 MHz
- 3.0 GHz max (35 W) - Vega-based GPU (512 SP) - 900 MHz
RR5 - (A10): CPU 4 cores without SMT at:
- 1.5 GHz max (7.5 W) - Vega "Lite"-based GPU (256 SP) - 500 MHz
- 2.25 GHz max (15 W) - Vega-based GPU (512 SP) - 650 MHz
- 2.75 GHz max (35 W) - Vega-based GPU (512 SP) - 800 MHz
RR3 - (A9): CPU 2 cores with SMT at:
- 1.75 GHz max (7.5 W) - Vega "Lite"-based GPU (192 SP) - 600 MHz
- 2.25 GHz max (15 W) - Vega-based GPU (384 SP) - 650 MHz
- 3.25 GHz max (35 W) - Vega-based GPU (384 SP) - 900 MHz
Sempron - (E2): CPU 2 cores without SMT at:
- 1.25 GHz max (7.5 W) - Vega "Lite"-based GPU (192 SP) - 500 MHz
- 2.0 GHz max (15 W) - Vega-based GPU (256 SP) - 550 MHz
- 2.5 GHz max (35 W) - Vega-based GPU (256 SP) - 700 MHz
Does anyone think the 1800X should drop to $350 to compete with the i7-7700K, the 1700X to $300 to compete with the lesser non-K i7s, and the 1700 to $250 to compete with the i5s? The R5 line can mop up the market from $150 to $200, and R3 can handle the $100-150 market. Sub-$100 can be left to a dual-core, Athlon-branded Zen with an iGPU and SMT. Who agrees?
Not me. I hate that the R5 and R3 are late to the party, but those prices are way too low. I could see some price drops coming when Intel finally releases a mainstream 6-core, but not before then.
The 1800X isn't worth $500 to me, but that's because I'm happy with a good OC. The people that don't want to OC, and the people that want the max OC, will be happy to pay the extra money for the performance.
R7 1700 is a bargain even at £330.
Given that OC limits seem to be in the region of 3.9-4.1 GHz for the R7 CPUs, and are achievable on B350 mobos (which cost around £90) with the stock (Wraith Spire) cooler, you couldn't possibly ask for better value.
However, if you want to knock £50 off of that, I'm not going to say no!
OC to 3.5 GHz at the lowest stable voltage, buy a top-tier GPU and some fast DDR4, and you're set for a number of years.
To cap it off, the AM4 socket is what you're likely to find on any further CPUs down the line.
He also did a 6800k at 1080p ultra and got the same results as with the 7700k.
If anything, those Aussie "tech reviewers" aren't the ones to be trusted.
One thing I realized in all this GN and Aussie drama and bickering is that they themselves are in an echo chamber. Their test methodologies must all be the same, likely all repeating the same mistakes. The point of different reviews is to have individual variance in methodology so that a user can see and judge on their own... but all those guys are showing the same benchmarks, which kind of makes their reviews quantitatively worthless.
You actually want to see variety in results and test methodology, like Joker, computerbase.de (which did a very thorough job, I must say; these guys never fail to impress), hardware.fr, and PCPer.
Even if it costs more to manufacture than a Ryzen CPU, it will cost less to manufacture than a Polaris 10 GPU: PCB, GDDR5, and shroud vs. CPU die, package, interposer, and HBM2. Let's say that a 2304-SP Polaris 10 costs $199 and the HBM2 APU costs $199. The manufacturing costs for the APU will be lower, and the market is bigger for the APU. MUCH, MUCH bigger.
I question those economics when Polaris 10 cards have sold for as low as $130 (currently around $150 at the lowest) in the form of the RX 470. Also, the other component costs for the GPU are taken on by the third-party manufacturer (unless it's the AMD reference design), who can handle those expenses better because they're making those things for a range of products and use some of them, like the memory, across different products. From AMD's perspective, they just sell the GPU chip and they're done with it.
If AMD makes a ~100 mm² APU die, they can make a lot more of them per wafer than at ~200 mm², which is more efficient use of resources, which means they can be sold at a lower cost and target the $100-$200 segment of the market much more effectively. It also frees up more wafers for the server market, where Ryzen has the most potential, because it's not just getting similar or better results than Intel, it's doing so with less power. That's why Intel announced that 10 nm is going to be put towards server products first: they know they're going to need the added power benefits from the node shrink to be competitive in that market.
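For a rough sense of the dies-per-wafer gap, here's a back-of-envelope estimate using the standard gross-die approximation on a 300 mm wafer (yield and scribe lines are ignored, so treat the numbers as illustrative only):

```python
import math

# Gross dies per wafer ~= pi*r^2/A - pi*d/sqrt(2A), the usual
# first-order approximation; A is die area, d is wafer diameter.

def gross_dies(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(gross_dies(100))  # ~640 dies per wafer
print(gross_dies(200))  # ~306 dies per wafer -> roughly 2.1x fewer
```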
Finally, we don't know what availability for HBM2 will be like; if there are going to be supply constraints, you can't include it in a product that you need to sell in volume, or you'll have a lot of silicon sitting around useless because there's no HBM2 to slap onto it.
The product you describe would give anyone doing risk management nightmares. You want to combine a lot of new technology to address a market segment that doesn't really exist, in a situation where there might be supply constraints. Any sane person is going to nip that in the bud and tell you that doing it this early on is a bad idea, as there are just so many parts of that plan that can go wrong or run into issues and set everything back. Far better to wait until you're using GPU technology that was designed to have multiple small modules put together on an interposer, so you can pair it with a really small-die CPU part and cheaper HBM2 memory that is more plentifully available.
A single Zen CCX paired with an 8-CU GPU (whether a Polaris or Vega derivative) is going to be more than enough to address multiple market segments. You can make a lot of these chips very easily and target both notebook and desktop markets, and probably even get some very low-leakage parts that run at low voltages and let AMD make a play at the ultra-portable part of the notebook market. If AMD could get some low-power parts, they could make a serious play at getting their CPUs/APUs into Apple products, which would be a major design win for them.
It was mentioned in another thread that the data fabric operates at half the operating frequency of the memory controller.
It actually runs at the same frequency as the memory controller, which runs at half the frequency of the memory itself. I had to confirm that with The Stilt.
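A worked example of that relationship, using DDR4-3200 purely as an illustration:

```python
# "3200" in DDR4-3200 is the effective transfer rate (MT/s). DDR means
# two transfers per clock, so the memory controller (MEMCLK) runs at
# half that, and the data fabric runs at the same frequency as the
# memory controller.

ddr4_transfer_rate_mts = 3200            # e.g. DDR4-3200
memclk_mhz = ddr4_transfer_rate_mts / 2  # 1600 MHz
fabric_mhz = memclk_mhz                  # fabric == memory controller clock

print(memclk_mhz, fabric_mhz)            # 1600.0 1600.0
```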
Lol, yah, I noticed that echo chamber as well. I posted in a couple of their videos about other people's results, including Joker's, and they didn't like that. They all came to "defend" the benchmarks. They even went as far as to claim I was actually Joker.
There is a link to one of their videos on the AMD subreddit right now, and there is at least one other person being called Joker as well.
Idk what Joker did, and I don't really care, but they really don't like him.
I agree, I like the idea of a 1-Hi stack (1GB) used as an L4 cache, similar to Intel's Crystal Well. I'm not sure of the latency benefits of HBM vs DDR4; remember HBM uses an interposer and does not need PCIe, which adds latency, so it has to be better. I would like to see some data on this.
Certainly it must incur more latency than Intel's Crystal Well eDRAM setup, which is on-package; this would be worse for CPU and compute. BUT its primary inclusion is for general graphics bandwidth, which would be "just enough" whilst having 8x the capacity of Crystal Well, and likely cheaper and more power/thermally efficient. Again, I would like to see some data on this.
The issue of whether 1GB of VRAM is enough is a tough question. HBCC would allow the effective VRAM to double to 2GB. Intel's Crystal Well only has 128MB of memory, but it's much faster, I think. Nevertheless, that is enough to dramatically speed up the GPU, so 2GB effective will offer massive benefits for sure.
The issue that might arise is how much bandwidth the HBCC needs to be effective. For instance, if it needs 512 GB/s to enable 2x more textures streamed in from NAND than there is VRAM (something like that?), then what is the effect of reduced bandwidth on this virtual memory allocation? Is it still 2x with little performance penalty vs physical VRAM, or do the virtual VRAM benefits decrease with bandwidth?
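One way to think about that question is a simple hit-rate model: if most GPU accesses land in the 1GB of physical HBM and only the remainder spill to slower memory, the average bandwidth stays high. This is just a sketch with invented numbers, not how HBCC is actually specified:

```python
# Toy average-bandwidth model for a cache-like VRAM pool. All numbers
# (256 GB/s HBM, 25.6 GB/s fallback, hit rates) are made up to show
# the shape of the trade-off, not measured HBCC behaviour.

def effective_bandwidth_gbs(hbm_gbs: float, fallback_gbs: float,
                            hit_rate: float) -> float:
    """Average bandwidth if `hit_rate` of accesses land in physical HBM
    and the rest are served from slower memory."""
    return hit_rate * hbm_gbs + (1 - hit_rate) * fallback_gbs

print(effective_bandwidth_gbs(256.0, 25.6, 0.95))  # ~244 GB/s
print(effective_bandwidth_gbs(256.0, 25.6, 0.50))  # ~141 GB/s
```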
I think HBM for system RAM is years away honestly.
AFAIK an HBM(2) module is composed of 8×128-bit DDR4-like memory channels (very similar protocol and options; only the data beats could be different, I don't remember), so the latency should be similar to Intel's Crystal Well, but probably with higher bandwidth due to the much wider bus...
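The "much wider bus" point is easy to put numbers on: 8 channels × 128 bits gives one HBM2 stack a 1024-bit bus, vs 64 bits for a single DDR4 channel (the data rates below are just typical illustrative values):

```python
# Peak bandwidth = bus width (bytes) * transfer rate (MT/s).

def bandwidth_gbs(bus_width_bits: int, data_rate_mts: float) -> float:
    return bus_width_bits / 8 * data_rate_mts / 1000  # -> GB/s

print(bandwidth_gbs(1024, 2000))  # HBM2 stack @ 2.0 GT/s  -> 256.0 GB/s
print(bandwidth_gbs(64,   3200))  # one DDR4-3200 channel  ->  25.6 GB/s
```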
I would like to see how many threads different games run. Games have different ways of recognizing threads, where a game creates 16 threads for a 6900K but only 8 threads for an R7.
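For what it's worth, a common pattern is for an engine to size its worker pool from the detected logical CPU count, so a 16-vs-8 difference would have to come from vendor or topology special-casing. A hypothetical sketch (the sizing rule here is invented for illustration):

```python
import os

# Hypothetical worker-pool sizing from the detected logical CPU count.
# A 6900K and an R7 1800X both report 16 logical CPUs, so a game that
# creates 16 threads on one and 8 on the other is likely special-casing
# vendor/core topology rather than following a rule like this.

logical_cpus = os.cpu_count() or 1         # e.g. 16 on an 8C/16T CPU
worker_threads = max(1, logical_cpus - 2)  # invented rule: leave headroom
                                           # for the main and render threads

print(f"{logical_cpus} logical CPUs -> {worker_threads} worker threads")
```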
That word does not mean what you think it means. Duplicating his results would literally be validating them. You are just claiming that nobody has validated his results.
Just like a 4K test is a GPU benchmark that tells you nothing about current and future gaming performance. All you could conclude from it is that in 2-3 years, on this CPU, the game will stutter like hell while pushing 100+ fps. And who will you have to blame? Yourself, for getting the wrong CPU, like all those folks who bought i5s for their high-end gaming rigs before Battlefield 1 came out.
You know what the actual issue with all those reviews is, though? Only 1 or 2 of them even bothered to test games that may actually be CPU-limited on reasonable hardware configs. And guess what: Ryzen got rolled in those. You could argue all day about whether the results it got were bad (mostly not), but it got rolled in those few, straightforward as that.