Dell XPS 13 9310 with an 1165G7:
https://www.userbenchmark.com/UserRun/30897425
The scores are awful, but now we know Dell is refreshing the XPS 13 with Tiger Lake.
Edit: Confirmed via a Dell employee's LinkedIn:
https://tw.linkedin.com/in/jimmy-liu-b6a7b11a5
So it seems quadrupling the L3 cache makes a much bigger difference than moving to a monolithic die. You have to overclock both core and memory significantly to match a stock 3700X.
Dr. Cutress spoke on Reddit about the PCIe lane issue:
It also seems AMD did not give a total lane count, but I'm just going to assume it's 1x8, 1x4, and 1x4 like the 4700U.
"All the APUs support DDR4-3200, and have eight PCIe 3.0 lanes. On the PCIe lanes, this is because the mobile chip was built with eight PCIe 3.0 lanes, to save on power in a mobile environment. AMD sees these chips being used mostly on their own without a separate discrete graphics card, given...
Scores of 1042 integer and 1317 floating point are considerably lower than most 3600s with DDR4-3200.
https://browser.geekbench.com/v5/cpu/2951284
https://browser.geekbench.com/v5/cpu/2882275
https://browser.geekbench.com/v5/cpu/2864388
https://browser.geekbench.com/v5/cpu/2847159
Only really because the majority of games are heavily GPU-bound. A 9900K is 5-10% faster per clock than a 3700X in most games:
https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/
An overclocked 9600K beats the 3900X in every single game Gamers Nexus tested, and the 10900K was...
It will be interesting to see if the small cores are another architecture or just Golden Cove without AVX-512 and TSX. If it's the latter, then this would effectively be a 16C/32T CPU in the vast majority of workloads.
So Intel is 5-10% ahead per clock and 20% ahead on all core clocks. Seems like pretty simple math even with clocks not scaling with performance 1:1. The fact that people point to GPU bound benchmarks to "disprove" this gap is pretty hilarious.
You're likely to be disappointed with Zen 3 if you're looking for a large upgrade over your 9900K. The 9900K already beats the 3700X by 5-10% per clock, and your chip is overclocked 20% higher than the all-core clocks of the 3700X. Even if Ryzen 4000 brought 30% higher gaming performance, it would...
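The compounding math above can be sketched quickly. The 5-10% per-clock and 20% clock figures are from the comments; the midpoint and the multiplication are just illustrative:

```python
# Rough compounding of a per-clock (IPC) lead with a clock-speed lead.
# Figures are illustrative: ~7.5% per-clock advantage (midpoint of the
# 5-10% range quoted above) times a 20% all-core clock advantage.
ipc_lead = 1.075
clock_lead = 1.20

combined = ipc_lead * clock_lead
print(f"Combined advantage: {(combined - 1) * 100:.0f}%")  # ~29%
```

Even with sub-linear clock scaling, the combined gap clearly exceeds a hypothetical 30% generational uplift only barely, which is the point being made.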
Notice that the 4800U is running Linux while the 1165G7 is running Windows; Linux is usually about 10% faster. If you take the best 4800U Windows score, the difference is >30%.
https://browser.geekbench.com/v4/cpu/search?dir=desc&q=4800u&sort=score
Per APISAK on Twitter, the 1165G7's Geekbench 4 score is 23% above the 3900XT in both FP and INT:
https://browser.geekbench.com/v4/cpu/15604157
https://browser.geekbench.com/v4/cpu/15587268
Edit: ST FP and INT scores are about 35% higher than the best 4700U...
Sunny Cove beats Renoir by 20% in integer and 27.5% in floating point in SPEC2017 1T. Not sure why Willow Cove being 25% ahead of desktop Zen 2 is unfathomable to some people here.
The 4.7 GHz 3900XT vs. the 4.7 GHz 1165G7 on UserBenchmark:
https://www.userbenchmark.com/UserRun/29798710
https://www.userbenchmark.com/UserRun/29555622
Looks like a 25% IPC lead for Willow Cove. That's similar to Ice Lake vs. Renoir...
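Since both chips ran at 4.7 GHz, the IPC comparison reduces to a score ratio. A minimal sketch of that estimate; the scores here are placeholders, not the actual UserBenchmark numbers:

```python
def ipc_ratio(score_a, freq_a_ghz, score_b, freq_b_ghz):
    """Estimate relative IPC as the ratio of score-per-GHz."""
    return (score_a / freq_a_ghz) / (score_b / freq_b_ghz)

# With both parts at 4.7 GHz the frequencies cancel, leaving only
# the score ratio. Placeholder scores chosen for illustration:
print(ipc_ratio(125, 4.7, 100, 4.7))  # 1.25 -> ~25% IPC lead
```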
Ah yes, please tell us of your vast expertise in Intel's financial details. I'm sure you can refute Bob Swan with your decades of knowledge. Please enlighten us on how AI isn't driving the growth of Xeon Scalable processors.
"Several years ago, we began a transformation to reposition the company to take advantage of the data revolution that is reshaping computing," CEO Bob Swan said in the earnings call. "We are accelerating growth by expanding the capabilities of our workload-optimized platforms and playing a...
I love how everyone is talking about how AVX-512 is so niche, but Intel sold $3.5 billion worth of chips in 2019 solely for use in AI. AMD had $6.5 billion in revenue in 2018 and will probably be at less than $10 billion for 2019. Intel's AVX-512 DL library is one of the main drivers of Xeon...
You don't compare them because several people per day score higher than the Windows world record, and Linux scores are consistently higher across the board. If you used the world-record Linux score, you might conclude TGL is a 5-10% step back in IPC. But we all know that's completely...
This is why you don't compare scores between operating systems:
https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q=1065g7+linux
Note the highest 1065G7 score on Windows is only 1372, yet crossing 1400 on Linux is fairly easy. Here's a result from page 3...
No, but we have the Ice Lake-U RVP to compare to the Tiger Lake-U RVP:
https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q=Intel+Corporation+IceLake+U+DDR4+SODIMM+RVP
RVP systems are what Intel sends to OEMs so they have something to test before they design products. I'd say that's a far...
Now that I think about it, it's probably better to compare against another ES. If you look at GB scores, ES chips are almost never close to the best even when they have retail clocks.
Tiger Lake-U with 128MB of eDRAM? Seems like an Iris Plus version:
https://ranker.sisoftware.co.uk/show_system.php?q=cea598ab93a190a099bfd8e5c3b18caac3fed8b08dabd3eec8adc8f5c5e390ad95&l=en
You can cool Cascade Lake-X parts with 280mm and 360mm AIOs. Tweaktown overclocked their 18 core 10980XE to 4.8 GHz all core on an H150i without thermal throttling:
https://www.tweaktown.com/reviews/9267/intel-core-i9-10980xe-cascade-lake-processor-review/index7.html
How is this possible...
The primary changes to 14nm were taller, straighter fins and decreased density. Taller fins perform better at high frequencies, and decreased density allows higher clocks. I'm not sure how either of those would no longer benefit Willow Cove.
I suspect it won't be as bad as many people in this thread are making it out to be. Thermal density is the real challenge in cooling modern CPUs. Rocket Lake should see a massive die-size increase due to the much larger cache and additional instructions.
Willow Cove on 14nm should have high power draw, but the vastly increased cache should help spread out thermal density. Clocks will be lower than Skylake's, but not massively so. 14nm can tolerate really high currents, so durability shouldn't be an issue. A 9900K is rated up to 193 amps in Intel's...
Since it's an RVP, this could very well be a test system that found its way to the public somehow. Google had some similar Ice Lake RVP products that ended up being canceled.
However, Apple is extraordinarily slow at updating hardware in their products sometimes, so who knows?
Nah, it's either spoofed or real. There are very few people out there with RVP boards, and this is probably the missing Ice Lake-Y RVP board that was spotted in May.
edit: Also keep in mind RVP boards are for extremely early designs, and many products tested with them never make it to market...
Could be, but I doubt it. Getting an ICL engineering sample with LPDDR4 to work on an old Apple motherboard would be a pretty monumental task. If it's fake, it's likely spoofed.
edit: This is a reference validation platform which would mean it came straight from Intel.
If you assume the single-core score at 28W was taken at 4.3 GHz and these tests scale reasonably linearly with frequency, then the 15W test is running at around 3.9 GHz. Assuming scaling is less than perfect, 3.7-3.8 GHz sounds right.
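The estimate above can be sketched directly. The 4.3 GHz clock is the comment's assumption; the two scores below are hypothetical placeholders, only the method is from the comment:

```python
# Estimate the 15W clock from score scaling relative to the 28W run.
# Assumes roughly linear score-vs-frequency scaling; real scaling is
# sub-linear, so the true 15W clock would sit somewhat below this.
freq_28w_ghz = 4.3        # assumed single-core clock at 28W
score_28w = 1500          # hypothetical single-core score at 28W
score_15w = 1360          # hypothetical single-core score at 15W

freq_15w_ghz = freq_28w_ghz * (score_15w / score_28w)
print(f"Estimated 15W clock: {freq_15w_ghz:.2f} GHz")  # ~3.90 GHz
```

Applying the same sub-linear correction the comment does would pull the estimate down toward 3.7-3.8 GHz.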