Tup3x
Golden Member
> Which country are you in?
Finland.
> Finland.
What are the requirements for you to adopt me? I will be your assistant/son till death do us part. We can do beautiful benchmarking research together.
> …/father?
I don't think he would want me as the father. I'm prone to being somewhat controlling when put in a position of authority, but I do try to be as fair as possible. Maybe more than most.
> The new Surface Laptop apparently has a great touchpad. Even better than the MacBook Air, I hear...
From what I remember of Alex Ziskind's stream, the Surface trackpad bugged out and stopped being clickable.
> It's possibly also the SLC that was making the difference, like with A15 it has 32MB lmao. (A14, 16MB. M base chips? 8.)
Don't make me tap the sign.
Look, everyone knows it's perf/GHz, but it's just a colloquial term now; it's kinda similar to the "nanometer" process naming. Not entirely, because that's a corporate name that can't be replaced, whereas we can choose different descriptors for the actual "IPC" values (perf/GHz), but still. Most of us know by now that IPC isn't really a proper term lol.
I mean I agree, but when absolute performance comes up, for example, people get sloppy and will shift from "huh, interesting IPC" to "what do you mean, AMD and Intel are like 20% ahead on perf vs [insert mobile laptop core from, like, QC or a speculated future one, and I'm sure they'll do this when MediaTek/Nvidia have a laptop chip too, etc.]" and cite "8@3" charts when it's practically parity or a smaller margin on mobile (rough numbers sketched below). And mobile is most of the client market and a majority of the money as well. You cannot have your cake and eat it too on that, lmao. If you want to talk desktops-only, fine, but that's less and less of the discourse, thankfully and deservedly. It's just a humongous disservice.
Note: nothingness, I don't mean *you*. I'm using it in the third person/generically.
It's a pet peeve because neckbeards from the AMD/Intel caucus love playing fast and loose with this.
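To make the perf/GHz point above concrete, here is a minimal sketch with made-up numbers; the scores and clocks are placeholders, not measurements of any real chip:

```python
# Toy illustration of "IPC" as the thread uses it: performance per GHz.
# All numbers below are made-up placeholders, not real benchmark data.

chips = {
    "desktop_core": {"score": 3000, "clock_ghz": 5.7},  # higher absolute score, much higher clock
    "mobile_core":  {"score": 2800, "clock_ghz": 4.2},  # lower absolute score, lower clock
}

for name, c in chips.items():
    perf_per_ghz = c["score"] / c["clock_ghz"]
    print(f"{name}: score={c['score']}, clock={c['clock_ghz']} GHz, perf/GHz={perf_per_ghz:.0f}")

# With these placeholders the desktop core is ~7% ahead in absolute score,
# yet the mobile core is ~27% ahead per GHz, which is why quoting absolute
# desktop scores says little about efficiency per clock.
```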
> Here is a detailed review of Oryon by JamesAslan: https://zhuanlan.zhihu.com/p/704707254
Not very helpful unfortunately. Needs login.
> Can someone do a calculation dividing the runtime of each laptop by the Wh of its battery?
https://www.libreoffice.org/
> Can you post a jpeg?
The point of my answer was to show how *you* can do it.
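Since the thread points people at a spreadsheet rather than spelling the division out, here is a minimal sketch of the runtime-per-Wh calculation being asked for; the laptop names, runtimes, and battery capacities are placeholders, not measured values:

```python
# Battery-normalised runtime: minutes of runtime per watt-hour of battery.
# All figures below are placeholders for illustration, not real review data.

laptops = {
    "Laptop A": {"runtime_min": 900,  "battery_wh": 54},
    "Laptop B": {"runtime_min": 1100, "battery_wh": 70},
}

for name, d in laptops.items():
    minutes_per_wh = d["runtime_min"] / d["battery_wh"]
    print(f"{name}: {minutes_per_wh:.1f} min per Wh")
```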
> There are a few Geekbench results with X1E001DE.
It's alive!
> On a more serious note, none of the scores are exceeding 3000 yet, as they should.
Given that scores are all over the place for all Qualcomm chips, it's hard to conclude much. And given that the clock increase is only 2%, I'm afraid we can't expect a lot.
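As a rough sanity check on why a 2% clock bump won't move the needle much, assuming roughly linear scaling with frequency and a placeholder baseline score (not an actual X1E001DE result):

```python
# Rough sanity check: effect of a ~2% clock increase, assuming roughly
# linear scaling with frequency. The baseline score is a placeholder.
baseline_score = 2850   # placeholder, not an actual X1E001DE result
clock_uplift = 1.02     # the ~2% clock increase mentioned above
print(round(baseline_score * clock_uplift))  # -> 2907, still short of 3000
```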
> The first batch of Snapdragon X laptops does not come with Qualcomm's historical strength: 5G cellular.
What's the point when I gotta pay extra monthly fees to add a line and a SIM when I can get it for free just by making my phone a Wi-Fi hotspot?
> Screw that. I'll do it Excel style!
> [attachment 101665]
Awesome, thank you! This confirms my previous suspicions that the OmniBook X was efficiency-oriented.
> That being said, I have determined that I must be a very heavy battery user.
Or a heavy multitasker.
> So judging by preliminary SPEC results, Oryon V1 has strong FP performance but weaker INT performance. So improving INT by a larger amount should be a priority for Oryon V2.
It's a pretty good start. Better than AMD's Zen 1, in my opinion. It's interesting to see how they can improve this. If they can increase the clocks a bit and then add some IPC improvements on top, they are in a really good spot CPU-wise.
> GPU, drivers and firmware (a bit of odd behaviour in some units)... Those are problematic.
All I want is eGPU support with either an Nvidia or AMD card. Then I could finally ditch my Intel MBP. I'm looking forward to V2 as well.
> SPECFP doesn't mean much IMO and, frankly, is best ignored unless you know exactly what you're looking at.
Yeah, it's an antique.
> I have nearly zero expectations of Qualcomm making a GPU that is on par with Nvidia or AMD.
Current Adreno is pretty modern; they'll just need real cache b/w, a less skewed ALU/reg ratio, and moar frequency.