According to every benchmark, a Snapdragon 835 still gets smashed by an Intel Y-series. Completely different price point, but my point was that someone who wants cellular probably isn't, or shouldn't be, that price sensitive.
If you ignore the cellular aspect and compare it to a Pentium N4200 or the new N5000, it's...
Trying anything and everything to increase the number and quality of UWP apps is the only angle that makes sense to me from an MS perspective. I don't believe for a second that MS is content to concede the mobile/phone market for eternity and/or wait for the next big thing. Whether that's...
My personal opinion is that people are reading a little too much into this and conflating the facts with opinions/speculation. It is apparently true that there will now be an Intel CPU + AMD GPU part. I have no reason NOT to believe that Intel did lay off graphics engineers, or that the iGPU is part...
As far as I know, tRC isn't really a good/proper measure of latency. Again, I'm not a hardware/EE type, but my understanding is that tRC is more or less a physical property of the DRAM cell, which is more or less the same in any memory technology made with the same process (which I think is why they...
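To put rough numbers on that, here's a back-of-envelope sketch of why raw cycle counts are misleading: convert the timing to nanoseconds and it barely moves across generations, even though the cycle count balloons. The tRC values below are illustrative guesses, not pulled from any specific datasheet.

```python
# Back-of-envelope conversion of DRAM timings from clock cycles to
# nanoseconds. Timing values are illustrative, not from a datasheet.

def cycles_to_ns(cycles, transfer_rate_mts):
    """Convert a timing in memory-clock cycles to nanoseconds.

    For DDR-style interfaces the memory clock is half the transfer
    rate (two transfers per clock).
    """
    clock_mhz = transfer_rate_mts / 2
    return cycles / clock_mhz * 1000

# Hypothetical parts: an older, slower module and a newer, faster one.
for name, rate_mts, trc_cycles in [
    ("DDR3-1600", 1600, 39),
    ("DDR4-3200", 3200, 76),
]:
    print(f"{name}: tRC = {trc_cycles} cycles "
          f"= {cycles_to_ns(trc_cycles, rate_mts):.1f} ns")
```

Run that and both land within a couple of nanoseconds of each other, which is the point: the absolute row-cycle time is set by the cell physics, not the interface speed.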
I'm far from an electrical engineer type, so take this with a grain of salt, but given that DDR requires a higher voltage than LPDDR, I'd think it makes a difference any time the machine is on. Sleep is going to be where it's most pronounced, because at that point there's very little if anything using...
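For a rough sense of scale (and again, grain of salt): dynamic switching power scales roughly with the square of the supply voltage (P ~ C * V^2 * f), so even the nominal rail differences matter. This sketch just compares nominal voltages; real savings depend heavily on self-refresh and the extra low-power states LPDDR adds.

```python
# Rough illustration: switching power scales ~V^2 at a given
# capacitance and frequency. Nominal rail voltages per spec;
# this ignores LPDDR's additional low-power states, which matter
# a lot in sleep.

voltages = {"DDR3": 1.5, "LPDDR3": 1.2, "DDR4": 1.2, "LPDDR4": 1.1}

baseline = voltages["DDR3"]
for tech, v in voltages.items():
    relative = (v / baseline) ** 2
    print(f"{tech} at {v} V: ~{relative:.0%} of DDR3 switching power")
```

So LPDDR4 vs. DDR3 is already a ~45% cut in switching power before any of the smarter sleep states come into play.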
You could be right about the memory bottleneck, although it seems hard to know for sure until you see how far it throttles under load. I'd assume that the 550 would be able to sustain something closer to max clocks for longer, but who knows. As for HBM vs. DDR as main system memory, regardless of price...
Why are people talking about HBM? Are they hoping to add separate VRAM? Unless I totally misunderstand HBM, you really don't want to use it INSTEAD of DDR. In any case, I don't see how that makes sense. I don't think memory bandwidth is going to be the major bottleneck for a GPU that has to...
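For context on the bandwidth gap people are excited about, the peak numbers are just bus width times transfer rate. A quick sketch with nominal figures (illustrative, real parts vary):

```python
# Quick peak-bandwidth arithmetic: bus width (in bytes) x transfer rate.
# Figures are nominal/illustrative; actual parts and clocks vary.

def peak_gbs(bus_bits, transfer_rate_mts):
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * transfer_rate_mts / 1000

configs = [
    ("Dual-channel DDR4-2400 (128-bit)", 128, 2400),
    ("One HBM2 stack (1024-bit, 2 GT/s)", 1024, 2000),
]
for name, bits, rate in configs:
    print(f"{name}: ~{peak_gbs(bits, rate):.0f} GB/s")
```

That's roughly 38 GB/s vs. 256 GB/s, so the raw gap is real; my skepticism is just about whether a GPU of this class can actually use it.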
Yes, although I would think that big.LITTLE is not without its own set of challenges/tradeoffs. But the A11 is the first design with a true big.LITTLE design (the A10's small cores weren't independent, and it seems like they may have shared resources with the big cores), so I still find it somewhat...
While I think it's true that the Ax-series CPU cores are generally larger than stock ARM designs, I think solely attributing the performance to that is a vast oversimplification. Certainly Intel would be willing to double their CPU core sizes if they could get 50% more ST performance at the same...
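One commonly cited heuristic here (my addition, not anything from Intel or Apple) is Pollack's rule: single-threaded performance scales with roughly the square root of core area. If that holds even approximately, throwing area at the problem has brutal diminishing returns:

```python
# Pollack's rule -- a rough heuristic, not a law -- says ST performance
# scales with roughly the square root of core area/complexity. Under
# that assumption, doubling core size buys far less than 2x speedup.

import math

for area_multiple in (1.5, 2.0, 3.0):
    speedup = math.sqrt(area_multiple)
    print(f"{area_multiple:.1f}x core area -> ~{speedup:.2f}x ST performance")
```

By that math, doubling the area only gets you ~41% more ST performance, which is exactly why "they just use bigger cores" can't be the whole story.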
Safari's, or at least mobile Safari's, JavaScript engine seems to improve at an impressive rate year to year. When AnandTech posts new iPhone reviews they don't re-run the JavaScript tests on the old devices with the new OS; if you do have one of the older devices in their charts and re-run the tests...
My question is: could they have just increased the size of the GPU, or is there some other bottleneck limiting the graphics performance? If they could have just added GPU cores and didn't because of cost or power concerns, I think the additional CPU cores are a mistake. Does anyone disagree with...
Maybe a few? As a developer you can't design your software such that it needs 4 cores, because most devices don't have them. So the best case is there are a few edge cases where things go from a little slow to totally snappy. I think it's hard to make the case that you'd rather have better...
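The practical upshot is that apps size their parallelism to whatever the device reports rather than assuming four cores. A minimal sketch of that pattern using the standard library (do_work is just a stand-in for whatever the app actually parallelizes):

```python
# Size the worker pool to what the device actually has, rather than
# hard-coding four threads. do_work is a placeholder for real work.

import os
from concurrent.futures import ThreadPoolExecutor

def do_work(item):
    # Placeholder for real per-item work.
    return item * item

cores = os.cpu_count() or 1  # fall back to 1 if undetectable
with ThreadPoolExecutor(max_workers=cores) as pool:
    results = list(pool.map(do_work, range(8)))

print(f"ran on up to {cores} worker(s): {results}")
```

Code written this way gets faster on a quad-core, but it was never *designed around* four cores, which is why the marginal benefit of the extra cores is so thin.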
I don't understand why Nvidia insists on quad-core SoCs. Mobile devices need single-threaded performance and graphics power more than they need more cores. The A15 will be a huge help, but I can't help but think that they'd be better off with a 2+1, a pure big.LITTLE design, or even just a standard...
Does anyone know what a typical contract would look like? In this case it seems like they had to commit to 320M to get (up to?) 500M, which works out to a roughly 64% minimum commitment. Percentage-wise, is that typical?
Thanks in advance. The quantity and quality of knowledgeable posters here are amazing.
Can someone help me catch up here? So the actual report is that there was a "take or pay" clause in the contract between AMD and GF, and AMD is choosing to pay. Is there also a contract that requires/required that AMD not use any other foundry as well? I guess what I don't understand is: if "take...
This is going to be interesting to watch. You've got Qualcomm, Samsung, Apple, Intel, and Nvidia all competing in this big emerging market, and you've got to think some of them are going to come out worse for wear. It seems to me that Samsung and Nvidia are in the worst positions at the moment.
Just found that thread. Interesting; IDC certainly knows more about this stuff than I do. It's still not clicking for me, though. Fab 14 phases 5-6 aren't scheduled to begin production until early 2014. What's the plan for 2013? Samsung 32nm? TSMC 28nm HPM? Where is 28nm HPM, by the way? Is that ever coming...
I totally understand that. Samsung has a somewhat weird business model in general. Traditionally, when you vertically integrate, you don't also sell to your competitors. The difference is that Samsung started as a supplier, whereas historically companies that end up vertically integrated start out...
The thing that confuses me is, given how capital-intensive semiconductor manufacturing is, why isn't Samsung more concerned about idle fabs? Do they just assume that they'll be able to fill them with their own designs? It seems like they risk sinking the whole ship, no?
Am I crazy to read this as: "we're currently negotiating with Apple and they're going to pay us handsomely, so don't freak out about our margins when you see the deal finalized"?