lol, I see a lot of jerking off here, but not a lot of kleenex
AMD has surprised me before. In the 1990s, AMD was considered "cheap" and of poor quality. With the K7, AMD sent Intel into a panic: with a fraction of Intel's budget, AMD designed a CPU capable of outperforming the Pentium III.
AMD ran with that design and developed K8. Intel was completely unable to compete in performance at this point, and AMD's sales began picking up after years of being the "other guy." At some point in 2005, AMD surpassed Intel's shipments for around a month. That was completely unheard of at the time.
They're down, but I'm hoping they aren't out. The Bulldozer design just isn't working, and they can't afford to sink six years into Bulldozer the way Intel did with NetBurst. They just don't have the money to sustain it.
Nah, they got lucky: they bought another company and ran with that company's design. Think it was NexGen or something.
Actually, Haswell's GT2 is probably gonna end up 25-50% faster than the Ivy Bridge GT2.
In mobile quad, the GT2 iGPU alone is enough to match/surpass Trinity. In Ultrabook-level SKUs, Ivy Bridge is already competitive with/beating AMD. It's on the desktop that they'll still be behind. There's a lot less incentive for Intel to invest in iGPUs in a market that isn't limited by power use and where it's way further behind the discrete competitors.
For their K6, IIRC.
Not robust to process variations, but I guess you and I have different experiences. I see more glass jaws caused by automation, because the automated flow is unable to build the right topology for a complex calculation and doesn't have the complex arrays that custom design can do. I've seen register files with over twice as many read ports as any automated ones I've seen, because the custom design team was more aggressive with custom circuits.
So that's my meaning of robust: fewer glass jaws, because they're able to design structures that can handle more things, and more complex calculations/determinations, because they can get a tighter timing loop. As for examples, I hope that gives you the general idea. There was something back in the other HSW thread where someone didn't understand how HSW added a 4th arithmetic unit. An automated design MAY force compromises (like clustering).
But I'll be fair: automation has its uses toward the end. A late bug or a late-discovered miscorrelation near tapeout really sucks in a custom datapath. I had to make several late saves, and it was rough.
Makes sense. You can almost certainly get more regfile ports/larger reorder buffer into a custom/hand-implemented design. It's been a long time since I worked on a design aimed at high single-thread performance, so maybe I've just been more interested in the huge area savings that extensive use of automation produces (look at all the whitespace in Apple's A6 core!)... if you're aiming for high frequency / single-thread performance maybe you're still getting benefit (although I read a paper by one of the STI companies that showed a synthesized Cell ended up smaller & faster...). I think the ALU example is less clearly an advantage of custom design - an automated design may be enough smaller that the wire lengths reduce enough to tolerate a slightly inferior implementation from a logic perspective.
I have just read the excellent article by Anand: Intel's Haswell Architecture Analyzed: Building a New PC and a New Intel
As I read through the article about all these things Intel is doing to improve their future processors, one thought kept recurring in my mind: what will be AMD's answer to this change? How will they respond to that feature?
My personal feeling is that with Haswell, Intel might open up a lead on the CPU side that AMD might never reclaim.
AMD's APUs might still have the 'overall' edge on the iGPU side, but their lead in TDP-limited mobile SKUs might be all but eroded. As it is, against Ivy Bridge their lead in mobile iGPUs is much smaller than on the desktop, all down to TDP. With Haswell, in the 15-watt arena Intel might even pull ahead on the iGPU side.
Is there anything on the AMD roadmap that can keep them in contention? Or will they play the price game: focus exclusively on low cost (sub-$100) APUs to gather volume and survive?
How do you see AMD reacting to, and surviving, Haswell?
It again definitely depends on what the logic is. I have seen some area reduction, but in another case I wanted to move a "simple" datapath from a custom block to a synthesized block. I saw how much area it took in the custom design, took that away from the custom side, and gave it to the automated block. The end result was unroutable, had worse timing, and needed more area. We quickly punted it back to custom. Again, it really depends on how aggressive the design is; where I work, we pack things in so tightly that I'd bet money an automated design couldn't make it fit.
So yeah, it's case by case. The generalization that automation will give you a more compact design isn't always true. So what ends up happening is a hybrid: you manually draw in the logic, placement, and wiring for the toughest areas, do enough manual work for what's important, and dump the other parts of the design to automation. I have at some points debated automating more of what I do, but the line between automation and custom gets blurry. Is it automated if I tell the tool exactly what logic, placement, and wiring to use for a specific piece of logic, but leave it to do the work for me? Sort of like being a manager with a robot minion.
Hi, while the current iGPUs from Intel are way better than the Ironlake-based designs (GMA X3100/GMA 4500), they seem to suffer heavily with complex geometry. Take a look at a Civ V late game: even an HD 4000 is 3x slower than Trinity.
In non-AAA games the situation is somewhat similar, with lower-than-expected performance: it seems that the HD 4000 lacks the great driver/software support that integrated Radeons have.
I really think the two are related: Civ V doesn't use a lot of instancing (which saves draw calls), and the Intel driver has high overhead. When they get the new driver, performance in specific scenarios (like Civ V) should substantially improve, so it's more in line with the average, and you'll probably see others gain somewhat based on how much geometry processing is required. I assume RTSes like Starcraft II might be among the bigger gainers (aside from Civ V).

To prepare for future integrated GPUs, which will be even faster, Intel is planning to reduce the driver overhead to comparable levels when measured in CPU cycles per draw call. To accomplish this, a new graphics driver architecture is expected later this year.
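A toy model of the "CPU cycles per draw call" framing above, sketched in Python. All the numbers and names here are invented for illustration; the only point is how instancing collapses many per-object draw calls into one, which is exactly where a high-overhead driver hurts most.

```python
# Toy model of CPU-side driver cost per frame, assuming the cost is
# dominated by a fixed overhead per draw call. All numbers are made up.
def frame_driver_cycles(objects, cycles_per_call, instanced=False):
    # With instancing, many identical objects share a single draw call;
    # without it, each object issues its own call.
    calls = 1 if instanced else objects
    return calls * cycles_per_call

units = 5000  # e.g. a unit-heavy late-game Civ V map
print(frame_driver_cycles(units, 10_000))        # 50,000,000 cycles
print(frame_driver_cycles(units, 10_000, True))  # 10,000 cycles
```

Under this (simplified) model, a driver with 2x the per-call overhead looks fine in an instancing-heavy game but falls badly behind in a draw-call-heavy one, which matches the Civ V pattern described above.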
First, the Ironlake GPU is the one in Arrandale/Clarkdale. The GMA X3x00 is 4th Gen, while the GMA X4500 is 5th Gen; Ironlake is Gen "5.75." They got a big boost in geometry performance with Ironlake, when they brought in Hierarchical Z support (which vastly improved memory bandwidth usage and improved geometry performance as well).
For Civ V, even Anand mentioned that it's likely due to drivers. RWT mentions something related to drivers as well: http://www.realworldtech.com/ivy-bridge-gpu/2/
Also, Haswell has a dedicated Resource Streamer to further help with the driver load. Overall, the mobile HD 4000 is on average only 20% behind Trinity, so Haswell GT2 has a chance of being ahead.
My personal feeling is that with Haswell, Intel might open up a lead on the CPU side that AMD might never reclaim.
Well, if the GT2 is 2x faster than the HD 4000, that's faster than Trinity. What the GT3 will be, we shall see.
From the article:
"You can expect CPU performance to increase by around 5 - 15% at the same clock speed as Ivy Bridge. "
A 5-15% boost is really tiny for a tock; if anything, I think AMD's moving target is slowing down. Even IVB, as a tick, managed 10% in many cases. That said, AMD was also quoting a 10% improvement a year, so I don't know that they'll "catch up," not for a few years at least, but at the very least it doesn't put them further behind if both are going up ~10%.
Depends. A mild IPC increase with a significant clock increase is a hefty improvement. We've not seen a significant clock increase in a while though.
And iGPUs are pretty much in that limbo of all being good enough for normal use, and none being good enough for gaming. Just like gluing more cores together was low-hanging fruit, the iGPU is the new low-hanging fruit (but a pretty bad thing to be spending so much die space on).
Yeah, the die size seems like a waste if you're not even going to use the iGPU. Given how much die area it takes, I wish there were models with either no IGP and higher clock rates, or no IGP and six cores, in more affordable packages.
When you start looking at transistor counts, it gives an indication of the best one can expect from an iGPU. IVB quad is something like 1.4B transistors including the iGPU. AMD's lowest-end retail 7xxx GPU is 1.5B transistors. The 7970 is 4.3B (roughly three times the transistor count of IVB quad). The 680 is 3.5B transistors.
The options you end up with for good graphics performance at this point are either a die so bloated that no one will be able to afford it, or decent GPU speed at the expense of, you know, the point of a CPU....
Add in the problem of memory bandwidth and latency, and it doesn't paint a pretty picture.
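The transistor-count comparison above, as quick arithmetic (the figures are the ones quoted in the post, in billions of transistors):

```python
# Transistor counts in billions, as quoted above.
ivb_quad = 1.4     # Ivy Bridge quad-core die, iGPU included
radeon_7970 = 4.3
gtx_680 = 3.5

# A high-end discrete GPU alone carries roughly 3x the transistors of
# the entire IVB quad die (CPU cores and iGPU combined).
print(f"7970 / IVB quad: {radeon_7970 / ivb_quad:.1f}x")  # 3.1x
print(f"680  / IVB quad: {gtx_680 / ivb_quad:.1f}x")      # 2.5x
```

Which is the point being made: matching even a midrange discrete card on-die would swallow most of the CPU's transistor budget before memory bandwidth is even considered.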
Intel17 said: The problem is, 10% is a relative term.
Suppose AMD's core is at a normalized performance level of 1.00 and Intel's is at 1.75. A 10% increase takes AMD to 1.10, but a 10% increase takes Intel to about 1.93; Intel's absolute gain (0.175) is nearly twice AMD's (0.10), so the gap widens even though both improved by "the same" 10%.
Something to think about.
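The point above generalizes over multiple generations: equal percentage gains leave the performance *ratio* unchanged but widen the *absolute* gap every year. A quick sketch, using the normalized starting values from the post:

```python
# Equal relative gains keep the performance ratio constant
# but widen the absolute gap each generation.
amd, intel = 1.00, 1.75  # normalized starting performance (from the post)
gain = 0.10              # 10% per-generation improvement for both

for gen in range(1, 4):
    amd *= 1 + gain
    intel *= 1 + gain
    print(f"gen {gen}: AMD {amd:.3f}, Intel {intel:.3f}, "
          f"ratio {intel / amd:.2f}x, gap {intel - amd:.3f}")
# The ratio stays 1.75x throughout, while the absolute gap
# grows from 0.75 toward ~1.0 after three generations.
```

So "both improving 10% a year" never closes the gap; AMD would need a sustained *higher* percentage gain than Intel to catch up.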
Not sure what that has to do with my desire for an iGPU-free die with higher CPU performance?
I'm saying that, at this point in time, I agree this focus on the iGPU is silly.
AMD is more or less competitive in the sense that it plays in a similar performance space to the processors produced by the rest of the industry. But they are getting squeezed pretty hard by Intel's fabs and massive design budget. Then again, Intel is getting squeezed by ARM. You have to remember Intel doesn't ship anywhere near as many CPUs as the ARM ecosystem does (once you take into account the companies that buy, modify, and sell the design). In the CPU world, Intel is a niche player as well; AMD is a niche within that niche.