Geekbench 3 Sandy Bridge vs. Apple Cyclone IPC comparison

Page 4 - AnandTech Forums

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
For non-engineering initiatives (like aesthetics, graphic design, brand management, predicting the tastes/preferences of consumers, etc.) it takes more than money. Apple has Jony Ive (and had Steve Jobs before he died) to manage their design/aesthetics, and Steve Jobs had great influence managing the brand.

But for engineering initiatives (and I should have made that clearer; I just assumed I was already clearly talking about engineering, but reading it now, I guess I wasn't), it's that simple. More money, better result. If an engineering outfit outspends its rival in R&D by double, triple, quadruple or more, it will consistently deliver a superior product.

No matter how much of a visionary and a MacGyver the David team is, the Goliath team will always produce a better, faster, more efficient product if they have at their disposal a surplus of hundreds of millions of dollars in R&D compared to the scrawny budget of their David-like competitor. It's simply the way of science and engineering in our modern world. And that's the reality that has gripped the Intel vs AMD "fight", and one that won't be the case for Apple.

This is just plain wrong.

If it were true, Apple would never have risen from the ashes of the 1990s to challenge MS and Intel.

Nor would ARM have risen to be a threat in the first place.

Nor would Microsoft and Intel have ever risen up to challenge IBM, HP, MIPS, and DEC, all of whom made their own chips and servers/OSes, and only two of which even still exist.

History both old and new is replete with examples that totally refute your statements.
 

tempestglen

Member
Dec 5, 2012
81
16
71
Intrinsity
From Wikipedia, the free encyclopedia
Intrinsity was a privately held, Austin, Texas-based fabless semiconductor company; it was founded in 1997 as EVSX on the remnants of Exponential Technology and changed its name to Intrinsity in 2000. It had around 100 employees and supplied tools and services for highly efficient semiconductor logic design, enabling high-performance microprocessors with fewer transistors and low power consumption. The acquisition of the firm by Apple Inc. was confirmed on April 27, 2010.[1]

Products

Intrinsity's main selling point was its Fast14 technology, a set of design tools implemented in custom EDA software, for using dynamic logic and novel signal encodings to permit greater processor speeds in a given process than naive static design can offer.

Concepts used in Fast14 are described in a white paper[2] and include the use of multi-phase clocks so that synchronisation is not required at every cycle boundary (that is, a pipelined design does not require latches at every clock cycle); 1-of-N encoding, where a signal with N states is carried as a voltage on one of N wires with the other N-1 grounded, rather than being carried on log(N) wires which can be in arbitrary states; and a variety of sophisticated routing algorithms, including ones which permute the order of the wires in a bundle carrying a 1-of-N signal in such a way as to reduce noise exposure, and ones which allow complicated gates to 'borrow' delay from simple ones to allow a shorter clock cycle than a more pessimistic design approach permits. Converters between the two signal encodings are readily available, and are useful for interfacing to blocks of static logic.

This technology has been used to implement ARM, MIPS and Power Architecture processors.
 
Last edited:

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
This is just plain wrong.

If it were true, Apple would never have risen from the ashes of the 1990s to challenge MS and Intel.

Nor would ARM have risen to be a threat in the first place.

Nor would Microsoft and Intel have ever risen up to challenge IBM, HP, MIPS, and DEC, all of whom made their own chips and servers/OSes, and only two of which even still exist.

History both old and new is replete with examples that totally refute your statements.

When Apple reinvented themselves, it wasn't just an engineering feat, which is why it isn't a matter of money. It falls into the first category I mentioned - predicting consumer preference, brand management, subjective improvement of aesthetics, etc. These are non-engineering feats, and outside of what I was saying.

As for IBM, DEC, DG, Fujitsu, Wang, etc, they suffered the same fate as Kodak, and this fate is almost largely inescapable, so it wouldn't be surprising if the current players today (Apple, Intel, etc) end up as them in due time: a revolution in tech happened, and they were too slow to adapt, or did not imagine they should adapt to it (example 3rd paragraph below).

But that's not the kind of engineering fight we are talking about here, right? We are just talking of "hey, build a better mobile/desktop/server processor" (let's call this "A"), we aren't talking of "predict, invent, and roll-out the next paradigm of computing" (let's call this "B").

"A" is a slugging match determined by how much R&D dollars you can afford. This is why as an industry matures, the trend is always the same - from several players at first, it ends up with much fewer but much larger competitors. How many graphics card companies used to exist before? How many do now? How many times was Intel an underdog to AMD, vs the other way around?

"B" is chaotic and largely unpredictable, because the way tech progresses and unfolds is unpredictable. Companies of all sizes rise and fall as the tech they are based on become popular and then become obsolete. Kodak is my favorite example. They had the market cornered. But then tech had a revolution, and film was no longer needed in the future - what does that do to a company whose business was film and had "assets" of film factories that are essentially going to be just liabilities?

So you can't conflate the circumstances of "A" vs the circumstances of "B".

Here, we aren't talking of game-changing, unforeseen tech revolutions. We are just talking of "A", which is just how to build a faster, more efficient IC. And it's just a matter of money. For example, if AMD had $10B to spend in Bulldozer R&D or whatever ridiculously large amount, Bulldozer would not have been as late, and it would not have ended up as compromised, and it would have been a far better competition to its Intel contemporaries. But they didn't, and so Bulldozer wasn't - just as Barcelona wasn't, and just as Steamroller won't be.

I hope that clears that up. That's simply how our science works now. Unless we are talking of coming up with a tech revolution / paradigm-shift, engineering shoot-outs are dominated by R&D budgets. It doesn't matter if it's making a faster x86 server processor, a more efficient laptop chip, a better jet-fighter, an improved main battle tank, or even a new drug. If Team 1 can outspend Team 2 by hundreds of millions to billions of dollars in R&D, then Team 1 will consistently deliver a superior product, year after year (maybe a dud or two in their history, but overall they will be consistently better).
 
Last edited:

tempestglen

Member
Dec 5, 2012
81
16
71
When Apple reinvented themselves, it wasn't just an engineering feat, which is why it isn't a matter of money. It falls into the first category I mentioned - predicting consumer preference, brand management, subjective improvement of aesthetics, etc. These are non-engineering feats, and outside of what I was saying.

[...]


After acquiring Intrinsity, P.A. Semi, and other elite CPU design groups, Apple is definitely one of the best low-power, high-performance chip designers in the world. And with Apple's huge budget for chip development, only Intel can match Apple's Swift/Cyclone designs now and in the future. That is an engineering feat.

For Intel, Apple is a bigger threat than AMD, and more dangerous than Qualcomm and Samsung: not only will Bay Trail Atom never make it into an iOS device, but Haswell may also be dropped from the MacBook line.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
When Apple reinvented themselves, it wasn't just an engineering feat, which is why it isn't a matter of money. It falls into the first category I mentioned - predicting consumer preference, brand management, subjective improvement of aesthetics, etc. These are non-engineering feats, and outside of what I was saying.

[...]

Apple reinventing itself wasn't just an engineering feat - but it required an engineering feat.

All of these usurpers required such.

It's true, these companies are not usually beaten by someone playing on their turf. Intel didn't get big by making mainframes; it got big by making them irrelevant.
 

Eug

Lifer
Mar 11, 2000
23,752
1,285
126
Hmmm... The just-throw-money-at-it-until-we-win approach is a sure way to run a company into the ground. Sure, having gobs and gobs of money is a huge advantage, but money alone doesn't make for a good business plan.
 

jfpoole

Member
Jul 11, 2013
43
0
66
Clang is heavily backed by Apple. It's essentially the same as using ICC on Intel.

Instead, subpar compilers were used with x86.

We haven't seen any evidence that suggests Clang's ARM code generation is ahead of GCC's x86 code generation in the way that ICC's x86 code generation is significantly ahead of GCC's x86 code generation.

We also haven't seen any evidence that suggests Clang engages in any "benchmark shenanigans" for lack of a better phrase. It helps that Clang, LLVM, and GCC are open-source, making it harder for companies to implement these shenanigans in the first place.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
We haven't seen any evidence that suggests Clang's ARM code generation is ahead of GCC's x86 code generation in the way that ICC's x86 code generation is significantly ahead of GCC's x86 code generation.

We also haven't seen any evidence that suggests Clang engages in any "benchmark shenanigans" for lack of a better phrase. It helps that Clang, LLVM, and GCC are open-source, making it harder for companies to implement these shenanigans in the first place.

You won't be able to convince the Intel apologists here. When Intel wins with ICC over AMD it's fair game, but when Apple/ARM wins over Intel it's suddenly compiler foul play.
 

386DX

Member
Feb 11, 2010
197
0
0
We haven't seen any evidence that suggests Clang's ARM code generation is ahead of GCC's x86 code generation in the way that ICC's x86 code generation is significantly ahead of GCC's x86 code generation.

We also haven't seen any evidence that suggests Clang engages in any "benchmark shenanigans" for lack of a better phrase. It helps that Clang, LLVM, and GCC are open-source, making it harder for companies to implement these shenanigans in the first place.

http://support.primatelabs.com/disc...3-compile-options-for-ios-and-for-android-arm

Geekbench 2 for iOS is built with GCC 4.2, while Geekbench 2 for Android is built with GCC 4.6. We use conservative compiler optimizations (-Os) on both platforms.
Geekbench 3 for iOS is built with Clang 3.3, while Geekbench 3 for Android is built with GCC 4.8. We use aggressive compiler optimizations (-O3 -ffast-math -fvectorize) on both platforms.
One of the reasons for the big jump in iOS scores is that GCC 4.2 had poor ARM code generation, especially when compared with more modern compilers like Clang 3.3 and GCC 4.6. We think switching to the latest compilers has "leveled the playing field" (so to speak) between Android and iOS.
Do you understand why Geekbench isn't a good "true" measurement of hardware performance, and why sites like AnandTech have always said you can't compare Geekbench scores between different platforms?

The Geekbench guys changed compilers to Clang to "level the playing field", or to put it another way, to give iOS a higher score. I don't disagree with using the best compiler available, because the reality is that anyone writing a program, given the choice, will use the best compiler they have, whether it's GCC, Clang, ICC, etc. The problem I have in Geekbench's situation is that I don't think iOS needed the playing field leveled, because it already did better than Android on the same hardware with the old Geekbench.

iPhones tend to use their own custom CPU, so it's nearly impossible to find a direct hardware comparison on the Android platform to see how much the OS affects the Geekbench scores.

The closest hardware we can compare is the original Galaxy S vs the iPhone 4/iPad (original). While the A4 SoC and Samsung's S5PC110A01 SoC are different, the CPU core (Hummingbird) was jointly developed by Samsung and Intrinsity (which Apple eventually purchased), so CPU performance should be the same at the same frequency. However, this is not the case with Geekbench 2; the results are like this:

Galaxy S (1 GHz) = 395
iPhone 4 (.8 GHz) = 390
iPad (1 GHz) = 470

**Note: Scores were taken unscientifically by sorting the scores of each device and taking the average score of the middle page, i.e. if there were 10 pages of scores, the average of the 5th page; this eliminates outliers.**

Notice the same CPU running on iOS scores nearly 20% higher than on Android. As a true hardware test this shouldn't happen, and this is why I don't understand why they felt they needed to "level the playing field" when iOS was already scoring better on the same hardware. As an overall platform benchmark I don't think the Geekbench score is all that bad; the slower-clocked iPhone 4 in the real world performs just as well as or better than the Galaxy S, but that's as much software as it is hardware. It's just foolish to use one benchmark to make a general hardware comparison, especially when that particular benchmark is heavily influenced by the platform it runs on.

You won't be able to convince Intel apologetics here. When Intel wins with ICC over AMD it's fair game but when Apple/ARM wins over Intel it's now compiler foul play.

You do realize that ICC improves performance on AMD CPUs just as much as it does on Intel's. Look at how much the compiler can affect performance:
http://www.behardware.com/articles/847-15/the-impact-of-compilers-on-x86-x64-cpu-architectures.html

You wanna know something even more shocking? The old AMD Phenom CPUs actually gained the most when using ICC; it's only the Bulldozer-series CPUs that see less of a gain. The point is that ICC on the x86 platform generally produces performance gains in the neighborhood of 30-75% over GCC. Regardless of what CPU brand you use, do you not want the program to use the "best" compiler so it produces the best performance? Apparently you'd rather have your program be 35% slower, because if ICC were used you'd only gain 35% performance while an Intel CPU would gain 55%, and you don't want your CPU to look "as inferior".
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
This thread is plain dumb.

The benchmarks inside Geekbench are old, and valid. They represent the most frequently performed operations in a general-purpose computing environment.

If you want to derive real-world performance from these benchmarks, you use the most popular compilers. That would be GCC.

If you want to do a hardware comparison, which frankly is irrelevant for the user and for most developers, you write the benchmark by hand in assembly with no OS, only a bootstrap loader.

The argument that the OS does this much damage to a bunch of low-level benchmarks is nuts. If Android and Windows interfere that much with things like Linpack, memory stream ops, etc. in these benchmarks, then Google and MS deserve to fail.

The likely reason the A7 does so well is simple. Heterogeneous computing, done right. Google it!!
 

FwFred

Member
Sep 8, 2011
149
7
81
From the SPEC CPU next gen benchmark search

Criteria SPEC considers important for the next CPU benchmark suite. A good benchmark candidate is:

  • Used by real users
  • Compute bound, or can have its compute bound portion excerpted
  • Portable or can be ported to multiple hardware architectures and operating systems with reasonable effort
  • Represents the state of the art for the given field
  • Derived from a representative application
  • Capable of solving problems of varying sizes. SPEC CPU2006 used 3 workloads, in various capacities, for its benchmarks.
  • Reasonably predictable as to its code path. For example, minor differences in floating point accuracy across platforms should not cause the program/application to do wildly different work on those platforms.
 
Last edited:

FwFred

Member
Sep 8, 2011
149
7
81
I posted the SPEC CPU benchmark search criteria to show why SPEC is used, and why micro-benchmarks with minimal memory sizes and simple access patterns aren't sufficient in the high-performance CPU world.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
I posted the SPEC CPU benchmark search criteria to show why SPEC is used, and why micro-benchmarks with minimal memory sizes and simple access patterns aren't sufficient in the high-performance CPU world.

If you wanted to pick a benchmark more irrelevant to consumers you couldn't have done better.

SPEC heavily favors processors with a large cache and very high memory bandwidth, because it's simulating big-iron systems doing continuous processing of massive data sets. This is not something you would be doing on your cell phone. In fact, it's something 99.99% of people won't be doing on their desktop.

And for years I've seen the PC press treat it with disdain, now suddenly a cellphone processor makes it relevant? Really? I mean, seriously?
 

FwFred

Member
Sep 8, 2011
149
7
81
If you wanted to pick a benchmark more irrelevant to consumers you couldn't have done better.

[...]

And for years I've seen the PC press treat it with disdain, now suddenly a cellphone processor makes it relevant? Really? I mean, seriously?

Are you suggesting Geekbench instead? Really? I mean, seriously?

I am bringing it up because the thread title is comparing Sandy Bridge with Cyclone. How are the micro-benchmarks in Geekbench relevant for comparing against a mainstream laptop CPU?

As far as relevance to cell phone workloads, I would pick SPEC over Geekbench. At least it tries to run actual applications with non-trivial memory access patterns.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
The Geekbench guys changed compilers to Clang to "level the playing field", or to put it another way, to give iOS a higher score. I don't disagree with using the best compiler available, because the reality is that anyone writing a program, given the choice, will use the best compiler they have, whether it's GCC, Clang, ICC, etc. The problem I have in Geekbench's situation is that I don't think iOS needed the playing field leveled, because it already did better than Android on the same hardware with the old Geekbench.

But look at the other side of this: they're right when they say GCC 4.2 (positively ancient by today's standards) generated much worse ARM code. You really have no business using that compiler in comparative tests today. Regardless of whether or not using Clang is going for the best available, it's going to be much closer to what an up-to-date compiler will yield than GCC 4.2.

You wanna know something even more shocking? The old AMD Phenom CPUs actually gained the most when using ICC; it's only the Bulldozer-series CPUs that see less of a gain. The point is that ICC on the x86 platform generally produces performance gains in the neighborhood of 30-75% over GCC. Regardless of what CPU brand you use, do you not want the program to use the "best" compiler so it produces the best performance? Apparently you'd rather have your program be 35% slower, because if ICC were used you'd only gain 35% performance while an Intel CPU would gain 55%, and you don't want your CPU to look "as inferior".

I think you should go back and look at the difference in scores for individual tests between GCC and ICC, this is what you see:

bzip2: Close with GCC generally slightly faster
mcf: Close until you turn on SSE3, where ICC gains a 31% lead
gobmk: Close with GCC slightly faster
hmmer: ICC slightly faster (7%) until you turn on SSE3, where ICC gains a 45% lead and SSE4.1 (52%) and SSE4.2 (129%)
sjeng: Close with ICC slightly faster (10%)
h264href: Close with GCC slightly faster (7%)
astar: ICC moderately faster (13%) until you turn on SSE3 (26%)
milc: ICC fairly faster (20%) until you turn on SSE3 (162%!)
namd: ICC moderately faster (11%)
lbm: ICC much faster (141%)
sphinx3: ICC moderately faster (7%) until you turn on AVX (22%)

I think these results show something a lot different from just "ICC will make your code 30% to 75% faster than GCC", if you're really looking at it critically. You can put things into three categories more or less:

1) Code that's a little slower to a little faster than GCC
2) Code that's a ton faster than GCC once you turn on SSE3
3) Code that's a ton faster than GCC

SSE3+ really doesn't add that much outside of quite specialized instructions that are difficult for a compiler to recognize. To me this smells pretty strongly of hand-tuning the compiler to do very well in some of the subtests. In a benchmark where the total score is often publicized, it's easy to see why Intel would do this (and why they did it with AnTuTu...). I'm not going to say for sure that this is the case, and even if it is, it doesn't mean that ICC isn't performing perfectly legitimate optimization. But to take this as an indication that it's going to do much better on just about any program you give it seems inconclusive.
 

Nothingness

Platinum Member
Jul 3, 2013
2,752
1,402
136
As far as relevance to cell phone workloads, I would pick SPEC over Geekbench. At least it tries to run actual applications with non-trivial memory access patterns.
You mean like SPEC 2006 libquantum, where 50% of the time is spent in two lines of code and where inserting a single preload instruction will speed the code up by a huge amount? Yes, very representative.

The only representative part of SPEC is gcc.

I think you should go back and look at the difference in scores for individual tests between GCC and ICC [...] to take this as an indication that they're going to do much better on just about any program you give it seems inconclusive.
Where did you get these figures? Too bad 403.gcc is missing...

As you wrote, using SPEC to prove that icc is great is silly. It's well known that Intel spends a huge amount of time tuning icc for SPEC. I regularly get back to icc and see how it improves my own code; the answer is simple: when I get 5% it's doing well, and most of the time I get no speedup at all. My code, like much code, doesn't vectorize, and Intel has no access to it.
 