AnTuTu and Intel

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Many of you have seen the article posted on EETimes, here's a link: http://www.eetimes.com/author.asp?section_id=36&itc=eetimes_sitedefault&doc_id=1318857 For those interested in the state of mobile benchmarketing I recommend a read, it's pretty enlightening.

UPDATE: BDTI's president also commented on AnTuTu, covers a lot of similar material: http://www.bdti.com/InsideDSP/2013/07/11/JeffBierImpulseResponse

You may have also seen me rant on this topic before. For those interested I figured I'd give some additional analysis behind some of the things I've said.

All of the analysis is taken from disassembling the NDK library files in the APK. This can be done by:

1) Unzipping the APK - it's just a normal zip file.
2) Going to the lib directory and looking at the x86 and armeabi-v7a directories; these are used on x86 and ARM devices respectively.
3) Disassembling the libabenchmark.so files inside. For this I used objdump, which you can easily get for x86 and ARM. This gives assembly listings and a bunch of names for things like functions and global variables, which the library doesn't strip.

First, it's important to understand just what AnTuTu is. I haven't looked for information on all the subtests, but I do know that the CPU-centric integer and floating point portions are using nbench. You can find the source code here: http://www.tux.org/~mayer/linux/bmark.html

The reason I can tell it's nbench is because it uses the same function names and global variable names, and a cursory look at those functions shows they do the same thing.

So what's the big reason for the x86 performance difference between AnTuTu 2.9.3 and 3.3? On the surface you can see that they started using ICC for the x86 compilation. This is obvious because the disassembly is littered with strings that have "intel" in the name - in fact, there's even one with icc in the name: ".text.__icc.get_pc_thunk.si"

ICC is well known for high quality vectorization. An examination of the ARM disassembly shows that vectorization wasn't even enabled. The reason I can tell this is because there are no integer NEON instructions - a search for things like vadd.u32 or any other permutation of integer types turns up nothing; same for other basic operations like sub, or, and, etc, as well as load or store instructions. Floating point is a little harder to rule out from a simple search because VFP (scalar) instructions look similar to NEON ones, but what's ultimately telling is I couldn't find any usage of quad-word registers except in a few instructions that were clearly part of garbage regions (data, not real code).

AnTuTu using GCC to target vanilla ARMv7-A processors without NEON isn't that bizarre, since there is at least one such processor that lacks NEON support (Tegra 2). Nonetheless, the NDK doesn't make it that hard to include separate code paths compiled with and without NEON and pick between them at runtime. This is a standard development paradigm which Google documents. What I find really glaring is that they didn't do this, but they did compile the x86 part with ICC, which is totally non-standard and unsupported as far as the NDK is concerned.
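For illustration, here's a minimal sketch of that runtime-dispatch pattern, assuming the NDK's cpufeatures helper library; the two bench_loop_* functions are hypothetical stand-ins for a NEON build and a plain ARMv7-A build of the same routine:

Code:
/* Minimal sketch of the NDK's documented runtime-dispatch pattern, using
   the cpufeatures helper library. bench_loop_neon/bench_loop_generic are
   hypothetical stand-ins for NEON and non-NEON builds of the same code. */
#include <cpu-features.h>

extern void bench_loop_neon(void);     /* object built with NEON enabled */
extern void bench_loop_generic(void);  /* plain ARMv7-A build, no NEON */

void run_bench(void)
{
    int has_neon = android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
                   (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0;

    if (has_neon)
        bench_loop_neon();
    else
        bench_loop_generic();          /* e.g. Tegra 2, which lacks NEON */
}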

UPDATE: jhu found that vectorizing on the current GCC series used with the NDK doesn't yield any benefit (http://forums.anandtech.com/showthread.php?t=2330288), although the jury is still out on how much fiddling with the compiler flags or using a newer version of GCC could have helped things. IMO if you're going to use Intel's latest and greatest you should at least do the same with GCC.

But that's really just the tip of the iceberg. There's another advantage at play beyond picking the best compiler for the job. Here's an example:

One of the CPU tests in nbench checks how good the CPU is at performing simple bitwise operations - shifts, ANDs, ORs, etc. To do this it sets, clears, or toggles a series of bits in memory, one bit at a time. One of the functions for this is ToggleBitRun, located in nbench1.c.

Here is the function:

Code:
static void ToggleBitRun(farulong *bitmap, /* Bitmap */
        ulong bit_addr,         /* Address of bits to set */
        ulong nbits,            /* # of bits to set/clr */
        uint val)               /* 1 or 0 */
{
unsigned long bindex;   /* Index into array */
unsigned long bitnumb;  /* Bit number */

while(nbits--)
{
    bindex=bit_addr>>5;     /* Index is number /32 */
    bitnumb=bit_addr % 32;  /* bit number in word */

    if(val)
        bitmap[bindex]|=(1L<<bitnumb);
    else
        bitmap[bindex]&=~(1L<<bitnumb);
    bit_addr++;
}
return;
}
This is what the ARM code does, located at 0x46386:

Code:
   46386:    b5f0          push    {r4, r5, r6, r7, lr}
   46388:    2501          movs    r5, #1
   4638a:    e00f          b.n    463ac <benchmark_ent+0x1fc>
   4638c:    094c          lsrs    r4, r1, #5
   4638e:    f001 061f     and.w    r6, r1, #31
   46392:    fa15 f606     lsls.w    r6, r5, r6
   46396:    f850 7024     ldr.w    r7, [r0, r4, lsl #2]
   4639a:    b10b          cbz    r3, 463a0 <benchmark_ent+0x1f0>
   4639c:    433e          orrs    r6, r7
   4639e:    e001          b.n    463a4 <benchmark_ent+0x1f4>
   463a0:    ea27 0606     bic.w    r6, r7, r6

   463a4:    3101          adds    r1, #1
   463a6:    3a01          subs    r2, #1
   463a8:    f840 6024     str.w    r6, [r0, r4, lsl #2]
   463ac:    2a00          cmp    r2, #0
   463ae:    d1ed          bne.n    4638c <benchmark_ent+0x1dc>
   463b0:    bdf0          pop    {r4, r5, r6, r7, pc}
That's a pretty straightforward implementation.

Now here's what the x86 equivalent does (note that the function has been inlined, here's one instance):

Code:
   f6416:    b8 56 55 55 55           mov    $0x55555556,%eax
   f641b:    8b cb                    mov    %ebx,%ecx
   f641d:    f7 eb                    imul   %ebx
   f641f:    c1 f9 1f                 sar    $0x1f,%ecx
   f6422:    2b d1                    sub    %ecx,%edx
   f6424:    8d 34 52                 lea    (%edx,%edx,2),%esi
   f6427:    8b d3                    mov    %ebx,%edx
   f6429:    2b d6                    sub    %esi,%edx
   f642b:    0f 85 82 00 00 00        jne    f64b3 <DoBitops+0x593>

   ...

   f64b3:    83 fa 01                 cmp    $0x1,%edx
   f64b6:    0f 85 8d 00 00 00        jne    f6549 <DoBitops+0x629>
   f64bc:    8b 54 24 10              mov    0x10(%esp),%edx
   f64c0:    8d 0c 9a                 lea    (%edx,%ebx,4),%ecx
   f64c3:    8b 14 99                 mov    (%ecx,%ebx,4),%edx
   f64c6:    8b 4c 99 04              mov    0x4(%ecx,%ebx,4),%ecx
   f64ca:    49                       dec    %ecx
   f64cb:    83 f9 ff                 cmp    $0xffffffff,%ecx
   f64ce:    0f 84 f3 00 00 00        je     f65c7 <DoBitops+0x6a7>
   
   ...
   f65c7:    43                       inc    %ebx
   f65c8:    3b 5c 24 0c              cmp    0xc(%esp),%ebx
   f65cc:    0f 8c 44 fe ff ff        jl     f6416 <DoBitops+0x4f6>
What it's doing is, where possible, setting entire 32 bit runs to 0 or 1. The lines at f64c3 and f64c6 are critical. It's replacing 32 iterations of the ARM loop above with those two instructions. Needless to say, it's dozens of times faster doing it this way.
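To make that concrete, here's a rough C sketch of what the transformed loop is effectively doing - my own illustration of the word-wise rewrite, not ICC's actual output:

Code:
/* Rough illustration (not ICC's actual transformation) of the word-wise
   rewrite: whole 32-bit words are written with a single store instead of
   32 trips through the bit-by-bit loop. Types mirror nbench's assumption
   that a bitmap word is 32 bits wide. */
static void ToggleBitRun_wordwise(unsigned long *bitmap,
                                  unsigned long bit_addr,
                                  unsigned long nbits,
                                  unsigned int val)
{
    /* Leading bits, up to the next 32-bit word boundary. */
    while (nbits && (bit_addr % 32) != 0) {
        if (val) bitmap[bit_addr >> 5] |=  (1L << (bit_addr % 32));
        else     bitmap[bit_addr >> 5] &= ~(1L << (bit_addr % 32));
        bit_addr++; nbits--;
    }
    /* Whole words: one store replaces 32 iterations of the original loop. */
    while (nbits >= 32) {
        bitmap[bit_addr >> 5] = val ? 0xFFFFFFFFUL : 0UL;
        bit_addr += 32; nbits -= 32;
    }
    /* Trailing bits. */
    while (nbits--) {
        if (val) bitmap[bit_addr >> 5] |=  (1L << (bit_addr % 32));
        else     bitmap[bit_addr >> 5] &= ~(1L << (bit_addr % 32));
        bit_addr++;
    }
}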

This is what we call breaking the benchmark: the compiler applies some transformation that makes the benchmark much faster by doing a set of operations that the benchmark identifies as correct (if it even checks), but that no longer performs the intended function of the benchmark. Classic examples include omitting code entirely if the results are never read, or performing a complex computation at compile time instead of run time if the inputs can be determined to be constant (then just reporting the results).
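As a toy illustration of the "results never read" case (my own example, not taken from any real benchmark), the timed loop below can legally be deleted by the optimizer, and the harness would still report a near-zero time:

Code:
/* Toy example of a compiler legally "breaking" a benchmark: the loop's
   result is never consumed, so dead-code elimination (or constant folding,
   since the bounds are compile-time constants) can remove the work being
   timed, leaving a meaningless score. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    unsigned long sum = 0;
    unsigned long i;
    for (i = 0; i < 100000000UL; i++)
        sum += i * i;               /* never read afterwards... */

    clock_t end = clock();
    /* ...so the optimizer may drop the loop entirely. Printing or otherwise
       consuming `sum` is what keeps the measurement honest. */
    printf("elapsed: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}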

In this case I'm sure Intel could claim that they're performing a legitimate optimization. Frankly, I doubt it; this kind of optimization would be difficult to recognize and apply in generic code. It'd also be for little benefit, because I've never seen someone use code like this to set or clear huge sets of bits. That part is kind of the catch, because this optimization would make the code slower if the run lengths weren't sufficiently large. In nbench's case they are, but there's no way the compiler could have known that on its own.

What's more, this optimization wasn't present in ICC until a recent release. Somehow I don't think that they just now discovered it has general-purpose value. The more likely case is that they discovered they could manipulate AnTuTu's scores. That seems to coincide well with the third-party report that appeared showing how amazing Atom's perf/W is - using nothing but AnTuTu. Or the leaked scores seen for CloverTrail+ and now BayTrail that are AnTuTu. Is this really a coincidence?

But frankly, I blame AnTuTu in all of this. They allowed themselves to be manipulated (probably for a price), despite constantly warning against other people cheating their numbers. I don't know if they're displaying a complete lack of integrity or a complete lack of understanding of how their own software works, or something in between the two, but whatever the case I hope they lose all credibility and whatever revenue the program brings them.
 
Last edited:

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Here's what I don't get.

Intel is targeting ARM OEMs who obviously are not going to fall for this sort of thing. This is why Clover Trail gained very little traction - it's just not good enough, regardless of how much Anand or the rest of the tech press falls for these benchmarking tricks.

So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

What is the point in all this? There are no end users buying *Intel* tablets or phones - these tricks simply don't apply to this market, unlike with x86 PCs. No OEM is going to be fooled by this; it's an end-user/tech press marketing trick, but if there are no actual products that the end user can buy....?
 
Intel17

Mar 10, 2006
11,715
2,012
126
Here's what I don't get.

Intel is targeting ARM OEMs who obviously are not going to fall for this sort of thing. This is why Clover Trail gained very little traction - it's just not good enough, regardless of how much Anand or the rest of the tech press falls for these benchmarking tricks.

So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

What is the point in all this? There are no end users buying *Intel* tablets or phones - these tricks simply don't apply to this market, unlike with x86 PCs. No OEM is going to be fooled by this; it's an end-user/tech press marketing trick, but if there are no actual products that the end user can buy....?

It seems that Intel knows CT+ isn't great, but needs to keep up public appearances until it gets a credible solution on the market. It will be interesting to see how Baytrail performs across a variety of workloads.
 

Khato

Golden Member
Jul 15, 2001
1,224
278
136
So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

What is the point in all this? There are no end users buying *Intel* tablets or phones - these tricks simply don't apply to this market, unlike with x86 PCs. No OEM is going to be fooled by this; it's an end-user/tech press marketing trick, but if there are no actual products that the end user can buy....?

Precisely, it's where the 'conspiracy theory' argument kinda falls flat. At least at a high level. Sure there's likely some little group/individual that went and did this, but beyond that? Doubtful.

Anyway, excellent post Exophase.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Great post, Exophase. A friend who works on an ARM design has been ranting for a while about Antutu and Geekbench and the quality of code currently coming out of JITs on ARM... I really hate the cross-ISA situation in terms of benchmarking. The worst part is that generally-credible reviewers don't caveat their articles enough, so people actually give credit to these results. It's worse than the 80s IPC comparisons across RISC/CISC because the macroscopic workload characteristics aren't even the same.
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
There must be an awful lot of money to be made on a reliable, representative cross-ISA benchmark. You'd think more effort would be going into it by now.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

Isn't that short-sighted? Apple's iOS apps are compiled for ARM. Wouldn't a switch to Bay Trail be pretty costly to their customers and developers, given the lack of backwards compatibility without a recompile?
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Here's what I don't get.

Intel is targeting ARM OEMs who obviously are not going to fall for this sort of thing. This is why Clover Trail gained very little traction - it's just not good enough, regardless of how much Anand or the rest of the tech press falls for these benchmarking tricks.

So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

What is the point in all this? There are no end users buying *Intel* tablets or phones - these tricks simply don't apply to this market, unlike with x86 PCs. No OEM is going to be fooled by this; it's an end-user/tech press marketing trick, but if there are no actual products that the end user can buy....?

OEMs have already been fooled in the past. Read the AMD-Intel FTC settlement; some of its points report how it was proved that Intel fooled OEMs with biased benchmarks and false information against competitors.

Let us imagine a hypothetical marketing strategy that will delight the conspiracy-theory folks. Of course none of this could happen in reality, but let us try. The first phase consists of spreading fake benchmarking information. If you look around the internet you can find lots and lots of sites reporting the false AnTuTu benchmarks and claiming:

Intel's New Tablet Processor Beats The Best ARM Chip By A Huge Margin
Intel Bay Trail Benchmark Appears Online Crushes Fastest Snapdragon Arm Soc By 30%
Intel's Bay Trail Benchmarks Surfaced - ARM SoCs Left in the Dust
Once the average public is misinformed enough and believes the new Intel > ARM motto, the next phase consists of selecting a few OEMs that will use the Intel chips instead of the competition's (ARM). Intel has proven experience in such negotiations.

Third phase: once the final product is on the market, a group of selected review sites favour the Intel-inside product over the competition, using the same biased benchmarks as in the first step.

Fourth phase: fooled users buy the inferior product.

Regarding the EETimes article about this whole AnTuTu issue:

http://www.eetimes.com/author.asp?section_id=36&doc_id=1318857

The author starts the article by mentioning how he feels obligated to write because of the spreading of misinformation:

This blanket statement essentially proclaims that Intel has surpassed the entire ARM ecosystem in mobile processors for the all-important high-end smartphone segment.

I find it interesting how the author notices the weird variation in score observed among different versions of the AnTuTu benchmark:

Going from the 2.9.3 version of the AnTuTu benchmark to the 3.3 version, the overall AnTuTu scores increased 122 percent, and the RAM score increased 292 percent for the Intel processor, while the scores for the Samsung processor increased only 59 percent and 53 percent, respectively. This was just from a change in the benchmark test, not the processors. Why did the figures change so drastically for one processor and not the other with the revisions in the benchmark? This raises further concerns about the validity of the AnTuTu benchmark.



As mentioned in another recent thread, this is a typical Intel practice. The other thread shows how subsequent versions of SYSmark became much more favourable towards Intel chips and against AMD. We are seeing the same thing now, but with ARM instead of AMD and with AnTuTu playing the role of the new SYSmark.

LOL
 
Last edited:

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
There must be an awful lot of money to be made on a reliable, representative cross-ISA benchmark. You'd think more effort would be going into it by now.

IMO, what you really want is a benchmark that reflects realistic workloads. If realistic workloads have regular patterns that hardware and compilers can take advantage of, I think that should count as credit. Some people try too hard to develop code that's super random to defeat all performance optimizations to uncover "true hardware performance". Go too far and you have software patterns that you'll never see in the real world and the benchmark becomes equally useless.
 

SlimFan

Member
Jul 5, 2013
91
11
71
Why is AnTuTu the only benchmark with this type of breakdown? Why not geekbench or others?
 
Intel17

Mar 10, 2006
11,715
2,012
126
Why is AnTuTu the only benchmark with this type of breakdown? Why not geekbench or others?

Because ARM chips win Geekbench despite evidence that Geekbench is intentionally crippled on Intel processors. From a recent interview with Silvermont's lead architect,

Q: I saw very interesting comparisons of Silvermont with Saltwell in the disclosure. What puzzles me, though, is that it is very difficult to get a read on CPU-limited performance of these low power micro-architectures. For example, a benchmark like "Geekbench" paints "Saltwell" in a rather unflattering light compared to the ARM contemporaries, but then you see benchmarks such as AnTuTu showing a 2C/4T Saltwell taking leadership positions against a 4C/4T "Krait" or even Cortex A15 in integer and memory bandwidth, while even staying competitive on floating point! Could you help me to understand how Saltwell compares to the competition from what you have seen with more sophisticated measurements, and then from there I have a lot better context to think about Silvermont performance?

A: Geekbench is interesting: you look at the results, and the main “unflattering” results are in a few sub-benchmarks, where the IA version is set up to handle denorms precisely, and the input dataset is configured to be 100% denorms. This is not normal FP code, not a normal setup, and possibly not even an apples-to-apples comparison to how ARM is handling these numbers. So we view this as an anomaly. (The Geekbench developer agrees with us)
Saltwell trails A15 in raw IPC, but its higher frequency and threads are able to help compensate.
Saltwell trails Krait on very basic workloads like DMIPS, but on more complicated workloads Saltwell’s robust architecture will pull ahead.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I gave AnTuTu this kind of breakdown because:

a) It's an extreme outlier vs every other benchmark I've seen
b) It's been given special attention in what's most likely an Intel sanctioned report
c) It's what shows up in leaks, including the recent one with Silvermont
d) It's gaining unprecedented popularity, much more so than Geekbench

I hardly see Geekbench looked at, but just a basic glance lends a lot more to its credibility - given that it can actually describe its benchmarks and their validity. And while it's synthetic, it's not nearly as bad as most, because it uses real libraries that are in real use.

However, I would point out that the Geekbench floating point tests running on 32-bit Windows builds are much worse on Atom than they should be - I haven't looked at it, but they're probably compiled with x87 (since scalar SSE isn't guaranteed to be available for 32-bit x86 and hence isn't a default for compilers, unlike with x86-64), which is crippled on Atom due to poor decoder throughput. I don't believe for a minute the claim that it's being intentionally crippled with denormals; that'd perform very badly on ARM CPUs too, and seriously, that's such an utterly deranged plan - especially one for the Geekbench developers to agree with! This is not something you'd do accidentally. If they knew it was happening and didn't like it, they should have (and would have) changed it. And moreover, if an Intel representative is going to say something like that then they need to actually put up evidence.

Note how the same lead engineer says nothing about AnTuTu, making it seem as if it's the one accurate result here.

Personally, whenever I mention Geekbench I say to look at the integer tests in isolation and throw out the floating point tests (and maybe the memory tests and maybe the stream tests - while the latter should be bandwidth limited, FP can cause problems there). If you really want me to do an analysis of it I can try something later - what I can say is that the ARM codegen for the stream tests is really bad (but this is no one's fault but GCC's). The integer results vs Saltwell are pretty consistent with a bunch of other native tests I've seen, including Phoronix and even the big IPC paper done a while ago, despite that using a much older GCC (ARM codegen was a lot worse with older GCC; it took a long time to start catching up).

And I don't really feel that bad about ignoring FP tests because FP performance on mobile devices is substantially less relevant than integer performance.

Seriously, if you're asking why I said anything about AnTuTu and not Geekbench, you may as well ask why I picked it and not every other benchmark. If you ask me, ALL of them should be analyzed and scrutinized and nothing should be trusted until fully vetted. But obviously I can't do all that, and I went after a target that made itself incredibly obvious. ARM or others could be cheating, certainly, although it's an awful lot harder for them to do it when they don't have their claws on a special proprietary compiler (armcc is not only more or less being phased out, but they actually legally prohibit anyone from publishing benchmark results with it... for some reason).

And if you really must, go ask anyone on a remotely educated forum what they think about AnTuTu vs what they think about Geekbench. Go ask on RWT.

Intel17 said:
Because ARM chips win Geekbench despite evidence that Geekbench is intentionally crippled on Intel processors. From a recent interview with Silvermont's lead architect,

So you're saying I'm intentionally turning my eyes when the tables are turned and ARM's cheating vs Intel? Wow, you don't even want to hear what I have to say to you on that. Do your own digging on Geekbench's indiscretions deliberately favoring ARM and I'll applaud you.. but well, unfortunately that'll have to consist of more than claims from an Intel engineer, I'm not going to take them purely at face value. Although for what it's worth, I hadn't even seen this interview yet.
 
Last edited:

SlimFan

Member
Jul 5, 2013
91
11
71
I didn't mean to imply you had been lazy about breaking down AnTuTu vs anything else; I actually think it's pretty neat that anyone has taken the time to deconstruct specific benchmarks. I was just curious why this one in particular, and you answered that.

I didn't take the Intel engineer's response to say that there was anything intentional about the Geekbench issue mentioned; rather it seemed like there was just a difference between the architectures that somehow popped up. I can see how something like this could happen for cross platform benchmarks, especially around FP code.

All of this just seems to confirm that mobile benchmarking is somehow even worse than benchmarking in general, which is pretty impressive. :-(

Next thing you know, someone will find out that all of these Pi benchmarks running around don't really even calculate Pi anymore...
 
Intel17

Mar 10, 2006
11,715
2,012
126
So you're saying I'm intentionally turning my eyes when the tables are turned and ARM's cheating vs Intel? Wow, you don't even want to hear what I have to say to you on that. Do your own digging on Geekbench's indiscretions deliberately favoring ARM and I'll applaud you.. but well, unfortunately that'll have to consist of more than claims from an Intel engineer, I'm not going to take them purely at face value. Although for what it's worth, I hadn't even seen this interview yet.

Not at all, I mostly just think GeekBench gives weird results, although your explanation of it not being optimized for SSE would explain it. Integer scores look about right (actually maybe a bit high compared to quad Kraits, which is suspicious...maybe it really likes HT?)

When I ran GeekBench on an AMD A6 @ 1.5GHz (Llano), I got scores for FPU that were dismal compared to even what we are seeing from some of these ARM chips, but of course integer was fine.
 
Last edited:

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Not at all, I mostly just think GeekBench gives weird results, although your explanation of it not being optimized for SSE would explain it. Integer scores look about right.

Okay, well I understand you probably didn't mean this then, but I hope you understand why I reacted the way I did. He asked why I did AnTuTu and not Geekbench and you said it's because Geekbench scores better on ARM and is deliberately crippled on x86 (implying of course that I'm aware of such a problem). That makes me sound pretty bad.
 
Intel17

Mar 10, 2006
11,715
2,012
126
Okay, well I understand you probably didn't mean this then, but I hope you understand why I reacted the way I did. He asked why I did AnTuTu and not Geekbench and you said it's because Geekbench scores better on ARM and is deliberately crippled on x86 (implying of course that I'm aware of such a problem). That makes me sound pretty bad.

Exophase,

You know that I have great respect for you and would not suggest that you are being dishonest. This thread is likely to get some serious attention from AnTuTu and the mobile community, and I am glad that you exposed this.

AnTuTu is officially worthless as far as I'm concerned, although props to the Intel compiler team for trying to help rickety old Atom fight the good fight before it gets put down, I suppose
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
IMO, what you really want is a benchmark that reflects realistic workloads. If realistic workloads have regular patterns that hardware and compilers can take advantage of, I think that should count as credit. Some people try too hard to develop code that's super random to defeat all performance optimizations to uncover "true hardware performance". Go too far and you have software patterns that you'll never see in the real world and the benchmark becomes equally useless.

Very true. I don't see too many people complaining that successive video card driver releases basically optimize the drivers by way of profiling specific games and rolling out optimizations that are absolutely game and hardware specific.

In the end these video driver profiles improve gameplay, improve the performance of the hardware when processing the given software, and the consumer gets more for their money.

If that is what compilers are doing, and in doing it the performance optimizations are being captured and represented in benchmark scores, then that is a good thing.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Exophase,

You know that I have great respect for you and would not suggest that you are being dishonest. This thread is likely to get some serious attention from AnTuTu and the mobile community, and I am glad that you exposed this.

AnTuTu is officially worthless as far as I'm concerned, although props to the Intel compiler team for trying to help rickety old Atom fight the good fight before it gets put down, I suppose

Praising Intel for skewing the benchmarks... Intel's competition must feel vindicated.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
I'm with SiliconWars on this one. OEMs don't give a flying bleep about Intel's benchmark tricks and games, so why even bother?
 

Third_Eye

Member
Jan 25, 2013
37
0
0
Here's what I don't get.

Intel is targeting ARM OEMs who obviously are not going to fall for this sort of thing. This is why Clover Trail gained very little traction - it's just not good enough, regardless of how much Anand or the rest of the tech press falls for these benchmarking tricks.

So Intel goes to Samsung or Apple with Silvermont... and neither of them even bothers with AnTuTu, for obvious reasons. Silvermont will live or die on merit and AnTuTu won't even be considered.

What is the point in all this? There are no end users buying *Intel* tablets or phones - these tricks simply don't apply to this market, unlike with x86 PCs. No OEM is going to be fooled by this; it's an end-user/tech press marketing trick, but if there are no actual products that the end user can buy....?

You have got it all wrong. These press releases and this benchmarketing are purely meant to stem the series of downgrades that the various brokerage firms and analysts have placed on Intel:

http://seekingalpha.com/article/154...n-meets-the-eye?source=email_rt_article_title

Keep in mind the Street always looks for growth as an indicator to push the stock price, more so than profitability. See how much ARM is valued compared to Intel. If there is no growth, the investors who make money on volatility avoid the stock like the plague. I worked for 3 years at a famous investment bank (in the IT field, though) and this is what I learnt: they like volatility, where as middlemen they can make money during both the up cycle and the down cycle.

For example, on Seeking Alpha there is a mindless reciter/jerker of these kinds of things called Sneha Shah:
http://seekingalpha.com/article/153...y-a-huge-margin?source=email_rt_article_title
Every news item of this kind will have a "bull" (shit) article by her.

There is another dude, Ashraf Essa, who does some analysis but in general is bullish long term on Intel, AMD, and QCOM, and heavily short on ARM Holdings PLC. His articles try to justify how the "street" has got it totally wrong and how Intel is a great investment. But he too tends toward over-the-top pronouncements, though after some sanity he tones them down a little.

http://seekingalpha.com/article/145...t-22-nanometers?source=email_rt_article_title

http://seekingalpha.com/article/151...ct-from-fiction?source=email_rt_article_title

http://seekingalpha.com/article/150...ore-than-likely?source=email_rt_article_title

He believes that by selling cheapo Atoms, Intel will be able to match/beat the profitability that it currently enjoys by being the kingpin of the x86 world. If you look at the comments section, it will be something like 50% rabid fans, 30% mild fans, 20% sceptics/heretics.

So it is to these people that Intel is happy to provide the leaks, so as to toot its own horn.

Already, if you look, with the launch of the Droid Razr HD series, Googlerola announced a partnership with Intel for phone SoCs. Yet, besides the Razr i, how many phones has Moto released with Intel? What was the follow-up when Clover Trail was introduced? Similarly, the win for the Clover Trail Atom in the Galaxy Tab 3 is definitely cost based (why else would Samsung give it a less-than-HD screen if it was a high-end win?). So OEMs know where to put Intel in their mobile ventures. For Chinese OEMs like Huawei and Lenovo who want to break into the US market, pairing with Intel will give them an advantage in name recognition and marketing.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Very true. I don't see too many people complaining that successive video card driver releases basically optimize the drivers by way of profiling specific games and rolling out optimizations that are absolutely game and hardware specific.

In the end these video driver profiles improve gameplay, improve the performance of the hardware when processing the given software, and the consumer gets more for their money.

If that is what compilers are doing, and in doing it the performance optimizations are being captured and represented in benchmark scores, then that is a good thing.

I more or less agree with this notion too. If a legitimate compiler optimization breaks a benchmark that doesn't necessarily make the optimization wrong, it makes the benchmark bad. If the compiler optimization does nothing but break that benchmark then the optimization is dishonest.

As far as I'm concerned you can't break a non-synthetic benchmark, and generally you can't even break a good synthetic benchmark. nbench is quite bad (some parts worse than others). It's also very very old. If the writers realized this part could be broken like this, which they should have but may not have, they may have also thought no compiler would bother because compilers were a fair bit more primitive back then.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Driver optimizations improve gameplay, software that people actually use. What is the value of benchmark optimizations that benefit the end user? Any at all? For example optimizing for 3DMark is a ridiculous waste of time from a consumer perspective, yet it's been going on for almost as long as the bench has existed.

On the subject of keeping the investor mindset on a happy note, sure I get that. But any company that focuses too much on that aspect is digging their own grave, sooner or later the piper is going to be paid. You can't fool the market forever, eventually it will move on no matter how strong a particular companies influence is.
 
Intel17

Mar 10, 2006
11,715
2,012
126
You have got it all wrong. These press releases and this benchmarketing are purely meant to stem the series of downgrades that the various brokerage firms and analysts have placed on Intel:

http://seekingalpha.com/article/154...n-meets-the-eye?source=email_rt_article_title

Keep in mind the Street always looks for growth as an indicator to push the stock price, more so than profitability. See how much ARM is valued compared to Intel. If there is no growth, the investors who make money on volatility avoid the stock like the plague. I worked for 3 years at a famous investment bank (in the IT field, though) and this is what I learnt: they like volatility, where as middlemen they can make money during both the up cycle and the down cycle.

For example, on Seeking Alpha there is a mindless reciter/jerker of these kinds of things called Sneha Shah:
http://seekingalpha.com/article/153...y-a-huge-margin?source=email_rt_article_title
Every news item of this kind will have a "bull" (shit) article by her.

There is another dude, Ashraf Essa, who does some analysis but in general is bullish long term on Intel, AMD, and QCOM, and heavily short on ARM Holdings PLC. His articles try to justify how the "street" has got it totally wrong and how Intel is a great investment. But he too tends toward over-the-top pronouncements, though after some sanity he tones them down a little.

http://seekingalpha.com/article/145...t-22-nanometers?source=email_rt_article_title

http://seekingalpha.com/article/151...ct-from-fiction?source=email_rt_article_title

http://seekingalpha.com/article/150...ore-than-likely?source=email_rt_article_title

He believes that by selling cheapo Atoms, Intel will be able to match/beat the profitability that it currently enjoys by being the kingpin of the x86 world. If you look at the comments section, it will be something like 50% rabid fans, 30% mild fans, 20% sceptics/heretics.

So it is to these people that Intel is happy to provide the leaks, so as to toot its own horn.

Already, if you look, with the launch of the Droid Razr HD series, Googlerola announced a partnership with Intel for phone SoCs. Yet, besides the Razr i, how many phones has Moto released with Intel? What was the follow-up when Clover Trail was introduced? Similarly, the win for the Clover Trail Atom in the Galaxy Tab 3 is definitely cost based (why else would Samsung give it a less-than-HD screen if it was a high-end win?). So OEMs know where to put Intel in their mobile ventures. For Chinese OEMs like Huawei and Lenovo who want to break into the US market, pairing with Intel will give them an advantage in name recognition and marketing.

Very good post.

I think the view that Intel can meet/exceed profitability with el cheapo Atoms is supported by historical precedent; do you remember how much the old Pentium IIs cost back in the day? $1000 if you wanted anything remotely top end, and even Celerons were ~$300 (now adjust these for inflation and it gets even more ridiculous). As things get cheaper, the idea is that you can drive a whole heck of a lot more volume.

The problem with Intel is that if it were doing the Atom thing more or less unopposed and driving these new form factors, then it would be all but assured that profitability would skyrocket with the Atoms. The point of contention that the Intel bulls/bears fight over is really whether Intel can capture enough market share to drive the volumes needed to grow the total revenue base in the face of a declining traditional PC market.

I (quite obviously) believe that there's enough volume in phones/tablets that Intel can see revenue growth despite the "reset" that is happening in the PC (which, really, let's call the high-end computing space, since even the "traditional PC" is becoming more like a tablet/phone). The bears think that Intel cannot break into this (very lucrative, high volume) space as it is not an established player in the vein of Qualcomm. My guess is that Intel will do very well in tablets, but the jury is still out on phones due to the level of integration needed to really successfully compete. Broadcom will be a very viable contender here given its expertise in hardcore integrating the heck out of everything. Qualcomm, too, is a force to be reckoned with.

If Intel can successfully provide the performance (both GPU and CPU) while maintaining a cost advantage by transitioning quickly to new nodes (which at the low power end it has not), then Intel will do well. If the company fails to execute on CPU, modem, or SoC level integration, then Intel will be in a much worse position.

Time will tell, of course.
 
Last edited: