Intel: GPUs are "ONLY" 2.5~14 times faster than CPUs

lopri

Elite Member
Jul 27, 2002
13,211
597
126
Intel engineers probably should have kept this research in house. Or at least should have consulted a PR team prior to publishing it.

http://www.pcworld.com/article/1997...pu_outperforms_32ghz_core_i7.html?tk=rss_news

Intel researchers have published the results of a performance comparison between their latest quad-core Core i7 processor and a two-year-old Nvidia graphics card, and found that the Intel processor can't match the graphics chip's parallel processing performance.

On average, the Nvidia GeForce GTX 280 -- released in June 2008 -- was 2.5 times faster than the Intel 3.2GHz Core i7 960 processor, and more than 14 times faster under certain circumstances, the Intel researchers reported in the paper, called "Debunking the 100x GPU vs. CPU myth: An evaluation of throughput computing on CPU and GPU."

NVIDIA responds:

http://blogs.nvidia.com/ntersect/20...-to-14-times-faster-than-cpus-says-intel.html

It’s a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is *only* up to 14 times faster than theirs. In fact in all the 26 years I’ve been in this industry, I can’t recall another time I’ve seen a company promote competitive benchmarks that are an order of magnitude slower.

The paper can be found here:

Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU

Recent advances in computing have led to an explosion in the amount of data being generated. Processing the ever-growing data in a timely manner has made throughput computing an important aspect for emerging applications. Our analysis of a set of important throughput computing kernels shows that there is an ample amount of parallelism in these kernels which makes them suitable for today's multi-core CPUs and GPUs. In the past few years there have been many studies claiming GPUs deliver substantial speedups (between 10X and 1000X) over multi-core CPUs on these kernels. To understand where such large performance difference comes from, we perform a rigorous performance analysis and find that after applying optimizations appropriate for both CPUs and GPUs the performance gap between an Nvidia GTX280 processor and the Intel Core i7-960 processor narrows to only 2.5x on average. In this paper, we discuss optimization techniques for both CPU and GPU, analyze what architecture features contributed to performance differences between the two architectures, and recommend a set of architectural features which provide significant improvement in architectural efficiency for throughput kernels.
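For context, the paper's 2.5x average is roughly what the raw launch specs of the two chips predict once both sides are well optimized. A back-of-the-envelope sketch (the figures are commonly cited peak specs and are approximate; the dual-issue MAD+MUL number for the GTX 280 in particular is rarely reachable in practice):

```python
# Back-of-the-envelope peak-spec comparison of the two chips in the paper.
# All numbers are approximate launch specs, not measurements.

chips = {
    # 3.2 GHz x 4 cores x 8 single-precision FLOPs/cycle (4-wide SSE, mul + add);
    # triple-channel DDR3-1066 memory
    "Core i7-960": {"gflops_sp": 102.4, "bandwidth_gbs": 25.6},
    # 240 shader cores at 1.296 GHz x 3 FLOPs/cycle (MAD + MUL); GDDR3 memory
    "GTX 280":     {"gflops_sp": 933.0, "bandwidth_gbs": 141.7},
}

compute_ratio = chips["GTX 280"]["gflops_sp"] / chips["Core i7-960"]["gflops_sp"]
bandwidth_ratio = chips["GTX 280"]["bandwidth_gbs"] / chips["Core i7-960"]["bandwidth_gbs"]

print(f"peak compute ratio:   {compute_ratio:.1f}x")    # ~9.1x
print(f"peak bandwidth ratio: {bandwidth_ratio:.1f}x")  # ~5.5x
```

So even on raw peak specs the gap is nowhere near 100x; the paper's 2.5x average and ~14x best case both sit inside this envelope, depending on whether a kernel is limited by bandwidth, by compute, or by neither peak being reachable.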
 

jaqie

Platinum Member
Apr 6, 2008
2,472
1
0
*sigh* What everyone, including the OP, seems to be missing is that the CPU and GPU are very different kinds of processors. You can't run an OS on a GPU because GPUs are (still, though less so than they used to be) very specialized processors.

Just like the wimpy little video processors in set-top boxes, they can only do one thing (or a few things), but they do them very well, precisely because they are specialized rather than generalized.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
I don't think anyone misses that point. ^_^ (You mean, Intel is missing the point?) It's Intel that says GPUs are 2.5 times faster than CPUs in certain workloads.
 

jaqie

Platinum Member
Apr 6, 2008
2,472
1
0
Then why not include "but in some tasks it is infinitely slower," to be technically accurate?
 

Bill Brasky

Diamond Member
May 18, 2006
4,345
1
0
Did anyone else get warm fuzzies all over?

edit: I do like the fact that Intel is researching key architectural differences and putting them in a compare-and-contrast paper. Sounds like an interesting read (though mostly over my head, of course).
 

tincart

Senior member
Apr 15, 2010
630
1
0
Specialized processors are good at some specialized tasks. How shocking. Somebody slap me.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
Ok cool. Why don't you begin writing an OS and Office Suite for GPUs? You'll make lotsa money.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That was a good article.

I wonder how Intel and AMD will evolve Knights Corner and Radeon, respectively.

From what I gather (and I am still *trying* to learn this), the GPGPU market also seems to be expanding rapidly at the mobile/personal-device level.
 

taserbro

Senior member
Jun 3, 2010
216
0
76
The paper was very interesting. I note that the testers themselves felt the need to point out that they had to hand-manage some aspects of the GTX 280's memory for lack of any alternative, and that one reason the reported performance is lower than in previous studies is the lack of on-die cache, along with other limitations of the chip's design when faced with the tasks they chose to benchmark. They also admit that there is a 2-3x gap between the GTX 280's currently achievable performance and its theoretical peak compute and bandwidth numbers. In a couple of instances they point out architectural changes, such as a large embedded cache, that could theoretically improve its performance by factors of 3 to 5, and they note that while both platforms would benefit from more memory bandwidth, the GPU is especially bottlenecked by it, even with optimizations like tuned memory-access patterns and bandwidth-aware sorting algorithms. Even when profiling compute-heavy tasks, they were unable to exploit more than 66% of the GTX 280's peak compute performance, with no way around the architecture's inherent limitations. And despite all these areas where the GPU was penalized for not having been designed with these tasks in mind, it still out-computed the CPU by a factor of 1.9 at worst and outperformed it by 2.5x overall.

In short, it sounds like general computing on the GPU is still at a stage where it is so far from optimized that it's pointless to attempt a fair comparison; no matter how hard and sincerely one tries, it just won't be fair. However honest the effort to fully optimize the software and measure the peak potential of both platforms, it pales next to the many generations of hardware and software refinement behind x86 and the accumulated experience in exploiting its strengths and compensating for its weaknesses. That side simply has too much of a head start over GPUs, which weren't even designed for these operations in the first place. I mean, the CUDA toolkit they used was first released in what, 2007? It has had about three years to mature, in an environment where adoption is only now picking up.

A hardware platform is meant to evolve with its software (and vice versa), and for the tasks it is meant for. Redoing these tests on the latest, more GPGPU-oriented Fermi architecture, with cross-company-corroborated methods, would have been fairer in acknowledging the effort NVIDIA has put into general computing, but I wager pitting Fermi against even Intel's current flagship would have produced a more embarrassing ratio for the latter. Considering the comparatively small amount of attention that has gone into optimizing code and algorithms for parallel computing so far, I would say the real conclusion of the paper is that GPGPU holds enormous potential that is still mostly locked away by its own immaturity.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
That is an excellent assessment, taserbro. The paper was worth reading even for a layperson like me, despite the rather large portion of data and jargon I have no clue about. It also gives an idea of what kinds of architectural changes NV may have tried to achieve with Fermi, or (ironically) should and will try to achieve in future architectures.

I am curious how this battle will turn out. Apparently all the major players are aware of the importance of "throughput computing." On the opposite side we have a battle between ARM and x86 looming large. And there is an extremely important case about U.S. patent law awaiting a verdict from the Supreme Court (Bilski v. Kappos).

Interesting times, indeed.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,980
126
If they had run games the GPU would be dozens or even hundreds of times faster.
 

0roo0roo

No Lifer
Sep 21, 2002
64,862
84
91
I didn't realize there was a pissing match worth caring about going on.
I don't run Windows on a GPU, after all.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
On the opposite side we have a battle between ARM and x86 looming large.

Yep, that is interesting (and very confusing to me) at both the tablet/smartbook and server level.

Let's say, hypothetically, ARM eventually makes inroads into both those categories and a little beyond. Where would that leave the majority of Intel's many/multi-core market share? High-end laptops, desktops, and HPC?

As far as the PC goes, what happens if the GPU ends up doing a better job at encoding? Where would that leave the market for multi- and many-core x86 designs for individual consumers?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
One of the interesting points of the tests was the discussion of how the lack of cache and limited memory bandwidth were serious limiting factors for the GTX 280, something the Fermi architecture improved on considerably (I'm also curious why Intel didn't use at least the GTX 285 for these tests, but that is a side topic).

Let's say, hypothetically, ARM eventually makes inroads into both those categories and a little beyond.

ARM looks to be in the superior position given the current landscape of the tablet market. Right now this seems to be mainly a function of its more open nature and the fact that you aren't locked into one platform, but having a big lead at the start of a new segment will certainly help ARM hold onto it, for the near term at the very least.

Where would that leave the majority of Intel's many/multi-core market share? High-end laptops, desktops, and HPC?

I think that may end up being too generous to Intel if they don't make more aggressive moves to stop the bleeding. The latest market research indicates that by 2015 there will be 15 billion devices connected to the internet. On the high side, the estimate is that 2 billion of those will be 'PCs' (including laptops). That leaves ~13 billion devices for which, as of right now, ARM is the most likely candidate (and the reason Intel is pushing products like Moorestown). High-end laptops and desktop PCs together won't total 2 billion devices, but they will be a significant portion of that. What gets interesting is when the Chrome OS desktop variants start popping up, alongside Android's increasing functionality. With the mobile smartphone/superphone/tablet market exploding far beyond the PC's reach within the next few years, how much will a traditional Intel-based PC offer typical consumers over an ARM platform backed by a CUDA-capable processor? For the HPC space Intel needs Larrabee; they can't hope to compete there with traditional processors, and they know it. The far more dangerous situation for them is the pressure coming from the low end. How many typical home users need anything beyond what the new wave of tablets offers? What happens when you double those tablets' CPU compute power and throw in a GPU, with apps tailor-made from the start to run on them? (Since this segment is just emerging, Intel doesn't have the luxury of an existing install base, legacy software, or robust OS support.)

As far as the PC goes, what happens if the GPU ends up doing a better job at encoding?

The GPU already does. The interesting thing to watch at this point is how fast Tegra and the like scale up into CUDA territory, and how functional they are in that area. For now the high power draw of consumer GPUs is giving Intel some breathing room, but not much: Tegra 3 is already in the works, and each generation gets closer to the kind of functionality Intel can't let take a grip on the portable segment.

Where would that leave the market for multi- and many-core x86 designs for individual consumers?

Pretty much, us. Don't get me wrong, I see us as a viable market, but we are certainly much smaller than what Intel currently has. When the typical consumer has no need for a desktop PC anymore, it will be the gamers, power users, and tech enthusiasts keeping Intel's current processor lines in business. The next few years will be interesting to watch. A whole bunch of people on these forums have seen nV's lack of an x86 license as the certainty that would kill them off, while x86's share of overall computing usage has driven off a cliff. By 2015 it looks like x86 may control under 15% of the computing market if current trends continue. Fusion and comparable devices are still huge and power hungry compared to solutions like Tegra (or Hummingbird/Snapdragon), and their performance advantage isn't large enough for Intel or AMD to feel very good about their position at the moment. It's going to be interesting to watch it all unfold. With Intel's massive resources they certainly can't be ruled out, but so far they haven't impressed anyone in the emerging mobile space (even if PC enthusiasts are blown away by their 2-watt chip).
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Pretty much, us. Don't get me wrong, I see us as a viable market, but we are certainly much smaller than what Intel currently has. When the typical consumer has no need for a desktop PC anymore, it will be the gamers, power users, and tech enthusiasts keeping Intel's current processor lines in business.

I see your point about gamers, but at the moment it seems the Core i5 750 is about as good as it gets.

Why would anyone buy a hex-core if their GPU were better at the job of encoding?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I think that may end up being too generous to Intel if they don't make more aggressive moves to stop the bleeding. The latest market research indicates that by 2015 there will be 15 billion devices connected to the internet. On the high side, the estimate is that 2 billion of those will be 'PCs' (including laptops). That leaves ~13 billion devices for which, as of right now, ARM is the most likely candidate (and the reason Intel is pushing products like Moorestown).

Speaking of Moorestown, I have read that a Core 2 Duo has about 10% of its silicon allocated to legacy x86 support. Does anyone know roughly how much of Atom's or Moorestown's die space is allocated to x86 legacy support? Could it be an even larger proportion?

What gets interesting is when the Chrome OS desktop variants start popping up, alongside Android's increasing functionality. With the mobile smartphone/superphone/tablet market exploding far beyond the PC's reach within the next few years, how much will a traditional Intel-based PC offer typical consumers over an ARM platform backed by a CUDA-capable processor?

Android has a good market established for it. What kind of obstacles do you see for Chrome OS adoption?

EDIT: Here is an article about a Google Chrome OS netbook. Some specs and a strategy for subsidizing the device are mentioned.

This article from yesterday mentions Dell testing Chrome OS in some of their products.

With the mobile smartphone/superphone/tablet market exploding far beyond the PC's reach within the next few years, how much will a traditional Intel-based PC offer typical consumers over an ARM platform backed by a CUDA-capable processor?

Good point, I just hope AMD can make strong inroads into GPGPU.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I guess the short version is: there's no such thing as general computing.
Then again, that's nothing new; Amdahl told us as much ages ago.
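Amdahl's law makes the point concrete: whatever fraction of a workload stays serial (or CPU-bound) caps the overall speedup, no matter how fast the accelerated part runs. A minimal sketch (the 90%/14x figures below are illustrative choices, not measurements from the paper):

```python
def amdahl_speedup(parallel_fraction: float, parallel_speedup: float) -> float:
    """Overall speedup when only part of a workload is accelerated (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_speedup)

# A 10% serial portion caps the gain at 10x even with an infinitely fast GPU:
print(round(amdahl_speedup(0.90, 1e9), 2))   # 10.0
# With a best-case 14x kernel speedup, 90%-parallel code nets only ~6.1x overall:
print(round(amdahl_speedup(0.90, 14.0), 2))  # 6.09
```

Which is why a "14x faster kernel" headline rarely translates into a 14x faster application.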
 

Scali

Banned
Dec 3, 2004
2,495
0
0
i didn't realize there was a pissing match worth caring about going on.
i don't run windows on gpu after all.

Companies like Dell and IBM have started shipping servers equipped with NVIDIA Tesla cards. I guess that's what this is all about.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
...
The GPU already does. The interesting thing to watch at this point is how fast Tegra and the like scale up into CUDA territory, and how functional they are in that area. For now the high power draw of consumer GPUs is giving Intel some breathing room, but not much: Tegra 3 is already in the works, and each generation gets closer to the kind of functionality Intel can't let take a grip on the portable segment.

Pretty much, us. Don't get me wrong, I see us as a viable market, but we are certainly much smaller than what Intel currently has. When the typical consumer has no need for a desktop PC anymore, it will be the gamers, power users, and tech enthusiasts keeping Intel's current processor lines in business. The next few years will be interesting to watch. A whole bunch of people on these forums have seen nV's lack of an x86 license as the certainty that would kill them off, while x86's share of overall computing usage has driven off a cliff. By 2015 it looks like x86 may control under 15% of the computing market if current trends continue. Fusion and comparable devices are still huge and power hungry compared to solutions like Tegra (or Hummingbird/Snapdragon), and their performance advantage isn't large enough for Intel or AMD to feel very good about their position at the moment. It's going to be interesting to watch it all unfold. With Intel's massive resources they certainly can't be ruled out, but so far they haven't impressed anyone in the emerging mobile space (even if PC enthusiasts are blown away by their 2-watt chip).
Tegra adoption is slower than anticipated, though.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Why would anyone buy a hex-core if their GPU were better at the job of encoding?

Console devs will force native PC devs to adapt or die off. We aren't going to see a huge jump in per-core CPU performance anytime in the near future. Sony and MS will both use a larger number of cores in their next systems, and that will force PC devs to adapt, or else limit our games to console ports that are tailor-made from the start to run on many cores.

Does anyone know roughly how much of Atom's or Moorestown's die space is allocated to x86 legacy support?

I honestly have no idea, but die space itself isn't as important as how much power it draws, which I also have no clue about, to be honest. It could consume 99% of the die space; if powering that segment of the chip took only 1 mW, it really wouldn't matter.

What kind of obstacles do you see for Chrome OS adoption?

I think Chrome needs a new platform in which to succeed, or IT departments to get behind it. Chrome OS is all about cloud computing, something I have absolutely *no* interest in seeing become popular. That said, I know many IT departments are already migrating in that direction, and with the significantly reduced costs such a setup makes possible, it may be something they end up getting behind. I don't see Chrome becoming a viable desktop replacement anytime soon. As a platform for low-power tablets, as a counter to Android, it could be something the market is interested in. Chrome is no threat to desktop Windows, but as another avenue for portable devices or other consumer electronics (TVs, DVRs, Blu-ray players, etc.) it could be an interesting option.

Good point, I just hope AMD can make strong inroads into GPGPU.

As of right now, AMD looks to be in an absolutely terrible long-term position. They have nothing to compete in the low-end segment, nothing to compete in the high-end HPC market (against Tesla/Larrabee), and no compelling parts in the pipeline that could change their outlook in either segment. Buying ATI and devoting so much effort to parts like Fusion, instead of pushing hard into GPGPU and the portable space, may end up costing them dearly. That isn't to say nV won't have any competition; PowerVR has actually positioned itself rather well. I would think AMD would be re-examining its long-term strategy a bit more closely at this point. The problem is they don't have Intel's resources, so they likely need to pick a particular angle and push as hard as they can in that direction. Right now they are pushing toward Fusion, which looks like the market segment in the most danger of being killed off altogether (how many ultra-low-cost PCs are going to stand up to tablets three years from now?).

Tegra adoption is slower than anticipated, though.

Tegra 2 is going to power the first major competitors to the iPad, the Dell Streak 7 and Streak 10. The CEO of Motorola has also stated that they will ship a superphone running Tegra 2 at a 2 GHz CPU speed by the end of this year. Tegra seems to have positioned itself as the premium alternative in the portable market, which is where I suspect nV wants to be. Tegra 2 isn't enough to push CUDA down into portable markets, but it builds the foundation for nV's long-range goals. Right now TI and nV seem to lead the pack by a decent amount in performance in this segment; it will be interesting to see whether Intel makes a more serious effort to position itself better.
 