Intel will launch Larrabee


Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: BFG10K
Originally posted by: Aberforth

Any chip company is capable of making an oversized chip with 1.4bn transistors on it that can do 2 tflops.
No, they really aren't. Anything that complex will outright fail unless you really know what you're doing. nVidia's chip is vastly more complex than anything Intel has ever made, yet they don't even have their own manufacturing facilities.

The size of the die alone doesn't justify calling it "vastly more complex"; that's not the work of a genius. Can they do a 32nm GPU anytime soon? I doubt it. Even the Cell architecture is better than anything nV has done so far.
 

nosfe

Senior member
Aug 8, 2007
424
0
0
Last I checked, the GTX 280 was capable of about 0.9 TFLOPS, so what's all this talk about 2 TFLOP GTX 280s all of a sudden?
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Originally posted by: Aberforth

The size of the die alone doesn't justify calling it "vastly more complex"; that's not the work of a genius.
Yes it is. The fact that they could pull off 1.4 billion transistors in a workable chip of that size is genius. Intel doesn't have anything close to that transistor count, even though they have their own fabs.

If it's so easy, why is Intel resorting to throwing ten-year-old x86 cores onto one die and calling it a "GPU"? Why don't they make a real GPU, if it's so easy for anyone to run along and produce a 1.4 billion transistor chip like you claim? Intel has 45nm now, so why aren't they making a 45nm GT200 to beat nVidia?

It's "easy", right? :roll:

Can they do 32nm GPU anytime soon? I doubt it.
Relevance?

Even Cell architecture is better than anything nv has done so far.
LMAO.
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: nosfe
Last I checked, the GTX 280 was capable of about 0.9 TFLOPS, so what's all this talk about 2 TFLOP GTX 280s all of a sudden?

Oops, you are right - GTX 280 SLI = 2 TFLOPS; the Radeon 4870 can do 1.2 TFLOPS.
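For reference, those peak figures fall out of shader count x clock x FLOPs issued per clock. A rough sanity check in Python, using the commonly quoted specs for each card (treat the per-clock FLOP counts as the marketing convention of the day, not gospel):

# Peak single-precision throughput, back-of-the-envelope.
def peak_tflops(shaders, clock_mhz, flops_per_clock):
    return shaders * clock_mhz * 1e6 * flops_per_clock / 1e12

print(peak_tflops(240, 1296, 3))   # GTX 280: 240 SPs @ 1296 MHz, MAD+MUL counted as 3 -> ~0.93
print(peak_tflops(800, 750, 2))    # HD 4870: 800 SPs @ 750 MHz, MAD counted as 2      -> ~1.2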
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: BFG10K
Originally posted by: Aberforth

The size of the die alone doesn't justify calling it "vastly more complex"; that's not the work of a genius.
Yes it is. The fact that they could pull off 1.4 billion transistors in a workable chip of that size is genius. Intel doesn't have anything close to that transistor count, even though they have their own fabs.

If it's so easy, why is Intel resorting to throwing ten-year-old x86 cores onto one die and calling it a "GPU"? Why don't they make a real GPU, if it's so easy for anyone to run along and produce a 1.4 billion transistor chip like you claim? Intel has 45nm now, so why aren't they making a 45nm GT200 to beat nVidia?

If NV could do a CPU, you might be right.
 

imported_Scoop

Senior member
Dec 10, 2007
773
0
0
Originally posted by: Cookie Monster
Originally posted by: Scoop
After their SSD release, I think Intel can do anything. Sure, their IGP sucks, but so do AMD's and NVidia's.

You've got to be joking. The difference between Intel's IGPs and AMD/nV's is quite literally night and day. They provide a lot of features that Intel lacks (video features, HW acceleration, acceptable performance), not to mention the shoddy drivers for Intel IGPs, where even getting to the game menu without crashing is a miracle.

Well, the point was that all of them suck hard for gaming, and there's no point in saying they're OK for games made in 2003. And are you seriously comparing the G35 to the latest AMD/Nvidia IGPs?
 

nosfe

Senior member
Aug 8, 2007
424
0
0
Originally posted by: BFG10K
Yes it is. The fact that they could pull off 1.4 billion transistors in a workable chip of that size is genius. Intel doesn't have anything close to that transistor count, even though they have their own fabs.

Actually, they do: the Montecito Itanium - 1.7 billion transistors and 12MB of L3 cache per core, and it's a dual core, so double that up. And they plan on releasing Tukwila this year with 2 billion transistors and 30MB of total L3 cache. Can you spell "overkill"?
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Originally posted by: nosfe

Actually, they do: the Montecito Itanium - 1.7 billion transistors and 12MB of L3 cache per core,
But ~1.5 billion of those transistors are the L3 cache, so that doesn't really count. That leaves ~200 million for actual CPU logic, which is paltry.

And they plan on releasing Tukwila this year with 2 billion transistors and 30MB of total L3 cache. Can you spell "overkill"?
And again, most of those transistors will come from the L3 cache.
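That ~1.5 billion figure is easy to sanity-check: a standard SRAM cell spends 6 transistors per bit, so Montecito's 24MB of L3 alone accounts for well over a billion transistors. A quick sketch (ignoring tags, ECC and other array overhead, so the real number is somewhat higher):

# Transistors spent on Montecito's 2 x 12MB L3, assuming standard 6T SRAM cells.
l3_bytes = 24 * 1024 * 1024
l3_bits = l3_bytes * 8
print(l3_bits * 6 / 1e9)   # ~1.2 billion, before L1/L2, tags and ECC are counted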
 

minmaster

Platinum Member
Oct 22, 2006
2,041
3
71
so should one hold out on buying a video card and wait for larrabee?

or hold out for a whole new PC with nehalem + larrabee?
 

nosfe

Senior member
Aug 8, 2007
424
0
0
Transistor count is still transistor count; why wouldn't it count? Do they use special transistors to make cache? No, it's the same thing. It's only that if some of them go bad they can rebadge it as a lower-cache chip, but that's the same thing nVidia is doing with the GTX 280/260 - it's just that nVidia is losing more good transistors while doing it.

Either way it's still overkill, but at least those Itaniums cost so much that Intel doesn't mind the big die size. The most important factor would be the die size, and guess what? The die is 27.72 mm × 21.5 mm, or 596 mm².
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Tukwila is an Itanium made by Intel with over 2 billion transistors on die.

edit: nm didn't read the whole thread.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: minmaster
so should one hold out on buying a video card and wait for larrabee?

yes .. wait about ten years for intel to get a clue about graphics


In 2010 Intel might be able to move to a new production process, cramming more cores onto the same die. They might add more optimizations and whatnot. It might be a long time, but it's also time that Intel can use to pour hundreds of millions into research.

intel might be able to achieve world peace



i wouldn't count on your fanta ... scenario actually happening




 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: emilyek
Originally posted by: apoppin

Maybe in 5-10 years they will have something competitive with today's graphics - by then Nvidia will be light years ahead of them

my opinion

It's possible. There was a recent article that estimated Larrabee would only be on par with current hardware.

But, I dunno; if Intel is serious and not just screwing around, which it appears they are, it seems like they would have a pretty good shot at creating a Core2 sort of dominance of the GPU market.

Competition is good, whatever happens.

Why does it "seem like that"?

They've never had a remotely competitive GPU. Their experience in the current GPU market is light years behind ATi and NVIDIA. They are even behind S3 and Matrox AFAIK.

If NVIDIA, or ATi, went into the CPU market would you expect them to "dominate", if "they weren't screwing around"?

My thought is Intel will have some interesting low and mid range graphics solutions.

 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: minmaster
so should one hold out on buying a video card and wait for larrabee?

or hold out for a whole new PC with nehalem + larrabee?

As Larrabee has an ETA of Q4 next year, that's some wait!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
i hate to say this . . .
--actually i love to

Larrabeast is intel's "new p4"
. . . "10Ghz or [net]Bust!"


Did you forget already? .. intel even has the same P4 PR team [and engineers, probably] working on it; sure it will help their IG and eventually they may turn up something revolutionary - intel can afford to spend a few tens of millions of dollars on PR or research; IF it somehow turns out, well then "bonus"

What do we really know about it? Other than a major marketing spiel to counter Nvidia's push with CUDA into the pro market?
"Marchitecture", *defined* [imo]

it is already getting some of you asking "should we wait for it"
- hell no!!

 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
Originally posted by: nosfe

Transistor count is still transistor count; why wouldn't it count?
So if I took the Itanium's ~200 million computation transistors and replaced them with more cache transistors (i.e. I made a chip with nothing but L3 cache), would that be equivalent to an Itanium from a design and manufacturing standpoint?

Of course not. Just because they have the same number of transistors does not make them equal.

Cache transistors are "dumb" compared to something that performs computation. All they do is store and load data.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
My goodness, we seem to have gotten off topic. First, comparing GPU logic to CPU logic is a waste of time. How many shaders does it take to equal one old P chip? Comparing what a GPU can do to what a CPU can do is child's play. CUDA sounds good, but compared to x86 compute logic, the GPU is going to get stepped on hard. So it will take ATI/NV hundreds of compute shaders to equal just 32 Intel CPUs. Each shader is a compute logic unit.
But I won't argue the point. Same thing with the Hydra chip: it will work with simple GPU instructions, but CPU, no way - the instructions are too complicated. So as far as who will lead in GPGPU, Intel's Larrabee, easily. For graphics, that's where the questions lie. But a lot of Larrabee's success will hang on Intel's own game. If it's great, and neither ATI nor NV is able to keep up with RT on Larrabee, it could end up very interesting. Plus, ATI and Intel will be able to do DX10.1 games, whereas NV can't.

This game is just beginning. Let's not count Intel out till the fat lady sings, OK? Or should we go back and try to find out who said what about the ATI 4000 series release? You know the thread I am talking about - the one where it all went up in smoke. LOL!
 

nosfe

Senior member
Aug 8, 2007
424
0
0
And my point is that the die area is the biggest problem with these chips, because the wafers aren't that big and if a couple of transistors fail in the manufacturing process the whole thing is a bust. Intel's problem isn't as big as nVidia's, because that Itanium cost ~$3.5k when first introduced, so it's sure to make a ton of profit.
I understand that you like monolithic graphics cores, but the problem is that there is such a thing as "too big" in the manufacturing world. Bigger dies mean a higher chance that one of them goes bad in manufacturing, and also a lot more wasted die space, wafers being round while chips are rectangular (why they haven't yet made any advances toward square wafers is beyond me - I mean, they've used circles since, like, forever).

Getting back on topic, Nemesis: if the TFLOP estimation is correct for Larrabee, I'll be extremely disappointed in it. I mean, that's two years from now; the Radeon 4670 at $80 beats every card from two years ago, so I won't be surprised if in two years we see 4870-like performance (1 TFLOP) for under $100. Also, TFLOP figures assume the best-case scenario, where software is specifically written to take full advantage of the card; that's why the 4870 doesn't beat the GTX 280 even though it has ~0.3 more TFLOPS. Sure, Intel has the money to make that happen, but that would most likely be for the expensive programs (Photoshop, Maya, AutoCAD, etc.) and not for games specifically, because let's face it, Intel sucks at gaming, and there are far too many game-making companies out there. The real problem Intel will face is the huge library of games they'll have to optimize their drivers for; I feel sorry for their Larrabee driver team (not the IGP driver team).
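nosfe's yield argument is the standard one: good dies per wafer fall off sharply with die area, because you get fewer candidate dies and each one is more likely to catch a defect. A rough sketch using a simple Poisson-style yield model; the defect density here is an assumed illustrative number, not a published fab figure, and edge loss is ignored:

import math

def good_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, defects_per_cm2=0.4):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2                        # ignores edge loss
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return gross_dies * yield_fraction

print(good_dies_per_wafer(576))   # ~576 mm^2 (GT200 / Montecito class): roughly a dozen good dies
print(good_dies_per_wafer(256))   # a ~256 mm^2 die: closer to a hundred good dies per wafer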
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
I would rather think of Project Offset as a "proof of concept" type of game. It doesn't matter if the gameplay sucks; what matters is whether other studios will license the engine to create games which work well on Larrabee.

Think about it in "UT3" terms: if the licensing catches on, Larrabee will be popular and sell in mass quantities.

By the looks of it (the movies on YouTube), Offset is going to "Oblivion" the 3D gaming world with its graphics.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I think you're missing something in all this. Because Intel does everything in software other than shaders, their transistor budget is used more efficiently - free of hard-logic transistors that can be costly as well as sit unused. Software rendering has huge advantages.

Project Offset is a proof of concept. It offers much, as we have seen. But we haven't seen what Intel has added to Project Offset; the last gameplay demo we have is from 2006, before Intel bought them. I am sure RT will be added.

This game is more than a proof of concept. If it's a great game it will force direction. Intel doesn't need to license that game engine. What Intel needs is what ATI needs: games produced using DX10.1, taking advantage of global illumination. That is what Intel and ATI both need. But NV is trying to roadblock, just as Intel is trying to cut CUDA off before its momentum grows.

 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: BFG10K
Yes it is. The fact that they could pull off 1.4 billion transistors in a workable chip of that size is genius. Intel doesn't have anything close to that transistor count, even though they have their own fabs.
?

Is the "transistor count" the new GHz? Or the new IPC for that matter?
Since when the transistor count 'is' or 'the' decisive factor to the 'intelligence' ("genius"???) of the design? Or effectiveness? Or efficiency? And last but definitely not least economically (think yields and price)?

And what about factors like clock frequency and IPC? If you can come up with a design that can have a higher IPC (or for the sake of this discussion 'Work per tick') WHILE having the ability to scale higher in clock frequency at the cost of transistor count, that can offer at the net sum a more capable product? Is that said product now inferior due to it's lower transistor count? ...
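To put made-up numbers on that trade-off (purely illustrative, not real chips): throughput is roughly IPC x clock, so a leaner design that clocks higher and sustains more work per tick can come out ahead despite a smaller transistor budget:

# Toy comparison with invented figures: sustained throughput ~ IPC * clock.
def throughput(ipc, clock_ghz):
    return ipc * clock_ghz   # billions of instructions per second, roughly

print(throughput(1.0, 1.3))   # "fat" design, big transistor budget -> 1.3
print(throughput(1.2, 1.6))   # leaner design, fewer transistors    -> ~1.9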

If it's so easy, why is Intel resorting to throwing ten-year-old x86 cores onto one die and calling it a "GPU"? Why don't they make a real GPU, if it's so easy for anyone to run along and produce a 1.4 billion transistor chip like you claim? Intel has 45nm now, so why aren't they making a 45nm GT200 to beat nVidia?

Maybe they have a different approach to RT rendering in mind?

You do realize that if you know in advance that you are designing a chip that is supposed to execute highly parallel code, you are probably going to end up with something that is made up of a large number of copies of the same thing.

Complexity-wise, if you compare a chip made up of a large number of copies of the same thing (especially if that thing is relatively simple) against a chip made up of a few highly complex units, the second chip "wins" every time.

I just don't see how "a workable chip of that size is genius"; sheer quantity is not by itself such a decisive factor, but maybe that's just me.

A manufacturing accomplishment, maybe, but not necessarily a design accomplishment. And if that is the case, some of that "genius" belongs to TSMC.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: BFG10K
Originally posted by: nosfe

Transistor count is still transistor count; why wouldn't it count?
So if I took the Itanium's ~200 million computation transistors and replaced them with more cache transistors (i.e. I made a chip with nothing but L3 cache), would that be equivalent to an Itanium from a design and manufacturing standpoint?

Of course not. Just because they have the same number of transistors does not make them equal.

Cache transistors are "dumb" compared to something that performs computation. All they do is store and load data.

That argument is just utter nonsense!

A CPU is in many ways just like a program procedure: instructions come in, results go out. It doesn't matter how many lines of code are dedicated to simple tasks and how many are dedicated to complex tasks; what matters is that the procedure gets its job done as fast as possible. If that means it (the procedure) can get the job done faster by using more, simpler lines of code instead of fewer, more complex lines (or vice versa), then that is the better procedure! The number of lines of code is the less decisive metric here!

If said CPU can get the job done faster by dedicating some of its transistor count to some "dumb" work (cache), then not doing so is plain old the wrong design decision. If you took a modern CPU, removed its cache completely, and replaced it with "smart" transistors (additional ALUs or what have you), you would probably see your CPU performance go DOWN!
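The usual way to quantify that is average memory access time, AMAT = hit_time + miss_rate x miss_penalty. With illustrative (made-up) latencies, stripping the cache out makes every access pay the full trip to memory, which no amount of extra ALUs can hide:

# Toy AMAT sketch with invented cycle counts.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

print(amat(4, 0.03, 200))   # with a decent cache: ~10 cycles per memory access
print(amat(0, 1.00, 200))   # no cache at all: 200 cycles per access, every time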

(this btw reminds me of another example which might be familiar to some of you - a work environment that has too many managers and not enough workers!)

It's the net result that counts, not an arbitrary metric about the usage of resources.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: MarcVenice
Look, the fastest single GPU is a GTX 280, capable of 0.8 TFLOPS. Double that, and you have Intel's Larrabee video card, supposedly of course. Do we see the GTX 280's performance being doubled any time soon? A single GPU doing 2 TFLOPS? I don't; it'll take at least a year, most likely the end of 2009. I'm discounting AMD here, because TFLOPS don't mean everything. While capable of more TFLOPS, most of the time the GTX 280's GPU > the HD 4870's GPU.

In 2010 Intel might be able to move to a new production process, cramming more cores onto the same die. They might add more optimizations and whatnot. It might be a long time, but it's also time that Intel can use to pour hundreds of millions into research.

the GTX280 is 0.9 TFLOPS, the 4870 is 1.2 TFLOPS, the 4870x2 is 2.4 TFLOPS...

But you say... the 4870x2 is dual GPU...

Well, Larrabee is 10 SEPARATE CPU cores... it's not dual GPU, it's a deca-CPU emulating a GPU.

As someone said before, the great thing about Larrabee is that it is just raw power, no specialized hardware. New features? Install the latest driver and you've got the latest DirectX, the latest video decode, etc., for every Larrabee ever made... (unless they start making SSE2 Larrabees and newer drivers require it... actually, that would not surprise me... so I would hold off on that whole "forever and ever" theory)
 

Aberforth

Golden Member
Oct 12, 2006
1,707
1
0
Originally posted by: taltamir
Originally posted by: MarcVenice
Look, the fastest single GPU is a GTX 280, capable of 0.8 TFLOPS. Double that, and you have Intel's Larrabee video card, supposedly of course. Do we see the GTX 280's performance being doubled any time soon? A single GPU doing 2 TFLOPS? I don't; it'll take at least a year, most likely the end of 2009. I'm discounting AMD here, because TFLOPS don't mean everything. While capable of more TFLOPS, most of the time the GTX 280's GPU > the HD 4870's GPU.

In 2010 Intel might be able to move to a new production process, cramming more cores onto the same die. They might add more optimizations and whatnot. It might be a long time, but it's also time that Intel can use to pour hundreds of millions into research.

Well, Larrabee is 10 SEPARATE CPU cores... it's not dual GPU, it's a deca-CPU emulating a GPU.

Right... nV has 256 (or whatever the number is these days) shader processors; as far as I know they are quite separate from each other.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Aberforth
Originally posted by: taltamir
Originally posted by: MarcVenice
Look, the fastest single GPU is a GTX 280, capable of 0.8 TFLOPS. Double that, and you have Intel's Larrabee video card, supposedly of course. Do we see the GTX 280's performance being doubled any time soon? A single GPU doing 2 TFLOPS? I don't; it'll take at least a year, most likely the end of 2009. I'm discounting AMD here, because TFLOPS don't mean everything. While capable of more TFLOPS, most of the time the GTX 280's GPU > the HD 4870's GPU.

In 2010 Intel might be able to move to a new production process, cramming more cores onto the same die. They might add more optimizations and whatnot. It might be a long time, but it's also time that Intel can use to pour hundreds of millions into research.

Well, Larrabee is 10 SEPARATE CPU cores... it's not dual GPU, it's a deca-CPU emulating a GPU.

Right... nV has 256 (or whatever the number is these days) shader processors; as far as I know they are quite separate from each other.

Does the nV 256 have 10 separate x86 decoders? And is each shader group a full x86-compatible core?
 