News Intel GPUs - Battlemage officially announced, evidently not cancelled


Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
I'm surprised Celestial is still on TSMC; that doesn't bode well for Intel's process nodes.
I don't think there's necessarily anything about this that points to problems at Intel. For now, I would tend to think that sticking with TSMC for large ASIC development is just easier (Intel is already using their tool chain). Obviously, given Raja's departure, there are significant problems at Intel Graphics other than node selection.
 
Reactions: coercitiv

coercitiv

Diamond Member
Jan 24, 2014
6,393
12,826
136
I'm surprised Celestial is still on TSMC; that doesn't bode well for Intel's process nodes.
I have no idea what to think of Intel process nodes anymore; I just consider them unpredictable.

Both a long way off apparently:

So that sounds like over a year until Battlemage and around 3 until Celestial.
2H2024? That really doesn't bode well.

It's not like their competitors will be sitting on their laurels.
Last time I saw a leaked GPU timeline from Intel there was also an Alchemist+ slated for Q3/Q4 2023. I hope the "+" means more than using screws instead of adhesive in GPU assembly. This may be enough to keep the ball rolling for them, in terms of being able to offer a competitive, value-oriented product until Q2/Q3 2024 when Battlemage is supposed to arrive.
 
Reactions: ZGR

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
I don't think there's necessarily anything about this that points to problems at Intel. For now, I would tend to think that sticking with TSMC for large ASIC development is just easier (Intel is already using their tool chain). Obviously, given Raja's departure, there are significant problems at Intel Graphics other than node selection.
If they aren't using the product to fill their own fabs, then what's the point of an uncompetitive GPU? I just don't see why this makes sense for Intel.
 

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
2H2024? That really doesn't bode well.
The long delay of Alchemist also suggests that they needed to reboot the design of Battlemage, since they are normally already working on the new generation before the previous gen reaches production.
 
Reactions: Lodix

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
If they aren't using the product to fill their own fabs, then what's the point of an uncompetitive GPU? I just don't see why this makes sense for Intel.
It's not just fab filler; they presumably want to enter the same professional markets that Nvidia is dominating.
 

KompuKare

Golden Member
Jul 28, 2009
1,072
1,111
136
I have no idea what to think of Intel process nodes anymore; I just consider them unpredictable.



Last time I saw a leaked GPU timeline from Intel there was also an Alchemist+ slated for Q3/Q4 2023. I hope the "+" means more than using screws instead of adhesive in GPU assembly. This may be enough to keep the ball rolling for them, in terms of being able to offer a competitive, value-oriented product until Q2/Q3 2024 when Battlemage is supposed to arrive.
Well, there was a bit of speculation when all the driver issues became apparent: maybe the hardware group had promised the driver people certain features, which they then spent ages writing for, yet in the end those features did not work.

A "+" revision could then have any of those little problems fixed.

But then it does sound a bit like the sort of thing we often hear about AMD: "Navi31, Vega, etc. had an issue and it will be fixed soon."
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
If they aren't using the product to fill their own fabs, then what's the point of an uncompetitive GPU? I just don't see why this makes sense for Intel.
GPUs don't make sense for Intel in the short term. They never did. Despite the rosy projections needed to sell the idea to shareholders, it was always going to be a slog, IMHO. That said, yeah, in the long term these need to be Intel-fabbed GPUs, or half of the business case is gone. Non-Intel-fabbed gaming GPUs can still produce good profits, and that's not nothing.

Having HPC/ML/AI server GPUs produced in non-Intel fabs would be a terrible indicator, IMHO, and not a very confidence-inspiring way to deliver well-integrated products for those lines to supercomputer users (as well as to the sort of mini-super clusters). The whole benefit would be getting those sorts of systems - which will also consist of mixed GPU/CPU silicon, the direction AMD is heading - out the door faster thanks to internal efficiencies. That provides advantages that NV just doesn't have right now.

Intel got too big and too fat to keep up with increasingly successful and nimbler companies like AMD and Nvidia. It is hard to see how they get out of their situation. It's almost like they have to crater (as AMD did) to have a hope of building back - but if they do crater, they will have to go fabless - which is like throwing the baby out with the bath water.
 
Last edited:

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,818
21,565
146
Intel started this when they could still throw money around like they were Floyd Mayweather Jr. at a strip club. 🤑Given their current financials, I imagine there is a lot of pressure being exerted to start producing a tangible ROI soon or else.
 

sdifox

No Lifer
Sep 30, 2005
96,152
15,772
126
It's worse than that. They're supporting their main process competitor, who's tightly integrated with their main product competitor.
Eh? Intel is not the one ponying up for development, Apple is.
 
Jul 27, 2020
17,917
11,688
116
Jensen would dirty his trousers from laughing so hard at the idea of Raja making a dent in nvidia's grip.
That depends. If Nvidia has important employees in their Indian offices (not sure what goes on there), Raja could poach those employees.
 

Mopetar

Diamond Member
Jan 31, 2011
8,005
6,451
136
That depends. If Nvidia has important employees in their Indian offices (not sure what goes on there), Raja could poach those employees.

As one of my cousins told me many years ago upon hearing a ridiculous utterance: "Well yeah, and monkeys could fly out of my butt."

Anything could happen, but I don't believe the lynchpin to NVidia's success resides in a hard-working, under-appreciated engineer in one of their Indian offices, whose talent can be spotted by Raja and harnessed to bring about the downfall of JHH and his company.

Those monkeys might even reproduce the works of Shakespeare after they emerge.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,692
136
As one of my cousins told me many years ago upon hearing a ridiculous utterance: "Well yeah, and monkeys could fly out of my butt."

Anything could happen, but I don't believe the lynchpin to NVidia's success resides in a hard-working, under-appreciated engineer in one of their Indian offices, whose talent can be spotted by Raja and harnessed to bring about the downfall of JHH and his company.

Those monkeys might even reproduce the works of Shakespeare after they emerge.

Well, pigs can fly. I might have seen the odd herd fly by too.

(Of course they can fly. It's just a matter of how big a rocket you strap on. Sometimes that yields instant bacon too... )
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,262
5,259
136
Jensen would dirty his trousers from laughing so hard at the idea of Raja making a dent in nvidia's grip.

Tenstorrent has potential to be a bit disruptive. I've always felt that the NVidia AI business is open for a disruptive competitor, since NVidia sells full-purpose GPUs with generic raster cores, RT cores and only some portion of the die devoted to AI Tensor cores.

It seems like a competitor would have an easy time getting more AI productivity out of a chip that has only AI cores. How much of an NVidia GPU die's transistor budget is going to non-AI? I'd bet it's probably over 75% non-AI. A competitor could build an AI chip half the size, devote all the transistors to AI, and have much more AI capability.
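
A rough back-of-the-envelope version of that argument, using the guessed 75% non-AI share above (all numbers are illustrative, not measured die data):

```python
# Toy die-area arithmetic for the dedicated-AI-chip argument above.
# All numbers are illustrative guesses, not measured die data.

gpu_die_area = 600.0        # mm^2, hypothetical full GPU die
ai_fraction = 0.25          # guess from the post: ~75% of the die is non-AI

gpu_ai_area = gpu_die_area * ai_fraction       # AI silicon on the full GPU
dedicated_die_area = gpu_die_area / 2          # "a chip half the size"
dedicated_ai_area = dedicated_die_area * 1.0   # all of it devoted to AI

print(f"AI silicon on the full GPU:       {gpu_ai_area:.0f} mm^2")
print(f"AI silicon on the dedicated chip: {dedicated_ai_area:.0f} mm^2")
print(f"Ratio: {dedicated_ai_area / gpu_ai_area:.1f}x the AI silicon from half the die area")
# ~2x the AI transistors at half the size, ignoring memory controllers,
# interconnect and other shared overhead that a real chip still needs.
```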
 
Reactions: beginner99

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
It seems like a competitor would have an easy time getting more AI productivity out of a chip that has only AI cores. How much of an NVidia GPU die's transistor budget is going to non-AI? I'd bet it's probably over 75% non-AI. A competitor could build an AI chip half the size, devote all the transistors to AI, and have much more AI capability.

It seems that Nvidia's strategy is to tout the generic abilities of their cards, so you can do all the pre- and post-processing on the cards, rather than require other hardware to do that.

Jim Keller seems to think that there is room for different kinds of solutions for different use cases, and I wouldn't bet against him.
 
Reactions: DAPUNISHER

Heartbreaker

Diamond Member
Apr 3, 2006
4,262
5,259
136
It seems that Nvidia's strategy is to tout the generic abilities of their cards, so you can do all the pre- and post-processing on the cards, rather than require other hardware to do that.

Jim Keller seems to think that there is room for different kinds of solutions for different use cases, and I wouldn't bet against him.

Pre and post processing?

Training something like an OpenAI GPT model just takes enormous AI training network capability, and some general purpose CPUs to control everything. General purpose GPU and RT cores don't matter at all.

Probably the main benefit for datacenter cloud HW rental is getting multi-use out of the cards. One week they might be renting the AI capability of a cluster for a massive deep-learning training session, the next week it might be rented for GPGPU computing, and another week they might get some use out of the RT cores...

But for in-house use where you aren't renting out capacity, or even for a cloud service that finds AI needs predominating, there could be a large win for dedicated AI HW.

ChatGPT is really putting a massive spotlight on AI again, which will probably make it easier for Tenstorrent to get more financing for dedicated AI HW, and as you say, I wouldn't bet against Jim Keller.
 
Reactions: beginner99

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
@guidryp

Most AI processing is going to be in running the models, not training them. Then you need to get the data in a format suitable for the model. For example, you might need to do image processing to detect relevant elements of an image, or for chatgpt, you need to transform a text into tokens, etc. Then afterwards, you may need to convert the result, or even just render it, if the output is an image.

For training you need to process huge amounts of data, so even though you won't have the postprocessing step, you'll still have to do preprocessing to get the data in the right format. Depending on the kind of preprocessing, a GPU may be far faster and more efficient at it than a CPU.
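
As a minimal sketch of the kind of CPU-side preprocessing being described here (a toy whitespace tokenizer, nothing like the real BPE tokenizers LLMs use), the step looks roughly like this:

```python
# Toy preprocessing step: turn raw text into integer token ids before it is
# handed to the model. Real LLM pipelines use subword (BPE) tokenizers; this
# whitespace version is only meant to show where the CPU-side work happens.

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an integer id to every whitespace-separated word in the corpus."""
    vocab = {"<unk>": 0}
    for text in corpus:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to token ids; unknown words fall back to <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

corpus = ["the quick brown fox", "the lazy dog"]
vocab = build_vocab(corpus)
print(encode("the quick dog", vocab))   # [1, 2, 6] -> this list is what the model actually sees
```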

See:

 
Reactions: igor_kavinski

Heartbreaker

Diamond Member
Apr 3, 2006
4,262
5,259
136
@guidryp

Most AI processing is going to be in running the models, not training them. Then you need to get the data in a format suitable for the model. For example, you might need to do image processing to detect relevant elements of an image, or for chatgpt, you need to transform a text into tokens, etc. Then afterwards, you may need to convert the result, or even just render it, if the output is an image.

For training you need to process huge amounts of data, so even though you won't have the postprocessing step, you'll still have to do preprocessing to get the data in the right format. Depending on the kind of preprocessing, a GPU may be far faster and more efficient at it than a CPU.

See:


Color me a little skeptical of NVidia, Maker of Hammers, saying that it's proper to treat everything like Nails.

I do notice that the people building AI HW for their own needs, like Google and Tesla, are building AI-only chips rather than GPUs. Tesla is making their own AI chips, one for the car computer and a different one for the training network, both of which they claim scale better than the NVidia solutions. Google recently said their TPU4 outperforms the NVidia A100 while using less power.
 

Aapje

Golden Member
Mar 21, 2022
1,467
2,031
106
Most companies aren't Google or Tesla though.

And like I said, it's likely that different solutions will appeal to different use cases. I bet that aside from those custom chips, Google and Tesla are also using GPUs for some things.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,062
7,487
136
The instant NV sees a real market for AI-only hardware, they'll pump it out. They might be working on making that market with their typical software-first strategy.

It's a nascent realm and you'd end up taking a bath more often than not betting against NV in these things.

It's basically CUDA round 2, and we've only hit the Fermi stage for AI hardware and software.
 
Reactions: Tlh97 and coercitiv