New Zen microarchitecture details


Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
Oh, because 1024 GCN would be the only version, no cut down SKUs.

I can haz? Thank you so much! But... what about that cannibalization? You promised, and now the violence... they will rip each other apart!
I have no idea why your post has this tone.

Yes, lower-end SKUs will cannibalize the RX 460, but not the full chip.
 
Reactions: cytg111

Mopetar

Diamond Member
Jan 31, 2011
8,024
6,475
136
Any modern Zen-based APU would "cannibalize" Polaris 11; do you really expect people to buy the APU and then add a Polaris 11 card for the extra grunt?

Why would anyone buy an RX 460 if they could get an APU with just as many shaders, never mind any improvements that Vega adds over Polaris?

I think that we'll eventually get to a 16 CU APU, but probably not until the next node shrink. Otherwise it just eats up too much die space and TDP (the 460 with 14 CUs is rated at 75W itself). If we see it at all on 14 nm, it will be towards the end of the node's lifetime, when yields have improved and design tweaks can keep the TDP within acceptable limits.
 

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
I have no idea why your post has this tone.
I was joking around, guess I should have remembered I no longer have a cat avatar.

The point I was trying to make is that any competition or cannibalization between APUs and low-end dGPUs from AMD is inherent but nevertheless irrelevant: AMD will be happy to sell either, and would certainly not gimp their APU CU count based on Polaris 11 performance, but rather decide it based on more important metrics like expected efficiency, cost, etc.

Why would anyone buy an RX 460 if they could get an APU with just as many shaders, never mind any improvements that Vega adds over Polaris?
  1. Write cost for Polaris 11 upgrade here:
  2. Write cost for new AM4 system here:
  3. Compare costs
Also, here's how this line of reasoning looks from another standpoint: why would anyone buy an AM4 APU if they could get a faster RX 460 instead?
 
Last edited:
Reactions: Drazick

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Because Vega architecture scheduling is optimized for an 8 CU Shader Engine, and 8 CUs we already have in Bristol Ridge. Any improvement comes ONLY from adding a second Shader Engine with 8 CUs. Full designs can be either 8 CU or 16 CU. B&C claims that the mobile platform will have 12 CUs; yes, that is possible if the GPU has a few cores disabled.
You are ignoring the fact that even those 8 CUs are memory starved. If Vega is to bring another memory compression update, even 8 Vega CUs will be enough of an improvement over 8 GCN 1.2 CUs to warrant calling it a significant upgrade. You do have a point, though: 11 CUs is not a sensible number.
The rumor reached the B&C site from somewhere, just like their other credible information about Ryzen before it.
All rumors come from somewhere. If my memory serves me right, Skylake was hyped by B&C as a big update too.
 

Mopetar

Diamond Member
Jan 31, 2011
8,024
6,475
136
  1. Write cost for Polaris 11 upgrade here:
  2. Write cost for new AM4 system here:
  3. Compare costs
Also, here's how this line of reasoning looks from another standpoint: why would anyone buy an AM4 APU if they could get a faster RX 460 instead?

The people who are in the market for a ~$90 460 are probably the same target market for an APU, which can probably be had for $150 - $300 depending on specifications. Assuming Zen pans out, the CPU alone will probably be a pretty big step up over whatever you could get in the $100 - $150 price range previously from either AMD or Intel.

Considering what you can get a 470 for these days, the 460 already doesn't make sense: why not spend another ~$50 and get over twice the shaders and twice the memory of a 460? So, to turn your own reasoning back at you, why would anyone get a 460 if they could get a faster 470 instead? I suspect that the only reason it exists as a product is because AMD had to do something with the salvage parts that Apple wouldn't accept.
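A rough value sketch of that comparison, using the street prices mentioned in this thread (~$90 for a 460, roughly $50 more for a 470) and shader count as a very crude performance proxy; these are ballpark assumptions, not benchmarks.

```python
# Back-of-envelope value comparison between RX 460 and RX 470, using the
# rough street prices quoted in this thread and shader count as a crude
# performance proxy (assumptions, not measurements).
cards = {
    "RX 460": {"price_usd": 90,  "shaders": 896},    # cut-down Polaris 11
    "RX 470": {"price_usd": 140, "shaders": 2048},   # Polaris 10
}

for name, c in cards.items():
    print(f"{name}: {c['shaders']} shaders / ${c['price_usd']} "
          f"= {c['shaders'] / c['price_usd']:.1f} shaders per dollar")
```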
 

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
Reactions: Drazick

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
You are ignoring the fact that even those 8 CUs are memory starved. If Vega is to bring another memory compression update, even 8 Vega CUs will be enough of an improvement over 8 GCN 1.2 CUs to warrant calling it a significant upgrade. You do have a point, though: 11 CUs is not a sensible number.

All rumors come from somewhere. If my memory serves me right, Skylake was hyped by B&C as a big update too.
That is exactly why the 16 CU design will have HBM2: to completely remove the bandwidth problem.

However, let me give you a little perspective. Depending on the application in a gaming scenario, and how "modern" it is in technical terms, the performance gains from the Vega architecture will vary.

Let's assume a simple technical comparison: 1024 GCN core chips from the Polaris and Vega architectures, i.e. a 1024-core RX 460 and the Raven Ridge GPU.

The Raven Ridge APU in Overwatch will be 25% faster clock for clock than Polaris, just because of the architecture's throughput improvements, which need no implementation work in the game.
The Raven Ridge APU in Doom, with the game updated to use all of the hardware features, will be up to 75% faster clock for clock compared to the Polaris architecture in a similar scenario. But that will require redesigning the pipeline a bit and updating the game for the Vega hardware (Primitive Shader pipeline, etc.).
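To make those claimed clock-for-clock figures concrete, a trivial sketch: the 25% and 75% uplifts are the speculation from this post, and the baseline frame rate is a made-up number purely for illustration.

```python
# What the speculative clock-for-clock uplifts claimed above would mean
# against an arbitrary baseline. baseline_fps is made up for illustration;
# the percentages are the post's claims, not benchmark results.
baseline_fps = 60.0
uplift = {
    "Overwatch (no game-side changes)": 0.25,
    "Doom (updated for Vega features)": 0.75,
}

for scenario, gain in uplift.items():
    print(f"{scenario}: {baseline_fps * (1 + gain):.0f} fps "
          f"vs {baseline_fps:.0f} fps baseline")
```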

I will not even mention all of the compute improvements, because, as I understand it, AMD changed a lot about how the GPU handles registers to achieve higher clocks than Polaris. Vega appears to truly be a future-proof architecture, and it really will need HBM2 to shine, even on a technically small chip like a 1024 GCN core design.

And 16 CUs + HBM2 will appear in a massive number of markets IMO: embedded, machine learning, gaming, HTPC, normal, cheap, power-efficient computers, everywhere. If you ever wanted one solution that could fit everywhere, it is the APU. And this design will be a great start for the foundation of creating mindshare. If AMD wants HSA to truly gain traction, this is the only way.

Disclaimer: these are only my thoughts on what will happen, plus some rumors from the professional space, but nothing too specific to share.
  1. Write cost for Polaris 11 upgrade here:
  2. Write cost for new AM4 system here:
  3. Compare costs
Also, here's how this line of reasoning looks from another standpoint: why would anyone buy an AM4 APU if they could get a faster RX 460 instead?
You have forgotten about the performance improvements of a 1024 GCN core chip. It can cannibalize not only the RX 460, which is not a 1024 GCN core design in the mainstream market for a very good reason, but also the 1792 GCN core RX 470D, which is not on the market for a very good reason.

If you get this level of performance for, let's say, $300-350, in a more efficient and much simpler design, why would you buy any of those cards? Why would you buy a CPU, if one chip can do everything for the masses?

Do you get the perspective now?
 

Mopetar

Diamond Member
Jan 31, 2011
8,024
6,475
136
No, a 16 CU APU doesn't make sense (at this time); you're just ignoring what everyone is saying. 16 CUs eat up too much die space, which costs money and lowers yield. It also eats up far too much TDP, so if you're going to have most of those CUs turned on you need to drop the clocks, which kills the performance you're supposed to gain from the added CUs. At that point it makes more sense to have fewer CUs and clock them higher for similar performance without eating up extra die space. It just doesn't make sense right now from a logistics point of view.

Also, APUs are going to be targeted at customers who want a low cost solution that can handle casual gaming like DotA, Rocket League, etc. and the 460 is already overkill for those games. If you want something more serious, the jump up to a ~$130 470 makes more economic sense than anything else right now if you want a discrete card. If you don't or don't need one, the APU you can buy is going to cost you a lot less if the die size is smaller and AMD can make more of them.

Eventually, we'll probably get a 16 CU APU, but it will be towards the end of 14nm when the process is mature and they need to refresh their lineup, or it will likely be the new baseline for the next process node. At that point AMD probably won't sell a stand-alone desktop card with 16 CUs unless it's a salvage part or something that just goes into OEM systems. It's too much of a niche product that only makes sense for a very limited number of customers.

My guess is we see an 8 CU design again unless they've tweaked their layout specifically for their APUs.
 
Reactions: Sweepr

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
Why do you consider the Raven Ridge APU a low-cost solution, when I am telling you that a 95W TDP 4C/8T + 16 CU + 4 GB HBM2 part will cost between $300 and $350?

Why do you think that 16 CUs will eat too much TDP? A Radeon Pro 460 clocked at 907 MHz uses 18W under load for the GPU alone; the rest of the 35W TDP is consumed by the GPU RAM. You can test this yourself with an RX 460 downclocked to 850-900 MHz.
And before you ask: yes, I have tested this myself. HBM2 in the Raven Ridge APU will use between 5 and 8W of power, depending on clocks. And Vega has been optimized for higher core clocks in lower-power scenarios, so there is a lot of headroom for clocks.
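Taking those figures at face value, the implied power split looks like this (a rough sketch using only the numbers claimed in this post, not official datasheet values):

```python
# Power split implied by the figures claimed above (poster's numbers,
# not official specs): Radeon Pro 460 board TDP vs GPU-die-only draw.
board_tdp_w = 35                 # Radeon Pro 460 TDP as stated
gpu_die_w_at_907mhz = 18         # claimed GPU-die draw under load
gddr5_w = board_tdp_w - gpu_die_w_at_907mhz   # remainder attributed to memory
hbm2_low_w, hbm2_high_w = 5, 8   # claimed HBM2 range, clock-dependent

print(f"GDDR5 share of the 35W board: ~{gddr5_w}W")
print(f"GPU die + HBM2 instead: {gpu_die_w_at_907mhz + hbm2_low_w}-"
      f"{gpu_die_w_at_907mhz + hbm2_high_w}W")
```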

AdoredTV has done a test on this as well. Here, check it out:
Around the 1-minute mark.

You guys genuinely underestimate AMD. You genuinely underestimate the possibilities.

If you ask me what we could see on the desktop for a 95W APU: a 4C/8T Ryzen clocked at 3.4/3.7 GHz, and a 16 CU design at 1250 MHz with 4 GB of HBM2.
 

Vaporizer

Member
Apr 4, 2015
137
30
66
A 16 CU Polaris card with 8 GB GDDR5 already sits at $250, so I can't see how adding 4 GB of HBM2 and a full CPU adds up to only $300.
The advantage of a CPU + separate graphics card is that you can upgrade later on.
You lose this advantage when using an APU, but gain better efficiency and possibly a smaller form factor.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
However, let me give you a little perspective. Depending on the application in a gaming scenario, and how "modern" it is in technical terms, the performance gains from the Vega architecture will vary.
Yep, and that's why, frankly, Vega is already overhyped in spite of having about 2 hours total dedicated to it by AMD. When you need to get developers to bend over to your design with minor market share, you are in trouble. It should be better long-run, of course, but long-run and GPUs do not happen too often.

And once again, not even once do you address the issue of HBM2 inflating the cost for something that is absolutely useless in the only growing PC market nowadays. In fact, you suggest that it costs like basically a present i5+rx470 system that will blow it out of the water in almost every circumstance. I won't even mention the fact that present mindshare associates iGPUs of any sort with cheap solution, not something that costs like CPU+dGPU combo.

In the end, your thoughts are mostly drawn on the wish for it to happen and some vague discussions about what Vega is. When frankly AMD barely gave any information on it to make any meaningful conclusion. Hey, it would be fun to happen but i am still not buying that, HBM2 is a no go, and as such 16CU APU is pointless. In fact, even 12CU APU is pointless.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136

^ Lol, talks about barely having any information in a post full of sweeping generalizations and conjecture. At least username is on point.

I think anyone with their finger on the pulse of technology can see where AMD is heading with their MCM plans. And they're also aware of the possibilities from portables to HPC. Here's hoping Zen is built with enough forethought to allow this to happen sooner rather than later.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,868
3,419
136
Yep, and that's why, frankly, Vega is already overhyped in spite of having about 2 hours total dedicated to it by AMD. When you need to get developers to bend over to your design with minor market share, you are in trouble. It should be better long-run, of course, but long-run and GPUs do not happen too often.
And what's that got to do with Vega? So far it looks like you're just making stuff up.
better load balancing - no developer input needed
caching model for VRAM - no developer input needed
better rasterization - no developer input needed
better ROP memory handling - no developer input needed
better ALU utilization - no developer input needed

new primitive shader - ZOMG AMD YOU ARE TEH FAIL!@!@!@!@!@11112@$#%


And once again, not even once do you address the issue of HBM2 inflating the cost for something that is absolutely useless in the only growing PC market nowadays.
So we can have eDRAM chips from Intel and they made sense, but a stack or two of HBM2 used as a bandwidth accelerator (thanks to the Vega cache controller) is all of a sudden unfeasible...

In fact, you suggest that it costs like basically a present i5+rx470 system that will blow it out of the water in almost every circumstance. I won't even mention the fact that present mindshare associates iGPUs of any sort with cheap solution, not something that costs like CPU+dGPU combo.
Not in laptops, but you just choose to ignore the largest portion of the consumer x86 market...


In the end, your thoughts are mostly drawn on the wish for it to happen and some vague discussions about what Vega is. When frankly AMD barely gave any information on it to make any meaningful conclusion. Hey, it would be fun to happen but i am still not buying that, HBM2 is a no go, and as such 16CU APU is pointless. In fact, even 12CU APU is pointless.

I don't think HBM2 will happen on RR, but it will happen soon, and it will be 1 or 2 stacks of 1 or 2 GB of memory used as a bandwidth accelerator. It will do very well in laptops, because HBM will allow higher performance at lower power.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
By virtue of being in total control over the console market, AMD can bring the benefits of the new features of Vega into the PC space faster than some naysayers might think.

Case in point - primitive shaders and packed-math operations are both features present in the PS4 Pro. So developers already have the hardware to work with. Presumably, Scorpio will have them as well. If things go well, the benefits of these features in Vega will arrive in months, not years.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
Yep, and that's why, frankly, Vega is already overhyped in spite of having about 2 hours total dedicated to it by AMD. When you need to get developers to bend over to your design with minor market share, you are in trouble. It should be better long-run, of course, but long-run and GPUs do not happen too often.

And once again, not even once do you address the issue of HBM2 inflating the cost for something that is absolutely useless in the only growing PC market nowadays. In fact, you suggest that it costs like basically a present i5+rx470 system that will blow it out of the water in almost every circumstance. I won't even mention the fact that present mindshare associates iGPUs of any sort with cheap solution, not something that costs like CPU+dGPU combo.

In the end, your thoughts are mostly drawn on the wish for it to happen and some vague discussions about what Vega is. When frankly AMD barely gave any information on it to make any meaningful conclusion. Hey, it would be fun to happen but i am still not buying that, HBM2 is a no go, and as such 16CU APU is pointless. In fact, even 12CU APU is pointless.
If what I have written is wishful thinking, then it is also your wishful thinking that AMD will NOT offer such a solution. The only argument you have is your belief that AMD will not offer this; you have provided nothing else.

How come? HBM2 will have lower production costs than HBM. For a very simple reason: the number of clients using it, and the number of products it may/will appear in, is wider than for HBM. Nvidia: GP100, GVXXX; AMD: Vega 10, Vega 11, the aforementioned APU. What did the HBM lineup have? Only the Fiji series.

Why did AMD go with a 2048-bit memory controller? Because it saves costs. It simplifies the manufacturing, packaging, assembly, everything that could drive costs up. And HBM2 with 2 stacks provides enough bandwidth for every compute product they can offer.
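For reference, the peak bandwidth of a 2048-bit (two-stack) HBM2 interface follows directly from the per-pin data rate; a minimal calculation, assuming the roughly 1.6-2.0 Gbps per pin range HBM2 was specified at (an assumption, not an AMD-confirmed clock):

```python
# Peak bandwidth of a two-stack (2048-bit) HBM2 interface at a few
# plausible per-pin data rates from the HBM2 spec range (assumed, not
# AMD-confirmed figures).
bus_width_bits = 2048                     # 2 stacks x 1024 bits each

for gbps_per_pin in (1.6, 1.8, 2.0):
    bandwidth_gbs = bus_width_bits * gbps_per_pin / 8   # Gbit/s -> GB/s
    print(f"{gbps_per_pin} Gbps/pin -> {bandwidth_gbs:.0f} GB/s peak")
```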

An i5 + RX 470 combination blown out of the water by a 4C/8T + 16 CU + HBM2 design? No. On the CPU side that may very well be the case, but the GPU will at best land between WX 5100 and RX 470D performance levels. It can beat that combo on value and performance per watt, and allow much simpler computers to be built, but not on raw gaming performance. In compute, it's another story.

On the 16 CU design being pointless: why is it pointless? Because YOU believe that APUs are a low-end solution? They were, for a very simple reason: there was never balance in the design. Both AMD and Intel offer APUs. Intel has a great CPU, but rubbish graphics and drivers. AMD has a rubbish CPU, but great graphics, graphics drivers, and, what's more, the required software initiatives, which you completely ignore. They are called GPUOpen. Raven Ridge has, for the first time in years, a chance to change that by offering a great CPU and a great GPU in the same package. HBM2 is required for that package to shine; that's why it has to be implemented. And once again: 4C/8T + 16 CU + 2 stacks of HBM2 can cost between $300 and $350. That is not a low-end price.

This APU would not be bought only by the low-end market. It can see usage in various markets, from gaming to server, embedded, and machine learning applications. It's a pure win in those solutions, because it offers excellent performance per watt compared to a CPU + dGPU combination of similar performance.

There is NOTHING stopping AMD from offering this sort of solution, and they could earn tons of money from it. With ONE cooler in your computer, you get decent enough performance for 1080p gaming, very capable compute, and an extremely efficient computer overall.

And the best bit: according to rumors, they are already working on the design.
 

flash-gordon

Member
May 3, 2014
123
34
101
There is NOTHING stopping AMD from offering this sort of solution, and they could earn tons of money from it. With ONE cooler in your computer, you get decent enough performance for 1080p gaming, very capable compute, and an extremely efficient computer overall.
For AMD, trading a low-margin low-end GPU for a big-margin APU won't be a problem, and the time is surely approaching for an APU of this kind, but the way I see it, the ones that will suffer the most and be cannibalized are AMD's console partners with their current-gen hardware.

I'm pretty sure AMD accounts for this when scheduling the release of an APU with this much power. So I bet AMD will wait at least until Scorpio has been on the market for some months before releasing this beast.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
I don't think a 1024 GCN core chip clocked at 1250 MHz can cannibalize the PS4 Pro APU or the Project Scorpio one, despite having similar hardware features, apart from a few things in the PS4 Pro.

The biggest and most important factor for AMD, if they want APUs to really matter "anywhere", is providing powerful, killer hardware that can be used in a large number of markets, not only low-end but also professional, embedded, and machine learning, and be a viable option there.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
So Vega 1024GCN inside the APUs cannot threaten Polaris 2304GCN @ 900Mhz, yet according to you it has the potential to cannibalize Polaris 1792GCN @ 1200Mhz.
Because it would not be as fast as the RX 470D, but it would be much more power efficient. In another post I pointed out where the GPU in question can land in performance: between the RX 470D and the WX 5100. And that is the best-case scenario, not every scenario.

And lastly, the performance of the APU's GPU will depend on core clocks. If a 95W 4C/8T + 16 CU + HBM2 part could clock up to 4.0 GHz on the CPU and 1.5 GHz on the GPU, then we can talk about extremely viable hardware that could possibly consume a lot of the market. Vega can achieve higher clocks in a lower-power scenario because, based on what appears at first glance from the architecture, AMD changed the way registers are handled, which was one of the factors holding previous versions of GCN back from higher efficiency and IPC. This is only a clue and speculation, based on what AMD has provided to date and on opinions from game developers, but it appears to be the case here. Vega, and the Raven Ridge APU, will therefore have much better compute capabilities than any previous generation of GCN. However, back to core clocks: the 1024 GCN core Radeon Pro 460 GPU die at 907 MHz, which I pointed out, consumes around 18W under load. The rest is consumed by memory, and each memory chip consumes around 4W of power.

If a 4C/8T Ryzen CPU at a 4.0 GHz core clock can be squeezed into a 65W thermal envelope, how low would the clock have to be to squeeze it into 45W? 3.4/3.7 GHz? Then we have 50W of headroom for the HBM2 and the GPU. Vega will have a different power curve compared to Polaris, because it can achieve higher core clocks in a similar power envelope, so there is a lot of room to play with clocks on both the CPU and the GPU.
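Spelled out as arithmetic, using only the estimates from this post (none of these are measured or confirmed figures):

```python
# The 95W budget argument above, spelled out. All inputs are the
# poster's estimates, not measured or confirmed values.
apu_tdp_w = 95
cpu_at_3_4ghz_w = 45          # assumed draw of the down-clocked 4C/8T CPU
gpu_die_at_907mhz_w = 18      # claimed 1024-core GPU die draw at 907 MHz
hbm2_w = 8                    # upper end of the claimed HBM2 range

headroom_w = apu_tdp_w - cpu_at_3_4ghz_w
leftover_w = headroom_w - gpu_die_at_907mhz_w - hbm2_w
print(f"GPU + HBM2 headroom: {headroom_w}W")
print(f"Left over for higher GPU clocks: {leftover_w}W")
```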

The last question: at those specs, would it eat into PS4 Pro and Project Scorpio market share, i.e. living-room gaming? Not really. But it makes for an extremely viable option everywhere else.
 
Last edited:

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
I think anyone with their finger on the pulse of technology can see where AMD is heading with their MCM plans.
Of course. In fact, that's my main argument against RR being a scaled-down HPC APU design: the latter is an MCM with utter certainty, the former being an MCM has little to no point.
And what's that got to do with Vega? So far it looks like you're just making stuff up.
Nothing really, though none of us really know how far these new shaders reach.. or do we?
So we can have eDRAM chips from Intel and they made sense, but a stack or two of HBM2 used as a bandwidth accelerator (thanks to the Vega cache controller) is all of a sudden unfeasible...
You are making a leap of faith in assumption that eDRAM is as expensive. If it is a fact, share the evidence. If it is not, then we can agree to a tie.
Not in laptops, but you just choose to ignore the largest portion of the consumer x86 market...
Fair enough, but i think it is understandable to forget laptops when their TDP does not quite fit.
I don't think HBM2 will happen on RR, but it will happen soon, and it will be 1 or 2 stacks of 1 or 2 GB of memory used as a bandwidth accelerator. It will do very well in laptops, because HBM will allow higher performance at lower power.
Maybe when they figure out how to make it cheap enough to use as VRAM buffer. Not this year, is one thing i am confident about.
Case in point - primitive shaders and packed-math operations are both features present in the PS4 Pro.
Google only turns up a speculation on that over at B3D based on fairly vague words of Cerny. Either way, you do have a point, it may just have half the features from Vega.
If what I have written is wishful thinking, then it is also your wishful thinking that AMD will NOT offer such a solution.
That's fair enough, though from my perspective, i just see another failed product like Skylake GT4e in it.
How come? HBM2 will have lower production costs than HBM.
HBM2 is already produced. Instead of using "will", you may just try and find the costs. I know, that's hard, i've failed to do so when trying to compare eDRAM and HBM costs. Considering that the only products with it (or HMC) we have heard of are the cream of the crop of ASICs on the market, and Samsung has discussed cutting the width of a stack in half to make it somewhat cheaper, i dare to assume it is still expensive as hell.
Why did AMD go with a 2048-bit memory controller? Because it saves costs.
They were saving costs on 500mm^2 die? That to me means only 1 thing: it is really expensive, after all.
Because YOU believe that APUs are a low-end solution?
Well, i don't believe, that's... a fact, you know? Sure, console APUs are sort of exception... until you realize that in grand scale they are still low end products.
Intel has a great CPU, but rubbish graphics and drivers. AMD has a rubbish CPU, but great graphics, graphics drivers, and, what's more, the required software initiatives, which you completely ignore.
You know Intel's iGPU is way better than you give it credit for, nowadays? Drivers are rubbish though.
As for GPUOpen... Well, that's easy: name me software that uses it without direct AMD involvement. Now do not wonder i ignore it.
HBM2 is required for that package to shine; that's why it has to be implemented
And now that i remember it, such package would make way more sense on consoles. They are a semi-custom company, after all.
This APU would not be bought only by the low-end market. It can see usage in various markets, from gaming to server, embedded, and machine learning applications. It's a pure win in those solutions, because it offers excellent performance per watt compared to a CPU + dGPU combination of similar performance.
Actually, it would offer same performance per watt as CPU+dGPU combo of similar performance down to VRM efficiency variations. HBM2 is not that power efficient, if you did not know.
 
Reactions: Sweepr

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
HBM2 is already produced. Instead of using "will", you may just try and find the costs. I know, that's hard, i've failed to do so when trying to compare eDRAM and HBM costs. Considering that the only products with it (or HMC) we have heard of are the cream of the crop of ASICs on the market, and Samsung has discussed cutting the width of a stack in half to make it somewhat cheaper, i dare to assume it is still expensive as hell.
Price depends on production volume: the higher the volume, the lower the cost. AMD can get a much better deal on HBM than anyone else, because they co-developed it and will have at least two products that use HBM2 (so they can buy in higher volume, lowering the manufacturing costs and therefore the price for AMD).
They were saving costs on 500mm^2 die? That to me means only 1 thing: it is really expensive, after all.
HBM1 was. HBM2 might not be, because of the aforementioned production volume. Saving costs is what every company goes for: if they can design cutting-edge hardware and also save on manufacturing costs, that is a clear win for them.
Well, i don't believe, that's... a fact, you know? Sure, console APUs are sort of exception... until you realize that in grand scale they are still low end products.
If you look at where they are right now, they are low-end. If you look at where they are going, they are not only low-end.
You know Intel's iGPU is way better than you give it credit for, nowadays? Drivers are rubbish though.
As for GPUOpen... Well, that's easy: name me software that uses it without direct AMD involvement. Now do not wonder i ignore it.
For example HIP and the HCC compiler. Companies are starting to move away from CUDA to OpenCL, and here HIP was extremely helpful for compiling CUDA code to OpenCL.

Yes, I know that Intel GPUs are not bad right now; however, they are only not bad at the high end. The parts with eDRAM are not bad; the rest is rubbish. And Intel's software is rubbish, because nobody at Intel cares about it. It's rubbish to the degree that Apple's own macOS drivers for Intel GPUs are better than Intel's drivers for Windows.
And now that i remember it, such package would make way more sense on consoles. They are a semi-custom company, after all.
Not only in consoles, but everywhere that memory bandwidth can help with delivering data and reducing the hardware's power consumption.
Actually, it would offer same performance per watt as CPU+dGPU combo of similar performance down to VRM efficiency variations. HBM2 is not that power efficient, if you did not know.
Each memory stack uses between 3 and 5W of power, depending on its clocks. HBM was using 5W per stack, and Fiji's memory was using 20W of power in total. HBM2 uses 3 or 4W of power, depending on the clocks.

Let me give you some perspective. A 95W APU with 4C/8T can offer Core i7-4790S level performance (a 65W CPU) plus a GPU that offers WX 5100 level performance (75W TDP). That CPU + GPU combination alone would use 140W of power; this APU would use only 95W. And the build is much simpler, because all you need is a good enough cooler like a Cryorig H5 or a Be Quiet! Dark Rock 3, or you can go "uber" with a liquid cooling loop.
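In plain numbers, taking the rated TDPs named in the post at face value (rated TDPs, not measured system power):

```python
# Claimed efficiency advantage of the hypothetical 95W APU over a
# discrete combo of roughly equivalent parts, using the rated TDPs
# named in the post (not measured figures).
apu_tdp_w = 95
cpu_i7_4790s_tdp_w = 65
gpu_wx_5100_tdp_w = 75

combo_w = cpu_i7_4790s_tdp_w + gpu_wx_5100_tdp_w
saving_w = combo_w - apu_tdp_w
print(f"CPU + dGPU: {combo_w}W rated vs {apu_tdp_w}W APU "
      f"(~{100 * saving_w / combo_w:.0f}% less)")
```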
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
It's a theoretical exercise.
What serial chains of code will present a 4C/8T Zen CPU with difficulties that the user will notice and that cannot be offloaded to a GPU on a separate die?
It's not like async compute isn't working because it's on a separate die; it isn't working because programmers don't use it.
If I look at the FPU in a 4-core Zen, it's about 4 times as strong as a 2-module/4-core Excavator. And then look at an 8-core solution.
It just doesn't make any sense to go the last stretch and pay $1.2B for it.
That kind of serial code which will probably affect the user experience is often just not-so-well-written code at some stage. There are many ways to create something slow, and you might find it anywhere: device drivers, the OS, application code. Even shader compilers might show bad habits here.

Where did you get this $1.2B number from? You don't need money for developing the main GPU components, which are also used in the dGPU chips; these are IP blocks. Putting this all together into a design, where the necessary blocks (Zen cores, GPU cores, UMC, FCH, etc.) already exist, mostly on the same process, would, in the case of omitting a $5-$10 interposer solution, mean costs of less than $100M for new mask sets.

Clarkdale and Arrandale had GPUs on package. Would that have been anything close to Fusion if the GPU could have done OpenCL stuff?

Yep, and that's why, frankly, Vega is already overhyped in spite of having about 2 hours total dedicated to it by AMD. When you need to get developers to bend over to your design with minor market share, you are in trouble. It should be better long-run, of course, but long-run and GPUs do not happen too often.
The base of such a design usually affects multiple generations of GPUs. See GCN for an example.

I won't even mention the fact that present mindshare associates iGPUs of any sort with cheap solution, not something that costs like CPU+dGPU combo.
What kind of point is that? Because iGPUs have been severely memory-bandwidth limited, they usually were slow and priced accordingly. If one could reduce the bandwidth bottleneck, would the latter still remain true?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
If I were AMD, the first 14nm APU would be small, like 100-120mm².

4 Zen cores with HT (8 threads) + a 9 CU Polaris iGPU (but with 16 ROPs), with only a 128-bit DDR4 memory interface (3200 MT/s or higher under OC).

CPU performance would go up 50%+ (2-module Excavator vs 4 Zen cores) and iGPU performance would go up by 30-50% because of the Polaris architecture (3rd-gen color compression, L2 cache) plus the higher memory bandwidth.

This thing could be sold cheap and make a lot of profit both in desktop (35W up to 95W TDP) and especially in mobile (4.5W up to 45W TDP).
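For context on the bandwidth side of that proposal, here is the standard DDR4 peak-bandwidth arithmetic for a 128-bit interface (nothing AMD-specific, just the bus math):

```python
# Peak theoretical bandwidth of a 128-bit (dual-channel) DDR4 interface
# at a few transfer rates, including the DDR4-3200 mentioned above.
bus_width_bytes = 128 // 8        # 16 bytes per transfer

for mts in (2400, 3200, 3600):    # mega-transfers per second
    bandwidth_gbs = bus_width_bytes * mts / 1000
    print(f"DDR4-{mts}: {bandwidth_gbs:.1f} GB/s peak")
```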
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
I find this situation very similar to when high-level object-oriented programming languages popped onto the scene: programmers were designing complex software that could easily have taken advantage of high-level object-oriented languages, but they were still writing it in low-level languages like C.

It's exactly the opposite, because to write code for the GPU we need to go back to low-level languages and paradigms (and yes, I count C++ as low-level). Yes, there are libraries for using OpenCL from high-level languages, but you still need to heavily adjust (or rather dumb down) your code to work with OpenCL or GPUs in general: from objects back to arrays and scalars. The point is that it is much harder and takes longer to develop, and hence will only be used in software where it really brings a great benefit in terms of absolute and relative performance increase.
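A toy illustration of that restructuring, in plain Python with no GPU library assumed: the same update written the object-oriented way and then flattened into the structure-of-arrays form that an OpenCL or CUDA kernel effectively forces on you.

```python
# Toy example of the restructuring described above: object-oriented code
# versus the flat-array form that GPU kernels (OpenCL/CUDA) expect.
# Pure Python here; no GPU library is used.

# "High-level" style: a list of objects, each updated through a method.
class Particle:
    def __init__(self, x, v):
        self.x, self.v = x, v
    def step(self, dt):
        self.x += self.v * dt

particles = [Particle(float(i), 0.5) for i in range(4)]
for p in particles:
    p.step(0.1)

# "GPU-ready" style: structure-of-arrays, one flat loop over indices.
# This is the layout you would hand to an OpenCL/CUDA kernel, where each
# work-item processes one index i.
xs = [float(i) for i in range(4)]
vs = [0.5] * 4
dt = 0.1
for i in range(len(xs)):          # on a GPU, each i would be a work-item
    xs[i] += vs[i] * dt

print([p.x for p in particles])
print(xs)
```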
 
Reactions: Phynaz