[WCCF] AMD Radeon R9 390X Pictured


ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Are you actually saying that a monopoly is the best way forward because costs are rising?

Engineering/scientific breakthroughs are never seen before they happen. If you read anything historical, note the dismay at how progress has slowed/stopped due to insurmountable barriers. This happens each time a technology matures.

I believe that competition pushes innovation to surpass these seeming barriers. No competition allows complacency with the status quo and steady sales and profits.

I asked you what your solution was. Talking about ideological benefits is pointless.

Look at where we are today.
GPU volume is shrinking, and relatively fast at that. This is the main issue, since volume is the key factor to offset higher design and R&D costs.
AMD releases a 300 series rebrand line with 1 new GPU because they can't afford more. In their financial analyst slides it's quite obvious that the only thing you're going to see from AMD in 2016 on the GPU side is more or less node shrinks and nothing else.
Both nVidia and AMD fiddle around on 28nm because they can't afford 20nm or 14nm at present. Pascal, like AMD's 400 series, doesn't look like much besides a shrink and HBM2. And one of the two companies can't make money on its PC division today.

The only way to offset R&D and design cost is higher volume and/or higher prices. However, at a certain point higher prices will just mean less volume, less revenue, etc. Design cost roughly doubles with every node. R&D cost goes up rapidly as well.
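The doubling claim compounds quickly. A minimal sketch, assuming an arbitrary $50M starting cost (the post gives no real figures):

```python
# Compounding the "design cost roughly doubles every node" rule of thumb.
# The $50M starting figure is an arbitrary assumption for illustration,
# not a number from the post.
costs = {}
cost = 50.0  # $M at the starting node (assumed)
for node in ["28nm", "20nm", "14nm", "10nm"]:
    costs[node] = cost
    cost *= 2  # doubles at each transition per the rule of thumb

print(costs)  # 28nm: 50, 20nm: 100, 14nm: 200, 10nm: 400
```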

You can pick between both patients dying or one of them. It's just how the semiconductor industry works. The economics behind it don't care about who wins, just that one has to win. This is why the 20 other x86 makers and god knows how many GPU makers are gone. This is also why the dGPU will vanish at some point in the future. It's all about cost.

By the way you forgot to include the last part of my post.
Only a FOOL will want a monopoly, remember that the next time someone says "Die AMD".

I didn't include it because, as of now, it's just plain silly.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Also remember HDL doesn't come for free. Just look at Carrizo. HDL basically killed Carrizo for desktop in favour of Kaveri, due to speed and power consumption at that target. That's why Carrizo is limited to 35W and down.

Carrizo is limited because the HDL libraries used appear to be optimised for lower frequencies, or a narrow frequency range. I see no reason another HDL variant couldn't target different frequencies. Carrizo is AMD's attempt to reclaim some notebook/laptop sales.

I actually think Carrizo shows us the potential for Fiji. Note the following slide. If Fiji has 5120 shaders, that's 10x Carrizo's shader count, and 10 times the wattage in that slide fits well with a big GPU's power draw.


This lets us estimate for the adaptive clocking/voltage tech.


Together, these two techniques give 72% of the power draw compared to running without them.

Power saving with many shaders @ same frequency = Beastly
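The arithmetic behind that estimate can be sketched out. The Carrizo per-slide wattage below is a placeholder assumption, since the slides aren't reproduced here; the 10x shader ratio and the 72% factor come from the post:

```python
# Back-of-envelope scaling of Carrizo's GPU power techniques to a
# hypothetical Fiji-class part. carrizo_gpu_w is a placeholder
# assumption; the 10x ratio and 0.72 factor are the post's claims.
carrizo_gpu_w = 20.0            # assumed GPU power per the (missing) slide
shader_ratio = 5120 / 512       # post's assumption: Fiji ~= 10x Carrizo shaders
combined_factor = 0.72          # both techniques together: 72% of baseline power

baseline_w = carrizo_gpu_w * shader_ratio      # naive 10x scale-up
with_savings_w = baseline_w * combined_factor  # with both power techniques
print(baseline_w, with_savings_w)              # 200.0 144.0
```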
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Just a minor nitpick: GDDR5 controllers are not big because they have a lot of transistors. They are big because they need complex analog elements that take a lot of space and which shrink poorly, if at all, as you move to smaller processes.

Can you give us a rough idea as to this?
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
I asked you what your solution was. Talking about ideological benefits is pointless.

Look at where we are today.
GPU volume is shrinking, and relatively fast at that. This is the main issue, since volume is the key factor to offset higher design and R&D costs.
AMD releases a 300 series rebrand line with 1 new GPU because they can't afford more. In their financial analyst slides it's quite obvious that the only thing you're going to see from AMD in 2016 on the GPU side is more or less node shrinks and nothing else.
Both nVidia and AMD fiddle around on 28nm because they can't afford 20nm or 14nm at present. Pascal, like AMD's 400 series, doesn't look like much besides a shrink and HBM2. And one of the two companies can't make money on its PC division today.

The only way to offset R&D and design cost is higher volume and/or higher prices. However, at a certain point higher prices will just mean less volume, less revenue, etc. Design cost roughly doubles with every node. R&D cost goes up rapidly as well.

You can pick between both patients dying or one of them. It's just how the semiconductor industry works. The economics behind it don't care about who wins, just that one has to win. This is why the 20 other x86 makers and god knows how many GPU makers are gone. This is also why the dGPU will vanish at some point in the future. It's all about cost.



I didn't include it because, as of now, it's just plain silly.

I have noticed in the past that you take speculation as fact.

AMD releases a 300 series rebrand line with 1 new GPU because they can't afford more. In their financial analyst slides it's quite obvious that the only thing you're going to see from AMD in 2016 on the GPU side is more or less node shrinks and nothing else.

AFAIK, not a single point in that statement is a fact. I'm talking about not using what you read on websites as facts. Every website covering the retail R300 series has so far only speculated. The only FACTS we have are a picture: a watercooled card with HBM memory on an interposer.

When did AMD say node shrinks ONLY in 2016?

Pascal, like AMD's 400 series, doesn't look like much besides a shrink and HBM2.

How in the world do you arrive at these conclusions?

I feel like I'm talking to an alien. Two totally different realities. The most I can say is that I have a different viewpoint.

By the way, most industries have more than one large enterprise, and they remain profitable and competitive.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136

You are getting a bit confused here.

Here is the text associated with your image.

Synopsys' DesignWare® Logic Libraries provide a broad portfolio of high-speed, high-density and low-power standard cell libraries, providing a complete standard cell platform solution for a wide variety of system-on-chip (SoC) designs. In addition to Synopsys' silicon-proven standard cell libraries, available Power Optimization Kits (POKs) and Engineering Change Order (ECO) Kits deliver outstanding performance, with low power and small area in the advanced nodes of leading foundries.

Ideal for tablet, smartphone, cell phone, graphics, networking, storage and other high-performance applications requiring low power and high density, Synopsys' DesignWare Logic Libraries and Memory Compilers provide a unique set of options that enable SoC designers to optimize their products for speed, area, dynamic power, standby power and cost.

Multiple Architectures, Multiple VTs, Multi-Channel Lengths
The standard cell libraries provide three separate architectures, high-speed (HS), high-density (HD) and ultra high-density (UHD), to optimize circuits for performance, power and area tradeoffs.


As you can see, these are the standard libraries offered by Synopsys to their clients.

This is not God making a commandment. There are other libraries in the world that might not have the same constraints. YOU DON'T KNOW.

In fact, the following appears to back my claims. Why would AMD spend R&D dollars on an effort that had no payback? Obviously they believe that their own libraries have greater value. The moonshot analogy alone implies high risk, high payback. They might have gotten their HDL to move up the frequency range relative to standard libraries.

http://www.anandtech.com/print/9319...p-leap-of-efficiency-and-architecture-updates

A big part of the reduction in die area comes from the set of high density libraries being used by AMD. Above were three examples provided where >33% gains were made in silicon area. Typically using a high density library design is a double edged sword – it reduces die area and potentially leaves more area for other things, but the caveat is that it may be more prone to defects in construction, require additional latency or have a different frequency/voltage profile. AMD assures us that these changes are at least like-for-like but most of them contain other improvements as well.

It’s worth noting here that AMD has described the high density library project internally as the equivalent of a moonshot, essentially the developers were part of a ‘skunkworks’ division attempting to make drastic changes in order to improve performance. The high density library is one such successful project from that.



I have mentioned this before. Look at the movie SPARE PARTS and see what creative engineering truly means.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,422
1,759
136
Can you give us a rough idea as to this?

The main reason GDDR5 controllers are so big is that they need to drive long wires to the chip at very high frequencies, while keeping signal quality high enough to still be reliable. To do this, they need complex and precise analog circuitry. This circuitry doesn't shrink much when you get smaller transistors, so a GDDR5 controller on 28nm isn't that much smaller than one on 40nm. In effect, GDDR5 displaces more transistors on every new die.

Interfaces that are slower per line can do with much simpler analog end points, which is why DDR3 controllers are smaller and cheaper. In addition to this, HBM also reduces the capacitance of the lines, again making the driving circuitry smaller.

We don't know how much smaller, but it should be a lot.
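The physics can be sketched with the usual dynamic-power relation P ≈ C·V²·f. The capacitance values below are invented placeholders purely for illustration; only the voltages and per-pin data rates are in the ballpark of real GDDR5/HBM1 figures:

```python
# Why HBM's PHY can be simpler: dynamic power of a driven signal line
# scales as P ~= C * V^2 * f. Capacitance values here are invented
# placeholders; voltages and per-pin rates are ballpark GDDR5/HBM1 figures.
def line_power_w(cap_f: float, volts: float, freq_hz: float) -> float:
    """Approximate dynamic switching power (watts) of one signal line."""
    return cap_f * volts**2 * freq_hz

# GDDR5: long PCB traces (high capacitance), ~7 Gbps per pin at 1.5 V
gddr5 = line_power_w(cap_f=5e-12, volts=1.5, freq_hz=3.5e9)
# HBM: short interposer traces (low capacitance), ~1 Gbps per pin at 1.2 V
hbm = line_power_w(cap_f=1e-12, volts=1.2, freq_hz=0.5e9)

# HBM compensates for the slow pins with a much wider bus (4096 vs 384 bits),
# but each line needs far less drive strength, hence simpler analog circuitry.
ratio = gddr5 / hbm
print(f"per-line drive power ratio, GDDR5 vs HBM: ~{ratio:.0f}x")
```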
 

flopper

Senior member
Dec 16, 2005
739
19
76
The main reason GDDR5 controllers are so big is that they need to drive long wires to the chip at very high frequencies, while keeping signal quality high enough to still be reliable. To do this, they need complex and precise analog circuitry. This circuitry doesn't shrink much when you get smaller transistors, so a GDDR5 controller on 28nm isn't that much smaller than one on 40nm. In effect, GDDR5 displaces more transistors on every new die.

Interfaces that are slower per line can do with much simpler analog end points, which is why DDR3 controllers are smaller and cheaper. In addition to this, HBM also reduces the capacitance of the lines, again making the driving circuitry smaller.

We don't know how much smaller, but it should be a lot.

Kinda big difference.
http://cdn.videocardz.com/1/2015/06/AMD-R9-FURY-vs-GTX-980-Ti-PCB-comparison-900x752.jpg
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Can anyone provide a link describing what these libraries do and how they work?

A library would be a collection of basic circuits, each one containing multiple basic electronic elements, such as transistors, etc. The use of a library provides a faster way to do complex circuit design.

I suppose libraries are the hardware equivalent of compilers.

Anyone want to correct or add?

If you use Google, you will find open-source examples of libraries.
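To make the "collection of basic circuits" idea concrete, here is a toy sketch; the cell names and figures are invented for illustration and don't come from any real library:

```python
# Toy model of a standard cell library: pre-characterized building blocks
# (area, delay, leakage) that a synthesis tool chooses between. All cell
# names and numbers are invented for illustration.
library = {
    "NAND2_HD": {"area_um2": 0.8, "delay_ns": 0.09, "leak_nw": 1.2},  # high-density
    "NAND2_HS": {"area_um2": 1.4, "delay_ns": 0.05, "leak_nw": 4.0},  # high-speed
}

def pick_cell(max_delay_ns: float) -> str:
    """Pick the smallest cell that still meets the timing target,
    mimicking the area/speed trade-off synthesis tools make."""
    ok = {n: c for n, c in library.items() if c["delay_ns"] <= max_delay_ns}
    return min(ok, key=lambda n: ok[n]["area_um2"])

print(pick_cell(0.10))  # relaxed timing -> the smaller HD cell: NAND2_HD
print(pick_cell(0.06))  # tight timing  -> the faster HS cell: NAND2_HS
```

This is roughly why a high-density library helps at low clocks but struggles at high ones: once the timing target tightens, only the bigger, faster cells qualify.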
 
Last edited:

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
This is also why the dGPU will vanish at some point in the future.

I will believe it when I see it. The market for the IGP that can keep a gamer happy is too small to design a CPU that way, unless you think that eventually all gains on the GPU side will level off completely as we hit some sort of wall.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
I will believe it when I see it. The market for the IGP that can keep a gamer happy is too small to design a CPU that way, unless you think that eventually all gains on the GPU side will level off completely as we hit some sort of wall.

Arrogance can be a company's demise in the future... Just saying.

Nothing says Intel or AMD has to have a compatible slot for an NVIDIA dGPU to go in. AMD could still use a dGPU and not allow NVIDIA a free ride... Same with Intel. SLI on a motherboard isn't free, after all.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
I will believe it when I see it. The market for the IGP that can keep a gamer happy is too small to design a CPU that way, unless you think that eventually all gains on the GPU side will level off completely as we hit some sort of wall.

Remember the rumors of 250-300 watt APUs from AMD. I can see these satisfying most gamers. This is pretty much an i5 + 980 class machine.

With HSA you only need one memory pool.

Intel is also making very good headway with their GPU performance. If either of them goes the high-power route, then discrete might die, as the remaining market might be too small to justify investment in production.

In this I agree with ShintaiDK as a possible future.

The obvious negative is less choice for the consumer. A solution is a large increase in CPU-GPU-Memory combinations for sale, which I don't see happening. Inventory planning would be chaotic.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
I asked you what your solution was. Talking about ideological benefits is pointless.

Look at where we are today.
GPU volume is shrinking, and relatively fast at that. This is the main issue, since volume is the key factor to offset higher design and R&D costs.
AMD releases a 300 series rebrand line with 1 new GPU because they can't afford more. In their financial analyst slides it's quite obvious that the only thing you're going to see from AMD in 2016 on the GPU side is more or less node shrinks and nothing else.
Both nVidia and AMD fiddle around on 28nm because they can't afford 20nm or 14nm at present. Pascal, like AMD's 400 series, doesn't look like much besides a shrink and HBM2. And one of the two companies can't make money on its PC division today.

The only way to offset R&D and design cost is higher volume and/or higher prices. However, at a certain point higher prices will just mean less volume, less revenue, etc. Design cost roughly doubles with every node. R&D cost goes up rapidly as well.

You can pick between both patients dying or one of them. It's just how the semiconductor industry works. The economics behind it don't care about who wins, just that one has to win. This is why the 20 other x86 makers and god knows how many GPU makers are gone. This is also why the dGPU will vanish at some point in the future. It's all about cost.



I didn't include it because, as of now, it's just plain silly.

You don't know the composition of AMD's line-up, and you don't know why they might build it the way they do. For the companies it's business as usual; you're just reading "someone has to die" into it. It's not odd to shrink older architectures during a new process transition; doing otherwise could be a massive risk. Going beyond 28nm was a problem due to the foundries, not AMD or nvidia. If the economics simply don't work out because those companies don't have the processes down adequately, then that's how the cookie crumbles. Work around it.

If we are talking about anyone dying it should be nvidia so the market can actually grow without their sabotage. Then a new more ethical player might come in.
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
I actually think Carrizo shows us the potential for Fiji. Note the following slide. If Fiji has 5120 shaders, that's 10x Carrizo's shader count, and 10 times the wattage in that slide fits well with a big GPU's power draw.


This lets us estimate for the adaptive clocking/voltage tech.


Together, these two techniques give 72% of the power draw compared to running without them.

Power saving with many shaders @ same frequency = Beastly

These are improvements made on the same process; for GPUs, a port to GF could eventually get better results than this, judging by how the TSMC-made Kabini GPU compares to its GF counterpart.

 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Nothing saying Intel nor AMD has to have a compatible slot for a NVIDIA dGPU to go in. AMD could still use a dGPU and not allow NVIDIA a free ride....Same with Intel. SLI on a motherboard isn't free after all.

Well that isn't dGPUs dying, that is Intel and AMD killing them.

Remember the rumors of 250-300 watt APUs from AMD. I can see these satisfying most gamers. This is pretty much an i5 + 980 class machine.

Which is not NEARLY good enough. The Titan X can barely play the most recent games at 60fps at 1080p. Seeing as in about five years the standard will be BETTER graphics at 4K, a 980 isn't close. Honestly, what we have currently is kind of pathetic, thanks to the best tech going into cell phones.

I know Intel is improving, but it would have to be a 10x improvement in half that many years to "kill" dGPUs in the market.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Well that isn't dGPUs dying, that is Intel and AMD killing them.



Which is not NEARLY good enough. The Titan X can barely play the most recent games at 60fps at 1080p. Seeing as how in like five years the standard will be BETTER graphics at 4K then a 980 isn't close. Honestly what we have currently is kinda pathetic thanks to the best tech going into cell phones.

I know Intel is improving, but it would have to be a 10x improvement in half that many years to "kill" dGPUs in the market.
Lots of people play games at less than 60 fps, 1080p, ultra settings, GameWorks, etc.


If integrated performance got high enough, I could use it for the majority of my gaming. Skylake looks like it'll have enough GPU performance that I'd only need a dGPU for triple-A titles.

I'll probably use a main gaming rig for big-screen gaming, but for desktop games like Dota 2, League, etc., Skylake by itself is good enough now.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Lots of people play games at less than 60 fps, 1080p, ultra settings, GameWorks, etc.


If integrated performance got high enough, I could use it for the majority of my gaming. Skylake looks like it'll have enough GPU performance that I'd only need a dGPU for triple-A titles.

I'll probably use a main gaming rig for big-screen gaming, but for desktop games like Dota 2, League, etc., Skylake by itself is good enough now.

Most APUs have been doing it since 2012. The A8-7650K is the best bang for the buck and will run all these games flawlessly.

AMD may become a go-to gaming rig part provider thanks to DX12.
If their APUs can boost performance running side by side with their dedicated cards, it can have quite an effect on fps.

An APU with HBM will give you entry-level dGPU performance; pairing it with a mid-range card to get +50% performance sounds quite amazing.

Even paired with a high-end GPU that is 5 times faster than the APU, it is still an additional 20% performance. Nothing to sneeze at, and certainly a big improvement by today's standards.
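Those percentages follow from simple ratios, assuming (optimistically) perfect scaling when the APU's iGPU works alongside the discrete card:

```python
# The post's pairing percentages, assuming idealized perfect scaling of
# APU iGPU + discrete GPU under DX12 multi-adapter (a big assumption).
def pairing_gain(igpu_perf: float, dgpu_perf: float) -> float:
    """Relative performance gain from adding the iGPU on top of the dGPU."""
    return igpu_perf / dgpu_perf

print(pairing_gain(1.0, 2.0))  # mid-range card ~2x the APU -> 0.5 (+50%)
print(pairing_gain(1.0, 5.0))  # high-end card 5x the APU  -> 0.2 (+20%)
```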

Anyway, I'm not really into this high-end stuff. It costs too much here. I'm interested in what Fury brings to the table, just to know what's going to be a mainstream card in the next two years.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Most APUs have been doing it since 2012. The A8-7650K is the best bang for the buck and will run all these games flawlessly.

AMD may become a go-to gaming rig part provider thanks to DX12.
If their APUs can boost performance running side by side with their dedicated cards, it can have quite an effect on fps.

An APU with HBM will give you entry-level dGPU performance; pairing it with a mid-range card to get +50% performance sounds quite amazing.

Even paired with a high-end GPU that is 5 times faster than the APU, it is still an additional 20% performance. Nothing to sneeze at, and certainly a big improvement by today's standards.

Anyway, I'm not really into this high-end stuff. It costs too much here. I'm interested in what Fury brings to the table, just to know what's going to be a mainstream card in the next two years.
I normally wouldn't be, but I've mapped my expenditures out for the year; I get paid twice a month and live off 80% of half a month's pay. Might as well treat myself to a new GPU.

AMD APUs just lack CPU performance.

Skylake is just so nice at the moment after looking at the broadwell review that I'll pay a little extra for a lot more.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Correct me if I am wrong, but HSA only works with compute, so you would still need dedicated memory for graphics (i.e., split memory into GPU/CPU portions).

Part of the HSA standard allows applications to have a common memory pool. Even if the CPU or GPU is doing unique tasks, the memory can be seen and accessed by both. You won't need separate memory pools. Compute is one of the uses that will benefit a lot, but it's not the only one.
The other big thing coming is PIM, processor-in-memory. With this, some memory instructions execute within the memory itself, instead of moving data to the CPU or GPU and then returning values to memory.
We are only now starting to see the benefits of interposers, which allow high-speed, low-latency transfers.
 
Last edited:
Feb 19, 2009
10,457
10
76
Good question. Maybe yields aren't that good anyway or maybe enough are sold under quadro or tesla brand. But yeah full GM200 named GTX 990 or so seems possible.

28nm should not have issues with yields in 2015.

Kepler is still the flagship Tesla & Quadro because GM200 is neutered on DP compute. NV hasn't marketed any GM200 Tesla SKU that I am aware of.

Thus, a ton of good GM200 dies need to go into a new SKU, think the Titan, 780 and 780Ti situation. We're getting a full GM200, 6GB vram, faster boost clock SKU in a few months.
 