[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
The GTX 1660 Ti has a 1.9 GHz boost clock, 1536 CUDA cores, and 288 GB/s of bandwidth.

Why wouldn't a 1280-GCN-core chip at, let's say, 1.8-2 GHz, with a 256 GB/s GDDR6 memory bus, be able to get close to GTX 1660 Ti performance?

Remember, guys: it appears AMD may have changed a lot in the structure of the SIMDs, similar to the changes Nvidia made with Maxwell. First and foremost, that allowed Nvidia to increase efficiency, because fewer cores could do more work; at the same time, the architectural change let them raise GPU clocks, because it reduced data movement across the GPU.

If AMD has massively improved their memory compression, then yes, a 128-bit bus plus GDDR6 clocked higher than the 1660 Ti's 12 Gbps could do the trick, but with the current AMD architecture they need WAY more bandwidth to equal NVIDIA.
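
For a rough sense of the numbers (back-of-the-envelope math only; the 14 Gbps GDDR6 figure is a hypothetical, not a leaked spec):

# How much effective compression a 128-bit GDDR6 card would need
# to match the GTX 1660 Ti's raw bandwidth.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Raw memory bandwidth in GB/s: (bus width in bytes) * (per-pin data rate)."""
    return bus_width_bits / 8 * data_rate_gbps

gtx_1660_ti = bandwidth_gb_s(192, 12.0)  # 288 GB/s, as quoted above
navi_small = bandwidth_gb_s(128, 14.0)   # hypothetical 14 Gbps part -> 224 GB/s

print(f"1660 Ti raw: {gtx_1660_ti:.0f} GB/s")
print(f"128-bit @ 14 Gbps: {navi_small:.0f} GB/s")
print(f"Compression advantage needed: {gtx_1660_ti / navi_small:.2f}x")

Even at 14 Gbps, a 128-bit card would need roughly a 1.3x effective-bandwidth edge from compression just to pull level in raw terms, before accounting for the compression gap that already exists today.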
 
Reactions: realibrad

RaV666

Member
Jan 26, 2004
76
34
91
... I cannot see, however, a small GPU, like 120 mm² with a 128-bit memory bus, consuming around 100W of power...
Why can't you? Vega 20 is 330 mm² with a 300W TDP, so a chip a third of that size or a bit bigger can easily reach the same heat density. It could probably even do 120W or so.
This, however, depends on just how aggressive AMD wants to be with clocks. Vega really does have better perf/W than Polaris; it's just that AMD wanted to exploit every MHz they could get out of Vega 64 because they wanted it to rival the GTX 1080.
Simple Vega 56 scaling in games from my card: 1560 MHz = 180W, 1640 MHz = 250W, 1700 MHz = 340W. So for another 140 MHz you almost have to DOUBLE the power.
So, CAN a 120 mm² chip have a 100W or even 120W TDP? Yes, it can. Will it? That depends on how badly AMD needs the clocks to hit a certain performance number.
TBH, I really think AMD should stop with this clock maximizing; it hurts them more than it helps. They become the butt of the "furnace" jokes and have to invest more in power delivery. They should instead step down a notch in performance and release cards at the best point on their perf/W curve. That's what Nvidia does; their cards could also easily consume 300-400W if Nvidia just LET users manage the TDP number the way AMD does.
But I have little hope that AMD will manage their TDP properly: all three cards they released recently (Vega 20, before that Polaris 30, and before that Vega 64) were maximized beyond the point of reason.
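
Those numbers track how dynamic power scales: roughly P ∝ f·V², and voltage has to climb to keep higher clocks stable. A quick sanity check on the figures above (the implied voltages are derived from that simple model, not measured):

# Dynamic power scales roughly as P = k * f * V^2, so the last few
# hundred MHz get expensive: f rises linearly but V must rise with it.
# Clock/power pairs are the Vega 56 figures reported above.

points = [(1560, 180), (1640, 250), (1700, 340)]  # (MHz, board watts)

base_mhz, base_w = points[0]
for mhz, watts in points[1:]:
    freq_only = base_w * (mhz / base_mhz)   # power if only frequency scaled
    implied_v = (watts / freq_only) ** 0.5  # relative voltage implied by P = k*f*V^2
    print(f"{mhz} MHz: {watts} W reported vs {freq_only:.0f} W from "
          f"frequency alone -> ~{implied_v:.2f}x the voltage")

Going from 1560 to 1700 MHz is only a 9% clock bump, but under that model it implies something like 30% more voltage, which is why the power nearly doubles.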
 
Reactions: prtskg

railven

Diamond Member
Mar 25, 2010
6,604
561
126
We actually need competition. For the sake of sane prices, we actually need competition.

I don't think either AMD or Intel will start a price war. I just want options. I'd love a price war, but the only price wars I've seen are at the bottom of the market, and frankly those aren't really worth it :/

P.S. If you have an RTX 2080 Ti, don't hold your breath for next-gen AMD GPUs. They won't be that fast. At least not the first two dies.

I know
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Absolutely.
Do people think the AMD people at Markham, Austin, or Orlando just, like, do nothing?

Are these RTG-focused development sites? If so, then yes, the evidence so far is that they do nothing.

What have we actually seen in the way of GCN improvements since Hawaii? Delta color compression, yes, but it's nowhere near as good as Nvidia's. Some cut-and-paste of better 3rd party memory controllers and video codecs - nice to have, but not exactly cutting edge work, and definitely inferior to Nvidia's in-house memory controllers.

Almost all the RTG-specific development has been essentially worthless: HBCC, DSBR, primitive shaders... none of it added up to a hill of beans in the real world; in terms of performance, Vega 10 was essentially just a die-shrunk, overclocked Fiji.
 
Reactions: ozzy702

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
If AMD has massively improved their memory compression, then yes, a 128-bit bus plus GDDR6 clocked higher than the 1660 Ti's 12 Gbps could do the trick, but with the current AMD architecture they need WAY more bandwidth to equal NVIDIA.
It doesn't. What AMD needs to do is increase the INTERNAL bandwidth of the CUs. What if they have done that AND, at the same time, made those CUs execute more work simultaneously?
Why can't you? Vega 20 is 330 mm² with a 300W TDP, so a chip a third of that size or a bit bigger can easily reach the same heat density. It could probably even do 120W or so.
This, however, depends on just how aggressive AMD wants to be with clocks. Vega really does have better perf/W than Polaris; it's just that AMD wanted to exploit every MHz they could get out of Vega 64 because they wanted it to rival the GTX 1080.
Simple Vega 56 scaling in games from my card: 1560 MHz = 180W, 1640 MHz = 250W, 1700 MHz = 340W. So for another 140 MHz you almost have to DOUBLE the power.
So, CAN a 120 mm² chip have a 100W or even 120W TDP? Yes, it can. Will it? That depends on how badly AMD needs the clocks to hit a certain performance number.
TBH, I really think AMD should stop with this clock maximizing; it hurts them more than it helps. They become the butt of the "furnace" jokes and have to invest more in power delivery. They should instead step down a notch in performance and release cards at the best point on their perf/W curve. That's what Nvidia does; their cards could also easily consume 300-400W if Nvidia just LET users manage the TDP number the way AMD does.
But I have little hope that AMD will manage their TDP properly: all three cards they released recently (Vega 20, before that Polaris 30, and before that Vega 64) were maximized beyond the point of reason.
Why do you guys use previous-gen AMD GPUs as your point of reference on architecture, when Navi is much more than just a shrink with GDDR6 memory added?

Navi is the basis of the next-gen consoles, the PS5 and Xbox Anaconda, so it has to be efficient-ish. The consoles will have an 8-core CPU clocked relatively low and a GPU with around 3584 GCN cores. It has to be more efficient than Vega 20 was, otherwise the console will need a 350-400W PSU. Have we ever seen a console requiring that much power?
 

RaV666

Member
Jan 26, 2004
76
34
91
Why do you guys use previous-gen AMD GPUs as your point of reference on architecture, when Navi is much more than just a shrink with GDDR6 memory added?

Navi is the basis of the next-gen consoles, the PS5 and Xbox Anaconda, so it has to be efficient-ish. The consoles will have an 8-core CPU clocked relatively low and a GPU with around 3584 GCN cores. It has to be more efficient than Vega 20 was, otherwise the console will need a 350-400W PSU. Have we ever seen a console requiring that much power?
Uhm, I'm not saying GCN isn't efficient; it is, just not at the clocks AMD pushes for its desktop cards. The PS4/Pro and Xbox One/S/X all use the same GCN, and they don't require 350-400W; they are simply clocked at the most efficient point on the voltage curve. The Xbox One X's Polaris has more shaders and more bandwidth (384-bit) than desktop Polaris (it's also made at TSMC), but it runs lower clocks. I have a Vega, and it's very efficient... in the 1200-1500 MHz range. I also don't think that's only a shrunk Vega: it has a larger L2, more instructions, VRR, and probably more. BUT it's still GCN, and for me that's good. I like GCN, but Nvidia and lazy programmers made sure it isn't utilized properly in PC games. I think that is starting to change, though, and one reason is Turing: it's more compute-oriented, it has FP16, and so on. That's why Nvidia introduced RTX and is pushing it so heavily - to distinguish themselves in future games.

So as I said before, it depends on AMD alone whether these cards will be power hungry or not. I also expect a Navi 10 with, say, 56 CUs to be faster than Vega 64... a bit, but I wouldn't count on it being crazy efficient at that performance level. AMD tends to overclock from the get-go and push more volts than needed so they can sell every die.

An 8-core Zen 2 at 7nm, at the optimal point on the voltage curve, should be 2x more efficient than an 8-core Zen 1; it could go as low as 30-40W, leaving 150W for the GPU plus memory. A 200W console looks alright.
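
Putting rough numbers on that budget (all figures are the ballpark estimates from this post, plus my own guess for the rest of the system):

# Rough console power budget from the estimates above.
cpu_w = 35       # 8-core Zen 2 at 7nm near its sweet spot (the 30-40W guess)
gpu_mem_w = 150  # GPU plus memory
other_w = 15     # fans, storage, I/O - my own rough allowance

print(f"Estimated console load: ~{cpu_w + gpu_mem_w + other_w} W")  # ~200 W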
 
Reactions: ozzy702

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
It doesn't. What AMD needs to do is increase the INTERNAL bandwidth of the CUs. What if they have done that AND, at the same time, made those CUs execute more work simultaneously?

Why do you guys use previous-gen AMD GPUs as your point of reference on architecture, when Navi is much more than just a shrink with GDDR6 memory added?

Navi is the basis of the next-gen consoles, the PS5 and Xbox Anaconda, so it has to be efficient-ish. The consoles will have an 8-core CPU clocked relatively low and a GPU with around 3584 GCN cores. It has to be more efficient than Vega 20 was, otherwise the console will need a 350-400W PSU. Have we ever seen a console requiring that much power?


History/AMD's track record. Sure, major leaps in efficiency could happen with Navi, and I hope they do. But AMD's track record with GCN has been one of very small improvements: a ton of hype and high expectations (that on paper "should" pencil out) before release, and then, once the reviews are in, we inevitably find out it's "meet the new GCN, same as the old GCN."

Next-gen consoles could easily have wide/fat GPUs that are clocked low for efficiency. Between Ryzen 2's likely stellar power consumption and low-clocked Navi, I wouldn't be surprised to see 300W or less in a very powerful package. That doesn't mean desktop Navi will be rolled out with the same mindset. With the exception of the Nano, when was the last time AMD released a GPU that wasn't pushed to the very edge?
 
Reactions: prtskg

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
It seems that people are confusing Navi the chip/architecture with Navi the graphics cards.

Polaris 30 the chip is very efficient, close to Turing's TU117 in perf/watt, but Polaris 30 as a shipped card (the RX 590) is not.

So, when we talk about Navi, make the distinction between Navi the chip/architecture and Navi the graphics cards.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
It seems that people are confusing Navi the chip/architecture with Navi the graphics cards.

Polaris 30 the chip is very efficient, close to Turing's TU117 in perf/watt, but Polaris 30 as a shipped card (the RX 590) is not.

So, when we talk about Navi, make the distinction between Navi the chip/architecture and Navi the graphics cards.


I see this claim thrown around quite a bit and, honestly, it feels like a bit of excuse-making for AMD, and intellectually dishonest. I'm open to the possibility that Polaris tuned for performance/watt is on par with Turing tuned for performance (from what I remember it's close), but I'd love to see an apples-to-apples comparison between the two when both target performance/watt. My guess is that Turing would have significantly better performance/watt than Polaris when both target that metric.

I've run plenty of Polaris, Pascal, and Turing graphics cards for mining using the most efficient settings/ROMs I can find, and NVIDIA is consistently more efficient across the majority of algos. I will say that Polaris is fairly power-efficient at reasonable clocks (~1,000 MHz) compared to factory clocks, so here we agree: Polaris gains significantly more efficiency from clock-speed reduction than Turing or Pascal does.

There are of course examples where HBM GPUs are phenomenal in memory-bound algos. In general I've found Turing to be extremely power-efficient when targeting ~1550-1650 MHz core clocks, significantly more so than Pascal or Polaris. I'd assume the same goes for gaming efficiency, but I'd love to see a review. If I had more time I'd do it myself (I have plenty of parts to build identical machines), but I'm buried for the foreseeable future.
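
If anyone does get around to that comparison, the bookkeeping is trivial: tune each card to its own best-efficiency point, measure, and divide. A sketch, with placeholder numbers rather than real measurements:

# Sketch of an apples-to-apples perf/W comparison: each card measured at
# the settings that maximize ITS OWN efficiency, then ranked by perf/W.
# Every number below is a placeholder, not a real measurement.

cards = {
    "Polaris @ ~1000 MHz": (24.0, 95.0),  # (perf score, wall watts)
    "Pascal, tuned": (27.0, 105.0),
    "Turing @ ~1600 MHz": (30.0, 100.0),
}

ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (perf, watts) in ranked:
    print(f"{name}: {perf / watts:.3f} perf/W")

The important part is that each card gets its own tuning pass first; comparing a tuned Polaris against a stock Turing (or vice versa) is exactly the apples-to-oranges problem being complained about above.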
 
Last edited:

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
All architectures get more efficient as you reduce clocks and power, but the fact that GCN is basically absent from the laptop market - the one where efficiency really matters - tells you the truth about GCN's efficiency versus its competitors. Nvidia took a huge step forward with Maxwell in particular; GCN still hasn't done the same and won't with minor tweaks.

A major rework is required, and as others have said, the last huge change (from TeraScale to GCN) was done by a different US team that was let go. The budgets were cut, the work went to China, and as of yet they have failed to live up to that US team's standards. I await the next console releases, which I do expect to bring major improvements, but I doubt they will filter through to the desktop for a while yet; the console GPUs aren't even finished.

Still, I am quite happy to be proved wrong - we all agree that more competition is good - but I'm just looking at past performance as an indicator of the future, combined with the fact that the console dev work is ongoing and a die shrink is happening at the same time. That doesn't suggest we'll get big architectural changes right now.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,687
6,232
136
Going through the LLVM changes, the list of HW bugs for GFX10.1 is long:

GFX10_1_Bugs = [
FeatureVcmpxPermlaneHazard,
FeatureVMEMtoScalarWriteHazard,
FeatureSMEMtoVectorWriteHazard,
FeatureInstFwdPrefetchBug,
FeatureVcmpxExecWARHazard,
FeatureLdsBranchVmemWARHazard,
FeatureNSAtoVMEMBug,
FeatureFlatSegmentOffsetBug,
];
and
FeatureLdsMisalignedBug

https://github.com/llvm-mirror/llvm...939#diff-983f40a891aaf5604e5f0b955e4051d2R733

GFX10_2 will probably fix a bunch of them, but that is a lot of HW bugs needing SW workarounds/mitigations.
This GFX10_1 also has none of the new ML-specific instructions added for Vega 20, which would seem to make it a purely gaming chip.
It lacks ECC and the new Vega 20 dot-product instructions (for ML), lacks 1/2-rate DPFP, some multiply-add features, etc.

It has these new features, though, that are not present in GFX9_0_6 (Vega 20) (I might have missed some):
FeatureMovrel, FeatureNoSdstCMPX, FeatureVscnt, FeatureRegisterBanking, FeatureVOP3Literal, FeatureNSAEncoding, FeatureMIMG_R128, FeatureNoDataDepHazard,

FeatureMovrel - "Has v_movrel*_b32 instructions"
FeatureNoSdstCMPX - "V_CMPX does not write VCC/SGPR in addition to EXEC"
FeatureVscnt - "Has separate store vscnt counter"
FeatureRegisterBanking - "Has register banking"
FeatureVOP3Literal - "Can use one literal in VOP3"
FeatureNSAEncoding - "Support NSA encoding for image instructions"
FeatureMIMG_R128 - "Support 128-bit texture resources"
FeatureNoDataDepHazard - "Does not need SW waitstates"
There is one feature defined but not added to GFX10_1 (FeatureCuMode).
It would be great if someone could explain these new features. (Only some of them are self-explanatory to me, like the 8-bit dot functions for ML, for example.)
But it seems GFX10 has a lot more changes from GFX9 than GFX9 had from GFX8, or than any other major GCN uArch revision transition.


I guess the silicon needing a rework could be real... but apparently the bugs are deemed non-critical and can be worked around in SW.
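
For anyone curious what "worked around in SW" means here: the compiler simply refuses to emit the dangerous back-to-back sequence, usually by spacing the two instructions apart. A toy illustration of the idea (the mnemonics and the single-NOP rule are simplified stand-ins, not what the actual LLVM pass does):

# Toy model of a hazard-mitigation pass: when instruction B reads state
# that instruction A may not have finished writing, insert a no-op between
# them so the hardware bug is never exposed. The pair below is a simplified
# stand-in for something like FeatureVcmpxPermlaneHazard.

HAZARD_PAIRS = {("v_cmpx", "v_permlane")}  # (writer, follower) opcode pairs

def mitigate(instructions):
    out = []
    for inst in instructions:
        op = inst.split()[0]
        if out and (out[-1].split()[0], op) in HAZARD_PAIRS:
            out.append("s_nop 0")  # spacer so the follower sees settled state
        out.append(inst)
    return out

prog = ["v_cmpx v0, v1", "v_permlane v2, v3", "v_add v4, v5"]
print("\n".join(mitigate(prog)))

The cost is extra instructions (and thus some performance) on every affected sequence, which is why a long list of such bugs is worth fixing in a respin.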
 
Last edited:

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
All architectures get more efficient as you reduce clocks and power, but the fact that GCN is basically absent from the laptop market - the one where efficiency really matters - tells you the truth about GCN's efficiency versus its competitors. Nvidia took a huge step forward with Maxwell in particular; GCN still hasn't done the same and won't with minor tweaks.

A major rework is required, and as others have said, the last huge change (from TeraScale to GCN) was done by a different US team that was let go. The budgets were cut, the work went to China, and as of yet they have failed to live up to that US team's standards. I await the next console releases, which I do expect to bring major improvements, but I doubt they will filter through to the desktop for a while yet; the console GPUs aren't even finished.

Still, I am quite happy to be proved wrong - we all agree that more competition is good - but I'm just looking at past performance as an indicator of the future, combined with the fact that the console dev work is ongoing and a die shrink is happening at the same time. That doesn't suggest we'll get big architectural changes right now.

There are two main reasons AMD isn't huge in the laptop market. The first is that many OEMs shun AMD because of Intel's bullying. The second is that those who do use AMD use AMD APUs, so there is less of a drive for a discrete card, as those machines are almost always targeted at the low to mid range. Apple uses AMD exclusively for the MacBook Pros that have discrete graphics, and those are among the thinnest machines out there.

The GCN-based chips in the consoles also show that, when properly clocked, the chips are efficient. The issue isn't that GCN is inefficient; it's that AMD clocks the chips higher than they were designed to run at, so power consumption skyrockets.
 

tajoh111

Senior member
Mar 28, 2005
304
320
136
Are these RTG-focused development sites? If so, then yes, the evidence so far is that they do nothing.

What have we actually seen in the way of GCN improvements since Hawaii? Delta color compression, yes, but it's nowhere near as good as Nvidia's. Some cut-and-paste of better 3rd party memory controllers and video codecs - nice to have, but not exactly cutting edge work, and definitely inferior to Nvidia's in-house memory controllers.

Almost all the RTG-specific development has been essentially worthless: HBCC, DSBR, primitive shaders... none of it added up to a hill of beans in the real world; in terms of performance, Vega 10 was essentially just a die-shrunk, overclocked Fiji.

Markham seems to be more on the software side of engineering if you look at Glassdoor.

A huge portion of the hardware engineers for graphics were laid off years ago.

https://semiaccurate.com/2012/10/12/amds-layoffs-target-engineering/

" For sheer numbers, we have been hearing two versions, 10% and 30% of the company, with several sources giving closely related values. Sadly, they are both correct.

The minimum number is 10%, matching last year’s debacle that crippled the company. The 30% is on the engineering side, and Markham is the main target. AMD has gone on a disastrous cycle of cutting jobs and outsourcing, and it isn’t over yet. What so plainly didn’t work last time is going to be repeated in greater numbers this time. This is rank management incompetence."

The actual number was 15%, which still translates into 1,700 workers.

https://www.theverge.com/2012/10/18...llion-in-q3-will-layoff-15-percent-of-workers

What ended up happening is that GPU development shifted from Markham to Shanghai as a cost-saving measure, since engineers there are paid vastly less (a quarter or a fifth as much). See the picture and article below as evidence of this.




"Here is a look at some of the engineering team over in Shanghai, China that were responsible for the development of both Polaris and Vega. AMD still has many employees working on Radeon in the United States and Canada, but the bulk of the development for the upcoming chips as been done in China (hardware engineering) and India (software development)."

https://www.legitreviews.com/amd-ra...i-celebrates-china_183243#671l4HAKmzTIjeEs.99

AMD has not remotely recovered in terms of workforce. AMD had 16,800 employees in 2007; in 2018 that number was 10,100, with many of those jobs now in Shanghai instead of North America. It was a necessary move to cut costs and give AMD the possibility of Ryzen, so we can't fault them for that. But the quality of the chips since the Shanghai team took over has been inconsistent to mediocre.

There's a reason why pretty much all designs post-Hawaii have come from the Shanghai team and nothing has been credited to the Markham team: it is gone. The Markham team was deemed expendable, and this was reflected by Raja in his last year at AMD.

Raja expressed frustration with where resources were going because, according to him, AMD underestimated the long-run viability of discrete GPUs and, as a result, was cutting workforce and labor from the division. That didn't mean AMD wanted to get out of the GPU business; rather, AMD shifted GPU development overseas as a cost-saving measure. The notion that the original Markham team still exists doesn't make sense, as maintaining both a Shanghai team and a Markham team would have been redundant and would have raised costs at a time AMD could not afford it.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Markham seems to be more on the software side of engineering if you look at Glassdoor.

A huge portion of the hardware engineers for graphics were laid off years ago.

https://semiaccurate.com/2012/10/12/amds-layoffs-target-engineering/

" For sheer numbers, we have been hearing two versions, 10% and 30% of the company, with several sources giving closely related values. Sadly, they are both correct.

The minimum number is 10%, matching last year’s debacle that crippled the company. The 30% is on the engineering side, and Markham is the main target. AMD has gone on a disastrous cycle of cutting jobs and outsourcing, and it isn’t over yet. What so plainly didn’t work last time is going to be repeated in greater numbers this time. This is rank management incompetence."

The actual number was 15%, which still translates into 1,700 workers.

https://www.theverge.com/2012/10/18...llion-in-q3-will-layoff-15-percent-of-workers

What ended up happening is that GPU development shifted from Markham to Shanghai as a cost-saving measure, since engineers there are paid vastly less (a quarter or a fifth as much). See the picture and article below as evidence of this.



"Here is a look at some of the engineering team over in Shanghai, China that were responsible for the development of both Polaris and Vega. AMD still has many employees working on Radeon in the United States and Canada, but the bulk of the development for the upcoming chips as been done in China (hardware engineering) and India (software development)."

https://www.legitreviews.com/amd-ra...i-celebrates-china_183243#671l4HAKmzTIjeEs.99

AMD has not remotely recovered in terms of workforce. AMD had 16,800 employees in 2007; in 2018 that number was 10,100, with many of those jobs now in Shanghai instead of North America. It was a necessary move to cut costs and give AMD the possibility of Ryzen, so we can't fault them for that. But the quality of the chips since the Shanghai team took over has been inconsistent to mediocre.

There's a reason why pretty much all designs post-Hawaii have come from the Shanghai team and nothing has been credited to the Markham team: it is gone. The Markham team was deemed expendable, and this was reflected by Raja in his last year at AMD.

Raja expressed frustration with where resources were going because, according to him, AMD underestimated the long-run viability of discrete GPUs and, as a result, was cutting workforce and labor from the division. That didn't mean AMD wanted to get out of the GPU business; rather, AMD shifted GPU development overseas as a cost-saving measure. The notion that the original Markham team still exists doesn't make sense, as maintaining both a Shanghai team and a Markham team would have been redundant and would have raised costs at a time AMD could not afford it.
Using a purely dollar-based argument for the quality of work an employee can produce is very misleading. "I cost more, so I'm better" has been shown false many times, as we've seen so 'epically' with AMD and their CPU division.

Individuals accept lower salaries all the time, for many reasons. For example, given China's longstanding one-child policy, you would expect that sole child to want to be close to aging parents and settle for a lower salary than they could get in the West. Other countries place a different social value on family than the accepted norm in the USA.

AMD still had a fab workforce circa 2007, so you can't compare the two headcounts directly.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Yeah, when AMD spun off GlobalFoundries, their 'workforce' dropped significantly, but those workers were not necessarily laid off; they just worked for a different company from then on.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Using a purely dollar-based argument for the quality of work an employee can produce is very misleading. "I cost more, so I'm better" has been shown false many times, as we've seen so 'epically' with AMD and their CPU division.
The loss in effective manpower and experience during the transition must have been enormous. I wouldn't be surprised if that alone set AMD back a year in driver development. Large, complex software stacks can be a bitch to learn, let alone improve - and that's assuming excellent documentation, which isn't always a given.
 

tajoh111

Senior member
Mar 28, 2005
304
320
136
Using a purely dollar-based argument for the quality of work an employee can produce is very misleading. "I cost more, so I'm better" has been shown false many times, as we've seen so 'epically' with AMD and their CPU division.

Individuals accept lower salaries all the time, for many reasons. For example, given China's longstanding one-child policy, you would expect that sole child to want to be close to aging parents and settle for a lower salary than they could get in the West. Other countries place a different social value on family than the accepted norm in the USA.

AMD still had a fab workforce circa 2007, so you can't compare the two headcounts directly.

Although labor cost cannot always be tied to the quality of the work done, there is definitely some correlation: a bigger budget typically results in better products, given the right talent and an absence of corruption. As for salary, remember it's a competitive job market. Companies that pay well attract more talent, which makes their selection process more stringent (they cannot hire everyone); better education, more experience, and network connections are what get you hired. If you're setting up a team with a low budget and lower pay in mind, the talent pool it attracts simply will not be as good.

A small, insufficient budget works against the success of a product and is never a positive. Beyond that, experience typically adds to product quality, and that is what the Markham team had over the Shanghai team.

The Markham team was excellent at developing graphics and built the foundation of GCN. They were also competitive and, at many points over the years, made products superior to Nvidia's. They were a very strong team.

The same cannot be said of the Shanghai team. Its track record has been highly suspect and shows great inexperience compared to the Markham team's. Not only has AMD fallen behind in performance per watt, I don't think a single product from the Shanghai team has met people's expectations, unlike the Markham team's. This should not be a surprise, because it takes time for talent to develop.

The talent for developing GPUs just doesn't exist there to the same extent as in North America. China, independently of AMD, has yet to develop a competitive graphics chip, whether for mobile or desktop. The talent pool has not developed yet.

Even after the GlobalFoundries spinoff, AMD shed another 3,000-4,000 employees.

https://www.eetimes.com/document.asp?doc_id=1324307#

AMD lays off 7% of staff in 2014.

https://venturebeat.com/2015/10/01/amd-to-lay-off-500-people-or-5-of-workforce-in-restructuring/

AMD lays off 5% of staff in 2015.

https://www.mercurynews.com/2011/11/03/amd-to-lay-off-1400-workers-worldwide-about-80-in-sunnyvale/

AMD lays off 12% of staff in 2011.

This is on top of the 15% in 2012.

These years were well after the GlobalFoundries spinoff. AMD went from 13,000 employees to about 9,000 over this period. I said many times, and the prediction came true, that the success of Ryzen did not mean Vega (before its launch) was going to come out with the same quality. Ryzen was possible because cuts were made to the GPU division. The slow development, the release cadence, and the many rebrands are a reflection of the budget AMD put into graphics. When Vega came out and disappointed, it should not have surprised anyone pragmatic. This forum in particular was hit hard by the hype-train crash because of its generally pro-AMD attitude and an inability to notice the warning signs that Vega was going to disappoint (even the super-pro-AMD AdoredTV noticed them).
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
History/AMD's track record. Sure, major leaps in efficiency could happen with Navi, and I hope they do. But AMD's track record with GCN has been one of very small improvements: a ton of hype and high expectations (that on paper "should" pencil out) before release, and then, once the reviews are in, we inevitably find out it's "meet the new GCN, same as the old GCN."
If history and AMD's track record were anything to go by, they would NEVER have made the Zen, Zen+, and Zen 2 architectures.

And yes. This is THE SAME company.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Using a purely dollar-based argument for the quality of work an employee can produce is very misleading. "I cost more, so I'm better" has been shown false many times, as we've seen so 'epically' with AMD and their CPU division.

Individuals accept lower salaries all the time, for many reasons. For example, given China's longstanding one-child policy, you would expect that sole child to want to be close to aging parents and settle for a lower salary than they could get in the West. Other countries place a different social value on family than the accepted norm in the USA.

AMD still had a fab workforce circa 2007, so you can't compare the two headcounts directly.
As someone who works in software engineering and often sees work outsourced, there are two ways you can outsource to India/China/Israel:
1) You do it to save money. You pay lower wages, and you get average people who only work for you to get experience and then move on.
2) You do it to get better people. You pay close to what you paid in the US/Europe, but that gets you the cream of the local talent.
2 is smart, but 1 is what most companies actually do, because the bean counters want to save money, not make better products (which in the end would make you more money than you could ever save).

I'll let you decide why you think AMD moved dev to China.
 
Last edited:
Reactions: prtskg

JasonLD

Senior member
Aug 22, 2017
486
447
136
If history and AMD's track record were anything to go by, they would NEVER have made the Zen, Zen+, and Zen 2 architectures.

And yes. This is THE SAME company.

Zen is a total departure from the Bulldozer-based architecture, while Navi, based on all the rumors and leaks so far, is still GCN. I don't see much reason to be hyped about Navi: it looks like an incremental update to Polaris/Vega, and most of the improvement will come from the move to 7nm, not from architectural changes.
 

prtskg

Senior member
Oct 26, 2015
261
94
101
Zen is a total departure from the Bulldozer-based architecture, while Navi, based on all the rumors and leaks so far, is still GCN. I don't see much reason to be hyped about Navi: it looks like an incremental update to Polaris/Vega, and most of the improvement will come from the move to 7nm, not from architectural changes.
I'm just happy that it'll be the last GCN architecture. And since it also goes into the consoles, it'll be more gaming-oriented, unlike Vega.
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
Zen is a total departure from the Bulldozer-based architecture, while Navi, based on all the rumors and leaks so far, is still GCN. I don't see much reason to be hyped about Navi: it looks like an incremental update to Polaris/Vega, and most of the improvement will come from the move to 7nm, not from architectural changes.
You do realize that GCN is an ISA, and its implementation can differ depending on the design of the GPU?

There are plenty of patent leaks and Linux driver changes suggesting that is really the case with Navi.

It all now depends on the GPU's front end and its design.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Nvidia naming each and every tweak of their architecture as if it were something completely new and radically different has clouded a lot of people's view of GPUs.
 

DrMrLordX

Lifer
Apr 27, 2000
21,802
11,157
136
Well, one thing that is abundantly clear is that the Shanghai development team has produced deeply unimpressive results so far.

Did they develop Vega 20? I am somewhat impressed with the Radeon VII, and the Radeon Instinct MI50 and MI60 also seem to be doing pretty well.
 