No it doesn't. A 50% performance-per-watt improvement would lift performance up to 4090 levels at 4K at 350 W.
Where are you getting 64% from?
100%/300 W × 1.5 × 350 W = 175% of the 6900 XT's speed at 4K. According to TechPowerUp's 4K charts, the 4090 is 75% faster than the 6900 XT.
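The scaling estimate above can be sketched as a few lines of arithmetic. All inputs here are the post's assumptions (6900 XT = 100% at 300 W, +50% perf/W, 350 W next-gen board power), not measured data:

```python
# Hedged sketch of the perf/watt scaling estimate.
baseline_perf = 100.0      # 6900 XT relative performance (%)
baseline_power = 300.0     # 6900 XT board power (W), assumed
perf_per_watt_gain = 1.5   # claimed +50% performance per watt
new_power = 350.0          # assumed next-gen board power (W)

perf_per_watt = baseline_perf / baseline_power  # %/W for the 6900 XT
new_perf = perf_per_watt * perf_per_watt_gain * new_power

print(f"{new_perf:.0f}% of 6900 XT performance")  # → 175%
```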
I wondered about that. Total board power should theoretically mean the maximum power the board would draw under the most strenuous load, so 100% GPU load plus USB-C output?
Have you seen confirmation of this somewhere?
Nvidia's 20 series had USB-C, right? So were their TBP numbers inclusive of...
If it seems unrealistic to get a 56-CU die from a 64-CU die due to yields, then a 64-CU die from an 80-CU die should be extremely unlikely. Where does that leave the 6800?
Something like this makes more sense to me than the rumoured 72-CU 6800 XT:
6700xt 40cu
6800xt 64cu
6900xt 80cu
This lends even more credence to the possibility of the 6800xt being a 64cu card.
If we take the TechPowerUp review of the 3080 Trio, it's 80% faster than the 5700 XT at 1440p:
3080 = 100, 5700 XT = 55.
55 × 1.10 (IPC increase) × 1.80 (more CUs) × 1.21 (higher clocks) ≈ 131. Theoretically 30% faster than a 3080 at...
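The projection above, written out as arithmetic. The IPC, CU-count, and clock figures are the rumoured numbers from the post (not confirmed specs), and CU scaling is assumed to be perfectly linear:

```python
# Speculative projection: 80-CU RDNA2 part vs a 3080, 1440p.
base_5700xt = 55.0    # 5700 XT relative score (3080 = 100) at 1440p
ipc_gain = 1.10       # +10% IPC, rumoured
cu_scaling = 1.80     # 80 CUs vs 40 CUs, assuming linear scaling
clock_gain = 1.21     # +21% clocks, rumoured

projected = base_5700xt * ipc_gain * cu_scaling * clock_gain
print(f"projected score: {projected:.1f}")  # → 131.8, i.e. ~30% above a 3080
```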
Well, not if you're a NVIDIA fan
But anyway, the die sizes definitely didn't add up to the rumoured CU counts. That was assumed to be due to the Infinity Cache. Still, this seems like next-level sandbagging AMD has engaged in.
But see, this is the part that currently seems irreconcilable with the 6800 XT being the Navi 21 XT, a 72-CU part.
With TSMC's reported defect density of 0.09 defects/cm² for its 7nm process, the ratio of good dies to defective dies is around 2:1...
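A quick sanity check of that ratio using the simple Poisson yield model, Y = exp(−D·A). The ~520 mm² Navi 21 die area used here is an assumption (it was only a rumour at the time), so treat this as a sketch rather than a real yield figure:

```python
import math

# Poisson yield model: fraction of defect-free dies = exp(-D * A).
defect_density = 0.09        # defects per cm^2 (TSMC 7nm figure quoted above)
die_area_cm2 = 520 / 100.0   # assumed ~520 mm^2 Navi 21 die -> 5.2 cm^2

yield_fraction = math.exp(-defect_density * die_area_cm2)
good_to_defective = yield_fraction / (1 - yield_fraction)

# roughly 63% yield, ~1.7:1 good:defective — close to the ~2:1 quoted above
print(f"yield ~{yield_fraction:.0%}, good:defective ~{good_to_defective:.1f}:1")
```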
While he said that, I don't think it's likely. That leak would have to come from AMD directly, and considering how tight-lipped they've been, I doubt they'd leak numbers four days before the reveal. The numbers are too well rounded too; they could just be estimates.
He says the info was provided by AIBs, who were given it by AMD. Could be true, but also might not be.
It still doesn't make sense why the 6800 XT would be a cut-down die, why AIB partners would only get a cut-down die when most dies produced would be full 80-CU dies, why there's no 6900 non-XT, and why there...
So there's something I'm having doubts about: that the XTX die, the AMD-exclusive one, is either the only full die (80 CUs) or a higher-binned full die.
If it's a higher-binned die, I imagine the performance delta over the remaining full dies won't be that large. The 5700 XT Lisa Su edition was...
Perhaps in some stages? I'm not savvy on the process, except that they definitely use different machines at some point.
So if RDNA2 was intended to be fabbed on EUV, then I believe they would not be able to reduce volume to free up capacity for Zen 3.
Perhaps all 7nm lines are counted together in total 7nm wafer capacity, but 7nm DUV and 7nm EUV use different machines. If AMD had planned and allocated RDNA2 to 7nm EUV, then those wafers don't compete with Zen 3.
All this die and node comparison between Zen 3 and RDNA2 is missing one possibility: if RDNA2 is on 7nm+ EUV, then it won't be competing with Zen 3 for wafers.
I would guess it's a mix of 1 and 3.
If a Radeon card lands faster than the 3080 but with 16GB of RAM and for the same approximate cost, 699-799, then a 3080 20GB at 899 or more becomes much less attractive. Had Radeon been slower than the 3080, they may still have gone ahead.
As it is now, if...
I struggle to know where to even begin....
A hobbled Titan is the xx80 Ti... The Titan is all about the pro driver features; that's why they charge so much for them. It's not the full die or the extra RAM, it's the drivers. Always has been. That's why a full GA102 Quadro costs 5 grand, even with...
Actually, it's not, because it doesn't get the pro driver features that actual Titans used to.
Now, a prosumer who would have bought a Titan for professional work and needed the drivers is forced to buy the Quadro.
There is no gap between 3080 and 3090 performance to slot in a 3080 Ti...
Where have they said anything about a 3080 Ti?
The truth of the matter is the 3090 is the 3080 Ti in everything but name. By naming it the 3090, no one complained about the ridiculous price hike.
Look, you really need to stop this, else pretty soon the last bastion of defence NVIDIA fanboys have to put their minds at ease that Huang is the GOAT will be DLSS.
You can't be serious? Derailed? They are toe to toe with the 3080, probably with 16GB of RAM. All they need now is to price it the same and have good stock, and it will be an absolute home run, whatever card it is.
No they don't. Today they showed the 5900X vs the 3900X and the 10900K. The 5950X is the top chip.
In fact, I would say AMD are known for this type of sandbagging during events, same as Nvidia. Remember the 3080 launch? "The 3080 is the flagship... oh wait, here is the 3090." The Zen 2 launch was the same.
It...
According to OverClock3D, the 3080 gets 75 fps in Gears 5, so really this shows we don't know anything yet.
Also, Gears 5, BL3, and COD:MW were not shown in the AMD slides of the 5900X vs the 10900K. For all we know, the 5900X is slower in those titles. More reason to
not get carried away with what we...
Got to hand it to the guy, he knows how to market. This isn't an NVIDIA problem, it's a consumer problem!
The fact that the aorus cards launched with 36 cards available says it's a supply problem
I think they could get more than 80 CUs into 500mm². 172mm² for 56 CUs, ×2 = 344, + 100mm² for a 512-bit GDDR bus = 444. Of course this leaves out the controllers and probably lots of other things, but that would have been 112 CUs and 16 GDDR buses. Just a fun speculative thought.
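That area budget, written out (every input is the post's rough estimate, not a die-shot measurement; note 2 × 56 = 112 CUs):

```python
# Speculative die-area budget for a hypothetical double-size part.
cu_block_area = 172.0   # mm^2 for a 56-CU block, rough estimate from the post
gddr_bus_area = 100.0   # mm^2 assumed for a 512-bit GDDR bus

total_area = cu_block_area * 2 + gddr_bus_area  # leaves out controllers etc.
total_cus = 56 * 2

print(total_area, total_cus)  # → 444.0 112
```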
Yeah I was wondering this myself. Maybe there is big Navi, and then bigger Navi
Because remember, the MI100 has 128 CUs with 120 active.
Also, 1 CU is smaller than 3.2. I calculated through pixel measurement that 1 DCU was ~3.6mm², × 28 ≈ 100mm² for the CUs only. Which seems accurate, as there is...