I thought the TMU count was going up from 64 to 128?
No way, TMUs are too large to double.
Yes, they doubled the TMUs in GF104, but they halved the Raster and Polymorph engines.
Please don't jump around. We were talking about TMUs, weren't we?
I think what he is saying is that it's hard to keep everything that's in the GF100 and then double the TMUs.
I know nothing about how large they are, that's just how I interpreted it.
How many TMUs does the GF100 core have when fully functional?
I mean they doubled the TMUs, but at the same time they halved the Rasters and Polymorphs to save space. If we double the TMUs in the GF100, the die size will skyrocket.
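To make the GF104-style trade concrete, here is a quick back-of-envelope sketch. Every mm² figure in it is an invented placeholder, not a real measurement of either chip; the point is only to show the shape of the trade-off, not the actual numbers.

```python
# Sketch of the GF104-style trade described above: double the TMUs but
# halve the raster and polymorph engines. Every mm^2 value is an
# invented placeholder, NOT a real GF100/GF104 measurement.

def die_area(units):
    """Total area: units maps block name -> (count, mm2 per block)."""
    return sum(count * mm2 for count, mm2 in units.values())

# Hypothetical baseline chip (placeholder numbers).
baseline = {
    "tmu":       (64, 1.0),   # guessed TMU size
    "raster":    (4,  5.0),   # guessed raster engine size
    "polymorph": (16, 2.0),   # guessed polymorph engine size
    "other":     (1,  300.0), # shaders, memory controllers, etc., lumped
}

# Same chip with TMUs doubled and rasters/polymorphs halved.
traded = dict(baseline,
              tmu=(128, 1.0),      # doubled
              raster=(2, 5.0),     # halved
              polymorph=(8, 2.0))  # halved

print(f"baseline: {die_area(baseline):.0f} mm^2")  # 416 with these guesses
print(f"traded:   {die_area(traded):.0f} mm^2")    # 454 with these guesses
```

With these made-up numbers, halving the rasters and polymorphs claws back only part of the extra TMU area; plug in different guesses and the balance shifts, which is exactly the uncertainty being argued in this thread.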
The only way you could be arguing these points is if you knew EXACTLY the die space each TMU, Raster, and Polymorph takes up. Is this right or wrong?
If you don't know this information, this argument cannot exist. If you had this information, i.e. the exact space (mm2) each component takes up, then we have a ball game. Until then, though, this argument is dead in its tracks. You can take guesses all you want, but that won't make it so.
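The "no data, no argument" point can be made concrete with a small sketch: the projected die-size increase from doubling the TMUs depends entirely on the per-TMU area you assume. All mm² figures below are arbitrary placeholders, not real GF100 data.

```python
# How much does doubling the TMUs grow the die? It depends entirely on
# the per-TMU area you guess. All mm^2 figures here are arbitrary
# placeholders, NOT real GF100 numbers.

def growth_pct(tmu_mm2, other_mm2=352.0, tmus=64):
    """Percent die growth from doubling the TMU count, given a guessed
    per-TMU area and a guessed area for the rest of the chip."""
    base = tmus * tmu_mm2 + other_mm2
    doubled = 2 * tmus * tmu_mm2 + other_mm2
    return 100.0 * (doubled - base) / base

for guess in (0.5, 1.0, 2.0):  # three different per-TMU area guesses
    print(f"TMU = {guess} mm^2 -> +{growth_pct(guess):.1f}% die area")
```

Three plausible guesses give anywhere from roughly 8% to 27% growth, so without the real per-unit figures, any claim about how much the die would grow is just a guess dressed up as a conclusion.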
How can that be? Are you harping on him for using common sense? Am I missing something? If you take Fermi, which is already a huge die, and double the number of TMUs, you are adding even more to an already huge die. It stands to reason that since the Fermi die is already bordering on supermassive, they wouldn't do this without reductions in other aspects of the architecture, like they did before. Why do you have such a problem with him stating this?
Your point is perfectly clear. It's also the extreme end of the argument. To be sure, we can't know anything about the die size unless we have the actual dimensions of each architectural feature. However, as I said before, it stands to reason that since these chips are all built on the same process node, and are in fact in the same family, the relative size of each unit will remain the same. This is not an amazing leap of logic. And it stands to reason, as I said before, that if you doubled the TMUs without restricting anything else, you would end up with a large increase in die size.

Yes, I have "a" problem with him, and you, stating this. I don't have "such" a problem that it makes me lose my appetite or go on a drinking binge. I hope you can appreciate the difference and not get lost in the text.
You don't know the size a single TMU takes up. You can't know the increase in die space from doubling something if you don't know its size before it was doubled. You can't know how much die space a Polymorph engine takes up once it's halved if you don't know the die space it took up before.
My point, as if it wasn't clear enough, is that any argument about the die space of particular components, e.g. Polymorph engines, raster units, TMUs, is totally and completely moot when you don't have the data. The data being the actual size (die space in mm2) each uses. Not even close. Therefore, no valid argument can be made about how much doubling and/or reducing any one component, let alone a combination of components, would change the die.
I seriously do not think I can explain this any better.
If you have "duh", and you double it, how much "duh" do you have?
Well, what is "duh"? How many transistors does it have? How many mm2 does each "duh" take up? If you don't know the answers to the last three questions, then you can't answer the first question.
Uclalabrate, Atenra: you guys have done pretty darn well here. Very good logic, and you have presented your arguments extremely well. This is a pretty good topic.
The fact that you're harping on semantics and jumping up his ass over an extremely reasonable and logically thought-out speculation is at the least amusing to me, and bordering on nitpicking. Sure, we don't ABSOLUTELY know, but all signs point to yes. Maybe they have some tweaks in there that will let them eat their cake and have it too, like AMD did with Barts, but who knows? If you're trying to eliminate speculation on this board, you're fighting a losing battle.