Falloutboy
Nothing is stopping NVIDIA from doing a die shrink in Q1 next year and upping the core speed another 25-30%; that would probably be enough to stay competitive.
Originally posted by: JBT
512-bit memory controller? Not a chance in hell for under 2 grand for that sucker.
Originally posted by: Gamingphreek
First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.
Originally posted by: Gamingphreek
Umm, unfortunately that is not the way it works ...
Originally posted by: Ged
My Official Ideas/Thoughts/Theories:
NVIDIA is going to release a 90nm G70-based GPU to compete with R520. The reason we don't have an Ultra right now is that NVIDIA is finishing up its work to port the design to 90 nm and making sure that it will be on shelves when ATI paper launches the R520.
Assuming NVIDIA moves to 90 nm for the Ultra and keeps the die area the same, we're going to see a large increase in Vertex and Pixel Shaders.
130 nm to 110 nm was an increase from ~223 million transistors to ~304 million transistors.
110 nm to 90 nm could end up giving them another 100 million transistors to play with in the same area (~400 million transistors).
304 - 223 = 81 million transistors = 2 more Vertex units + 8 more Pixel Pipes (2 more Quads) + Optimizations across the board
100 million more transistors could easily equate to 2 more Vertex units, 8 more Pixel Pipes and another 256-bit memory controller plus extra transistors to make all necessary units aware of the extra memory.
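For what it's worth, here is that napkin math as a quick Python sketch. It assumes transistor density scales with the square of the feature-size ratio at constant die area, which is an idealization (real shrinks rarely scale perfectly); the transistor counts are the ones quoted above.

```python
# Idealized shrink math: density scales with (old_node / new_node)^2
# at constant die area. Real processes fall short of this.

def shrink_estimate(transistors, old_nm, new_nm):
    """Transistor budget at the same die area after an ideal shrink."""
    return transistors * (old_nm / new_nm) ** 2

nv40 = 223e6  # ~223M transistors at 130 nm (figure from the post)
g70 = 304e6   # ~304M transistors at 110 nm (figure from the post)

print(f"130 -> 110 nm predicts ~{shrink_estimate(nv40, 130, 110) / 1e6:.0f}M")  # ~311M vs ~304M actual
print(f"110 -> 90 nm predicts ~{shrink_estimate(g70, 110, 90) / 1e6:.0f}M")     # ~454M; the ~400M guess is conservative
```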
NVIDIA GeForce 7800 Ultra
+450 MHz clock
+90 nm
+32 Pixel Pipes
+10 Vertex Pipes (maybe 12 vertex pipes)
+512-bit GDDR3 Memory Controller
+512MB 700MHz (1.4 GHz) GDDR3 memory (16x256Mbit chips, i.e. 2x(8x256Mbit))
+89.6 GB/s GPU to Memory Bandwidth
Well, I can dream at least.
NVIDIA might just port G70 to 90 nm, up the clock and utilize faster GDDR3 to compete with R520.
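The 89.6 GB/s figure above is straightforward arithmetic; here's a small Python sketch of the calculation. The 7800 GTX comparison line assumes its stock 600 MHz GDDR3.

```python
# Checking the 89.6 GB/s figure above. GDDR3 is double data rate, so
# a 700 MHz clock moves data at an effective 1.4 GHz.

def bandwidth_gb_s(bus_width_bits, effective_clock_ghz):
    """Peak bandwidth: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * effective_clock_ghz

print(bandwidth_gb_s(512, 1.4))  # 89.6 -- the dream spec above
print(bandwidth_gb_s(256, 1.2))  # 38.4 -- the shipping 7800 GTX (600 MHz GDDR3)
```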
Originally posted by: linkgoron
I think ATI should release a card with 25 pixel pipelines so they can say they have more pipelines than NVIDIA's cards...
Originally posted by: humey
We'll need to wait and see; the Ultra has been around since the GeForce 2.
Originally posted by: Gamingphreek
I don't like the fact that it doesn't have an H.264 decoder. The workload this decoding places on a CPU is quite large, and GPU assistance is very helpful in relieving it, so I would say it's important. I was really hoping we might see something from NVIDIA, but it's too bad.
They do have H.264 acceleration. However, since IIRC the spec isn't finalized or released yet, it will come in a later driver revision. It's in the AT article.
130 nm to 110 nm was an increase from ~223 million transistors to ~304 million transistors.
110 nm to 90 nm could end up giving them another 100 million transistors to play with in the same area (~400 million transistors).
Umm, unfortunately that is not the way it works ...
NVIDIA GeForce 7800 Ultra
+450 MHz clock
+90 nm
+32 Pixel Pipes
+10 Vertex Pipes (maybe 12 vertex pipes)
+512-bit GDDR3 Memory Controller
+512MB 700MHz (1.4 GHz) GDDR3 memory (16x256Mbit chips, i.e. 2x(8x256Mbit))
+89.6 GB/s GPU to Memory Bandwidth
First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.
90nm: There is a decent shot at this
32 Pixel Pipelines: While it could happen, it would not be a mere refresh. It is too hard to simply slap on another 8 pipes.
10-12 Vertex Processors: MAYBE 10, but I HIGHLY doubt 12.
700MHz GDDR3: Eh... maybe looking a bit high, but it is certainly within reach; however, yields probably won't be too good with memory at that speed.
GPU Bandwidth 89.6GB/s: Not, under any circumstances, going to happen. Just put it out of your mind now. Mainly because you won't see a 512-bit memory architecture.
Set your sights a lot lower. I wouldn't be surprised to see a 90nm part (not necessarily the Ultra) sometime in the future. I also wouldn't be surprised to see 32 pipelines and a slightly higher clock speed. After that, you are dreaming.
-Kevin
Originally posted by: Ged
Originally posted by: JBT
512-bit memory controller? Not a chance in hell for under 2 grand for that sucker.
Originally posted by: Gamingphreek
First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.
I submit that if the "Ultra" doesn't have a 512-bit memory bus, it is not because it's impossible; it's just not necessary or cost-effective yet. It looks like we can agree on that much.
A 512-bit memory bus would also let NVIDIA use much slower memory chips and achieve higher throughput at the same time. Or a 384-bit bus, for that matter. The name of the game in computer graphics is parallelism.
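To put numbers on that trade-off, here is a hypothetical sketch; the clock speeds are invented purely for illustration, not rumored specs. A wider bus reaches the same peak bandwidth from much slower (and cheaper) chips.

```python
# Invented effective clocks chosen so all three configs tie, to show
# how bus width trades off against per-chip speed.

def bandwidth_gb_s(bus_bits, eff_clock_ghz):
    return (bus_bits / 8) * eff_clock_ghz

for bus_bits, eff_clock in [(256, 1.2), (384, 0.8), (512, 0.6)]:
    print(f"{bus_bits}-bit @ {eff_clock} GHz effective: "
          f"{bandwidth_gb_s(bus_bits, eff_clock):.1f} GB/s")
# 256-bit @ 1.2 GHz effective: 38.4 GB/s
# 384-bit @ 0.8 GHz effective: 38.4 GB/s
# 512-bit @ 0.6 GHz effective: 38.4 GB/s
```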
Originally posted by: Gamingphreek
Umm, unfortunately that is not the way it works ...
How was my approximation off? If that's not the way it works, feel free to explain it.
AFAIK there is no such thing as a 384-bit memory bus. IIRC bus widths increase by powers of 2 (i.e.: 4, 8, 16, 32, 64, 128, 256, etc.).
It is because it is impossible. Do you know how many transistors that would take!? Additionally, I don't think you really comprehend the scope of this. You don't simply slap on another 256-bit bus. The process is MUCH, MUCH more involved; I can't go any deeper than that simply because I don't know everything else that is required.
Additionally, with a 512-bit architecture, aside from the near-impossible (impossibly fast, even) switch, think about how much this would cost! We are talking >$2000 a video card if you are talking about things that complex.
As for your approximation, you cannot assume that by simply shrinking the die you can effectively slap on some more transistors. First, you have to deal with power leakage. Second, yields. Third, cost. And so on. Adding another 100 million transistors is not an easy task.
It seems to me that you simply don't understand the scope of what is involved in creating a GPU.
It doesn't have to. 384 = 128 + 128 + 128.
It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.
I didn't say 512-bit architecture, I said 512-bit memory controller. The only added cost for a video card that used a 512-bit memory bus would be the extra memory chips in order to have enough memory in parallel. The fact that you could use slower memory chips that cost less and are more abundant might even make the card cost less than having to use cutting-edge memory chips that are harder to come by.
I fail to see where you get this $2,000 cost estimate from. Assuming they could fit two 256-bit memory controllers on their GPU as they moved to 90 nm without needing to increase the total surface area of the GPU, a 512-bit memory controller or 2x256-bit memory controllers wouldn't increase the cost of each GPU at all (unless you are comparing a single 256-bit GPU to the dual 256-bit GPU, obviously a single controller would use less die space).
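As a concrete sketch of that chip-count argument, assuming the usual 32-bit-wide GDDR3 parts of the era: a 512-bit bus simply needs more chips in parallel, and each chip can be half the density for the same total capacity.

```python
# Board-level chip math, assuming 32-bit-wide GDDR3 chips.

CHIP_IO_BITS = 32  # I/O width of a single GDDR3 chip

def chips_needed(bus_bits):
    """Chips required to populate a bus of the given width."""
    return bus_bits // CHIP_IO_BITS

def density_mbit(total_mb, n_chips):
    """Per-chip density needed to reach the given total capacity."""
    return total_mb * 8 // n_chips

for bus in (256, 512):
    n = chips_needed(bus)
    print(f"{bus}-bit bus: {n} chips; 512 MB total needs {density_mbit(512, n)} Mbit chips")
# 256-bit bus: 8 chips; 512 MB total needs 512 Mbit chips
# 512-bit bus: 16 chips; 512 MB total needs 256 Mbit chips
```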
The shrink from 130 nm to 110 nm is just like the shrink from 110 nm to 90 nm in terms of engineering that needs to take place. NVIDIA was able to add the same % of transistors, use less power, use a single-slot cooler, and increase clock speed in the move from 130 nm to 110 nm. NVIDIA has the expertise to do the same when they move to 90 nm. AMD was also able to lower power usage and increase clock speed with their move to 90 nm.
The cost of a 300mm^2 die on 130 nm is the same as the cost of a 300mm^2 die on 110 nm is the same as the cost of a 300mm^2 die on 90 nm in terms of silicon used.
I imagine that NVIDIA has already spent a great deal of time playing with the 90 nm process and design considerations while working with SONY on the RSX and they have probably already messed around with TSMC's 90 nm process. So, the cost of generating a 90 nm design for NVIDIA is moot, because they would have to take the 90 nm plunge anyway.
They get the same number of cores/wafer with this as with the 6800(GT? Ultra? I forget. Read Guru3d's review if you care).
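For context on the cost-per-area point, here is a toy dies-per-wafer estimate. It ignores edge loss and defect yield entirely, and the 300 mm² die size is illustrative rather than G70's actual figure.

```python
# Toy model: silicon cost per die tracks die area, not process node
# (mask and design costs aside), so dies per wafer is the lever.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude estimate: usable wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(dies_per_wafer(300, 300))  # ~235 candidate dies on a 300 mm wafer
```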
Originally posted by: obeseotron
I'm pretty sure current NVIDIA cards are actually 4 64-bit memory controllers, not 2 128-bit ones. Even 128-bit memory controllers are prohibitive; that's why we have dual-channel 64-bit for main memory and effectively quad-channel for graphics. Whatever is required to coordinate all of these controllers probably increases at a much greater than linear pace, meaning that doubling the number of memory controllers would probably far more than double the size of the overall memory controller. It is certainly not impossible, and is probably likely in the generation after G70/R520, but when the companies decide what they are going to implement they are working with a pretty hard transistor limit, so they have to prioritize, and obviously they thought things like pixel pipelines and shader capabilities outweighed the benefits of a more complex memory controller.
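A toy model of that coordination cost: treat the memory subsystem as a crossbar between on-chip clients and 64-bit memory partitions. The client count below is invented for illustration, and real arbitration logic grows faster than this simple link count suggests.

```python
# Crossbar between memory clients (pixel quads, vertex engines, ROPs,
# etc.) and 64-bit memory partitions. Link count alone grows linearly
# with partitions; arbitration and routing grow worse than that.

CLIENTS = 8  # hypothetical number of memory clients on the GPU

def crossbar_links(partitions, clients=CLIENTS):
    """Point-to-point links in a full clients-x-partitions crossbar."""
    return clients * partitions

for partitions in (2, 4, 8):
    bus_bits = partitions * 64  # 64-bit partitions, per the AT quote below
    print(f"{partitions} x 64-bit ({bus_bits}-bit bus): {crossbar_links(partitions)} links")
```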
Originally posted by: Gamingphreek
It is too expensive as of right now. We aren't going to flame you or anything for suggesting something like XDR. You are adding to the thread; that is what we like!
-Kevin
Originally posted by: BenSkywalker
They get the same number of cores/wafer with this as with the 6800(GT? Ultra? I forget. Read Guru3d's review if you care).
I just read through the article quickly and I don't see it mentioned anywhere (although the article is 28 pages long and I was reading quickly). I don't see how anyone could write that they are getting the same number of cores per wafer with the die size increasing so much.
Rambus looks bad to many because of past claims; well, as it turns out, they weren't crying wolf, there was an actual conspiracy against them.
Originally posted by: Gamingphreek
It doesn't have to. 384 = 128 + 128 + 128.
It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.
No, it doesn't work like that. To achieve 256-bit... do you think they just slapped two 128-bit controllers together? No.
-Kevin
(From the AT 7800 article) Again, we have a 256 bit (4x 64 bit) memory interface to GDDR3 memory
(From the 5950 article) The biggest challenges that exist with a 256-bit wide memory interface are routing related; routing traces to/from the GPU and to/from the individual memory chips themselves