If GF100 is as strong as expected:
6GB of GDDR5 might not be enough for the Tesla models and their scientific calculations. Games just can't utilize the full potential of these GPUs.
Some single-GT200 Quadros already have 4GB of VRAM for a reason ;).
http://vr-zone.com/articles/-rumour-nvidia-gt300-architecture-details-revealed/7763.html?doc=7763
According to this, the codename would be GF100, and the GPU would have over 3 billion transistors.
The second one replacing GeForce 9600 cards? Not likely, since it won't perform anywhere near 9600 level.
The weakest of the 9600 cards is the 9600 GSO 512 (if we exclude the GDDR2 version from Asus).
It has 48 SP like the GT220, and higher clocks (4% on core, 18% on shaders). So far so good, but real...
Well, the newest Fuad farce:
http://www.fudzilla.com/content/view/14165/1/
He's saying Nvidia's new 40nm mobile GPUs would be DX10.1 and support GDDR5.
..but one day earlier it was reported that these mobile GPUs would be DX10 (as anyone should have known already) + GDDR3...
http://www.brightsideofnews.co...-targets-225w-tdp.aspx
Well, I would want to believe, but...
I know that UMC's 40nm process doesn't seem to have the same problems as TSMC's, and UMC is ready for mass production, so they could be an ideal candidate to build G300 GPUs initially. Because of the price point...
Intel could squeeze 64 Larrabees onto a 300mm wafer (45nm). For comparison, Nvidia squeezed 94 GT200s onto a 300mm wafer (65nm), and the 65nm GT200 was 575mm².
At least on the wafers, Larrabee looked HUGE.
If Larrabee's performance is only at GTX 285 level, then it's a huge disappointment. Basically, Nvidia wouldn't even need to release new cards to counter it!
Larrabee is a huge and very expensive chip to make. One 300mm wafer could hold only about 64 Larrabees, when for comparison one 300mm wafer...
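The dies-per-wafer figures above can be sanity-checked with the standard approximation (a rough sketch; the only inputs are the 300mm wafer diameter and the die areas quoted in the post, and the ~800mm² Larrabee figure below is just what this approximation would imply from 64 dies per wafer, not a confirmed spec):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common dies-per-wafer approximation: gross wafer area divided by
    die area, minus an edge-loss term for partial dies at the rim."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# 65nm GT200 at 575mm^2 -> close to the 94 dies quoted above
print(dies_per_wafer(575))  # 95

# Working backwards, 64 dies per wafer would imply a die around 800mm^2
print(dies_per_wafer(800))  # 64
```

So the quoted GT200 count fits the formula almost exactly, which makes the 64-per-wafer Larrabee claim look like a genuinely enormous die.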
The current gen of Nvidia cards can't do DX10.1; it would require a new architecture. So either:
1) Nvidia's next generation architecture supports DX10.1, but not DX11
2) This is Fud.
Well, Nvidia is in trouble if ATi gets their RV870 out in June and Nvidia gets their G210 series out half a year later, and it turns out to be just a 40nm version of the current gen... DX10.0 stuff.
http://resources.vr-zone.com//...96/Galaxy_GTX295_1.JPG
According to VR-Zone, the cooling solution on the new GTX 295 is much, much better: more cooling performance while being quieter. Isn't that RPM number pretty similar to the 55nm GTX 260's?
Brand doesn't really matter: almost all HD 4890 cards are the reference model, made in the same factory on the same assembly line. You'll find some differences in the sticker, and some of them might bundle a game with the card.
AMD doesn't need to find a more efficient architecture to fight the GT300; they just need to make RV870 a "dual RV740" with DX11 capability. It would have 1280 SP at a die size of around 250mm². They could put three of those on one card and it would still be a cheaper solution than a single GT300.
First:
GTX 295 - 1242MHz * 480SP
G300-A1 - 1600MHz * 512SP
A single G300-A1 would have 37.4% higher shader performance than two GT200s.
Also:
-The GTX 295 is an SLI solution and not 100% efficient
-The GTX 295 has memory bottlenecks that the G300-A1 wouldn't have
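The 37.4% figure follows directly from shader clock × shader count (a quick arithmetic check, ignoring the SLI and memory caveats above; the G300-A1 numbers are the rumoured ones):

```python
# Shader throughput proxy: shader clock (MHz) x shader count
gtx295 = 1242 * 480   # two GT200s, 240 SP each
g300_a1 = 1600 * 512  # rumoured G300-A1 specs

gain = g300_a1 / gtx295 - 1
print(f"{gain:.1%}")  # 37.4%
```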
Performance is pretty similar; with 8xAA the HD 4870 X2 is faster, but with any other setting (including 2560x1600 with 4xAA) GTX 260 SLI should be faster.
GTX 260 SLI also has these small arguments on its side:
+Runs cooler
+Not as noisy as HD4870 X2
+Draws less power
GT200 has 240 single precision units and 30 double precision units.
The cards' performance is pretty low when there's 64-bit floating point work involved:
GT200: 1/8 of normal performance
RV770: 1/5 of normal
GT300: 1/2 of normal [512 single precision units; two of them could do double precision...
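Those ratios fall straight out of the unit counts (a sketch; the GT300 figures are the rumoured ones from above):

```python
from fractions import Fraction

# GT200: 30 dedicated DP units alongside 240 SP units
gt200_dp_ratio = Fraction(30, 240)        # -> 1/8 of SP rate

# Rumoured GT300: 512 SP units pairing up two-per-DP-operation
gt300_dp_ratio = Fraction(512 // 2, 512)  # -> 1/2 of SP rate

print(gt200_dp_ratio, gt300_dp_ratio)  # 1/8 1/2
```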
+GDDR5 prices are going down and GDDR5 speeds are getting ridiculously high: the fastest GDDR5 chips are rated for QDR 1750MHz. Think of it this way: you can get 224GB/s with a 256-bit bus.
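The 224GB/s figure checks out: GDDR5 moves four data transfers per command clock (QDR), so peak bandwidth is just transfers per second times bus width in bytes (a quick check):

```python
def gddr5_bandwidth_gb_s(command_clock_mhz, bus_width_bits):
    """Peak bandwidth: 4 transfers per clock (QDR) x bus width in bytes."""
    transfers_per_s = command_clock_mhz * 1e6 * 4
    return transfers_per_s * (bus_width_bits // 8) / 1e9

print(gddr5_bandwidth_gb_s(1750, 256))  # 224.0
```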
---
What they are rumouring about GT212:
1) DX10.1 support
2) 384SP, 96 TMU, 16 ROP
3) GDDR5 [QDR 1250MHz] +...
Rumour war :). If the current rumours about RV870 and GT212 held true, then RV870 would have about the same performance as GT212. The difference would be that RV870 is 140mm² while GT212 would be about 300mm²..
AMD would dominate with a bigger margin than they do now.
Well, there won't be an Nvidia GDDR5 card any time soon, since that memory costs a lot and there aren't enough chips. Perhaps some cheaper-SKU GT200-generation card with a 256-bit bus will get GDDR5 memory.
You'll get the general idea when you look at Palit's 8800 GTS, GT and 9600 GT coolers :P. At least it worked with the 8800 GTS 512. With 783/2250 clocks it never reaches the reference cooler's temperatures (at reference clocks) and is still silent. That GTX 280 card has a much larger fan, but...
..and I'd believe they would move towards GDDR5 first with performance-level cards, not the high end (since the high end has that wide memory bus).
Nvidia started using GDDR3 back in the GeForce FX days. The first model using GDDR3 was the GeForce FX 5700 Ultra (with a 128-bit memory bus), when for example the FX 5950...
Well, if you are indecisive, then Palit has enough time to bring their cards to you :)?
----
That Gainward GTX 280 seems to be the first Nvidia high-end card in a long time that doesn't use the reference PCB + backplate? Remember that Palit's 8800 GTX/Ultras were reference cards, as are the 9800 GTX and...