GTX480 arrived [evga forums]

Page 8 - AnandTech Forums

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
People tend not to understand one simple fact: hardware is getting hotter and hotter.
No, sloppily engineered hardware is getting hotter and hotter. Such was the case with Prescott, which Intel rectified with the Core 2 series. Such was the case with the HD2900XT, which was rectified with the HD3870 series, etc.
I was shocked when I saw video cards that took two slots because the heatsink is bigger than the card itself. Since electricity usage is directly proportional to heat generation, it is really physics 101. MrK6 seemed to believe that the GTX480 will heat up the room while the 5870 doesn't; he would probably fail grade 10 physics.
Show me once where I said that. Go quote it. The stupidity in this post is amazing.
Keys stated that both cards will heat up the room, probably by 1-2 degrees over hours of gaming, and we probably know that from experience, and that the GTX 480 will heat up the room faster than the 5870. However, the video card isn't the only thing that generates heat, so regardless of the video card, your room is going to heat up. Using such an excuse to bash Fermi is nothing more than nitpicking.
The irony here is hilarious. If you had taken even one semester of elementary physics, you would realize what a fool you sound like. Heat is a quantity; the GTX480 produces much more of it, and therefore will heat up its surroundings (the sink) more. How you fail at such a simple concept is beyond me.
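The "heat is a quantity" point can be made concrete with a toy energy balance. This is a minimal sketch under idealized assumptions: a sealed room of a made-up size with no heat loss, which real rooms obviously violate, so treat the result as an upper bound on warming rate rather than a prediction.

```python
# Toy estimate: how fast an extra 130 W of heat warms a sealed room's air.
# Room size, air properties, and the no-heat-loss assumption are all
# illustrative idealizations; real rooms shed heat through walls and
# ventilation, so the actual temperature rise is much smaller.
room_volume_m3 = 40.0   # assumed ~4 m x 4 m x 2.5 m room
air_density = 1.2       # kg/m^3 at room temperature
cp_air = 1005.0         # specific heat of air, J/(kg*K)
extra_power_w = 130.0   # extra heat output of the hotter card

air_mass_kg = room_volume_m3 * air_density        # ~48 kg of air
joules_per_kelvin = air_mass_kg * cp_air          # energy needed per degree
seconds_per_degree = joules_per_kelvin / extra_power_w
print(f"~{seconds_per_degree / 60:.0f} minutes per extra degree (no losses)")
```

Under these assumptions the extra 130 W alone warms the room's air by roughly one degree every six minutes; losses slow that down dramatically in practice, but the direction of the argument holds: more watts in means a warmer room.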

By superior, Keys meant the architecture, not raw performance. Of course some may argue that ATI's existing architecture is great, but is the 200 architecture great too? In fact, Nvidia has been recycling the 8800 over and over again. Couldn't Nvidia simply shrink the 2xx architecture from 55nm to 40nm, resulting in less power draw and increased performance? Yes, but they didn't. Instead, they took a turn and used a new architecture that many believed wasn't going to work for gaming. Well, it works, and it works better in DX11. So now ATI is using the old architecture while Nvidia is using a new one. As for performance, it is head to head, if not better. That is with 480 CUDA cores; imagine it with 512 CUDA cores.
So it's six months late, is less efficient, missed its performance mark, might do some things better (tessellation?), and you're cheering that. The fanboy psyche is truly interesting.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
People tend not to understand one simple fact: hardware is getting hotter and hotter. I was shocked when I saw video cards that took two slots because the heatsink is bigger than the card itself. Since electricity usage is directly proportional to heat generation, it is really physics 101. MrK6 seemed to believe that the GTX480 will heat up the room while the 5870 doesn't; he would probably fail grade 10 physics. Keys stated that both cards will heat up the room, probably by 1-2 degrees over hours of gaming, and we probably know that from experience, and that the GTX 480 will heat up the room faster than the 5870. However, the video card isn't the only thing that generates heat, so regardless of the video card, your room is going to heat up. Using such an excuse to bash Fermi is nothing more than nitpicking.

By superior, Keys meant the architecture, not raw performance. Of course some may argue that ATI's existing architecture is great, but is the 200 architecture great too? In fact, Nvidia has been recycling the 8800 over and over again. Couldn't Nvidia simply shrink the 2xx architecture from 55nm to 40nm, resulting in less power draw and increased performance? Yes, but they didn't. Instead, they took a turn and used a new architecture that many believed wasn't going to work for gaming. Well, it works, and it works better in DX11. So now ATI is using the old architecture while Nvidia is using a new one. As for performance, it is head to head, if not better. That is with 480 CUDA cores; imagine it with 512 CUDA cores.

Now, Fermi is the first product of its design. Using the tick-tock model from Intel's CPUs, it is the "tick", and it is already as good as if they had shrunk the 2xx architecture, meaning it was a win in terms of decision. Nvidia was greedy in both redesigning the architecture and moving to 40nm at the same time, without allowing for things not going according to plan. For that, Nvidia faced at least a 6-month delay to create something that hit only 80% of its goal at 20% yields. However, now they have a new and working architecture. We already know that 512 CUDA cores will be faster. We know that ECC memory can be ditched. If they can somehow make 480 CUDA cores on 55nm, then it will be a huge cut in cost plus an increase in yield. It is clear that there is a lot of headroom for improvement in this new architecture; the one people believed wasn't for gaming is actually good for gaming.

Let's come back to what we have today. ATI had 6 months to fix their drivers, resulting in Catalyst 10.3, which increases performance by a good amount. Wouldn't it be logical for Nvidia to be able to do the same thing with their drivers? Eyefinity is still buggy, with the 2nd display flickering. Nvidia is experiencing a similar problem where the card won't downclock with multiple displays. Interestingly, the Nvidia problem was made into a big deal, while the ATI problem was not mentioned here in this forum. Did I mention that 120Hz displays won't work on the 5870?

I disagree with most of what you've said.

Fermi is superior on paper maybe. But how do you rate one architecture 'superior' to another?

If you look at performance per dollar, the 5870 generally tops it.

If you look at how much silicon Fermi needs, AMD tops it. You'd think a 'superior' architecture would give you better performance per mm² of silicon, but it doesn't. Why does this 'superior' architecture get less performance per mm² of silicon? For the record, I couldn't care less if my GPU were the size of a pinhead or 800mm²; it doesn't matter to me. But for the sake of what Seero said, I'd like to know how it's 'superior' when it does less per mm² of silicon. If you have two GPUs, one from company Red and one from company Green, and both are 275mm², I would expect the superior architecture to be faster, right? I mean, its architecture is superior, so given the same amount of silicon it should be faster. This isn't the case with Fermi.

Performance per amount of heat produced. Who wins there, the 'superior' architecture or the crappy 'old' architecture? How about the amount of noise produced to keep the hot GPU cool and running?
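The per-dollar, per-mm², and per-watt comparisons above can be sanity-checked with a back-of-the-envelope sketch. Every figure below (relative performance, prices, die areas, board powers) is an approximate, assumed launch-era value for illustration, not a benchmark result:

```python
# Rough efficiency metrics for the two cards. All figures are assumed
# approximate launch-era values used purely for illustration.
cards = {
    # name: (relative_performance, price_usd, die_area_mm2, board_power_w)
    "GTX 480": (1.12, 499, 529, 250),   # assumes ~12% faster on average
    "HD 5870": (1.00, 399, 334, 188),
}

for name, (perf, price, die_mm2, watts) in cards.items():
    print(f"{name}: "
          f"perf/$ = {perf / price:.5f}, "
          f"perf/mm2 = {perf / die_mm2:.5f}, "
          f"perf/W = {perf / watts:.5f}")
```

With these assumed numbers the 5870 comes out ahead on all three ratios even though the GTX 480 is faster in absolute terms, which is exactly the distinction the post is drawing between "fastest card" and "superior architecture".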

I'd also like to remind you that the 'inferior' 58xx architecture was developed with the idea of the top-of-the-line card being a dual GPU. Two of the GPUs can fit in the PCIe spec power envelope. Will Fermi's 'superior' architecture allow for dual GPUs? We'll have to see, but I think we can all agree that if Nvidia makes it, it'll likely not fit in the PCIe spec and will have some serious power draw and heat issues.

In raw performance the top-of-the-line Fermi certainly wins. Fermi looks to have the better tessellation abilities, but I'll wait for real games (not just a benchmark) before I declare that as absolute truth. But why does the gap close as the load increases? Despite its 'superior' architecture and higher memory bandwidth, the 5870 is closer to Fermi in performance at 2560x1600 than at 1920x1200. It seems odd that a 'superior', more forward-looking architecture can't pull away as the load increases... it in fact does the opposite. Does that mean something for tomorrow's games, or doesn't it?

And lastly, I don't know that we'll see any huge driver improvements with Fermi. We may see some improvements, but we may not. When AMD launched, they had early drivers. When Fermi launched, it very likely had somewhat mature drivers. Remember, this thing was supposed to be out as early as October, I've heard. Nvidia has had working silicon for some time. We'll see how the drivers play out, but I personally don't believe that Fermi launched with very early drivers; I'm confident Nvidia had been tweaking them for some months before the actual launch.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Looks like you're late for the "Butthurt by Fermi" group therapy session, better hurry, Keys will need a shoulder to cry on. Honestly, the rampant fanboyism on this forum is something else. But keep posting, I'll keep shutting down your arguments; it's a great break from work.

You ever look in the mirror?

Get out a little and smell the fresh air. These are video cards produced by companies that don't know or care about you. It is hard to imagine anybody taking the place of Rollo, but you have clearly shown an eagerness to prove you are up to the task.

And what is with this lame internet tough-guy approach? Do you win some trophy if, in your mind, you keep shutting down and outshouting people on an internet message board?

I hope AMD pays you well.
 

Saga

Banned
Feb 18, 2005
2,718
1
0
I figure a large number of Fermi buyers fall into this category:

1) Will not buy ATI, regardless of performance

2) Live with their parents and saved enough money from allowance or a job at Burger King to get a Fermi. They don't pay for heating/AC, so the only cost to them is the initial purchase price.

I find it somewhat comical that there are so many arguments in this thread about why someone would buy a particular card.

Has nobody really thought of the most basic one? Most of my motherboards use nVidia chipsets. They also cost $400+, and I typically skip a graphics card generation because SLIing an additional card from the previous generation is far cheaper and usually faster than upgrading to the newest high end every single time. So I have two options: replace a $400 board and factor that into my graphics card decisions, or buy the top-of-the-line or near-top-of-the-line nVidia card and simply buy another one later to SLI if I feel it's necessary. This is why I simply got another GTX 280 instead of a GTX 480, letting my 9800GTX sit in the third slot for PhysX. Please tell me there is a realistic way I could have done better with the money, factoring in motherboard costs alone, and I'll begin to listen.

From a performance and reliability perspective I am very fond of nVidia chipsets. This plays a significant role in my graphics card purchasing decisions.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
All excellent points SlowSpyder, no need for me to parrot them :thumbsup:

You ever look in the mirror?

Get out a little and smell the fresh air. These are video cards produced by companies that don't know or care about you. It is hard to imagine anybody taking the place of Rollo, but you have clearly shown an eagerness to prove you are up to the task.

And what is with this lame internet tough-guy approach? Do you win some trophy if, in your mind, you keep shutting down and outshouting people on an internet message board?

I hope AMD pays you well.
Looks like we're finally winding down this session of me handing the NV fanclub its ass again. The minute anyone posts anything that isn't putting Fermi in a good light, you can count on Wreckage, Keys, Genx87, or some other incompetent to start posting away and grasping at anything to take down said poster's argument. It doesn't matter if it's logically flawed, a personal attack, or just plain wrong; one or more of you will be there. But what the group of you don't realize is that this just counts against you. You'll notice that every time I or some other poster has soundly defeated one of the fanclub's bullshit posts, they either ignore it or turn to a personal attack. For instance, after I pointed out the inaccuracy of your horrible "sports car" analogy, what have your last two forum posts been? Personal attacks:
Fermi could have been 100% faster and MrK6 would still be disappointed. He'd find something to piss and moan about. Most likely its late release, and he'd surely still be yapping on about the heat and noise and the 5 bucks a month it costs to run the thing.
You ever look in the mirror?

Get out a little and smell the fresh air. These are video cards produced by companies that don't know or care about you. It is hard to imagine anybody taking the place of Rollo, but you have clearly shown an eagerness to prove you are up to the task.

And what is with this lame internet tough-guy approach? Do you win some trophy if, in your mind, you keep shutting down and outshouting people on an internet message board?

I hope AMD pays you well.
I could go complain to a mod about it, but I don't care. You're some crying fanboy, and I'd rather leave the posts up to show A) your true character and B) the trash they allow on these forums. In the end, I suppose you've done your job, haven't you? You've tried to drag any interesting discussion down to your level by calling everyone else fanboys and saying we should all go outside. Sulk more about your favorite company; it's pathetic.

In the end, I'm a hardware enthusiast and will always enjoy discussing hardware. There isn't a single post of mine that shows me being a "fanboy" despite your ill attempts to pin it on me. If that's the best you can do, looks like you're really hurting... about computer hardware. Wow the irony is so thick you could cut it with a knife.
 

Madcatatlas

Golden Member
Feb 22, 2010
1,155
0
0
That sounds like great reasoning to me, Saga. But what are you thinking about future upgrades now that there won't be any nVidia chipsets? At least not until we hear more about the Intel vs. nVidia case. Or did I miss something crucial in all this?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
I find it somewhat comical that there are so many arguments in this thread about why someone would buy a particular card.

Has nobody really thought of the most basic one? Most of my motherboards use nVidia chipsets. They also cost $400+, and I typically skip a graphics card generation because SLIing an additional card from the previous generation is far cheaper and usually faster than upgrading to the newest high end every single time. So I have two options: replace a $400 board and factor that into my graphics card decisions, or buy the top-of-the-line or near-top-of-the-line nVidia card and simply buy another one later to SLI if I feel it's necessary. This is why I simply got another GTX 280 instead of a GTX 480, letting my 9800GTX sit in the third slot for PhysX. Please tell me there is a realistic way I could have done better with the money, factoring in motherboard costs alone, and I'll begin to listen.

From a performance and reliability perspective I am very fond of nVidia chipsets. This plays a significant role in my graphics card purchasing decisions.

To me that's hardly the most basic reason to buy a card. I bet well under 10% of gamers currently use multiple GPUs. I understand your position: you have an expensive motherboard that allows for SLI, so you want to stick with that board, and you find two GTX 280s to be a better option than a GTX 480.

But, like I said, most people don't use multiple GPUs. I think it's great to have the option, but a lot of people want the next-DX-version-capable card and some of the new features a new generation of cards will support.

Don't get me wrong, I think the idea of adding a second card is potentially going to be the better option. I just don't think a lot of people consider it; they often want the newest tech.
 

Nox51

Senior member
Jul 4, 2009
376
20
81
I find it somewhat comical that there are so many arguments in this thread about why someone would buy a particular card.

Has nobody really thought of the most basic one? Most of my motherboards use nVidia chipsets. They also cost $400+, and I typically skip a graphics card generation because SLIing an additional card from the previous generation is far cheaper and usually faster than upgrading to the newest high end every single time. So I have two options: replace a $400 board and factor that into my graphics card decisions, or buy the top-of-the-line or near-top-of-the-line nVidia card and simply buy another one later to SLI if I feel it's necessary. This is why I simply got another GTX 280 instead of a GTX 480, letting my 9800GTX sit in the third slot for PhysX. Please tell me there is a realistic way I could have done better with the money, factoring in motherboard costs alone, and I'll begin to listen.

From a performance and reliability perspective I am very fond of nVidia chipsets. This plays a significant role in my graphics card purchasing decisions.


I thought that Intel chipsets were considered superior to NV's on the Core 2 Duo and later platforms, without the HDD corruption issues that were happening on NV's side.

That is, of course, ignoring the whole Intel and NV breakup over chipsets today...
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
No, sloppily engineered hardware is getting hotter and hotter. Such was the case with Prescott, which Intel rectified with the Core 2 series. Such was the case with the HD2900XT, which was rectified with the HD3870 series, etc.
Lol, PSUs offer more and more juice, but by your definition they shouldn't need to. The 5970 uses more electricity than the 5870. By your definition, that is due to sloppiness.

Show me once where I said that. Go quote it. The stupidity in this post is amazing.

Awwww, looks like I stepped on the fan club's toes. It's a problem because I don't want inferior, inefficient hardware in my system. It's a problem because I don't want my room turning into a sauna every time I play a game. It's a problem because I don't want to run my A/C all the time and pay twice the cost in electricity to keep my condo at a comfortable temperature. It's a problem because I don't want to run my card's fans louder to keep an overclock stable. Should I stop here or keep going? What's next? The fan noise isn't a problem - it's a feature because the white noise nHANCES TEH GAMING X-PERIENCE!!1!!!.

Give me a break.

The irony here is hilarious. If you had taken even one semester of elementary physics, you would realize what a fool you sound like. Heat is a quantity; the GTX480 produces much more of it, and therefore will heat up its surroundings (the sink) more. How you fail at such a simple concept is beyond me.

If my post is stupid, then I have no words to describe yours. You said that you don't want inferior, inefficient hardware, referring to the GTX 480, in your system because it is a problem: it will turn your room into a sauna every time you play a game. Yet, without a sense of irony, or knowledge, you ignore that the 5870 also requires a huge heatsink and produces a lot of heat too. You believe that having a GTX 480 will double your cost in electricity to keep your condo at a comfortable temperature, without acknowledging that the same effect occurs with the 5870 too. That is where you fail grade 10 physics. You would answer yes to the question "Will a lightbulb double your cost of air-conditioning?" I hope you know not to touch a lightbulb when it has been on for a while.

So it's six months late, is less efficient, missed its performance mark, might do some things better (tessellation?), and you're cheering that. The fanboy psyche is truly interesting.
Six months late, as if you bought the card 6 months ago and are therefore hurting? Less efficient compared to what, again? Missed its performance mark? Where exactly did you get the performance mark before the card released? So you don't care about PhysX acceleration, you don't care about high tessellation performance, but you care about a lightbulb's worth of electricity.

Seriously, go liquid cooling already if noise is a problem. It kills two birds with one stone.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I disagree with most of what you've said.

Fermi is superior on paper maybe. But how do you rate one architecture 'superior' to another?

If you look at performance per dollar, the 5870 generally tops it.

If you look at how much silicon Fermi needs, AMD tops it. You'd think a 'superior' architecture would give you better performance per mm² of silicon, but it doesn't. Why does this 'superior' architecture get less performance per mm² of silicon? For the record, I couldn't care less if my GPU were the size of a pinhead or 800mm²; it doesn't matter to me. But for the sake of what Seero said, I'd like to know how it's 'superior' when it does less per mm² of silicon. If you have two GPUs, one from company Red and one from company Green, and both are 275mm², I would expect the superior architecture to be faster, right? I mean, its architecture is superior, so given the same amount of silicon it should be faster. This isn't the case with Fermi.

Performance per amount of heat produced. Who wins there, the 'superior' architecture or the crappy 'old' architecture? How about the amount of noise produced to keep the hot GPU cool and running?

I'd also like to remind you that the 'inferior' 58xx architecture was developed with the idea of the top-of-the-line card being a dual GPU. Two of the GPUs can fit in the PCIe spec power envelope. Will Fermi's 'superior' architecture allow for dual GPUs? We'll have to see, but I think we can all agree that if Nvidia makes it, it'll likely not fit in the PCIe spec and will have some serious power draw and heat issues.

In raw performance the top-of-the-line Fermi certainly wins. Fermi looks to have the better tessellation abilities, but I'll wait for real games (not just a benchmark) before I declare that as absolute truth. But why does the gap close as the load increases? Despite its 'superior' architecture and higher memory bandwidth, the 5870 is closer to Fermi in performance at 2560x1600 than at 1920x1200. It seems odd that a 'superior', more forward-looking architecture can't pull away as the load increases... it in fact does the opposite. Does that mean something for tomorrow's games, or doesn't it?

And lastly, I don't know that we'll see any huge driver improvements with Fermi. We may see some improvements, but we may not. When AMD launched, they had early drivers. When Fermi launched, it very likely had somewhat mature drivers. Remember, this thing was supposed to be out as early as October, I've heard. Nvidia has had working silicon for some time. We'll see how the drivers play out, but I personally don't believe that Fermi launched with very early drivers; I'm confident Nvidia had been tweaking them for some months before the actual launch.

First of all, if the focus of Cypress and Fermi is DirectX11, then it is clear that Fermi wins by far.
480 vs 5870 on Unigine
Forget about the FPS; just look at the scene. Which card produces a smoother scene through the branch?

Of course the "6 months late" line is being used over and over again, but then are you comparing the 480 with the 5870 at launch? Both had limited supplies at launch. Now I don't look back; I look at what we have now. There are lots of 5870s around, but not 480s, which is a true problem in terms of purchase. Other than that, I don't see why "6 months late" plays any role.

No one denies the fact that the 480 uses more electricity. Because of that, it produces more heat. Because of that, it produces more noise. Breaking this down into 3 separate points is not a fair comparison. Now, the 480 requires more electricity, about 33% more than the 5870, but performs almost 100% better on Unigine branching. Is this not efficient?
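Taking those numbers at face value (the ~33% power delta and the ~100% tessellation advantage are the poster's claims, not verified measurements), the efficiency arithmetic goes like this:

```python
# Perf-per-watt check of the claim: ~33% more power for ~100% more
# heavy-tessellation throughput. Both ratios are the post's claims,
# not measured figures.
power_ratio = 1.33   # claimed GTX 480 power draw relative to the HD 5870
perf_ratio = 2.00    # claimed Unigine heavy-tessellation throughput ratio

perf_per_watt_ratio = perf_ratio / power_ratio
print(f"GTX 480 perf/W is {perf_per_watt_ratio:.2f}x the 5870's "
      "in this workload, if the input numbers hold.")
```

So if, and only if, both claimed ratios hold, the 480 is about 1.5x more efficient in that one workload; in DX9/10 titles, where the performance ratio is near 1.0, the same arithmetic flips against it, which is the other side of the thread's argument.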

Now back to DX9/10 games: both cards play toe to toe. The thing that eats up FPS is AA, and both vendors attack it differently. The new architecture should be able to compute AA using a new method, one that is not effective on older generations. Until that is out, they are still using the old method, and therefore not completely utilizing the card. Yes, I really don't know if there will be a new driver that increases performance, but is it really so far-fetched to say that there is a high possibility?

As of now, with the existing driver, it ties with the 5870 in DX9 and DX10 games, but beats the 5870 under DX11 branching. Is it not logical for engineers to focus performance on DX11 before DX9/10, as DX11 is the "big thing" now?

You may say that the 480 doesn't have a lead in DX11 games, but you should also know that those so-called DX11 games are really DX10 plus a few DX11 features that don't impact the game. In other words, you can treat them as DX10.

Again, I am not saying that the 5870 is garbage and people should throw it away and grab a 480. In fact, they are both good cards. If you already got a 5870, gratz on the new card. I just hope that people who got the new Fermi card can enjoy the same blessing instead of being told that they have got old, inefficient, hot junk, or the thing that MrK6 described.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Lol, PSUs offer more and more juice, but by your definition they shouldn't need to. The 5970 uses more electricity than the 5870. By your definition, that is due to sloppiness.
That's not my argument at all; where are you reading that? Or are you like so many others on this forum who simply make shit up when they have no reasonable rebuttal? The 5970 uses more power than a 5870 and gives substantially more performance (sometimes upwards of 80%). However, it is a different technology and there are different circumstances surrounding it. If you would like to compare multi-GPU, use a reasonable example like a 5970 vs. GTX480 SLI or 5870 CF. As it stands, the GTX480 uses 40-50% more power for 10-15% more performance. That's a poor design, as it was with the other examples I gave. I have stated this argument several times in this thread and others, and there isn't much more I can do to break it down, since it's so simple in the first place. You're going to have to become more educated in the matter if you want to further partake in the discussion.
If my post is stupid, then I have no words to describe yours. You said that you don't want inferior, inefficient hardware, referring to the GTX 480, in your system because it is a problem: it will turn your room into a sauna every time you play a game. Yet, without a sense of irony, or knowledge, you ignore that the 5870 also requires a huge heatsink and produces a lot of heat too.
Wrong again. Let's reel you back into reality with some numbers: http://www.xbitlabs.com/articles/video/display/geforce-gtx-480_13.html#sect0 . The 5870 uses 130W less than a GTX480 under load. That's 130W extra that is consistently dumped into the immediate environment of the computer and user. Barring extensive use of fans throughout my condo, I'd have to leave the A/C on longer and more often to equalize the temperature difference, which adds to my overall costs.
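For scale, that 130 W delta is easy to put a dollar figure on. A back-of-the-envelope sketch; the daily gaming hours and the $/kWh rate are assumed values, not from the thread, and it deliberately ignores the extra A/C load:

```python
# Monthly electricity cost of a 130 W load-power difference.
# Hours per day and the $/kWh rate are assumed illustrative values.
extra_watts = 130        # load-power delta cited from the xbitlabs numbers
hours_per_day = 3        # assumed daily gaming time
rate_per_kwh = 0.12      # assumed electricity rate in $/kWh

kwh_per_month = extra_watts / 1000 * hours_per_day * 30
cost_per_month = kwh_per_month * rate_per_kwh
print(f"~{kwh_per_month:.1f} kWh/month, ~${cost_per_month:.2f}/month")
# Every one of those extra watt-hours also ends up as heat in the room,
# which any A/C then has to remove again at additional cost.
```

Under these assumptions the raw electricity is only a dollar or two a month; the stronger part of the argument is the comfort and cooling side, since all of that energy is dumped into the room either way.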

You believe that having a GTX 480 will double your cost in electricity to keep your condo at a comfortable temperature
Again, you either fail at reading comprehension or simply do not understand rudimentary physics. It doubles the cost of the 5870's component to cooling my place, which is what I stated. You people make up shit like it's an indoor sport.
without acknowledging that the same effect occurs with the 5870 too. That is where you fail grade 10 physics. You would answer yes to the question "Will a lightbulb double your cost of air-conditioning?" I hope you know not to touch a lightbulb when it has been on for a while.
And here's some more physics failure from you. You realize that a light bulb is hot (great, we're at first grade; let's keep trying), but fail to realize that the lightbulb's immediate environment is also uncomfortably warm. Thankfully, it's on the ceiling, under a shade, or across the room, and doesn't bother me. I'm sitting right next to my computer, and there the extra heat does bother me. That's not a difficult concept, but you still can't grasp it several posts later.

Six months late, as if you bought the card 6 months ago and are therefore hurting? Less efficient compared to what, again? Missed its performance mark? Where exactly did you get the performance mark before the card released? So you don't care about PhysX acceleration, you don't care about high tessellation performance, but you care about a lightbulb's worth of electricity.
Seems like you're the one who's hurting, since you keep rushing in here in (failed) attempts to defend your company/card/whatever; I've been enjoying my 5870 and 5850s for some time. NVIDIA had six months to create a card that's better than the 5870 and they didn't. PhysX is a joke, and tessellation in games is minor enough for the two cards to perform the same. All they created was a gimped GPGPU that they had to overclock to outperform the 5870, and even then it's a measly 10-15% faster. Like I said, you can go join Genx87 and Keys and cry about it; not my problem.

Seriously, go liquid cooling already if noise is a problem. It kills two birds with one stone.
Liquid cooling doesn't solve the problem, as it still dumps the same amount of heat into the environment. If you understood a shred of physics, you'd know that, but alas, you don't. Please keep trying, though; this is like shooting fish in a barrel.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
All excellent points SlowSpyder, no need for me to parrot them :thumbsup:

Looks like we're finally winding down this session of me handing the NV fanclub its ass again. The minute anyone posts anything that isn't putting Fermi in a good light, you can count on Wreckage, Keys, Genx87, or some other incompetent to start posting away and grasping at anything to take down said poster's argument. It doesn't matter if it's logically flawed, a personal attack, or just plain wrong; one or more of you will be there. But what the group of you don't realize is that this just counts against you. You'll notice that every time I or some other poster has soundly defeated one of the fanclub's bullshit posts, they either ignore it or turn to a personal attack. For instance, after I pointed out the inaccuracy of your horrible "sports car" analogy, what have your last two forum posts been? Personal attacks:


I could go complain to a mod about it, but I don't care. You're some crying fanboy, and I'd rather leave the posts up to show A) your true character and B) the trash they allow on these forums. In the end, I suppose you've done your job, haven't you? You've tried to drag any interesting discussion down to your level by calling everyone else fanboys and saying we should all go outside. Sulk more about your favorite company; it's pathetic.

In the end, I'm a hardware enthusiast and will always enjoy discussing hardware. There isn't a single post of mine that shows me being a "fanboy" despite your ill attempts to pin it on me. If that's the best you can do, looks like you're really hurting... about computer hardware. Wow the irony is so thick you could cut it with a knife.

Seek help
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
I can see why MrK6 says the 480 is inferior when it uses more power, runs hotter, and is louder than a 5970 that is on average about 50% faster.
You could say it's cheaper, but so was the 2900XT, and people still called that a crap card.

Is it even physically possible to run tri-SLI with 480s?

I don't understand how anybody can say one card is better than the other in DX11. The only DX11 games we have are DiRT 2, AvP, BC2, CoP, and Metro 2033, and you can't see the "improvements" when you run them in DX11.
I'll wait for second-generation DX11 games like F1 2010, Crysis 2, and BF3. We all remember how well the FX5800 performed in X2 or whatever that game was, and we all remember how well the 2900XT performed in Call of Juarez.

Unigine is just a tech demo, and I will pay no attention to it for the same reasons nobody pays attention to 3DMark.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
First of all, if the focus of Cypress and Fermi is DirectX11, then it is clear that Fermi wins by far.
480 vs 5870 on Unigine
Forget about the FPS; just look at the scene. Which card produces a smoother scene through the branch?

Of course the "6 months late" line is being used over and over again, but then are you comparing the 480 with the 5870 at launch? Both had limited supplies at launch. Now I don't look back; I look at what we have now. There are lots of 5870s around, but not 480s, which is a true problem in terms of purchase. Other than that, I don't see what role "6 months late" plays.

No one denies the fact that the 480 uses more electricity. Because of that, it produces more heat. Because of that, it produces more noise. Breaking this down into 3 separate points is not a fair comparison. Now, the 480 requires more electricity, about 33% more than the 5870, but performs almost 100% better on the Unigine branch. Is this not efficient?
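As a back-of-envelope check on that efficiency claim, here is a small sketch using the thread's own rough figures ("about 33% more power", "almost 100% better"), not measured data:

```python
# Normalized perf-per-watt sketch for the claim above. The inputs are
# the thread's rough ratios, not real wattage or benchmark scores.
hd5870_power, gtx480_power = 1.00, 1.33   # "about 33% more" power draw
hd5870_score, gtx480_score = 1.00, 2.00   # "almost 100% better" branch score

hd5870_eff = hd5870_score / hd5870_power
gtx480_eff = gtx480_score / gtx480_power

# By these numbers, the 480 delivers roughly 1.5x the perf/watt
# in that one Unigine scene.
print(round(gtx480_eff / hd5870_eff, 2))  # → 1.5
```

Of course, this only holds for the one synthetic scene being cited; in DX9/10 games, where the performance gap is near zero, the same arithmetic points the other way.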

Now back to DX9/10 games: both cards go toe to toe. The thing that eats up FPS is AA, and the two vendors attack it differently. The new architecture should be able to compute AA using a new method, one that is not effective on older generations. Until that is out, they are still using the old method and therefore not completely utilizing the card. Yes, I really don't know if there will be a new driver that increases the performance, but is it really so unreasonable to say there is a high possibility?

As of now, with the existing driver, it ties with the 5870 in DX9 and DX10 games but beats the 5870 under DX11 branching. Is it not logical for engineers to focus on DX11 performance before DX9/10, as DX11 is the "big thing" now?

You may say that the 480 doesn't have a lead in DX11 games, but you should also know that those so-called DX11 games are really DX10 plus a few DX11 features that don't impact the game. In other words, you can treat them as DX10.

Again, I am not saying that the 5870 is garbage and people should throw it away and grab a 480. In fact, they are both good cards. If you already got a 5870, grats on the new card. I just hope that people who got the new Fermi card can enjoy the same blessing instead of being told that they have bought an old, inefficient, hot piece of junk, or the thing that MrK6 described.


I never mentioned Fermi being 6 months late. I don't know why you are trying to make a counterpoint to anything I said by mentioning that, seeing as I never said anything about Fermi coming to market later than Cypress. You made the comment that Fermi is a 'superior architecture'. I disagree and am pointing out why. Fermi coming late doesn't affect the architecture. What it does do is hurt Nvidia a bit from a business standpoint and, to some degree, give people the impression that Nvidia is not the technology leader. For instance, I bought a 5870. My wallet is now closed for this generation. Others have done the same, bought an AMD card because Nvidia was nowhere to be found. Being late doesn't matter in regards to a superior or inferior architecture; I never brought it up, and I'm not sure why you did.

Again, I'd wait for real games before you get too excited about Fermi and its tessellator. Heaven is a completely synthetic bench. As I understand it, it uses tessellation so heavily that some of the tessellated triangles are the size of an individual pixel. No game is likely to do that; you'd be killing performance for no noticeable visual gain. If I remember correctly, the Radeon 2900XT scored higher in 3DMark06 than the 8800 cards. We all know how that turned out. Don't put so much faith in synthetic benches. The 5xxx cards' tessellator may be plenty for real-world games, or it may fall short. It's way too early to declare Fermi 'superior'.

Regarding heat, power, and noise, who cares if I make three separate bullet points or lump them together? Power use, heat, and noise are valid concerns for many people. It doesn't matter how they're listed. It doesn't matter if I say:

-Heat, Power, Noise
or
-Heat
-Power
-Noise

Does either way make it any less of a concern to someone who wants a quiet card or is concerned about running Fermi on their 550-watt power supply? Nvidia equipped this thing with what looks to be a pretty stout 5-heat-pipe cooler because they had to. You talk about Nvidia using 33% more power while being faster in a synthetic benchmark. I guess if I were building a no-holds-barred rig just to run that bench, then Fermi is your choice. But for those of us who actually play games with our cards, Fermi's 'superior' architecture looks very inefficient in comparison.

You said that Fermi is a superior architecture, and it is on paper, but I'm just not seeing it in the real world. It's faster overall than AMD's fastest single GPU, but I don't believe it is because of a 'superior' architecture. It's faster because Nvidia knew where the bar was set since at least September '09, so they did what they had to do to be faster. AMD's 'inferior' architecture seems much more well rounded. It's also built with the idea of being usable as a very fast single GPU or as the basis of a top-of-the-line dual-GPU card. I'm just not seeing Fermi's superiority. I think you may be confusing 'superior architecture' with Nvidia pulling out all the stops, regardless of power use, heat output, and time to market, to beat AMD's existing fastest single GPU.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Seek help
I'm not the one in a fan club personally insulting others over video cards. That's a great defense, really; it's the Genx87 "I have no idea what I'm talking about, and since my insults are meaningless and ill-contrived, I'm just going to attempt to degrade the entire discussion and field, even though that's why I'm here in the first place!" defense.
Graphics cards are serious business.
Ahhh, I knew there were more in the nV circle jerk, there's another. Are you going to come in here and cry too?
I'm just not seeing Fermi's superiority. I think you may be confusing Nvidia's pulling all the stops regardless of power use, heat output, time it takes to get to the market compared to the competition to beat AMD's existing fastest single GPU with 'superior' architecture.
Exactly.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Has nobody really thought of the most basic point? Most of my motherboards use nVidia chipsets. They also cost $400+.

I wasn't aware that NV manufactured chipsets for the fastest platform around, that being Intel's X58 chipset with the Core i7 980X processor. I am not aware of abundant choices of $400+ motherboards other than this EVGA: http://www.newegg.com/Product/Produc...-059-_-Product Also, Phenom II X4s run on AMD chipsets. So you must be using 650/680 chipsets? Even so, the 680 provided little to no benefit over the $100-cheaper 650 chipset boards.

Please tell me there is a realistic way I could have done better with the money factoring in motherboard costs alone, and I'll begin to listen.

Unless you are into extreme overclocking, a $150-175 board performs identically to a "$400 board". Therefore, you could start by not wasting money on >$175 motherboards. In fact, if you skip SLI/CF altogether, something like a $130 P55A-UD3 will more or less deliver identical performance to a $400 motherboard. So you could get a 5970 video card for $600 plus a $130 motherboard. There are other alternatives, such as an Asus P55 Evo for $175 and dual 5850s for $540. Either of these ATI setups is a good alternative to dual GTX 280s and a $400 motherboard.
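Totaling the alternatives above (using only the prices quoted in the post, in 2010 USD; GTX 280 street prices aren't given in the thread, so only the ATI builds are summed):

```python
# Sum the build alternatives quoted in the post. Prices are the
# thread's own 2010 figures, not current ones.
builds = {
    "HD 5970 + P55A-UD3": 600 + 130,         # single fast card, cheap board
    "2x HD 5850 + Asus P55 Evo": 540 + 175,  # CrossFire on a mid-range board
}
for name, total in builds.items():
    print(f"{name}: ${total}")
# → HD 5970 + P55A-UD3: $730
# → 2x HD 5850 + Asus P55 Evo: $715
```

Either total leaves roughly $270 of the hypothetical "$400 board" budget free compared with the $130 board alone, which is the point being made.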

From a performance and reliability perspective I am very fond of nVidia chipsets. This plays a significant role in my graphics card purchasing decisions.

Intel chipsets have consistently outperformed every single NV chipset ever made, from top-end FSB overclocking to I/O performance. The only good NV chipset was nForce 2 for AMD. In fact, other than SLI compatibility out of the box during the 650/680i chipset days, there was not a single other reason to purchase NV-based motherboards in the last 5 years. Unfortunately for NV users with 650/680 boards, there were issues with 45nm Penryn compatibility/overclocking. And on top of that, NV chipsets were inferior in idle and load power consumption: http://www.anandtech.com/show/2140/6
 
Last edited:

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I'm not the one in a fan club personally insulting others over video cards. That's a great defense, really; it's the Genx87 "I have no idea what I'm talking about, and since my insults are meaningless and ill-contrived, I'm just going to attempt to degrade the entire discussion and field, even though that's why I'm here in the first place!" defense.
Ahhh, I knew there were more in the nV circle jerk, there's another. Are you going to come in here and cry too?
Exactly.

You sound paranoid. Like I said before seek help. This is a video card you are getting all worked up over. Life will continue well after Fermi and the 5xxx series are retired.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
You sound paranoid. Like I said before seek help. This is a video card you are getting all worked up over. Life will continue well after Fermi and the 5xxx series are retired.
You're still posting in this thread though, even after your argument was dismissed. In fact, since you only come back to insult or personally attack me, I'd say you're infatuated, which is creepy at best. Interesting justification, though, that I'm the one who needs help.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I never mentioned Fermi being 6 months late. I don't know why you are trying to make a counterpoint to anything I said by mentioning that, seeing as I never said anything about Fermi coming to market later than Cypress.
...
And lastly, I don't know that we'll see any huge driver improvements with Fermi. We may see some improvements, but we may not. When AMD launched, they had early drivers. When Fermi launched, it very likely had somewhat mature drivers. Remember, this thing was supposed to be out as early as October, I've heard. Nvidia has had working silicon for some time. We'll see how the drivers play out, but I personally don't believe that Fermi launched with very early drivers; I'm confident Nvidia had been tweaking them for some months before the actual launch.
You made the comment that Fermi is a 'superior architecture'. I disagree and am pointing out why. Fermi coming late doesn't affect the architecture. What it does do is hurt Nvidia a bit from a business standpoint and, to some degree, give people the impression that Nvidia is not the technology leader. For instance, I bought a 5870. My wallet is now closed for this generation. Others have done the same, bought an AMD card because Nvidia was nowhere to be found. Being late doesn't matter in regards to a superior or inferior architecture; I never brought it up, and I'm not sure why you did.
Now, the Unigine demo can't be biased toward Fermi, although it is possible for Nvidia to cheat in their drivers; then again, so can ATI. Suppose they didn't; then the Unigine branch does show a distinguishable performance lead. That does not make the 480 a must-buy; it simply shows a lead, that is all.

Again, I'd wait for real games before you get too excited about Fermi and its tessellator. Heaven is a completely synthetic bench. As I understand it, it uses tessellation so heavily that some of the tessellated triangles are the size of an individual pixel. No game is likely to do that; you'd be killing performance for no noticeable visual gain. If I remember correctly, the Radeon 2900XT scored higher in 3DMark06 than the 8800 cards. We all know how that turned out. Don't put so much faith in synthetic benches. The 5xxx cards' tessellator may be plenty for real-world games, or it may fall short. It's way too early to declare Fermi 'superior'.
Reviews are out stating that the 480 is the fastest single-core card. Either they all lie, or you simply don't want to accept the fact. What the reviews also say is that they expected more from Nvidia after all the wait, which is perfectly valid. However, all reviews were done on the released spec, and they agreed that the GTX 480 is the fastest card. Do Nvidia fanboys say so, or do ATI fanboys say no?

Regarding heat, power, and noise, who cares if I make three separate bullet points or lump them together? Power use, heat, and noise are valid concerns for many people. It doesn't matter how they're listed. It doesn't matter if I say:

-Heat, Power, Noise
or
-Heat
-Power
-Noise

Does either way make it any less of a concern to someone who wants a quiet card or is concerned about running Fermi on their 550-watt power supply? Nvidia equipped this thing with what looks to be a pretty stout 5-heat-pipe cooler because they had to. You talk about Nvidia using 33% more power while being faster in a synthetic benchmark. I guess if I were building a no-holds-barred rig just to run that bench, then Fermi is your choice. But for those of us who actually play games with our cards, Fermi's 'superior' architecture looks very inefficient in comparison.

You said that Fermi is a superior architecture, and it is on paper, but I'm just not seeing it in the real world. It's faster overall than AMD's fastest single GPU, but I don't believe it is because of a 'superior' architecture. It's faster because Nvidia knew where the bar was set since at least September '09, so they did what they had to do to be faster. AMD's 'inferior' architecture seems much more well rounded. It's also built with the idea of being usable as a very fast single GPU or as the basis of a top-of-the-line dual-GPU card. I'm just not seeing Fermi's superiority. I think you may be confusing 'superior architecture' with Nvidia pulling out all the stops, regardless of power use, heat output, and time to market, to beat AMD's existing fastest single GPU.
You said you are not seeing it, yet you acknowledge it works well in the Unigine demo and attacks the bar set by the ATI 5870. This is a contradiction. Again, ATI uses an old design and shrinks it, while Nvidia uses a new design. It is always dangerous to use a new design, as it may perform worse than the old one. It turns out that it works extremely well on tessellation and no worse on the old stuff. I won't say the 480 beats the 5870 into the dust, but I will say it is a good start on something new. Again, a newer driver will probably boost the performance by about 10-20% in the coming months, but as of now, the 5870 is really not far behind, given that it has been out for 6 months. What more do you expect?

Put down your bias for a moment. The 5870 has been out and has been the fastest single-core GPU, and not many people deny it. After 6 months of glory, a new card comes in and beats it. Guess what: 6 months later, if not earlier, a new card will take the crown.

It is also a fact that the 480 tends to use more electricity to get more performance out, but guess what: EVGA will have a Superclocked version of the 480. If putting more electricity into a chip meant better performance, we might as well ramp up the voltage on an old video card off eBay and call it a day. It simply doesn't work that way. There is a tolerance on how much juice the hardware can take; it can only take more juice if the card is designed to handle it. In the end, to push more transistors, more juice is needed. The challenge is that some parts may not handle the juice and fail to function. Again, the 480 is not perfect, nowhere near perfect, but it is really okay given that it is the first of its design. Now, the 5870 uses more electricity and is louder and hotter than the 4870, but you don't have a problem with that. So why the 480 being 2-5 dBA louder, using about 100 watts more, and producing more heat (roughly what a 100-watt lightbulb produces) rocks your boat is beyond me.
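For scale, here is a hedged back-of-envelope on what an extra ~100 W means for a room. The room size, air properties, and "no heat loss" assumption are mine, not the thread's, and the no-loss assumption greatly overstates real-world warming; it only shows an upper bound.

```python
# Estimate the air-temperature rise from an extra 100 W of card power
# dissipated over two hours of gaming. Assumptions (illustrative only):
# a sealed 4 m x 4 m x 2.5 m room, warming the air alone, with zero
# heat loss through walls or ventilation -- a deliberate upper bound.
extra_watts = 100.0          # rough GTX 480 vs. 5870 delta from the post
hours = 2.0
room_volume_m3 = 4 * 4 * 2.5
air_density = 1.2            # kg/m^3 at room temperature
air_cp = 1005.0              # J/(kg*K), specific heat of air

energy_j = extra_watts * hours * 3600        # 720 kJ of heat
air_mass_kg = room_volume_m3 * air_density   # 48 kg of air
delta_t = energy_j / (air_mass_kg * air_cp)  # kelvin rise, no losses
print(f"upper-bound rise: {delta_t:.1f} K")  # → upper-bound rise: 14.9 K
```

In practice, heat leaking through walls and ventilation keeps the real rise to a degree or two, which lines up with the "1-2 degrees in hours of gaming" figure quoted earlier in the thread; both cards heat the room, the 480 just does so faster.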
 
Last edited:

Attic

Diamond Member
Jan 9, 2010
4,282
2
76
It will be interesting to see reliable sales figures for the GTX 470 and 480 2-3 months down the road. Price/performance clearly favors AMD/ATI, and I gotta think the knowledge of how hot and loud the 480 is is going to knock some wind out of its sales. The 470, if it continues to only find availability at a price point close to the 5870's, looks to be a lousy purchase IMO.

I'd personally like to see nVidia move in a different direction with their approach to PC gaming and the gaming community. If Fermi is received poorly, I'd hope it would be a catalyst for change at nVidia in regards to PC gaming.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
You're still posting in this thread though, even after your argument was dismissed. In fact, since you only come back to insult or personally attack me, I'd say you're infatuated, which is creepy at best. Interesting justification though, that I'm the one that needs help .

You confuse infatuation with somebody giving you solid life advice.
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
Actually people DO consider heat and noise...why else would reviewers bother investigating that stuff if they didn't?


They also comment on the packaging, how it looks, etc. Stuff MOST people don't care about.

If you are going to care about heat and sound when buying an expensive card, maybe you should reconsider the high-end card game. I can't even believe people are mentioning electric bills. lol

I've never once heard someone complain about such stuff in real life with computers.
 
Last edited: