Originally posted by: evolucion8
Originally posted by: Keysplayr
Originally posted by: MarcVenice
I heard there will be a new F@H app for ATI cards, and that it will boost performance a lot, so much that ATI cards can Fold as fast as Nvidia cards?
Yes, we've been hearing this since the 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Let's have it!! Still waiting on improved AVIVO since the last review back in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming, for all we know. Why throw scant money at trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPUs a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of. Because right now, it looks like it's a real bitch, and I can see why devs aren't bothering.
You are just ranting. AVIVO encodes movies much faster than Badaboom, and it's free. F@H is not currently using the Global Data Share, Local Data Share, and Global GPR features found in the HD 4x00 architecture, which don't exist in the HD 3800 architecture. The current F@H client is from November 2008, for Christ's sake, and is optimized for the HD 3800 GPU; that's why it runs considerably slower on the HD 4800 than on its nVidia counterparts. And games could run faster on ATi hardware if developers under the TWIMTBP program focused on improving performance and image quality instead of optimizing shaders to do a lot of texture reads.
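To give a rough idea of what "using the Local Data Share" even means, here's a minimal sketch. It's written as CUDA only because shared memory is the closest analogue most people here know; the kernel and names are made up for illustration and are NOT taken from the real F@H core. The point is just that partial results stay in fast on-chip memory instead of bouncing through VRAM on every step:
[code]
// Hypothetical illustration only: a block-level reduction that stages partial
// sums in on-chip shared memory (CUDA's rough analogue of ATI's Local Data
// Share) instead of round-tripping through global memory.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float* in, float* out, int n)
{
    __shared__ float lds[256];          // on-chip scratch shared by the block
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    lds[tid] = (i < n) ? in[i] : 0.0f;  // stage one element per thread
    __syncthreads();

    // Tree reduction entirely in shared memory: no global-memory traffic
    // until the single per-block result is written out.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            lds[tid] += lds[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = lds[0];
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = n / threads;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));
    blockSum<<<blocks, threads>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
[/code]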
I am just ranting? AVIVO, as I have said, was faster. But the IQ was very poor, with pixelated blocks in the finished product. Could be the reason it was faster: it didn't do the job right. Sounds very logical to me, how about you? Am I ranting?
What does F@H not currently using the Global Data Share mean to us? Face it, Evolucion, if they could code F@H to run faster on ATI hardware, they would have by now. If they could have, they would have. We just got finished talking about how much more difficult it seems to be to code for ATI architecture. And if the rumored R8xx is still the same 1+4 shader architecture the R7xx series uses, albeit with more of them, I don't see anything changing for the next two years at least. Many devs are probably just abandoning the effort because an easier solution is available. Could be? Doesn't sound like I'm ranting.
I've seen games like Call of Juarez, which was in nVidia's program and now looks better on ATi. The same happened with Far Cry 1, for which the demo The Project was created; it pushed the boundaries of graphics on the X800 series of cards ( http://ati.amd.com/technology/...ames/crytek/index.html ). Or HL2 EP1 and EP2, which look amazing because they use a lot of shaders. Even other games like F.E.A.R. or F.E.A.R. 2, which aren't under any paid program, look great because they optimize their graphics for a standard like DX9.
Hmm. Is F@H, or BOINC, a part of the "TWIMTBP" program?
But I see that most games under the nVidia program have blocky heads, lots of overstretched textures, lots of aliasing. Heck, look at Dead Space, look at Lost Planet, look at Quake 4, Call of Duty 4, Gears of War, Assassin's Creed. They look good, but there are very few shader effects and their realism relies on textures, as if we were in the late '90s. Come on, instead of using textures to simulate stuff, they should use more shaders to simulate a much higher variety of materials.
You might as well list the other hundreds of games under TWIMTBP. Why stop there? LOL. Blocky heads. That was funny.
Originally posted by: dguy6789
It's more accurate to blame the FAH people than ATI. They can't even release a non-beta, properly working multicore client with a GUI installer yet. They've also been working on an Xbox 360 client for years now and still have nothing to show for it. They definitely don't get anything done in a timely manner.
Hmmm.. They got the NV folding client out in a timely manner. Several versions IIRC.
There shouldn't be any doubt in anyone's mind that a client that supports the 4000 series hardware properly will score significantly better than the current one. It's a joke that the vastly inferior 8800 cards get more work done than 4800 cards, it's obviously a software issue.
Until this is accomplished, of course there will be doubts. If it could have been done, it would have been done. Simple. You guys are hysterical.
Yeah, look at MilkyWay@Home; its performance is much stronger on ATi hardware than on the nVidia counterparts. ATi's knowledge of compilers and flexible programming is paying off today. Think about this: an HD 4890 GPU with only 959 million transistors is able to keep up with and outperform the 1.4-billion-transistor monster aka the GTX 260, and remain competitive with the GTX 275, which shows how much more work gets done per mm2 of die compared to the less efficient GT200 architecture.
http://www.brightsideofnews.co...graphics.aspx?pageid=1
BSN*: Are you planning to release a NVIDIA client as well? Why not?
Gipsel: Not at the moment and there are several reasons for that. First of all, the ATI application still needs some polishing like multiple GPU or Linux support. Furthermore, the project itself is working with nVidia on CUDA-powered version. Apparently, nVidia gives a lot of support to BOINC projects that want to port their applications to CUDA. Together with the mature CUDA SDK, it shouldn't take long until MW@H also gets a GPU application that supports the latest nVidia cards.
The reason I started with ATI in the first instance was the quite massive performance advantage ATI has on current hardware for the kind of calculations done at Milkyway [Dual Precision format - Ed.]. I hope it will increase the interest of getting GPGPU applications ported also to ATI hardware, which is in a lot of cases at least as capable as comparable nVidia offerings. The fact that I'm a member of Team Planet3DNow!, a BOINC team associated with an AMD oriented website, has no influence whatsoever.
I might be reading the article wrong, but is there the possibility that ATI is the faster GPGPU at MW@H only because there is no Nvidia client for it yet? I am not sure. But what do you think would happen if this same dude downloaded the CUDA SDK and created a client for MW@H? I think I know what would happen. Double precision? I thought GT200 supported it? Am I wrong?
Anyway, if you called my post a rant, what the heck would you call yours? Babble?
Blocky heads. Still making me chuckle. Thank you. :thumbsup: