Originally posted by: apoppin
I see ... you were stretching something to the ridiculous to make a point. And I will have to agree that FPS is a "measure" of only part of the picture. However, FPS graphs will show a LOT more ... so the min/max/avg is pretty useful in a summary of results.
It's not ridiculous when FPS is the basis for most of the assumptions, in flawed reviews and comparisons, that more RAM makes no difference. If anything, testing should be conducted more like storage reviews than GPU reviews.
I can't quite agree ... I believe watching the FPS in FRAPS [for example] does indicate repeated/skipped frames, hitching, or rubber-banding. Not to mention, an honest and attentive reviewer will mention issues related to gaming "smoothness".
Again, once frames are averaged into a total, or even just into the frames within a single second to arrive at Frames Per Second, the impact might be minimal or completely undetectable, but when you observe such performance drops in person the impact is clearly obvious. For example, say you were at 50 FPS in a GPU-intensive frame sequence and then transitioned to 10 consecutive frames that were actually less GPU-intensive, but HDD thrashing caused those 10 frames to be skipped: you would still see 50 FPS across 2 consecutive seconds on an FPS monitor. What you might see on screen, though, is your character at one position and then lurching forward to another, instead of moving smoothly as it would have if those 10 frames had been rendered (which would have registered as a 50 to 60 FPS increase if there were no thrashing/skipping).
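To make that concrete, here is a minimal Python sketch (all numbers hypothetical): both runs present 50 frames in the second, so an FPS counter reads 50 either way, but in the hitchy run the engine folds 10 skipped simulation steps into one catch-up frame, which is exactly the on-screen lurch described above.

```python
FRAMES_PER_SEC = 50
STEP = 1.0  # world units the character moves per simulated frame

def positions(skipped_steps):
    """Character position at each rendered frame over one second.
    `skipped_steps` simulation steps are folded into a single rendered
    catch-up frame, e.g. after HDD thrashing skips those frames."""
    pos, out = 0.0, []
    for f in range(FRAMES_PER_SEC):
        if f == 25:  # mid-second catch-up frame
            pos += STEP * (1 + skipped_steps)
        else:
            pos += STEP
        out.append(pos)
    return out

smooth = positions(0)
hitchy = positions(10)
print(len(smooth), len(hitchy))  # 50 50 -- an FPS counter reads 50 for both
biggest_jump = max(b - a for a, b in zip(hitchy, hitchy[1:]))
print(biggest_jump)              # 11.0 -- an 11-step on-screen lurch
```

Same frame count per second, completely different on-screen experience, which is why min/max/avg FPS alone can't capture it.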
I've already brought up Crysis and its timedemo as an example of how FPS simply won't give you an accurate picture of what's going on.
Crysis Benches from VR-Zone
Keep in mind, I'm only linking these results because they show a bunch of different tests, but anyone can mimic them on their own PCs and most likely already has. What you will notice in many reviews is that people throw out the first run. Why? Like you said, most "responsible reviewers" will mention something about texture loading or thrashing. Even if they didn't, the difference is so minuscule in terms of FPS that it would be lost in the average of 3 results. But when you actually watch the demo runs, the differences are extremely obvious, as each camera turn may result in a hitch/pause on the first run. Nearly impossible to pick up in a quantifiable test, but taken as a whole you're looking at the demo taking 2-3 s longer. Even if the offending transitions were only a fraction of a second, they're certainly obvious enough that they would impact your gaming experience.
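As a rough illustration of how little first-run hitching moves the averaged numbers (figures are made up for illustration, not taken from the VR-Zone results): a cold-cache run that takes 2.5 s longer over a fixed 3000-frame demo barely dents a 3-run FPS average, even though you'd clearly feel it.

```python
# Hypothetical figures for three runs of a fixed-length 3000-frame timedemo.
frames = 3000
run_times = [62.5, 60.0, 60.0]  # seconds; run 1 hitches on texture loads

per_run_fps = [frames / t for t in run_times]
avg_fps = frames * len(run_times) / sum(run_times)

print([round(f, 1) for f in per_run_fps])  # [48.0, 50.0, 50.0]
print(round(avg_fps, 1))                   # 49.3 -- vs 50.0 when warm
```

A 0.7 FPS dip in the average looks like noise on a bar chart, while the 2.5 s of hitching behind it is impossible to miss when watching the run.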
How does this bench show more RAM translating into better performance in this example? Instead of having to constantly flush and cache new textures as on the first run, a game can keep more textures cached, resulting in smoother gameplay similar to the 2nd run and beyond.
When you compare these results across 64-bit or more RAM, the differences between 32-bit and 64-bit might not be evident at all until you reach a point where the 64-bit OS can use more RAM and the 32-bit OS can't. It might not seem like a big deal, but if you're cruising along at 60 FPS and then the game starts getting jerky/choppy seemingly out of nowhere, it's going to have a huge impact on your gaming experience, even if it doesn't result in a drop in FPS.
As for reviewers mentioning issues related to gaming "smoothness" or even load times, they certainly do, but realistically people often don't read such "subjective" comments; they just look at the pretty pictures. But when you actually see the difference in person, it is quite obvious and extreme. This is another reason reducing memory is helpful: you learn what to look for and can see how increasing memory improves performance. It's also the main reason I find it rather incredible that you don't understand that more memory = more performance when the game or app is actually able to take advantage of it.
Well - logically - if you can't *notice* it, then it does not make a practical difference. I can only give my hopefully forthcoming experience with the two OSes. As for uploading FRAPS videos, I doubt it. Perhaps the graphs.
No, like I said, the difference is *very* noticeable, it's just not easily quantifiable. Both you and JustaGeek, the two main skeptics in this thread, have already acknowledged performance improvements when either 1) increasing RAM in 32-bit or 2) upgrading to 64-bit so that more RAM could be used by the system/game.
If you wanted to get an idea of where those improvements originated, you should've taken note of how much RAM HG:L was using with 2GB vs. 4GB (3.5GB addressable or w/e). From there it's really quite simple: if HG:L could address a full 3GB in 32-bit with 3.5GB addressable, you probably wouldn't see much improvement with 64-bit. However, I don't think it can or will, due to system overhead and reserved swap space, which would probably bring its max addressable space to somewhere around 2-2.5GB virtual/physical. With only 2GB installed that would still be an improvement, since you'd benefit from more physical address space, but that's not to say you wouldn't benefit further from an additional 500MB to 1GB of addressable space in a 64-bit environment if the game was able to use it.
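For reference, the arithmetic behind those ceilings can be sketched as follows. The per-process user-mode limits are the standard Windows figures for a 32-bit executable; the 0.5GB overhead is just an assumed round number for illustration, not a measured value for HG:L.

```python
GB = 1024**3

# Standard per-process user-mode address-space limits for a 32-bit exe:
limits = {
    "32-bit OS, default":        2 * GB,
    "32-bit OS, /3GB + LAA exe": 3 * GB,  # large-address-aware executable
    "64-bit OS, LAA 32-bit exe": 4 * GB,
}
overhead = GB // 2  # assumed engine/driver/reserved overhead

for name, limit in limits.items():
    print(f"{name}: {(limit - overhead) / GB:.1f} GB usable")
```

Run it and the usable figures come out around 1.5, 2.5, and 3.5 GB respectively, which is the kind of gap between 32-bit and 64-bit the post is describing.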
Another option, as I mentioned, is to monitor HDD and RAM/page file activity, even though it'd be a total PITA. This is probably the best way to illustrate what's going on when you see hitching/thrashing, or while you're loading, but it still doesn't do justice to what you're actually seeing on the screen.
I can only do either 2 or 4GB of RAM, as I don't want to lose my dual-channel capability. I think we're all pretty agreed that 2GB is the minimum for Vista gaming; 4GB is overkill for 32-bit systems, but we want to test the practical differences between it and a 64-bit system addressing the entire 4GB. Perhaps, since you have 8GB of RAM, you might consider testing the differences vs. 4GB in your rig for us.
You can still simulate those tests by running single channel until you get to 4GB: 1x1GB single channel, 2x1GB run in single channel, and 3x1GB should default to single channel, although some boards report dual channel. At 2x1GB you can also run dual channel and see any differences between single and dual channel, but I think most know the difference is no more than 5% and not that relevant for these tests, similar to increasing RAM speed or using DDR2 vs. DDR3.
As for running tests myself, I don't mind running a few, but like I said, running benchmarks isn't exactly what I'd consider fun. I've just finished some changes to my system and will be spending some time enjoying it (especially my new LG HD-DVD/BRD combo drive, which is awesome for those interested in one).
The testing involved is long and tedious, and anything short of a FRAPS video or a recording with a video camera wouldn't really reveal actual gameplay differences. Load times are easy enough to measure and I've already done that with a few games (again using FRAPS videos), but they're not quick tests either, since measuring typically involves playing the game normally until you reach a point where additional RAM is used. In something like an FPS, you might need to load each map once before you see the benefits of additional RAM, so unless you have a dedicated server you can set cvars on, it's not exactly a painless exercise.
Then you get into things like SuperFetch, whose benefits will be almost completely lost, as the caching engine won't know what to cache and will lose whatever it has cached with each reboot to change system configurations. Even with FRAPS videos the results will be somewhat skewed due to the massive CPU hit of recording in Vista and the resulting drop in FPS, although they still accurately capture relative jerkiness in gameplay.
I already know ReadyBoost makes HG:L's gaming performance *worse* ... what else will we find out? I am pretty eager to begin. It is not a matter of "ego", although I'd prefer to be "right". If 64-bit offers me a practical advantage over 32-bit Vista with my current rig and games, then I will be taking advantage of MS' upgrade offer. I'll let you know.
BTW, there are generally not a lot of reboots ... you cold boot into Vista and let it settle down [a good while - watch the HD activity] ... run your game benchmark 3 or 4 times and average them ... then you can reboot and repeat to your heart's content; I just find something to do in the meantime [which is usually trying to make sense of the data].
Also, there are several ways to handle it ... you can create a pure gaming environment with no programs installed, run your benchmarks, and reboot between games - probably much like the tests in AnandTech reviews - using an image to make identical environments ... or else you can make it an "average user's rig" with antivirus and background programs running and the internet connected. Whatever I do, it will be the same between OSes so the variables are kept to a minimum.
I'm more inclined to test the "average user's rig".
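The run-and-average routine described above could be sketched like this (a minimal example with made-up FPS numbers; discarding the cold first run follows the convention discussed earlier in the thread, and keeping min/max preserves at least some of the hitching information):

```python
def summarize(runs_fps):
    """Summarize 3-4 benchmark runs after a cold boot: drop the
    cold-cache first run, then report avg/min/max of the warm runs."""
    warm = runs_fps[1:]
    return {
        "avg": sum(warm) / len(warm),
        "min": min(warm),
        "max": max(warm),
    }

# Hypothetical FPS results from four runs after a cold boot:
print(summarize([42.1, 49.8, 50.2, 50.0]))
```

Note how the 42.1 FPS cold run, the one where all the visible hitching happens, never even makes it into the summary, which is the whole problem with judging smoothness from these numbers alone.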
Well, like I said pages ago, the only real way to see the differences is to experience them first-hand. I'm already confident in my findings, but that's not to say moving to 64-bit/4GB+ is worthwhile for everyone.
As for testing methodology, it shouldn't matter much, but I'd be more inclined to simulate typical systems where you wouldn't be overly concerned with what apps are running in the background, since that is another practical advantage of more RAM and a 64-bit OS.