NO NO NO NO NO.
You need to capture the end-user experience, and the method you suggested ignores parts of the graphics chain that sit downstream of where you're measuring. I suggest that you read this:
http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/11
"The slide above shows the frame production pipeline, from the game engine through to the display, and it's a useful refresher in the context of this discussion. Things begin with the game engine, which has its own internal timing and tracks a host of variables, from its internal physics simulation to graphics and user input. When a frame is ready for rendering, the graphics engine hands it off to the DirectX API. According to Petersen, it's at this point that Fraps records a timestamp for each frame. Next, DirectX translates high-level API calls and shader programs into lower-level DirectX instructions and sends those to the GPU driver. The graphics driver then compiles DirectX instructions into machine-level instructions for the GPU, and the GPU renders the frame. Finally, the completed frame is displayed onscreen."
Once again: NO. You guys bickering back and forth for pages and pages in this thread need to face the fact that you have to account for whatever black magic NV and AMD do downstream of where FRAPS measures. The only real way to do that is to literally point a high-speed camera at the monitor and measure when frames actually appear. Use multiple monitor types if you are afraid the results are monitor-specific.
That is the ONLY way to capture end-user experience objectively. Not FRAPS frametimes, not subjective user experiences.
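If anyone actually wants to try the camera approach, here's roughly what the analysis side could look like. This is a sketch under stated assumptions, not a finished tool: it presumes the game overlays a small marker square that flips black/white every rendered frame, that the footage is saved as "capture.mp4", and that the camera runs at 1000 fps. The file name, fps value, and marker region are all placeholders.

```python
import cv2  # pip install opencv-python

# Assumed setup: a marker square in the top-left corner of the screen toggles
# black/white on every game frame, filmed by a high-speed camera. Counting
# camera frames between marker flips gives the real onscreen frame times.
CAMERA_FPS = 1000.0                     # assumed camera frame rate
REGION = (slice(0, 32), slice(0, 32))   # assumed marker location in camera frames

cap = cv2.VideoCapture("capture.mp4")   # assumed footage file
last_state, last_idx, idx = None, None, 0
onscreen_deltas_ms = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    state = gray[REGION].mean() > 127   # is the marker bright or dark?
    if last_state is not None and state != last_state:
        if last_idx is not None:
            # camera frames between flips -> time between displayed game frames
            onscreen_deltas_ms.append((idx - last_idx) * 1000.0 / CAMERA_FPS)
        last_idx = idx
    last_state = state
    idx += 1
cap.release()

print("onscreen frame times (ms):", [round(d, 1) for d in onscreen_deltas_ms])
```

Run that over footage from a couple of different panels and you'd have numbers measured at the user's eyeball instead of at the API, which is the whole point.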