AMD allows for either a tear or the method I mentioned.
Without the tear, you might get up to 7 ms added to a frame whenever the frame time exceeds 25 ms on a panel with a 40 Hz minimum. Or at least that is what AMD says is supposed to happen.
When a new frame takes longer than 25 ms to arrive, the same frame gets refreshed again. The panel has no idea how long it will be before the next frame is ready, so it makes no attempt to refresh before the current frame has been on screen for 25 ms. Since the fastest a refresh can complete is 7 ms, once this happens that frame is committed to the screen for at least 32 ms, even if a new frame is ready at 26 ms. This is still better than what typically happens with V-sync, but not as ideal as what Nvidia attempts by averaging the previous two frame times so it can preemptively refresh sooner than the 25 ms mark.
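To make the numbers concrete, here is a rough timing sketch of that behavior. It is not anything from AMD's or Nvidia's actual code; the 25 ms hold limit and 7 ms scan-out come straight from the figures above, and the "average the previous two frames" heuristic is just my reading of how Nvidia is said to handle it.

```python
# Hypothetical timing sketch (not vendor code). Assumes a panel whose
# variable-refresh window is 40 Hz .. ~144 Hz, i.e. a maximum hold time
# of 25 ms and a minimum scan-out (refresh) time of about 7 ms.

MAX_HOLD_MS = 25.0   # 40 Hz lower bound: panel must be refreshed by this point
SCAN_MS = 7.0        # fastest possible refresh/scan-out (~144 Hz)

def reactive_display_time(new_frame_ready_ms: float) -> float:
    """Earliest time the new frame can be shown if the panel only reacts
    once the 25 ms deadline is hit (no prediction)."""
    if new_frame_ready_ms <= MAX_HOLD_MS:
        return new_frame_ready_ms          # arrived in time, no re-refresh needed
    # Deadline missed: the old frame is re-scanned starting at 25 ms,
    # so the panel is busy until 25 + 7 = 32 ms.
    rescan_done = MAX_HOLD_MS + SCAN_MS
    return max(new_frame_ready_ms, rescan_done)

def predictive_display_time(new_frame_ready_ms: float, prev_two_avg_ms: float) -> float:
    """G-Sync-style guess: assume the next frame will take about as long as the
    average of the previous two, and start the re-scan early enough that it
    finishes around the predicted arrival (never later than the 25 ms deadline)."""
    rescan_start = min(MAX_HOLD_MS, max(0.0, prev_two_avg_ms - SCAN_MS))
    rescan_done = rescan_start + SCAN_MS
    return max(new_frame_ready_ms, rescan_done)

if __name__ == "__main__":
    # A frame that is ready at 26 ms, just after the 25 ms deadline:
    print(reactive_display_time(26.0))          # -> 32.0 (committed until 25 + 7)
    # If the previous two frames averaged ~27 ms, a predictive panel could have
    # started its re-scan at 20 ms and been free again by 27 ms:
    print(predictive_display_time(26.0, 27.0))  # -> 27.0
```

With these assumed numbers, the reactive approach shows the new frame at 32 ms while the predictive one shows it at 27 ms, which is the gap being described above.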
ocre is most definitely way off; I agreed on that. I was just trying to explain what actually happens, or is supposed to happen. Someone earlier mentioned that it might be adding 25 ms to the frame rather than 7 ms. That is not supposed to happen, and should be fairly easy to iron out.
What ultimately allows a tear is the screen manufacturer: if they use a panel whose minimum refresh rate corresponds to a 25 ms hold (40 Hz), then FreeSync can do nothing about it. Just get a monitor with a lower minimum refresh rate. That is purely panel dependent, or possibly firmware dependent, but in the latter case there is little chance the manufacturer will artificially limit a panel below its capabilities.
Notice that G-Sync is not free from this issue either; it is just that panels with lower minimum refresh rates were selected for the purpose. In this respect Techreport is wrong to assume it could have the slightest advantage over FreeSync; the only thing they witnessed is that the BenQ does not have as good a panel (minimum-refresh-wise) as the G-Sync one used for comparison.