Originally posted by: TBSN
Ok, thanks, Matthias99. That clears everything up.
Originally posted by: ra990
Yeah, not really... there is quite a bit more that goes into upconverting to enhance the image and make it look better/sharper at the higher resolution than just blowing it up. Hence what Matt2 notices.
If it was "nothing but the process of blowing it up" then they'd all look the same.
I dare anybody to watch a DVD on my HTPC, upconverted to 1080p via FFDSHOW processing and tell me it isn't HD they're watching.
What kind of variables are there between software players and their various ways of upconverting or playing back DVDs, and how much video card and/or CPU power does it take to do this optimally?
So far from this thread it seems as if FFDSHOW is a pretty comprehensive program, but what kind of settings would you need control over, other than having the program de-interlace and scale the resolution?
I'm wondering because I've been tinkering with this computer that I want to make into a temporary "media computer," with a 1080 screen and a nice sound system, etc. I'm not going to be getting an HD drive of any sort, and it doesn't have to interface with any home theater equipment. I just want to be able to play DVDs and have them look beautiful (because I know they are capable of it!).
Thanks
It all depends on what you want to do. Personally, the answer to how much CPU you need for FFDShow to work some of its magic is: as much as you can get.
Ideally, you want to take your raw video as-is, immediately resize it to your monitor's full resolution, and then do all the other processing on that resized image: de-interlacing, de-noising, edge sharpening, luma adjustments, temporal noise blurring, etc., etc., etc.
However, working on the full-blown resized image is EXTREMELY CPU INTENSIVE! This is why almost no one does it; in practice, the resize is nearly always the last step in the processing chain. That gives decent results, but all the blemishes still in the image before the resize get enlarged and emphasized right along with it. Doing all the other processing after the resize would maximize the benefit of each filter, but at the cost of working on an image with 4 or more times as many pixels as the original. That means at least a matching 4x increase in CPU power to process each frame, and for some processes even more: the best results from some filters come from making multiple passes over the same image, so those could see as much as a 20x increase in CPU usage when run after the resize.
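The cost arithmetic above can be sketched in a few lines of Python. The resolutions are assumptions for illustration (NTSC DVD at 720x480 upscaled to 1080p); note that for this particular pair the multiplier works out to 6x, comfortably inside the "4 or more times" figure.

```python
# Rough cost comparison: running filters before vs. after the resize.
# Assumed resolutions: NTSC DVD (720x480) upscaled to 1080p (1920x1080).

dvd_pixels = 720 * 480           # 345,600 pixels per frame
hd_pixels = 1920 * 1080          # 2,073,600 pixels per frame

ratio = hd_pixels / dvd_pixels   # each filter now touches this many times the pixels
print(f"Pixels per frame: DVD={dvd_pixels:,}, 1080p={hd_pixels:,}")
print(f"Per-filter cost multiplier after resize: {ratio:.1f}x")

# A multi-pass filter multiplies the cost again; e.g. 3 passes of
# temporal denoising at the resized resolution:
passes = 3
print(f"With {passes} passes: {ratio * passes:.1f}x the work of one pass at DVD size")
```

This is only a pixel-count estimate; real filters also differ in per-pixel cost, which is how a multi-pass chain can approach the 20x figure mentioned above.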
So how much CPU is needed for FFDShow? As much as you can afford to spend.
My HTPC is in my sig, and it is woefully underpowered for what I would like to do with FFDShow. That said, what it can do is a lot better than what I have seen from any stand-alone upconverting DVD player. It even does a better job than most stand-alone hardware video upscalers and denoisers I have seen (DVDO iScan VP50 (~$2,600), iScan HD+ (~$900), Lumagen VisionHDQ (~$1,800), Algolith Mosquito-HDMI (~$850), and Crystalio II (~$4,500)).
I have seen and been able to play with all of the above hardware video processors. Some are better than others, and many have features you will not get on an HTPC (processing HDMI input sources, for instance, like a PS3 or a cable/satellite set-top box). That said, for standard TV or HDTV from a tuner card, or for DVDs, FFDShow paired with a modern processor family (Core 2 Duo or AMD FX) will give any of those processors a true run for the money and then some.
Go read up on AVISynth plug-ins with FFDShow. These let you do custom video scripting on the fly. If you know what you are doing, this will give you the best image quality you can get, because you can use correct video processing algorithms instead of someone's shortcut approach. You wouldn't believe how poorly many de-interlacers are coded; there is really only one correct way to maintain image quality when you de-interlace video, but it is processing intensive, which is why many cut corners so their application will run on the average computer out there.
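To make the de-interlacing trade-off concrete, here is a toy Python sketch of the two classic shortcut strategies (the function names and the row-list "frame" representation are illustrative, not FFDShow or AVISynth APIs). Weave keeps full resolution but combs on motion; bob avoids combing but throws away half the vertical detail. The "one correct way" alluded to above is motion-adaptive: choosing between these per pixel based on detected motion, which is exactly what makes it expensive.

```python
# Toy model: a frame is a list of rows; even rows come from the top
# field, odd rows from the bottom field. Names are illustrative only.

def weave(field_top, field_bottom):
    """Interleave two fields into one frame. Perfect for static scenes,
    but produces 'combing' artifacts wherever there is motion between
    the two fields (they were captured at different instants)."""
    frame = []
    for top, bottom in zip(field_top, field_bottom):
        frame.append(top)
        frame.append(bottom)
    return frame

def bob(field):
    """Line-double a single field. Cheap and free of combing, but it
    halves the vertical resolution. Good de-interlacers blend weave
    and bob adaptively, per pixel, based on detected motion."""
    frame = []
    for row in field:
        frame.append(row)
        frame.append(row)  # naive duplication; real filters interpolate
    return frame

top = [[1, 1], [3, 3]]      # rows 0 and 2 of the frame
bottom = [[2, 2], [4, 4]]   # rows 1 and 3 of the frame
print(weave(top, bottom))   # full frame, rows in order 1..4
print(bob(top))             # top field rows, each doubled
```

Running every pixel through a motion detector before choosing a strategy is what separates the correct (expensive) approach from these shortcuts.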