How do you find Windows Defender performs compared to Avast?
I haven't tried to perform any kind of meaningful analysis. I've only found one recent irritating delay from MSE (when it scans a message that I'm telling Thunderbird to junk), and I was on the verge of conducting such an analysis when I found that the issue wasn't consistent.
I am running Win 8.1 with Avast on a reasonably new machine, but I'm getting a bit fed up with all the bugs introduced by Avast's software updates, not to mention the constant bombardment of pop-up advertising trying to flog extra features.
Starting to think about uninstalling Avast and trying Windows Defender instead.
That's why I use and recommend MSE/WD: Security software should be present and monitoring the situation, not making an ass out of itself every five minutes.
The only thing that worries me is that Defender/MSE always seems to rate poorly in online reviews, but then again I've never really downloaded anything that Avast and/or Malwarebytes have detected anyway.
The two review sites I'm aware of (av-test / av-comparatives) are complete and utter junk: they either employ some very questionable scoring systems (without ever publishing the formula behind them), or they hand out 100% / 97% effectiveness ratings like candy when that blatantly isn't the case in the real world.

The big problem researchers face when testing anti-malware packages is this: one can test said packages against already-available malware (and the older it is, the less meaningful the test; the newer the malware, the harder it is to get hold of, yet that is precisely the kind most useful for testing), or one can write test malware purely for the sake of testing the products. The latter approach tests the heuristic capabilities of modern anti-malware products (ie. their ability to spot problematic software behaviour and tackle it, which is an important anti-malware skill), but it says little about performance against what's already out in the wild (and modern anti-malware products work mainly on malware identification info supplied via regular updates, and only partly on heuristics).
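For what it's worth, anyone can at least confirm their resident scanner is awake using the standard EICAR test file (a harmless string that every mainstream product is supposed to flag; see eicar.org). Here's a minimal sketch in Python; the filename and the ten-second wait are arbitrary choices of mine, and passing this check only proves the on-access scanner reacts at all, not that its detection against real threats is any good:

import os
import time

# The industry-standard harmless test string (see eicar.org).
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
PATH = "eicar_test.com"  # arbitrary filename for this sketch

try:
    with open(PATH, "w") as f:
        f.write(EICAR)
except PermissionError:
    # Some products block the write itself.
    print("Write blocked: on-access scanning looks active.")
else:
    time.sleep(10)  # give the on-access scanner a moment to react
    if os.path.exists(PATH):
        print("File still there: real-time scanning may be off.")
        os.remove(PATH)
    else:
        print("File removed/quarantined: real-time scanning is active.")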
Before the days of av-test / av-comparatives, a researcher would pick a thousand or so pieces of malware and, say, ten products to pit them against, then publish the results. One study would say that product X sucked badly and product Y did comparatively well; another researcher would later come along with a different multitude of threats and find a somewhat different result. The common factor in the results was that near-100% effectiveness never occurred, because by definition you're doing it wrong if you're getting results like that. Even an anti-bacterial cleaning product won't claim 100% effectiveness, because there are two distinct problems with such a claim: one is "bacteria we know of" (ie. we don't know of all bacteria, especially in an ever-changing world), the other is that "100%" is a very absolute thing to claim. It's fine for a "researcher", of course, because they won't get in trouble with advertising regulations, but it's a rather handy loophole if you run a big-name anti-malware company and have a friendly "researcher" at your disposal.