Originally posted by: DasFox
Originally posted by: John
Read this page first. Now look at the chart.
So how good is the quality of information for this chart? Is it possibly one of the better detection rating charts out there?
Personally, all the AV apps that are rated higher than KAV and NOD32 I've never seen before, and I wouldn't consider them to have a better detection rating.
Hmm
What you don't consider is that those higher-rated apps are also known to generate more FPs due to aggressive heuristics... I can score 100% if I detect every file as malware....
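To put rough numbers on that (made-up counts, purely to illustrate the point): a "detector" that flags every file gets a perfect detection score while also flagging every clean file.

# Toy illustration with made-up numbers: "flag everything" scores 100% detection
# but also a 100% false positive rate on clean files.
malware_samples = 1000    # hypothetical malware test set
clean_files = 50000       # hypothetical known-clean file set

detected = malware_samples          # flag-everything catches all malware...
false_positives = clean_files       # ...and every clean file too

print(f"Detection rate: {detected / malware_samples:.0%}")          # 100%
print(f"False positive rate: {false_positives / clean_files:.0%}")  # 100%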
In those charts, we are not talking about run-of-the-mill malware randomly selected from the set of all malware. The sample instead consists of malware that has actually been reported as having infected users' computers, a number of which are likely to be protected by AVs. (CastleCops will then upload it to OTC.)
This means we are talking about a self-selected sample of malware that has evaded at least one AV, so it is clearly a tough test where most of the samples are likely to be new (something like what mechon is doing in his test, but even harder, since in his tests he is using infections found on unprotected machines/honeypots, I think).
This kind of test of new, unknown malware favours antiviruses that have very speculative heuristics... but such heuristics usually come with higher FPs...
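Here is a rough sketch of that selection effect (assumed, made-up detection probabilities and independence between the two AVs, nothing to do with any real product): keep only the malware that slipped past at least one of two AVs, and the measured detection rates collapse for both, which is why these charts look so brutal compared to ordinary tests.

import random

random.seed(0)

# Assumed detection probabilities for two hypothetical AVs
P_A, P_B = 0.90, 0.85
N = 100_000  # simulated malware samples

pool = [(random.random() < P_A, random.random() < P_B) for _ in range(N)]

# Self-selected sample: only malware that evaded at least one of the AVs,
# i.e. it actually got reported as infecting someone's (protected) machine.
reported = [(a, b) for a, b in pool if not (a and b)]

def rate(samples, idx):
    return sum(s[idx] for s in samples) / len(samples)

print("Full pool:     A =", round(rate(pool, 0), 3), "B =", round(rate(pool, 1), 3))
print("Reported only: A =", round(rate(reported, 0), 3), "B =", round(rate(reported, 1), 3))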
Webwasher is basically Antivir + one of their own engines, and because it is deployed at the server level, it can be more aggressive.
Antivir has very aggressive heuristics, particularly if you turn it on. A lot of its detections are packer-based, e.g. files are flagged regardless of content as long as they are packed with a specific packer plus some other generic feature. This solves a lot of problems but creates new ones with FPs....
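To make the packer-based idea concrete, here is a toy sketch (assumed markers and thresholds, not Antivir's actual rules): flag a file if it carries a known packer signature, such as UPX section names, and is high-entropy, without looking at what the code actually does.

import math
from collections import Counter

def entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte; packed or encrypted data sits near 8.
    counts = Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

# Assumed marker: UPX-packed executables carry UPX0/UPX1 section names.
PACKER_MARKERS = (b"UPX0", b"UPX1")

def flag_packed(path: str, entropy_threshold: float = 7.2) -> bool:
    # Flag purely on "known packer + generic feature", regardless of behaviour.
    # This catches a lot of fresh malware but also hits legitimately packed
    # software, which is where the false positives come from.
    data = open(path, "rb").read()
    has_marker = any(m in data for m in PACKER_MARKERS)
    return has_marker and entropy(data) > entropy_threshold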
Fortinet and Ikarus tend to detect *everything*, including clean files...
Panda has a poor reputation, but my experience with them is that they are pretty underrated; of all the AVs they have the best blend of technologies.
In short, all the tests have different methodologies, different conditions, etc.; they are not directly comparable. Which test is most useful depends on what you are looking for.
IMHO, the most important thing is to look at the test set. How did they find/choose the samples? What is the sample size?
Most test results vary because this condition differs so much from test to test.
What steps did they take to ensure the sample consists of real functioning malware as opposed to junk files?
What types of malware were chosen? Trojans, worms, rootkits, adware, keyloggers, etc.? Polymorphic malware?
How was the AV tested? What settings were used?
Generally, when I look at most of the test results, they "make sense" to me, based on my personal experience, reading about the antiviruses, and my personal understanding of the strengths and weaknesses of each AV.
An understanding of statistics is critical; people obsess over minor differences that are well within statistical sampling error.
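Quick back-of-the-envelope (assumed sample size of 1,000): a 95% detection rate measured on 1,000 samples carries a margin of error of roughly ±1.4 percentage points, so 94% vs 95% tells you almost nothing.

import math

def margin_of_error(rate: float, n: int, z: float = 1.96) -> float:
    # Approximate 95% confidence half-width for a detection rate measured on
    # n samples (normal approximation to the binomial).
    return z * math.sqrt(rate * (1 - rate) / n)

n = 1000  # assumed sample size
for rate in (0.94, 0.95):
    print(f"{rate:.0%} detection on {n} samples: +/- {margin_of_error(rate, n):.1%}")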