Virus attacks have firmly established themselves as the leading IT security threat. Not only do they result in financial losses, but they also serve as a vehicle for many other security threats, such as the theft of confidential information and unauthorized access to sensitive data. The antivirus industry has responded by coming up with a number of new approaches to protecting IT infrastructures, including proactive technologies, emergency updates during outbreaks, and significantly more frequent antivirus database updates. This paper is the first in a series of articles that will provide more information on the newest technologies used by antivirus companies and help users to judge the effectiveness of these technologies more objectively. In this article, we will focus on proactive technologies.
Virus attacks cause enormous damage and, equally important, the number of types of malicious code is growing at an increasing rate. In 2005, growth in the number of malicious programs exploded: according to Kaspersky Lab, the average number of viruses detected monthly reached 6,368 by the end of the year. Overall growth for the year reached 117% compared with 93% for the previous year.
Likewise, the nature of the threat itself has changed. Malicious programs are not only much more numerous, but also significantly more dangerous than ever before. The antivirus industry has responded to the challenge with a number of new approaches to antivirus protection, including proactive technologies, shorter response times to new threats that can cause outbreaks, as well as more frequent antivirus database updates. This article provides a detailed analysis of proactive protection, which is often promoted by vendors as a panacea for all existing and even all possible viruses.
Contemporary antivirus products use two main approaches to detect malicious code - signature-based and proactive/heuristic analysis. The first method is fairly simple: objects on the user’s computer are compared to templates (i.e., signatures) of known viruses. This technology involves continually tracking new malicious programs and creating their descriptions, which are then included in the signature database. Therefore, an antivirus company should have an effective service for tracking and analyzing malicious code (that is, an antivirus lab). The main criteria used to evaluate how effectively the signature-based approach is implemented include new threat response times, frequency of updates and detection rates.
The signature-based method has a number of obvious shortcomings. The primary disadvantage is the delayed response time to new threats. There is always a time lag between the appearance of a virus and the release of its signature. Contemporary viruses are capable of infecting millions of computers in a very short time.
Thus, proactive/heuristic methods of virus detection are becoming increasingly popular. The proactive approach does not involve releasing signatures. Instead, the antivirus program analyzes the code of objects scanned and/or the behavior of the applications launched and decides whether the software is malicious based on a predefined set of rules.
In principle, this technology can be used to detect malicious programs that are as yet unknown, which is why many antivirus software developers were quick to advertise proactive methods as a panacea for the rising wave of new malware. However, this is not the case. To judge the effectiveness of the proactive approach and whether it can be used independently from signature-based methods, one must understand the principles upon which proactive technologies are based.
There are several approaches which provide proactive protection. We will look at the two which are the most popular: heuristic analyzers and behavior blockers.
A heuristic analyzer (or simply, a heuristic) is a program that analyzes the code of an object and uses indirect methods of determining whether it is malicious. Unlike the signature-based method, a heuristic can detect both known and unknown viruses (i.e., those created later than the heuristic).
An analyzer usually begins by scanning the code for suspicious attributes (commands) characteristic of malicious programs. This method is called static analysis. For example, many malicious programs search for executable programs, open the files found and modify them. A heuristic examines an application’s code and increases its “suspiciousness counter” for that application if it encounters a suspicious command. If the value of the counter after examining the entire code of the application exceeds a predefined threshold, the object is considered suspicious.
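The counter-based scheme above can be sketched in a few lines. This is a toy illustration: the patterns, weights and threshold below are invented, and a real analyzer matches machine-code constructs rather than readable byte strings.

```python
# Toy static heuristic: scan an object's code for suspicious patterns,
# accumulate a weighted "suspiciousness counter", compare to a threshold.
# All patterns, weights and the threshold are invented for illustration.
SUSPICIOUS_PATTERNS = {
    b"OpenExecutable": 2,   # hypothetical marker: searches for executables
    b"WriteSelfCopy":  3,   # hypothetical marker: self-replication code
    b"DeleteOriginal": 1,   # hypothetical marker: removes its original file
}
THRESHOLD = 4

def suspiciousness(code: bytes) -> int:
    """Sum the weights of every suspicious pattern found in the code."""
    return sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if pat in code)

def is_suspicious(code: bytes) -> bool:
    """Flag the object if its counter exceeds the predefined threshold."""
    return suspiciousness(code) > THRESHOLD

sample = b"...OpenExecutable...WriteSelfCopy..."
print(suspiciousness(sample), is_suspicious(sample))  # 5 True
```

The weakness described next (low detection of new code, high false positives) follows directly from this design: any legitimate program that happens to contain the patterns is flagged, and any malware that avoids them is missed.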
The advantages of this method include ease of implementation and high performance. However, the detection rate for new malicious code is low, while the false positive rate is high.
Thus, in today’s antivirus programs, static analysis is used in combination with dynamic analysis. The idea behind this combined approach is to emulate the execution of an application in a secure virtual environment (which is also called an emulation buffer or “sandbox”) before it actually runs on a user’s computer. In their marketing materials, vendors also use another term - “virtual PC emulation”.
A dynamic heuristic analyzer copies part of an application’s code into the emulation buffer of the antivirus program and uses special “tricks” to emulate its execution. If any suspicious actions are detected during this “quasi-execution”, the object is considered malicious and its execution on the computer is blocked.
The dynamic method requires significantly more system resources than the static method, because analysis based on this method involves using a protected virtual environment, with execution of applications on the computer delayed according to the amount of time required to complete the analysis. At the same time, the dynamic method offers much higher malware detection rates than the static method, with much lower false positive rates.
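A heavily simplified sketch of the idea behind dynamic analysis: the "code" here is a mock instruction trace and the "sandbox" is a throwaway copy of a virtual file system, so quasi-execution never touches the real machine. The instruction and file names are invented; a real emulator interprets actual machine instructions.

```python
# Toy "sandbox": quasi-execute a mock instruction trace against a copy of
# a virtual file system and flag suspicious actions. Names are invented.
VIRTUAL_FS = {"notepad.exe": "clean", "report.doc": "clean"}

def emulate(trace) -> bool:
    """Replay (operation, target) pairs against a disposable copy of the
    virtual file system; return True if the code tries to modify an
    executable - the kind of action blocked before real execution."""
    fs = dict(VIRTUAL_FS)  # changes are confined to the emulation buffer
    for op, target in trace:
        if op == "write" and target.endswith(".exe"):
            return True    # suspicious: patching an executable file
        if op == "write":
            fs[target] = "modified"
    return False

# Hypothetical trace extracted from an object under analysis
trace = [("open", "notepad.exe"), ("write", "notepad.exe")]
print(emulate(trace))  # True -> execution on the real computer is blocked
```

The performance cost mentioned above is also visible here: every object must be replayed step by step before it is allowed to run.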
The first heuristic analyzers appeared in antivirus products quite some time ago, and all antivirus solutions now take advantage of more or less advanced heuristics.
A behavior blocker is a program that analyzes the behavior of applications executed and blocks any dangerous activity. Unlike heuristic analyzers, where suspicious actions are tracked in emulation mode (dynamic heuristics), behavior blockers work in real-life conditions.
First-generation behavior blockers were not very sophisticated. Whenever a potentially dangerous action was detected, the user was prompted to allow or block the action. Although this approach worked in many situations, “suspicious” actions were sometimes performed by legitimate programs (including the operating system) and users who didn’t necessarily understand the process were often unable to understand the system’s prompts.
New-generation behavior blockers analyze sequences of operations rather than individual actions. This means that determining whether the behavior of applications is dangerous relies on more sophisticated analysis. This helps to significantly reduce the number of situations in which the user is prompted by the system and increases the reliability of malware detection.
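The difference between flagging single actions and flagging sequences can be sketched as follows. The rule names and event names are invented for illustration; a real blocker intercepts system calls.

```python
# Toy sequence-based behavior blocker: a verdict is issued only when a
# whole suspicious sequence of events occurs in order, not on any single
# event. Rule and event names are invented for illustration.
RULES = {
    "mass-mailer": ["read_address_book", "open_smtp_connection", "send_mail"],
    "injector":    ["open_process", "write_process_memory",
                    "create_remote_thread"],
}

def matches(rule_seq, events) -> bool:
    """True if rule_seq occurs as an ordered subsequence of events."""
    it = iter(events)
    return all(step in it for step in rule_seq)

def verdicts(events):
    """Names of all rules whose full sequence appears in the event log."""
    return [name for name, seq in RULES.items() if matches(seq, events)]

events = ["open_file", "read_address_book", "open_smtp_connection",
          "write_log", "send_mail"]
print(verdicts(events))  # ['mass-mailer']
```

Note that a single event such as `send_mail` on its own triggers nothing, which is exactly why sequence analysis produces fewer prompts than first-generation blockers.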
Today’s behavior blockers are able to monitor a wide range of events in the system. Their primary purpose is to control dangerous activity – that is, analyze the behavior of all processes running in the system and save information about all changes made to the file system and the registry. If an application performs dangerous actions, the user is alerted that the process is dangerous. The blocker can also intercept any attempts to inject code into other processes. Moreover, blockers can detect rootkits - i.e., programs that conceal the access of malicious code to files, folders and registry keys, as well as make programs, system services, drivers and network connections invisible to the user.
Another feature of behavior blockers that is particularly worth mentioning is their ability to control the integrity of applications and the Microsoft Windows system registry. In the latter case, a blocker monitors changes made to registry keys and can be used to define access rules to them for different applications. This makes it possible to roll back changes after detecting dangerous activity in the system in order to recover the system and return it to its state before infection, even after unknown programs have performed malicious activity.
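The roll-back mechanism described above amounts to keeping a change journal. A minimal sketch, with a plain dictionary standing in for the Windows registry and invented key names:

```python
# Toy change journal: record the previous value of every registry key an
# application touches so changes can be rolled back after a verdict.
class RegistryMonitor:
    def __init__(self, registry):
        self.registry = registry
        self.journal = []                 # (key, old_value) pairs

    def set_key(self, key, value):
        """Apply a change, but journal the old value first."""
        self.journal.append((key, self.registry.get(key)))
        self.registry[key] = value

    def rollback(self):
        """Undo changes in reverse order, restoring the pre-infection state."""
        for key, old in reversed(self.journal):
            if old is None:
                self.registry.pop(key, None)   # key did not exist before
            else:
                self.registry[key] = old
        self.journal.clear()

registry = {"Run\\Updater": "updater.exe"}
mon = RegistryMonitor(registry)
mon.set_key("Run\\Updater", "malware.exe")   # hijacked autorun entry
mon.set_key("Run\\Dropper", "dropper.exe")   # newly added autorun entry
mon.rollback()                               # dangerous activity detected
print(registry)  # {'Run\\Updater': 'updater.exe'}
```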
Unlike heuristics, which are used in nearly all contemporary antivirus programs, behavior blockers are much less common. One example of an effective new-generation behavior blocker is the Proactive Defence Module included in Kaspersky Lab products.
The module includes all of the features mentioned above and also, importantly, a convenient system that informs the user of the dangers associated with any suspicious actions detected. Any behavior blocker requires input from the user at some point, so the user must be sufficiently competent. In practice, users often do not have the knowledge required, and information support (in effect, decision-making support) is an essential part of any contemporary antivirus solution.
To summarize, a behavior blocker can prevent both known and unknown (i.e., written after the blocker was developed) viruses from spreading, which is an undisputed advantage of this approach to protection. On the other hand, even the latest generation of behavior blockers has an important shortcoming: actions of some legitimate programs can be identified as suspicious. Furthermore, user input is required for a final verdict regarding whether an application is malicious, which means that the user needs to be sufficiently knowledgeable.
Some antivirus vendors include statements in their advertising and marketing materials that proactive/heuristic protection is a panacea for new threats, which does not require updating and therefore is always ready to block attacks, even for those viruses that do not as yet exist. Moreover, brochures and datasheets often apply this not only to threats that use known vulnerabilities, but to so-called “zero-day” exploits as well. In other words, according to these vendors, their proactive technologies are capable of blocking even malicious code which uses unknown flaws in applications (those for which patches are not yet available).
Unfortunately, the authors of these materials are either being disingenuous or do not understand the technology well enough. Specifically, combating malicious code is described as a fight between virus writers and automatic methods (proactive/heuristic). In reality, the fight is between people - virus writers versus antivirus experts.
The proactive protection methods described above (heuristics and behavior blockers) are based on “knowledge” about suspicious actions characteristic of malicious programs. However, this “knowledge” (i.e., a set of behavior-related rules) is input into the program by antivirus experts and is obtained by analyzing the behavior of known viruses. Thus, proactive technologies are powerless against malicious code that uses completely new methods for penetrating and infecting computer systems, which appeared after the rules were developed – this is what zero-day threats are all about. Additionally, virus writers work hard to find new ways of evading behavior rules used by existing antivirus systems, which in turn significantly reduces the effectiveness of proactive methods.
Antivirus developers have no choice but to update their set of behavior rules and upgrade their heuristics in response to the emergence of new threats. These types of updates are certainly less frequent than in the case of virus signatures (code templates), but still need to be performed regularly. As the number of new threats increases, the frequency of such updates will inevitably rise as well. As a result, proactive protection will evolve into a variant of the signature method, albeit based on “behavior” rather than code patterns.
By concealing the need to update proactive protection from users, some antivirus vendors in effect deceive both their corporate and personal clients and the press. As a result, the public has a somewhat erroneous idea of the capabilities of proactive protection.
Despite their shortcomings, proactive methods do detect some threats before the relevant signatures are released. An example of this can be seen in the response of antivirus solutions to a worm called Email-Worm.Win32.Nyxem.e (Nyxem).
The Nyxem worm (also known as Blackmal, BlackWorm, MyWife, Kama Sutra, Grew and CME-24) spreads via email attachments (in messages containing links to pornographic and erotic sites) and via files on open network resources. Once active, it takes the virus very little time to destroy information on the hard drive: files in up to 11 different formats (including Microsoft Word, Excel, PowerPoint, Access and Adobe Acrobat) are overwritten with a meaningless set of characters. Another distinctive characteristic of Nyxem is that it only becomes active on the third of each month.
A research group from Magdeburg University (AV-Test.org) carried out an independent study to assess the time it took different developers to respond once Nyxem emerged. It turned out that several antivirus products were able to detect the worm using proactive technologies, i.e. before the signatures were released:
Proactive detection of Nyxem by behavior blockers:

| Product | Result |
|---|---|
| Kaspersky Internet Security 2006 (Beta 2) | Detected |
| Internet Security Systems: Proventia-VPS | Detected |
| Panda Software: TruPrevent Personal | Detected |

Proactive detection of Nyxem by heuristics:

| Product | Verdict |
|---|---|
| eSafe | Trojan/Worm (suspicious) |
| Nod32 | NewHeur_PE (probably unknown virus) |
[Table: Time of release of signatures to detect Nyxem]
Overall, eight antivirus products detected Nyxem using proactive methods. Does this, however, mean that proactive technologies can replace the “classical” signature-based approach? Certainly not. To be valid, analysis of the effectiveness of proactive protection should be based on tests involving large virus collections, not individual viruses, however notorious.
One of the few widely acknowledged independent researchers who analyze proactive methods used by antivirus products on large virus collections is Andreas Clementi (www.av-comparatives.org). To find out which antivirus programs are capable of detecting threats that do not as yet exist, solutions can be tested on viruses that appeared recently, e.g., within the past three months. Naturally, antivirus programs are run with signature databases released three months ago, so that they are confronted with threats that were then “unknown” to them. Andreas Clementi’s focus is on the results of this type of testing.
Based on the results of testing conducted in 2005, the heuristics used in the Eset, Kaspersky Anti-Virus and Bitdefender solutions were the most effective.
[Table: Proactive (heuristic) detection rates]
The test used a collection that included 8,259 viruses. From the results above, we see that the highest detection rate in the test was about 70%. This means that each of the solutions tested missed at least 2,475 viruses, hardly an insignificant figure.
In another test of the effectiveness of heuristic analyzers conducted by experts from Magdeburg University (AV-Test.org) in March 2006 for PC World magazine, detection rates achieved by leaders of the test did not exceed 60%. Testing was conducted using one-month old and two-month old signatures.
It should be noted that the high detection rates demonstrated by heuristic analyzers have a downside: their false positive rates are also very high. To operate normally, an antivirus program should strike a balance between detection rates and false positive rates. This is also true of behavior blockers.
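The trade-off between detection rate and false positive rate can be illustrated with a toy calculation: the same (invented) suspiciousness scores judged against a strict and a lax threshold.

```python
# Toy illustration of the detection / false-positive trade-off.
# The scores below are invented for illustration.
malware_scores = [8, 6, 5, 3, 2]    # scores assigned to malicious samples
benign_scores  = [4, 3, 1, 1, 0]    # scores assigned to legitimate programs

def rates(threshold):
    """(detection rate, false positive rate) at a given threshold."""
    detected  = sum(s >= threshold for s in malware_scores) / len(malware_scores)
    false_pos = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    return detected, false_pos

print(rates(5))  # (0.6, 0.0)  strict: fewer detections, no false alarms
print(rates(3))  # (0.8, 0.4)  lax: more detections, many false positives
```

Lowering the threshold raises both numbers at once, which is exactly the balance an antivirus vendor has to strike.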
The results of the analyses conducted by AV-comparatives.org and AV-Test.org provide a solid illustration of the fact that proactive methods alone are incapable of providing the necessary detection rates. Antivirus vendors are perfectly aware of this and, for all their rhetoric on proactive technologies, continue to use classical signature-based detection methods in their solutions. Tellingly, developers of purely proactive solutions (Finjan, StarForce Safe'n'Sec) must purchase licenses for “classical” signature-based technologies from third parties for use in their products.
Naturally, signature-based methods have shortcomings as well, but so far, the antivirus industry has been unable to come up with anything capable of replacing this classic approach. Consequently, the primary criteria to measure the effectiveness of antivirus solutions will continue to include not only the quality of proactive protection, but response time to new virus threats (the time it takes to add the relevant signature to the database and deliver the update to users) as well.
Below is information on average response times demonstrated by leading antivirus vendors for major antivirus threats during 2005. The Magdeburg University research group (AV-Test.org) analyzed the time it took developers to release updates containing the relevant signatures. The analysis covered different variants of 16 worms that were most common in 2005, including Bagle, Bobax, Bropia, Fatso, Kelvir, Mydoom, Mytob, Sober and Wurmark.
| Average response time | Vendors (2005) |
|---|---|
| 0 to 2 hours | Kaspersky Lab |
| 2 to 4 hours | BitDefender, Dr. Web, F-Secure, Norman, Sophos |
| 4 to 6 hours | AntiVir, Command, Ikarus, Trend Micro |
| 6 to 8 hours | F-Prot, Panda Software |
| 8 to 10 hours | AVG, Avast, CA eTrust-InocuLAN, McAfee, VirusBuster |
| 10 to 12 hours | Symantec |
| 12 to 14 hours | — |
| 14 to 16 hours | — |
| 16 to 18 hours | — |
| 18 to 20 hours | CA eTrust-VET |
In summary, a number of important conclusions can be made from the above. First of all, the proactive approach to combating malicious programs is the antivirus industry’s response to the ever-growing stream of new malware and increasing rates at which it spreads. Existing proactive methods are indeed helpful in combating many new threats, but the idea that proactive technologies can replace regular updates to antivirus protection is a fallacy. In reality, proactive methods require updating as much as signature-based methods.
Existing proactive techniques alone cannot ensure high detection rates for malicious programs. Furthermore, higher detection rates are in this case accompanied by higher false positive rates. In this situation, the new threat response time remains a solid measure of antivirus program effectiveness.
For optimal antivirus protection, proactive and signature-based methods should be used together, given that top detection rates can be achieved only by combining these two approaches. The figure below shows results of testing conducted by Andreas Clementi (www.av-comparatives.org) to determine the overall (signature-based + heuristic) malicious program detection levels. It may seem that the differences between programs that performed well in tests are small. Yet, it should be kept in mind that the test was performed on a collection of over 240,000 viruses and a difference of 1% accounts for about 2,400 missed viruses.
[Table: Overall detection rates]
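For completeness, the arithmetic behind the "1% accounts for about 2,400 missed viruses" figure quoted above:

```python
# On a 240,000-virus test collection, each percentage point of overall
# detection rate corresponds to thousands of samples.
collection = 240_000
per_point = round(collection * 0.01)
print(per_point)  # 2400 viruses per 1% of detection rate
```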
Users of antivirus solutions should not place too much trust in the information they find in vendor marketing materials. Independent tests that compare the overall capabilities of products are best suited to assessing the effectiveness of solutions available on the marketplace.