Like many others, I took advantage of Amazon.com's sale and ordered a Kindle Fire HD last week. When I got around to exploring the Amazon App Store, it didn't take long before I ran into malware.
While searching for a particular benchmarking app I was presented with some additional apps. One of them immediately looked suspicious.
Yesterday the Iranian CERT made an announcement about a new piece of wiper-like malware. We detect these files as Trojan.Win32.Maya.a.
This is an extremely simplistic attack. In essence, the attacker wrote some BAT files and then used a BAT2EXE tool to turn them into Windows PE files. The author seems to have used (a variant of) this particular BAT2EXE tool.
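Just to illustrate how simplistic: because such wrappers carry the original script text inside the resulting PE file, even a crude triage script can flag likely candidates. Here's a minimal sketch of that idea; this is not our detection logic, and the marker list and threshold are invented purely for illustration:

```python
import sys

# Marker strings commonly seen in batch scripts; the list and the
# two-marker threshold are assumptions made for this sketch.
BATCH_MARKERS = [b"@echo off", b"goto ", b"del /", b"set /p"]

def looks_like_bat2exe(path: str) -> bool:
    """Flag PE files that carry obvious batch-script text inside them."""
    with open(path, "rb") as f:
        data = f.read().lower()
    if not data.startswith(b"mz"):        # not a PE file at all
        return False
    hits = sum(1 for marker in BATCH_MARKERS if marker in data)
    return hits >= 2                      # require two markers to cut noise

if __name__ == "__main__":
    for target in sys.argv[1:]:
        verdict = "suspicious" if looks_like_bat2exe(target) else "clean"
        print(f"{target}: {verdict}")
```

A real detection would parse the PE structure and locate the embedded script properly, but for a threat this basic, not much more is needed to triage samples.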
There's no connection to any of the previous wiper-like attacks we've seen. We also don't have any reports of this malware from the wild.
You may have noticed that we lowered our internet threat level to low risk. We took another look at Email-Worm.Win32.VBMania and its prevalence and concluded that the increased threat level was no longer warranted.
The overall number of infections has been quite low. The number of spammed messages is relatively high, but those no longer pose a danger, as the URLs in the emails are all down. So VBMania will not harvest any additional victims through email. Additionally, VBMania fails to (properly) run on Windows 7 when UAC is enabled.
That leaves VBMania with two infection vectors: it creates copies of itself on network shares and USB devices. VBMania can be annoying to clean up manually, but the malware doesn't pose much of a challenge to get rid of with a security product.
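For anyone who does want to hunt down those copies by hand, a small script along these lines can help. This is just a sketch, assuming you already have the SHA-256 of the worm sample; the command-line usage is hypothetical:

```python
import hashlib
import os
import sys

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_copies(root: str, known_hash: str):
    """Walk a drive or share and yield every file matching the worm's hash."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) == known_hash.lower():
                    yield path
            except OSError:
                continue    # skip locked or unreadable files

if __name__ == "__main__":
    # usage: python find_copies.py <drive-or-share> <sha256-of-the-sample>
    for hit in find_copies(sys.argv[1], sys.argv[2]):
        print(hit)
```

Point it at a USB drive or mapped share and it lists every copy the worm has dropped, ready for deletion.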
The noise around VBMania really reminds me of the Bozori worm from 2005. (Some vendors called it Zotob.) For Bozori the overall infection numbers weren't that high either. But, just like with VBMania, some big media corporations got hit which created a lot of extra buzz.
Overall, this threat is far from sophisticated - the malicious techniques it uses are ancient. As a matter of fact, the heuristics that shipped with our KAV6 release over four years ago detected this sample proactively.
To be honest, I'm still somewhat amazed that VBMania managed to make the headlines in the same week we saw a very sophisticated zero-day attack against Adobe Reader.
Corporations that ended up infected with VBMania should seriously rethink their security over the weekend.
Over the weekend I spent more time looking into the zero-day LNK (shortcut) Windows vulnerability that Aleks blogged about last week. It’s now been classified as CVE-2010-2568 and is being actively exploited in the wild.
My main conclusion is that this vulnerability is a fundamental part of how Windows handles LNK files. This means there are two huge negatives. Firstly, as this functionality is pretty standard, it's going to be harder to create effective generic detections which don't cause false positives. Secondly, because the flaw sits in core functionality, a proper fix is likely to take Microsoft longer than usual.
We’ve released generic detection for malicious LNK files which try to exploit the feature. I think the LNK format will start receiving a lot more attention now, both from the good guys and the bad, so do take a look at the mitigations put up by Microsoft. I’m sure it will be time well spent, as I fully expect this vulnerability to be widely exploited while we’re waiting for the patch.
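To illustrate why false positives are such a worry here, consider the crudest possible generic check, sketched below. This is not our detection logic; it simply flags shortcuts that embed a DLL path, which the exploit LNKs for CVE-2010-2568 do, but which some legitimate shortcuts do as well:

```python
import sys

LNK_MAGIC = b"\x4c\x00\x00\x00"            # LNK HeaderSize field, 0x0000004C
DLL_UTF16 = ".dll".encode("utf-16-le")     # strings in LNK bodies are UTF-16LE

def suspicious_lnk(path: str) -> bool:
    """Flag shortcut files that embed a DLL path anywhere in their body."""
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(LNK_MAGIC):     # not a shortcut file at all
        return False
    return DLL_UTF16 in data.lower()       # case-insensitive search

if __name__ == "__main__":
    for target in sys.argv[1:]:
        verdict = "flag for analysis" if suspicious_lnk(target) else "ok"
        print(f"{target}: {verdict}")
```

A real generic detection has to parse the shell item ID list inside the LNK properly rather than grep the raw bytes, and that is exactly where the engineering effort goes.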
Today Microsoft is ending support for XP/Service Pack 2. According to reports there are still a lot of machines running XP/SP2. So this sounds like a serious problem, right? Actually, I’m not convinced of that.
Let’s look first at consumer machines – those which aren’t being centrally managed. Why would these machines still be running SP2? Obviously, Windows Update must have been disabled. I can only think of two main reasons why that would be the case: either a malware infection is somehow preventing WU from working, or people have disabled WU on pirated copies to be sure they can continue to use Windows without having to pay for it.
In the first case, infection has already occurred. In the second case, it’s very unlikely that the machine was ever patched after the initial SP2 install. That means such machines are vulnerable to exploits for any XP vulnerability discovered after August 25, 2004, when SP2 was released. In other words, these computers have been vulnerable for a long, long time.
What about the business environments still running SP2? In the vast majority of cases the admins will have decided that the time just isn’t ripe for SP3. SP3 was released just over two years ago. If admins haven’t rolled out SP3 yet, it seems pretty unlikely that the other software they’re running - such as Office and Adobe Reader – is going to be up to date. These are the same companies that are still running Internet Explorer 6.
Given all this, I don’t think ending support for SP2 will create any sort of nirvana for cybercriminals. All the unpatched (and attackable) machines have been this way for a long time now – and chances are, if they were going to be infected, it would have happened a long time ago.
AMTSO (the Anti-Malware Testing Standards Organization) is a coalition of security professionals, including many antivirus product vendors, product testing organizations and publishers, and some interested individuals. Given the highly technical nature of its activities, it is inevitable that the organization owes some of its authority to the expertise of the security specialists within its ranks, but that doesn’t make it a vendor lobby group. As Kurt Wismer (not himself a member) points out here: “many of them are employed by vendors precisely because that's one of the primary places where one with expertise in this field would find employment.” Given some recent negative publicity aimed at AMTSO (example), we want to collectively clarify the following points on behalf of the anti-malware industry, where we come from, and indirectly on behalf of AMTSO.
We find it strange that expertise in the testing field is somehow seen as a disqualification, given the specialist expertise that characterizes the group.
While some distrust anything a vendor says and accept uncritically anything a tester says, others are puzzled that different tests can vary so dramatically in their evaluation of the same product. While this may sometimes be simply due to poor testing practice, there are other, deep-seated reasons, one being the high volume of malware and new attacks seen every day. Vendors work hard to close the gap between the ideal 100% detection and what is actually achievable, by developing a range of technologies, both proactive and reactive. The capabilities of products can change, while tests using broadly similar methodology can generate dramatically ‘conflicting’ results due to different approaches to the selection, classification and validation of samples and URLs, among other factors.
AMTSO aims to promote precisely the kinds of tests that clearly show up these variations, and its members were flying the flag for real world testing before AMTSO ever formally existed, believing that sound testing benefits vendors and customers as well as testers. As an industry, we are all too aware that we cannot currently offer detection of all known and unknown malware. The relatively high scores achieved in established tests by major vendors do not necessarily reflect real world performance, but real-world detection cannot be measured in terms of product comparison with no checks on selection, classification and validation of malicious samples and URLs.
Another misconception is that AMTSO members simply don’t like tests done by non-AMTSO members. This is not the case: none of the undersigned have a problem with labs that intend to provide objective, real-world testing. (Though other testers are entitled to object vehemently when one company claims to be the only one doing live, internet-connected testing, and that all other testers are doing static testing based on the WildList.)
However, charging consultancy fees for the release of any information relating to a test (even to participants) is very different from the transparency that AMTSO advocates, though we recognize that full-time testers generate revenue like any other business. And when a tester claims to have shared information about methodology in advance, yet subsequently fails to provide methodological and sample data, even to vendors prepared to pay the escalating consultancy fees required for such information, this suggests that the tester is not prepared to expose its methodology to informed scrutiny and validation. That compromises its aspirations to be taken seriously as a testing organization in the same league as the mainstream testing organizations committed to working with AMTSO.
No-one believes that AMTSO has all the answers and can “fix” testing all by itself, but it has compiled and generated resources that have made good testing practice far more practicable and understandable. The way for testers (and others) to improve those resources is by talking to and working with AMTSO in a spirit of co-operation: the need for transparency is not going to go away.
As you may have read, AMTSO had another meeting a couple of weeks ago. AMTSO is strongly committed to improving the overall relevance of anti-malware testing.
During our latest meeting we accepted two new papers. The first paper is on whole product testing and the second is on performance testing. As the vast majority of people in the AV space have pointed out, the tests of old have never accurately reflected real-life performance. With the changes the threat landscape has seen over the past few years, this has become truer than ever.
So rather than having tests which focus on individual components to measure detection of, or rather protection against, threats, the entire product should be tested. Just think of a scenario where an email-borne threat is not detected by the file scanner, but the anti-spam component is able to flag the message it comes with as spam.
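A toy model makes the scoring difference clear. Everything in the snippet below, the layer names and the verdicts, is invented purely for illustration:

```python
# Each sample maps to the verdict of every protection layer in the product.
samples = {
    "mail-worm-1": {"file_scanner": False, "anti_spam": True,  "url_filter": False},
    "drive-by-2":  {"file_scanner": True,  "anti_spam": False, "url_filter": True},
    "dropper-3":   {"file_scanner": False, "anti_spam": False, "url_filter": False},
}

# A component test only credits one layer; a whole product test credits the
# sample as blocked if ANY layer stops it.
scanner_only = sum(layers["file_scanner"] for layers in samples.values())
whole_product = sum(any(layers.values()) for layers in samples.values())

print(f"file scanner alone: {scanner_only}/{len(samples)} blocked")
print(f"whole product:      {whole_product}/{len(samples)} blocked")
```

A component-level test would score the file scanner at 1 out of 3 here, while the product as a whole actually protected the user in 2 out of 3 cases.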
The other document talks about how to more accurately test the performance – or speed (impact) – of AV solutions. One scenario where this will be useful is determining how much RAM a certain product occupies. Many people try to establish this by looking at the amount of (virtual) memory taken by the processes belonging to the product. However, certain products may also inject some of their DLLs into other processes, unintentionally masking some of their footprint. It’s therefore best practice to compare the system’s entire RAM usage instead.
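As a sketch of the difference between the two approaches, assuming hypothetical process names and using the third-party psutil package:

```python
import psutil  # third-party: pip install psutil

# Hypothetical process names for the product under test.
PRODUCT_PROCESSES = {"av_service.exe", "av_gui.exe"}

def naive_product_rss_mb() -> float:
    """Sum resident memory of the product's own processes only.
    This misses DLLs the product injects into other processes."""
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] in PRODUCT_PROCESSES:
            total += proc.info["memory_info"].rss
    return total / 2**20

def system_used_mb() -> float:
    """Total memory in use across the whole system. Compare this figure
    against a baseline taken on the same machine without the product."""
    vm = psutil.virtual_memory()
    return (vm.total - vm.available) / 2**20

print(f"naive per-process sum: {naive_product_rss_mb():.1f} MB")
print(f"system-wide in use:    {system_used_mb():.1f} MB")
```

The system-wide figure, compared against a clean baseline on the same machine, captures memory that the per-process sum silently misses, including injected DLLs.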
The bad news, I say jokingly, comes from one of the new documents we continued working on in Helsinki. The False Positive testing document has proven to be quite the challenge and has sparked a lot of debate. The area of testing false positives on web resources, such as domains and web scripts (an interest of mine), proved particularly challenging.
It definitely looks like testers are continuing to improve their tests to more accurately reflect real life scenarios. And that’s great for two main reasons. Most importantly, it gives users better information. Secondly, it gives vendors the opportunity to spend their resources focusing on things that protect the user. So it’s great to see the progress that we’re making in AMTSO.
If you haven't had a chance to read the documents go to the AMTSO web site and have a look!
The Zeus bot is one of the most prolific bots both in the wild and in the media. Lately there have been quite a few reports on various aspects of Zeus, such as new research and the Troyak takedown.
Naturally, this is great news: awareness is still lacking, and the heavy reporting around Zeus is making more people aware of the sophistication of the cybercriminal underground. Unfortunately, many of the reports contain a recurring error: they talk about “the Zeus botnet”, which is an inaccurate reflection of reality.
The reality is that there are many, many different Zeus botnets, all maintained by different cybercriminals. The number of unique Zeus botnets is likely in the hundreds. The cybercriminals behind the Zeus bot will sell it to anyone, who can then start their own unique botnet. Going even further, there are side-branches of Zeus maintained by other cybercriminals.
Given this situation, it’s not unlikely that machines in a large enterprise may be infected with Zeus bot variants which are controlled by different cybercriminals and therefore belong to different Zeus botnets.
In order to create greater distinction, we’ve seen a security company give a particular Zeus botnet another name when talking about it in the media. From my own perspective this novel idea didn’t quite work, as it seemed to cause more confusion, not less.
Sadly, I’m not convinced that a botnet naming convention for variants of a particular bot will help the public have a better understanding in the short term. So where does that leave us? Well, I think there is an easy guideline.
If the security community is reasonably sure that a certain bot is controlled by a single cybercriminal group, we can refer to the threat as a botnet. Examples of this rule are Conficker, Storm and Mebroot. If the bot is sold in the underground, we should refer to the threat as a bot, or to botnets created with that bot. Examples of this rule are Zeus, SpyEye and Poison Ivy.
As you've most probably read by now, search engine providers have been working on providing so-called real time search results. These results include content from, for instance, Facebook, Twitter and MySpace.
We may not all realize this, but we have just turned yet another technological corner. Everyone will have exponentially more and faster access to personal information now including data from social networks. Everyone naturally includes cybercriminals.
In my opinion, cybercriminals now have a great new opportunity to combine two major threat vectors - Black Hat Search Engine Optimization and social networks. Now turnaround will be faster and more people will see the malicious links created by black hat SEO – something search engines have already failed to control.
This is important, because to date attacks via social networking sites aren't yet as prevalent or sophisticated as they could be. The gang behind Koobface has recently stepped up their game but overall isn't really technically advanced. In fact, from where I sit, the development of malware that's targeting social networks is really reminiscent of that of IM-Worms some years back. It's the same situation: your friend's compromised account is used to persuade you to click on a malicious URL. So we'll probably soon see the social engineering approaches used to spread social networking threats following a similar evolutionary path.
I'm also concerned about how real time search results will affect our online privacy.
Clearly, it's no coincidence that Facebook introduced their new set of privacy guidelines just days before Google introduced real time search. The recommended Facebook settings - which surely will be used by the vast majority of the Facebook community - put a lot of information into the public and semi-public domains.
Yes, this approach will definitely make real time search results more effective. But I think the recommended settings expose too much PII.
What does this hold for the future? I'm convinced that real time search is just in its infancy. I'm positive that soon enough search engine providers will offer everyone the opportunity to use real time search with their Facebook/Twitter/MySpace/etc. credentials. This would then allow people to more effectively crawl what their friends - or friends of friends - are up to. An opportunity that the cyber criminals will surely not let go to waste.
Today Twitter was hit by another DDoS attack. A lot of people are asking why Twitter doesn't seem to be coping with attacks like these. And at the same time, more and more people are jumping on the bandwagon, saying to stay away from Adobe products.
What's the link? Two extremely high profile companies which are being targeted by various cyber criminals around the world. In addition, both of these companies have less than outstanding track records when it comes to security issues.
But that's pretty much where the parallel ends. Looking at Twitter over the course of this year what conclusions can we draw?