As long-time blog readers may know, I shifted my focus to North American threats some three years ago. Ever since, I've noticed major cultural differences in how security issues get tackled.
One way in which the difference is very clear is the use of secret questions as an added security measure. While secret questions are not overly common in Europe, they're very popular in the USA.
It goes without saying that the out-of-band authentication used by many European banks is a much more secure approach than asking a secret question alongside a regular password. And banks are just one example of many. Secret questions are everywhere now.
Enter the Facebook era. Rarely do I encounter a secret question whose answer people wouldn't likely have posted on Facebook. It's even worse with services that allow users to reset their passwords simply by answering the secret question(s) correctly.
A short while ago, I decided to prepare a presentation on web vulnerabilities and specifically on XSS attacks. This involved studying the way today’s filtration systems work.
I selected the most popular Russian social networking website, VKontakte.ru, as a test bed. One thing that grabbed my attention was the updated user status system.
The HTML code in the part of the page where users edit their status messages is shown below:
As you can see, filtering is performed by the infoCheck() function. The status itself is located in this string:
What we have here is two-step filtration. The first step is performed when the user enters the status message. The second step involves converting the status message to text and returning it to the page in the form in which other users will see it.
While the second step definitely works well and clearly cannot be turned into active XSS, things are not as simple where the first step is concerned, so that is the step we will look at in greater detail.
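Why the second step is safe can be shown with a minimal sketch. This is an illustration only (VKontakte's actual server-side code is not public): HTML-escaping the stored status before rendering turns any injected markup into inert text.

```python
import html

def render_status(status: str) -> str:
    # Output encoding: every HTML metacharacter in the stored status is
    # converted to an entity before it reaches the page, so the browser
    # renders it as text instead of parsing it as markup.
    return html.escape(status, quote=True)

payload = "<script>alert()</script>"
print(render_status(payload))
# &lt;script&gt;alert()&lt;/script&gt;
```

Whatever the first step lets through, output encoding at the rendering stage neutralizes it, which is why converting this into active XSS is not possible.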
Predictably, the simple <script>alert()</script> did not work, and the status remained empty. Other 'script-like' attempts didn't work either; it seems this particular string is explicitly filtered.
However, the <script> tag is not essential for a script to be executed. The first vulnerability is triggered on the user's machine via the <img> tag: by entering the string <img src=1.gif onerror=some_function> as the user's status, we can get that function executed. For example, we can call profile.infoSave(), which is normally called with an empty parameter to clear the status, but pass it a parameter of our choice. Thus, if we enter <img src=1.gif onerror=profile.infoSave('XSS')>, we get the string "XSS" as our status message:
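This behaviour is consistent with a blacklist filter. A minimal sketch (a hypothetical Python reconstruction, not VKontakte's actual code) shows why stripping <script> tags alone is not enough:

```python
import re

def naive_filter(status: str) -> str:
    # Hypothetical blacklist reconstruction: strip <script>/</script> tags,
    # but leave all other markup untouched.
    return re.sub(r"(?is)<\s*/?\s*script[^>]*>", "", status)

# The explicit attack is caught...
blocked = naive_filter("<script>alert()</script>")   # only "alert()" survives

# ...but an event handler on a broken image sails straight through:
payload = "<img src=1.gif onerror=profile.infoSave('XSS')>"
bypass = naive_filter(payload)   # returned unchanged; onerror fires in the browser
```

Any filter that enumerates bad tags instead of whitelisting safe ones is doomed to miss one of the dozens of HTML attributes that can carry JavaScript.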
Another interesting vulnerability associated with the filter is that the <A> tag is not filtered. If we enter <A HREF="//www.google.com/">XSS</A> as our status, we get a hyperlink that, when clicked, brings up a status editing window and, a moment later, opens google.com.
As we all remember, XSS stands for cross-site scripting, so I decided to test the next vulnerability using a third-party website with a script hosted on it. In addition to the tags mentioned above, the <iframe> tag also successfully passed the filter. As a result, entering <iframe src="yoursite.com" width="100%" height="300"> in the status line produces an iframe which loads the third-party page and launches the script hosted on it. Below is an example of what the iframe can look like:
This is a more serious vulnerability than the other two. One way of exploiting it is to craft a URL that changes the user's status and send it to the victim in the hope that they will click on it. The script will be executed on the victim's page even before the status message is published. This is a classic example of passive XSS.
These vulnerabilities had existed since 1 August 2010, when the new user status system was introduced. We notified VKontakte's administration on 1 March 2011, and the vulnerabilities were closed on 3 March.
This month's Patch Tuesday comprises three bulletins covering four vulnerabilities. Two bulletins affect Windows while the other affects Office. The Windows vulnerabilities affect all currently supported client operating systems. The only critical vulnerability this month is in Windows Media: a maliciously crafted MS-DVR file can allow remote code execution.
A new version of the Android Market has just been launched, making it possible for every device owner to browse applications, buy them, and even remotely install apps on an Android device directly from the browser on a desktop computer. Wait, remotely install? Did we read that correctly?
No, it's an official feature of the brand-new Market. If you use an Android device, you have a Gmail account associated with it, and now you can remotely install any application from the Android store. You just need to:
A new Twitter worm is spreading fast, using the “goo.gl” URL shortening service to distribute malicious links.
Our users are protected from this worm and all the URLs are being blacklisted in our products.
Here are some of the technical details:
Those “goo.gl” links are redirecting users to different domains with a “m28sx.html” page:
This IP address will then do its final redirection job, which leads to the Fake AV website:
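Redirect chains like this can be unwound statically once the hops have been recorded. The sketch below is a hypothetical Python helper with made-up URLs (the worm's real domains are omitted); it walks a recorded URL-to-URL map to the final landing page without visiting any of the sites:

```python
def resolve_chain(start: str, hops: dict, max_hops: int = 10) -> str:
    """Follow a recorded URL -> URL redirect map to the final landing page."""
    url, seen = start, set()
    while url in hops:
        # Guard against redirect loops and absurdly long chains.
        if url in seen or len(seen) >= max_hops:
            raise RuntimeError("redirect loop or chain too long")
        seen.add(url)
        url = hops[url]
    return url

# Hypothetical chain modelled on the worm's behaviour:
hops = {
    "http://goo.gl/XXXX": "http://example-domain.tld/m28sx.html",
    "http://example-domain.tld/m28sx.html": "http://203.0.113.1/",
    "http://203.0.113.1/": "http://fake-av.example/scan.html",
}
print(resolve_chain("http://goo.gl/XXXX", hops))
# http://fake-av.example/scan.html
```

Chasing the chain to its end like this is how an analyst finds the Fake AV landing page that every shortened link ultimately resolves to.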
Modern game consoles are no longer dedicated solely to gaming. They offer a broad variety of entertainment and many ways to enrich the gaming experience: platforms to meet other gamers from around the globe, private messages and status updates for sharing thoughts, a fully fledged browser for surfing the web, media server capabilities, and even online stores where games and additional game content can be bought with a credit card or with gift coupons available in shops for those who don't have one.
Does that remind you of something? Indeed, it's actually pretty similar to a social network, and it can also be connected to Facebook & Co. to keep your friends updated on which trophies or achievements you've just won.
In terms of security, the vendors of these consoles did a pretty good job: all internal systems were hardened, and signed installers made sure you couldn't install just anything you wanted, which may annoy some people but keeps the system secure. But now it seems the game has changed for the PS3. While it was previously possible to jailbreak the system with specially crafted USB sticks, the first soft-mods are now available. The reason? Four years after the release of the PS3, its master key was recovered by a group of modders. Many gamers are now taking the chance to individualize their systems by installing a homebrew environment that allows them to run programs unapproved by Sony.
So what are the consequences? First of all, many people will jailbreak the PS3 just for the sake of it, because it's considered fashionable, as it is with the iPhone, as my colleague Costin points out in a recent issue of Lab Matters. Unfortunately, most people are unaware that this might open the floodgates for malicious or unwanted software. Parallels to the Ikee worm on iPhones are inevitable: that worm spread only via jailbroken iPhones, making apparent how many devices are actually jailbroken and how dangerous this can be. And now homebrew software variants for the PlayStation 3 have been released and are spreading across the web from different sources. Who knows what's behind those offers? The original intention of the programs might be benign, but who knows whether an installer package has been compromised and re-offered for download?
As pointed out before, buying games and related content from the online shop via credit card is popular, and it becomes potentially dangerous if homebrew software is installed: the software could carry out a man-in-the-middle attack or redirect to phishing sites. Alternatively, installed games or the respective game scores could be blocked, so the software would act as ransomware, or it could send out spam via the internal message system... There are many malicious possibilities that the bad guys could exploit for financial profit!
Are these scenarios realistic? Unfortunately, yes.
Is it going to happen? I hope not...
Facebook has started offering a new profile*. What’s unique about this is that they offered it. In the past they had always forcibly changed it and added privacy changes, much to the chagrin of their user community and privacy advocates.
The way this change was rolled out was either clever marketing or social engineering, though I hesitate to debate the difference between the two. When logging into Facebook, users were greeted with the news that some friends were using the "New Profile".
This clever bit of information was there to notify users that there is an alternative. It adds a sense of exclusivity: there is something else, your friends are using it, and you're not. Are you missing out? The message was then repeated as more friends adopted the new profile.
Facebook has been heavily criticized in the past for forcibly changing settings and reducing its users' privacy. Let's not forget that Facebook is a company that sells things. Its main purpose is not to ensure you make contact with old friends from school. It is there to make a profit, and selling user information is one way it does that. However, if users lock down all their privacy settings, Facebook won't have much to sell.
Facebook has overcome this by using an opt-in strategy this time. First, it offers a new profile. The new profile is more of a personal showcase: not entirely different, but the layout has been rearranged. Facebook is quick to notify you that your privacy settings have not changed. The most interesting part is the addition of personal information links at the top of the new profile:
These entice users to add more personal data, showing more about you as a person. They also override the privacy settings in the profile management area, because, hey, you changed it yourself. Did it work? I would say yes: I saw more and more friends adding birth dates, home towns, work information and more. All of this is very sellable information for advertising companies looking to "profile" their users.
It seems Facebook has learned its lesson about forcing changes on users, and has even used that lesson to its advantage to gain more information about them. Be wary of putting too much personal information online. A lot of the info you might post on Facebook could be used for malicious purposes, such as guessing your password reset hints on other sites or mounting targeted attacks on the company you work for. If you're not sure, best keep it to yourself.
*not everybody is convinced about the new profile just yet
In early December, Kaspersky Lab experts detected samples of the malicious program TDL4 (a new modification of TDSS) which uses a 0-day vulnerability for privilege escalation under Windows 7/2008 x86/x64 (Windows Task Scheduler Privilege Escalation, CVE-2010-3888). The use of this vulnerability was originally detected during the analysis of Stuxnet.
Using an exploit for this vulnerability allows the TDL4 rootkit to install itself on the system without triggering any notification from UAC, which is enabled by default in all the latest versions of Windows.
After the Trojan launches on the system, e.g. on Windows 7, its process receives a filtered token (with UAC in operation) carrying regular user privileges. An attempt to inject into the print spooler process then fails with ERROR_ACCESS_DENIED.
Last week I did the impossible: I took a week of vacation without visiting the Internet. So this week I've been playing catch-up. There were a number of striking topics last week: the steady stream of clickjacking worms on Facebook, the Adobe zero-day, and the discovery of the Unreal IRCd server backdoor.
However, there's one story that really stands out: Google's full disclosure of the Microsoft Windows Help and Support Center zero-day vulnerability, which is present in Windows XP and Windows 2003 (CVE-2010-1885). It's the second zero-day published by a Google employee in the space of two months. That starts to sound like a strategy, doesn't it? Let's try to analyze the situation. First of all, the Google employee involved this time states he disclosed the vulnerability of his own accord. That may very well be true, but there are some side notes to this.
In the full disclosure publication the employee thanks quite a number of colleagues at Google. That makes it likely that someone in a manager's position was aware of this research. Secondly, after the full disclosure of the Java Web Kit vulnerability two months ago, Google must have had renewed internal discussions on the rules and guidelines for fully disclosing vulnerabilities.
Rather a strange situation, isn't it? Google's official policy has been to disclose vulnerabilities responsibly. Doing it privately rather than in the name of the company? Well, I don't buy that for a second. At Kaspersky Lab we're all for responsible disclosure, and if I were to fully disclose something privately in a similar fashion, I'd surely have to go looking for a new job. Given this, I can only conclude that full disclosure is not discouraged within Google, and is possibly even encouraged.
So, what might Google's motives be? The first thing that stands out is that, until a couple of months ago, Google did not publish full disclosures of zero-day vulnerabilities.
Given that they employ quite a number of well-known vulnerability researchers, something must have changed this year. The thing that comes to mind is Aurora.
With Aurora, Google was compromised through a zero-day vulnerability that Microsoft was aware of. One might therefore theorize that Google has developed a sort of zero-tolerance policy.
When Google finds that an affected vendor isn't responding quickly enough, it simply publishes full details of the vulnerability. So, in Google's eyes, full disclosure is the best thing from a security perspective.
Some may argue that Google's battle with Microsoft has intensified and publishing vulnerability details could offer a strategic advantage. Well, I'm sure that Google will do everything it can to distance itself from this perception. Putting individual and corporate assets on the line is no joke and no self-respecting company would even toy with the idea simply in order to gain market share.
That brings us back to option number one - Google thinking it's helping the greater good. Let's ignite the full disclosure debate again taking Google as an example.
Let's imagine a situation where the vulnerability through which Google got compromised late last year was publicly disclosed two weeks prior to the attack. What would have been different?
I'd argue that the amount of 'collateral damage' would have been much higher, but Google would very likely still have been a victim. By collateral damage I mean the number of Joe Average machines that got owned across the world.
As for corporate victims, any company still running IE6 at the end of 2009 was bound to be running Adobe Reader 7 as well (that’s speaking from our experience with a number of high tech IT companies throughout the US and Europe). Reader 7 became EOL at the end of last year, so if it hadn't been through IE6 Google would likely have been compromised a different way.
If even one of the most high-tech, resourceful IT companies in the world can't get its act together, how can it expect the rest of the world to do so? Again, I'm sure Google means well, but its publishing of full disclosure information is definitely having a negative effect on the threat landscape. On both occasions there were exploits for the zero-day vulnerabilities within days of publication. There's simply no denying that machines are getting infected that otherwise wouldn't be.
Please, Google, stop this initiative before causing more innocent casualties.