22 Sep: A psychological experiment
Fabio, our researcher in Brazil, has noticed malware authors using an old trick to mask URLs. The trick involves specifying an IP address, say 18.104.22.168 (the IP address of google.com, borrowed from my colleague, Costin), in a numerical base other than base 10. Octal (base 8) and hexadecimal (base 16) are supported, and even a single 32-bit number works; each of these alternate forms is a valid address that will take you to google.com.
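To make the trick concrete, here is a small sketch (the helper name `obfuscated_forms` is mine, not from the article) that takes a dotted-quad IPv4 address and produces the octal, hexadecimal, and single 32-bit "dword" spellings described above:

```python
# Sketch: generate alternate numeric spellings of an IPv4 address.
# The dotted quad below is the placeholder IP from the article; any
# valid IPv4 address works the same way.

def obfuscated_forms(dotted: str) -> dict:
    octets = [int(o) for o in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    # Pack the four octets into one 32-bit integer (big-endian order).
    as_int = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    return {
        "dotted": dotted,
        "octal": ".".join("0%o" % o for o in octets),   # leading 0 marks octal
        "hex":   ".".join("0x%x" % o for o in octets),  # 0x prefix marks hex
        "dword": str(as_int),                           # single 32-bit number
    }

forms = obfuscated_forms("18.104.22.168")
print(forms["octal"])  # 022.0150.026.0250
print(forms["hex"])    # 0x12.0x68.0x16.0xa8
print(forms["dword"])  # 308811432
```

Pasting any of these forms into a browser's address bar (with an `http://` prefix) will, on browsers that honor the legacy C-library address syntax, resolve to the same host as the dotted quad.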
Now, by itself, this isn’t terribly interesting from a technical perspective; this ‘feature’ of IP specification has been around for quite a while.
What is interesting, however, is that due to the relative obscurity of these methods of denoting an IP or URL, it is quite feasible that existing security products do not correctly identify such URLs as valid, or fail to flag them as malicious when they point to known bad websites.
In my testing, Firefox on Windows supports all of the above addresses; under Linux, however, Marco from our German office says some are unsupported. Given such patchy support even among browsers, it's easy to imagine URL filtering tools having the same gaps.
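The gap can be illustrated with a minimal sketch: a naive filter that only recognizes dotted-quad hosts (the regex is my hypothetical stand-in for a simplistic filtering rule) versus Python's `socket.inet_aton`, which wraps the C-library parser and, on most platforms, accepts the legacy octal, hex, and single-number spellings:

```python
import re
import socket

# Hypothetical naive filter: only matches plain dotted-quad IPs,
# the kind of pattern a simplistic URL-filtering tool might use.
DOTTED_QUAD = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

candidates = [
    "18.104.22.168",        # plain dotted quad
    "022.0150.026.0250",    # octal octets
    "0x12.0x68.0x16.0xa8",  # hexadecimal octets
    "308811432",            # single 32-bit "dword" form
]

for host in candidates:
    flagged = bool(DOTTED_QUAD.match(host))
    # inet_aton parses all of these legacy spellings down to the
    # same four bytes -- so the browser reaches the host even when
    # the naive filter fails to recognize it as an IP at all.
    resolved = socket.inet_aton(host)
    print(f"{host:22} flagged: {flagged!s:5} resolves to: {resolved.hex()}")
```

Only the first candidate is flagged, yet all four resolve to the same address, which is exactly the mismatch between filter and network stack that an attacker exploits.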
In addition to potential weak tool support for such URLs, it is likely that unsuspecting users may be more easily convinced that a particular URL is legitimate, which I think is the obvious goal of using such URL obfuscation techniques.
I've been thinking a bit about human psychology in the wake of the Fan Check virus scare. There were a lot of rumors flying: depending on who you listened to, the Fan Check Facebook app was malicious, not malicious, or a hoax. And while I was thinking, a controversial psychology experiment kept coming back to me.
Back in 1963, Yale psychologist Stanley Milgram published an article in the Journal of Abnormal and Social Psychology detailing his research findings on how people respond to authority figures. In Milgram's experiment, a test subject was told to give electric shocks (which escalated in intensity) to an individual in a separate room if the individual failed to respond correctly to questions. The test subject was also told that the individual had a heart condition. No electric shocks were actually administered, but when the button was pressed to "deliver" a shock, a pre-recorded response was played – ranging from screaming to pleading for the shocks to stop to silence. Many of the test subjects continued to administer shocks up to "maximum voltage", even though they admitted they felt uneasy about doing so.
Milgram's experiment showed clearly that when a person is told to do something, they'll usually do it, even if it goes against their own perceived values. Our adversaries, the malware authors, have a solid grasp of basic psychology, and they know that this principle holds true in the digital world as well. Their latest "experiment", in which they sent Facebook users messages asking them to warn their friends about the "Fan Check" virus, was pretty successful. People complied simply because they'd been told to.
Of course, this case isn't exactly analogous to the study described above; those who "warned" their friends didn't see any harm in doing so, and probably thought they were being helpful. But the behavior is very similar to the "blind obedience" mentality highlighted by Milgram.
The behavior demonstrated in the Milgram study has been replicated in the real, non-research world. And the boundaries between the physical world and the digital world are getting increasingly blurred. At the moment malware scares are mostly created unwittingly. But we've also seen the emergence and rise of cyberbullying and other nasty behavior. How long will it be before we see cybercitizens knowingly acting against their own values, simply because they've been told to do so?