
Beware the scammers’ crocodile tears!

Tatiana Kulikova
Kaspersky Lab Expert
Posted August 16, 12:03  GMT

Having realized that users are getting wise to their scams involving the unclaimed inheritances of multi-millionaire African princes, so-called Nigerian scammers have turned to other outlandish stories from their social engineering arsenal. We recently caught a few messages in our traps suggesting that the scammers are not only unscrupulous and greedy but also capable of self-mockery.

In particular, we detected some mailings supposedly sent by the FBI and its agents. The messages state that in the course of a large-scale investigation they identified users who had fallen victim to spammers, fake “Nigerian brides”, the organizers of non-existent lotteries, and bogus lawyers of deceased millionaires. The recipient of the “FBI” message was listed as a victim, and he/she could now receive compensation for any losses. The next step in the scam is most likely a request to send a payment covering the costs of processing the compensation claim, such as filling in the necessary documentation. In other words, it follows the typical Nigerian scam scenario.


At the very least, this message should set alarm bells ringing: the mailbox of these supposedly diligent fighters against cybercrime is hosted on a free, publicly available service rather than on an FBI server.

In another mailing the fraudsters go even further. This message is supposedly written by a Reverend Eugene who is writing on behalf of a Mr. Smith Walters, a repentant scammer who is on his deathbed in hospital. Eugene recounts the story of how Smith descended into a life of crime following his mother’s death and his need to look after his young sister. He started making money by sending out scam letters under numerous aliases, and when that didn’t pay enough he started selling drugs.

But the fraudster couldn’t enjoy the proceeds of his criminal life for long – a serious illness has apparently struck him down. With death approaching, Smith Walters decided to repent for his numerous sins, asking an intermediary to write to his victims to beg forgiveness and offer to return what he had supposedly stolen. Eugene asks any potential victims to contact him so they can receive part of Smith’s illegal legacy as a form of compensation. In the ensuing correspondence the “Reverend” is likely to ask for money to pay legal fees or to cover the costs of Smith’s medical care. After that, no more is likely to be heard from either of them.


As you can see, exposed or dying scammers cry, but they cry crocodile tears. And unsuspecting users can end up paying for those tears. So, whatever you do, don’t trust strangers who suddenly write with a tempting offer of millions!


1 comment

Prashant Kate

2013 Sep 19, 19:58

Why do Nigerian Scammers Say They are from Nigeria?

False positives cause many promising detection technologies to be unworkable in practice. Attackers, we show, face this problem too. In deciding who to attack, true positives are targets successfully attacked, while false positives are those that are attacked but yield nothing.

This allows us to view the attacker’s problem as a binary classification. The most profitable strategy requires accurately distinguishing viable from non-viable users, and balancing the relative costs of true and false positives. We show that as victim density decreases, the fraction of viable users that can be profitably attacked drops dramatically. For example, a 10× reduction in density can produce a 1000× reduction in the number of victims found. At very low victim densities the attacker faces a seemingly intractable Catch-22: unless he can distinguish viable from non-viable users with great accuracy, the attacker cannot find enough victims to be profitable. However, only by finding large numbers of victims can he learn how to accurately distinguish the two.

Finally, this approach suggests an answer to the question in the title. Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims, the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible, the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.

1. INTRODUCTION: ATTACKERS HAVE FALSE POSITIVES TOO
False positives have a long history of plaguing security systems. They have always been a challenge in behavioral analysis, and in anomaly and intrusion detection [5]. A force-fed diet of false positives has habituated users to ignore security warnings [15]. In 2010 a single false positive caused the McAfee anti-virus program to send millions of PCs into never-ending reboot cycles. The mischief is not limited to computer security. Different fields have different names for the inherent trade-offs that classification brings: false alarms must be balanced against misses in radar [22], precision against recall in information retrieval, Type I against Type II errors in medicine, and the fraud rate against the insult rate in banking [19]. Common to all of these areas is that one type of error must be traded off against the other. The relative costs of false positives and false negatives change a great deal, so no single solution is applicable to all domains. Instead, the nature of the solution chosen depends on the problem specifics. In decisions on some types of surgery, for example, false positives (unnecessary surgery) are preferable to false negatives (necessary surgery not performed), since the latter can be far worse than the former for the patient. At the other extreme, in deciding guilt in criminal cases it is often considered that false negatives (guilty person goes free) are more acceptable than false positives (innocent person sent to jail). In many domains determining to which of two classes something belongs is extremely hard, and errors of both kinds are inevitable.
Attackers, we show, also face this trade-off problem. Not all targets are viable, i.e., not all yield gain when attacked. For an attacker, false positives are targets that are attacked but yield nothing. These must be balanced against false negatives, which are viable targets that go un-attacked. When attacking has non-zero cost, attackers face the same difficult trade-off problem that has vexed many fields. Attack effort must be spent carefully, and too many misses render the whole endeavor unprofitable.
Viewing attacks as binary classification decisions allows us to analyze attacker return in terms of the Receiver Operator Characteristic (ROC) curve. As an attacker is pushed to the left of the ROC curve, social good is increased: fewer viable users and fewer total users are attacked. We show that as the density of victims in the population decreases there is a dramatic deterioration in the attacker’s return. For example, a 10× reduction in density can cause a much greater than 1000× reduction in the number of viable victims found. At very low victim densities the attacker faces a seemingly intractable Catch-22: unless he can distinguish viable from non-viable users with great accuracy, the attacker cannot find enough victims to be profitable. However, only by finding large numbers of victims can he learn how to accurately distinguish the two. This suggests that at low enough victim densities many attacks pose no economic threat.
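This collapse can be sketched numerically. The grid search below is only a toy illustration, not the paper’s actual computation: it assumes Gaussian score distributions for the attacker’s estimates (non-viable scores ~ N(0,1), viable ~ N(2,1)) and hypothetical values for population size, gain, and cost, then reports how many viable victims a profit-maximizing attacker reaches at each density.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def victims_found(d, N=1_000_000, G=100.0, C=1.0, mu=2.0):
    """Viable victims attacked at the attacker's profit-maximizing
    score threshold (all parameter values are hypothetical).
    Scores: non-viable ~ N(0,1), viable ~ N(mu,1)."""
    M = d * N
    best_profit, best_victims = 0.0, 0.0
    for i in range(1201):                  # scan thresholds from -4 to 8
        t = -4.0 + 0.01 * i
        tpr = 1.0 - phi(t - mu)            # fraction of viable users attacked
        fpr = 1.0 - phi(t)                 # fraction of non-viable users attacked
        profit = tpr * M * G - (tpr * M + fpr * (N - M)) * C
        if profit > best_profit:
            best_profit, best_victims = profit, tpr * M
    return best_victims                    # 0.0 if no threshold is profitable

for d in (1e-2, 1e-3, 1e-4):
    print(f"density {d:g}: viable victims found ~ {victims_found(d):.1f}")
```

With these made-up numbers, a 100× drop in density cuts the victims found by far more than 100×, because the attacker is forced to a much more conservative threshold.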
Finally, in Section 4, we offer a simple explanation for
the question posed in the title, and suggest how false
positives may be used to intentionally erode attacker
economics.

2.1 Attacks are seldom free
Malicious software can accomplish many things, but few programs output cash. At the interface between the digital and physical worlds effort must often be spent. Odlyzko [3] suggests that this frictional interface between online and off-line worlds explains why much potential harm goes unrealized. Turning digital contraband into goods and cash is not always easily automated. For example, each respondent to a Nigerian 419 email requires a large amount of interaction, as does the Facebook “stuck in London” scam. Credentials may be stolen by the millions, but emptying bank accounts requires recruiting and managing mules [7]. The endgame of many attacks requires per-target effort. Thus, when cost is non-zero, each potential target represents an investment decision to an attacker. He invests effort in the hopes of payoff, but this decision is never flawless.
2.2 Victim distribution model
We consider a population of N users, which contains M viable targets. By viable we mean that these targets always yield a net profit of G when attacked, while non-viable targets yield nothing. Each attack costs C; thus attacking a non-viable target generates a loss of C. We call d = M/N the density of viable users in the population.
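The model above reduces to a one-line expected-profit calculation. The sketch below uses entirely hypothetical numbers (N, d, G, C and the attacker’s operating point are all made up for illustration) to show how gains from true positives trade off against the cost of attacking everyone above a threshold:

```python
def expected_profit(N, d, G, C, tpr, fpr):
    """Attacker's return in the model above: d = M/N is the density
    of viable users, each viable target attacked pays G, and every
    attack (viable or not) costs C. tpr/fpr are the fractions of
    viable and non-viable users the attacker ends up attacking."""
    M = d * N                      # viable targets
    true_pos = tpr * M             # viable users attacked
    false_pos = fpr * (N - M)      # non-viable users attacked
    return true_pos * G - (true_pos + false_pos) * C

# Hypothetical numbers: a million users, 1% viable, $1000 gain, $10 per attack
print(expected_profit(N=1_000_000, d=0.01, G=1000, C=10, tpr=0.8, fpr=0.05))
```

With these numbers the attack nets roughly $7.4 million; rerunning with d = 0.001 and the same operating point shrinks the return by more than an order of magnitude, since the false-positive cost no longer shrinks with the gain.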
We assume that some users are far more likely to be viable than others. Viability is not directly observable: the attacker doesn’t know with certainty that he will succeed unless he tries the attack. Nonetheless, the fact that some users are better prospects than others is observable. We assume that the attacker has a simple score, x, that he assigns to each user. The larger the score, the more likely in the attacker’s estimate the user is to be viable.
More formally, the score, x, is a sufficient statistic [22]. The attacker might have several observations about the user: where he lives, his place of work, the accounts he possesses, etc. All of these can be reduced to the single numeric quantity x. This encapsulates all of the observable information about the viability of User(i). Without loss of generality we’ll assume that viable users tend to have higher x values than non-viable ones. This does not mean that all viable users have higher values than non-viable ones. For example, we might have pdf(x | non-viable) = N(0, 1) and pdf(x | viable) = N(μ, 1). Thus, the observable x is normally distributed with unit variance, but the mean, μ, of x over viable users is higher than over non-viable users. An example is shown in Figure 2.
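Under this Gaussian example, the attacker’s operating point for any score threshold follows directly from the normal CDF. A minimal sketch (μ = 1.5 is an arbitrary illustrative separation, not a value from the paper):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def roc_point(threshold, mu=1.5):
    """If non-viable scores ~ N(0,1) and viable scores ~ N(mu,1),
    attacking every user with x > threshold yields this
    (false-positive rate, true-positive rate) pair."""
    fpr = 1.0 - phi(threshold)         # fraction of non-viable users attacked
    tpr = 1.0 - phi(threshold - mu)    # fraction of viable users attacked
    return fpr, tpr

# Raising the threshold moves the attacker down-left along the ROC curve:
for t in (0.0, 1.0, 2.0, 3.0):
    fpr, tpr = roc_point(t)
    print(f"threshold {t:.0f}: FPR {fpr:.3f}  TPR {tpr:.3f}")
```

Sweeping the threshold from low to high traces out the whole ROC curve for this score distribution; the attacker’s choice of where to sit on it is exactly the trade-off discussed above.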
The viability depends on the specific attack. For example, those who live in wealthier areas may be judged more likely to be viable under most attacks. Those who are C-level officers at large corporations might be more viable targets for elaborate industrial espionage or Advanced Persistent Threat attacks, etc. Those who have fallen for a Nigerian scam may be more likely to fall for the related “fraud funds recovery” scam.
It is worth emphasizing that rich does not mean viable. There is little secret about who the richest people in the world are, but attacking the Forbes 100 list is not a sure path to wealth. To be viable, the attacker must be able to successfully extract the money (or other resource he targets). For example, if an attacker gets keylogging malware on a user’s machine and harvests banking passwords but cannot irreversibly transfer money from the account, this counts as a failure, not a success. It is a cost to the attacker for no gain.
