
The Mystery of Duqu: Part Six (The Command and Control servers)

Kaspersky Lab Expert
Posted November 30, 15:10  GMT
Tags: Targeted Attacks, Stuxnet, Zero-day vulnerabilities, Duqu

Over the past few weeks, we have been busy researching the Command and Control infrastructure used by Duqu.

It is now a well-known fact that the original Duqu samples were using a C&C server in India, located at an ISP called Webwerks. Since then, another Duqu C&C server has been discovered which was hosted on a server at Combell Group Nv, in Belgium.

At Kaspersky Lab we have currently cataloged and identified over 12 different Duqu variants. Some of these connect to the C&C server in India, some to the one in Belgium, and others to additional C&C servers, notably two in Vietnam and one in the Netherlands. Besides these, many other servers were used as part of the infrastructure: some served as main C&C proxies, while others were used by the attackers to hop around the world and make tracing more difficult. Overall, we estimate there have been more than a dozen Duqu command and control servers active during the past three years.

Before going any further, let us say that we still do not know who is behind Duqu and Stuxnet. Although we have analyzed some of the servers, the attackers have covered their tracks quite effectively. On 20 October 2011 a major cleanup operation of the Duqu network was initiated. The attackers wiped every single server they had used as far back as 2009 – in India, Vietnam, Germany, the UK and so on. Nevertheless, despite the massive cleanup, we can shed some light on how the C&C network worked.

Server ‘A’ – Vietnam

Server ‘A’ was located in Vietnam and was used to control certain Duqu variants found in Iran. This was a Linux server running CentOS 5.5. Actually, all the Duqu C&C servers we have found so far run CentOS – version 5.4, 5.5 or 5.2. It is not known if this is just a coincidence or if the attackers have an affinity (exploit?) for CentOS 5.x.

When we began analyzing the server image, we first noticed that at least the ‘root’ folder and the system log files folder ‘/var/log/’ had a ‘last modified’ timestamp of 2011-10-20 18:07:28 (UTC+3).

However, both folders were almost empty. Despite the modification date/time of the root folder, there were no files inside modified even close to this date. This indicates that certain operations (probably deletions / cleaning) took place on 20 October at around 18:07:28 GMT+3 (22:07:28 Hanoi time).

Interestingly, on Linux it is sometimes possible to recover deleted files; in this case, however, we couldn't find anything. No matter how hard we searched, the sectors where the files should have been located were filled with zeroes.

By brute-force scanning the slack (unused) space in the '/' partition, we were able to recover parts of the 'sshd.log' file. This was somewhat unexpected, and it is an excellent lesson in Linux and ext3 file system internals: deleting a file does not guarantee that no traces or fragments remain, sometimes from older versions. The reason is that ext3 constantly reallocates commonly modified files to reduce fragmentation, so it is possible to find parts of older copies of a file even after it has been thoroughly deleted.
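The recovery technique described above amounts to a signature scan over the raw partition image. The sketch below is a minimal, hypothetical illustration of that idea — the image path and the sshd log-line pattern are assumptions, not the actual forensic tooling we used.

```python
# Minimal sketch of a slack-space scan: search the raw bytes of a partition
# image for fragments that look like sshd log lines. The pattern and the
# image path are illustrative assumptions, not the real forensic tool.
import re

SSHD_LINE = re.compile(rb"sshd\[\d+\]: [^\n\x00]{1,200}")

def scan_for_fragments(image_bytes):
    """Return decoded sshd-style log fragments found anywhere in the raw
    bytes, including unallocated (slack) space between live files."""
    return [m.group(0).decode("latin-1") for m in SSHD_LINE.finditer(image_bytes)]

# Usage against a (hypothetical) raw image:
# with open("duqu-server.img", "rb") as f:
#     for frag in scan_for_fragments(f.read()):
#         print(frag)
```

Because the scan ignores file system allocation entirely, it turns up fragments of every past copy of the log that ext3 ever wrote, which is exactly how deleted 'sshd.log' entries survived the cleanup.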

As can be seen from the log above, the root user logged in twice from the same IP address on 19 July and 20 October. The latter login is quite close to the last modified timestamps on the root folder, which indicates this was the person responsible for the system cleanup – bingo!

So, what exactly was this server doing? We were unable to answer this question until we analyzed server ‘B’ (see below). However, we did find something really interesting. On 15 February 2011, openssh-5.8p1 (the source code) was copied to the machine and subsequently installed. The distribution is “openssh_5.8p1-4ubuntu1.debian.tar.gz” with an MD5 of ‘86f5e1c23b4c4845f23b9b7b493fb53d’ (stock distribution). We can assume the machine had been running openssh-4.3p1 as included in the original distribution of CentOS 5.2. On 19 July 2011, OpenSSH 5.8p2 was copied to the system; it was compiled and the binaries were installed into /usr/local/sbin. The OpenSSH distribution was again the stock one.

The date of 19 July is important because it indicates when the new OpenSSH 5.8p2 was compiled in the system. Just after that, the attackers logged in, probably to check if everything was OK.

One good way of looking at the server is to extract the file deletions and plot them as an activity graph. On days when notable operations were going on, you see a spike:

For our particular server, several spikes immediately raise suspicions: 15 February and 19 July, when the new versions of OpenSSH were installed, and 20 October, when the server cleanup took place. Additionally, we found spikes on 10 February and 3 April, when certain events took place. We were able to identify “dovecot” crashes on these dates, although we can’t be sure whether they were caused by the attackers (a “dovecot” remote exploit?) or were simply instabilities.
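The spike analysis can be reproduced with a trivial histogram over recovered deletion timestamps. A minimal sketch, using made-up timestamps rather than the real forensic data:

```python
# Sketch of the deletion-activity graph: bucket deletion timestamps by day
# and flag days with unusually many deletions. Timestamps are illustrative.
from collections import Counter
from datetime import datetime

def deletion_spikes(timestamps, threshold=3):
    """Count deletions per calendar day; return days at or above threshold."""
    per_day = Counter(
        datetime.fromisoformat(ts).date().isoformat() for ts in timestamps
    )
    return {day: n for day, n in per_day.items() if n >= threshold}

# A cleanup day shows up as a large spike, e.g.:
# deletion_spikes(["2011-10-20 18:07:28", "2011-10-20 18:07:29", ...])
```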

Of course, for server ‘A’, three big questions remain:

  • How did the attackers get access to this computer in the first place?
  • What exactly was its purpose and how was it (ab-)used?
  • Why did the attackers replace the stock OpenSSH 4.3 with version 5.8?
We will answer some of these at the end.

Server ‘B’ – Germany

This server was located at a data center in Germany that belongs to a Bulgarian hosting company. It was used by the attackers to log in to the Vietnamese C&C. Evidence also seems to indicate it was used as a Duqu C&C in the distant past, although we couldn’t determine the exact Duqu variant which did so.

Just like the server in Vietnam, this one was thoroughly cleaned on 20 October 2011. The “root” folder and the “etc” folder have timestamps from this date, once again pointing to file deletions / modifications on this date. Immediately after cleaning up the server, the attackers rebooted it and logged in again to make sure all evidence and traces were erased.

Once again, by scanning the slack (unused) space in the ‘/’ partition, we were able to recover parts of the ‘sshd.log’ file. Here are the relevant entries:

First of all, about the date – 18 November. Unfortunately, “sshd.log” doesn’t contain the year. So, we can’t know for sure if this was 2010 or 2009 (we do know it was NOT 2011) from this information alone. We were, however, able to find another log file which indicates that the date was 2009:

What you can see above is a fragment of a “logwatch” entry which indicates the date of the breach to be 23 November 2009, when the root user logged in from the IP address seen on 19 November in the “sshd.log”. The other two messages are also important – they are errors from “sshd” indicating that a redirection of ports 80 and 443 was attempted, but the ports were already busy. So now we know how these servers were used as C&C: ports 80 and 443 were redirected over sshd to the attackers’ server. These Duqu C&C servers were never true C&Cs – instead, they were used as proxies to redirect traffic to the real C&C, whose location remains unknown. Here’s what the full picture looks like:
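Conceptually, each hacked box acted as a dumb relay: connections arriving on ports 80/443 were piped onward to the hidden upstream server. The real proxies achieved this with sshd port redirection rather than custom code, but the effect is that of a plain TCP relay, sketched hypothetically below:

```python
# Hypothetical illustration of the C&C proxy role: a plain TCP relay that
# forwards every incoming connection to an upstream host. The actual Duqu
# proxies achieved the same effect via sshd port redirection, not code.
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then half-close dst."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def start_relay(listen_port, upstream_host, upstream_port):
    """Listen locally and relay each client to the upstream server.
    Returns the listening socket (port 0 picks a free port)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)

    def accept_loop():
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((upstream_host, upstream_port))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

With OpenSSH, the equivalent is remote port forwarding of 80/443 over the ssh session — which is consistent with the “port already busy” redirection errors in the recovered logs.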

Answering the questions

So, how did these servers get hacked in the first place? One crazy theory points to a 0-day vulnerability in OpenSSH 4.3. Searching for “openssh 4.3 0-day” on Google turns up some very interesting posts, one of them being this (https://www.webhostingtalk.com/showthread.php?t=873301):

This post from user “jon-f”, which dates back to 2009, indicates a possible 0-day in OpenSSH 4.3 on CentOS; he even posted sniffed logs of the exploit in action, although they are encrypted and not easy to analyze.

Could this be the case here? Knowing the Duqu guys and their never-ending bag of 0-day exploits, does it mean they also have a Linux 0-day against OpenSSH 4.3? Unfortunately, we do not know.

If we look at the “sshd.log” from 18 November 2009, we can, however, get some interesting clues. The “root” user attempts to log in using a password multiple times from an IP in Singapore, until they finally succeed:

Note how the “root” user tries to log in at 15:21:11, fails a couple of times, and then, 8 minutes and 42 seconds later, the login succeeds. This is more of an indication of password brute-forcing than of a 0-day. So the most likely answer is that the root password was brute-forced.
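The timing argument can be made concrete: a run of failed root logins followed minutes later by a success looks like guessing, not an exploit. A small sketch of this heuristic, using illustrative log lines modelled on the standard sshd syslog format:

```python
# Sketch of the brute-force heuristic: count failed root logins and measure
# the gap between the first failure and the eventual success. The log lines
# in the usage example are illustrative, not the recovered originals.
import re
from datetime import datetime

AUTH_LINE = re.compile(
    r"^\w+ +\d+ (\d\d:\d\d:\d\d) .*sshd\[\d+\]: (Failed|Accepted) password for root"
)

def bruteforce_window(log_lines):
    """Return (failure_count, seconds from first failure to success),
    or None if no failure-then-success sequence is found."""
    first_fail, fails = None, 0
    for line in log_lines:
        m = AUTH_LINE.match(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), "%H:%M:%S")
        if m.group(2) == "Failed":
            fails += 1
            if first_fail is None:
                first_fail = t
        elif first_fail is not None:
            return fails, (t - first_fail).total_seconds()
    return None
```

For the pattern described above — failures at 15:21:11 and success 8 minutes and 42 seconds later — the helper reports a 522-second window, the signature of a password-guessing run.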

Nevertheless, the third question remains: Why did the attackers replace the stock OpenSSH 4.3 with version 5.8? On the server in Germany we were able to recover parts of the “.bash_history” file just after the server was hacked:

The relevant commands are “yum install openssh5”, then “yum update openssh-server”. There must be a good reason why the attackers are so concerned about updating OpenSSH 4.3 to version 5. Unfortunately, we do not know the answer to this question. On an interesting note, we observed that the attackers are not exactly familiar with the “iptables” command-line syntax. They are not very sure about the “sshd_config” file format either, so they needed to bring up the manual for it (“man sshd_config”), as well as for the standard Linux ftp client. What about “sshd_config”, the sshd configuration file? Once again, by searching the slack space we were able to identify what they were after. In particular, they changed the following two lines:

GSSAPIAuthentication yes
UseDNS no

While the second option is relevant for speed, especially when performing port redirection over tunnels, the first one enables Kerberos (GSSAPI) authentication. We were able to determine that exactly the same modifications were applied in other cases.
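A recovered sshd_config can be checked for these options mechanically. A small sketch (the config text in the example is illustrative); note that sshd uses the first occurrence of each keyword, which the helper mimics:

```python
# Sketch: extract the effective values of the two modified options from the
# text of a recovered sshd_config. sshd uses the first occurrence of each
# keyword, so later duplicates are ignored. Config content is illustrative.
def sshd_options(config_text, keys=("gssapiauthentication", "usedns")):
    opts = {}
    for raw in config_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() in keys:
            opts.setdefault(parts[0].lower(), parts[1].strip())  # first match wins
    return opts
```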


We have currently analyzed only a fraction of the available Duqu C&C servers. However, we were able to determine certain facts about how the infrastructure operated:

  1. The Duqu C&C servers operated as early as November 2009.
  2. Many different servers were hacked all around the world, in Vietnam, India, Germany, Singapore, Switzerland, the UK, the Netherlands, Belgium, South Korea to name but a few locations. Most of the hacked machines were running CentOS Linux. Both 32-bit and 64-bit machines were hacked.
  3. The servers appear to have been hacked by bruteforcing the root password. (We do not believe in the OpenSSH 4.3 0-day theory - that would be too scary!)
  4. The attackers have a burning desire to update OpenSSH 4.3 to version 5 as soon as they get control of a hacked server.
  5. A global cleanup operation took place on 20 October 2011. The attackers wiped every single server which was used even in the distant past, e.g. 2009. Unfortunately, the most interesting server, the C&C proxy in India, was cleaned only hours before the hosting company agreed to make an image. If the image had been made earlier, it’s possible that now we’d know a lot more about the inner workings of the network.
  6. The “real” Duqu mothership C&C server remains a mystery just like the attackers’ identities.

We would also like to send a question to Linux admins and OpenSSH experts worldwide – why would you update OpenSSH 4.3 to version 5.8 as soon as you hack a machine running version 4.3? What makes version 5.8 so special compared to 4.3? Is it related to the option “GSSAPIAuthentication yes” in the config file?

We hope that through cooperation and working together we can cast more light on this huge mystery of the Duqu Trojan.

Kaspersky Lab would like to thank the companies PA Vietnam, Nara Syst and the Bulgarian CyberCrime unit for their kind support in this investigation. This wouldn’t have been possible without their cooperation.

You can contact the Kaspersky Duqu research team at “stopduqu AT Kaspersky DOT com”.




2011 Nov 30, 20:49

About 0 day

http://groups.google.com/group/00xx/msg/fc560a33adee8371? that one looks like a legit exploit (<= 5.2)?



2011 Nov 30, 21:24

Re: About 0 day

It's an old fake for those stupid enough to run it.





2011 Nov 30, 21:32


Thank you for the great research on this!
As far as CentOS and OpenSSH it seems to be more than just coincidence at this point.

>> why would you update OpenSSH 4.3 to version 5.8 as soon as you hack a machine running version 4.3?

The obvious answer is to patch the hole so others cannot get in the same way you did. It also makes sense to cover your tracks so that others cannot find out how you exploited 4.3.


mark roth

2011 Nov 30, 21:47

reasons and actions

I found this very interesting, having followed the link from slashdot. Two details stand out, esp. after speaking to my manager about the sshd business: first, why would they yum update openssh, when you report they installed 5.8 from an ubuntu/debian source package? CentOS 6, like RHEL 6, is running 5.3p1 (with all known security fixes backported by upstream).

Secondly, my manager agrees with the previous poster: you update to prevent other attackers' access. After all, their attacks might break your attack.

Finally, this indicates very, very bad password policy on the part of the compromised servers. If these belong to corporations, management should be looking very hard at why they were so easily broken... and why they're not running brute-force resistance, such as fail2ban.




2011 Nov 30, 22:51

Why update

For clues as to why they'd update, try looking at: http://openssh.org/security.html

At first I thought perhaps they were updating due to GSSAPIAuthentication (since it was turned on after updating sshd), but it appears 4.2 fixed the delegation issue and the installed version was already 4.3, yes?

Perhaps they wanted to get to 5.2 to fix the plaintext recovery issue which USED to be posted here: http://www.cpni.gov.uk/Docs/Vulnerability_Advisory_SSH.txt but has since disappeared. A search of their site does not immediately turn up the advisory (HMM!)


Jesse Carter

2011 Nov 30, 23:00

Kerberos OpenSSH

First, thanks Kaspersky for your hard work and the transparency of your results.

Regarding Kerberos OpenSSH :

OpenSSH has supported Kerberos 5 authentication and authorization since version 3.8. The actual implementation of this support changed quite a bit with the release of OpenSSH 4.3p2:
As of OpenSSH 4.3p2, Kerberos authentication is done via the users' Kerberos credentials, and authenticated users are allowed to forward their credentials to a remote machine over ssh. Kerberos authentication support is available for SSH protocols 1 and 2. For SSH protocol 2.0, GSSAPI support is also available. In addition to other authentication mechanisms, GSSAPI facilitates authentication with Kerberos.

The attackers simply enabled GSSAPI, which was already available in version 4.3. Why? While it is possible to use other strongly encrypted authentication methods with GSSAPI, Kerberos is by far the most popular use for GSSAPI when combined with OpenSSH. Which leads me to believe the attackers utilized Kerberos authentication to some extent. Maybe they like the convenience of single sign-on as much as I do. I would think that with the amount of C&C proxying happening here, it would greatly reduce some of the hoops they would have to jump through to actually admin all of the servers. Regardless, they must have enabled it for a reason, which leads me to believe they used Kerberos for some type of shared authentication across C&C servers.

***On a side note, did Kaspersky do a slack scan for the Kerberos KDC config file? You might find some interesting breadcrumbs there.***

Regarding the attackers rush to update OpenSSH:

People (end-users, attackers, admins, etc.) all update software for one or both of two basic reasons:
1. The installed version has a security vulnerability that is patched/fixed in the current version.
2. The installed version does not have a feature that the current version has.

So it follows that the attackers updated to 5.8 for one or more of these reasons. It's also important to note that they simply updated to the latest revision of OpenSSH-5, not OpenSSH-5.8 specifically, which means the changes in OpenSSH which prompted the update could have happened anywhere after OpenSSH 4.3 (February 1, 2006).

The argument for #1 is clear and has already been made. If there was a known (at least among black hats) exploit for OpenSSH 4.3, then the attackers would want to close that hole to prevent anyone else from taking over their newly conquered C&C server. It is highly likely that whatever exploit may have been available for 4.3 has been patched by now.

The argument for #2 is not as straightforward; the most prominent feature changes since 4.3 are as follows (Wikipedia OpenSSH page):
OpenSSH 4.9: March 30, 2008
Added chroot support for sshd(8)
OpenSSH 5.4: March 8, 2010
Disabled SSH protocol 1 default support. Clients and servers must now explicitly enable it.
Added PKCS11 authentication support for ssh(1) (-I pkcs11)
Added Certificate based authentication
Added "Netcat mode" for ssh(1) (-W host:port). Similar to "-L tunnel", but forwards instead stdin and stdout. This allows, for example, using ssh(1) itself as a ssh(1) ProxyCommand to route connections via intermediate servers, without the need for nc(1) on the server machine.
Added the ability to revoke public keys in sshd(8) and ssh(1). While it was already possible to remove the keys from authorized lists, revoked keys will now trigger a warning if used.
Added Multiple authentication methods and single sign-on (via the agent-forwarding)

There are two obvious candidates for "required feature" here.
The first is Netcat. For the unfamiliar it is a multipurpose tool which allows the user to read AND write data across network connections. There is an array of potential nefarious uses for the capabilities offered by Netcat as opposed to standard -L port forwarding. Not least of which is the ability to circumvent some common local security policies and firewall rules regarding port numbers.

The second is agent-forwarding. (Quoted from OpenSSH homepage)
"An authentication agent, running in the user's laptop or local workstation, can be used to hold the user's RSA or DSA authentication keys. OpenSSH automatically forwards the connection to the authentication agent over any connections, and there is no need to store the RSA or DSA authentication keys on any machine in the network (except the user's own local machine). The authentication protocols never reveal the keys; they can only be used to verify that the user's agent has a certain key. Eventually the agent could rely on a smart card to perform all authentication computations."
This is another layer of security for the attackers, preventing them from leaving potentially fatal breadcrumbs in the event that C&C servers are ever discovered/imaged.

The argument for both is also possible. Given that there may have been a zero-day, and that the new features became available by updating past version 5.4, the attackers could kill two birds with one stone.

*Edited to correct spelling and spacing issues*


Costin Raiu

2011 Dec 01, 00:56

Re: Kerberos OpenSSH

Dear Jesse,

Thanks for your detailed post - excellent information!
We also tend to believe your point 2) is the main reason for installing 5.8. The netcat-like feature does indeed seem very likely to be the one useful for multiple tunnels (tunnel within tunnel within tunnel).

In regard to the question:

"***On a side note, did Kaspersky do a slack scan for the Kerberos KDC config file? You might find some interesting breadcrumbs there.*** "

Unfortunately, we couldn't find any. Actually, looking at the list of deleted files, there never was one on the system. Although they did enable "GSSAPIAuthentication", a Kerberos config file was never created.

On the other hand, there are some very interestingly named files which have been deleted:

r/r 7201934: root/.ssh/0000000000 2011-10-20 18:40:15 (EEST)
r/r 7202146: root/00 2011-10-20 18:07:28 (EEST)
r/r 7202146: root/0 2011-10-20 18:07:28 (EEST)
r/r 7202146: root/000000 2011-10-20 18:07:28 (EEST)

For these, the 'shred' command was used, so no chance to recover the original content. Any idea what the purpose of a file called "0000000000" in ".ssh/" could be?



Sam Crawford

2011 Dec 01, 01:25

Re: Re: Kerberos OpenSSH

I'm not sure I agree with your description of the netcat mode. My understanding is that it provides a simple way of interacting with a remote service, tunnelled over SSH. A good description and examples are at http://blog.rootshell.be/2010/03/08/openssh-new-feature-netcat-mode/. Given that its primary use is for interactive communications, it would seem a bit heavy-handed to replace the entirety of OpenSSH just for this (in earlier versions you could have used the -L option and then just telnet back to localhost on some listening port). Of course, given their relatively novice nature on the Linux command line, it could be that they didn't realise this.

Another addendum to the above thought: the -W option is for the SSH client, not server. Their commands and config changes were related to the server, not client. I don't think the netcat mode is the reason for the upgrade.

Regarding the shredded files, it looks likely that the filenames were changed as well. ~/.ssh/0000000000 has no meaning. However, ~/.ssh/authorized_keys or ~/.ssh/authorized_keys2 would allow a remote SSH client to log in with SSH keys (i.e. no user/pass). That would show up clearly in the SSH logs you had earlier, though. Failing that, the file could also have been ~/.ssh/known_hosts. This file stores a fingerprint of every SSH host you've connected to, along with their IP addresses (there's no option to disable this for remote hosts, but there are a few workarounds: http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html)

Hope this helps,




2011 Dec 01, 01:32

Strangely named files

0 = .
00 = ..
000000 = config
00000000000 = known_hosts


Sam Crawford

2011 Dec 01, 01:42

Re: Strangely named files

00000000000 is one character short for known_hosts, but that's most likely a typo or truncation. Alternatively, it could have been id_rsa.pub or id_dsa.pub, but we should really have an id_rsa/id_dsa file to accompany that.



2011 Dec 01, 01:51

Re: Re: Strangely named files

Not if they were doing kerb auth forwarding though, right? They'd have no reason to delete the actual root ssh keys, and if they used kerb instead of keys the attacker wouldn't have put any on the box.

It's too bad the .ssh/config wasn't recovered :(


Sam Crawford

2011 Dec 01, 02:05

Re: Re: Re: Strangely named files

Kerb auth or kerb auth forwarding? I've just been reading the docs on the latter, wondering if the agent forwarding feature that Jesse mentioned could be related to Kerberos auth being enabled. I don't think it is - you need TGT forwarding enabled to forward Kerberos auth, but there could be some combination or external configuration I'm overlooking.

Looking at Costin's post, I don't believe there was a .ssh/config - the only file under .ssh/ was 00000000000, which I'm in agreement is almost certainly known_hosts. Given that they were (relatively) careful, I'd be surprised if they created a .ssh/config file, when they could just pass everything to ssh via -o options.



2011 Dec 01, 02:07

Re: Re: Re: Re: Strangely named files

Whoops, good point. I saw .ssh in the path for the first and assumed the rest since it conveniently (almost) fit.


Jesse Carter

2011 Dec 01, 02:17

Re: Re: Re: Re: Strangely named files

To my knowledge you have to compile TGT in explicitly, which I didn't see in the logs. Feel free to correct me if I'm wrong. But I'd have to agree that kerb is in use for a reason. I guess I know what I'm reading up on tonight. :)


Jesse Carter

2011 Dec 01, 02:06

Re: Re: Re: Kerberos OpenSSH


Helpful notes. I don't disagree that they can replicate *most* of what netcat can do using existing techniques.

My point was that it makes lots of things much simpler over tunneled services.
Additionally netcat has the potential to open up a couple of security loopholes when trying to compromise a host remotely (as stated in the article you referenced).

Nice catch on known_hosts by the way. You beat me to it. :)


Sam Crawford

2011 Dec 01, 02:17

Re: Re: Re: Re: Kerberos OpenSSH

Jesse - I wasn't disagreeing with you, I was disagreeing (only in a minor way) with Costin's description of the -W option.

Anyway, I doubt the -W option was the driver for installing the newer SSH server, given (a) the fact it's a client side option, (b) is netcat-like, but not actually netcat (using -L/-R with netcat on one side would give you more flexibility, as -W purely uses stdin and stdout)

Really interesting topic! Imagine what we'd do if we had access to all of the logs :-)



2011 Dec 01, 01:55

Re: Re: Kerberos OpenSSH

It has been suggested (http://twitter.com/#!/_dvorak_/status/141981510914949120) that the changelog for OpenSSH 4.4 might provide clues (http://www.openssh.org/txt/release-4.4).

One of the changes was a pre-auth DoS found by Mark Dowd that could possibly allow RCE on the portable (non-OpenBSD) version if GSSAPIAuthentication was enabled (I believe it is by default). Since it involves a race condition, perhaps the repeated login failures were not password brute-forcing, but rather it took several attempts to win the race.


Sam Crawford

2011 Dec 01, 02:30

Re: Re: Re: Kerberos OpenSSH

I did the same as _dvorak_ earlier and had a dig through all of the intermediate versions :)

The DoS issue didn't seem to allow unauthorised access - just denial of service. I did wonder earlier if the 8-minute pause between SSH logins was used to change the root password via some unknown exploit, which then allowed them to SSH in (and potentially restore the old root password, so as to avoid detection).

There were also quite a few Kerberos related fixes/changes (disabling SPNEGO, checking extra paths for krb5-config, etc), but none struck me as a major feature.

I'm also leaning towards the agent forwarding feature that Jesse and Costin mentioned earlier. It'd be interesting to work out if that could be related to the Kerberos auth being enabled - I cannot see a relation there at the moment.


Jesse Carter

2011 Dec 01, 01:56

Re: Re: Kerberos OpenSSH


No problem, glad to help. Stinks about the KDC. I may be mistaken, but I'm under the impression that some implementations of Kerberos implement the KDC using properties instead of a .conf file. I'm not 100% sure whether or not OpenSSH has that option.

Regarding the deleted file names, I'm definitely in agreement with chort and Sam on that:

0 = .
00 = ..
000000 = config
00000000000 = known_hosts



2012 Mar 16, 21:56

Re: Kerberos OpenSSH

I wouldn't underestimate the OpenSSH+Kerberos bug theory. Let's see...
1. CentOS is known to lag behind RHEL in terms of rolling out patches; read this article dated Feb. 23, 2011 concerning CentOS <5.6: http://lwn.net/Articles/429364/.
2. On Feb 8, 2011, Red Hat issued several patches, including one for Kerberos (krb5). It is reasonable to think that attackers saw those patches and knew fixes were not yet available for CentOS 5 servers, which gave anyone a window of a couple of weeks to develop an exploit.
3. OpenSSH 4.3 allowed an unauthenticated client to pass embedded \0 characters in strings; this was fixed on August 31, 2010 and rolled out in OpenSSH 4.9.
4. These embedded \0 characters could have been passed on to krb5 by OpenSSH's GSSAPI support, see:

5. Patches for dovecot vulnerabilities were published on Feb 7, 2011, which may explain the crashes.

So it may be possible that the attackers were using an OpenSSH+krb5 bug all along and rushed to close the attack vectors when they became publicly known.



2011 Nov 30, 23:20

Not buying this

There were 2 exploits used to compromise this server, the first one probably against dovecot, and the 2nd one was a local kernel exploit to get root once they got in through dovecot.

The whole thing about a bruteforce attack on the password, in only a few tries?!? come on.. that's crap.

The next part I don't buy: if these guys are the brains behind "Stuxnet", why the hell are they using nano and pico and running all these iptables and other commands with --help switches? Not too many self-respecting *NIX hackers would be using either of those two editors.

Too many holes in this story, this article blows..


Johnny Hughes

2011 Dec 01, 15:55

Re: Not buying this

There are no known dovecot exploits that allow remote access on CentOS. There is an exim exploit that would allow remote access and one krb5 issue that could allow remote execution of code ... and that could then give them access.

See my comments below for the exploit links.

You would need to do some research before just wildly deciding that "they got in via dovecot".

Also remember that the sources used to build CentOS contain backported code for security issues, so it is NOT just openssh-4.3p2 ... it has patches for all known security issues.

Backporting: (https://access.redhat.com/security/updates/backporting/)

Security Updates: (http://rhn.redhat.com/errata/rhel-server-errata-security.html)



2011 Dec 01, 21:18

Re: Not buying this

I agree partially. I tend to lean towards some form of exploit vs brute force (whether that exploit is in ssh or something else). But 4 failed attempts doesn't scream brute force - more like a crappy typist, or they just didn't have the right password. The box was probably previously compromised and someone gave them the password; they tried to log in with the wrong info and had to go find the password (email, chat, irc, etc), then was able to log in correctly.

I agree about the "if they are smart enough to write the worm, they probably should know their way around a unix box better". I'm guessing this particular person logging in is probably some lackey in the group vs the high-level hacker that compromised the box (going back to my first paragraph).

Not to mention, isn't root login turned off by default in almost all ssh installations (regardless of distro or OS)? It was probably previously hacked and then enabled.



2012 Jan 02, 22:38

Re: Not buying this

100% agree.
Only a noob with no clue about how to use linux would be using commands with the "--help" switch... I mean "iptables --help"??? come on man wtf??
Also, these failed login attempts look more like another noob mistake, like having caps lock enabled, rather than a "brute force" method.



2011 Nov 30, 23:23


Rather than having to build a backdoored sshd for whatever version the server happened to be running, it was easier for them to upgrade it and install their pre-built backdoor.

Unless you found some other backdoor installed, this could be likely. Once you gain access to a machine, you don't close the door you came in through until you've opened another one.

What privilege level did the dovecot process run as? If it doesn't drop privs very well, I'd think that is more likely than OpenSSH.


Sam Crawford

2011 Dec 01, 00:31

Fascinating article. A couple of small observations:

1. The attacker's use of "up2date" (a RHEL-only tool in most cases) before attempting to use yum suggests that they are not reliant upon CentOS, and that RHEL would be an equally suitable target.

2. The netstat command suggests that they expect that some output may be returned by grepping for '1234'. This would most likely match a PID or a port, and I'd lean towards port as PIDs are dynamic. This suggests that either a service was listening on port 1234, or they were connecting to some upstream service on port 1234.

3. There is no 'openssh5' package on CentOS as standard. Either they are relying on a third party yum repository or they just don't have that much familiarity with CentOS (more likely). Additionally, updating the 'openssh-server' package will not bring you up to OpenSSH 5.8 on CentOS 5.x. The bash history suggests that they did install OpenSSH 5.8 though, which is puzzling.

Are the (anonymised) logs that Kaspersky are analysing going to be available for third parties to study?

Edited by Sam Crawford, 2011 Dec 01, 00:53


Costin Raiu

2011 Dec 01, 00:49


Sam, you are right - 1234 is indeed a port number.
Later, they do put a netcat on port 1234, although the purpose is unclear, since the output doesn't get redirected into a file.

nc -l -p 1234
nc -l 1234
nc -l 1234

Once again, it's interesting to see they are not fully familiar with nc command line parameters.
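The three invocations make sense as trial and error between netcat variants, whose listen syntax differs. A hedged illustration (which form works depends on which netcat build is installed on the box):

```shell
# Traditional/GNU netcat: -l needs an explicit -p to name the port.
nc -l -p 1234    # listen on TCP 1234 (traditional netcat)

# OpenBSD netcat: the port is a positional argument after -l,
# and older builds reject combining -l with -p.
nc -l 1234       # listen on TCP 1234 (OpenBSD netcat)
```

Trying one form and then the other, as in the logs, is exactly what someone unsure of the installed variant would do.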


Sam Crawford

2011 Dec 01, 01:07

Re: Re:

Well one thought would be for testing:

In the simple case, can some remote server reach this server on port 1234? Earlier in the command history they flushed the iptables ruleset, but that wouldn't help if there was some upstream firewall. They could be testing this.

As a slight extension to this, they could be using it to test SSH forwarding support. e.g. Can a remote SSH client forward traffic through the local SSHd to a local service (netcat in this case)?

The SSH case would appear to be the more interesting one, given that (a) it occurred after the OpenSSH upgrade and (b) there may have been some doubt about whether or not it would work, thus requiring testing (potentially related to the Kerberos auth?).
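The forwarding test Sam hypothesizes could be sketched like this (the hostname "compromised-host" is a placeholder for the C&C box; nothing in the logs confirms this exact sequence):

```shell
# 1. On the compromised server: listen with the freshly tried netcat.
nc -l -p 1234

# 2. From the attacker's machine: forward a local port across the
#    upgraded sshd to the netcat listener.
ssh -L 2222:localhost:1234 root@compromised-host

# 3. Still on the attacker's machine, in another terminal:
echo test | nc localhost 2222
# If "test" appears on the server-side netcat, the new sshd is
# passing forwarded traffic end to end.
```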



2011 Dec 01, 14:59

Re: Re:

These commands and the iptables flushing indicate setup for a reverse shell, reverse backdoor or something similar.
Maybe it was for their tunneling.



2011 Dec 03, 05:00


On your 2nd point, re: netstat;

I'm guessing that the backdoor they're using is one for OpenSSH where you have to connect from a certain source port (1234), or some other backdoored service that drops you to a root shell if you connect from the correct source port. Backdoors have operated like this for years; it's a lot less conspicuous for a backdoored daemon to act this way than to listen on a port that is not typically occupied by any of the normal or generic services. Think about it: you don't want your backdoor showing up in the IT guy's Nessus logs, do you?



2011 Dec 01, 12:17

up2date: to me this implies they have a big budget and are used to running RHEL, probably in their own labs, and ran it out of habit.

openssh5: perhaps they edited the DNS resolution, or somehow manipulated the DNS traffic at the IP level, to point the default yum repos to their own repo, which does contain a backdoored openssh5 package. They may hope that the name 'openssh5' sets off fewer alarm bells in people's heads than 'openssh-backdoored'. I suspect this is also why they compiled their own OpenSSH on the other system: backdoored. While they may very well be using Kerberos, the backdoor may be switchable with that option. Are any fragments of /var/log/yum.log available?

availability: I'd love to have a look over one of these images too.


Johnny Hughes

2011 Dec 01, 15:30

exim is a possible entry point on machines without an update

I am one of the CentOS Devs.

There is a critical issue with exim that Red Hat published a fix for on 2010-Dec-10 (http://rhn.redhat.com/errata/RHSA-2010-0970.html). CentOS released our version of that fix on 11-Dec-2010 19:59 for CentOS-5. You can see it in the Vault here (http://vault.centos.org/5.5/updates/SRPMS/).

Other than that, some samba issues (hopefully nobody is exposing samba directly to the internet without a firewall :D) and some web browser issues, there are not many issues that a 5.2 (or higher) server would have. There is one possible krb5 issue too (http://rhn.redhat.com/errata/RHSA-2010-0029.html) ... but that one is not likely.

They most likely used some kind of brute force method to find the root login. People should disable direct root logins over ssh and require users to "su" to root if needed; they should also control their ssh connections with keys (and disable passwords altogether) and/or limit access to external ssh ports to only those IPs (or networks) that require access.
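The hardening Johnny recommends boils down to a few sshd_config directives plus a firewall restriction. A minimal sketch (192.0.2.0/24 is a placeholder for whatever network actually needs access):

```shell
# /etc/ssh/sshd_config
PermitRootLogin no            # log in as a user, then "su" to root
PasswordAuthentication no     # keys only; kills password brute force
PubkeyAuthentication yes

# iptables: accept SSH only from the trusted network, drop the rest
#   iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
#   iptables -A INPUT -p tcp --dport 22 -j DROP
```

Remember to reload sshd after editing the config, and to test a new session before closing the current one.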



2011 Dec 03, 05:04


They did not use their own yum repo, nor did they brute-force any root password; the default CentOS distribution disables remote root login out of the box. It's pretty obvious that none of the people commenting on this have any clue about hacking.


Costin Raiu

2011 Dec 05, 16:39

Re: Lord..

Hi there,

" default centos distribution disables remote root login by default.."

Sorry to say, but you are wrong - default CentOS installs allow remote root logins. See above for the comment from one of the CentOS devs who recommends disabling remote root login and restricting access to trusted IPs only.


Digital Human

2012 Mar 09, 13:00

Re: Lord..

I do agree with you; brute forcing is unlikely. About the netcat thing: I used it once to execute commands from a remote host. With a simple bash script it's more than enough to keep a door open.
My other idea about the "yum update openssh5": what if they just used their own repo and redirected the DNS? Why else did they disable DNS in ssh_config? Only for speed? Rule number one of hacking: DON'T HURRY.



2011 Dec 07, 20:01

Dear friend,

About updating OpenSSH 4.3 to version 5.8 as soon as they hack a machine running version 4.3:

The update needs to uninstall the old files to make sure the new version works properly on the machine. That makes it easier for the attacker to clean up all the traces that had been made. The older version was the one that had been hacked, so evidence of the compromise was still in it; that's why the attacker updated it, to make sure that information would not fall into your hands. Otherwise it would be easier to find out where the attack came from.

Thank you. (Vegeances) P.S.: I volunteer to help you guys (age: 16). I just came up with this explanation after reading the article. (I hope I can be like you guys one day.)



2011 Dec 22, 05:56

What makes version 5.8 so special?

There is nothing special about 5.8, actually.
The special part, I believe, is in version 5.6.

This version allows SSH connection multiplexing and supports remote forwarding with dynamic port allocation; it can report the allocated port back.

Connection multiplexing allows much faster connections when using proxied SSH.
Remote connection forwarding can be used to bypass restrictive firewalls ;)
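Both features the commenter names exist in OpenSSH and can be sketched briefly (the hostname is a placeholder):

```shell
# ~/.ssh/config — connection multiplexing: later sessions reuse one
# TCP/SSH connection, which is much faster over slow or proxied links.
Host relay
    HostName relay.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m

# Remote forwarding with dynamic port allocation (OpenSSH 5.6+):
# asking for remote port 0 makes the server pick a free port and
# report it back ("Allocated port NNNNN for remote forward").
ssh -R 0:localhost:22 user@relay.example.com
```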

From my point of view,

1. During the Duqu updates, the attacker logged in manually just to update OpenSSH. Maybe Duqu itself had no provision to run shell commands? (This sounds quite unrealistic, but may be the case for an unknown reason.)

2. With the updated version of OpenSSH, the attacker might have tried to make the communication of the Duqu servers more secure and fast, probably also reducing the risk of a takeover of the control channel by defending parties.

3. With the new method, Duqu may leave fewer traces and may become harder to analyze.

Also, I am sure you have either not posted much data here, or have posted mostly irrelevant data that would not alert the attackers. :D



2011 Dec 30, 12:58

Ops vs. Developers

Based on the logs it seems clear that the people using the C&C servers are not very Linux savvy. There are two possibilities:

1) The Duqu/Stuxnet developers are experts on Windows but don't know Linux very well. Possible but unlikely.

2) The people who "use" Duqu/Stuxnet are different from the people who wrote them. That would be classic operational security as practiced by countries the world over: you tell the developers as little as possible about the target or the time frame for using the tool, and you have a separate ops team that knows the target and time frame but has limited knowledge of how the software works.

If you assume that Duqu/Stuxnet is government sponsored, then you have "government quality" individuals on the ops team and world-class developers writing the tools. That would explain the behavior seen in the logs.



2012 Jan 02, 23:02

Re: Ops vs. Developers

If that's the case, then there's no such thing as a "server hack" here.
What we are looking at is just a noob server op trying to do his work, and these C&C servers belong to the attackers.
If you have a big operation going on, you won't risk it by using noobs as server ops.



2012 Jan 03, 01:13

the intelligence of those responsible

The only possible explanation for the occurrence of "--help" and "man" is that it is part smoke and mirrors, part mind games.
Should an admin or someone else interrupt the attacker while he's doing his work, it would be easy for most people to discount the intruder's intelligence or significance. Observing this either in real time (through binoculars or a MiTM) or post-mortem, it could seem amateurish: a coincidental power outage, hardware failure or some other external event interrupted the compromise and left it unclean, and the admin discovers through a simple #history that some newb script kiddie with no clue how to use iptables, yadda yadda...
The admin wouldn't be calling the law because someone made his box a C&C server.



2012 Mar 08, 01:55

why openssh_5.8p1-4ubuntu1.debian.tar.gz?

One question I have not seen asked yet: why is this person (or persons) using an Ubuntu-patched version of OpenSSH on a CentOS box?

some possibilities:
- there is an exploit in this version of openssh that the ubuntu folks are shipping, and the people behind this are utilizing it on these CentOS boxes.


- as mentioned earlier, there are junior people doing the actual logging in and hacking on these systems. They are probably working off less-than-clear instructions or documentation, don't have much *nix-fu, and are probably working on a lousy internet-cafe connection.

I'm kind of leaning towards the second possibility, based on the huge difference between the quality of the code that has been written and how clumsy the deployment of the backdoor is. Aside from the choice of editors and the lack of knowledge of how iptables et al. work, a seasoned attacker would not rely on compilers being available on a target system to build an apparently required binary (in this case OpenSSH 5.x).

having said that - the first possibility *is* interesting :)


Digital Human

2012 Mar 09, 13:04


I do agree that brute forcing is unlikely. About the netcat thing: I used it once to execute commands from a remote host, which would explain the fast wipe. With a simple bash script it's more than enough to keep a door open.
My other idea about the "yum update openssh5": what if they just used their own repo and redirected the DNS? Why else did they disable DNS in ssh_config? Only for speed?
Rule number one of hacking: DON'T HURRY. It's just to cover up DNS requests.



2012 Mar 10, 07:38

the intelligence of those responsible

I found the use of "nano" etc. very strange...
I believe that whoever carried out the cleaning and the updates was following a how-to, like a soldier of an army on a mission.


Hilbert Space

2012 Mar 10, 20:59

Re: the intelligence of those responsible

"iptables -F"

... All iptables rules are flushed.
... This is stupid and only applicable to vanilla systems; if the iptables setup has forwarding rules and rewrites, or a default DROP policy, the downtime and customer anger WILL alert the sysop.
Not to mention that your current SSH connection's packets might well go to the bit bucket. End of story.
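To illustrate the point: a blanket flush versus a targeted rule. A sketch of what a more careful intruder would have run instead (assuming they only needed one inbound port opened):

```shell
# Blanket flush, as seen in the logs - on a box with a default DROP
# policy this can kill your own SSH session mid-command:
#   iptables -F

# Targeted alternative: open just the needed port, touch nothing else.
iptables -I INPUT 1 -p tcp --dport 1234 -j ACCEPT   # insert at the top
iptables -L INPUT -n --line-numbers                 # verify placement
```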

"uname -a"
"cat /etc/issue"

... These guys don't know what system this is!

"yum install openssh5" ("No such package")
"up2date" ("Command not found")
"yum search" ("What was that package name again?")

Not sure whether this is not a uni freshman who begged long and hard for a root password in chatrooms.

Edited by Hilbert Space, 2012 Mar 10, 21:31


Darrin Ward

2012 Mar 14, 09:12

I am so utterly fascinated by all of this. I am no *nix admin but I do play around with the command line a lot to get some work done... I compile some source code and hack things, but nothing major. I spend the vast majority of my time in the command line with Apache and php.

I totally agree about the uname -a and cat /etc/issue... When I saw that I was thinking "they don't know what system they're on".

I wonder if the port 80 and 443 issue has been fully investigated. Obviously they're the standard HTTP and HTTPS ports, but I would expect most machines to already be running servers on those ports (and as you can see, they ran into this and got binding errors - the ports were already in use). And you couldn't just disable the existing servers, because then the box owners would obviously notice that the web services were down, giving the attackers away.

So you'd have 2 choices: First option would be to use ports other than 80 and 443. Of course, then you would have to make sure those ports were open through firewalls (we can see they already worked with iptables), and traffic to a non-standard port would be suspicious anyway (although, even if servers weren't being run on 80 and 443, traffic to those ports would be suspicious!) ...

OR, second option, you could configure the existing server(s) operating on those ports to do the proxying of requests to other servers for you. That way you can leave the existing web server(s) serving the anticipated content, but perhaps set up a vhost for something else which is proxied. Apache httpd and most others can obviously do this. So, I'm wondering: were the httpd.conf files or other web server config files checked for anything suspicious?
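For what it's worth, the second option Darrin describes is only a few lines of Apache configuration. A hypothetical sketch (both hostnames are invented; nothing in the article says this is what the attackers actually did):

```apache
# Extra name-based vhost on the existing web server: the normal site
# keeps serving, while requests for one specific hostname are quietly
# relayed upstream. Requires mod_proxy and mod_proxy_http.
<VirtualHost *:80>
    ServerName innocuous.example.com
    ProxyRequests Off
    ProxyPass        / http://upstream.example.net/
    ProxyPassReverse / http://upstream.example.net/
</VirtualHost>
```

Spotting such a setup would mean checking httpd.conf and conf.d/ for unexplained ProxyPass entries.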
