Monday, January 22, 2007

In which I tell you where to stick it...

You think you're funny, don't you?

I hear this a lot, especially from my family, but now I can honestly say it's true. I'm funny.

What, you don't believe me? F-Secure says I am, so it must be true! MUST!

Anyway, thanks to everyone who voted for my contribution. I can't wait to get my stickers!

Thursday, January 11, 2007

A new location for the new year

It's 2007, and since I just love to tinker with things that aren't broken, I've decided to move this blog to a new address, blog.vorant.com. The old blog and feed URLs will still work, they'll just redirect to the new location. Still, please let me know if you have any trouble with it, and thanks for reading!

Wednesday, January 10, 2007

I know the feeling...

Sometimes, when I bust out with just the right one-liner to find the critical data, I feel like this.

Tuesday, January 09, 2007

The Security Journal

The Security Journal is an online quarterly newsletter put out by Security Horizon. Some of the topics in recent issues include secure programming, HDES, wireless video security and the DefCon CTF competition. Check it out!

Wednesday, January 03, 2007

Locate new phishing sites before they're used against you

I just read this article that shows one way to locate potential new phishing sites by tracking domain registrations. OK, so it's not a very high-tech approach, but it's probably effective (except against the sites that just use raw IP addresses).

The service listed in the article requires a subscription fee. Does anyone know of any free equivalents?
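For the DIY-inclined, here's a rough sketch of the idea in perl. Everything here is illustrative: it assumes you've already got a list of newly registered domains from somewhere (a zone file diff, say), and "new-domains.txt" and the watchword list are just placeholders.

#!/usr/bin/perl
#
# Rough sketch only: scan a list of newly registered domains for
# lookalikes of brand names you care about. Assumes you already have
# such a list, one domain per line; the filename and watchwords are
# placeholders, not a real feed.

use strict;
use warnings;

my @watchwords = qw(paypal ebay citibank);

open(my $fh, '<', 'new-domains.txt') or die "Can't open list: $!";
while (my $domain = <$fh>) {
    chomp $domain;
    foreach my $word (@watchwords) {
        # Build a pattern that also catches simple leetspeak variants
        my $pat = $word;
        $pat =~ s/a/[a4]/g;
        $pat =~ s/e/[e3]/g;
        $pat =~ s/o/[o0]/g;
        $pat =~ s/l/[l1]/g;
        print "Possible phish: $domain (looks like '$word')\n"
            if $domain =~ /$pat/i;
    }
}
close($fh);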

Monday, December 18, 2006

Practicing What I Preach

Imagine for a moment that you worked for an employer who didn't do regular backups of their important data. As a security pro, you know that regular backups are a central part of any good information protection strategy. If you're like me, you'd never agree that going without backups was a sound long-term strategy for a business. That's just obvious.

Now stop imagining, and ask yourself whether or not you're following your own advice: are you backing up your personal data just as diligently as you protect your employer's?

I asked myself that same question last week, and the answer was "no." I'm sure most of my readers are in the same boat, so consider this my Christmas present to you: a recommendation for a cheap, easy way to back up your personal data. (Sorry, it's Windows only, since that's most of what I run at home.)

It turns out that there are a lot of players in this area, many targeting individual home users as well as small businesses. This is a pretty good roundup of several services. After doing a bit of reading, I decided to give one of them a try.

I chose Mozy, partly for its price ($4.95/month for unlimited storage), partly for its ease of use (set it and forget it), but mainly because it was the only service I saw that would let me both encrypt my backups and set my own encryption key, one that is not stored by the company itself. The key is available only on my own PC, and as their warning dialog puts it, I'm "hosed" if I ever forget it. Still, that means it's very unlikely that anyone but me will ever be able to recover my files from the company's backup servers.
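Just to illustrate the encrypt-before-you-upload concept (and to be clear, this is my own toy sketch, not Mozy's actual implementation), here's roughly what it looks like in perl with the CPAN Crypt::CBC and Crypt::Rijndael (AES) modules:

#!/usr/bin/perl
#
# Toy illustration of local-key encryption: the key lives only on your
# PC, so ciphertext is all the backup provider ever sees. Filenames
# and passphrase are made-up examples.

use strict;
use warnings;
use Crypt::CBC;

my $key = 'my secret passphrase';         # never leaves this machine

my $cipher = Crypt::CBC->new(
    -key    => $key,
    -cipher => 'Rijndael',
);

local $/;                                 # slurp whole files
open(my $in, '<', 'photos.tar') or die $!;
binmode($in);
my $ciphertext = $cipher->encrypt(<$in>);
close($in);

open(my $out, '>', 'photos.tar.enc') or die $!;
binmode($out);
print $out $ciphertext;
close($out);

# Upload photos.tar.enc; lose the key and you're "hosed", just like
# the warning dialog says.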

My experience so far has been quite positive. Setup was dead simple, and although it's going to take quite a while to upload that first massive 52GB bolus, block-level incremental backups should keep the volume down after that. I've tried some sample restores from the Web, and they've worked out well. For an additional fee, Mozy will even FedEx me a set of restore DVDs if I need to do a full recovery and don't want to wait for a giant download. About the only thing I don't like is that Mozy won't back up the simple NAS box I just bought, but I guess I can live with that.
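In case you're curious how block-level incrementals keep the volume down, here's my guess at the general technique, sketched in perl (again, not Mozy's actual code): hash each fixed-size block of a file and compare against the hashes from the last run, so only the blocks that changed need to be uploaded.

#!/usr/bin/perl
#
# Sketch of block-level incremental backup: hash each block, upload
# only the ones whose hashes changed since the previous run.

use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my $blocksize = 4096;
my $file      = shift or die "Usage: $0 <file>\n";

# %old would be loaded from the previous run's manifest; empty here,
# so the first run flags every block (the "52GB bolus" effect).
my %old;

open(my $fh, '<', $file) or die "Can't open $file: $!";
binmode($fh);

my ($buf, $blocknum) = ('', 0);
while (read($fh, $buf, $blocksize)) {
    my $digest = md5_hex($buf);
    print "block $blocknum changed, queue for upload\n"
        if !defined $old{$blocknum} or $old{$blocknum} ne $digest;
    $blocknum++;
}
close($fh);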

New Year's Day is fast approaching, and setting up a good personal backup system would make a great resolution. You can try many of these systems for free (including Mozy) and choose which you like best. Then you can party all night, secure in the knowledge that all those boozy, embarrassing digital photos from the office party will be safe from "accidental" deletion.

P.S. If you do decide to try Mozy, you'll get 2GB of storage for free. For an extra bonus, you can click through this link and get an extra 256MB. I've already paid for the full Mozy subscription, so I'm not getting any referral bonus, but I thought I'd post the link anyway. Who couldn't use an extra 256MB of space?

Update 2007-01-02 08:43: Just for your reference, it took me about 12 days to complete the first full backup. It actually came out to about 62GB, as I added some more files to the backup set even before the initial upload finished. I also installed Mozy on my wife's computer, so I was running two backups over a single connection for about 5 days.

All in all, everything has worked well so far. The biggest problem I've had is that the servers tend to disconnect the backup session during large (multi-gigabyte) transfers. This isn't serious, because the client simply restarts the backup a little while later, but I'm sure it slowed things down a lot. I don't see this much now that my initial backup is complete, though. Things seem to be going quite smoothly, and I'm still very pleased with the service.

Thursday, December 14, 2006

Sguil vs. BASE

This afternoon, someone asked me how I would categorize the differences between Sguil and BASE. I started with the standard response: "BASE is an alert browser, but Sguil encourages a more structured approach."

By the end of my reply, though, I found myself thinking about how to express this in a different way, something that emphasized the functionality of the two systems.

Here's what I came up with, excerpted from my own private email reply:

You can think of the process of intrusion analysis as formulating and
then trying to answer a series of questions. For example, one series
might be:


  1. Was this an actual attack?
  2. If so, was the attack successful?
  3. What other systems may also have been attacked?
  4. What activities did the intruder try to carry out?
  5. What other resources were they able to gain access to?
  6. How should we contain, eradicate and recover from the intrusion?

In this sequence, BASE does a great job of answering question #1. It may also have some information bearing on #3, but it probably won't supply enough to give good answers to #2, #4 or #5. By correlating additional information sources, Sguil is often able to come up with very good answers to each of the first five questions. And of course, the more information you have at your disposal, the easier it is to answer the most important question, #6.


Of course, I'd be very interested to hear from any BASE users who would like to either confirm or dispute my analysis. If that's you, leave a comment!

Monday, October 23, 2006

Comparing Automated Malware Analysis Services

Wow! It's been nearly two months since I posted anything here, for which I apologize. I've been busy as a bee. In fact, I think I caught the bees slacking off. I should be able to post a bit more regularly now, which ought to help me avoid losing my loyal reader.

This weekend, I happened to catch a computer that was infected with a fairly garden-variety IRC botnet, but this post really isn't about that. During the course of the investigation, I saw that the C&C server had instructed its victim to download and run a certain file. As I frequently do, I used wget to download the same file so I could see what was in it.

Normally, my first stop for malware analysis is the UNIX strings command, but in this case, the binary was packed, so there was very little information available.
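If you're wondering how to tell that a binary is packed in the first place, here's a quick-and-dirty heuristic in perl. It's just a sketch, not a real unpacker: it measures printable-string density and looks for the well-known UPX marker bytes.

#!/usr/bin/perl
#
# Packer heuristic: a normal Windows binary is full of readable
# strings; a packed one mostly isn't. Measure the fraction of bytes
# in 4+ character printable runs and check for UPX markers.

use strict;
use warnings;

my $file = shift or die "Usage: $0 <binary>\n";
open(my $fh, '<', $file) or die "Can't open $file: $!";
binmode($fh);
local $/;
my $data = <$fh>;
close($fh);
die "Empty file\n" unless length($data);

my $stringy = 0;
$stringy += length($1) while $data =~ /([\x20-\x7e]{4,})/g;

printf "%.1f%% of %d bytes are in printable strings\n",
    100 * $stringy / length($data), length($data);

# UPX-packed binaries carry section names like UPX0/UPX1 and a "UPX!" magic
print "Contains 'UPX' marker -- probably UPX-packed\n"
    if $data =~ /UPX[0-9!]/;
print "Low string density -- likely packed or encrypted\n"
    if $stringy / length($data) < 0.05;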

Now, I'm no programmer. I can do some simple C and some less simple Perl, but to be honest, I don't really have the sk1llz to do my own detailed malware analysis. Fortunately, this isn't much of a problem, because I rely on three excellent websites which can do this for me automatically.

I'm speaking, of course, about the Norman Sandbox, the CWSandbox and Virustotal. Each of these services offers a web-based interface for you to submit malware samples for automated analysis, usually returning the results within two or three minutes. Conceptually, they're all very similar, but they each have a different focus, so I thought a brief comparison might be in order.

Let's start with Virustotal, as it's much simpler than the others. Virustotal simply takes your sample and runs it against a variety of antivirus programs (I counted 26 this morning). The list includes most of the industry heavyweights, with the exception of Symantec. The end result is a nice report showing what, if anything, was detected in your sample. In my experience, it's rare for more than a handful of products to properly detect the newest malware I find, so it's very handy to have a service that runs a sample through so many products and summarizes the output. The final report is delivered right on the website: once you submit your sample, you get a results page immediately, and as the scans progress, it's updated with fresh results.

Where Virustotal simply tells you the name of the detected malware, both the Norman and CW sandboxes provide much more detail about the internal workings of the suspect binary. Using a combination of simulation, API hooking and other techniques, they actually run the binary in a sandbox environment and observe what actions it takes.

The Norman analysis (sample) relies on the same sandbox engine they use in their commercial antivirus products, and apparently you can purchase your own copy of the same sandbox system that the free website uses. The report comes back as a text-based email, and typically includes a list of files or registry keys that are created, deleted or modified. It will also usually tell you if the malware automatically restarts at boot time or not. Other features, like URLs opened or processes spawned, are kind of hit-and-miss. Norman failed to detect any of the network communication in the sample I submitted this morning. Overall, the report is a little on the basic side, but it's good for a quick read-through to see if you should be worried.

By far the most informative of the three services is the CWSandbox. Based on work by Carsten Willems at the University of Mannheim, CWSandbox is optimized for analyzing binaries that contain botnet software. The report is far more detailed than the other two systems', and includes much better information about files, registry keys and processes used by the malware. In fact, in my test this morning, it was the only one of the three to extract the list of IPs and ports the binary would attempt to connect to. Unfortunately, the report comes in XML format only, which would be great if I were going to process it with a script of some sort, but it does make things a bit harder for an analyst reading it directly. Still, since CWSandbox gives the most detail of the three, the extra effort is probably worthwhile.
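To show what I mean about XML being script food, here's a tiny perl sketch using the CPAN XML::Simple module. Fair warning: the element and attribute names below are invented for illustration, since I'm not reproducing the actual CWSandbox schema here; adjust the paths to match a real report.

#!/usr/bin/perl
#
# Pull C&C endpoints out of a sandbox report. The structure assumed
# here is hypothetical:
#   <report><network><connection ip=".." port=".."/>...</network></report>

use strict;
use warnings;
use XML::Simple;

my $report = XMLin('report.xml', ForceArray => ['connection']);

foreach my $conn (@{ $report->{network}{connection} }) {
    print "$conn->{ip}:$conn->{port}\n";
}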

If I had to pick just one as "the best" of the three, it would be CWSandbox, because it consistently gives the most detailed information about the suspect binaries I upload. Fortunately, I don't have to pick just one, and I routinely use all three to give myself the broadest coverage, and thus the best chance of finding out what the code really does.

Update 2006-10-30 14:08: Anti-spyware company Sunbelt Software also has an automated analyzer, it turns out. They're using the CWSandbox engine, but it delivers the reports in HTML or text-based emails. The HTML reports use very little formatting, but they're a lot easier to read than the XML reports CWSandbox itself returns.

I tried the Sunbelt version against the same binary I used to test the others, and though I found the report to be more useful overall (because it was readable), it didn't seem to include quite as much information about the Internet hosts the binary would attempt to communicate with. Perhaps Sunbelt has an older version of the engine?

In any case, Sunbelt generates my new favorite reports, though I still recommend submitting samples to more than one engine, just to make sure you've got your bases covered.

Update 2007-09-24 13:55: Paperghost over at Vitalsecurity.org posted a link to a new service called ThreatExpert.com. TE is part online sandbox and part malware encyclopedia. Basically, you upload malware samples and you get a nice report. The reports are also available to other users, indexed by their standard malware names (if known).

There are tons of sample reports in the database, and they are by far the best-looking and easiest to read of all the similar services. They seem to contain basically the same type of information you'll find in the competing services, such as registry keys modified, files created, processes forked, etc. I've also seen a few really nice extras, such as screen shots of any windows the code creates. TE also tries to identify the probable country of origin for the file, though I really have no idea how they do this. Strings analysis of the language in the embedded text, perhaps? Another big plus is that it's the only such service to offer an optional user account that lets you quickly locate and review reports for the samples you've submitted. I can see this being very useful.

You'll notice that I used the phrase "seem to" in the last paragraph. For some reason, both of the UPX-packed samples I submitted failed to run properly. I got a report, but the sandbox indicated that the file could not be analyzed. To be fair, both the Norman and the Sunbelt sandboxes choked on it as well, failing to report any malicious activity at all. While this did keep me from generating a good comparative report, I'd have to say that ThreatExpert seems like a promising addition to my list of "go to" services for automated malware analysis.

Thursday, August 31, 2006

Listing active Tor servers...

Want to know the list of active Tor servers at any given time? It turns out that this is fairly easy to do.

When a Tor client starts up, and from time to time during normal operation, it needs to refresh its list of active servers. The Tor network maintains several "directory servers" that keep track of which nodes are willing to perform onion routing functions. Because a client needs this list before it can join the Tor network (it can't use Tor to fetch it), the directory is published over plain HTTP, where anyone can fetch it from the public Internet.

The basic idea is pretty simple. Just visit one of the directory servers with a URL like the following:


http://belegost.mit.edu/tor/

The belegost.mit.edu system is one of the five or so authoritative Tor directories on the Internet, so it should be fairly stable. The file it returns contains all the information that a Tor client needs to know about each of the servers, including IP addresses and port numbers. The Tor directory protocol document can help you interpret the details fairly easily.

Of course, if you just want to know the list of active servers for monitoring or blocking purposes, you can just run the following perl script, which will dump out the server names, IP addresses and onion routing ports for you.

#!/usr/bin/perl
#
# Fetch the list of known Tor servers (from an existing Tor server) and
# display some of the basic info for each router.

use LWP::Simple;

# Hostname of an existing Tor router. We use one of the directory authorities
# since that's pretty much what they're for.
$INITIAL_TOR_SERVER = "18.244.0.114"; # moria2 / belegost.mit.edu
$INITIAL_TOR_PORT = 80;

# Fetch the list of servers
$content = get("http://$INITIAL_TOR_SERVER:$INITIAL_TOR_PORT/tor/");
@lines = split /\n/, $content;

# Each server entry begins with a line of the form:
#   router <name> <address> <ORPort> <SOCKSPort> <DirPort>
foreach $router (@lines) {
    if ($router =~ m/^router\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s*$/) {
        ($name, $address, $or_port, $socks_port, $directory_port) =
            ($1, $2, $3, $4, $5);
        print "$name $address $or_port\n";
    }
}


Update 2008-06-09 11:38: In the nearly two years since I wrote this original post, the Tor folks have updated their directory protocol, and this script no longer works. Please see my newer post for an update and some working code.

Tuesday, August 29, 2006

Network Security Monitoring Wiki Launched

In cooperation with the Sguil project, I've just launched a new wiki, dedicated to Sguil and all things NSM: NSMWiki.

If you're a Sguil user, an NSM/IDS analyst, or just interested in network security in general, you should find things of interest there. It's just a skeleton now, but the great thing about wikis is that you can help us flesh it out!

Friday, August 25, 2006

The Pwnion

Inspired in part by my earlier posting about The Hidden Dangers of Protocols, some of us on #snort-gui were playing with the idea of The Onion applied to security. Here are some of the headlines we came up with.

<Hanashi> "Area Man's Password Hacked After He Exchanges it for Chocolate Bar"
<Hanashi> "Commentary: I Totally Pwn!"
<Nigel_VRT> "Wireless: No Wires! What's up with that?"
<Nigel_VRT> "Undisclosed remote bug in software that some people use"
<Hanashi> "Study: Second graders way too amused by phrase 'IP'"
<helevius> "Jumbo Frames: Obese, or Just Big Boned?"
<Hanashi> "Covert Channel Discovered in IPv6: The Use of IPv6"
<Nigel_VRT> "If you don't use Microsoft Windows, you're an evil hacker. By Staff writer Will G. Ates"


What about you? Post your best funny headlines in the comments!

Wednesday, August 23, 2006

I love command-line perl

If you read my blog on a regular basis, you've probably figured out that my favorite tool for performing general analysis and data manipulation is perl. Preferably, perl from the command line:

% cat data.dat | perl -e '...some code...' | less

Daniel Wesemann, one of the handlers at the Internet Storm Center, posted a neato article on decoding traffic to uncover hidden executable content. And guess what it uses?
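For instance, here's a minimal pair of one-liners in the same spirit (not Daniel's exact code; "session.txt" is a hypothetical file containing just the base64-encoded lines pulled from a captured session):

% perl -MMIME::Base64 -ne 'print decode_base64($_)' session.txt > out.bin
% perl -ne 'print "Looks like a Windows EXE\n" if /^MZ/' out.bin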

Daniel, I like your style!

Tuesday, August 22, 2006

Dear BoingBoing...

If you've ever checked out my blogroll, you might have noticed that I read BoingBoing on a regular basis. Most of their posts are pretty interesting, but they seem to have a chip on their collective shoulder when it comes to a certain Internet content filtering product. Their most recent post on the subject really got my attention.

In response, here is an open letter to the editors at BoingBoing.

Dear BoingBoing,

I'm right there with you when you cover issues of government-sponsored censorship. Censorship is contrary to the ideas of freedom and democracy. But it's not censorship when the administrator of a corporate or privately-owned network enforces a policy on how their network should be used. In most cases, that's just good security management. Please don't confuse the issues, because it weakens your position when you speak out about legitimate abuses.

WebSense isn't censoring you. WebSense doesn't censor anyone. All they do is categorize websites; it's up to their individual customers to create and enforce filtering policies based on those categories. In this latest case, the site archive.org does in fact offer software downloads and is correctly categorized. And whether or not you agree with the idea, I can assure you as a security pro that filtering such sites is a very reasonable thing for many organizations to do, especially those with policies against installing arbitrary software.

Of course, as you're finding out, filtering technology isn't perfect. Sites are misclassified or simply not categorized at all. Filtering software can be bypassed, fooled or even brought down. That does not mean that it is worthless and evil, however. Every little bit helps, and the technology improves each day (albeit slowly).

Anyway, thanks for all your hard work on BoingBoing. I do enjoy reading most of what you write, but I hope you'll keep my points in mind next time you write a story about Internet content filters.

I answered my own question

A couple of weeks ago, I wrote about the FBI's call for help from the security community. My question then was, "How do we get in touch with the FBI to offer our help?"

It turns out not to be very difficult. The same day, I wrote the following email to my local FBI field office:

Good morning. I've been reading news articles for the past few days about FBI Cybercrime Unit Chief Daniel Larkin's speech at the Blackhat conference last week. In it, he called for assistance from the information security community. I admit that I'm not really sure exactly what sort of help is required, but I'd like to contact someone in your office in order to offer my assistance.

I've been doing IT and Cybersecurity for about 15 years, and my specialties include analyzing hacker techniques, security incident response and security monitoring/intrusion detection. I run my own security consulting company, which you can read about on my website (www.vorant.com).

I'd like you to know that if the need arises, I'm here to help.


To my surprise, I got an email from one of the Special Agents assigned to the cybercrime team at the local field office. We met this morning at the local coffee shop.

It turns out that the number one thing he needs is a network of people on the front line that can identify potentially-significant threats, ideally before they can become truly dangerous. New hacking techniques, novel scams and the like. As NSM/IDS people, we're likely to pick up on these sorts of things before they are otherwise widely-known, and this could provide law enforcement with a valuable heads-up.

Now, I'm not suggesting that you hand over your monitoring data to the FBI or any other agency at the slightest provocation. There are significant privacy and organizational policy concerns to be aware of. Unless you're actually reporting a crime, be careful about what you reveal. In most cases, however, there's no problem with sharing information about the technique or the possible impact of a successful attack.

If you'd like to try hooking up with your local FBI field office, check out this list of field offices. Give it a try and let me know how it goes.

Friday, August 18, 2006

Tony Bradley: Reform Candidate for the ISC(2) Board

Those of you who hold the CISSP know that elections for the ISC(2) Board of Directors are coming up soon. Usually the existing Board members put forth a slate of candidates themselves, but there is a provision for an individual to be added to the ballot without the Board's approval. If the candidate can get the support of 1% of the membership on a petition, they're added to the ballot.

Today, I got the official notice that one member, Tony Bradley (CISSP-ISSAP), has elected to pursue this route. After reading his platform, I've decided to sign his petition. Here's an excerpt, which summarizes things nicely:

There are three primary components of my platform and vision for ISC2 should I be elected.

  • ISC2 should work to cooperate and coordinate with other security organizations and professionals to develop a set of generally accepted information security principles.
  • ISC2 should publicly vet the contents of the CBK in order to improve the information as well as expanding the acceptance and respect granted to those who show an understanding of the CBK through achieving the CISSP certification.
  • Election procedures should be modified to reduce the hegemony of the sitting Board of Directors and ensure a fair election process that allows the constituents to pursue election more freely.

I strongly support his first two points. We're plagued with "professionals" without proper training or knowledge and "solutions" that only solve vendor cash flow problems. Common Criteria is an attempt to solve the "solutions" problem, but so far there isn't a good solution for ensuring that we're hiring competent professionals. Certifications (like the CISSP) and accreditations may be an answer, but without an industry-wide agreement on their content, it's difficult to evaluate their effectiveness.

That's where the ISC(2)'s Common Body of Knowledge (CBK) comes in. It's probably the most extensive security knowledgebase available, but it's also proprietary, and that holds it back. Having just taken the CISSP test this year, my experience with the CBK is fresh in my mind. There are parts of it that sorely need the kind of open review process Bradley proposes.

If you're an ISC(2) member, I encourage you to read Mr. Bradley's comments.

Wednesday, August 16, 2006

Dumbest. Seminar. Ever.

I got the following today, from a very reputable security vendor. I'm not going to name them, because I think they're otherwise pretty good, but this is really just pathetic:

The Hidden Dangers of Protocols

Protocols are the sets of rules governing the communications of network applications, such as instant messaging, peer-to-peer file sharing, and streaming media. But, protocols may allow these communications to transmit confidential information, spyware, keyloggers, worms, viruses, and other security threats into and out of your organization. Protocols may also allow users to access inappropriate applications. Plus, they may be consuming valuable network bandwidth and draining user productivity in the process!

Protocols are dangerous? I've got news for this writer: protocols put the "work" in "network". Without TCP/IP, odds are you don't have a network, and that's not counting the myriad other protocols above and below the IP level.

Clearly, the vendor (as a whole) understands the concept of protocols perfectly well. So I conclude that either they've hired an incompetent writer or editor in the marketing department, or they're taking the chance to do a little FUD-mongering. I'll be charitable and guess it's the former.

Seriously, this is the dumbest thing I've seen from a security company in quite a long time.

Tuesday, August 08, 2006

Decrypting XORed ICMP Packets

This morning, WebSense issued an alert about a new trojan that uses ICMP packets to send banking data back to the controller's collection site. As far as I can tell, it registers as a browser helper object (BHO) with IE, then waits for you to visit any of a predefined list of banking site login pages. When you submit your authentication data, it sends a copy back to the attacker via ICMP.

This is new in a mass-market trojan, but the technique has been known for some time (Google "loki ICMP" to see some early work on ICMP tunnels). The interesting part is that WebSense posted a sample packet in both encrypted and decrypted form (minus a few bytes that contained personally identifiable information about the victim).

Just for fun, I decided to find the decryption key. For your reference, here is the key used to encrypt the WebSense sample packet:


60 31 75 70 73 a0 a9 91 d7 c4 09 7b 78 e9 b3 8e
89 8b 5f 6d 6f 6a 2b 53 48 54 5c b9 2c 36 63 04
d3 1a 58 02 fa e7 d9 17 3f da db 41 bb f7 b3 83
86 83 4e 20 7f 6d 79 07 8c 10 56 01 f6 23 d1 1a
e0 5d XX XX XX bc 3e 13 1b 0f 01 4d bd 9f b8 92
9e 8b 7e 22 XX XX XX XX XX XX XX b7 ca f4 eb f0
c7 e2 08 f5 05 e5 a4 85 87 82 f6 6e 54 3f XX XX
XX XX XX f8 e6 e1 8b 90 83 8b 82 98 6d 73 60 47
92 90 9c 7f 68 6a 7c 42 52 53 42 2f 34 37 37 04
1d 72 68 ab 93 98 a5 97 97 94 98 fc e2 e9 e4 d3
2a c6 da 34 54 40 5c 1c c2 04 06 67 15 a4 a0 98
9a 1c bd 5c 71 6e b2 a2 e2 1c 49 3f 2c d7 f5 0e
1f 5f c1 ea fc ed 10 24 cc d7 dc fe 90 ab a0 9b
53 75 89 6a 2c 45 74 5f 5a 96 a5 3f 28 60 2c 00
17 36 d1 f1 e9 eb 0b c1 85 95 9d e5 f7 12 34 0c
59 56 5a a0 b3 65 64 55 56 56 03 78 0a 6d 1a fc
f0 d5 9a 41 57 4b 52 5a 36 4f 3d 66 77 54 7e 6a
90 cf b8 41 3e 34 ff e5 f3 cb de b2 eb f3 e9 cf
c2 0d 1c ee ab cb e4 de 8c 65 63 a1 bd 77 69 69
8e 92 86 6f 82 60 26 54 4c 4a 53 da 56 4a 47 6f
4f d2 a4 25 00 00

The XX bytes are the ones where the plaintext was obscured, so I couldn't recover those portions of the key from the single sample packet.

Bonus points to the first reader who posts code to verify this. It's pretty simple, but a fun etude for security folks. Extra bonus points if you can confirm that all packets use the same key and/or find missing key bytes that weren't available in the sample packet. What are you waiting for?
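If you'd like a head start, here's the skeleton of the approach (recovering the key is just cipher XOR plain, byte for byte; filling in the rest is left to you). The filenames are hypothetical: it assumes you've saved the two hex dumps from the WebSense alert as whitespace-separated bytes, with the obscured bytes marked XX like above.

#!/usr/bin/perl
#
# XOR the encrypted sample against the decrypted sample, byte for
# byte, and the key falls out.

use strict;
use warnings;

sub read_hex {
    my $file = shift;
    open(my $fh, '<', $file) or die "Can't open $file: $!";
    local $/;
    my @bytes = split ' ', <$fh>;
    close($fh);
    return @bytes;
}

my @cipher = read_hex('encrypted.txt');
my @plain  = read_hex('decrypted.txt');

for my $i (0 .. $#cipher) {
    # The obscured bytes can't be recovered, so mark them XX
    if (!defined $plain[$i] or $plain[$i] =~ /XX/i or $cipher[$i] =~ /XX/i) {
        print "XX ";
    } else {
        printf "%02x ", hex($cipher[$i]) ^ hex($plain[$i]);
    }
    print "\n" unless ($i + 1) % 16;
}
print "\n";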

Monday, August 07, 2006

"FBI needs hackers" -- Ok, now what?

Breaking non-news! Vnunet reports that the FBI needs hackers to help it understand and combat cybercrime. I've heard this a million times before, and not just from Blackhat last week.

So my question is what, specifically, should we be doing? I'm sure I'm not the only security pro willing to lend a hand, but how should we each go about doing it? There are plenty of us who have solid technical chops, the ability to communicate with non-technical audiences and the strong desire to use our powers for good. Has anyone successfully stepped up, and can you share your thoughts on how to get started?

Thursday, August 03, 2006

Crazy Botnet Idea

I read with interest Gadi Evron's recent post, mitigating botnet C&Cs has become useless (while you're at it, read Richard Bejtlich's response, too). Gadi Evron has probably forgotten, relearned and forgotten again more things about botnets than I will ever know, so when he speaks on the subject, I pay attention.

As with most security problems, the basic issue here is that competition drives innovation. Like biological evolution, the digital evolutionary imperative means that as we improve our defensive techniques, the Bad Guys improve their offense. With the transition from amateur malware writers to professional bad actors, the pace of this evolution has already increased dramatically, and it will only continue to accelerate.

So what can we do? First, let me make the following assertion:

Botnets are harmful both to local networks and users, and to the Internet community as a whole.

I don't think anyone would disagree with that statement. Let's follow it up with another:

Doing nothing to protect ourselves is not an option.

Now, if we accept Gadi's premise that cutting off C&C channels is harmful in the long run, where does that leave us?

The root of that premise is that long-term harm occurs because our current methods have too high an impact on botnets. By shutting off the C&C servers, we take a discernible action, forcing botnet owners to respond or lose access to their nets.

Let's think about this for a second. If we've so far fought the problem by shutting down C&C servers, it stands to reason that most of the botnet countermeasures have also evolved to deal specifically with that sort of threat. If we're able to change our response, we may gain (temporary) advantage.

So here's my crazy idea. The Honeynet Project already has tools that dynamically modify network traffic so that outgoing attacks from their honeypots can't affect other Internet hosts. Let's repurpose this technology to act in reverse: let's use it to keep our own hosts from fully participating in identified botnet C&C channels.

How would this work? I haven't worked all the details out, but I'm thinking of something like the following:

  1. The attacker sends a command to a host through the C&C channel (e.g., .download http://my.bad.host/file.exe)
  2. An anti-botnet gateway at the user's site intercepts this, and transparently modifies the packet to say something slightly different (e.g., .d0wnl04d http://my.bad.host/file.exe)
  3. The botnet node on the compromised PC doesn't recognize this as a command, so takes no action
  4. The attacker never realizes that he's lost control of the botnet node, because it's still participating in the C&C communication channel. Even though he's not getting any responses to some of his dangerous commands, perhaps less dangerous commands are working well, disguising the fact that we're interfering with his attacks.
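Here's a toy perl version of step 2, just to make the idea concrete. A real gateway would have to rewrite packets inline and fix up TCP checksums and sequence numbers (the hard part); this only shows the substitution itself, and the command list, apart from the .download example above, is invented for illustration.

#!/usr/bin/perl
#
# Mangle known bot commands in a C&C payload so the bot no longer
# recognizes them, while leaving the channel itself intact.

use strict;
use warnings;

my %mangle = (
    '.download' => '.d0wnl04d',
    '.update'   => '.upd4t3',
    '.execute'  => '.3x3cut3',
);

while (my $payload = <STDIN>) {
    foreach my $cmd (keys %mangle) {
        $payload =~ s/\Q$cmd\E/$mangle{$cmd}/g;
    }
    print $payload;   # passed along, minus any working commands
}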

You're probably saying to yourself, "Hey, this requires a lot of work to maintain." If so, you'd be right. We'd have to track the botnet C&C channels, analyze their command structures and somehow get updates out to the protected sites. I think of this as sort of an extension to current spyware/malware discovery processes, and we'd probably need some serious commercial support if this idea were to ever be implemented.

The point is, we've tried attacking the problem both from the C&C side (shutting down servers) and from the client side (patch management & user education). Now let's try something that more fully leverages our control over our own network infrastructure, too.

I told you it was a crazy idea.

Friday, July 21, 2006

Extracting gzipped or Unix script files from pcap data

During an incident response, it's often handy to be able to examine the actual attack traffic, and if you're using Sguil, you probably have it handy. One common situation is that an intruder has transferred files to or from your network, and you'd really like to see what's in them.

There's a great tool for extracting arbitrary files from pcap dumps: tcpxtract. Similar to the way hard drive forensic tools look through bytes on the disk to find the "magic headers" at the beginning of various types of files, tcpxtract combs through pcaps looking for file types it knows, regardless of the transport protocol used to ship them over the network. When it finds them, it writes them out to individual files for manual analysis.
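If you're curious, the core trick is easy to sketch in perl. This is a drastic simplification (tcpxtract also reassembles the packets and understands file footers; this just scans a saved stream for magic bytes and carves out a fixed-size chunk at each hit):

#!/usr/bin/perl
#
# Simplified file carving: find magic numbers in a reassembled
# session and dump a chunk of data starting at each one.

use strict;
use warnings;

my %magic = (
    "\x1f\x8b\x08" => 'gzip',      # gzip header, deflate method
    "\x23\x21\x2f" => 'script',    # "#!/"
);

my $file = shift or die "Usage: $0 <stream-file>\n";
open(my $fh, '<', $file) or die "Can't open $file: $!";
binmode($fh);
local $/;
my $data = <$fh>;
close($fh);

my $count = 0;
foreach my $sig (keys %magic) {
    my $pos = 0;
    while (($pos = index($data, $sig, $pos)) >= 0) {
        my $out = sprintf "%s-%04d.%s", $file, $count++, $magic{$sig};
        open(my $ofh, '>', $out) or die $!;
        binmode($ofh);
        print $ofh substr($data, $pos, 1_000_000);  # same 1MB cap as below
        close($ofh);
        print "Carved $out (offset $pos)\n";
        $pos++;
    }
}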

Tcpxtract is a great tool for NSM practitioners, and should be in everyone's standard kit. There are a few common file types it doesn't support, but you can easily fix this by editing the tcpxtract.conf file to add new types if you know their magic numbers.

My friend geek00L has already blogged about adding Windows PE executable support. Now I'm here to tell you how to add support for gzipped files and Unix script files like "#!/some/file" ("#!/bin/sh" or "#!/usr/bin/perl" for example).

Just add the following two lines to the end of tcpxtract.conf:


gzip(1000000, \x1f\x8b\x08);
script(1000000, \x23\x21\x2f);

A little anti-climactic after all that buildup, wasn't it? I've been advised that the script signature is likely to throw lots of false positives on SSL sessions, so you may want to keep it commented out until you know there's a script file in the session you need to find.