Thursday, May 10, 2007

Forensics in the Enterprise

I had the opportunity last night to attend a demo of Guidance Software's EnCase Enterprise product. I already use the standalone version of their product, EnCase Forensic, and the Enterprise edition looks like an interesting extension.

EnCase Forensic runs on a single Windows workstation and allows you to image suspect hard drives and conduct detailed analysis on their contents. It's got a number of handy features built in, like the ability to do keyword searches, extract web viewing history and identify email messages. Pretty nice, and it makes most common forensic tasks a breeze.

The Enterprise edition builds on that by decoupling the data storage from the analysis workstation, providing a SAFE (I forget what this acronym stands for), which is basically a data warehouse for disk images and other extracted evidence. That's OK, I guess, but the really interesting part is the "servlet", their word for an agent that you install on the systems you manage.

The idea is that the servlet is installed on all the computers you manage, be they Windows, Linux, OS X, Solaris or whatever. They claim a very small footprint, about 400k or so. Using the analysis workstation, you can direct the servlets to do things like capture snapshots of volatile memory or process data, browse for specific information or even image the disk remotely, all supposedly without disrupting the user.

As a forensic analysis tool, this makes a lot of sense to me, but there's something I find even more interesting. The Enterprise edition is apparently a platform on which Guidance is building additional tools for Incident Response and Information Assurance. For example, their Automated Incident Response Suite (AIRS) ties in with Snort, ArcSight and other security tools and can use the agent to trigger data collection based on security events on your network.

For example, if Snort detects an exploit against one of your web servers, AIRS can automatically initiate collection of memory and process data every thirty seconds for the next few minutes. When an analyst comes by later to evaluate the Snort alert, they can view the snapshots to see if the exploit was successful and to help them track the attacker's actions on the system.
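
Conceptually, the automation isn't complicated. Here's a rough sketch, in Perl, of the kind of triggered collection loop I imagine AIRS performing; the function name, host address and five-minute window are invented for illustration and have nothing to do with Guidance's actual API:

    use strict;
    use warnings;

    # Stand-in for whatever call the real agent would answer; here it just
    # prints a line so the sketch runs on its own.
    sub collect_snapshot {
        my ($host) = @_;
        print "collected memory/process snapshot from $host at ",
              scalar localtime, "\n";
    }

    # On an IDS alert for $host, grab volatile data every 30 seconds for
    # a few minutes so an analyst has something to look at later.
    sub handle_alert {
        my ($host) = @_;
        my $interval = 30;         # seconds between snapshots
        my $duration = 5 * 60;     # keep collecting for five minutes
        for (my $elapsed = 0; $elapsed < $duration; $elapsed += $interval) {
            collect_snapshot($host);
            sleep $interval;
        }
    }

    handle_alert('192.0.2.10');    # address invented for the example

The hard parts, of course, are everything the sketch leaves out: authenticating to the agent, storing the snapshots as evidence and tying them back to the original alert.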

I have to admit that this falls squarely into the category of "I'll believe it when I see it working on my network", but I'm intrigued. I think this approach shows promise, and is a natural complement to the Network Security Monitoring model. Being able to tie host-based data into the NSM mechanism could be quite useful.

Other interesting products that use the servlet are eDiscovery and the Information Assurance Suite. eDiscovery is pretty much just what it sounds like: it helps you locate specific information on your network, perhaps in response to a subpoena. The IA Suite is similar, but also includes configuration control features and a module designed to help government and military agencies recover from classified data spillage.

OK, by this point it probably sounds like I'm shilling for the vendor, but really I'm just interested in some of the ideas behind their products, specifically AIRS. I hope to learn more soon, and maybe get to experiment with some of these features.

Tuesday, May 08, 2007

Helper script for email monitoring

During an investigation, it may sometimes be necessary to monitor email traffic to and from a specific address, or to capture emails which contain specific keywords or expressions. In the past, I've usually used Dug Song's Dsniff package for this, specifically the mailsnarf tool.

Mailsnarf sniffs a network interface looking for connections using the SMTP protocol. When it finds one, it extracts the email message and outputs it to stdout in the Unix mbox format. You can filter the traffic with a BPF filter to restrict the number of packets it needs to process, and you can provide a regular expression to make sure it only dumps emails you're interested in. All in all, it's a great tool.
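
For example, an invocation along these lines (the interface and address are made up for illustration) captures only traffic on the SMTP port and saves only messages matching one address:

    mailsnarf -i eth0 'suspect@example\.com' 'tcp port 25'

The first non-option argument is the regular expression and the last is the BPF filter expression.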

As a monitoring solution, though, mailsnarf is a bit lacking. If it dies, your monitoring is over, and if you have more than one thing to monitor, you either intermingle everything in the same mbox or try to juggle multiple mailsnarf instances yourself. It can be done, but it's fairly clumsy.

That's why I've decided to release my home-brewed wrapper script, mailmon. It provides a config file so that multiple email monitoring sessions can be configured and managed at once. It also checks up on the individual mailsnarf processes under its control, restarting them automatically if they die.
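
The core idea is nothing fancy: fork one mailsnarf per monitoring session, send each one's output to its own mbox and restart anything that exits. Here's a stripped-down sketch of that loop; it is not the actual mailmon code, and the target list, interface and filenames are invented stand-ins for what the real script reads from its config file:

    use strict;
    use warnings;

    # Invented example targets; mailmon itself reads these from its config file.
    my @targets = (
        { name => 'case42', pattern => 'suspect@example\.com', bpf => 'tcp port 25' },
        { name => 'case43', pattern => 'confidential',         bpf => 'tcp port 25' },
    );

    my %running;    # child pid => target being monitored

    sub start_target {
        my ($t) = @_;
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: write this session's messages to its own mbox file,
            # then run mailsnarf on an example interface.
            open STDOUT, '>>', "$t->{name}.mbox" or die "open: $!";
            exec 'mailsnarf', '-i', 'eth0', $t->{pattern}, $t->{bpf}
                or die "exec mailsnarf: $!";
        }
        $running{$pid} = $t;
    }

    start_target($_) for @targets;

    # If any mailsnarf exits, start a replacement for its target.
    while ((my $pid = waitpid(-1, 0)) > 0) {
        my $t = delete $running{$pid};
        start_target($t) if $t;
    }

mailmon adds the config parsing and the startup script mentioned below, but the restart loop is the part that makes it a monitoring solution rather than a one-off capture.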

So far I've tested it on RHEL4, but it should work anywhere you have perl and dsniff available. The startup script is Linux-specific, so *BSD users may have to provide their own, but it's really pretty simple.

You can download mailmon from the downloads page. If you give it a try, I'd love to hear about your experiences.

Update 2007-05-08 09:58: I can't believe I forgot to mention this, but the mailmon package also contains a patch to mailsnarf. I found that some mail servers can transition into status 354 ("send me the DATA") without the DATA command ever being used. Mailsnarf wasn't picking up on this, so it was missing a lot of the messages. My patch fixes this, and it can be used independently of the rest of the monitoring package. In fact, I recommend the patch to all mailsnarf users.