Friday, December 21, 2007

Heading home for the holidays?

Take a look at PortableApps.com, a repository of free software modified to run directly from a USB stick. Not only will you find a nice anti-virus scanner (ClamAV) to help you tune up your relatives' PCs, but you'll also get Firefox, Thunderbird, GAIM and other useful applications to help you stay connected.

Really, I just want everyone to run AV on their family computers. I just put that stuff about Firefox and Thunderbird in to tempt you into downloading the package. Heh.

Monday, November 12, 2007

Holy Grail Found

...or not. Who knows? This article, published by the University of Wisconsin-Madison, claims that UW-Madison's Paul Barford has developed technology (called "Nemean") to automatically identify botnet traffic. So far, so good. To be honest, the industry could really use this kind of solution.

Here's the part I have trouble with, though:

The Achilles’ heel of current commercial technology is the number of false positives they generate, Barford says. Hackers have become so adept at disguising malicious traffic to look benign that security systems now generate literally thousands of false positives, which Nemean virtually eliminates.

In a test comparing Nemean against a current technology on the market, both had a high detection rate of malicious signatures — 99.9 percent for Nemean and 99.7 for the comparison technology. However, Nemean had zero false positives, compared to 88,000 generated by the other technology.


I'm not even sure I know where to start here. These numbers sound quite impressive, but without knowing much more about how the tests were conducted, they really don't mean very much. First of all, how many unique types of botnets were tested, and what were they? Also, who chose the test cases? I can achieve a 99.9% detection rate with 0% false positives too, if I control both the software being tested and the selection of the test cases.

Next, I wonder what specific technology they tested against, and whether they tuned it properly for their test environment. I suspect not, as 88,000 false positives is quite high. I doubt they had 88,000 unique botnet samples, either, so there were probably multiple alerts for each, which can easily be tuned down.

Finally, and most importantly, even if you do achieve near perfect results on a selection of current botnets, this is likely to be only temporary. The Bad Guys are virtually guaranteed to step up their game too.

To be fair, this is addressed in the article:

While Barford has high hopes for Nemean, he says Internet security is a continuous process and there will never be a single cure-all to the problem.

“This is an arms race and we’re always one step behind,” he says. “We have to cover all the vulnerabilities. The bad guys only have to find one.”


At least it sounds as though Dr. Barford does have an idea of the scope of the challenge. Perhaps there is more to this project than was presented in this fluff article. I really would like to hope so, but I'm a bit skeptical based only on the information presented here.

Saturday, November 10, 2007

File under: Things You Wish You Didn't Know

After reading this, I may never be able to attend a certain security convention ever again.

Thursday, November 01, 2007

NSMWiki in print again

This month's ISSA Journal has another article by Russ McRee on NSM topics. You may remember that I've blogged about Russ' articles before. This time, he's writing about Argus, an excellent suite of tools for implementing distributed collection of network flow information. It also happens that he mentions NSMWiki, which I maintain for the Sguil project.

If you're not an ISSA member, you can read Russ' article here.

Thursday, October 25, 2007

Sguil training: is there demand?

Ok, so I know that there are a fair number of Sguil users out there now, with more coming every day. Some of us have kicked around the idea of providing some Sguil training, though we never seemed to have the critical mass of potential students. Maybe the time has come to consider this again?

What I'd like to know is this: if there were a one day class that covered Sguil administration (installation, troubleshooting and maintenance) offered in the Hampton Roads, VA area (Norfolk, VA Beach, Williamsburg), do you think you would be interested in attending?

If you would be interested in attending, how much would you be willing to pay for such a class?

Would your answer change at all if the class also taught you how to actually analyze events and research incidents with Sguil, possibly as a second day?

I'm not promising to run a class, but I've been interested in doing so for quite some time, and if I get enough interest, it's certainly a possibility.

Please email me or leave a comment here with your thoughts. Thanks!

Tuesday, October 02, 2007

Ok, there's something wrong here...

You may have seen this article about a survey McAfee conducted in conjunction with the National Cyber Security Alliance. In brief, they polled a few hundred random Americans about the security software they were using on their PCs, then followed up to see whether the actual configuration matched what the users thought. No surprise that they found only about 50% of the people had up-to-date AV software, had their firewall turned on, or were running any sort of anti-spyware software.

The numbers sounded reasonable in this article, but then I read the following sentence:

But when pollsters asked to remotely scan the respondents' computers, the story turned out to be very different.

Just who exactly would allow strangers to "remotely scan" their computers to verify their security settings? Hmm... Oh! I know! Perhaps the very people who are least likely to already practice good security measures!

Seriously, I'm no statistician or pollster, but this methodology sounds fishy. It smells like a self-selected sample (the group of all people who lack the basic security skills to keep strangers out of their systems). I'd love to know if this was somehow accounted for in their methodology, but until then, I don't think these numbers are at all useful.

Update: 2007-10-02 13:34: BTW, I forgot to make my other point, and that is that the pollsters seem to have used exactly the same sort of techniques that we try to condition our users against, namely social engineering attacks. So that lends further credence to my belief that they probably ended up doing a survey of those who were already the worst at security.

Sguil covered in Information Security Magazine

Richard Bejtlich points out that the October issue of Information Security Magazine has an article by Russ McRee, entitled Putting Snort to Work. The article is about Knoppix-NSM, a Linux LiveCD designed for easy monitoring. Knoppix-NSM includes a preconfigured Sguil server and sensor, and Russ has a lot of nice things to say about it.

It's really good to see Sguil in some mainstream security press. VictorJ's modsec2sguil custom agent and our very own NSMWiki even get mentioned, so I know he's done some homework.

Thursday, September 27, 2007

Snort 2.8 is out

Grab it from the Snort download page, and read my previous post about some of the new features.

I still haven't tried this myself yet, but soon. Honest.

Thursday, September 13, 2007

I call shenanigans!

The Telegraph reported yesterday that aerospace giant EADS has developed "the world's first hacker-proof encryption technology for the internet" [sic].

Amazing! There are at least three separate errors in just that one 10-word sentence fragment. I tried to find more info on the company's website, but all I could find was this year-old press release on a product called ECTOCRYP. I assume it's the same product; how many encryption protocols would one company develop? Let's take a look at some of the problems with the company's statement.

First off, as everyone reading this post probably already knows, there's no such thing as "hacker-proof". The definition of a hacker is someone who can change the rules of the game in his or her favor, so in order to be "hacker-proof" you've got to somehow deny them that opportunity. Most knowledgeable attackers realize that attacking the encryption algorithm itself is usually unnecessary. To be truly "hacker-proof", you must have not only a robust algorithm, but also perfect key management (both protocol and implementation) and secure endpoints on both sides of the communication. That last point, by the way, also implies users and administrators who never make any mistakes configuring or using the system.

The second problem is that "world's first" thing. Probably the most commonly deployed encryption technology today is SSL, and let me tell you, it's pretty much "hacker-proof" if you are just trying to cryptanalyze a captured session, which, as I just mentioned, you probably wouldn't do.

Along the same lines, here's a quote from the article:

At the heart of the system is the lightning speed with which the "keys" needed to enter the computer systems can be scrambled and re-formatted. Just when a hacker thinks he or she has broken the code, the code changes. "There is nothing to compare with it," said Mr [Gordon] Duncan [the company's government sales manager].

So their big innovation is that they change the keys frequently? It's true, there's nothing like it... except for all the things that already do that! TKIP has been doing this for years ("T" is for "Temporal", and that's good enough for me). Even SSL can do this (via its key renegotiation protocol), though in practice it's rare since most SSL sessions are too short.

Finally, I'm impressed that they've developed an encryption technology "for the internet". IPv6 or IPSEC might be "for the internet" in the sense that they're tied directly to network protocols used to communicate over the Internet, but that doesn't seem to be the case with ECTOCRYP. It's probably just a stream or block cipher, which could be used equally well on, say, a dedicated serial line. I think what they mean to say is that it's a protocol used for protecting information in transit, as opposed to encrypting files at rest on the disk. That doesn't mean that it's designed "for the internet".

All of this is beside the main point, however, which is that I couldn't find any technical details on the encryption algorithm itself. I'm no professional cryptographer, so perhaps it has been published in a trade journal somewhere that I can't find in Google, but compared to other widely available encryption algorithms, I doubt it has undergone much peer review. That in itself makes the system suspect.

By the way, if you do Google "ECTOCRYP" you'll find this craptastic marketing video.

Tuesday, September 11, 2007

The embassy password thing

I wasn't going to comment on this, but we were discussing it a bit on #snort-gui today, and it made me wonder...

People everywhere are talking about how the embassies are foolish for using Tor to provide "secure" remote access to their systems. Ok, we can all agree that Tor isn't really suitable for that. But as someone on #snort-gui pointed out, how do we know it was an official embassy-supported application, and not just some power users who decided to start using Tor on their own? We don't! At least, I couldn't find any confirmation that the network admins were pushing Tor on their users.

But really, this got me thinking some more this afternoon. How do we know it was the embassy users who were using Tor? If I were a hacker who had somehow gained access to accounts in the embassies, I might use Tor to disguise my origin and wouldn't care if the passwords were exposed. I think there are any number of actors who could be behind this, so I don't want to name any names, but Certainly, Hacking Insecure Networks Around the world is a specialty of some...

Update 2007-09-11 16:02: giovani from #snort-gui reminded me that he was the one who brought up the idea of the power users. And he is unnervingly happy about being referenced here, even indirectly. Thanks, giovani!

Would you notice this?

Just a quick thought I had. If your organization is using virtualization to pack many VMs onto your existing server platforms (as many sites are trying to do these days), would you necessarily notice if an additional VM popped up?

It turns out that VMs can be very small (the smallest VMware image I found with a quick Google search was 10MB, which would fit in my /tmp partition). Many, perhaps all, of the VM packages provide command-line tools to manage the running guest systems. VMware even provides a Perl API for this.

If I were an attacker who managed to get access to your VM system, could I insert my own VM image and make it run? If so, I could potentially have my own custom hacking environment, with root privileges and whatever software I needed, without creating too many files or new processes on the host OS. Unless you're looking carefully at every file on the system, or watching what VMs are running, would you notice?
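Even a crude check would help here. Below is a rough sketch that compares the currently registered VMs against a known-good baseline; it assumes the old VMware Server/ESX-era vmware-cmd tool, and the baseline path is just a placeholder, so adjust both for your platform.

#!/bin/sh
# Compare the list of registered VMs against a known-good baseline and
# complain if anything new appears. Assumes "vmware-cmd -l" is available;
# the baseline path below is only an example.
BASELINE=/var/lib/vm-baseline.txt
CURRENT=/tmp/vm-current.$$

vmware-cmd -l | sort > "$CURRENT"

if [ ! -f "$BASELINE" ]; then
    echo "No baseline found; saving the current VM list as the baseline."
    cp "$CURRENT" "$BASELINE"
fi

# Anything registered now but not in the baseline deserves a closer look
NEW_VMS=`comm -13 "$BASELINE" "$CURRENT"`
if [ -n "$NEW_VMS" ]; then
    echo "Unexpected VM(s) registered on `hostname`:"
    echo "$NEW_VMS"
fi

rm -f "$CURRENT"

Run it from cron and mail yourself the output, and at least the "extra VM" scenario won't go completely unnoticed.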

Would anyone with real-world virtual server experience care to share their thoughts?

Find Storm with Sguil (and a little ingenuity)

My friend nr has started a new security blog. His first real post is about using Sguil to detect Storm worm infections. Welcome to the blog world, nr!

Thursday, September 06, 2007

Cloaking your investigative activities in Sguil

I've written before on disguising your outbound DNS queries. In short, even looking up an IP to find the hostname might alert an attacker that you're on to them if they control their own DNS server. I briefly described how to create a simple proxy DNS server that could send all your queries through a third-party, like OpenDNS, which should make it harder for the bad guys to figure out just who is checking them out.

I'm happy to say the new Sguil 0.7.0 client now incorporates third-party DNS server support natively. In sguil.conf, you simply set the EXT_DNS, EXT_DNS_SERVER and HOME_NET variables, like so:


# Configure optional external DNS here. An external DNS can be used as a
# way to prevent data leakage. Some users would prefer to use anonymous
# DNS as a way to keep potential malicious sources from knowing who is
# interested in their activities.
#
# Enable Ext DNS
set EXT_DNS 1
# Define the external nameserver to use. OpenDNS lists 208.67.222.222 and 208.67.220.220
set EXT_DNS_SERVER 208.67.222.222
# Define a list of space separated networks (xxx.xxx.xxx.xxx/yy) that you want
# to use the OS's resolution for.
set HOME_NET "192.168.1.0/24 10.0.0.0/8"

In this example, I've configured Sguil to use one of the OpenDNS name servers (208.67.222.222) to look up all hosts except those on my local LAN (addresses in either the 192.168.1.0/24 or the 10.0.0.0/8 range).
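If you want to confirm the client is actually behaving this way, a quick packet capture on the analyst workstation does the trick (the interface name here is just an example):

tcpdump -n -i eth0 udp port 53 and host 208.67.222.222

Lookups for anything outside HOME_NET should show up in that capture; lookups for your local ranges shouldn't.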

So that takes care of DNS, but what about WHOIS? Ok, so maybe it's a bit less likely that the attacker has also compromised a WHOIS server, but less likely doesn't mean that it hasn't happened. It probably has, and it probably will in the future. Therefore, it's prudent to also try to disguise the source of your WHOIS lookups.

Sguil has always had the ability to call a user-supplied command to perform WHOIS operations, so here is a simple script you can use to proxy all of Sguil's WHOIS lookups through a third party (Geek Tools, in this case). This should work on any version of Sguil.

#!/bin/sh
#
# Simple script to proxy all whois requests through whois.geektools.com
# to help keep the bad guys from figuring out that we're onto them when
# Sguil looks up a record.
/usr/bin/whois -h whois.geektools.com "$@"

To use this, just set the WHOIS_PATH in your sguil.conf file, like so:

set WHOIS_PATH /home/sguil/bin/sguil-whois.sh
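Assuming you've saved the script to that path, it's worth making it executable and giving it a quick test from the shell before restarting the client:

chmod +x /home/sguil/bin/sguil-whois.sh
/home/sguil/bin/sguil-whois.sh 208.67.222.222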

So now you have it. By implementing DNS and WHOIS proxies in Sguil, you can add an additional layer of protection against bad guys who may be monitoring their systems for signs that you have discovered their attacks.

Thursday, August 23, 2007

Snort 2.8 Beta

Sourcefire just announced the availability of their Snort 2.8 beta code in CVS. According to the announcement, the new code contains several interesting features:


  • Port lists
  • IPv6 support
  • Packet performance monitoring
  • Experimental support for target-based stream and IP frag reassembly
  • Ability to take actions on preprocessor events
  • Detection for TCP session hijacking based on MAC address
  • Unified2 output plugin
  • Improved performance and detection capabilities

I downloaded the code and looked through some of the documentation on the new features, but haven't used any of them yet. From my point of view, the improvements from 2.7 to 2.8 are a lot bigger than those made when upgrading from 2.6 to 2.7.

I'm particularly happy about the idea of port lists. Finally, you can cherry-pick exactly the ports you want to monitor. If you wanted to look at, say, ports 80, 8080 and 8888 in Snort 2.7 and below, you had to write:

var MYPORTS 80:8888

This means to monitor all ports 80 through 8888 inclusive. That's a lot! With Snort 2.8, you can write:

portvar MYPORTS [80,8080,8888]

which matches exactly the three ports I care about and nothing in between. Obviously that's far more precise, and far more efficient.
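One small tip: after converting your old var port definitions to portvar, let Snort parse the config before putting it back on a live sensor. The config path below is just the usual default; adjust to taste.

# Parse the configuration and exit; catches portvar syntax mistakes early
snort -T -c /etc/snort/snort.conf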

IPv6 support is also pretty major, though right now it's only a first pass. You can now specify IPv6 addresses pretty much anywhere you can use an IPv4 address, but several of the preprocessors don't yet support IPv6 and there's no fragmentation reassembly. IMHO, it's probably not quite ready for primetime, though I guess it could be good for simple tasks. I might deploy it on places that should not see any IPv6, just to alert whenever any is seen.

Snort 2.8 also extends the active response functions to events generated by the various preprocessors. You used to only be able to take actions based on signatures detected by the rules engine, but now you could potentially RST connections based on http_inspect or some other preprocessor events. Could be useful, though I personally don't rely much on Snort's built-in active response. I bet this would really shine in a Snort-inline environment, though.

The other items look cool too, but those are the ones I'm most interested in. I'm still trying to find out more about unified2 (specifically, if anything out there reads it yet, since barnyard only handles Unified Classic and isn't really developed anymore anyway).

Have you tried Snort 2.8 yet? If so, let me know what you think!

Friday, August 17, 2007

Sourcefire buys ClamAV

If you haven't seen this article yet, Sourcefire (the company behind Snort) has acquired the open source anti-virus project ClamAV.

I sure didn't see this coming. I wonder if it means that a gateway security product is in the works? In any case, Sourcefire has an excellent track record with the OSS community (recent licensing issues notwithstanding), so I can only see this as a positive for ClamAV and ClamAV users.

Tuesday, July 17, 2007

I'm getting the strangest looks today

In response to a conversation we had last week in #snort-gui, I created a t-shirt store with only a single shirt in it. Thus was the Information Security Workers Union, Local 1337 born.

I got mine last night, and I thought it looked pretty good, so I'm wearing it today. Several people have already asked me if I'm trying to unionize. Heh. I guess you have to be an ISWU member to get the joke.

Friday, July 13, 2007

My input to the ROI spat

Richard Bejtlich wrote a blog post entitled Are the Questions Sound? which has prompted a number of replies. One in particular, Bejtlich and Business: Will It Blend? took him to task on several issues.

I'll say up front that Richard is a friend of mine, and I've always found him to be very focused on how security adds (or detracts) from an organization's business. He's the rare person that possesses both deep technical knowledge and the ability to understand and communicate business drivers. If he stays long enough, he'll probably end up being GE's CISO one day.

Back to the task at hand. The thing that struck me most about Kenneth's posting was the false dichotomy he laid out in his first point:


1. Either the CISO does not know what he’s talking about at all or Richard clearly misinterpreted him

I suggest another possibility: the CISO was intentionally being a dick.

Yes, this is extremely common in the financial world. I have consulted with several large insurance and brokerage firms, and the problem is epidemic, especially on Wall Street. Executives like to push around their vendors, probably to reinforce their ideas about their own power and influence.

I've had this happen to me, and I've seen it happen to others. It's quite possible that the CISO knew what he was talking about, but chose to bully the consultant, just to see how he'd take it. I once had the VP of a Wall Street firm call my cell phone on Christmas Eve (Friday) and demand that I have some consultants on site the following Monday or he'd call my CEO and tell him "what kind of people he has working for him."

We'd already been there for most of a year, and they had already stiffed us on $300k worth of consulting fees, so I just told him not to threaten me, and then gave him the CEO's phone number. He backed down immediately. I later found out that he was in the process of being fired, and was trying to get us involved so he could point fingers. I'm sure that company is just a little better off today without him there.

Kenneth's second point is all about security ROI:

2. Information Security does not have an ROI and that security mechanisms are not business enablers

My friend in the financial risk department read Richard’s statement that “Security does not have an ROI” and he laughed. He commented, “Just let some hackers change some numbers in a banks financial system and you’ll see that security has ROI.”


It happens that I do agree that security can have an ROI, but the scenario given is not an example of that. It's an example of loss prevention and, to a certain extent, business enablement (to enable the bank to survive, which it really wouldn't if any Joe could log in and change account balances at will). You can best see this by examining the underlying requirement:

  • No one should be allowed to alter account balances without proper authorization.

Given that requirement, you can implement the countermeasures in various ways. For example, you could:

  1. Apply an application-layer defense to validate changes before they happen.
  2. Report on suspicious changes that have already occurred, and roll back anything that was not authorized.

Both achieve the same end, and both are used in real world situations. Given the requirement, both are functionally equivalent, but the second method more clearly shows the loss-prevention aspect.

If you'd like an example of how security can be an enabler, go back to my original comment about any Joe changing the database. Even better, you can see how it can become a positive source of revenue by examining Bank of America's SiteKey system. They're clearly selling the security of their online banking as a key differentiator, hoping to drive customers their way. And I'm pretty sure it's working.

Finally, Kenneth's point #3 is a bit inconsistent:

3. Some quotes are just wrong or nonsensical because they take the five-digit accuracy too literally instead of using modeling to understand risk :

* “Assumptions make financial “five digit accuracy” possible.”
o Actually mathematics make five digit accuracy possible: I can assume anything


There's more to this quote, but I'd just like to point out that a "model" is just a short word for "a set of assumptions we believe are close enough to the truth to be useful for predictive purposes."

Overall, I'm really kind of puzzled by the whole ROI debate. I guess I can understand that some people have staked their reputations on one side or the other, but really it's a lot like "VI or EMACS" for the infosec crowd. It's more of a philosophical debate based on which models you are more comfortable with.

Tuesday, July 10, 2007

RIP SysAdmin Magazine

I got my last issue of SysAdmin Magazine last week, though I didn't open it until last night. I was shocked to discover that, according to the editor, this was to be their final issue. I've been a subscriber for the past few years, and have always enjoyed their practical, in-the-trenches approach to system administration topics. I particularly liked that they always did one or two issues on security each year. And now they're gone, just about 3 months after I renewed my subscription. I'll be sorry to see them go.

Searching inside payload data

Almost all of my searches involve IPs and/or port numbers, and Sguil has a lot of built-in support for these types of database queries, making them very easy to deal with. Sometimes, though, you want to search on something a little more difficult.

This morning, for example, I had a specific URL that was used in some PHP injection attack attempts, and I wanted to find only those alerts that had that URL as part of their data payload.

Constructing a query for this is actually pretty easy if you use the HEX() and CONCAT() SQL functions. (Sguil stores the packet payload as a hex-encoded string in the data_payload column, which is why the search term gets hex-encoded too.) If you're using the GUI interface, you only have to construct the WHERE clause, so you can do something like the following:

WHERE start_time >= "2007-07-09"
  AND data.data_payload LIKE
      CONCAT("%", HEX("my.url.or.some.other.string"), "%")

The main problem with this type of query is that the data_payload field is not indexed, so it results in a full table scan. You really need to make sure some other indexed criterion restricts the search. In this case, I used the date to limit the number of rows, but you could use IPs or port numbers as well.
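If you'd rather run the search from the mysql client than from the GUI, the same idea looks something like the sketch below. The table and column names are from memory of the standard Sguil schema (event joined to data on sid/cid, database name sguildb), so double-check them against your own install before relying on it.

mysql -u sguil -p sguildb -e '
    SELECT event.timestamp, INET_NTOA(event.src_ip) AS src,
           INET_NTOA(event.dst_ip) AS dst, event.signature
    FROM event
        JOIN data ON event.sid = data.sid AND event.cid = data.cid
    WHERE event.timestamp >= "2007-07-09"
      AND data.data_payload LIKE
          CONCAT("%", HEX("my.url.or.some.other.string"), "%");'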

Tuesday, July 03, 2007

Tired of all the talk

I read White House council puts cybersecurity in focus this morning, and I have to say: "Enough is enough."

The Federal government needs to stop talking about cybersecurity and start doing cybersecurity. If they're just now putting cybersecurity into focus, where has it been for the past several years?

Basically, the article talks about improving communications between first responders at the Federal, state and local levels, and about providing better cybersecurity guidance. The problem is that this is all just talk and paperwork, like most Federal cybersecurity initiatives. Yes, communication is important, as is guidance, but do you know what actually makes things more secure? People. And money. Neither is a feature commonly seen in Federal cybersecurity initiatives.

Note to all my readers in the White House: The Federal government is too big and the agencies too diverse to effectively push cybersecurity from the top down. Instead of trying to centralize the cybersecurity programs at the Executive level, focus on supporting the agencies by giving them the resources necessary to develop and maintain their own effective security programs. Stop funding them as an afterthought, and get real about how much it costs to hire and train effective security personnel. Recognize that security requires positive actions to make computing safer, not just getting the FISMA reports done on time.

When the government starts doing these things, Federal security will improve. Then you can worry about centralization. Until then, though, it's just a bunch of useless talk.

Wednesday, June 27, 2007

The Right Way to Establish a Culture of Security

After reading this article, my hat is off to Yahoo's Arturo Bejar. Not only does he have the world's coolest job title ("Chief Paranoid Yahoo"), but he's taken some extremely creative measures to help build a pervasive culture of security at the Internet behemoth. I especially like the part about the t-shirts, since they not only give people a reward to strive for, but also serve as free advertising for the program. And the multiple tiers sound like they would really spur competition for those coveted red shirts.

As someone who has been called a raving paranoiac himself, I really appreciated this approach. There are some great lessons in that article.

Tuesday, June 19, 2007

MySQL Database Tuning Tips

I came across a great article on MySQL performance tuning. It's got a few very practical tips for examining the database settings and tweaking them to achieve the best performance.

"What's this got to do with security", you ask? As you know, Sguil stores all of it's alert and network session data in a MySQL backend. If you monitor a bunch of gigabit links for any amount of time, you're going to amass a lot of data.

I try to keep a few months of session data online at any given time, and my database queries have always been kinda slow. I learned to live with it, but after reading this article, I decided to check a few things and see if I could improve my performance, even a little.

I started by examining the query cache to see how many of my database queries resulted in actual db searches, and how many were just quick returns of cached results.


mysql> show status like 'qcache%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 1         |
| Qcache_free_memory      | 134206272 |
| Qcache_hits             | 395       |
| Qcache_inserts          | 248       |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 75        |
| Qcache_queries_in_cache | 2         |
| Qcache_total_blocks     | 6         |
+-------------------------+-----------+
8 rows in set (0.00 sec)

The cache miss rate works out to Qcache_inserts / (Qcache_hits + Qcache_inserts), so in my case about 39% of cacheable queries resulted in actual database searches; the other 61% were served straight from the cache. Almost all of the queries come either from Sguil or from my nightly reports, so this was actually a much better rate than I expected. Notice, though, that I had about 128MB of unused cache memory. My server's been running for quite some time, so it seemed like I was wasting most of that. I had originally set the query_cache_size (in /etc/my.cnf) to 128M. I decided to reduce it to only 28M, reclaiming 100M for other purposes.

Next I wanted to examine the number of open tables. Sguil uses one table per sensor per day to hold session data, and another five tables per sensor per day to hold alert data. That's a lot of tables! But how many are actually in use?

mysql> show status like 'open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 46    |
| Opened_tables | 0     |
+---------------+-------+
2 rows in set (0.00 sec)

So the good news is that not many tables are open at any given time. I actually tried to do several searches through alert and session data over the last few weeks, just to see if I could drive this number up. I could, but not significantly. I had set table_cache to be 5000, but that seemed quite high given this new information, so I set it down to 200. I'm not sure exactly how much (if any) memory I reclaimed that way, but saner settings are always better.

Finally, the big one: index effectiveness. I wanted to know how well Sguil was using the indices on the various tables.

mysql> show status like '%key_read%';
+-------------------+-----------+
| Variable_name     | Value     |
+-------------------+-----------+
| Key_read_requests | 205595231 |
| Key_reads         | 4703977   |
+-------------------+-----------+
2 rows in set (0.00 sec)

By dividing Key_reads by Key_read_requests, I determined that approximately 2% of all requests for index data resulted in a disk read. That seemed OK to me, but the article suggests a target of only 0.1%. Given that, I figured the best thing I could do was reallocate the 100M I reclaimed earlier as extra key space. I did this by changing the key_buffer setting, which is now at a whopping 484M.
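If you want to recheck that ratio later without doing the division by hand, a quick one-liner works. This assumes you can run mysql locally without being prompted for a password (e.g., via a ~/.my.cnf), so adjust the invocation to suit your setup.

mysql -N -e "SHOW STATUS LIKE 'Key_read%'" | awk '
    $1 == "Key_read_requests" { req = $2 }
    $1 == "Key_reads"         { reads = $2 }
    END { if (req > 0) printf("key read miss ratio: %.2f%%\n", 100 * reads / req) }'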

Did all this optimization make any difference? I didn't do any database benchmarking before or after, but my perceived response time while using Sguil is a lot faster. It still takes time to do complex searches, of course, but at least now I can go back through four or six weeks of data without having to leave for lunch. It seems to have made a big difference for me, so why not try it yourself? If you do, post a comment and let us know how it went!

Tuesday, June 12, 2007

I'M IN YR RRs, QUERYING YR HOSTS

Probably the single thing I like best about Sguil is its development philosophy. Most of the features are suggested by working intrusion analysts. Just the other day, we were chatting on IRC (freenode's #snort-gui) about DNS lookups. Specifically, I asked about having the Sguil client modified to give the analyst more control over DNS lookups. I am always concerned that when I ask the GUI to resolve an IP for me, I might be tipping off an attacker who is monitoring the DNS server for their zone. I mentioned that it'd be cool to have the ability to look up internal IPs in my own DNS servers but forward all external IPs to some third party (e.g., OpenDNS) in order to mask the true source of the queries.

Based on some of the discussion, I decided it'd be very handy to have a lightweight DNS proxy server that could use a few simple rules to determine which servers to query. I had some time this afternoon, so I hacked one up in Perl. It turns out that the same feature has now made it into the CVS version of Sguil, but I thought that my proxy would still be of use for other purposes. So without further ado, I hereby post it for your downloading pleasure.

You'll need to make a few edits to the script to tailor it to your own environment. First, set up the @LOCAL_NETS list to contain a list of regexps for your local network. All incoming queries will be compared against these expressions, and any that match will be considered "local" addresses. Queries that don't match anything in the list will be considered "public" queries. Note that reverse lookups require the "in-addr.arpa" address format. Here's a sample:


@LOCAL_NETS = ( "\.168\.192\.in-addr\.arpa",
                "\.10\.in-addr\.arpa",
                "\.mydomain\.com" );

You'll also need to set the $LOCAL_DNS and $PUBLIC_DNS strings to space delimited lists of DNS servers to use for each type of query, like so:

$LOCAL_DNS = "10.10.10.2 10.10.10.3";          # My DNS Servers
$PUBLIC_DNS = "208.67.222.222 208.67.220.220"; # OpenDNS Servers

Once you've done this, just run the script (probably in the background) and it will start listening for DNS requests on 127.0.0.1, port 53, both UDP and TCP. Then configure your system's /etc/resolv.conf to use the local proxy and you're all set.
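Well, almost: a quick smoke test with dig will tell you whether queries are being routed the way you expect (the addresses here are the same examples used above):

# This one matches @LOCAL_NETS, so it should be answered by the internal servers
dig @127.0.0.1 -x 10.10.10.2

# This one doesn't match, so it should be forwarded out to OpenDNS
dig @127.0.0.1 www.example.com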

If you want to open it up a little and provide DNS proxy service for other systems on your network, you can do this by changing the LocalAddr parameter in the call to Nameserver->new(). Set the IP to your system's actual address and anyone who can connect to your port 53 will be able to resolve through you. I haven't really done any security testing on that, so it probably has some nasty remotely exploitable bugs. You have been warned!

Anyway, I hope you find it useful. It only does simple forward and reverse lookups (A and PTR records), so it will fail if you ask for any other record types. If those are really necessary, though, let me know and I'll consider adding them. If you decide to use this at your site, please drop me a comment to let me know (if you can).

Thursday, May 10, 2007

Forensics in the Enterprise

I had the opportunity last night to attend a demo of Guidance Software's EnCase Enterprise product. I use the standalone version of their product, EnCase Forensic already, and the Enterprise edition looks like an interesting extension.

EnCase Forensic runs on a single Windows workstation and allows you to image suspect hard drives and conduct detailed analysis on their contents. It's got a number of handy features built in, like the ability to do keyword searches, extract web viewing history and identify email messages. Pretty nice, and it makes most common forensic tasks a breeze.

The Enterprise edition builds on that by decoupling the data storage from the analysis workstation, providing a SAFE (I forget what this acronym stands for), which is basically a data warehouse for disk images and other extracted evidence. That's OK, I guess, but the really interesting part is the "servlet", their word for an agent that you install on the systems you manage.

The idea is that the servlet is installed on all the computers you manage, be they Windows, Linux, OS X, Solaris or whatever. They claim a very small footprint, about 400k or so. Using the analysis workstation, you can direct the servlets to do things like capture snapshots of volatile memory or process data, browse for specific information or even image the disk remotely, all supposedly without disrupting the user.

As a forensic analysis tool, this makes a lot of sense to me, but there's something I find even more interesting. The Enterprise edition is apparently a platform on which Guidance is building additional tools for Incident Response and Information Assurance. For example, their Automated Incident Response Suite (AIRS) ties in with Snort, ArcSight and other security tools and can use the agent to trigger data collection based on security events on your network.

For example, if Snort detects an exploit against one of your web servers, AIRS can automatically initiate collection of memory and process data every thirty seconds for the next few minutes. When an analyst comes by later to evaluate the Snort alert, they can view the snapshots to see if the exploit was successful and to help them track the attacker's actions on the system.

I have to admit that this falls squarely into the category of "I'll believe it when I see it working on my network", but I'm intrigued. I think this approach shows promise, and is a natural complement to the Network Security Monitoring model. Being able to tie host-based data into the NSM mechanism could be quite useful.

Other interesting products that use the servlet are their eDiscovery and their Information Assurance Suite. eDiscovery is pretty much just what it sounds like: it helps you discover specific information on your network, perhaps in response to a subpoena. The IA Suite is similar, but also includes configuration control features and a module designed to help government and military agencies recover from classified data spillage.

Ok, by this point it probably sounds like I'm shilling for the vendor, but really I'm just interested in some of the ideas behind their products, specifically AIRS. I hope to learn more soon, and maybe to be able to experiment with some of these features.

Tuesday, May 08, 2007

Helper script for email monitoring

During an investigation, it may sometimes be necessary to monitor email traffic to and from a specific address, or to capture emails which contain specific keywords or expressions. In the past, I've usually used Dug Song's Dsniff package for this, specifically the mailsnarf tool.

Mailsnarf sniffs a network interface looking for connections using the SMTP protocol. When it finds one, it extracts the email message and outputs it to stdout in the Unix mbox format. You can filter the traffic with a BPF filter to restrict the number of packets it needs to process, and you can provide a regular expression to make sure it only dumps emails you're interested in. All in all, it's a great tool.
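For reference, a typical invocation looks something like this; the interface, address and output path are just placeholders:

# Watch eth1 for SMTP traffic mentioning a particular address and append
# any matching messages, in mbox format, to a file
mailsnarf -i eth1 'suspect@example\.com' 'tcp port 25' >> /var/mail-mon/suspect.mbox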

As a monitoring solution, though, mailsnarf is a bit lacking. If it dies, your monitoring is over, and if you have more than one thing to monitor, you either intermingle them into the same mbox or try to manage multiple mailsnarf instances. It can be done, but it's fairly clumsy.

That's why I've decided to release my home-brewed wrapper script, mailmon. It provides a config file so that multiple email monitoring sessions can be configured and managed at once. It also checks up on the individual mailsnarf processes under its control, restarting them automatically if they die.

So far I've tested it on RHEL4, but it should work anywhere you have perl and dsniff available. The startup script is Linux-specific, so *BSD users may have to provide their own, but it's really pretty simple.

You can download mailmon from the downloads page. If you give it a try, I'd love to hear about your experiences.

Update 2007-05-08 09:58: I can't believe I forgot to mention this, but the mailmon package also contains a patch to mailsnarf. I found that some mail servers can transition into status 354 ("send me the DATA") without ever having to use the DATA command. Mailsnarf wasn't picking up on this, so was missing a lot of the messages. My patch fixes this, and can be used independently of the rest of the monitoring package. In fact, I recommend the patch to all mailsnarf users.

Friday, April 27, 2007

Log Management Summit Wrap-Up

As I mentioned before, I had the opportunity to speak about OSSEC at the SANS Log Management Summit 2007.

In case you've never been to one, these SANS summits are multi-day events filled with short user-generated case studies. In this case, the log summit was scheduled alongside the Mobile Encryption Summit, and attendees were free to pop back and forth between the two, which I did.

The conference opened Monday with a keynote by SANS' CEO Stephen Northcutt. Somehow, despite my four SANS certs, three stints as a local mentor and various other dealings with SANS, I'd never heard him talk. He's quite an engaging speaker. This time around, he was talking about SANS' expert predictions for near-future trends in cybersecurity. There were no shockers in the presentation, but it was a good overview of where smart people think things are headed in the next 12 months.

My favorite presentation on Monday, though, was Chris Brenton's talk, entitled Compliance Reporting - The Top Five most important Reports and Why. As you know, I've been doing a lot of work recently on NSM reports, and although log reporting isn't quite the same, the types of things that an analyst looks for are very similar. I got some great ideas which may show up in my Sguil reports soon.

On Tuesday morning, I gave my own presentation, "How to Save $45k (and Look Great Doing it)." This is the story of how we bought a commercial SEM product, only to find that it didn't really do what we wanted, and replaced it with the free OSSEC. Bad on us for not having our ducks in a row at first, I know. To be totally honest, it wasn't so easy to get up in front of 100 people and say, "You know, we made this really expensive mistake", but sometimes you have to sacrifice for the greater good. ;-)

My favorite talk on Tuesday, though, was Mike Poor's Network Early Warning Systems: Mining Better Quality Data from Your Logging Systems. In this talk, he presented a bunch of free tools to help you keep an eye out for storms on the horizon (to use the ISC's metaphor). Some of the tools were more like websites (Dshield.org, for example), and some were software. He even provided several detailed slides showing OSSEC alerts, which was a nice complement to my own presentation.

The level of vendor spending on an event is in direct proportion to the number of security managers and CSOs in the crowd. Judging by the vendor-hosted lunches and hospitality suites, there were a bunch of them in San Jose this week. I attended a couple of vendor lunches, and I have to say that I was quite impressed with Dr. Anton Chuvakin's Brewing up the Best for Your Log Management Approach on Tuesday. He's the Director of Product Management at LogLogic, so I was expecting a bit more of a marketing pitch, but in reality he delivered a very balanced and well-thought-out presentation exploring the pros and cons of buying a log management system off-the-shelf or creating your own with open source or custom-developed tools. I think it was the most popular of all the lunch sessions that day, too. I came a few minutes late and had to sit on the floor for a while before additional tables and chairs arrived!

Overall, I think the summit was a good experience for most of the attendees. Many of the talks were more security-management oriented, and I cannot tell you how many times speakers said completely obvious things like, "You need to get buy-in!" or "Compliance issues can kill you!". Still, there was real value in having the ability to sit down with someone who's already gone through this process and learn from their successes (or in my case, mistakes).

SANS has promised to post the slides from each of the talks online. Once I find them, I'll link to them here. I'm not sure if that will include Dr. Chuvakin's talk or not, but I hope that will be available at some point as well.

Update 2007-04-30 09:33: SANS has posted the presentations on their site. This bundle includes all the slides that were printed in the conference notebook, including the one from the famous Mr. Blank Tab.

Update 2007-04-30 10:35: Daniel Cid points out that the slides for the concurrent Encryption Summit are also available.

Wednesday, April 18, 2007

Generating Sguil Reports

I've been running Sguil for a few years now, and while it's great for interactive analyst use, one of its main drawbacks is the lack of a sophisticated reporting tool. The database of alerts and session information is Sguil's biggest asset, and there is a lot of information lurking there, just waiting for the right query to come along and bring it to light. Sguil has a few rudimentary reports built in, but it lacks the ability to create charts and graphs, to perform complex pre- or post-processing, or to schedule reports to be generated and distributed automatically.

To be honest, many Sguil analysts feel the need for more sophisticated reporting. Paul Halliday's excellent Squert package fills part of this void, providing a nice LAMP platform for interactive reports based on Sguil alert information. I use it, and it's great for providing some on-the-fly exploration of my recent alerts.

I wanted something a bit more flexible, though. A reporting package that could access anything in the database, not just alerts, and allow users to generate and share their own reports with other Sguil analysts. I have recently been doing a lot of work with the BIRT package, an open source reporting platform built on Tomcat and Eclipse.

BIRT has a lot of nice features, including the ability to provide sophisticated charting and graphing. The report design files can be distributed to other analysts who can then load them into their own BIRT servers and start generating new types of reports. It even separates the reporting engine from the output format, so the same report can generate HTML, PDF, DOC or many other types of output. Best of all, you can totally automate the reporting process and just have them show up in your inbox each morning, ready for your perusal.

If this all sounds good to you, check out a sample report, then read my Sguil Reports with BIRT HOWTO for more information.

If you decide to try this, please post a comment. I'd love to hear your thoughts, experiences and suggestions.

Big thanks go to John Ward for getting me started with BIRT and helping me through some of the tricky parts.

Thursday, April 12, 2007

SANS Log Management Summit

I've been invited to speak at the SANS Log Management Summit 2007 later this month in San Jose. I'll be presenting a short user case study on OSSEC. If you're there, find me and say hi! Maybe we can grab some beers or something.

Update 2007-04-19 15:55: According to the agenda, I'll be speaking on Tuesday morning as part of a panel entitled, "Practical Solutions for Implementing Log Management".

Cool network visualization

I highly recommend that you take a look at this. You just need to see it to believe it. I find their "network activity as videogame" metaphor oddly appealing. It's almost like it's translating your network activity into the story of an epic space battle. Since humans are conditioned by long experience to assimilate and relate well to stories, this could be quite an effective approach. Joseph Campbell as NSM analyst.

I'd really like to be able to run this against my own network to see how effective it would be in every day use. It's a bit hard to tell based on the clip they've put out.

Monday, March 26, 2007

"Online Ads Unsafe". No Duh.

I've written about this before, so it's not exactly news, but Computerworld is reporting that nearly 80% of all malware that reaches your browser arrives via ads embedded in web sites, rather than in the content of the sites themselves.

This makes perfect sense to me, since most of the ad networks are pretty incestuous, and an ad placed on one network can easily be distributed by other advertising partners. With sometimes several layers of abstraction between the advertiser and the end user, it can be fairly difficult to trace a malicious ad back to its source.

The article is based on Finjan Software's latest Web Security Trends Report. You can download the complete report here (registration required), or see the summarized press release here.

Thursday, March 22, 2007

Fun Crypto Toy

I took some time off earlier this month to have a family vacation in London. Overall, it was pretty fun, but we wanted to get out of the city for a bit, so we took the train to Bletchley Park. BP is where the British government ran their super-secret cryptography operations against the German ciphers during WWII.

Of course, the most famous of these ciphers is the Enigma machine, of which there were actually several different types in use in the various branches of the German military. I don't want to bore you with all the details of Enigma, especially since they're probably familiar to many of you. What I wanted to tell you about was this fun toy I bought, the Pocket Enigma cipher machine.

Housed in a CD jewel case and made entirely of paper, it emulates a simple one-rotor system (with a choice of two possible rotors). There's no plugboard or anything complex like that, so it's really easy to understand, and it's a lot of fun to send seekrit messages to your buddies, in a flashlight-under-the-covers kind of way.

Here's my contribution to the fun. See if you can decode the following message, enciphered on my very own Pocket Enigma. The first person to post a correct solution in the comments wins... umm... the people's ovation and fame forever!


II B
X VJFEI OJAXQ PBUXQ DTFVH GFAKQ UQGES IOGZW

Wednesday, March 07, 2007

More on Tagging

You may remember my previous post about IP tagging, in which I described my idea for a Web 2.0-ish tagging system for NSM analysts. Well, geek00L pointed out that I'm not the first person to think of this idea. It seems that a couple of bright guys at Georgia Tech had the idea last year, and implemented it in the form of a tool called FlowTag. I recommend their paper for some real-world examples of how a tagging system can enhance analyst productivity.

The dogs must be crazy...

My friend Shirkdog offers this post about doing NSM without the backend database that solutions like Sguil offer. Personally, I'm not a fan of using grep for my core analysis workflow, but I am a fan of doing whatever gets the job done, within the limits of the resources available to you.

Thursday, February 08, 2007

IP Tagging

I just read Godfadda's blog entry about a prototype system for tagging IP addresses in Splunk. His system analyzes IP addresses as they're about to be inserted into his log/event tracking system and cross-references them with several databases to generate tags that add context to the event.

Right now, his system deals with geography ("which branch office is this in?") and system role ("Is this an admin's workstation, a server or a public PC?"). I really like this idea, as it provides valuable context when evaluating events.

In fact, I've done a lot of thinking recently about how to add more context to NSM information, but it wasn't until I read this article that I realized that what I was looking for would probably best be implemented as a tagging system. What if Sguil were to incorporate tagging? Well, we'd first have to figure out what to tag. I'd like to be able to tag several types of objects:


  1. IP addresses
  2. Ports
  3. Events
  4. Session records
  5. Packet data for specific flows (probably would treat the pcap file and any generated transcript as a single taggable object)

As for the tags themselves, the system should automatically generate tags based on some criteria, just as in Godfadda's system. Maybe it would automatically tag everything in my exposed web server network as internet,webserver, for example, or maybe it could correlate my own IPs to an asset tracking system to identify their function and/or location.

But here's the part I think would be even more useful: I'd like to have the analyst be able to tag things on-the-fly, and later search on those tags to find related information. For example, if someone has broken into my web server, I could tag the original IDS alert(s) with an incident or case number (e.g., "#287973"). Perhaps this would also automatically tag the attacker's IP with the same number. As I continue to research the incident, I will probably perform SANCP searches, examine the full packet data and generate transcripts. I could tag each of the interesting events with the same ID. At the end of the investigation, I could just do a search for all objects tagged with #287973, order them by date & time, and presto! The technical portion of my report is almost written for me! This is quite similar to other forensic analysis tools (EnCase, for example) that allow you to "bookmark" interesting pieces of information and generate the report from the bookmark list.

To go a bit further, what if the same attacking IP came back six months after the above incident, this time with an FTP buffer overflow exploit? You might not remember the address as the origin of a previous incident, especially if you have a large operation and the original incident was logged by a different analyst. However, if the console says that the address was tagged as being part of a specific incident, you'll know right away to treat it with more suspicion than you might otherwise have done.

To be honest, these are just some of my first ideas on the power of tags; the real power could come as we consider more elaborate scenarios. What if you could tag any item more than once? By associating multiple incident tags with an item, you just might uncover relationships that you didn't realize existed. It doesn't take much to imagine a scenario where you can build a chain of related tags that implies an association between two very different things, perhaps by creating a series of Kevin Bacon links between addresses and/or events.

So, will any of this show up in Sguil? Probably not any time soon. If I can convince Bamm that I'm not insane, maybe it'll find its way onto a feature wish list. Or maybe another project or product will beat us to the punch (it is the era of Web 2.0 after all, and tagging is like breathing to some folks nowadays). But I do fantasize about it, and I live in hope.

NSMWiki update

I spent some time this morning reconfiguring the NSMWiki to provide secure login and password management pages. So first off, now you know that.

The real point of this post, though, is that I had a hard time finding good examples that applied to my specific situation, so I'm documenting the process here. I hope this will make things easier for the next person.

Here's what I started with:


  • A shared webhost (Linux, Apache & mod_rewrite)
  • An SSL and a non-SSL server serving identical content
  • MediaWiki 1.9.0

I wanted to let mod_rewrite do the hard work, so I did some web searches on the topic. Unfortunately, I always came up with pages that had the same two problems:

  1. They assumed that I had control of the server-wide configuration files for Apache, but I use a shared web host, so I needed to use a .htaccess file to set the rewrite settings on a per-directory basis.
  2. The examples only protected the login page, not the other pages that deal with password information.

The first problem is that mod_rewrite sees different URLs when it's configured on a per-directory basis than when it's configured for the entire server. The change is small, but important. In a server-wide configuration, URLs begin with the "/" character. When run per-directory, they don't. Since I initially started from online examples, this tripped me up until I figured it out.

The second problem is that none of the examples cared about the other MediaWiki password management options, just the login page. Oops!

Fortunately, the solution to both problems was easy, with a little tinkering. Here's a .htaccess file that will do the right thing. Drop it in your MediaWiki directory, edit the RewriteRules to reflect the correct path in your wiki URLs, and you should be good to go.

RewriteEngine on
# Any wiki page that uses Special:Userlogin (account login and creation),
# Special:Resetpass (password reset) or Special:Preferences (where normal
# password change operations happen, among other things) should get
# redirected to the SSL server. Note the check to make sure we're not ALREADY
# on the SSL server, to avoid an infinite redirection loop.
RewriteCond %{QUERY_STRING} ^title=Special:Userlogin [OR]
RewriteCond %{QUERY_STRING} ^title=Special:Resetpass [OR]
RewriteCond %{QUERY_STRING} ^title=Special:Preferences
RewriteCond %{SERVER_PORT} !443
RewriteRule nsmwiki/index.php https://%{SERVER_NAME}/nsmwiki/index.php?%{QUERY_STRING} [L,R]

# Any Wiki page that's NOT one of the specific SSL pages should not be using
# the SSL server. This rule redirects everything else on the SSL server
# back to the non-SSL server.
RewriteCond %{QUERY_STRING} !^title=Special:Userlogin
RewriteCond %{QUERY_STRING} !^title=Special:Resetpass
RewriteCond %{QUERY_STRING} !^title=Special:Preferences
RewriteCond %{SERVER_PORT} 443
RewriteRule nsmwiki/index.php http://%{SERVER_NAME}/nsmwiki/index.php?%{QUERY_STRING} [L,R]
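
After dropping the .htaccess file in place, you can verify the redirects without firing up a browser. Something like this (substitute your own hostname) should show a 302 with an https:// Location header:

curl -sI 'http://your.wiki.host/nsmwiki/index.php?title=Special:Userlogin' | egrep -i '^(HTTP|Location)'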

As I said, this was my first foray into mod_rewrite, and I'm pretty happy with the functionality it gives me. I know there are some gurus out there who are much more familiar with mod_rewrite and/or MediaWiki, though, so if you can suggest any improvements, please leave a comment.

Update 2007-06-17 17:25: I upgraded the wiki software to 1.10.0 the other day and today I got an email to tell me that no one could log in. I did a little poking around, and sure enough, they couldn't. It turns out that you should also set the following line in your ''LocalSettings.php'' file:

$wgCookieSecure = false;

Because the login page was using the SSL server, MediaWiki was issuing "secure" cookies (i.e., cookies that can only be sent via SSL). Only the login and a few other pages use SSL, though, so most of the rest of the wiki session simply wasn't seeing the cookies. The setting existed in the old software, but I guess it wasn't being used.

Monday, January 22, 2007

In which I tell you where to stick it...

You think you're funny, don't you?

I hear this a lot, especially from my family, but now I can honestly say it's true. I'm funny.

What, you don't believe me? F-Secure says I am, so it must be true! MUST!

Anyway, thanks to everyone who voted for my contribution. I can't wait to get my stickers!

Thursday, January 11, 2007

A new location for the new year

It's 2007, and since I just love to tinker with things that aren't broken, I've decided to move this blog to a new address, blog.vorant.com. The old blog and feed URLs will still work, they'll just redirect to the new location. Still, please let me know if you have any trouble with it, and thanks for reading!

Wednesday, January 10, 2007

I know the feeling...

Sometimes, when I bust out with just the right one liner to find the critical data, I feel like this.

Tuesday, January 09, 2007

The Security Journal

The Security Journal is an online quarterly newsletter put out by Security Horizon. Some of the topics in recent issues include secure programming, HDES, wireless video security and the DefCon CTF competition. Check it out!

Wednesday, January 03, 2007

Locate new phishing sites before they're used against you

I just read this article that shows one way to locate new potential phishing sites by tracking domain registrations. Ok, so it's not a very high-tech way, but it's probably effective (except for the sites that just try to use IP addresses).

The service listed in the article requires a subscription fee. Does anyone know of any free equivalents?