Friday, December 21, 2007

Heading home for the holidays?

Take a look at PortableApps.com, a repository of free software that's modified to run directly from a USB stick. Not only will you find a nice anti-virus scanner to help you tune up your relatives' PCs (ClamAV), but you'll also get Firefox, Thunderbird, GAIM and other useful applications to help you keep connected.

Really, I just want everyone to run AV on their family computers. I just put that stuff about Firefox and Thunderbird in to tempt you into downloading the package. Heh.

Monday, November 12, 2007

Holy Grail Found

...or not. Who knows? This article, published by the University of Wisconsin-Madison, claims that UW-Madison's Paul Barford has developed technology (called "Nemean") to automatically identify botnet traffic. So far, so good. To be honest, the industry could really use this kind of solution.

Here's the part I have trouble with, though:

The Achilles’ heel of current commercial technology is the number of false positives they generate, Barford says. Hackers have become so adept at disguising malicious traffic to look benign that security systems now generate literally thousands of false positives, which Nemean virtually eliminates.

In a test comparing Nemean against a current technology on the market, both had a high detection rate of malicious signatures — 99.9 percent for Nemean and 99.7 for the comparison technology. However, Nemean had zero false positives, compared to 88,000 generated by the other technology.


I'm not even sure I know where to start here. These numbers sound quite impressive, but without knowing much more about how the tests were conducted, they really don't mean very much. First of all, how many unique types of botnets were tested, and what were they? Also, who chose the test cases? I can achieve a 99.9% detection rate with 0% false positives too, if I control both the software to be tested and the selection of the test cases.

Next, I wonder what specific technology they tested against, and whether or not they tuned it properly for their test environment. I suspect not, as 88,000 false positives is quite high. I doubt they had 88,000 unique botnet samples, either, so I'm sure there are multiple alerts for each, which can easily be tuned down.

Finally, and most importantly, even if you do achieve near perfect results on a selection of current botnets, this is likely to be only temporary. The Bad Guys are virtually guaranteed to step up their game too.

To be fair, this is addressed in the article:

While Barford has high hopes for Nemean, he says Internet security is a continuous process and there will never be a single cure-all to the problem.

“This is an arms race and we’re always one step behind,” he says. “We have to cover all the vulnerabilities. The bad guys only have to find one.”


At least it sounds as though Dr. Barford does have an idea of the scope of the challenge. Perhaps there is more to this project than was presented in this fluff article. I really would like to hope so, but I'm a bit skeptical based only on the information presented here.

Saturday, November 10, 2007

File under: Things You Wish You Didn't Know

After reading this, I may never be able to attend a certain security convention ever again.

Thursday, November 01, 2007

NSMWiki in print again

This month's ISSA Journal has another article by Russ McRee on NSM topics. You may remember that I've blogged about Russ' articles before. This time, he's writing about Argus, an excellent suite of tools for implementing distributed collection of network flow information. It also happens that he mentions NSMWiki, which I maintain for the Sguil project.

If you're not an ISSA member, you can read Russ' article here.

Thursday, October 25, 2007

Sguil training: is there demand?

Ok, so I know that there are a fair number of Sguil users out there now, and more coming every day. Some of us have kicked around the idea of providing some Sguil training, though we never seemed to have the critical mass of potential students. Maybe the time has come to consider this again?

What I'd like to know is this: if there were a one day class that covered Sguil administration (installation, troubleshooting and maintenance) offered in the Hampton Roads, VA area (Norfolk, VA Beach, Williamsburg), do you think you would be interested in attending?

If you would be interested in attending, how much would you be willing to pay for such a class?

Would your answer change at all if the class also taught you how to actually analyze events and research incidents with Sguil, possibly as a second day?

I'm not promising to run a class, but I've been interested in doing so for quite some time, and if I get enough interest, it's certainly a possibility.

Please email me or leave a comment here with your thoughts. Thanks!

Tuesday, October 02, 2007

Ok, there's something wrong here...

You may have seen this article about a survey McAfee conducted in conjunction with the National Cyber Security Alliance. In brief, they polled a few hundred random Americans and asked them about the security software they were using on their PCs, then followed up to see if the actual configuration matched what the users thought. No surprise that they found only something like 50% of the people had up-to-date AV software, had turned on their firewall or were running any sort of anti-spyware software.

The numbers sounded reasonable in this article, but then I read the following sentence:

But when pollsters asked to remotely scan the respondents' computers, the story turned out to be very different.

Just who exactly would allow strangers to "remotely scan" their computers to verify their security settings? Hmm... Oh! I know! Perhaps the very people who are least likely to already practice good security measures!

Seriously, I'm no statistician or pollster, but this methodology sounds fishy. It looks like a self-selected sample (the group of all people who lack the basic security skills to keep strangers out of their systems in the first place). I'd love to know if this was somehow accounted for in their methodology, but until then, I don't think these numbers are useful at all.

Update: 2007-10-02 13:34: BTW, I forgot to make my other point, and that is that the pollsters seem to have used exactly the same sort of techniques that we try to condition our users against, namely social engineering attacks. So that lends further credence to my belief that they probably ended up doing a survey of those who were already the worst at security.

Sguil covered in Information Security Magazine

Richard Bejtlich points out that the October issue of Information Security Magazine has an article by Russ McRee, entitled Putting Snort to Work. The article is about Knoppix-NSM, a Linux LiveCD designed for easy monitoring. Knoppix-NSM includes a preconfigured Sguil server and sensor, and Russ has a lot of nice things to say about it.

It's really good to see Sguil in some mainstream security press. VictorJ's modsec2sguil custom agent and our very own NSMWiki even get mentioned, so I know he's done some homework.

Thursday, September 27, 2007

Snort 2.8 is out

Grab it from the Snort download page, and read my previous post about some of the new features.

I haven't tried it myself yet, but soon. Honest.

Thursday, September 13, 2007

I call shenanigans!

The Telegraph reported yesterday that aerospace giant EADS has developed "the world's first hacker-proof encryption technology for the internet" [sic].

Amazing! There are at least three separate errors in just that one 10-word sentence fragment. I tried to find some more info on the company's website, but all I could find was this year-old press release on a product called ECTOCRYP. I assume this to be the same product, because how many encryption protocols would this company develop? Let's take a look at some of the problems with this company's statement.

First off, as everyone reading this post probably already knows, there's no such thing as "hacker-proof". The definition of a hacker is someone who can change the rules of the game in his or her favor, so in order to be "hacker-proof" you've got to somehow deny them this opportunity. Most knowledgeable attackers realize that attacking the encryption algorithm itself is usually unnecessary. For example, in order to be truly "hacker-proof", you must not only have a robust algorithm, but also perfect key management (both protocols and implementation) and secure endpoints on both sides of the communication. This last, by the way, also implies users and administrators who never make any mistakes in configuration or use of the system.

The second problem is that "world's first" thing. Probably the most commonly deployed encryption technology today is SSL, and let me tell you, it's pretty much "hacker-proof" if you are just trying to cryptanalyze a captured session, which, as I just mentioned, you probably wouldn't do.

Along the same lines, here's a quote from the article:

At the heart of the system is the lightning speed with which the "keys" needed to enter the computer systems can be scrambled and re-formatted. Just when a hacker thinks he or she has broken the code, the code changes. "There is nothing to compare with it," said Mr [Gordon] Duncan [the company's government sales manager].

So their big innovation is that they change the keys frequently? It's true, there's nothing like it... except for all the things that already do that! TKIP has been doing this for years ("T" is for "Temporal", and that's good enough for me). Even SSL can do this (via its key renegotiation protocol), though this is admittedly rare since most SSL sessions are too short.

Finally, I'm impressed that they've developed an encryption technology "for the internet". IPv6 or IPSEC might be "for the internet" in the sense that they're tied directly to network protocols used to communicate over the Internet, but that doesn't seem to be the case with ECTOCRYP. It's probably just a stream or block cipher, which could be used equally well on, say, a dedicated serial line. I think what they mean to say is that it's a protocol used for protecting information in transit, as opposed to encrypting files at rest on the disk. That doesn't mean that it's designed "for the internet".

All of this is beside the main point, however, which is that I couldn't find any technical details on the encryption algorithm itself. I'm no professional cryptographer, so perhaps it has been published in a trade journal somewhere that I can't find in Google, but compared to other widely available encryption algorithms, I doubt it has undergone much peer review. This in itself makes the system suspect.

By the way, if you do Google "ECTOCRYP" you'll find this craptastic marketing video.

Tuesday, September 11, 2007

The embassy password thing

I wasn't going to comment on this, but we were discussing it a bit on #snort-gui today, and it made me wonder...

People everywhere are talking about how the embassies are foolish for using Tor to provide "secure" remote access to their systems. Ok, we can all agree that Tor isn't really suitable for that. But as someone on #snort-gui pointed out, how do we know it was an official embassy-supported application, and not just some power users who decided to start using Tor on their own? We don't! At least, I couldn't find any confirmation that the network admins were pushing Tor on their users.

But really, this got me thinking some more this afternoon. How do we know it was the embassy users who were using Tor? If I were a hacker who had somehow gained access to accounts in the embassies, I might use Tor to disguise my origin and wouldn't care if the passwords were exposed. I think there are any number of actors who could be behind this, so I don't want to name any names, but Certainly, Hacking Insecure Networks Around the world is a specialty of some...

Update 2007-09-11 16:02: giovani from #snort-gui reminded me that he was the one who brought up the idea of the power users. And he is unnervingly happy about being referenced here, even indirectly. Thanks, giovani!

Would you notice this?

Just a quick thought I had. If your organization is using virtualization to pack many VMs onto your existing server platforms (as many sites are trying to do these days), would you necessarily notice if an additional VM popped up?

It turns out that VMs can be very small (the smallest VMware image I found with a quick Google search was 10MB, which could fit in my /tmp partition). Many, perhaps all, of the VM packages provide command-line level access to manage the running guest systems. VMware even provides a Perl API for this.

If I were an attacker who managed to get access to your VM system, could I insert my own VM image and make it run? If so, I could potentially have my own custom hacking environment, with root privileges and whatever software I needed, without creating too many files or new processes on the host OS. Unless you're looking carefully at every file on the system, or watching what VMs are running, would you notice?
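
One crude mitigation would be to periodically compare the host's idea of which VMs are registered against a list you maintain by hand. Here's a minimal sketch, assuming a VMware Server/GSX-style host where the vmware-cmd utility is available; the /etc/expected-vms.txt allowlist is something you'd have to create yourself (one .vmx path per line, kept sorted):

#!/bin/sh
#
# Rough check for unexpected VMs on a VMware Server/GSX host.
# Assumes vmware-cmd is installed and /etc/expected-vms.txt is a
# hand-maintained, sorted list of the .vmx paths you expect to see.
vmware-cmd -l | sort > /tmp/registered-vms.txt
if ! diff -u /etc/expected-vms.txt /tmp/registered-vms.txt; then
    echo "WARNING: the registered VM list has changed!" >&2
    exit 1
fi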

Would anyone with real-world virtual server experience care to share their thoughts?

Find Storm with Sguil (and a little ingenuity)

My friend nr has started a new security blog. His first real post is about using Sguil to detect Storm worm infections. Welcome to the blog world, nr!

Thursday, September 06, 2007

Cloaking your investigative activities in Sguil

I've written before on disguising your outbound DNS queries. In short, even looking up an IP to find the hostname might alert an attacker that you're on to them if they control their own DNS server. I briefly described how to create a simple proxy DNS server that could send all your queries through a third-party, like OpenDNS, which should make it harder for the bad guys to figure out just who is checking them out.

I'm happy to say the new Sguil 0.7.0 client now incorporates third-party DNS server support natively. In sguil.conf, you simply set the EXT_DNS, EXT_DNS_SERVER and HOME_NET variables, like so:


# Configure optional external DNS here. An external DNS can be used as a
# way to prevent data leakage. Some users would prefer to use anonymous
# DNS as a way to keep potential malicious sources from knowing who is
# interested in their activities.
#
# Enable Ext DNS
set EXT_DNS 1
# Define the external nameserver to use. OpenDNS lists 208.67.222.222 and 208.67.220.220
set EXT_DNS_SERVER 208.67.222.222
# Define a list of space separated networks (xxx.xxx.xxx.xxx/yy) that you want
# to use the OS's resolution for.
set HOME_NET "192.168.1.0/24 10.0.0.0/8"

In this example, I've configured Sguil to use one of the OpenDNS name servers (208.67.222.222) to look up all hosts except those on my local LAN (addresses in either the 192.168.1.0/24 or the 10.0.0.0/8 range).
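
If you want to double-check that the external resolver is actually reachable from your analyst workstation before you start relying on it, a quick manual query against the same OpenDNS server will do (the hostname here is just a placeholder):

# Ask OpenDNS directly; an answer here means the EXT_DNS lookups should work
dig @208.67.222.222 www.example.com +short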

So that takes care of DNS, but what about WHOIS? Ok, so maybe it's a bit less likely that the attacker has also compromised a WHOIS server, but less likely doesn't mean that it hasn't happened. It probably has, and it probably will in the future. Therefore, it's prudent to also try to disguise the source of your WHOIS lookups.

Sguil has always had the ability to call a user-supplied command to perform WHOIS operations, so here is a simple script you can use to proxy all of Sguil's WHOIS lookups through a third party (Geek Tools, in this case). This should work on any version of Sguil.

#!/bin/sh
#
# Simple script to proxy all whois requests through whois.geektools.com
# to help keep the bad guys from figuring out that we're onto them when
# Sguil looks up a record.
/usr/bin/whois -h whois.geektools.com "$@"

To use this, just set the WHOIS_PATH in your sguil.conf file, like so:

set WHOIS_PATH /home/sguil/bin/sguil-whois.sh
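
Before wiring the script into Sguil, it's worth making it executable and running it by hand once, just to confirm the lookups really do go out through Geek Tools (any IP or domain will do as a test):

chmod +x /home/sguil/bin/sguil-whois.sh
# The output should come back via the Geek Tools proxy, not your usual whois server
/home/sguil/bin/sguil-whois.sh 208.67.222.222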

So now you have it. By implementing DNS and WHOIS proxies in Sguil, you can add an additional layer of protection against bad guys who may be monitoring their systems for signs that you have discovered their attacks.

Thursday, August 23, 2007

Snort 2.8 Beta

Sourcefire just announced the availability of their Snort 2.8 beta code in CVS. According to the announcement, the new code contains several interesting features:


  • Port lists
  • IPv6 support
  • Packet performance monitoring
  • Experimental support for target-based stream and IP frag reassembly
  • Ability to take actions on preprocessor events
  • Detection for TCP session hijacking based on MAC address
  • Unified2 output plugin
  • Improved performance and detection capabilities

I downloaded the code and looked through some of the documentation on the new features, but haven't used any of them yet. From my point of view, the improvements from 2.7 to 2.8 are a lot bigger than those made when upgrading from 2.6 to 2.7.

I'm particularly happy about the idea of port lists. Finally, you can cherry-pick exactly the ports you want to monitor. If you wanted to look at, say, ports 80, 8080 and 8888 in Snort 2.7 and below, you had to write:

var MYPORTS 80:8888

This means to monitor all ports 80 through 8888 inclusive. That's a lot! With Snort 2.8, you can write:

portvar MYPORTS [80,8080,8888]

which is obviously much more efficient.
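
If you want to make sure a new portvar parses cleanly before you restart a production sensor, Snort's test mode will read the config and exit without ever sniffing a packet (paths here assume a typical install; adjust to match yours):

# Parse the configuration (including any new portvar lines) and exit
snort -T -c /etc/snort/snort.conf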

IPv6 support is also pretty major, though right now it's only a first pass. You can now specify IPv6 addresses pretty much anywhere you can use an IPv4 address, but several of the preprocessors don't yet support IPv6 and there's no fragmentation reassembly. IMHO, it's probably not quite ready for prime time, though I guess it could be good for simple tasks. I might deploy it in places that should not see any IPv6, just to alert whenever any is seen.

Snort 2.8 also extends the active response functions to events generated by the various preprocessors. You used to only be able to take actions based on signatures detected by the rules engine, but now you could potentially RST connections based on http_inspect or some other preprocessor events. Could be useful, though I personally don't rely much on Snort's built-in active response. I bet this would really shine in a Snort-inline environment, though.

The other items look cool too, but those are the ones I'm most interested in. I'm still trying to find out more about unified2 (specifically, if anything out there reads it yet, since barnyard only handles Unified Classic and isn't really developed anymore anyway).

Have you tried Snort 2.8 yet? If so, let me know what you think!

Friday, August 17, 2007

Sourcefire buys ClamAV

If you haven't seen this article yet, Sourcefire (the company behind Snort) has acquired the open source anti-virus project ClamAV.

I sure didn't see this coming. I wonder if it means that a gateway security product is in the works? In any case, Sourcefire has an excellent track record with the OSS community (recent licensing issues notwithstanding), so I can only see this as a positive for ClamAV and ClamAV users.

Tuesday, July 17, 2007

I'm getting the strangest looks today

In response to a conversation we had last week in #snort-gui, I created a t-shirt store with only a single shirt in it. Thus was the Information Security Workers Union, Local 1337 born.

I got mine last night, and I thought it looked pretty good, so I'm wearing it today. Several people have already asked me if I'm trying to unionize. Heh. I guess you have to be an ISWU member to get the joke.

Friday, July 13, 2007

My input to the ROI spat

Richard Bejtlich wrote a blog post entitled Are the Questions Sound?, which has prompted a number of replies. One in particular, Bejtlich and Business: Will It Blend?, took him to task on several issues.

I'll say up front that Richard is a friend of mine, and I've always found him to be very focused on how security adds (or detracts) from an organization's business. He's the rare person that possesses both deep technical knowledge and the ability to understand and communicate business drivers. If he stays long enough, he'll probably end up being GE's CISO one day.

Back to the task at hand. The thing that struck me most about Kenneth's posting was the false dichotomy he laid out in his first point:


1. Either the CISO does not know what he’s talking about at all or Richard clearly misinterpreted him

I suggest another possibility: the CISO was intentionally being a dick.

Yes, this is extremely common in the financial world. I have consulted with several large insurance and brokerage firms, and the problem is epidemic, especially on Wall Street. Executives like to push around their vendors, probably to reinforce their ideas about their own power and influence.

I've had this happen to me, and I've seen it happen to others. It's quite possible that the CISO knew what he was talking about, but chose to bully the consultant, just to see how he'd take it. I once had the VP of a Wall Street firm call my cell phone on Christmas Eve (Friday) and demand that I have some consultants on site the following Monday or he'd call my CEO and tell him "what kind of people he has working for him."

We'd already been there for most of a year, and they had already stiffed us on $300k worth of consulting fees, so I just told him not to threaten me, and then gave him the CEO's phone number. He backed down immediately. I later found out that he was in the process of being fired, and was trying to get us involved so he could point fingers. I'm sure that company is just a little better off today without him there.

Kenneth's second point is all about security ROI:

2. Information Security does not have an ROI and that security mechanisms are not business enablers

My friend in the financial risk department read Richard’s statement that “Security does not have an ROI” and he laughed. He commented, “Just let some hackers change some numbers in a banks financial system and you’ll see that security has ROI.”


It happens that I do agree that security can have an ROI, but the scenario given is not an example of that. It's an example of loss prevention and, to a certain extent, business enablement (to enable the bank to survive, which it really wouldn't if any Joe could log in and change account balances at will). You can best see this by examining the underlying requirement:

  • No one should be allowed to alter account balances without proper authorization.

Given that requirement, you can implement the countermeasures in various ways. For example, you could:

  1. Apply an application-layer defense to validate changes before they happen.
  2. Report on suspicious changes that have already occurred, and roll back anything that was not authorized.

Both achieve the same end, and both are used in real world situations. Given the requirement, both are functionally equivalent, but the second method more clearly shows the loss-prevention aspect.

If you'd like an example of how security can be an enabler, go back to my original comment about any Joe changing the database. Even better, you can see how it can become a positive source of revenue by examining Bank of America's SiteKey system. They're clearly selling the security of their online banking as a key differentiator, hoping to drive customers their way. And I'm pretty sure it's working.

Finally, Kenneth's point #3 is a bit inconsistent:

3. Some quotes are just wrong or nonsensical because they take the five-digit accuracy too literally instead of using modeling to understand risk:

  • “Assumptions make financial “five digit accuracy” possible.”
    • Actually mathematics make five digit accuracy possible: I can assume anything


There's more to this quote, but I'd just like to point out that a "model" is simply a short word for "a set of assumptions we believe are close enough to true to be useful for predictive purposes."

Overall, I'm really kind of puzzled by the whole ROI debate. I guess I can understand that some people have staked their reputations on one side or the other, but really it's a lot like "VI or EMACS" for the infosec crowd. It's more of a philosophical debate based on which models you are more comfortable with.

Tuesday, July 10, 2007

RIP SysAdmin Magazine

I got my last issue of SysAdmin Magazine last week, though I didn't open it until last night. I was shocked to discover that, according to the editor, this was to be their final issue. I've been a subscriber for the past few years, and have always enjoyed their practical, in-the-trenches approach to system administration topics. I particularly liked that they always did one or two issues on security each year. And now they're gone, just about 3 months after I renewed my subscription. I'll be sorry to see them go.

Searching inside payload data

Almost all of my searches involve IPs and/or port numbers, and Sguil has a lot of built-in support for these types of database queries, making them very easy to deal with. Sometimes, though, you want to search on something a little more difficult.

This morning, for example, I had a specific URL that was used in some PHP injection attack attempts, and I wanted to find only those alerts that had that URL as part of their data payload.

Constructing a query for this is actually pretty easy, if you use the HEX() and the CONCAT() SQL functions. If you're using the GUI interface, you only have to construct the WHERE clause, so you can do something like the following:

WHERE start_time >= "2007-07-09" \
AND data.data_payload like \
CONCAT("%", HEX("my.url.or.some.other.string"), "%")

The main problem with this type of query is that the data_payload field is not indexed, so it results in a table scan. You really need to make sure you include some other criterion that is indexed. In this case, I used the date to restrict the number of rows to search, but you could use IPs or port numbers as well.
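
As a side note, if you're curious what the HEX() pattern actually expands to before you kick off a long table scan, you can ask MySQL directly from the shell. The username here is just a guess; use whatever account your sguild server uses:

# Print the hex-encoded pattern that the LIKE clause will be searching for
mysql -u sguil -p -e 'SELECT HEX("my.url.or.some.other.string");'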

Tuesday, July 03, 2007

Tired of all the talk

I read White House council puts cybersecurity in focus this morning, and I have to say: "Enough is enough."

The Federal government needs to stop talking about cybersecurity and start doing cybersecurity. If they're just now putting cybersecurity into focus, where has it been for the past several years?

Basically, the article talks about improving communications between first responders at the Federal, state and local levels, and about providing better cybersecurity guidance. The problem is that this is all just talk and paperwork, like most Federal cybersecurity initiatives. Yes, communication is important, as is guidance, but do you know what actually makes things more secure? People. And money. Neither of these is a feature commonly seen in Federal cybersecurity initiatives.

Note to all my readers in the White House: The Federal government is too big and the agencies too diverse to effectively push cybersecurity from the top down. Instead of trying to centralize the cybersecurity programs at the Executive level, focus on supporting the agencies by giving them the resources necessary to develop and maintain their own effective security programs. Stop funding them as an afterthought, and get real about how much it costs to hire and train effective security personnel. Recognize that security requires positive actions to make computing safer, not just getting the FISMA reports done on time.

When the government starts doing these things, Federal security will improve. Then you can worry about centralization. Until then, though, it's just a bunch of useless talk.