Thursday, January 21, 2010

Review of "Inside Cyber Warfare" posted

Yes, I've been on a reading kick lately, and my most recent selection was Jeffrey Carr's Inside Cyber Warfare: Mapping the Cyber Underworld.  I'm not really a big fan of using the cyber- prefix unless you've got some kind of bionic implant, but otherwise I really liked this book and gave it the full 5 stars:

"Inside Cyber Warfare: Mapping the Cyber Underworld" is the best book I've seen for those of us charged with defending against the highest-end threats to information security. It provides a comprehensive intelligence briefing on actors, capabilities, motivations and possible responses to acts of cyberwar. I highly recommend this for government, military and corporate readers who are responsible for either securing their own networks or for setting security policy. The threat is real, and these groups are active. Inside Cyber Warfare is the guide you need to help you understand the context in which your organization operates on the modern battlefield.

You can read the full review here if you like.

Thursday, January 14, 2010

Is active response a valid approach to dealing with APT?

I recently picked up a copy of Jeffrey Carr's Inside Cyber Warfare: Mapping the Cyber Underworld (review pending, stay tuned!).  One of the chapters struck a particular chord with me, and I thought I'd share my views.

Chapter 4, Responding to International Cyber Attacks as Acts of War (written by guest author Lt. Cmdr. Matthew J. Sklerov), is a fascinating analysis of the legal basis for the use of armed force between nation-states.  In it, Sklerov proposes not only that states have the legal right to respond to network intrusions with "active response" (AKA "hack backs"), but that this is a far more effective response than traditional passive defense.

Of course, people have been talking about "active response" for almost as long as I've been in the security field.  However, most of that talk was oriented towards garden-variety attacks, not the high-end attackers referred to in this analysis.  Yes, this chapter (in fact, much of the book) is talking about what we refer to as the Advanced Persistent Threat (APT).

This chapter talks a lot about whether a cyber attack could be construed as the use of armed force (answer: yes) and if it would be legal for one state to launch a cyber attack on another in order to defend or deter them from such attacks (answer: yes).  I think everyone can agree that attacks that are clearly government sponsored would warrant some sort of action by the victim state.

The really interesting argument, though, is whether the attacks have to be clearly attributed to a state, or whether it is sufficient that they merely be imputed.  In other words, do you have to prove that the state performed the attack directly, or is it sufficient to know that the attack was carried out by non-state actors either under the direction of or with the tacit complicity of the attacking state?

This is a very important question, because states that are known to be actively engaged in cyber conflicts rarely do their own dirty work.  Russia and China, for example, are both widely believed to work through their own extensive networks of civilian hacker groups, which somehow seem to escape prosecution as long as their targets are the enemies of their respective states.  If the governments approve of what the hacker groups are doing, they allow them to operate and offer them a large degree of immunity from international investigation.  All the while, the governments retain plausible deniability.

According to Sklerov's argument (which I'm brutally mangling by boiling it down into simplistic terms), states that perform cyber attacks, or that through inaction allow cyber attacks to be performed from within their borders, have sacrificed their right to remain unmolested and opened themselves up to hack backs by their victims.

However, Sklerov's entire thesis is colored by his military background.  His argument is based on the premise that the victim is a nation-state, not a corporation, group or individual.  It's very easy to talk about how a nation-state has the ability to use armed force to protect its interests, but the international law he marshals to support his argument does not apply at all unless you are a government.

Sure, APT is out there targeting governments, but it's also inside companies, as we learned this week with Google's announcement that it was being hacked by China.  My biggest question, then, is "Where does this leave the rest of us?"  I very much doubt that any non-governmental organization or individual is prepared to open a cyberwar with a nation-state.  If they did, their actions would by definition be illegal and would probably result in criminal charges in their own country.  In fact, it would be that country's positive duty to fully investigate the hack back and enforce the law, or it would open itself up to hack backs by the aggressor nation due to its inaction.

Sklerov also spends some time discussing technological aspects of active attack: specifically, how to accurately determine the source of the attack in order to target your foes.  I admit that I'm a little confused by this:

Automated or administrator-operated trace programs can trace attacks back to their point of origin.  These programs can help system administrators classify cyber attacks as armed attacks [...] and evaluate whether attacks originate from a state previously declared a sanctuary state.

A few pages later, he also writes:

Cyber attacks are frequently conducted through intermediate computer systems to disguise the true identity of the attacker.  Although trace programs are capable of penetrating intermediate disguises back to their electronic source, their success rate is not perfect. 

Honestly, I have absolutely no idea what he's talking about here.  "Trace programs?"  I see those on TV and in the movies, but never in real life.  The technology just does not exist.  He's right that APT attacks never come from the attackers' own computers; they compromise other systems and then use those as disposable launch points from which to attack their real targets.  When a launch point is "burned", they discard it and start again somewhere else.  You can easily identify these intermediaries, but there's no way to trace back any farther without contacting the system owners (or hacking back) and performing a manual, time-consuming analysis.  This idea of an automated trace just isn't based in reality.

In my view, these are the two points that really undermine the value of active response as an APT fighting tool:  Unless you're a government it's still illegal, and it's not technically feasible to trace the source back to an appropriate target anyway (even if you are a government). 

Having said that, I still think this is a fascinating and compelling legal argument.  Governments have the duty to protect the interests of their citizens (and by extension, companies and other organizations based in their country), so perhaps this could serve as a legal basis for allowing the government to take a more active role in helping to defend private networks.  In that case, the active response could be performed with the assistance of or under the direction of an authorized federal agency, which might eliminate some of the legal barriers and do some real good in the field.

Tuesday, January 12, 2010

Review of "Hacking: The Next Generation"

One of my New Year's Resolutions for 2010 is to get back into the habit of reading lots of security books.  I used to read at least one a month, and often more, but somehow I got out of the habit last year.  This year I want to do better!

To start off with, I read Hacking: The Next Generation by Nitesh Dhanjani, Billy Rios and Brett Hardin.  You can find the full 4-star review online, but I'll excerpt it here:

I first saw this book in the store, and a quick glance through the Table of Contents got me pretty excited. I saw topics like mobile security, the phishing underground, targeted attacks against company executives and (the big selling point for me) attacks against cloud computing. In fact, I was so excited to read it that I ordered it from Amazon on the spot, through my phone. After having read this book, I can say that it lived up to most of my expectations.

 I should also mention that the focus of this book is primarily on high-end attackers, some of whom fall under the category of "Advanced Persistent Threat" or "APT", even though the authors don't use this term.  Although experienced APT fighters might not find a lot new here, I'm very happy to see that some of the new crop of books are starting to address these types of threats.

Friday, January 01, 2010

Follow me on Twitter

Just a note:  if you'd like to see the things I felt were too small to blog but interesting enough to point out, follow me on Twitter.

Why your CIRT should fail!

Ok, you know I don't mean that literally!  However, "FAIL" is the theme of this month's Wired magazine, specifically, how you can turn losing into winning.  As I was reading Jonah Lehrer's Accept Defeat: The Neuroscience of Screwing Up, I realized that it had some important implications for CIRTs and other security teams.

In the article, Lehrer writes about how scientists often fail to make the most of their experimental results:

Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories.

An intrusion analyst has this problem all the time.  While responding to a possible incident, an investigator collects a lot of information and tries to organize it in such a way as to tell a coherent story, the better to judge what happened, whether it was significant, and what to do about it.  Creating this story, then, is the equivalent of coming up with a theory about the event. 

The problem is that there is often a lot of confusing and/or contradictory evidence in a complex investigation.  The investigator comes to the table, though, with a built-in bias based on his previous experience with similar incidents, his knowledge of the local user population, his understanding of the organization's IT infrastructure and a lot of other factors.  Can a biased analyst be relied upon to create an unbiased view of events?  Yes, says Lehrer:

It’s normal to filter out information that contradicts our preconceptions. The only way to avoid that bias is to be aware of it.

This is one of the two big take-aways I got from this article:  Simply being aware that you have a tendency to create a biased story is the most effective way to keep your investigations objective.

The other important point this article makes is that the composition of your research team (or in this case, your CIRT) is crucial, but maybe not in the way you think:

There are advantages to thinking on the margin. When we look at a problem from the outside, we’re more likely to notice what doesn’t work. Instead of suppressing the unexpected, shunting it aside with our “Oh shit!” circuit and Delete key [brain functions discussed earlier in the article], we can take the mistake seriously. A new theory emerges from the ashes of our surprise.

In other words, if your CIRT is composed entirely of experts in the fields in which you expect to operate, you may hit some real challenges when an attacker does something unexpected, something contrary to the conventional wisdom.  

To illustrate this point, Lehrer relates a story about two research labs trying to solve the same problem (unwanted proteins sticking to a filter in their equipment).  One lab had a team of researchers who were all experts in the same field, while the other was composed of a diverse set of researchers from different scientific disciplines.  

The first team "took a brute-force approach, spending several weeks methodically testing various fixes. 'It was extremely inefficient,' Dunbar says. 'They eventually solved it, but they wasted a lot of valuable time.'"

The second team, composed of experts in many different fields, however, had a different result:

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”

The reason the second team was more successful?  They had the "outsider's" point of view.  They were able to examine fresh approaches to the problem, kick them around and come up with a creative solution far more easily than the "expert" team bound by a sense of "this is how we always do it".

It's pretty easy to see how this could apply to security.  We're constantly called to come up with creative solutions to problems, though our problems are usually called "incidents."

The key to a successful CIRT is the diversity of the team.  You need intrusion analysts, sure, but you also need specialists in other areas such as incident response, malware reverse engineering, system administration, threat intelligence and as many other relevant disciplines as you can lay your hands on.  Put these people together on the same team, give them easy communication and good collaboration tools and watch them go!

I'm extremely fortunate that I work in just such an environment, and I can tell you from experience that this approach works.  It's almost shocking how effective it is.

I never met an analyst who thought she was good enough to deal with everything by herself, nor a CIRT that felt it was able to handle incidents well enough.  This is a profession where we have a constant need for improvement and adversaries who are often extremely skilled.  To respond well to compromises, it's important to recognize how you can turn failure into excellence.  This article shows how this is possible, for both individuals and teams.

2010: The Year We Make Contact (with the readers, that is)

Hello, and welcome to 2010. Somehow I've managed to go through all of 2009 without making a single blog entry! I never meant my last post to be my last post, so here's to 2010: The Year I Start Blogging Again.

Sunday, November 23, 2008

Time for a change

As most of you probably know, I've operated my own security consulting business for the past few years. What is less widely known is that it was a part time affair, as I kept a full time job working for a US nuclear physics research lab. I was very happy with the way things were going, but sometimes you just get an offer that's too good to refuse...

That's pretty much how I felt when I started talking to Richard Bejtlich about coming over to GE to help them build an enterprise-wide CSIRT. This is an exceptional chance to help create an incident response capability that could have a very real effect on the security posture of one of the largest companies in the world, and who could turn that down? To top it off, I know some of the other team members, and I can honestly say that they're a top-notch group of people. And in my book, interesting work combined with a high powered team of experts equals an opportunity I just had to take.

So starting tomorrow, I'll be an Information Security Incident Handler for GE. I promise this won't turn into a GE blog, though. You'll still see the same quality technical articles you've come to expect. That is, if my new boss will unshackle me from the oars every now and then. Heh.

Monday, November 03, 2008

Detecting outgoing connections from sensitive networks with Bro

As I mentioned in my last post, I've been playing with the Bro IDS. I wanted to take a stab at creating my own policy, just to see what it's like to program for Bro. It turned out to be surprisingly easy.

I started with the following policy statement: There are certain hosts and subnets on my site which should never initiate connections to the Internet. Given that, I want to be notified whenever these forbidden connections happen. To accomplish this, I created restricted-outgoing.bro.

To use this sample, copy it into your $BRO_DIR/site directory, and add "@load restricted-outgoing" to your startup policy (the mybro.bro script if you're following my previous example).

## Detect unexpected outgoing traffic from restricted subnets
@load restricted-outgoing

Now the code is loaded and running, but it must be configured according to your site's individual list of restricted hosts and subnets. Following the above lines, you can add something like:

redef RestrictedOutgoing::restricted_outgoing_networks = {,    # restricted subnet (example value),     # restricted subnet (example value),  # individual host (example value)

If you run Bro now, you should start seeing lines like the following in your $BRO_DIR/logs/alarms.log file:

1225743661.858791 UnexpectedOutgoingUDPConnection x.x.x.x/netbios-ns > y.y.y.y/netbios-ns : Restricted Outgoing UDP Connection

1225743661.858791 UnexpectedOutgoingTCPConnection x.x.x.x/netbios-ns > y.y.y.y/netbios-ns : Restricted Outgoing TCP Connection

Similar entries will also show up in your $BRO_DIR/logs/restricted-outgoing.log file.
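If you want to post-process those alarm entries outside of Bro, they're simple to pick apart. Here's a quick Python sketch; the field layout is assumed from the sample entries above, so a real alarms.log may vary:

```python
import re

# Assumed layout, based on the sample entries above:
#   <timestamp> <AlarmName> <src>/<svc> > <dst>/<svc> : <message>
ALARM_RE = re.compile(
    r"(?P<ts>\d+\.\d+)\s+(?P<name>\S+)\s+"
    r"(?P<src>\S+)\s+>\s+(?P<dst>\S+)\s+:\s+(?P<msg>.+)"
)

def parse_alarm(line):
    """Parse one alarm line into a dict, or return None if it doesn't match."""
    m = ALARM_RE.match(line.strip())
    return m.groupdict() if m else None

line = ("1225743661.858791 UnexpectedOutgoingUDPConnection "
        "x.x.x.x/netbios-ns > y.y.y.y/netbios-ns : "
        "Restricted Outgoing UDP Connection")
rec = parse_alarm(line)
print(rec["name"])   # UnexpectedOutgoingUDPConnection
print(rec["src"])    # x.x.x.x/netbios-ns
```

Something like this makes it easy to feed the alarms into your own reporting or ticketing scripts.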

This script considers a connection to be a tuple composed of the following values: (src_ip, dst_ip, dst_port). When it alerts, that connection is placed on a temporary ignore list to suppress further alerts, and a per-connection timer starts counting down. Additional identical connections reset the timer. When the counter finally reaches 0, the connection is removed from the ignore list, so you'll receive another alert next time it happens. The default value for this timer is 60 seconds, but you can change it by using the following code:

redef RestrictedOutgoing::restricted_connection_timeout = 120 secs;
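To make the suppression behavior concrete, here's a small Python model of the ignore list and its reset-on-sight timer as described above. The class and method names are my own, and this mirrors the behavior described in the text rather than Bro's internals:

```python
import time

# Hypothetical model of the module's alert suppression, keyed on
# the (src_ip, dst_ip, dst_port) connection tuple.
class AlertSuppressor:
    def __init__(self, timeout=60):
        self.timeout = timeout   # seconds; mirrors restricted_connection_timeout
        self.last_seen = {}      # connection tuple -> last time it was seen

    def should_alert(self, src_ip, dst_ip, dst_port, now=None):
        """Return True if this connection should generate a fresh alert."""
        now = time.monotonic() if now is None else now
        conn = (src_ip, dst_ip, dst_port)
        last = self.last_seen.get(conn)
        # Any identical connection resets the timer, alerted or not.
        self.last_seen[conn] = now
        if last is None or now - last >= self.timeout:
            return True          # new, or the ignore-list timer expired
        return False             # suppressed: seen too recently

s = AlertSuppressor(timeout=60)
print(s.should_alert("", "", 53, now=0.0))   # first sighting: alert
print(s.should_alert("", "", 53, now=30.0))  # suppressed, timer reset
print(s.should_alert("", "", 53, now=95.0))  # 65s since last: alert again
```

Note that because every sighting refreshes the timer, a connection that repeats more often than the timeout will alert exactly once and then stay quiet, which is what you want for chatty protocols.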

Even though your policy may state that outgoing connections are not allowed from these sources, it may be the case that you have certain exceptions. For example, Microsoft Update servers are useful for Windows systems. There are three ways to create exceptions for this module:

First, you can define a list of hosts that any of the restricted nets is allowed to access:

redef RestrictedOutgoing::allowed_outgoing_dsts = {,   # Akamai, usually updates (example value)

Second, you can list services which particular subnets or hosts are allowed to contact:

redef RestrictedOutgoing::allowed_outgoing_network_service_pairs = {
    [, 25/tcp],   # SMTP server (example subnet)
    [, 80/tcp],   # Web proxy (example subnet)
    [, 53/udp],   # DNS server (example subnet)
    [, 123/udp],  # NTP (example subnet)

Finally, you can list specific pairs of hosts which are allowed to communicate:

redef RestrictedOutgoing::allowed_outgoing_dst_pairs = {
    ["", ""],   # example pair; hostname is a placeholder

Note that this is the only one of the three options that allows you to specify individual IPs without CIDR block notation, or to use hostnames.  The hostnames are especially useful, as Bro automatically knows about all the different IPs that a hostname resolves to, so a hostname entry like the one above would match any of the IPs returned by DNS.

Overall, I found the Bro language pretty easy to learn, and very well suited for the types of things a security analyst typically wants to look for. I was able to bang out a rough draft of this policy script on my second day working with Bro, and I refined it a bit more on the third day. Of course, I'm sure an actual Bro expert could tell me all sorts of things I did wrong. If you're that Bro expert, please leave a comment below!

Update 2008-11-06 14:45 I have uploaded a new version of the code with some additional functionality. Check the comments below for details. And thanks to Seth for his help!

Getting started with Bro

Lately, I've been playing with Bro, a very cool policy-based IDS. I say "policy-based" because, unlike Snort, Bro doesn't rely on a database of signatures in order to detect suspicious traffic. Rather, Bro breaks things down into different types of network events, and Bro analysts write scripts to process these events based on their particular detection policies and emit alarms.

At first, I was pretty puzzled about how to get started. The Bro website has some quick start docs, but they direct you to use the "brolite" configuration (a kind of simplified, out-of-the-box configuration). The bad news for that, however, is two-fold. First, the configuration is listed as deprecated in the files that come with the source tarball, and second, the brolite installation process doesn't work right under Linux.

So for the record, here's what you need to get started with Bro. (Thanks to Bro Guru Scott Campbell for helping me out with this):

  1. # ./configure --prefix=/usr/local/bro
  2. # make && make install
  3. # cp /usr/local/bro/share/bro/mt.bro /usr/local/bro/site/mybro.bro

After that, create a short startup script that sets the policy search path and launches Bro; something like this (adjust the paths to match your install):

export BROPATH=/usr/local/bro/policy:/usr/local/bro/site
./bro -i eth1 --use-binpac -W mybro.bro

Now you can just run that script and it'll do the right thing. The new mybro.bro file will be a very stripped-down default set of policies. It won't do that much, but you can then add to it as you see fit. You can find more details about this in the Bro User Manual and Bro Reference Manual.

By the way, this example uses the --use-binpac option to enable some new-style compiled binary detectors. This caused Bro to crash frequently on my RHEL testbed, so if the same happens to you, you might need to leave that option out.

Wednesday, September 10, 2008

Catastrophic consequences of running JavaScript

Wow, I had no idea simply running JavaScript could be this bad. I'm really happy now to be running the excellent NoScript Firefox extension.


Visit the page in question and notice that it displays a simple "Nope", indicating that the world has not yet ended. Whew!

Next, view the source for that page. You'll see the following snippet of code:

<script type="text/javascript">
if (!(typeof worldHasEnded == "undefined")) {
} else {
Let me walk you through that code... First, if you have JavaScript enabled, everything between <script> and </script> is executed, which comes out to be a single if statement, where one of the possible outcomes is that the world, in fact, has been destroyed.

On the other hand, if your browser doesn't support JavaScript, the page simply renders whatever is inside the <noscript> stanza, which always evaluates to "NOPE."

In other words, with JavaScript enabled, there's a small (but finite) chance that the world could end! However, with no JavaScript, there's zero chance. Therefore, JavaScript is demonstrably dangerous! The risk far outweighs any temporary benefit we could gain from this technology!

Do not tempt fate! Disable JavaScript everywhere IMMEDIATELY! You have been warned!