Tuesday, November 29, 2005

Need a better reason to come to HRSUG?

You're probably tired of hearing me talk about it, but the first Hampton Roads Snort Users Group (HRSUG) meeting is this week (details here). As if networking with your peers or listening to our guest speaker weren't enough, there's one more reason to come... free training!

The folks at Sourcefire have donated a training package door prize worth almost $2,000. In short, if you attend and are the lucky winner, you'll get:

I've recently taught both of these classes, and I can tell you from my personal experience that they're very good. The winner will have their choice of instructor-led classroom training or the self-paced online classes. Other meeting attendees can register for the same classes/exam at a 50% discount through December 31st, 2005.

So... you're coming, right?

Monday, November 28, 2005

On the dangers of speaking outside your area of competence

Ok, this is just dumb. According to this article, Richard Carrigan, a physicist at Fermilab, is concerned that aliens (as in E.T.) are going to "infect the Internet". He claims that the signals processed by the millions of computers participating in the SETI@Home distributed computing project are capable of carrying malicious code, and that the SETI project should implement some sort of signal quarantine to protect us. Kind of like a reverse Jeff Goldblum maneuver from Independence Day.

The thing is, this isn't a very likely scenario. First, the signals are data, not executable code. That's our first layer of protection.
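To make that concrete, here's a toy sketch in C of what a distributed-computing client does with incoming bytes. This isn't the actual SETI@Home code (the buffer size and variable names are made up); it just shows that the "signal" is only ever used as numbers to crunch, never as instructions to run.

    /* Toy illustration only -- not the real SETI@Home client.
     * Incoming "signal" bytes are treated purely as numeric samples. */
    #include <stdio.h>

    #define CHUNK 4096

    int main(void)
    {
        unsigned char buf[CHUNK];
        size_t n;
        double sum = 0.0, count = 0.0;

        /* Read the "signal" from stdin in fixed-size chunks. */
        while ((n = fread(buf, 1, sizeof buf, stdin)) > 0) {
            /* The bytes only ever feed arithmetic.  Nothing here (or in
             * any sane client) jumps into the buffer or hands it to an
             * interpreter, so the data has no way to "run". */
            for (size_t i = 0; i < n; i++) {
                sum += buf[i];
                count += 1.0;
            }
        }

        if (count > 0.0)
            printf("mean sample value: %f over %.0f bytes\n", sum / count, count);
        return 0;
    }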

Now, we could posit a software flaw in the SETI@Home client, some sort of overflow that allows arbitrary code execution. But for aliens to exploit it successfully, they'd need to know an awful lot about how our computers work and about our current software versions, and the laws of physics are working against them.
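Just so we're clear about the kind of flaw we're imagining, here's a deliberately broken, hypothetical parser in C. None of these names (parse_work_unit, payload_len, and so on) come from the real SETI@Home or BOINC source; it only shows the class of bug being posited.

    /* Hypothetical, deliberately vulnerable parser -- NOT real SETI@Home
     * or BOINC code.  It trusts a length field in the incoming data. */
    #include <string.h>

    struct work_unit_header {
        unsigned int payload_len;       /* attacker-controlled length field */
        const unsigned char *payload;
    };

    static void parse_work_unit(const struct work_unit_header *wu)
    {
        unsigned char samples[512];

        /* BUG: copies payload_len bytes into a 512-byte stack buffer with
         * no bounds check.  A payload longer than 512 bytes smashes the
         * stack -- the classic overflow. */
        memcpy(samples, wu->payload, wu->payload_len);

        /* ...signal processing on samples[] would go here... */
        (void)samples;
    }

    int main(void)
    {
        /* A well-formed work unit; nothing bad happens with sane input. */
        static const unsigned char data[16] = {0};
        struct work_unit_header wu = { sizeof data, data };
        parse_work_unit(&wu);
        return 0;
    }

Even granting a bug like that, turning it into working shellcode means knowing the exact buffer size, stack layout, compiler, operating system, and protections in place on the receiving machine, which brings us to the physics problem.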

The closest star is a little over four light years away from Earth. Assuming that we broadcast complete technical details of the x86 architecture and an entire copy of the Windows OS, along with a comprehensive set of security bulletins and an SDK, the roundtrip time for data traveling at the speed of light means that by the time the "exploit" could arrive here, we'd be about 9 years further on. Let's see, 9 years ago we'd all have been running NT 4 and Windows 95. Good luck trying a Win95 overflow on my XP system! The offsets are all wrong, and we have security technologies today that weren't even dreamed of then (like the non-executable stack). What will we have 9 years from now? I don't know (and neither do the aliens), but I do know the aliens don't stand a chance.

Seriously, I think he's missing the point. If you want to be concerned about the security of the SETI@Home software or its replacement, BOINC, don't bring aliens into the picture. Security concerns are legitimate, yes, but if a bug that allows remote code execution does exist, it's far more likely to be exploited by a human than by an alien.

Unless, of course, you believe this guy.

Update 2005-11-28 09:48 -- Check out Richard Carrigan's website for more information on his idea. There's a presentation and a copy of his paper on the subject.

Sunday, November 27, 2005

HRSUG Meeting Reminder

Just a reminder that the inaugural meeting of the Hampton Roads Snort Users Group (HRSUG) is just around the corner! Read the meeting details, then join the mailing list!

Thursday, November 10, 2005

HRSUG Mailing List

As you may know, I recently announced the formation of a new Snort users' group in the Hampton Roads, VA area. I'm happy to say that the group's mailing list is now available. If you want to be kept up to date on our meeting schedule, or if you just want to connect with other security people in the area, sign up now!

Wednesday, November 09, 2005

RSA: Phishing experiments hook net users

Here's a nifty RSA press release describing a recent experiment they conducted in NYC. The experimenters posed as tourism pollsters and, in most cases, were able to gather enough information from their subjects to divine the passwords those subjects were likely to use for their various accounts. Oddly, most people wouldn't give out their actual passwords, or the method they use to come up with new ones.

The article points out that the most likely explanation is that most people just aren't aware of how other personal information can be used as a "back door" (e.g., using a mother's maiden name to reset a "forgotten" password). I'm kind of at a loss on this one. Who hasn't had to set up a security question for exactly this purpose? Are people setting these up and forgetting about them, or just blindly answering the questions without understanding why the information is being collected?
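To see why those questions matter, here's a toy sketch in C of a typical "forgot my password" flow. It's hypothetical (not any real site's code, and the account fields are made up), but it shows that whoever can answer the security question gets to choose a new password, so the answer is effectively a second, weaker password.

    /* Toy "forgot my password" flow -- hypothetical, not any real site.
     * The security answer is all it takes to set a new password. */
    #include <stdio.h>
    #include <string.h>

    struct account {
        char password[64];
        char security_answer[64];       /* e.g., mother's maiden name */
    };

    /* Returns 1 if the reset succeeds, 0 otherwise. */
    static int reset_password(struct account *acct,
                              const char *claimed_answer,
                              const char *new_password)
    {
        /* The only gate is the security answer -- exactly the kind of
         * detail a "pollster" on the street might collect. */
        if (strcmp(acct->security_answer, claimed_answer) != 0)
            return 0;

        strncpy(acct->password, new_password, sizeof acct->password - 1);
        acct->password[sizeof acct->password - 1] = '\0';
        return 1;
    }

    int main(void)
    {
        struct account alice = { "s3cret!", "Smith" };

        /* Someone who learned the maiden name never needs the old password. */
        if (reset_password(&alice, "Smith", "attacker-chosen"))
            printf("password reset; new password is now: %s\n", alice.password);
        return 0;
    }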

Here's what I'd like to do. I'd love to repeat this experiment, with a twist. At the end of each interview, hand the person a scorecard telling them how well they protected their information, and providing suggestions for improvement. Be sure to give examples of how each piece of requested information could have been used against them. That way you gather results and educate the public at the same time.