Wednesday, June 27, 2007

The Right Way to Establish a Culture of Security

After reading this article, my hat is off to Yahoo's Arturo Bejar. Not only does he have the world's coolest job title ("Chief Paranoid Yahoo"), but he's taken some extremely creative measures to help build a pervasive culture of security at the Internet behemoth. I especially like the part about the t-shirts: they give people a reward to strive for, and they're free advertising for the program. And the multiple tiers sound like they would really spur some competition to get those coveted red shirts.

As someone who has been called a raving paranoiac himself, I really appreciated this approach. There are some great lessons in that article.

Tuesday, June 19, 2007

MySQL Database Tuning Tips

I came across a great article on MySQL performance tuning. It's got a few very practical tips for examining the database settings and tweaking them to achieve the best performance.

"What's this got to do with security", you ask? As you know, Sguil stores all of it's alert and network session data in a MySQL backend. If you monitor a bunch of gigabit links for any amount of time, you're going to amass a lot of data.

I try to keep a few months of session data online at any given time, and my database queries have always been kinda slow. I learned to live with it, but after reading this article, I decided to check a few things and see if I could improve my performance, even a little.

I started by examining the query cache to see how many of my database queries resulted in actual db searches, and how many were just quick returns of cached results.


mysql> show status like 'qcache%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 1         |
| Qcache_free_memory      | 134206272 |
| Qcache_hits             | 395       |
| Qcache_inserts          | 248       |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 75        |
| Qcache_queries_in_cache | 2         |
| Qcache_total_blocks     | 6         |
+-------------------------+-----------+
8 rows in set (0.00 sec)

Comparing Qcache_inserts (queries that had to be run and then cached) to Qcache_hits shows how well the cache is doing. In my case, 248 of the 643 cacheable queries (about 39%) resulted in actual database searches, and the other 61% were served up straight from the cache. Almost all of the queries come either from Sguil or from my nightly reports, so this was actually a much better hit rate than I expected. Notice, though, that I had 127MB of unused cache memory. My server's been running for quite some time, so it seemed to me like I was wasting most of that. I had originally set the query_cache_size (in /etc/my.cnf) to be 128M. I decided to reduce this to only 28M, reclaiming 100M for other purposes.
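For reference, the change itself is a single line in the [mysqld] section of /etc/my.cnf (the value shown is my new setting; pick one that suits your own workload):

[mysqld]
query_cache_size = 28M

If you'd rather not restart mysqld, the same change can be applied to a running server with SET GLOBAL query_cache_size = 29360128; (that's the same 28M, expressed in bytes).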

Next I wanted to examine the number of open tables. Sguil uses one table per sensor per day to hold session data, and another five tables per sensor per day to hold alert data. That's a lot of tables! But how many are actually in use?

mysql> show status like 'open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 46    |
| Opened_tables | 0     |
+---------------+-------+
2 rows in set (0.00 sec)

So the good news is that not many tables are open at any given time. I actually tried to do several searches through alert and session data over the last few weeks, just to see if I could drive this number up. I could, but not significantly. I had set table_cache to be 5000, but that seemed quite high given this new information, so I set it down to 200. I'm not sure exactly how much (if any) memory I reclaimed that way, but saner settings are always better.

Finally, the big one: index effectiveness. I wanted to know how well Sguil was using the indices on the various tables.

mysql> show status like '%key_read%';
+-------------------+-----------+
| Variable_name     | Value     |
+-------------------+-----------+
| Key_read_requests | 205595231 |
| Key_reads         | 4703977   |
+-------------------+-----------+
2 rows in set (0.00 sec)

By dividing Key_reads by Key_read_requests, I determined that approximately 2% of all requests for index data resulted in a disk read. That seemed OK to me, but the article suggests aiming for something closer to 0.1%. Given that, I figured the best thing I could do would be to reallocate the 100M I reclaimed earlier as extra key space. I did this by changing the key_buffer setting, which now stands at a whopping 484M.
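For the record, here's how the relevant chunk of my /etc/my.cnf ended up after all three changes. These sizes suit my particular server and workload, so treat them as an example rather than a recommendation:

[mysqld]
query_cache_size = 28M
table_cache      = 200
key_buffer       = 484M

The file is only read at startup, so either restart mysqld or make the equivalent SET GLOBAL changes by hand.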

Did all this optimization make any difference? I didn't do any database performance benchmarking either before or after, but my perceived response time while using Sguil is a lot faster. It still takes time to run these complex searches, of course, but at least now I can go back through four or six weeks of data without having to leave for lunch. It seems to have made a big difference for me, so why not try it out yourself? If you do, post your experiences below and let us know how it went!

Tuesday, June 12, 2007

I'M IN YR RRs, QUERYING YR HOSTS

Probably the thing I like best about Sguil is its development philosophy. Most of the features are suggested by working intrusion analysts. Just the other day, we were chatting on IRC (freenode's #snort-gui) about DNS lookups. Specifically, I asked about having the Sguil client modified to give the analyst more control over DNS lookups. I'm always concerned that when I ask the GUI to resolve an IP for me, I might be tipping off an attacker who is monitoring the DNS server for their zone. I mentioned that it'd be cool to have the ability to look up internal IPs in my own DNS servers but forward all external IPs to some third party (e.g., OpenDNS) in order to mask the true source of the queries.

Based on some of the discussion, I decided it'd be very handy to have a lightweight DNS proxy server that could use a few simple rules to determine which servers to query. I had some time this afternoon, so I hacked one up in Perl. It turns out that the same feature has now made it into the CVS version of Sguil, but I thought that my proxy would still be of use for other purposes. So without further ado, I hereby post it for your downloading pleasure.

You'll need to make a few edits to the script to tailor it to your own environment. First, populate the @LOCAL_NETS list with regexps that match your local network. All incoming queries will be compared against these expressions, and any that match will be considered "local" addresses. Queries that don't match anything in the list will be considered "public". Note that reverse lookups require the "in-addr.arpa" address format. Here's a sample:


@LOCAL_NETS = ( "\.168\.192\.in-addr\.arpa",
                "\.10\.in-addr\.arpa",
                "\.mydomain\.com" );

You'll also need to set the $LOCAL_DNS and $PUBLIC_DNS strings to space delimited lists of DNS servers to use for each type of query, like so:

$LOCAL_DNS = "10.10.10.2 10.10.10.3";           # My DNS Servers
$PUBLIC_DNS = "208.67.222.222 208.67.220.220";  # OpenDNS Servers

Once you've done this, just run the script (probably in the background) and it will start listening for DNS requests on 127.0.0.1, port 53, UDP and TCP. Then configure your system's /etc/resolv.conf to use the local proxy and you're all set.
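On most Unix-like systems, that just means making the proxy the first (or only) nameserver entry:

# /etc/resolv.conf
nameserver 127.0.0.1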

If you want to open it up a little and provide DNS proxy service for other systems on your network, you can do this by changing the LocalAddr parameter in the call to Nameserver->new(). Set the IP to your system's actual address and anyone who can connect to your port 53 will be able to resolve through you. I haven't really done any security testing on that, so it probably has some nasty remotely exploitable bugs. You have been warned!
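For the curious, here's a rough sketch of how the dispatch logic works, built on the Net::DNS modules. This is not the actual script (grab that from the download link above), and I've simplified things a bit, for instance writing the patterns as qr// regexps instead of the plain strings the real config uses:

#!/usr/bin/perl
#
# Rough sketch only -- NOT the actual proxy script. It just shows how the
# local/public routing decision can be made with Net::DNS.
use strict;
use warnings;
use Net::DNS;
use Net::DNS::Nameserver;

my @LOCAL_NETS = ( qr/\.168\.192\.in-addr\.arpa$/i,
                   qr/\.10\.in-addr\.arpa$/i,
                   qr/\.mydomain\.com$/i );
my $LOCAL_DNS  = "10.10.10.2 10.10.10.3";            # My DNS servers
my $PUBLIC_DNS = "208.67.222.222 208.67.220.220";    # OpenDNS servers

my $local_res  = Net::DNS::Resolver->new(nameservers => [ split ' ', $LOCAL_DNS ]);
my $public_res = Net::DNS::Resolver->new(nameservers => [ split ' ', $PUBLIC_DNS ]);

sub reply_handler {
    my ($qname, $qclass, $qtype, $peerhost) = @_;

    # Queries matching @LOCAL_NETS go to the internal servers;
    # everything else is forwarded to the public (OpenDNS) servers.
    my $res = (grep { $qname =~ $_ } @LOCAL_NETS) ? $local_res : $public_res;

    my $answer = $res->send($qname, $qtype, $qclass);
    return ('SERVFAIL', [], [], []) unless $answer;
    return ($answer->header->rcode, [ $answer->answer ], [], []);
}

# Listening on port 53 requires root. Change LocalAddr to the host's real
# IP address if you want to serve other machines on your network.
my $ns = Net::DNS::Nameserver->new(
    LocalAddr    => '127.0.0.1',
    LocalPort    => 53,
    ReplyHandler => \&reply_handler,
    Verbose      => 0,
) or die "Couldn't create nameserver object\n";

$ns->main_loop;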

Anyway, I hope you find it useful. It only does simple forward and reverse lookups (A and PTR records), so it will fail if you ask for any other record type. If those are really necessary, though, let me know and I'll consider adding them. If you decide to use this at your site, please drop me a comment to let me know (if you can).

Thursday, May 10, 2007

Forensics in the Enterprise

I had the opportunity last night to attend a demo of Guidance Software's EnCase Enterprise product. I already use the standalone version of their product, EnCase Forensic, and the Enterprise edition looks like an interesting extension.

EnCase Forensic runs on a single Windows workstation and allows you to image suspect hard drives and conduct detailed analysis on their contents. It's got a number of handy features built in, like the ability to do keyword searches, extract web viewing history and identify email messages. Pretty nice, and it makes most common forensic tasks a breeze.

The Enterprise edition builds on that by decoupling the data storage from the analysis workstation, providing a SAFE (I forget what this acronym stands for), which is basically a data warehouse for disk images and other extracted evidence. That's OK, I guess, but the really interesting part is the "servlet", their word for an agent that you install on the systems you manage.

The idea is that the servlet is installed on all the computers you manage, be they Windows, Linux, OS X, Solaris or whatever. They claim a very small footprint, about 400k or so. Using the analysis workstation, you can direct the servlets to do things like capture snapshots of volatile memory or process data, browse for specific information or even image the disk remotely, all supposedly without disrupting the user.

As a forensic analysis tool, this makes a lot of sense to me, but there's something I find even more interesting. The Enterprise edition is apparently a platform on which Guidance is building additional tools for Incident Response and Information Assurance. For example, their Automated Incident Response Suite (AIRS) ties in with Snort, ArcSight and other security tools and can use the agent to trigger data collection based on security events on your network.

For example, if Snort detects an exploit against one of your web servers, AIRS can automatically initiate collection of memory and process data every thirty seconds for the next few minutes. When an analyst comes by later to evaluate the Snort alert, they can view the snapshots to see if the exploit was successful and to help them track the attacker's actions on the system.

I have to admit that this falls squarely into the category of "I'll believe it when I see it working on my network", but I'm intrigued. I think this approach shows promise, and is a natural complement to the Network Security Monitoring model. Being able to tie host-based data into the NSM mechanism could be quite useful.

Other interesting products that use the servlet are their eDiscovery and their Information Assurance Suite. eDiscovery is pretty much just what it sounds like: it helps you discover specific information on your network, perhaps in response to a subpoena. The IA Suite is similar, but also includes configuration control features and a module designed to help government and military agencies recover from classified data spillage.

Ok, by this point it probably sounds like I'm shilling for the vendor, but really I'm just interested in some of the ideas behind their products, specifically AIRS. I hope to learn more soon, and maybe to be able to experiment with some of these features.

Tuesday, May 08, 2007

Helper script for email monitoring

During an investigation, it may sometimes be necessary to monitor email traffic to and from a specific address, or to capture emails which contain specific keywords or expressions. In the past, I've usually used Dug Song's Dsniff package for this, specifically the mailsnarf tool.

Mailsnarf sniffs a network interface looking for connections using the SMTP protocol. When it finds one, it extracts the email message and outputs it to stdout in the Unix mbox format. You can filter the traffic with a BPF filter to restrict the number of packets it needs to process, and you can provide a regular expression to make sure it only dumps emails you're interested in. All in all, it's a great tool.
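For example, a single mailsnarf instance that watches only SMTP traffic on one interface and saves messages mentioning a particular address might be invoked roughly like this (the interface name and address are made up for illustration):

mailsnarf -i eth1 'suspect@example\.com' 'tcp port 25' > /var/tmp/suspect.mbox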

As a monitoring solution, though, mailsnarf is a bit lacking. If it dies, your monitoring is over, and if you have more than one thing to monitor, you either intermingle them into the same mbox or try to manage multiple mailsnarf instances. It can be done, but it's fairly clumsy.

That's why I've decided to release my home-brewed wrapper script, mailmon. It provides a config file so that multiple email monitoring sessions can be configured and managed at once. It also checks up on the individual mailsnarf processes under its control, restarting them automatically if they die.

So far I've tested it on RHEL4, but it should work anywhere you have perl and dsniff available. The startup script is Linux-specific, so *BSD users may have to provide their own, but it's really pretty simple.

You can download mailmon from the downloads page. If you give it a try, I'd love to hear about your experiences.

Update 2007-05-08 09:58: I can't believe I forgot to mention this, but the mailmon package also contains a patch to mailsnarf. I found that some mail servers can transition into status 354 ("send me the DATA") without the client ever issuing the DATA command. Mailsnarf wasn't picking up on this, so it was missing a lot of the messages. My patch fixes this, and it can be used independently of the rest of the monitoring package. In fact, I recommend the patch to all mailsnarf users.

Friday, April 27, 2007

Log Management Summit Wrap-Up

As I mentioned before, I had the opportunity to speak about OSSEC at the SANS Log Management Summit 2007.

In case you've never been to one, these SANS summits are multi-day events filled with short user-generated case studies. In this case, the log summit was scheduled alongside the Mobile Encryption Summit, and the attendees were free to pop back-and-forth, which I did.

The conference opened Monday with a keynote by SANS' CEO Stephen Northcutt. Somehow, despite my four SANS certs, three stints as a local mentor and various other dealings with SANS, I'd never heard him talk. He's quite an engaging speaker. This time around, he was talking about SANS' expert predictions for near-future trends in cybersecurity. There were no shockers in the presentation, but it was a good overview of where smart people think things are headed in the next 12 months.

My favorite presentation on Monday, though, was Chris Brenton's talk, entitled Compliance Reporting - The Top Five Most Important Reports and Why. As you know, I've been doing a lot of work recently on NSM reports, and although log reporting isn't quite the same, the types of things an analyst looks for are very similar. I got some great ideas which may show up in my Sguil reports soon.

On Tuesday morning, I gave my own presentation, "How to Save $45k (and Look Great Doing it)." This is the story of how we bought a commercial SEM product, only to find that it didn't really do what we wanted, and replaced it with the free OSSEC. Bad on us for not having our ducks in a row at first, I know. To be totally honest, it wasn't so easy to get up in front of 100 people and say, "You know, we made this really expensive mistake", but sometimes you have to sacrifice for the greater good. ;-)

My favorite talk on Tuesday, though, was Mike Poor's Network Early Warning Systems: Mining Better Quality Data from Your Logging Systems. In this talk, he presented a bunch of free tools to help you keep an eye out for storms on the horizon (to use the ISC's metaphor). Some of the tools were more like websites (Dshield.org, for example), and some were software. He even provided several detailed slides showing OSSEC alerts, which was a nice complement to my own presentation.

The level of vendor spending on an event is in direct proportion to the number of security managers and CSOs in the crowd. Judging by the vendor-hosted lunches and hospitality suites, there were a bunch of them in San Jose this week. I attended a couple of vendor lunches, and I have to say that I was quite impressed with Dr. Anton Chuvakin's Brewing up the Best for Your Log Management Approach on Tuesday. He's the Director of Product Management at LogLogic, so I was expecting a bit more of a marketing pitch, but in reality he delivered a very balanced and well-thought-out presentation exploring the pros and cons of buying a log management system off-the-shelf or creating your own with open source or custom-developed tools. I think it was the most popular of all the lunch sessions that day, too. I came a few minutes late and had to sit on the floor for a while before additional tables and chairs arrived!

Overall, I think the summit was a good experience for most of the attendees. Many of the talks were more security-management oriented, and I cannot tell you how many times speakers said completely obvious things like, "You need to get buy-in!" or "Compliance issues can kill you!". Still, there was real value in having the ability to sit down with someone who's already gone through this process and learn from their successes (or in my case, mistakes).

SANS has promised to post the slides from each of the talks online. Once I find them, I'll link to them here. I'm not sure whether that will include Dr. Chuvakin's talk, but I hope it will be available at some point as well.

Update 2007-04-30 09:33: SANS has posted the presentations on their site. This bundle includes all the slides that were printed in the conference notebook, including the one from the famous Mr. Blank Tab.

Update 2007-04-30 10:35: Daniel Cid points out that the slides for the concurrent Encryption Summit are also available.

Wednesday, April 18, 2007

Generating Sguil Reports

I've been running Sguil for a few years now, and while it's great for interactive analyst use, one of its main drawbacks is the lack of a sophisticated reporting tool. The database of alerts and session information is Sguil's biggest asset, and there is a lot of information lurking there, just waiting for the right query to come along and bring it to light. Sguil has a few rudimentary reports built in, but it lacks the ability to create charts and graphs, to perform complex pre- or post-processing, or to schedule reports to be generated and distributed automatically.

To be honest, many Sguil analysts feel the need for more sophisticated reporting. Paul Halliday's excellent Squert package fills part of this void, providing a nice LAMP platform for interactive reports based on Sguil alert information. I use it, and it's great for providing some on-the-fly exploration of my recent alerts.

I wanted something a bit more flexible, though. A reporting package that could access anything in the database, not just alerts, and allow users to generate and share their own reports with other Sguil analysts. I have recently been doing a lot of work with the BIRT package, an open source reporting platform built on Tomcat and Eclipse.

BIRT has a lot of nice features, including the ability to provide sophisticated charting and graphing. The report design files can be distributed to other analysts who can then load them into their own BIRT servers and start generating new types of reports. It even separates the reporting engine from the output format, so the same report can generate HTML, PDF, DOC or many other types of output. Best of all, you can totally automate the reporting process and just have them show up in your inbox each morning, ready for your perusal.
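To give you a flavor of what sits behind one of these reports: a BIRT report design is basically a SQL data set plus some layout, so a simple "top signatures this week" report could be driven by a query along these lines (column names are from the Sguil event table as I remember it; double-check them against your own schema):

SELECT signature, COUNT(*) AS hits
  FROM event
 WHERE timestamp >= CURDATE() - INTERVAL 7 DAY
 GROUP BY signature
 ORDER BY hits DESC
 LIMIT 20;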

If this all sounds good to you, check out a sample report, then read my Sguil Reports with BIRT HOWTO for more information.

If you decide to try this, please post a comment. I'd love to hear your thoughts, experiences and suggestions.

Big thanks go to John Ward for getting me started with BIRT and helping me through some of the tricky parts.

Thursday, April 12, 2007

SANS Log Management Summit

I've been invited to speak at the SANS Log Management Summit 2007 later this month in San Jose. I'll be presenting a short user case study on OSSEC. If you're there, find me and say hi! Maybe we can grab some beers or something.

Update 2007-04-19 15:55: According to the agenda, I'll be speaking on Tuesday morning as part of a panel entitled, "Practical Solutions for Implementing Log Management".

Cool network visualization

I highly recommend that you take a look at this. You just need to see it to believe it. I find their "network activity as videogame" metaphor oddly appealing. It's almost like it's translating your network activity into the story of an epic space battle. Since humans are conditioned by long experience to assimilate and relate well to stories, this could be quite an effective approach. Joseph Campbell as NSM analyst.

I'd really like to be able to run this against my own network to see how effective it would be in everyday use. It's a bit hard to tell based on the clip they've put out.

Monday, March 26, 2007

"Online Ads Unsafe". No Duh.

I've written about this before, so it's not exactly new news, but Computerworld is reporting that nearly 80% of all malware delivered to your browser is delivered via ads embedded in web sites, rather than in the content of the sites themselves.

This makes perfect sense to me, since most of the ad networks are pretty incestuous, and an ad placed on one network could easily be distributed by other advertising partners. With sometimes several layers of abstraction between advertiser and advertisee, it can be fairly difficult to trace a malicious ad back to its source.

The article is based on Finjan Software's latest Web Security Trends Report. You can download the complete report here (registration required), or see the summarized press release here.

Thursday, March 22, 2007

Fun Crypto Toy

I took some time off earlier this month to have a family vacation in London. Overall, it was pretty fun, but we wanted to get out of the city for a bit, so we took the train to Bletchley Park. BP is where the British government ran their super-secret cryptography operations against the German ciphers during WWII.

Of course, the most famous of these ciphers is the Enigma machine, of which there were actually several different types in use in the various branches of the German military. I don't want to bore you with all the details of Enigma, especially since they're probably familiar to many of you. What I wanted to tell you about was this fun toy I bought, the Pocket Enigma cipher machine.

Housed in a CD jewel case and made entirely of paper, it emulates a simple one-rotor system (with a choice of two possible rotors). There's no plugboard or anything complex like that, so it's really easy to understand, and it's a lot of fun to send seekrit messages to your buddies, in a flashlight-under-the-covers kind of way.

Here's my contribution to the fun. See if you can decode the following message, enciphered on my very own Pocket Enigma. The first person to post a correct solution in the comments wins... umm... the people's ovation and fame forever!


II B
X VJFEI OJAXQ PBUXQ DTFVH GFAKQ UQGES IOGZW

Wednesday, March 07, 2007

More on Tagging

You may remember my previous post about IP tagging, in which I described my idea for a Web 2.0-ish tagging system for NSM analysts. Well, geek00L pointed out that I'm not the first person to think of this idea. It seems that a couple of bright guys at Georgia Tech had the idea last year, and implemented it in the form of a tool called FlowTag. I recommend their paper for some real-world examples of how a tagging system can enhance analyst productivity.

The dogs must be crazy...

My friend Shirkdog offers this post about doing NSM without the backend database that solutions like Sguil offer. Personally, I'm not a fan of using grep for my core analysis workflow, but I am a fan of doing whatever gets the job done, within the limits of the resources available to you.

Thursday, February 08, 2007

IP Tagging

I just read Godfadda's blog entry about a prototype system for tagging IP addresses in Splunk. His system analyzes IP addresses as they're about to be inserted into his log/event tracking system and cross references them with several databases in order to generate additional tags to provide additional context to the event.

Right now, his system deals with geography ("which branch office is this in?") and system role ("Is this an admin's workstation, a server or a public PC?"). I really like this idea, as it provides valuable context when evaluating events.

In fact, I've done a lot of thinking recently about how to add more context to NSM information, but it wasn't until I read this article that I realized that what I was looking for would probably best be implemented as a tagging system. What if Sguil were to incorporate tagging? Well, we'd first have to figure out what to tag. I'd like to be able to tag several types of objects:


  1. IP addresses
  2. Ports
  3. Events
  4. Session records
  5. Packet data for specific flows (probably would treat the pcap file and any generated transcript as a single taggable object)

As for the tags themselves, the system should automatically generate tags based on some criteria, just as in Godfadda's system. Maybe it would automatically tag everything in my exposed web server network as internet,webserver, for example, or maybe it could correlate my own IPs to an asset tracking system to identify their function and/or location.

But here's the part I think would be even more useful: I'd like to have the analyst be able to tag things on-the-fly, and later search on those tags to find related information. For example, if someone has broken into my web server, I could tag the original IDS alert(s) with an incident or case number (e.g., "#287973"). Perhaps this would also automatically tag the attacker's IP with the same number. As I continue to research the incident, I will probably perform SANCP searches, examine the full packet data and generate transcripts. I could tag each of the interesting events with the same ID. At the end of the investigation, I could just do a search for all objects tagged with #287973, order them by date & time, and presto! The technical portion of my report is almost written for me! This is quite similar to other forensic analysis tools (EnCase, for example) that allow you to "bookmark" interesting pieces of information and generate the report from the bookmark list.
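None of this exists in Sguil today, but just to make the idea concrete, the storage side wouldn't need to be anything fancier than one extra table in the Sguil database. Something like the following (purely hypothetical; the names are invented for illustration):

CREATE TABLE tag (
  tag         VARCHAR(64) NOT NULL,                    -- e.g. '#287973' or 'webserver'
  object_type ENUM('ip','port','event','session','pcap') NOT NULL,
  object_id   VARCHAR(128) NOT NULL,                   -- address, event id, pcap filename, ...
  analyst     VARCHAR(16),
  added       DATETIME,
  INDEX (tag),
  INDEX (object_type, object_id)
);

-- "Show me everything related to incident #287973, in order":
SELECT object_type, object_id, analyst, added
  FROM tag
 WHERE tag = '#287973'
 ORDER BY added;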

To go a bit further, what if the same attacking IP came back six months after the above incident, this time with an FTP buffer overflow exploit? You might not remember the address as the origin of a previous incident, especially if you have a large operation and the original incident was logged by a different analyst. However, if the console shows that the address was tagged as being part of a specific incident, you'll know right away to treat it with more suspicion than you might otherwise have done.

To be honest, these are just some of my first ideas on the power of tags; the real power could come as we consider more elaborate scenarios. What if you could tag any item more than once? By associating multiple incident tags with an item, you just might uncover relationships that you didn't realize existed. It doesn't take much to imagine a scenario where you build a chain of related tags that implies an association between two very different things, perhaps by creating a series of Kevin Bacon links between addresses and/or events.

So, will any of this show up in Sguil? Probably not any time soon. Maybe if I can convince Bamm that I'm not insane, it'll find its way onto a feature wish list. Or maybe another project or product will beat us to the punch (it is the era of Web 2.0, after all, and tagging is like breathing to some folks nowadays). But I do fantasize about it, and I live in hope.

NSMWiki update

I spent some time this morning reconfiguring the NSMWiki to provide secure login and password management pages. So first off, now you know that.

The real point of this post, though, is that I had a hard time finding good examples that applied to my specific situation, so I'm documenting the process here. I hope this will make things easier for the next person.

Here's what I started with:


  • A shared webhost (Linux, Apache & mod_rewrite)
  • An SSL and a non-SSL server serving identical content
  • MediaWiki 1.9.0

I wanted to let mod_rewrite do the hard work, so I did some web searches on the topic. Unfortunately, I always came up with pages that had the same two problems:

  1. They assumed that I had control of the server-wide configuration files for Apache, but I use a shared web host, so I needed to use a .htaccess file to set the rewrite settings on a per-directory basis.
  2. The examples only provided protection for the login page, but none of the other pages that deal with password information.

The first problem is that mod_rewrite sees different URLs when it's configured on a per-directory basis than when it's configured for the entire server. The change is small but important: in a server-wide configuration, URLs begin with the "/" character; when run per-directory, they don't. Since I started from online examples written for server-wide configs, this tripped me up until I figured it out.
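In other words, a rule that looks like the first line below in httpd.conf has to be written like the second in a per-directory .htaccess; the "..." just stands in for the rest of the rule:

# server-wide configuration (httpd.conf):
RewriteRule ^/nsmwiki/index.php ...

# per-directory configuration (.htaccess) -- note the missing leading slash:
RewriteRule nsmwiki/index.php ...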

The second problem is that none of the examples cared about the other MediaWiki password management options, just the login page. Oops!

Fortunately, the solution to both problems was easy, with a little tinkering. Here's a .htaccess file that will do the right thing. Drop it in your MediaWiki directory, edit the RewriteRules to reflect the correct path in your wiki URLs, and you should be good to go.

RewriteEngine on
# Any Wiki page that uses the Special:Userlogin page (account login, creation),
# the Special:Resetpass page (password reset) or Special:Preferences (where normal
# password change operations are done, among other things) should get
# redirected to the SSL server. Note the check to make sure we're not ALREADY
# using the SSL server, to avoid an infinite redirection loop.
RewriteCond %{QUERY_STRING} ^title=Special:Userlogin [OR]
RewriteCond %{QUERY_STRING} ^title=Special:Resetpass [OR]
RewriteCond %{QUERY_STRING} ^title=Special:Preferences
RewriteCond %{SERVER_PORT} !443
RewriteRule nsmwiki/index.php https://%{SERVER_NAME}/nsmwiki/index.php?%{QUERY_STRING} [L,R]

# Any Wiki page that's NOT one of the specific SSL pages should not be using
# the SSL server. This rule redirects everything else on the SSL server
# back to the non-SSL server.
RewriteCond %{QUERY_STRING} !^title=Special:Userlogin
RewriteCond %{QUERY_STRING} !^title=Special:Resetpass
RewriteCond %{QUERY_STRING} !^title=Special:Preferences
RewriteCond %{SERVER_PORT} 443
RewriteRule nsmwiki/index.php http://%{SERVER_NAME}/nsmwiki/index.php?%{QUERY_STRING} [L,R]

As I said, this was my first foray into mod_rewrite, and I'm pretty happy with the functionality it gives me. I know there are some gurus out there who are much more familiar with mod_rewrite and/or MediaWiki, though, so if you can suggest any improvements, please leave a comment.

Update 2007-06-17 17:25: I upgraded the wiki software to 1.10.0 the other day, and today I got an email telling me that no one could log in. I did a little poking around, and sure enough, they couldn't. It turns out that you should also set the following line in your LocalSettings.php file:

$wgCookieSecure = false;

Because the login page was using the SSL server, MediaWiki was issuing "secure" cookies (i.e., cookies that can only be sent over SSL). Only the login and a few other pages use SSL, though, so most of the rest of the wiki session simply wasn't seeing the cookies. The setting existed in the old software too, but I guess it wasn't being used.

Monday, January 22, 2007

In which I tell you where to stick it...

You think you're funny, don't you?

I hear this a lot, especially from my family, but now I can honestly say it's true. I'm funny.

What, you don't believe me? F-Secure says I am, so it must be true! MUST!

Anyway, thanks to everyone who voted for my contribution. I can't wait to get my stickers!

Thursday, January 11, 2007

A new location for the new year

It's 2007, and since I just love to tinker with things that aren't broken, I've decided to move this blog to a new address, blog.vorant.com. The old blog and feed URLs will still work, they'll just redirect to the new location. Still, please let me know if you have any trouble with it, and thanks for reading!

Wednesday, January 10, 2007

I know the feeling...

Sometimes, when I bust out with just the right one liner to find the critical data, I feel like this.

Tuesday, January 09, 2007

The Security Journal

The Security Journal is an online quarterly newsletter put out by Security Horizon. Some of the topics in recent issues include secure programming, HDES, wireless video security and the DefCon CTF competition. Check it out!

Wednesday, January 03, 2007

Locate new phishing sites before they're used against you

I just read this article that shows one way to locate new potential phishing sites by tracking domain registrations. Ok, so it's not a very high-tech way, but it's probably effective (except for the sites that just try to use IP addresses).

The service listed in the article requires a subscription fee. Does anyone know of any free equivalents?