logstash is ready for use

I've talked for a while about logging problems. Parsing, storing, searching, reacting.

Today is the first release of logstash.

What is logstash? Free and open source log/event management and search. It's designed to scale, to help you more easily manage your logs, and to give you a great way to search them for debugging, postmortems, and other analysis.

You can read more about it on logstash.googlecode.com.

Random thoughts: Log analytics with open source

Over the past few years, I've tinkered on and off with various projects to help me do log analysis, data aggregation, graphing, etc. Recently, I had a discussion with a coworker about alternatives to Splunk (specifically, free ones). Turns out there aren't any projects, as far as I can tell, that provide most of what Splunk does.

With all the awesome open source projects available to date that focus on tight features and perform well, how much work would it be to tie them together and produce a tool that's able to compete with Splunk?

I hooked grok and Lucene together last night to parse and index logs, and the results were pretty slick. I could query for any keyword I wanted, etc. If I wanted logs involving specific fields like IP address, apache response code, etc, I could do it. Grok does the hard part of eating a log line and outputting key:value pairs while Lucene does the hard part of indexing field values and such.

Indexing logs in Lucene required using it in a somewhat strange way: We treat every log entry as a unique document. This way, each log line can have several key:value pairs (fields) associated with it, and searching becomes easy.
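
To make that model concrete, here's a minimal sketch in Python. This isn't Lucene's API; it's a toy inverted index that shows the one-document-per-log-line idea, with a trivial stand-in for grok's parsing:

# Toy sketch: each log line becomes a "document" of key:value fields,
# and an inverted index maps field=value to the documents containing it.
# Lucene does the real indexing; this only illustrates the data model.
from collections import defaultdict

def parse(line):
    # Stand-in for grok: assumes a trivial "key=value key=value" format.
    return dict(kv.split("=", 1) for kv in line.split())

index = defaultdict(set)   # "field=value" -> set of document ids
docs = []                  # document id -> original log line

for line in ["ip=1.2.3.4 response=200", "ip=1.2.3.4 response=404"]:
    doc_id = len(docs)
    docs.append(line)
    for field, value in parse(line).items():
        index["%s=%s" % (field, value)].add(doc_id)

# Query: every log line with apache response code 404.
print([docs[i] for i in index["response=404"]])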

  • Log parsing: grok and other tools have this done.
  • Log indexing: lucene
  • On-demand graph tools: python matplotlib, javascript flot, etc
  • Alerting: nagios
  • Fancy web interface: Ruby on Rails, or whatever

Indexing non-log data, such as SNMP queries, only requires that you feed Lucene the right data.

The hard part, from an implementation perspective, is only as hard as taking output (logs, data, whatever) and feeding your indexer with the fields you want to store.

Parsing all kinds of log formats isn't a trivial task, since different log formats will require new pattern matching. However, grok's automatic pattern discovery could be used to help fill in gaps where you may not yet have defined patterns.

Given time and energy, I might pursue this project.

fancydb performance

Various trials with basically the same input set: 2.5 million row entries, maximum 1 entry per second. The insertion rate drops by 60% if you add rule evaluations, which is an unfortunate performance loss. I'll work on making rules less invasive. Unfortunately, Python threads will never run on two processors at once, so I can't gain significant performance by sharding rule processing across separate threads; most unfortunate. Maybe fork+IPC is necessary here, but I'm somewhat loath to do that.

The slowdown when rules are present is due to the record keeping done to note that a rule should be evaluated again (rule evaluations are queued). Basically, the 'is this row being watched by a rule' loop is the slowdown. I'll try attacking this first.
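
One plausible shape for that fix: precompute a map from row name to the rules watching it, so the per-insert check becomes a dict lookup instead of a scan over every rule. A rough Python sketch (hypothetical names, not fancydb's actual code):

# Hypothetical sketch: replace "loop over all rules per insert" with a
# precomputed row-name -> watching-rules map; the hot path is one lookup.
from collections import defaultdict

class RuleIndex:
    def __init__(self):
        self.watchers = defaultdict(list)  # row name -> rules watching it

    def add_rule(self, rule):
        # rule.source_row is assumed: the row this rule watches.
        self.watchers[rule.source_row].append(rule)

    def on_insert(self, row_name, evaluation_queue):
        # O(1) average-case lookup instead of scanning every rule.
        for rule in self.watchers.get(row_name, ()):
            evaluation_queue.append(rule)  # queue rule for re-evaluation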

With 2 rules (unoptimized rules):
    hits.minute => hits.mean.1hour @ 60*60
    hits.minute => hits.mean.1day @ 60*60*24
  insertion rate = 7600/sec

With 2 rules (optimized chaining):
    hits.minute => hits.mean.1hour @ 60*60
    hits.mean.1hour => hits.mean.1day @ 60*60*24
  insertion rate = 12280/sec

With 9 rules (optimized chaining):
  insertion rate: 10000/sec

With 0 rules:
  trial 1: 40000/sec
  trial 2: 26700/sec

Strange patterns in my apache logs

A few weeks ago, ajaxrain.com linked to my jquery puffer example. I randomly peruse my apache logs looking for new inbound link activity or new search activity.

I saw lots of hits with a referrer of ajaxrain.com, but they were for pages not linked from that site. I thought this was someone's web browser with prefetch gone crazy, but the apache logs tell me what it really is. An example log entry:

203.190.116.249 - - [07/Aug/2007:15:05:01 -0400] "GET /projects/xboxproxy HTTP/1.0" 301 467 "http://www.ajaxrain.com/" "Mozilla/4.5 (compatible; HTTrack 3.0x; Windows 98)"
HTTrack? Windows 98? Mozilla 4.5? Ooookay. Sure.

Turns out HTTrack is a "website copier" tool. It isn't very nice to the webserver, as it requests many pages per second.

Splitting large logs

I've started using grok more. I don't have any regular log processing to do, so any usage is ad hoc and unrelated to previous runs. As such, I would really love it if grok could process faster. Rather than concentrating on the potential dead end of making grok itself faster (complex regex patterns are cpu intensive), I think it'll be easier to make it parallelizable. The first step toward parallelizing grok is being able to split logs into small pieces, quickly.

Both FreeBSD's and Linux's (GNU, really) split(1) tools are fairly slow when you want to split by line count. Splitting by byte count is (or should be) faster, but that will break logs because the byte boundary will almost never align with a log-line boundary.

I'd prefer both: the speed of byte-size splitting and the sanity of line-based splitting. To that end, I wrote a small tool to do just that. Yes, there's probably a tool already available that does exactly what I want, but this was only 100 lines of C, so it was quick to write. GNU split's --line-bytes option is mostly what I want, but it's still crap slow.
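
The algorithm is trivial: grab roughly a chunk's worth of bytes, then extend to the next newline. Here's the same idea sketched in Python (the real fastsplit is C):

# Sketch of fastsplit's approach: read chunks of roughly the target byte
# size, then finish the partial last line so no log line is ever split.
import sys

def fastsplit(path, chunk_bytes=20 * 1024 * 1024, prefix="split.fast."):
    with open(path, "rb") as f:
        n = 0
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            if not chunk.endswith(b"\n"):
                chunk += f.readline()  # extend to the next line boundary
            with open("%s%05d" % (prefix, n), "wb") as out:
                out.write(chunk)
            n += 1

if __name__ == "__main__":
    fastsplit(sys.argv[1])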

download fastsplit

Here's a comparison between my tool and GNU split, run on the fastest workstation I have access to. My tool runs 4 times faster than GNU split for this task.

# Source file is 382 megs, which is tiny.
% du -hs access
382M    access

# (Fast) Split into approximately 20meg chunks while preserving lines.
% time ./fastsplit -b 20971520 -p ./split.fast. access 

real    0m1.260s
user    0m0.018s
sys     0m1.242s

# (GNU) Split into 87000-line chunks, no guarantee on size.
% time split -l 87000 access split.normal.

real    0m4.943s
user    0m0.395s
sys     0m2.440s

# (GNU) Split into 20mb (max) chunks, preserve lines.
% time split --line-bytes 20971520 access split.normal_bytes.

real    0m4.391s
user    0m0.001s
sys     0m1.779s
You can see that the actual 'system' time is somewhat close (mine wins by 0.4s), but 'real' time is much longer for GNU split. My solution is really good if you want to quickly split logs for parallel processing and you don't care exactly how many lines each piece has, so much as that you get near N-sized chunks.

What's the output look like?

% wc -l split.fast.*
 86140 split.fast.00000
 81143 split.fast.00001
 92725 split.fast.00002
...
 91067 split.fast.00016
 86308 split.fast.00017
 84533 split.fast.00018
  1654604 total
% wc -l split.normal.*
 87000 split.normal.aa
 87000 split.normal.ab
 87000 split.normal.ac
...
 87000 split.normal.ar
 87000 split.normal.as
  1604 split.normal.at
  1654604 total
% wc -l split.normal_bytes.*
 85973 split.normal_bytes.aa
 80791 split.normal_bytes.ab
 92363 split.normal_bytes.ac
...
 86141 split.normal_bytes.ar
 85665 split.normal_bytes.as
  3999 split.normal_bytes.at
  1654604 total
% du -hs split.fast.*
21M     split.fast.00000
21M     split.fast.00001
21M     split.fast.00002
...
21M     split.fast.00016
21M     split.fast.00017
20M     split.fast.00018
% du -hs split.normal.*
21M     split.normal.aa
22M     split.normal.ab
19M     split.normal.ac
...
21M     split.normal.ar
21M     split.normal.as
352K    split.normal.at
% du -hs split.normal_bytes.*
21M     split.normal_bytes.aa
21M     split.normal_bytes.ab
21M     split.normal_bytes.ac
...
21M     split.normal_bytes.ar
21M     split.normal_bytes.as
896K    split.normal_bytes.at

grok for apache log analysis

I recently made a small change to my rss and atom feeds: I added a tracker image to the content. It looks like this:
<img src="/images/spacer.gif?path_of_a_blog_entry">
Any time someone views a post in an RSS feed, that image is loaded and the client browser happily reports the referrer url and I get to track you! Wee.

This is an effort to find out how many people actually read my blog. Now that I can track viewership of the rss/atom feeds, how do I go about analyzing it? grok to the rescue:

% grep 'spacer.gif\?' accesslog \
   | perl grok -m '%APACHELOG%' -r '%IP% %QUOTEDSTRING:REFERRER%' \
   | sort | uniq -c | sort -n
<IP addresses blotted out, only a few entries shown>
  1 XX.XX.XX.XX "http://www.google.com/reader/view/"
  9 XX.XX.XX.XX "http://whack.livejournal.com/friends"
  10 XX.XX.XXX.XXX "http://www.bloglines.com/myblogs_display?sub=44737984&site=6302113"
  10 XX.XXX.XXX.XX "http://www.semicomplete.com/?flav=rss20"
  27 XX.XXX.XXX.XX "http://whack.livejournal.com/friends"
Each line represents a unique viewer, and tells me what the reader was using to view the feed.

Yay grok.

Grok plans and ideas

I'm almost done and graduating from RIT, which is why I haven't posted anything recently. Finally done with this school. Wee!

Some noise woke me up from my pleasant slumber, and I can't seem to get back to sleep, so I might as well do some thinking.

I think grok's config syntax is far too cluttered. I would much rather simplify it, somehow. Something like:

file /var/log/auth.log: syslog
{blockuser, t=3, i=60} Invalid user %USER% from %IP%

file /var/log/messages: syslog
{tracksudo, prog=su} BAD SU %USER:F% to %USER:T% on %TTY%

reaction blockuser "pfctl -t naughty -T add %IP%"
reaction tracksudo "echo '%USER:F% failed su. (to %USER:T%)' >> /var/log/su.log"
This seems like less writing than the existing version. Less writing is good. A more relaxed syntax would be nice as well, such as not requiring quotes around filenames.

This new syntax also splits reaction definitions from pattern matches, allowing you to call reactions from different files and other matches. I'll be keeping the perlblock and other features that reactions have, because they are quite useful.

I've also come up with a simple solution for reactions that execute shell commands. The current version of grok will run system("your command") every time a shell reaction fires. This is tragically suboptimal due to shell startup overhead. The simple solution is to run "/bin/sh -" so that there is already a shell accepting standard input and waiting for my commands; then I simply write each shell command to that process.
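
grok itself is perl, but the trick is easy to sketch in Python: keep one shell child alive and write each reaction's command to its stdin, paying fork+exec for the shell only once:

# Sketch of the persistent-shell trick: start /bin/sh once, then feed it
# one command per line instead of calling system() for every reaction.
import subprocess

shell = subprocess.Popen(["/bin/sh"], stdin=subprocess.PIPE)

def react(command):
    # Each write is one shell command executed by the long-lived sh.
    shell.stdin.write((command + "\n").encode())
    shell.stdin.flush()

for i in range(1000):
    react("echo reaction %d >> /tmp/reactions.log" % i)

shell.stdin.close()  # sh exits when stdin closes
shell.wait()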

I wrote a short program to test the speed of using system() vs printing to the input of a shell. You can view the testsh.pl script and the profiled output.

An abbreviated form of the profiled output follows:

%Time    Sec.     #calls   sec/call  F  name
92.24    1.5127     1000   0.001513     main::sys
 2.04    0.0335     1000   0.000033     main::sh
sys() calls system(), whereas sh() simply sends data to an existing shell process. The results speak for themselves. Even though this example only runs "echo", the startup time of the shell is obviously a huge factor in runtime; the difference is incredible. I am definitely implementing this in the next version of grok. I've already run into many cases where I'm processing extremely large logs post hoc and end up using perl block reactions to speed up execution. This shell execution speed-up will help make grok even faster, and it can always use more speed.

Still no time to work on projects right now, perhaps soon! I'm moving to California in less than a week, so this will have to wait until after the move.

Parallelization with /bin/sh

I have 89 log files. The average file size is 100ish megs. I want to parse all of the logs into something else useful. Processing 9.1 gigs of logs is not my idea of a good time, nor is it a good application for a single CPU to handle. Let's parallelize it.

I abuse /bin/sh's ability to background processes and wait for children to finish. I have a script that takes a pool of available computers and sends tasks to them. The tasks are just "process this apache log", but the speed increase of parallelization over a single process is incredible, and it's very simple to do in the shell.

The script to perform this parallelization is here: parallelize.sh

I define a list of hosts in the script and pass a list of logs to process on the command line. The host list is repeated until it is at least as long as the list of logs. I then pick a log and send it off to a server with ssh, which calls a script that writes to stdout; the output is captured to a file named by the hostname and the pid.
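
The real thing is plain sh (see parallelize.sh above), but the same idea sketched in Python looks like this; the host names and the per-host "process-log" script are hypothetical:

# Sketch of parallelize.sh's idea: round-robin logs across a host pool,
# run the ssh jobs concurrently, and capture each job's stdout to a file.
import itertools
import subprocess

hosts = ["host1", "host2", "host3"]          # hypothetical host pool
logs = ["access.1", "access.2", "access.3"]  # logs to process

jobs = []
for host, log in zip(itertools.cycle(hosts), logs):
    out = open("out.%s.%s" % (host, log), "w")
    # "process-log" stands in for whatever script parses the log remotely.
    jobs.append(subprocess.Popen(["ssh", host, "process-log " + log],
                                 stdout=out))

for job in jobs:
    job.wait()  # like sh's `wait`: block until all children finish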

I didn't run it single-process in full to compare running times; however, parallel execution gets *much* farther in 10 minutes than a single process does. Sweet :)

Some of the log files are *enormous*, taking up a gig alone. I'm experimenting with split(1) to break these into pieces of 100,000 lines each. The problem is that all of the tasks finish except for the 4 processes handling the 4 one-gig log files. Splitting makes the individual jobs smaller, letting us process them faster because the workload is more even across processes.

So, a simple application benefiting from parallelization is solved by using simple, standard tools. Sexy.

RRDTool to graph log-originating data.

I need to relearn rrdtool, again, for this sysadmin time machine project. Today's efforts were spent testing for features I hoped were in RRDTool. So far, my feature needs are met :)

Take something simple, like webserver logs. Let's graph the hits.

Create the RRD:

rrdtool create webhits.rrd --start 1128626000 -s 60 \
   DS:hits:GAUGE:120:0:U RRA:AVERAGE:.5:5:600000 \
   RRA:AVERAGE:.5:30:602938 RRA:AVERAGE:.5:60:301469 \
   RRA:AVERAGE:.5:240:75367 RRA:AVERAGE:.5:1440:12561
My logs start *way* back in November of last year, so I create the rrd with a start date of sometime in November. The step is 60, so it expects data every minute. I then specify one data source, hits, which is a gauge (here, a rate of hits per minute) ranging from 0 to infinity (U). The rest of the command defines RRAs, which control how data is stored. The first one says: average every 5 samples, and store at most 600,000 of those averaged rows.
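
As a sanity check on those RRAs, each archive's coverage is just step * steps-per-row * rows; a quick calculation:

# Quick arithmetic on the RRAs above: coverage = step * steps_per_row * rows.
step = 60  # seconds, from -s 60
for steps_per_row, rows in [(5, 600000), (30, 602938), (60, 301469),
                            (240, 75367), (1440, 12561)]:
    days = step * steps_per_row * rows / 86400.0
    print("%4d-step RRA covers about %.0f days" % (steps_per_row, days))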

Now that we have the database, we need a "hits-per-minute" data set. I wrote a short perl script, parsehttp, that reads from standard input, calculates hits-per-minute, and outputs rrdtool update statements. Capture this output and run it through sh:

./parsehttp < access.log | sh -x
Simple enough. This will calculate hits-per-minute for all times in the logs and store it in our RRD.
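
parsehttp itself is perl; here's a hedged Python sketch of the same idea, bucketing hits per minute and emitting update statements (it assumes common-log-format timestamps and ignores the timezone offset):

# Sketch of what parsehttp does: count apache hits per minute and emit
# "rrdtool update" statements for each minute bucket.
import re
import sys
import time
from collections import Counter

hits = Counter()
for line in sys.stdin:
    m = re.search(r"\[(\d+/\w+/\d+:\d+:\d+:\d+)", line)
    if not m:
        continue
    t = time.mktime(time.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S"))
    hits[int(t) // 60 * 60] += 1  # round down to the minute

for minute in sorted(hits):
    print("rrdtool update webhits.rrd %d:%d" % (minute, hits[minute]))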

Now that we have the data, we can graph it. However, since I want to view trends and compare time periods, I'll need to do something fancier than simple graphs.

RRDTool lets you graph multiple data sets on the same graph. So, I want to graph this week's hits and last week's hits. However, since the data sets are on different time intervals, I need to shift last week's set forward by one week. Here's the rrdtool command that graphs it for us, with last week's and this week's data on the same graph, displayed at the same time period:

rrdtool graph webhits.png -s "-1 week" \
   DEF:hits=webhits.rrd:hits:AVERAGE  \
   DEF:lastweek=webhits.rrd:hits:AVERAGE:start="-2 weeks":end="start + 1 week" \
   SHIFT:lastweek:604800 \
   LINE1:lastweek#00FF00:"last week" LINE1:hits#FF0000:"this week"
That'll look like line noise if you've never used RRDTool before. I define two data sets with DEF: hits and lastweek. Both read from the 'hits' data source in webhits.rrd. One starts at "-1 week" (one week ago, duh) and the other starts 2 weeks ago and ends one week ago. I then shift last week's data forward by 7 days (604800 seconds). Lastly, I draw two lines: last week's (green) and this week's (red).

That graph looks like this:

That's not really useful, because there are so many data points that the graph almost becomes meaningless. This is due to my poor choice of RRAs. We can fix that by redoing the database, or by using the TREND feature. Change our graph statement to:

rrdtool graph webhits.png -s "-1 week" \
   DEF:hits=webhits.rrd:hits:AVERAGE  \
   DEF:lastweek=webhits.rrd:hits:AVERAGE:start="-2 weeks":end="start + 1 week" \
   SHIFT:lastweek:604800 \
   CDEF:t_hits=hits,86400,TREND CDEF:t_lastweek=lastweek,86400,TREND \
   LINE1:lastweek#CCFFCC:"last week" LINE1:hits#FFCCCC:"this week" \
   LINE1:t_lastweek#00FF00:"last week" LINE1:t_hits#FF0000:"this week"
I added only two CDEF statements. They take a data set and "trend" it over one day (86400 seconds), creating a sliding average across time. I store these in new data sets called t_hits and t_lastweek and graph those as well.

The new graph looks like this:

You'll notice the trend lines are chopped off on the left; that's because there aren't enough data points at those times to compute the average. However, including the raw data makes the graph scale as it did before, making it awkward to see the trend difference. So let's fix it by not graphing the raw data: just cut out the LINE1:lastweek and LINE1:hits options.

To fix the sliding-average cutoff, and to add a title and a vertical label:

rrdtool graph webhits.png -s "-1 week" \
   -t "Web Server Hits - This week vs Last week" \
   -v "hits/minute" \
   DEF:hits=webhits.rrd:hits:AVERAGE:start="-8 days":end="start + 8 days"  \
   DEF:lastweek=webhits.rrd:hits:AVERAGE:start="-15 days":end="start + 8 days" \
   SHIFT:lastweek:604800 \
   CDEF:t_hits=hits,86400,TREND CDEF:t_lastweek=lastweek,86400,TREND \
   LINE1:t_lastweek#00FF00:"last week" LINE1:t_hits#FF0000:"this week"
The graph still covers one week ago until now, but the data sets we use extend beyond those boundaries so that sliding averages can be calculated throughout. The new, final graph looks like this:

Now I can compare this week's hits against last weeks, quickly with a nice visual. This is what I'm looking for.

This would become truly useful if we had lots of time periods (days, weeks, whatever) to look at. Then we could calculate standard deviation, etc. A high outlier could be marked automatically with a label, giving an instant visual cue that something is potentially novel. It might be simple to create a sort of sliding "standard deviation" curve. I haven't tried that yet.