Vertical tabs in Firefox 2

Update: Vertigo has been released for Firefox 2! Yay :)

The 'Vertigo' extension doesn't work in Firefox 2. Some googling finds a few solutions, all of which suck. So, I think I'm going to dive back into playing with Firefox and write an extension myself.

So far I've managed to get vertical tabs with a scrollbar that pops up when there are more-than-displayable tabs open. However, much of tonight left me extremely frustrated.

Development with Firefox seems to be exceedingly dependent on trial and error: save your files, restart Firefox. Repeat. Repeat. Repeat. Firefox is not lightning quick to start up, and I'm not sure how to edit extensions that are currently running without a restart. Maybe there's a debugger I don't know about. Mostly I'd just like to explore the DOM while it's running (Firefox's XUL DOM, not the current web page's).

All I wanted to add (tonight) was the ability to choose what side of the browser the tab bar went on.

The following CSS will move the bar to the right (with my extension):

#appcontent tabbox {
  -moz-box-direction: reverse;
}
Setting <tabbox dir="reverse"> directly in the XUL works too. I need to set this in javascript.

This means setting the direction to "reverse" from JavaScript should work, right? Here's everything I tried:

var tabbox = document.getElementsByTagName("tabbox")[0];

// Doesn't work (trying either 'reverse' or 'rtl'):
tabbox.dir = "reverse";
tabbox.direction = "reverse";

// Try to tell the vbox (tab list) to order after/before the browser pane:
tabbox.childNodes[0].ordinal = 0;
tabbox.childNodes[0].ordinal = 2;
I'm at a total loss. My lack of familiarity with XUL is hurting me here. What's confusing is that the following code outputs "ltr" (left to right), meaning that setting it to "rtl" should work:
  var x = window.getComputedStyle(tabbox, "").direction;  // "ltr"
Googling for 'tabbox dir' and other variants doesn't show much promise. Wrapping the contents of the tabbox in an hbox and attempting to tweak the direction of the hbox fails, too.

The following code produces something interesting:

alert(tabbox.childNodes[0].nodeName + " / " + tabbox.childNodes[1].nodeName);
The output is "tabs / tabpanel". It should be "vbox / splitter" or something close to that.

Further investigation lands me at gBrowser.mTabBox, which has the correct children: the full XUL DOM of the real tabbox. tabbox.childNodes[0] should be a vbox, and it is, but only when I go through mTabBox, not through the tag lookup.

gBrowser.mTabBox.dir = "reverse";
And voila, the tab bar is on the right.

I'm not sure why the following statements yield different values:

 document.getElementsByTagName("tabbox")[0] != gBrowser.mTabBox
Very strange... These should point to the same object, and while both are 'tabbox' elements, their children are quite different (the former is a trimmed version containing only the tabs and tabpanel elements).

Anybody? ;)

New event recording database prototype

I finally managed to find time today to work on my events database project. In the process of doing so, I found a few bugs in grok that needed to get fixed. Some of my regular expressions were being a bit greedy, so certain pattern expansion was breaking.

To summarize the event recording system: it is a webserver listening for event publish requests. It accepts the "when," "where," and "what" of an event, and stores it in a database.
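A minimal sketch of such a store, assuming a single `events` table with `time`, `location`, and `data` columns (those column names match the queries later in this post; the rest is illustrative):

```python
import sqlite3

# Minimal event store: one row per event, keyed by when/where/what.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS events (
    time     INTEGER,  -- 'when':  unix epoch seconds
    location TEXT,     -- 'where': e.g. "nightfall/sshd"
    data     TEXT      -- 'what':  the log message itself
)""")

def publish(when, where, what):
    # What the webserver does with each incoming publish request.
    conn.execute("INSERT INTO events (time, location, data) VALUES (?, ?, ?)",
                 (when, where, what))
    conn.commit()

publish(1146632461, "nightfall/sshd", "Invalid user oracle from 10.0.0.5")
```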

To have my logs pushed to the database, I'll leverage the awesome power of Grok. Before I do that, I gathered all of the auth.log files and archives and compiled them into their respective files.

The grok.conf for this particular maneuver:

exec "cat ./logs/nightfall.auth.log ./logs/sparks.auth.log ./logs/whitefox.auth.log" {
   type "all syslog" {
      match_syslog = 1;
      reaction = 'fetch -qo - "http://localhost:8080/?when=%SYSLOGDATE|parsedate%&where=%HOST%/%PROG|urlescape|shdq%&what=%DATA:GLOB|urlescape|shdq%"';
   }
}
This is fairly simple. I added a new standard filter, 'urlescape', to grok because I needed it. It URL-escapes a data piece. Hurray!
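The filter just has to percent-encode a string so it is safe inside the reaction's query string; in modern Python terms it is roughly this (a sketch of the idea, not grok's actual implementation):

```python
from urllib.parse import quote

def urlescape(s):
    # Percent-encode everything that isn't already safe in a URL query value.
    return quote(s, safe="")
```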

Run grok, and it sends event notifications to the webserver for every syslog-matching line, using FreeBSD's command-line web client, fetch.

sqlite> select count(*) from events;
Now, let's look for something meaningful. I want to know what happened on all sshd services between 1am and 4am this morning (today, May 3rd):
nightfall(~/projects/eventdb) % date -j 05030100 +%s
1146632400
nightfall(~/projects/eventdb) % date -j 05030400 +%s
1146643200
Now I know the Unix epoch times for May 3rd at 1am and 4am.
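The same conversion in Python looks roughly like this (using UTC here so the numbers are reproducible; `date -j` uses the local timezone, so its output differs by your UTC offset):

```python
import calendar, time

def epoch_utc(stamp):
    # Parse a "YYYY-MM-DD HH:MM" string as UTC and return unix epoch seconds.
    return calendar.timegm(time.strptime(stamp, "%Y-%m-%d %H:%M"))

one_am  = epoch_utc("2006-05-03 01:00")
four_am = epoch_utc("2006-05-03 04:00")
# four_am - one_am == 10800 (three hours), matching the query range below
```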
sqlite> select count(*) from events where time >= 1146632400 
   ...> and time <= 1146643200 and location like "%/sshd" 
   ...> and data like "Invalid user%";
This query is instant. Much faster than doing 'grep -c' on N log files across M machines. I don't care how good your grep-fu is, you aren't going to be faster.

This speed is only the beginning. Think in broader terms: nearly instantly zoom to any point in time to view "events" on a system or set of systems. Filter out particular events by keyword or pattern. Look for the last time a service was restarted. I could go on, but you probably get the idea. It's grep, but faster, and with more features.

As far as the protocol and implementation goes, I'm not sure how well this web-based concept is going to prevail. At this point, I am not interested in protocol or database efficiency. The prototype implementation is good enough. From what I've read about Splunk in the past months in the form of advertisements and such, it seems I already have the main feature Splunk has: searching logs easily. Perhaps I should incorporate and sell my own, better-than-Splunk, product? ;)

Bear in mind that I have no idea what Splunk actually does beyond what I've gleaned from advertisements for the product. I'm sure it's at least somewhat useful, or no one would invest.

Certainly, a pipelined HTTP client could perform this much faster than doing 10000 individual HTTP requests. A step further would be having the web server accept any number of events per request. The big test will be seeing how well HTTP scales, but that can be played with later.
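One way to batch any number of events per request would be to POST a newline-delimited body, one query-string-encoded event per line, instead of one GET per log line. A sketch of just the client-side encoding (the `/bulk` endpoint named in the comment is hypothetical, not part of the prototype):

```python
from urllib.parse import urlencode

def encode_event(when, where, what):
    # One event as an ordinary query string: when=...&where=...&what=...
    return urlencode({"when": when, "where": where, "what": what})

def batch_body(events):
    # events: iterable of (when, where, what) tuples; one event per line.
    return "\n".join(encode_event(*e) for e in events)

body = batch_body([
    (1146632461, "nightfall/sshd", "Invalid user oracle"),
    (1146632475, "sparks/sshd", "Invalid user test"),
])
# POST body to e.g. http://localhost:8080/bulk  (hypothetical endpoint)
```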

At this point, we have come fairly close to the general idea of this project: Allowing you to zoom to particular locations in time and view system events.

The server code for doing this was very easy. I chose Python and started playing with CherryPy (a webserver framework). I had a working event receiver server in about 30 minutes. 29 minutes of that time was spent writing a threadsafe database class to front for pysqlite. The CherryPy bits only amount to about 10 lines of code, out of 90ish.
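The thread-safety shim can be as simple as serializing every statement through one lock. A sketch of that idea using the stdlib `sqlite3` module (not the original pysqlite class):

```python
import sqlite3, threading

class ThreadSafeDB:
    """Serialize all access to a single sqlite connection with a lock."""
    def __init__(self, path):
        # check_same_thread=False lets multiple threads share the connection;
        # the lock ensures only one of them uses it at a time.
        self.conn = sqlite3.connect(path, check_same_thread=False)
        self.lock = threading.Lock()

    def execute(self, sql, params=()):
        with self.lock:
            cur = self.conn.execute(sql, params)
            self.conn.commit()
            return cur.fetchall()

db = ThreadSafeDB(":memory:")
db.execute("CREATE TABLE events (time INTEGER, location TEXT, data TEXT)")
```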

The code to do the server can be found here: /scripts/

Event DB - temporal event storage

Travelling always gives me lots of time to think about new ideas. Today's 12 hours of flight gave me some time to spend brainstorming some ideas for my "sysadmin time machine" project.

I've come up with the following so far:

  • The concept of an event is something which has "when, where, and what"-ness. Other properties of events such as significance and who-reported-it are trivial. The key bits are when the event occurred, where it occurred, and what the event was.
  • Software logs happen to have these three key properties. Put simply: store it in a database that lets you search over a range of times and you have yourself a time machine.
  • Couple this with visualizations and statistical analysis. Trends are important. Automatic novelty detection is important.
  • Trends can be seen by viewing data over time - whether visual or formulaic (though the former is easier for Joe Average to see). An example trend would be showing a gradual increase in disk usage over a period of time.
  • Novelty detection can occur a number of ways. Something as simple as a homoskedasticity test could show if data were "normal" - though homoskedasticity only works well for linear models, iirc. I am not a statistician.
  • Trend calculation can provide useful models predicting resource exhaustion, MTBNF, and other important data.
  • Novelty detection aids in fire fighting and post-hoc "Oops it's broken" forensics.
I'm hoping to find time to get an event storage prototype working soon. The next step would be to leverage RRDtool as a numeric storage medium and perform novelty/trend detection and analysis on its data.
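As a toy example of the trend half, a least-squares line through disk-usage samples predicts when the disk fills. A sketch with made-up numbers:

```python
def fit_line(points):
    # Ordinary least squares for y = a*x + b over (x, y) pairs.
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# (day, percent of disk used) -- invented sample data
samples = [(0, 50), (1, 52), (2, 54), (3, 56)]
a, b = fit_line(samples)
days_until_full = (100 - b) / a  # solve a*x + b = 100
```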

The overall goal of this is to somewhat automate problem detection and significantly aid in problem cause/effect searching.

The eventdb system will likely support many interfaces:

  • syslog - hosts can log directly to eventdb
  • command line - scriptably/manually push data to the eventdb
  • generic numeric data input - a lame frontend to rrdtool, perhaps
Thus, all data would be pushed through eventdb, which would figure out which on-disk data medium to store it in. Queries could be done asking eventdb about things such as "show me yesterday's mysql activity around 3am" or "compare average syscall usage across this week and last week."

This sort of trend and novelty mapping would be extremely useful in a production software environment for comparing configuration or software changes. That is, last month's syscall averages might be much lower than this month's, and perhaps the only change was a configuration file change or new software being pushed to production. You would be able to track the problem back to when it first showed up, hopefully correlating it to some known change. After all, the first step in solving a problem is knowing of its existence.
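The simplest novelty check along those lines: flag this month's figure if it sits several standard deviations away from the historical mean. A sketch, with invented syscall counts:

```python
def is_novel(history, value, threshold=3.0):
    # Flag `value` if it is more than `threshold` standard deviations
    # from the mean of the historical samples.
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    std = var ** 0.5
    return std > 0 and abs(value - mean) > threshold * std

# invented daily syscall counts for last month vs. a spike today
last_month = [1000, 1020, 980, 1010, 990, 1005, 995]
```

This is nowhere near real novelty detection, but it is the kind of check that catches "syscalls doubled after Tuesday's push" without anyone grepping for it.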

My experience with data analysis techniques is not extensive. So I wouldn't expect the data analysis tools in the prototype to sport anything fancy.

I need more hours in a day! Oh well, back to doing homework. Hopefully I'll have some time to prototype something soon.