Monitoring system - request for input

I'm working on a new monitoring system because I can't find one that solves enough of my problems. It's going to be free and have an unrestricted open source license.

I could use your help.

At this stage, the best way you can help is to make sure I get lots of data about various infrastructure architectures, monitoring needs, reporting needs, alerting needs.

If you can, please share with me the following:

  • A description (or diagram) of your infrastructure including network, servers, services, storage, etc.
  • What you are using now for monitoring (can be any number of tools)
    • Why you like them
    • Why you don't like them
    • What you'd rather have, if anything
  • What tools are missing that you wish existed?
  • Would more documentation on monitoring, in general, help?
  • Do you carry a pager? If not, why not? If so, what does it support? (email, sms, html email, mobile web, normal web)
  • Would more documentation help?
    • Better documentation on how to monitor the things you need to monitor, such as best practices, better tool docs, etc?
    • Best practices for monitoring various scenarios?
If you have any thoughts, please email me at [email protected]. I'll be collecting this data into my design document, which you can view in unfinished form here: Current Design Doc

Resetting your firewall (iptables) during testing

Ever configured a firewall remotely? Ever blocked yourself and had to get physical hands to fix it?

Kind of sucks.

So you're going to start playing with some new firewall rules, but you learned from the past and now you have a cron(8) or at(8) job that will reset the firewall rules to permissive every so often, just in case you lock yourself out.

I used to do that. Until I realized today that I'm frankly too lazy to wait the N minutes I'll have to wait for my at(8) job to kick in.

Now I sniff packets and have a script trigger from that.

On the remote server, I'll use ngrep to watch for a specific payload in an icmp echo packet. This works because bpf(4) gets packets before the firewall has a chance to filter them, meaning even if you deny all packets, bpf(4) (libpcap, tcpdump, ngrep, etc) will still see those packets. Here's the script I use on the remote server:

# Look for any icmp echo packets containing the string 'reset-iptables'
ngrep -l -Wnone -d any 'reset-iptables' 'icmp and icmp[icmptype] = icmp-echo' \
| grep --line-buffered '^I ' \
| while read line ; do
  iptables -F
  iptables -P INPUT ACCEPT
  iptables -P OUTPUT ACCEPT
  iptables -P FORWARD ACCEPT
done

When started, the ngrep command prints a header like this, then emits a line beginning with 'I' for each matching packet:

remotehost% ngrep -l -Wnone -d any 'reset-iptables' 'icmp and icmp[icmptype] = icmp-echo'
interface: any
filter: (ip) and ( icmp and icmp[icmptype] = icmp-echo )
match: reset-iptables
We'll grep for just the 'I' line, then trigger a full firewall reset.

I couldn't figure out how to make ping(8) send a specific payload, so I'll use scapy.

workstation% echo 'sr1(IP(dst="")/ICMP(type="echo-request")/"reset-iptables")' | sudo scapy
Now, if I accidentally lock myself out through firewall rule changes, I can trivially reset them using that 'echo | scapy' one-liner.

Obviously, I don't keep the reset script running after the firewall rules are tested and known-good, but it's a great instant-gratification means to solving the locked-out problem you may face when testing new firewall rules.

A decade of growing

I started the decade still in high school, as a junior, with after-school activities including track, marching band, and computer team. Already building my late-night hacking habits, I spent the late hours doing things like writing mIRC scripts. Computer team was QBASIC or C++, but most of us used QBASIC, including me, despite having easily passed AP Computer Science the year prior (which taught C++).

I remember trying C++ outside of the AP class, but the C++ they taught us used this horrible bastard misimplementation of what looked like the STL - instead of "vector" you'd use "apvector", etc. - which I couldn't find anywhere outside of class, and that ended my C++ adventures. Here's a tip for teachers and professors: always use practical material when teaching introductory topics. Don't create some crap sandbox that isn't useful knowledge outside of the classroom.

I started my computer science degree at RIT in 2001 and joined the Computer Science House (CSH). CSH turned out to be the best part of college by a large measure. From many perspectives, CSH is the best part about RIT: social, learning, common ground, etc. Classes were sometimes interesting, but were often boring and unchallenging. This fact explains why I had a 2.8 GPA when graduating in 2006.

I socialed my way (by knowing professors) into some senior-level IT classes without any prerequisites. I remember them being fun, but mostly I remember the fleet of whiny IT students complaining about the work, which I viewed as necessary and also exciting. I took two or three senior IT classes, and the whining was consistent throughout. Kind of depressing, because I was still trying to find an environment outside of CSH where folks actually cared about what they were doing. This student culture seemed to cause RIT to dumb down its IT program frequently while I was enrolled. I don't know if it's improved.

School also helped push me to like meritocracy. One summer co-op, I worked for a professor whose PhD thesis we were implementing in code. When we had questions about implementation problems, the professor gave us the "I don't know" response. I'm implementing your idea, and you don't know? And you got a doctorate for this? I lost respect for master's and PhD status after that; I now treat them like any certification and mostly ignore them.

Respect and status are earned, not given. Apparently PhDs and master's degrees are sometimes given, not earned.

My code growth over 7 years
From 2002 to today, there are more than 70 projects in my svn repo. Linux, Solaris, FreeBSD, and Windows projects are here. C, C++, Perl, Python, Ruby, JavaScript, XML, and more. It is totally overwhelming to consider all of the things I've worked on over the past few years, so I'll just pick a few.

I started learning PHP in early 2002, then Perl in late 2002, Python in 2004, Ruby in 2007. The first version of this site was written in PHP, then redone in Perl and HTML::Mason, then I migrated to pyblosxom.

In late 2002, I found out about RSS; a friend said I should write an RSS aggregator, so I did, using the project as an excuse to learn perl. The first version of the aggregator was written using perl CGI and DBI, and it even had a few folks besides me using it. I let the project die after I realized I didn't really gain from having an RSS reader.

Another project was some awesome jukebox software called Pimp. Pimp started as a local mp3 player in perl, then became a telnet-controlled jukebox system. Later versions were in python and sported a decent web interface, multiple simultaneous streams, and some other cool features. For streaming, I had to reverse engineer the shoutcast protocol to add the in-stream metadata (showing you what song is playing).

In 2003, I wrote an aim client in perl using Net::OSCAR. I used it for a year or two, and don't remember why I stopped. That project taught me a lot about terminal interfaces. I also came up with some clever regular expression tricks for doing line editing.

I've learned a great deal outside of code, too. I've read Rands in Repose and Eric Raymond's Hacker Howto. Communication mediums like BarCamp, IRC, Twitter, and others are also important here. These have helped me find communities, large and small, similar to what I had at CSH.

Then there's my career path. Ultimately, I'm a hacker at heart, so I'm looking for challenging problems to solve. Through college and my early jobs, I've learned more about what I need to help me focus on solving challenging problems. I want an environment that is supportive and productive. I want active communication, especially about blocking issues or direction changes.

I've also discovered that I enjoy learning from mistakes. It is difficult to own up to failures of any size and scope, but I find it more educational, personally, to admit failures and then work on learning from them. I've had the C-level folks at my current job (Rocket Fuel) explain this exact thing during meetings - about a problem that was quickly acknowledged, responded to, repaired, and post-mortemed. That's responsibility and passion.

I joined Google immediately after graduating from RIT (seriously, I graduated on Friday and started work on Monday). I knew nothing of negotiations, only that I wanted to work there. As a result, I got a crap salary with crap stock. Now I know: always negotiate. I left Google for OnLive and its technical challenge. As a side note, my career at Google was going nowhere; after almost two years, I hadn't really moved, despite some effort, into areas of greater responsibility or technical challenge.

I left OnLive because the company was going nowhere. Terrible leadership, poor communication, nepotism, and irresponsibility permeated the employee stack up to (and especially) the C-level folks. If startups need great people to succeed, then I can only conclude that OnLive will fail. Leaving was a pity, for me, because the exciting technical challenge was still there. My favorite response from management when I talked to them about company-wide problems was "Wait 3 months" - waiting is hoping, and hope is not a strategy. Despite that, I did stay longer, and that was a mistake. Their other senior operations sysadmin left shortly after me for the same reasons. Now, I'm a stockholder and also have friends there, so I still hope OnLive does great things, but again, hope is not a strategy.

Both Google and OnLive gave me great perspective and input on what I should look for in future employment.

More career-wise, I continue to have high expectations of my coworkers and especially of leadership. I value responsibility and passion more than technical prowess, because responsibility and passion are rarer qualities and at least as valuable. I expect good communication. I like to ask 'why' questions, and I expect that an answer of "I don't know" should immediately be followed by "but I will find out". Living in the dark is counterproductive. I've learned to be patient and learned to translate. Translation is critical: using a common syntax and terminology is critical when communicating with others. Patience is critical.

Email is terrible at conveying mood, so to assure you that I am in a cheery mood and willing to assist, I will drown your inbox with smiley-face emoticons - nothing else seems to work. It is extra difficult because of the BOFH persona folks generally attribute to support groups like corp IT and production staff. BOFH is hilarious, but it's an antipattern for treating your users, just like calling them 'lusers'. People have been trained to think they are at fault when some crappy piece of software they use misbehaves, and they have been trained to feel like they are massively inconveniencing you when they ask for support that you were hired to give. It's upsetting - I'm here to help.

I've also come to the uncomfortable realization that my jobs are often not just hacking on problems. Politics (and the falsehoods they require) are an unavoidable part of the job.

On the life side, I got married to the greatest girl in the world and got two dogs. Wendy and I had been together for almost six years before getting married. Our first dog died suddenly of an autoimmune disease at age five, and our current doggy is quite goofy and happy at age two.

While at RIT, I learned to rollerblade, skateboard, and play ice hockey. I still skateboard and would like to get back into hockey.

I also traveled a lot. I lived in Georgia, then New York, then California. I've been to Dublin, Seattle, Washington DC, the Caribbean, Vegas, and New Orleans. I started going with friends to Defcon and got involved with Hack or Halo at Shmoocon. I also went to BarCamps in NYC, Rochester, and the Bay Area.

Dublin, Ireland and Seattle were Google work trips. Dublin was awesome. My trip coincided with Mashup Camp, where I met up with some Yahoo folks on an after-event bar hop. Dublin's Temple Bar district is good fun and reminded me a bit of Bourbon Street in New Orleans with all its shenanigans. Seattle was more business and less social; I only stopped in at Burger Master for some good burgers while in town.

BarCamps were awesome everywhere. The first one I went to was NYC in 2006, where a fellow CSHer demoed the first version of jQuery; I remember staying up until almost 4AM at CollegeHumor's offices (the barcamp location, I think?) hacking with it.

There were also Yahoo! Hack Days; I went to both of the ones held here in the Bay Area. The first hack day had me hacking away and also writing keynav (a project still maintained and used!). My hack resulted in a Wall Street Journal "Marketing" section front-page column, which was one of the most awesome things ever (article also viewable here). I'd never been in any newspaper before.

Two years later, Yahoo! held another open hack day where I wrote SnackUpon, which got coverage on lifehacker and a few other sites. These events weren't my first 12-hour hackathons. At RIT, CSH ran yearly team coding competitions sponsored by Red Bull and Bawls; my team won two years.

These hackathons repeatedly highlight that my productivity spikes after midnight. I've been fortunate to have jobs that understand this and don't require me to be at the office every day at 9AM.

Wrapping up a decade full of travel, learning, hacking, and networking (with people) on and offline, none of those trends show any signs of slowing down.

new xdotool version available (20091231)

A new release of xdotool is available. This release has only minor changes based on changes needed to help Debian and other folks package this. Speaking of which: thanks to Daniel Kahn Gillmor and Wen-Yen Chuang for helping maintain the Debian packages for xdotool and keynav and for working with me on these changes.

Hop on over to the xdotool project page and download the new version.

The changelist follows:

  No functional changes.
  - fix linking problems and just use $(CC) for build and linking
  - Make the tests headless (requires Xvfb and GNOME)
  - Make the t/ test runner exit-code friendly

  No xdotool changes.
  libxdo changes:
    * Rename keysymcharmap -> keysym_charmap
    * Expose keysym_charmap and symbol_map as xdo_keysym_charmap()
      and xdo_symbol_map()


new keynav version available (20091231)

Hop on over to the keynav project page and download the new version.

The changelist from the previous announced release is as follows:

  - Try repeatedly to grab the keyboard with XGrabKeyboard on 'start' commands.
    This loop is a necessary workaround for programs like xbindkeys that could
    launch keynav but at the time of launch still hold a keyboard grab
    themselves. Reported by Colin Shea.

  - Nonfunctional bug fixes and other code cleanup from patches by Russell Harmon

  - Some internal code refactor/cleanup
  - Reduce drawing flicker by drawing to a Pixmap and blitting to the window.
  - Allow commands to be given on keynav startup. (Reported by Colin Shea)
    The same commands valid as keybindings are valid as startup commands:
    % keynav "start, grid 3x3"
  - Allow clicking through the keynav grid window area (Reported by Yuri D'Elia)
  - Support daemonizing using the 'daemonize' command in keynavrc. Added an
    example to the distributed keynavrc.
  - Use new library features given by xdotool/libxdo 20091231.01

  - Support linking against if it is found; otherwise we build xdo.o
    into keynav. The original intent of including xdotool in the release package
    was to make it easy to build keynav without a packaging system. Now that
    more distros have keynav and xdotool, this requirement is less important.

    This change is in response to a Debian request.

Terminals, titles, and prompts.

Drew Stephens spent some time on Christmas to share some of his shell configuration, including different ways he uses prompts and colors.

I'll start with prompts.

I use zsh. My prompt looks like this:

# Plain
snack(~) % 

# Long directory is truncated at the left
snack(...jects/grok/ruby/test/general) % 

# I get exit status only if it is nonzero:
snack(~) % true
snack(~) % false
snack(~) !1! % 

# if I am root, and using zsh, the '%' becomes '#'
snack(~) # 
This is all achieved with the following PS1 in zsh:

PS1='%m(%35<...<%~) %(?..!%?! )%# '
Why have configurable prompts at all? They're a place to gather context from. I include host, directory, exit status, and an "am I root" flag.

PS1 isn't the only place you can store useful state. I like to have similar information in my terminal's titlebar, too. I use screen and xterm, and both can be fed some delicious data.

I use this in my .screenrc, which tells screen to have some default status format and tells screen how to change xterm's title. I have it include the screen window number (%n), hostname (%h), and terminal title (%t):

hardstatus string "[%n] %h - %t"
termcapinfo xterm 'hs:ts=\E]2;:fs=\007:ds=\E]2;screen (not title yet)\007'
windowlist title "Num Name%=Location Flags"
windowlist string "%03n %t%=%h %f"
I also use this bit of .vimrc, which tells vim what kind of title I want, and if the $TERM is screen, how to tell screen about it.
" Set title string and push it to xterm/screen window title
set titlestring=vim\ %<%F%(\ %)%m%h%w%=%l/%L-%P
set titlelen=70
if &term == "screen"
  set t_ts=^[k
  set t_fs=^[\
if &term == "screen" || &term == "xterm" 
  set title
And then use this bit of my zshrc.

All of these combined together make for some pretty good terminal and screen titles. The functions preexec, precmd, and title, mentioned below, come from the above zshrc link.

The preexec function in my zshrc runs before each command execution and allows me to change the terminal title to reflect the command I am running. It also supports resumed execution of a process: if you run 'cat', then hit ^Z, then type 'fg', the title will correctly be set to 'cat' again.

The precmd function runs before each prompt. Rather than cluttering up $PS1 with byte strings to set the title, I just make precmd set the title to 'zsh - $PWD'.

The title function takes care of any necessary escaping and also does nice things like string truncation if it is too long (similar to how my $PS1 is configured).

I only use vim's titlestring options because it gives me some better context on what I am doing in vim at the time, mainly because vim allows you to edit multiple files at once.

Here's an example of a few screen windows in a single screen session when viewed in the windowlist:

The first 3 columns are most meaningful: number, name, and location. Note that each location correctly identifies the host that shell is using. My zshrc 'title' function manages setting the name and the location.

The same data listed above is combined into the actual terminal's title. Window 2 above would have this title in xterm:

[2] jls - zsh - /home/jsissel

I mentioned above that I use screen and xterm together. I do this for everything, using a script that runs screen in an xterm with a randomly chosen, dark color background. I find the dark-random color selection quite a nice deviation from the solid black my desktop used to bear. Here's what it looks like if I run 20+ xterms on a blank desktop:

new grok version available (20091227.01)

The latest release is another important step in grok's life. Most major changes were outside of the code:
  • FreeBSD users can install grok via ports: sysutils/grok. Thanks to sahil and wxs for making this happen.
  • The project has online documentation and also ships with a manpage.

Hop on over to the grok project page and download the new version.

Changes since last announced release:

 - Add function to get the list of loaded patterns.
 - Ruby: new method Grok#patterns returns a Hash of known patterns.
 - Added flags to grok: -d and --daemon to daemonize on startup (after config
   parsing). Also added '-f configfile' for specifying the config file.
 - Added manpage (grok.1, generated from grok.pod)

 - match {} blocks can now have multiple 'pattern:' instances
 - Include samples/ directory of grok configs in release package.

Sysadvent 2009 now online.

Sysadvent 2009 is upon us. First article is online as of now.

This year's been much more pleasant to plan. There are many more people helping out this year, so I get to relax more and work at a much more sane pace.

Special thanks to Matt for being a driving force in wrangling up new volunteers, and certainly to all the folks helping out this year.

new keynav version available (20091108)

Hop on over to the keynav project page and download the new version.

The changelist from the previous announced release is as follows:

  - Added xinerama support.
    * Default 'start' will now only be fullscreen on your current xinerama
    display. You can move between screens by using the move-* actions to move
    the current selection outside the border of the current screen.
  - All xdotool commands now return integers so we can forward their return
    status to the user.
  - Actually handle SIGCHLD now so the shell commands get reaped on exit.

Ruby metaprogramming will cost you documentation.

Ruby, like many other dynamic and modern languages, makes it easy for you to do fun stuff like metaprogramming.

Ruby, also like other nice languages, comes with a builtin documentation generator that scans your code for comments and makes them available in html and other formats.

... until you start metaprogramming.

Take a simple example, the Fizzler! The name of the class is unimportant; this class simply provides a new way to define methods, purely for the sake of showing some metaprogramming and how ruby's rdoc fails on it.

class Fizzler
  def self.fizzle(method, &block)
    self.class_eval do
      define_method method, &block
    end
  end
end

class Bar < Fizzler
  # Print a worldly message
  fizzle :hello do
    puts "hello world!"
  end

  # A simple test
  def test
    puts "testing, 1 2 3!"
  end
end

# Now some sample code, let's invoke the new 'hello' method we generated with
# 'fizzle'.
bar = Bar.new
bar.hello
The output looks like this:
% ruby fizzler.rb 
hello world!
All is well! We are generating new methods on the fly, etc etc, all features of metaprogramming. However, we can never make this 'hello' method obviously available to the world via rdoc, at least as far as I can tell. The rdoc generated looks like this:

Note the lack of any mention of 'hello' as a method. I cannot simply do what works for lots of other normal ruby code and ask for the documentation of hello by running 'ri Bar#hello' - because rdoc simply doesn't see it.

I recall in python, if you were dynamically generating methods and classes, you could also inject their documentation by simply setting the '__doc__' property on your class or method. Ruby doesn't appear to have such a thing.
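Lacking a `__doc__` equivalent, one rough workaround is to record documentation yourself at definition time. Here's a hypothetical sketch extending the Fizzler idea; the `doc_for` accessor and the `@@docs` store are my inventions for illustration, not Ruby or rdoc features, and rdoc still won't see any of it:

```ruby
# A hypothetical Fizzler variant that stores a doc string alongside each
# generated method, so the documentation is at least queryable at runtime.
class Fizzler
  @@docs = {}

  def self.fizzle(method, doc = nil, &block)
    @@docs[method] = doc           # remember the documentation ourselves
    define_method(method, &block)  # generate the method as before
  end

  def self.doc_for(method)
    @@docs[method]
  end
end

class Bar < Fizzler
  fizzle(:hello, "Print a worldly message") { puts "hello world!" }
end

puts Bar.doc_for(:hello)
```

This only helps at runtime; 'ri Bar#hello' still knows nothing, because rdoc works from static analysis of the source, not from what the running program defines.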

Additionally, in some metaprogramming cases, the stack traces are actually harder to read. For example, ActiveRecord makes extensive use of 'method_missing' rather than dynamically generating methods. The behavior is the same, but the stack traces are now littered with 'method_missing' and references to files and lines you don't own, rather than containing named functions and other useful pointers. This is perhaps a feature, but for cases like method_missing, being able to add other useful data onto the stack trace would greatly aid debugging.
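To see the backtrace difference concretely, here's a small sketch (the class names are invented for illustration) comparing a method built with define_method against one handled by method_missing:

```ruby
# Compare the first backtrace frame of an error raised from a
# define_method-generated method versus one raised inside method_missing.
class Defined
  define_method(:boom) { raise "from a generated method" }
end

class Missing
  def method_missing(name, *args)
    raise "from method_missing" if name == :boom
    super
  end
end

# Return the first backtrace frame of the error raised by obj.boom.
def first_frame(obj)
  obj.boom
rescue RuntimeError => e
  e.backtrace.first
end

puts first_frame(Defined.new)
puts first_frame(Missing.new)  # this frame points inside method_missing
```

The second frame points at method_missing itself, which is the clutter described above: every dynamically-handled call funnels through the same function, so the trace tells you less about which logical method actually failed.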

So, if long term necessities like documentation and easy debuggability (stack traces, etc), are hindered by metaprogramming, at least in ruby, what are we left to do? Metaprogramming is clearly a win in some places, but the automatic losses seem to detract from any value it may have.