Terminals, titles, and prompts.

Drew Stephens spent some time over Christmas sharing some of his shell configuration, including the different ways he uses prompts and colors.

I'll start with prompts.

I use zsh. My prompt looks like this:

# Plain
snack(~) % 

# Long directory is truncated at the left
snack(...jects/grok/ruby/test/general) % 

# I get exit status only if it is nonzero:
snack(~) % true
snack(~) % false
snack(~) !1! % 

# if I am root, and using zsh, the '%' becomes '#'
snack(~) # 
This is all achieved with the following PS1 in zsh:
PS1='%m(%35<...<%~) %(?..!%?! )%# '
Why have a configurable prompt at all? It's a place to gather context. Mine includes the host, the current directory, the last exit status, and an "am I root?" flag.
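For reference, here is what each piece of that PS1 means (these are standard zsh prompt escapes):
# %m            the hostname, up to the first dot
# %35<...<%~    the current directory (%~), truncated on the left to 35
#               characters and prefixed with "..." when it is too long
# %(?..!%?! )   nothing if the last command exited 0, otherwise "!status! "
# %#            "%" for a normal user, "#" for root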

PS1 isn't the only place you can store useful state. I like to have similar information in my terminal's titlebar, too. I use screen and xterm, and both can be fed some delicious data.

I use this in my .screenrc, which gives screen a default status format and tells it how to set xterm's title. I have it include the screen window number (%n), the hostname (%h), and the window title (%t):

hardstatus string "[%n] %h - %t"
termcapinfo xterm 'hs:ts=\E]2;:fs=\007:ds=\E]2;screen (not title yet)\007'
windowlist title "Num Name%=Location Flags"
windowlist string "%03n %t%=%h %f"
I also use this bit of .vimrc, which tells vim what kind of title I want, and if the $TERM is screen, how to tell screen about it.
" Set title string and push it to xterm/screen window title
set titlestring=vim\ %<%F%(\ %)%m%h%w%=%l/%L-%P
set titlelen=70
if &term == "screen"
  set t_ts=^[k
  set t_fs=^[\
endif
if &term == "screen" || &term == "xterm" 
  set title
endif
And then I use this bit of my zshrc.

All of these combined together make for some pretty good terminal and screen titles. The functions preexec, precmd, and title, mentioned below, come from the above zshrc link.

The preexec function in my zshrc runs before each command execution and allows me to change the terminal title to reflect the command I am running. It also supports resumed execution of a process: if you run 'cat', then hit ^Z, then type 'fg', the title will correctly be set to 'cat' again.

The precmd function runs before each prompt. Rather than cluttering up $PS1 with byte strings to set the title, I just make precmd set the title to 'zsh - $PWD'.

The title function takes care of any necessary escaping and also does nice things like string truncation if it is too long (similar to how my $PS1 is configured).
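For reference, here is a minimal sketch of what these functions might look like (a sketch only; the real versions in the linked zshrc handle truncation, escaping, and the ^Z/fg case):
# Sketch only; not the actual zshrc functions.
function title() {
  local t="$1"                 # the real function truncates and escapes this
  if [[ "$TERM" == screen* ]]; then
    print -n "\ek$t\e\\"       # set screen's window name
  elif [[ "$TERM" == xterm* ]]; then
    print -n "\e]0;$t\a"       # set xterm's window title
  fi
}

function preexec() {
  title "$1"                   # before each command: title is the command line
}

function precmd() {
  title "zsh - $PWD"           # before each prompt: title is shell + directory
}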

I use vim's titlestring options only because they give me better context on what I am doing in vim at the time, mainly because vim lets you edit multiple files at once.

Here's an example of a few screen windows in a single screen session when viewed in the windowlist:

The first 3 columns are most meaningful: number, name, and location. Note that each location correctly identifies the host that shell is using. My zshrc 'title' function manages setting the name and the location.

The same data listed above is combined into the actual terminal's title. Window 2 above would have this title in xterm:

[2] jls - zsh - /home/jsissel

I mentioned above that I use screen and xterm together. I do this for everything using run-xterm.sh. This script runs screen in an xterm with a randomly chosen dark background color. I find the random dark colors a nice deviation from the solid black my desktop used to bear.
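The script isn't reproduced here, but the idea is roughly this (a sketch only; the real run-xterm.sh differs):
#!/bin/sh
# Sketch of the idea behind run-xterm.sh (not the actual script): pick a
# random dark background color, then start screen inside a new xterm.
color=$(awk 'BEGIN { srand(); printf("#%02x%02x%02x", int(rand()*80), int(rand()*80), int(rand()*80)) }')
exec xterm -bg "$color" -fg white -e screen

Here's what it looks like if I run 20+ xterms on a blank desktop: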

Shebang (#!) fix.

Most shebang implementations seem to behave contrary to my expectations.

As an example, prior to today, I would have expected the following script to output 'debug: true'

#!/usr/bin/env ruby -d
puts "debug: #{$DEBUG}"
Running it, I get this:
% ./test.rb
/usr/bin/env: ruby -d: No such file or directory
This is because the 'program' executed is '/usr/bin/env' and the first argument passed is 'ruby -d', exactly as if you had run: /usr/bin/env "ruby -d"

My expectation was that the above would behave exactly like this:

% /usr/bin/env ruby -d test.rb
debug: true
It doesn't. The workaround, however, is pretty trivial. It's only a few lines of C to get me a program that works as I want. I call the program 'shebang'. Why is it C and not a script? Because most platforms have a requirement that the program being executed from the shebang line be a binary, not another script.

#!/usr/local/bin/shebang ruby -d
puts "debug: #{$DEBUG}"
Now run the script again, with our new shebang line:
% ./test.rb
debug: true
Simple and works perfectly.

keynav shell command examples

Press 'f' when keynav is active and instantly jump to my firefox window.
# in .keynavrc
f sh "activate-firefox.sh", end
Now, in activate-firefox.sh:
#!/bin/sh
# activate-firefox.sh
xdotool windowactivate $(xdotool search -title -- '- Mozilla Firefox')
Extend that and press 'g' to jump to gmail, assuming that tab is open. (Requires my firefox tabsearch plugin)
# in .keynavrc
g sh "activate-gmail.sh"
Now, in activate-gmail.sh:
#!/bin/sh
# activate-gmail.sh

./activate-firefox.sh
xdotool key ctrl+apostrophe
xdotool type gmail
xdotool key Return

Shortcuts in your shell

I always run across commands I want to run more than once but that don't quite merit an alias in my zshrc file. For these commands, I abuse environment variables and use them as prefixes.

For instance, I have one command that runs mplayer in a loop, in case the connection drops:

while true; do mplayer -cache 48 -prefer-ipv4 http://foo.com/streamthing; done
Normally, I might use !while to re-invoke this command. However, I have lots of oneliners in my shell history that start with while. So, let's hack around it:
MPLAYER= while true; do mplayer -cache 48 -prefer-ipv4 http://foo.com/streamthing; done
This will set the environment variable 'MPLAYER' to an empty string and pass it to the while subshell (and thus mplayer), but since MPLAYER isn't used as an environment variable in mplayer, we won't break anything.

Now, any time I want to rerun this specific command, I can just do !MPLAYER and we're all set. Doing this is *extremely* useful and allows you to define alias-like procedures in real-time, assuming you have a persistent shell history. If you don't have a persistent shell history, set it up, as it's useful for more things than the above hack.
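If you don't already have persistent history in zsh, a few lines in ~/.zshrc are enough (standard zsh options; sizes to taste):
HISTFILE=~/.zsh_history
HISTSIZE=100000
SAVEHIST=100000
setopt inc_append_history   # write each command to $HISTFILE as it runs
setopt hist_ignore_dups     # skip consecutive duplicate entries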

Parallelization with /bin/sh

I have 89 log files. The average file size is 100ish megs. I want to parse all of the logs into something else useful. Processing 9.1 gigs of logs is not my idea of a good time, nor is it a good application for a single CPU to handle. Let's parallelize it.

I abuse /bin/sh's ability to background processes and wait for children to finish. I have a script that takes a pool of available computers and sends tasks to them. These tasks are just "process this apache log" - but the speedup over a single process is enormous, and it is very simple to do in the shell.

The script to perform this parallelization is here: parallelize.sh

I define a list of hosts to use in the script and pass a list of logs to process on the command line. The host list is multiplied until it is longer than the number of logs. I then pick a log and send it off to a server to process using ssh, which calls a script that outputs to stdout. Output is captured to a file delimited by the hostname and the pid.
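parallelize.sh isn't reproduced here, but the shape of it is roughly this (a sketch; 'parse-log.sh' is a stand-in name for the remote log parser, and the real script does more):
#!/bin/sh
# Sketch of the approach, not the real parallelize.sh.
hosts="hostA hostB hostC hostD"     # pool of machines to farm work out to

n=0
for log in "$@"; do
  n=$((n + 1))
  # Round-robin each log onto a host from the pool.
  host=$(echo $hosts | awk -v n=$n '{ i = (n - 1) % NF + 1; print $i }')
  # Ship the log to that host; the remote script writes to stdout, which we
  # capture in a file named by host and pid.
  ssh "$host" parse-log.sh < "$log" > "output.$host.$$.$n" &
done

# Wait for all of the background ssh jobs to finish.
wait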

I didn't run it single-process in full to compare running times; however, parallel execution gets *much* farther in 10 minutes than a single process does. Sweet :)

Some of the log files are *enormous* - taking up 1 gig alone. I'm experimenting with split(1) to break these files into 100,000-line chunks. The problem is that all of the tasks finish except for the 4 processes handling the 1-gig log files (there are 4 of them). Splitting will make the individual jobs smaller, letting us process them faster because the work load is spread more evenly across processes.
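For example, breaking one of the big logs into 100,000-line chunks is a one-liner ('access_log' is just an example filename):
# Produces access_log.aa, access_log.ab, ... each 100,000 lines long.
split -l 100000 access_log access_log.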

So, a simple application benefiting from parallelization is solved by using simple, standard tools. Sexy.

The CSH Bawls Programming Competition

Yesterday, I participated in a 12-hour coding-binge competition. It started at 7pm Friday night and ran until 7am Saturday morning. It was fueled by Computer Science House and Bawls, both sponsors of the event. Needless to say, I haven't gotten much sleep today.

The competition website is here. Go there if you want to view this year's objectives.

The Dream Team consisted of John Resig, Darrin Mann, Matt Bruce, and myself. Darrin, Resig, and I are all quite proficient at web development, so we decided this year we would represent ourselves as "Team JavaScript" - and do everything possible in javascript. Bruce is not a programmer, but I enlisted his graphical art skills because I figured with our team doing some web-based project, we definitely needed an artist.

After reviewing all the objectives, we came up with a significant modification of the Sudoku objective. Sudoku by itself left little room for innovation, so we went further: instead of solving Sudoku, we wrote a web-based version of an extremely popular game in Second Life. The contest organizer approved of our new objective, so we did just that.

Resig worked on game logic, I worked on chat features, Darrin worked on scoring and game generation, and Bruce worked on the interface graphics. Because our tasks were mostly unrelated, we could develop them independently. Most of the game was completed in about 6 hours, and the remainder of the time was spent fixing bugs, refactoring, and some minor redesign.

The backends were minimal. The chat backend was only 70 lines of perl, and the score backend was 9 lines of /bin/sh. Everything else was handled in the browser. We leveraged Resig's jQuery to make development faster. Development went extremely smoothly - a testament to the "Dream Team" nature of our team, perhaps? ;)

The game worked by presenting everyone with the same game - so you can compete for the highest score. You could also chat during and between games, if you wanted to.

A screenshot can be found here. At the end of the competition, we had only one known bug left. It didn't affect gameplay, and we were all tired, so it didn't get fixed. A few other issues remained unresolved that may or may not have been related to our code: Firefox was having trouble with various things we were doing, and we couldn't tell whether it was our fault or not.

Despite the fact that I probably shouldn't have attended the competition due to scholastic time constraints, I was glad I went. We had a blast writing the game.

We may get some time in the near future to improve the codebase and put it up online so anyone can play. There are quite a few important features that need to be added before it'll be useful as a public game.

statistic deltas using awk

A short shell script I call 'delta' - it is useful for grokking 'vmstat -s' output (and possibly other commands) to view time-based deltas of each counter.
#!/bin/sh
# Run the given command once a second, stripping leading whitespace, and
# print how much the leading number on each line changed since the last run.
while :; do
   $* | sed -e 's/^ *//';
   sleep 1;
done | awk '
{
   # Everything after the first field identifies the counter.
   line = substr($0, length($1)+1);

   # If we have a (nonzero) previous value for this counter, print the delta.
   if (foo[line]) {
      printf("%10d %s\n", $1 - foo[line], line);
   }
   foo[line] = $1;
   fflush();
}'
Example usage:
delta vmstat -s | grep -E 'system calls|fork'

       792  system calls
         3   fork() calls
         0  vfork() calls
       120  pages affected by  fork()
         0  pages affected by vfork()
       680  system calls
         3   fork() calls
         0  vfork() calls
       120  pages affected by  fork()
         0  pages affected by vfork()
      1150  system calls
         3   fork() calls
         0  vfork() calls
       120  pages affected by  fork()
         0  pages affected by vfork()

Poor man's todo list.

I've often had a yearning for any kind of a todo list that meets the following requirements:
  • Simple to use
  • Easy to maintain
  • Quick to start using
  • Highly mobile
  • Requires low effort
I have tried many kinds of "todo" lists. The first is the ungeeky kind, Ye Olde Paper. Paper is great; unfortunately it doesn't replicate easily and is easily lost when made portable. Post-It notes fall under this category - easily lost, and not mobile without a high risk of loss.

Next, I tried online "todo" lists such as tadalist.com. These organizational tools are great and meet all but one of my requirements: requiring low effort. I am a creature of habit, and learning new habits is difficult. This "new habit" would be continually visiting the online todo list. What actually happens is that I update the todo list once, then promptly forget about it. That means I need some sort of periodic reminder.

There are many kinds of virtual "post-it" programs. One such program is xpostitPlus. It wastes valuable screen real estate and is ugly. Furthermore, it is not mobile unless I use X forwarding or replicate the notes database. GNOME has a similar program called 'stickynotes-applet' or something like it. I don't use GNOME, and I imagine the stickynotes applet suffers from the same problems as xpostitPlus.

So I got to thinking about how best to solve this problem. I immediately thought about writing my own python-gtk app, but quickly realized it would suffer from the same problems as the virtual post-it programs. Furthermore, that would be overengineering a solution to a simple problem that can have a simple solution (pen and paper, remember). Then I remembered that zsh has a 'periodic' feature that lets you schedule a job to run every N seconds. "Every N seconds" isn't quite true, and this turns out to be beneficial: it actually schedules execution for N seconds after the last run, but doesn't execute until you next reach a prompt. My solution is very simple, portable, and easy for me to use.

In my .zshrc:

# Periodic Reminder!
PERIOD=3600                       # Every hour, call periodic()
function periodic() {
        [ -f ~/.plan ] || return

        echo
        echo "= Todo List"
        sed -e 's/^/   /' ~/.plan
        echo "= End"
}
periodic() is scheduled as soon as the shell starts, so I see my '.plan' file as soon as I open an xterm or otherwise log in. Every hour I get a reminder. I may change this to once a day or something, but for the most part my solution is complete and meets my requirements.

My '~/.plan' file:

* Register for classes
* Pimp
* newpsm/newmoused 
* rum
This solution is stupid simple and is effective:
  • Simple to use: integrated into my shell
  • Easy to maintain: with vi
  • Quick to start using: 10 lines of shell and vi... done
  • Highly mobile: via ssh, a local .plan replica, etc.
  • Requires low effort: reminders are automatic
I may improve this later using xsltproc(1) so I can set priority levels and other things, but for now this will definitely suffice.

Poor Man's Backup: rsync + management

I got bored and made some useful adaptations on a backup script I wrote for class. It turned into a simple backup/recovery script that supports multiple host backups and very easy recovery.

Read more about the project on the project page: projects/pmbackup

There are a few caveats to the way I currently do it. The first is that file ownership is not preserved. Preserving ownership is only possible if rsync runs as root on the backup server during backups, or as root on the client during recovery. I'm going to set up a "backup jail" on my machine with only rsync and sshd in it, so that an ssh key can let me log in to that jail as root and backups can preserve file ownership.
There may be a better way to do this, but jailing seems the simplest and most secure.
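At its core, this kind of backup is just rsync over ssh, roughly like this (a sketch, not pmbackup itself; the host and paths are examples):
# Pull a client's /home into a per-host directory on the backup server,
# preserving permissions and times; run as root to preserve ownership too.
rsync -az --delete -e ssh client.example.com:/home/ /backups/client.example.com/home/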

updated vimrc and zshrc available

I recently made a few changes to my vimrc and zshrc. The changes are somewhat trivial, but made working in zsh and vim easier. You can get the files here:

Downloadables: