jQuery Mobile - Full Height content area

I was working on integrating jQuery Mobile stuff into fingerpoken and needed a way to make the content area of pages full-screen. By 'full screen' I mean still showing the header and footer, but otherwise the content needs to fill the rest.

I couldn't find an easy way to do this while googling, and even the jQuery Mobile demos didn't do it.

So here's a demo of what I came up with: the fullheight jQuery Mobile demo. The JavaScript:

  var fixgeometry = function() {
    /* Some orientation changes leave the scroll position at something
     * that isn't 0,0. This is annoying for user experience. */
    scroll(0, 0);

    /* Calculate the geometry that our content area should take */
    var header = $(".header:visible");
    var footer = $(".footer:visible");
    var content = $(".content:visible");
    var viewport_height = $(window).height();
    var content_height = viewport_height - header.outerHeight() - footer.outerHeight();
    /* Trim margin/border/padding height */
    content_height -= (content.outerHeight() - content.height());
    content.height(content_height);
  }; /* fixgeometry */

  $(document).ready(function() {
    $(window).bind("orientationchange resize pageshow", fixgeometry);
  });
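The arithmetic fixgeometry performs can be sketched as a plain function, using made-up pixel values (the numbers below are hypothetical, not from the demo):

```javascript
// Sketch of fixgeometry's height calculation: the content area gets the
// viewport height minus the visible header and footer, minus the content
// element's own margin/border/padding (outerHeight minus height).
function contentHeight(viewport, headerOuter, footerOuter, contentOuter, contentInner) {
  var height = viewport - headerOuter - footerOuter;
  height -= (contentOuter - contentInner); // trim margin/border/padding
  return height;
}

// e.g. a 480px viewport, 40px header/footer, 20px of content chrome:
contentHeight(480, 40, 40, 220, 200); // 380
```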

jquerycmd+xpathtool == direction scraping on google

Show the first 3 steps that google maps tells you to take.
./ "atlanta to nyc" | head -3
Head southeast on Trinity Ave SW toward Washington St SW        0.2mi
Slight left at Memorial Dr SW   0.3mi
Turn left at Martin St SE       361ft
Pipe that to lpr and you've got printed directions in under 5 seconds.

Why not just do this with plain page scraping? Because Google Maps uses lots of JavaScript to present the directions to the user. Firefox (Gecko, really) already parses it, so why reinvent the wheel? Let's use the wheel that already works.

Download jquery-20070623.1828.tar.gz. The download of jquerycmd comes with the xul app, '' and ''.

For the lazy who just want to see the scripts:

At SuperHappyDevHouse 18

I've been working on the jquery commandline tool. The base features work; now all that remains is endlessly iterating on adding features.
% ./ --url --query img
<IMG width='276' height='110' src='/intl/en_ALL/images/logo.gif' alt='Google' />
% ./ --url --query form
<FORM method='post' action='/query.php'>
        <div class="controls">
            <a href="/search?advanced">advanced search</a>
... < remainder cut > ...

jQuery puffer

The Interface elements plugin for jQuery is super slick. It has a puffer function I want to use. However, the act of 'puffing' makes the element disappear. I want to clone the element and puff the cloned version.
  function magicpuff() {
    $("img").mousedown(function() {
      var pos = findPos(this);
      var left = pos[0];
      var top = pos[1];

      var puffer = this.cloneNode(true); = left + "px"; = top + "px"; = "absolute";
      document.body.appendChild(puffer);
      $(puffer).Puff(1000, function() { $(puffer).remove() });

      return false;
    });
  }
This code will duplicate the image clicked placing it directly on top of the old element. It then puffs the new element and removes it when the puff has completed. Simple enough.

What good is code without a fun little demo? View the puffer demo

I should note that the remove portion doesn't always seem to remove the cloned object. This is especially noticeable (though not visually) when you activate puffing on more than one image at a time; you need somewhat fast hands to do this. Firefox's DOM inspector will show you the additional elements parented by the body tag.

This depends on findPos (available from quirksmode), jQuery, and the aforementioned Interface plugin.
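For reference, the quirksmode findPos helper computes an element's page position by walking its offsetParent chain; a minimal version looks like this (here tested against plain objects standing in for DOM nodes):

```javascript
// Sum offsetLeft/offsetTop up the offsetParent chain to get an
// element's absolute page position (the classic quirksmode technique).
function findPos(obj) {
  var curleft = 0, curtop = 0;
  if (obj.offsetParent) {
    do {
      curleft += obj.offsetLeft;
      curtop += obj.offsetTop;
    } while (obj = obj.offsetParent);
  }
  return [curleft, curtop];
}
```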

jQuery autofill version 2

This post marks 4 in one day. Whew!

Resig and I were bouncing ideas around after I made the form filler, and we came up with something that fits very nicely into the jQuery api (in the form of something very pluggable).

You'll need the following code that will extend jQuery's functionality. Basically, it adds 'saveAsCookie' and 'loadAsCookie' function calls to $() objects.

$.fn.saveAsCookie = function(n, t) {
  return this.each(function() {
    createCookie((n || '') + ( ||, this.value, t);
  });
};

$.fn.loadAsCookie = function(n) {
  return this.each(function() {
    this.value = readCookie((n || '') + ( ||;
  });
};
You can safely put that code somewhere and load it anywhere you need autofill. Reusable code is awesome.

Now, we don't want to cache *all* input elements, because only some contain user input and only some need to be saved. For this, I put the class 'cookieme' on all input elements I wanted to save.

The arguments to 'saveAsCookie' and 'loadAsCookie' are namespace prefixes. This way, you can avoid namespace collisions with other cookies. All of my autofill cookies will be prefixed with 'formdata' and suffixed with the element name or id attribute.

So, we squished the code down to 6 lines, 4 of which are actually meaningful.


jQuery+cookies = trivially simple form autofill

It's always nice when websites you commonly visit remember things about you, or at least give the perception that they remember things about you.

The Pyblosxom comment plugin doesn't autofill the form. That's too bad. I don't really want to dig into the Python code to do any cookie-setting on submission, because I have never looked at the code and thus am unfamiliar with the effort required for such a change. Luckily, we can use JavaScript to store data in cookies too!

I love jQuery, so that's what I'll use for this little hack. On the comments page, I add the following javascript:

   var uname = "";
   var uemail = "";
   var usite = "";

   function saveCommentInformation() {
      // Save user information from the form!
      createCookie(uname, $("input[@name='author']").val());
      createCookie(uemail, $("input[@name='email']").val());
      createCookie(usite, $("input[@name='url']").val());
   }

   function initCommentForm() {
      // Autofill user information if available
      $("input[@name='author']").val(readCookie(uname));
      $("input[@name='email']").val(readCookie(uemail));
      $("input[@name='url']").val(readCookie(usite));

      // Save comment information when form is submitted
      $("form").submit(saveCommentInformation);
   }
That's all we need. Whenever someone submits, we store useful information in a cookie. Whenever that person comes back, we pull the data out of the cookie and put it back in the form. User experience is happier, at least as far as I am concerned (as a user).

If you are wondering about the 'readCookie' and 'createCookie' functions, you can find them on
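I can't be certain which exact versions the post used, but the widely-circulated quirksmode cookie helpers look like this (they need a browser `document` to be useful):

```javascript
// createCookie writes a name=value cookie, optionally expiring in
// `days`; readCookie scans document.cookie for a matching name.
function createCookie(name, value, days) {
  var expires = "";
  if (days) {
    var date = new Date();
    date.setTime(date.getTime() + days * 24 * 60 * 60 * 1000);
    expires = "; expires=" + date.toGMTString();
  }
  document.cookie = name + "=" + value + expires + "; path=/";
}

function readCookie(name) {
  var nameEQ = name + "=";
  var parts = document.cookie.split(';');
  for (var i = 0; i < parts.length; i++) {
    var c = parts[i];
    while (c.charAt(0) == ' ') c = c.substring(1, c.length);
    if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
  }
  return null;
}
```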

Update: Check out this followup post that implements this in a more jquery-like way.

The CSH Bawls Programming Competition

Yesterday, I participated in a 12-hour coding-binge competition. It started at 7pm Friday night and ran until 7am Saturday morning. It was fueled by Computer Science House and Bawls, both sponsors of the event. Needless to say, I haven't gotten much sleep today.

The competition website is here. Go there if you want to view this year's objectives.

The Dream Team consisted of John Resig, Darrin Mann, Matt Bruce, and myself. Darrin, Resig, and I are all quite proficient at web development, so we decided this year we would represent ourselves as "Team JavaScript" - and do everything possible in javascript. Bruce is not a programmer, but I enlisted his graphical art skills because I figured with our team doing some web-based project, we definitely needed an artist.

After reviewing all the objectives, we came up with a significant modification of the Sudoku objective. Sudoku alone lacked much room for innovation, so instead of merely solving Sudoku, we wrote a web-based version of an extremely popular game in Second Life. The contest organizer approved our new objective, so we did just that.

Resig worked on game logic, I worked on chat features, Darrin worked on scoring and game generation, and Bruce worked on the interface graphics. Because our tasks were mostly unrelated, we could develop them independently. Most of the game was completed in about 6 hours, and the remainder of the time was spent fixing bugs, refactoring, and doing some minor redesign.

The backends were minimal. The chat backend was only 70 lines of Perl, and the score backend was 9 lines of /bin/sh. Everything else was handled in the browser. We leveraged Resig's jQuery to make development faster. Development went extremely smoothly, a testament to the "Dream Team" nature of our team, perhaps? ;)

The game worked by presenting everyone with the same game - so you can compete for the highest score. You could also chat during and between games, if you wanted to.

A screenshot can be found here. At the end of the competition, we only had one known bug left. That bug didn't affect gameplay, and we were all tired, so it didn't get fixed. There were a few other issues that remained unresolved that may or may not be related to our code. Firefox was having issues with various things we were doing, and we couldn't tell if it was our fault or not.

Despite the fact that I probably shouldn't have attended the competition due to scholastic time constraints, I was glad I went. We had a blast writing the game.

We may get some time in the near future to improve the codebase and put it up online so anyone can play. There are quite a few important features that need to be added before it'll be useful as a public game.

I miss web programming.

I drank one of those Sobe No Fear GOLD things earlier, so I'm still wide awake. Waste not productivity? However, I'm going to be quite dead for classes tomorrow. Though, my classes aren't particularly important to me anymore. My philosophy of "learn what you want" landed me a dream job with Google, so there's no sense in turning away from it now. My algorithms class is getting cooler now that we're doing graph and tree algorithms like spanning trees, red-black trees, and other things. Beyond that, my interest in my other classes is very much dwindling. Only 4 weeks left.

The past few months have left little time for fun web projects. Web javascripty stuff is almost always an extremely fun endeavor, despite often being a frustrating adventure in non-compliance! Looking at Opera 9's new fancy widget system makes me want to get back into web programming again.

The most fun project I've done recently has definitely been working on Pimp and pseudo-helping with jQuery development. I wrote more javascript during BarCamp NYC than I had in ages, and it was a great time.

This year's Bawls Programming Competition at RIT should be even more fun now that Resig, Darrin, and I are *much* more experienced with JavaScript, CSS, et al. Look forward to whatever project we come up with ;)

So what project should I start or work on next? I'd *love* to get working on Pimp again. Maybe I'll work on that or something similar soon. Now that jQuery has AJAXey support, it's almost worth it to rewrite the whole web interface with it. I'm also hoping to find time to work on my sysadmin time machine project - web-based searchey-goodness for logs and events.

Definite todos:

  • Fix newer xmlpresenter code to work in all browsers (mostly css issues?)
  • Update xmlpresenter project page
  • Write "magic database" thing for storing logs and events
  • Write happy web frontend

Not that many people read this site, but if you've got ideas for projects I'd be interested in, let me know. I'm always up for ignoring structured book learning in favor of more educational adventures. After all, that's why I run this site: to catalogue my research adventures. Notice how (almost?) all of the content here is unrelated to my academics?

jQuery on XML Documents

Ever since BarCampNYC, I've been geeking out working with jQuery, a project by my good friend John Resig. It's a JavaScript library that takes ideas from Prototype and Behaviour, plus some good smarts, to make writing fancy JavaScript pieces so easy I ask myself, "Why wasn't this available before?" I won't bother going into the details of how the library works, but it's based around querying documents. It supports CSS1, CSS2, and CSS3 selectors (and some simple XPath) to query documents for fun and profit.

In the car ride back from BarCampNYC, I asked Resig if he knew whether or not jQuery would work for querying XML document objects. "Well, I'm not sure" was the response. I took the time today to test that theory. Because jQuery does not rely on document.getElementById() to look for elements the way Prototype does, you can successfully query XML documents and even subdocuments of HTML or XML. This is fantastic.

Today's magic was a demo I wrote to pull my rss feed via XMLHttpRequest (AJAX) and very simply pull the data I wanted to use out of the XML document object returned.

The gist of the magic of jQuery revolves around the $() function. This function is generations ahead of what the Prototype $() function provides.

The magic is here, in the XMLHttpRequest onreadystatechange function

// For each 'item' element in the RSS document, alert() out the title.
var entries = $("item", xml.responseXML).each(function() {
   var title = $(this).find("title").text();
   alert("Title: " + title);
});
The actual demo is quite impressive, I think. I can query through a complex XML document in only a few lines of code. Select the data you want, use it, go about your life. So simple!

View the RSS-to-HTML jQuery Demo