jQuery autofill version 2

This post marks 4 in one day. Whew!

Resig and I were bouncing ideas around after I made the form filler, and we came up with something that fits very nicely into the jQuery API (in the form of something very pluggable).

You'll need the following code, which extends jQuery's functionality. Basically, it adds 'saveAsCookie' and 'loadAsCookie' function calls to $() objects.

$.fn.saveAsCookie = function(n,t){
   return this.each(function(){
      // cookie key = optional namespace prefix + element name (or id)
      createCookie( (n || '') + (this.name || this.id), this.value, t );
   });
};

$.fn.loadAsCookie = function(n){
   return this.each(function(){
      this.value = readCookie( (n || '') + (this.name || this.id) );
   });
};
You can safely put that code somewhere and load it anywhere you need autofill. Reusable code is awesome.

Now, we don't want to cache *all* input elements, because only some contain user input and only some need to be saved. For this, I put the class 'cookieme' on all input elements I wanted to save.

The arguments to 'saveAsCookie' and 'loadAsCookie' are namespace prefixes. This way, you can avoid namespace collisions with other cookies. All of my autofill cookies will be prefixed with 'formdata' and suffixed with the element name or id attribute.

So, we squished the code down to 6 lines, 4 of which are actually meaningful.
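To make the naming scheme concrete, here's a standalone sketch of how a cookie key gets built, an optional namespace prefix plus the element's name (or id); the sample element here is hypothetical.

```javascript
// How each cookie key is built: an optional namespace prefix plus the
// element's name (or id) -- mirroring the scheme described above.
function cookieKey(prefix, el) {
   return (prefix || '') + (el.name || el.id);
}

// A hypothetical <input name="email"> saved under the "formdata" namespace:
var key = cookieKey("formdata", { name: "email", id: "" });
// key is "formdataemail"
```

Wiring it up is then just a couple of calls: something like `$("input.cookieme").loadAsCookie("formdata")` when the page loads, and the matching `saveAsCookie("formdata")` call in the form's submit handler.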


jQuery+cookies = trivially simple form autofill

It's always nice when websites you commonly visit remember things about you, or at least give the perception that they remember things about you.

The Pyblosxom comment plugin doesn't autofill the form. That's too bad. I don't really want to dig into the python code to do any cookie-setting on submission, because I have never looked at the code and thus am unfamiliar with the effort required for such a change. Luckily, we can use javascript to store data in cookies too!

I love jQuery, so that's what I'll use for this little hack. On the comments page, I add the following javascript:

   // Cookie names for each saved field (the exact names here are illustrative)
   var uname = "uname";
   var uemail = "uemail";
   var usite = "usite";

   function saveCommentInformation() {
      // Save user information from the form!
      createCookie(uname, $("input[@name='author']").val());
      createCookie(uemail, $("input[@name='email']").val());
      createCookie(usite, $("input[@name='url']").val());
   }

   function initCommentForm() {
      // Autofill user information if available
      $("input[@name='author']").val(readCookie(uname) || "");
      $("input[@name='email']").val(readCookie(uemail) || "");
      $("input[@name='url']").val(readCookie(usite) || "");

      // Save comment information when form is submitted
      // (the bare "form" selector is a guess -- use whatever matches your markup)
      $("form").submit(saveCommentInformation);
   }

   $(document).ready(initCommentForm);
That's all we need. Whenever someone submits, we will store useful information in a cookie. Whenever that person comes back, we'll pull the data out of the cookie and put it back in the form. User experience is happier, at least as far as I am concerned (as a user).

If you are wondering about the 'readCookie' and 'createCookie' functions, you can find them online.
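For reference, a sketch of what those helpers typically look like: these are the widely circulated plain-javascript cookie functions, so the details may differ from the exact versions I use.

```javascript
// Classic plain-javascript cookie helpers (a sketch).
function createCookie(name, value, days) {
   var expires = "";
   if (days) {
      var date = new Date();
      date.setTime(date.getTime() + days * 24 * 60 * 60 * 1000);
      expires = "; expires=" + date.toGMTString();
   }
   document.cookie = name + "=" + value + expires + "; path=/";
}

function readCookie(name) {
   // document.cookie is a single "a=1; b=2" style string
   var nameEQ = name + "=";
   var parts = document.cookie.split(';');
   for (var i = 0; i < parts.length; i++) {
      var c = parts[i];
      while (c.charAt(0) === ' ') c = c.substring(1);
      if (c.indexOf(nameEQ) === 0) return c.substring(nameEQ.length);
   }
   return null;
}
```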

Update: Check out this followup post that implements this in a more jQuery-like way.

Pyblosxom contributed plugins for 1.3.x finally out

Subject says it all. I've really wanted comments working in pyblosxom 1.3 for quite some time. However, all googling points to old versions of the comments plugin that only work on the older versions. I checked the pyblosxom page tonight and was pleasantly surprised that a contributed plugin set had been released early this month.

If you use pyblosxom 1.3.x, plugin updates are ready for your use.

On that note, comment functionality is FINALLY here on this site. To those of you reading this site, feel free to comment! I love feedback.

Go to the pyblosxom website

All known-broken entries now fixed

Thanks to my handy vim-pyblosxom hack, I took time today to go back through all entries I had marked broken during the move from my old site to here.

Those entries should now be good, with fixed links, etc. Ahh, sweet productivity.

Content mostly moved

I woke up at 1am and my body decided it was time to be awake. So, it's now almost 5am. From what I can tell, I'm finished moving data and fixing problems with the projects and articles sections.

The only changes left are to update the older entries with link and formatting fixes. Whew! Good thing there's only 150+ entries to look at. </sarcasm>

I may write a quick wrapper using mod_(perl|python|whocares) around pyblosxom to do page caching. There's no sense in regenerating a page (running python, reading files, etc) every time a page hit occurs. There are caching abilities in pyblosxom itself, but neither cache method (pickle or shelve) actually made page loads faster in testing.

If you have comments about the new site, let me know.

A few steps closer to done

Spent lots of time today updating content, fixing links, etc. I rewrote most of my about section, seeing as how I haven't done so in a very long time.

I'm starting to really like the new layout I've made. It feels a bit more relaxed and less rigid than the old layout.

I also spent a few hours of boredom playing in Gimp making a cute, running, stick man icon to go with my "halfway to the finish line" tag line. As I've said before, attempting to do precision mouse movement with a mouse nipple (trackpoint) is very taxing.

There's still lots of content (100+ posts, and other stuff) that needs to be updated to fix links and other issues. I'm hoping to have everything done by the end of the weekend.

Site move almost done

Spent a bit playing with pyblosxom and such. It's crazy easy to set up. It meets my requirements in that it lets me post things, has useful plugins I want to use, and doesn't require a massive database system.

There's still lots to do:

  • Write a new script to let me post things with my own metadata
  • Integrate CVS/SVN repo information?
  • Add projects and articles sections
  • Integrate project and article pages somehow into the blog

New article: ssh security and idiot administrators

Long story short: I found a security-hole type thing in some of RIT's servers. I contacted ITS (RIT's systems and network team) about the problem. I heard from the grapevine that they wouldn't fix it, despite the easy fix, so as a way of venting anger towards sysadmins who respond negatively when you report a problem, I wrote this article. It explains why setting my shell to /bin/false is not preventing me from using ssh on your machine.

Link: /articles/ssh-security

Search engine referrer URLs and your website

Lots of websites are similar to mine: a front page containing recent entries. I've noticed that quite a few search engine requests actually direct people to the main page long after the content has disappeared into the archive. Personally, when I go searching on the net for "things" and I find some guy's blog whose entry of interest to me has been gone to his archive for months, I get mad.

So what can we do? Surely there are plenty of solutions, right? I mean, with wonderful things like RDF, there's got to be a way to tell search engines where the data is going to be living on a permanent basis. If not, there are still a few things we can do. First, I think it is crucial that websites provide a means of searching their content. Second, some websites will take the referrer URL into account and use it to help display the page (sometimes this bothers me, sometimes I like it). This is most often seen in websites highlighting the search terms they see in your Google referrer URL.

While this is helpful, it's not always useful. Blogware ought to recognize that the front page is dynamic and be able to understand search engine referrers. Using this knowledge, it should display (in some meaningful and useful way) a localized search result to the user. This means if I search for "ppp over ssh" and google points me to my front page, the site needs to realize that my post about 'ppp over ssh' probably isn't there anymore, and it should ideally point me at the ppp over ssh article.
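A sketch of the first half of that idea, pulling the search terms out of a Google-style referrer URL. The `q` parameter name and the parsing details are assumptions; real engines vary.

```javascript
// Extract the search terms from a Google-style referrer URL.
// Only handles a "q=" query parameter; other engines use other names.
function extractSearchTerms(referrer) {
   var match = /[?&]q=([^&]*)/.exec(referrer || "");
   if (!match) return null;
   // '+' encodes spaces in query strings
   return decodeURIComponent(match[1].replace(/\+/g, " "));
}

// extractSearchTerms("http://www.google.com/search?q=ppp+over+ssh")
// returns "ppp over ssh" -- the blog could then search its own archive
// for those terms and offer the matching permalink instead of just
// showing the front page.
```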

Certain features such as text highlighting are quite useful, but giving the user a way to turn them off to ease readability is necessary, in my mind. There are other features I imagine could be incorporated here as well, but I haven't actually put much thought into it right now; perhaps later ;)

A website's goal is to convey information. If this information cannot be found quickly and easily, users will go elsewhere. For this reason, I believe it is crucial for website maintainers to periodically look at things like webserver logs for search engine referrers. If google is sending your users to the wrong page, you have two options: make the correct page more visible (difficult with current search engine algorithms, which aren't very intelligent), or make your website smarter.

xml for articles

I needed a neat way to write and present articles. I got bored and wrote a little xsl script that turns my happy new article xml into some html-ish stuff. So far it's looking very cool. For instance, it will do automatic table-of-contents generation with proper anchor tags, etc. I'll post more on this later when it's finished. I'd finish it tonight, but I'm tired.