




xdotool 2.20100818.3004

Here's a new xdotool release. This one includes lots of new features and some bug fixes. All of the tests pass in the test suite (225 tests, 2779 assertions, 173 seconds).

New commands:

  • get_desktop_viewport and set_desktop_viewport. Some window managers (compiz, for example) use viewports instead of the standard 'virtual desktop' model. This means there is usually one big desktop area of which you have a small view, and switching desktops means moving that view.
  • selectwindow. This command lets you pick a window by clicking on it. Useful for chaining to other commands.
  • windowreparent. Change which window parents another. Useful if you want to move a window to be inside another.
  • windowkill. Kills a window.
  • getwindowname. Prints the window name (title).
  • behave. Allows you to bind additional xdotool commands to run based on certain window events. See the manpage for examples.

Download: xdotool-2.20100818.3004.tar.gz

As usual, if you find problems or have feature requests, please file bugs or send an email to the list. The changelist from the previously-announced release:

  - New commands: 'get_desktop_viewport' and 'set_desktop_viewport'. Some
    window managers don't actually do 'multiple desktops' but instead show
    workspaces as views on a large desktop. For example, Compiz Fusion does
    this, which is why you may see 4 or more desktops, but 'xdotool
    get_num_desktops' reports only one.
  - New command: 'selectwindow' command which picks a window by clicking on it.
  - New command: 'windowreparent' with original patch by Markus Opitz.
  - New command: 'windowkill' will kill a window and the client that owns it
    (via XKillClient)
  - New command: 'getwindowname' for printing the name (title) of a window.
    Requested by Dominik Reichl for KeePass.
  - New command: 'behave'. This allows you to run xdotool commands on window
    events. For example, you can achieve 'mouse-follows-focus' with this:
      xdotool search --class xterm behave %@ mouse-enter windowfocus
    This will find all xterms and, upon mouse entering, will focus that window.
    Not really sure what I wanted with this feature, but it can grow as needed.
  - Support percentages in 'windowsize'. Percentages are relative to the size
    of the screen the window is on. 'windowsize 12345 100% 50%' will make the
    window the full width and half height of the screen.
  - When typing, if we can't find a character in the existing keymap, try
    looking up in keysym_charmap (xdo_util.h) and binding a key if we find one
    that matches. Reported by Victor Baulac on the mailing list. This allows
    us to type '@', for example, even if the current keymap does not have
    this key.

  - No functional changes.
  - Add missing file 'Doxyfile' to the release package so the xdo docs can be
    built from a release tarball. (Reported by Daniel Kahn Gillmor)
  - Skip 'getwindowfocus' misc test if we don't have a windowmanager due to
    a bug/feature in Xvfb and Xephyr if there is only one client and no wm.

Hack for quickly trimming invalid ssh keys

If you reimage a machine or change dns, you may get any of these messages when sshing in:
Offending key for IP in /home/jsissel/.ssh/known_hosts:239
Matching host key in /home/jsissel/.ssh/known_hosts:252
Offending key in /home/jsissel/.ssh/known_hosts:237
Seem familiar? Here's a very quick way to trim those.

#!/bin/sh

if [ "$#" -lt 1 ] ; then
  echo "Invalid arguments."
  exit 1
fi

eval "value=\$$#"

if ! echo "$value" | egrep -q '.*:[0-9]+$' ; then
  echo "Invalid file:lineno format: $value"
  exit 1
fi

echo "$value" | awk -F: '{print "sed -i -e "$2"d",$1}' | sh -x
  • Put this in ~/bin/
  • chmod 755 ~/bin/
  • ln -s ~/bin/ ~/bin/Matching
  • ln -s ~/bin/ ~/bin/Offending
Now the next time you see these messages and want to clear the offending key, just paste the log message, as a command, into your terminal:
jls(~) % Offending key for IP in /home/jsissel/.ssh/known_hosts:239
+ sed -i -e 239d /home/jsissel/.ssh/known_hosts
Makes for a quick fix if you hit these messages in your normal day.

I prefer this to using 'ssh-keygen -R' as the error message has exactly the information you need to clear the bad key.
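To see what's happening under the hood, here's the awk stage expanded on a sample message (the path and line number are just examples):

```shell
# awk splits "file:lineno" on ':' and emits a sed command that deletes
# that line from the file; the script then pipes this to 'sh -x' to run it.
echo "/home/jsissel/.ssh/known_hosts:239" \
  | awk -F: '{print "sed -i -e "$2"d",$1}'
# prints: sed -i -e 239d /home/jsissel/.ssh/known_hosts
```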

DevOpsDays 2010

Spent last Friday in Mountain View at DevOpsDays 2010. It was well executed and had great vendor support (free lunch and t-shirts) - a total victory for the folks involved in organizing! The format was a mix of panel discussions with sets of lightning talks, ignite-style, between them.

I wasn't in the target audience of this event. Frankly, I think that's a good thing, as I already do devops-ey things. I didn't find myself enlightened by any particular topic covered, though it was good to finally put names to faces for some of the more prominent folks in the DevOps movement and other folks I've met online.

The target audience of today's DevOpsDays event was basically two sets. First, people who were in ops (staff or managers, all disciplines: IT, prod ops, netops, etc) who wanted to learn about DevOps. Second, people who wanted to know how to shift their ops culture towards something that is better for the business.

If you aren't yet aware of DevOps, the main idea is that Ops (sysadmins, IT, netops, whatever) needs to focus on providing, and being seen as providing, business value. Many businesses see IT in general as a cost of doing business, which is likely why the common culture is what it is: execs treat IT as "just keep it up and don't ask me for help."

Successful devops implementation at many web companies generally first manifests itself as "smooth deployments," as that is the most common point of failure in the interaction between internal development teams and operations teams. Change causes outages when not done properly. Devops more or less encompasses modern best practices and ways to improve visibility and business value in IT, so don't let any fixation on automated deployments sway you from the larger ideas present here.

Visibility has two fronts. First, technical visibility: you drive towards systems that encourage monitorability and monitoring. Pretty much every discussion at DevOpsDays mentioned the need for metrics; it's hard to argue against you when you have supporting data. Second, company visibility: your group shows that ops provides real value.

xdotool 2.20100623.2949 released

The latest release of xdotool includes a few bug fixes, minor improvements, and one major feature addition. All of the tests pass in the test suite (156 tests, 1892 assertions, 126 seconds).

The major feature is the new command chaining. That is, you can use multiple commands from the same xdotool invocation. Any window querying (search, getactivewindow, getwindowfocus) will save the result for use with future commands on the same invocation. For a simple example, we can resize the current window by doing this:

% xdotool getactivewindow windowsize 600 400
You can also activate the first Firefox window found:
% xdotool search --class firefox windowactivate
Or move all gimp windows to a specific desktop and then activate (switch to) gimp:
% xdotool search --class gimp set_desktop_for_window %@ 2 windowactivate %@

# the "%@" means all windows found in the search. The default when you do not
# specify a window is always "%1" aka the first window found by the search.
The idea for this feature came primarily from detailed suggestions by Henning Bekel. Huge thanks for dumping ideas my way :)

In addition, libxdo is now documented with Doxygen and is available here. Remember, if you have a feature request, it can't hurt to ask (or send patches!). If I have time/energy to work on it, I'll do what I can.

Download: xdotool-2.20100623.2949.tar.gz

As usual, if you find problems or have feature requests, please file bugs or send an email to the list. Changelist from previously-announced release:

  - Added 'window stack' and 'command chaining' support. Basically lets you
    include multiple commands on a single xdotool invocation and saves the last
    window result from search, getactivewindow, and getwindowfocus. For example,
    the default window selector is "%1" meaning the first window. 'xdotool
    search --class xterm windowsize 500 500' will resize the first xterm found.
    All commands that take window arguments (as flags or otherwise) now default to
    "%1" if unspecified. See xdotool(1) manpage sections 'COMMAND CHAINING'
    and 'WINDOW STACK' for more details. This feature was suggested (with great
    detail) by Henning Bekel.
  - To simplify command chaining, all options now terminate at the first
    non-option argument. See getopt(3) near 'POSIXLY_CORRECT' for details.
  - Add --sync support to windowsize.
  - Update docs and output to indicate that 'search --title' is deprecated (but
    still functions). Use --name instead.
  - Fix mousemove --screen problem due to bug in XTEST. Reported by
    Philipp Specht.
  - Fix segfault when invoking xdotool with an invalid command.
    Reported by: Bruce Jerrick, Sven Lankes, Daniel Kahn Gillmor, and Thomas
  - Fix a --clearmodifiers bug caused by an uninitialized value being
    interpreted as the 'modmask', causing us to try to clear invalid modifiers.
    Reported by Hong-Leong Ong on the mailing list.
  - Lots of internal refactoring and documentation improvements.
  - Testing additions for several commands in addition to command chaining.
  - Documented libxdo through xdo.h. Docs can be generated by 'make docs'
    from the xdotool release.
  - libxdo: xdo_window_translate_with_sizehint
  - libxdo: xdo_window_wait_for_size

Debugging java threads with top(1) and jstack.

At work, we're testing some new real-time bidding on ADX and are working through some performance issues.

Our server is jetty + our code, and we're seeing performance problems at relatively low QPS.

The first profiling attempt used YJP, but when it was attached to our server, the system load went up quickly and looked like this:

load average: 2671.04, 1653.95, 771.93
Not good; the load average while running with the profiler attached jumps to a number roughly equal to the number of application threads (3000 jetty threads). Possible PEBCAK here, but needless to say we couldn't use YJP for any meaningful profiling.

Stepping back and using some simpler tools proved to be much more useful. The standard system tool "top" can, in most cases, show per-thread stats instead of just per-process. Unfortunately, top(1) doesn't seem to have a "parent pid" option, but luckily for us, our java process was the only one running as the user 'rfi'. Further, some top implementations can watch a single process (top -p PID), which would be useful here.

If we make top output threads and sort by cpu time, we get an idea of the threads that spend the most time being busy. Here's a sample:

top - 22:29:34 up 87 days, 21:45,  9 users,  load average: 10.31, 12.58, 8.45
Tasks: 3237 total,   5 running, 3231 sleeping,   0 stopped,   1 zombie
Cpu(s): 53.9%us,  2.6%sy,  0.0%ni, 42.3%id,  0.0%wa,  0.0%hi,  1.1%si,  0.0%st
Mem:  16432032k total, 10958512k used,  5473520k free,   149100k buffers
Swap: 18474740k total,    77556k used, 18397184k free,  5321140k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+   TIME COMMAND
32149 rfi       16   0 13.7g 6.8g  13m S 50.7 43.3 134:38.08 134:38 java
28993 rfi       15   0 13.7g 6.8g  13m S  5.5 43.3  13:26.85  13:26 java
28995 rfi       16   0 13.7g 6.8g  13m S  3.6 43.3   8:17.84   8:17 java
32151 rfi       15   0 13.7g 6.8g  13m S  1.9 43.3   5:47.59   5:47 java
29143 rfi       16   0 13.7g 6.8g  13m S  0.0 43.3   4:18.51   4:18 java
28990 rfi       15   0 13.7g 6.8g  13m S  1.0 43.3   3:58.86   3:58 java
28989 rfi       15   0 13.7g 6.8g  13m S  0.6 43.3   3:58.83   3:58 java
28992 rfi       15   0 13.7g 6.8g  13m S  1.0 43.3   3:58.72   3:58 java
28986 rfi       16   0 13.7g 6.8g  13m S  1.0 43.3   3:58.00   3:58 java
28988 rfi       16   0 13.7g 6.8g  13m S  0.6 43.3   3:57.94   3:57 java
28987 rfi       15   0 13.7g 6.8g  13m S  0.6 43.3   3:56.89   3:56 java
28991 rfi       15   0 13.7g 6.8g  13m S  1.0 43.3   3:56.71   3:56 java
28985 rfi       15   0 13.7g 6.8g  13m S  1.3 43.3   3:55.79   3:55 java
28994 rfi       15   0 13.7g 6.8g  13m S  1.0 43.3   2:05.35   2:05 java
29141 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   1:09.66   1:09 java
32082 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:56.29   0:56 java
28998 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:52.27   0:52 java
29359 rfi       15   0 13.7g 6.8g  13m S  3.2 43.3   0:45.11   0:45 java
31920 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:43.46   0:43 java
31863 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:41.67   0:41 java
31797 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:41.56   0:41 java
31777 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:41.31   0:41 java
30807 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:41.22   0:41 java
31222 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:40.70   0:40 java
30622 rfi       15   0 13.7g 6.8g  13m S  3.9 43.3   0:40.56   0:40 java
31880 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:39.77   0:39 java
29819 rfi       15   0 13.7g 6.8g  13m S  4.8 43.3   0:39.15   0:39 java
29438 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:39.11   0:39 java
30363 rfi       15   0 13.7g 6.8g  13m R  0.0 43.3   0:39.03   0:39 java
31554 rfi       15   0 13.7g 6.8g  13m S  0.0 43.3   0:38.82   0:38 java

I included lots of threads in the output above so you can get an idea, visually, of the time buckets you could place each thing in.

The important column here is "TIME+". You can see some clustering in the times, which probably indicates similarity in duties between threads with similar times (all the near-4-hour times are probably of similar duty, etc.).

If you grab a stack trace from your java process, you can map each process id above to a specific thread. You can get a stack by using jstack(1) or sending SIGQUIT to the java process and reading the output.

Java stack traces, for each thread, have similar headers:

"thread name" prio=PRIORITY tid=THREADID nid=LWPID STATE ...
The thing we care about here is the LWPID (light-weight process id), also known as the process id - it appears in the 'nid' field.

Find my java process (must be run as the user running the process):

% sudo -u rfi jps
28982 ServerMain
3041 Jps
If I take the thread with the 2nd highest cpu time from the top output above, 28993, what thread does this belong to? If you look at the stack trace, you'll see that java outputs the 'nid' field in hex, so you'll have to convert your pid to hex first.
% printf "0x%x\n" 28993
0x7141

% grep nid=0x7141 stacktrace
"VM Thread" prio=10 tid=0x000000005e6d9000 nid=0x7141 runnable 

# What about the first pid?
% printf "0x%x\n" 32149
0x7d95

% grep -A3 nid=0x7d95 stacktrace
2010-06-22 03:15:26.489485500 "[email protected] Thread Pool-2999 - Acceptor0 [email protected]:2080" prio=10 tid=0x00002aad3dd43000 nid=0x7d95 runnable [0x00002aadc62d9000]
2010-06-22 03:15:26.489507500    java.lang.Thread.State: RUNNABLE
2010-06-22 03:15:26.489512500   at Method)
2010-06-22 03:15:26.489518500   at
From the above output, I can conclude that the most cpu-intensive threads in my java process are the Jetty socket acceptor thread and the "VM Thread". A red flag should pop up in your brain if a 4-hour-old process has spent 2.25 hours spinning in accept().
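The pid-to-name mapping above is easy to script. Here's a sketch that, given a saved jstack dump, prints the thread name for a decimal LWP id; the dump file name and its contents are made-up stand-ins for a real jstack output:

```shell
# Create a tiny stand-in for a saved jstack dump (real dumps have many threads).
cat > /tmp/stacktrace.sample <<'EOF'
"VM Thread" prio=10 tid=0x000000005e6d9000 nid=0x7141 runnable
"Finalizer" daemon prio=10 tid=0x000000005e6da800 nid=0x7143 in Object.wait()
EOF

tid=28993                          # decimal LWP id, as shown by top
nid=$(printf '0x%x' "$tid")        # jstack prints this id in hex as 'nid'
grep "nid=$nid " /tmp/stacktrace.sample | cut -d'"' -f2
# prints: VM Thread
```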

FYI, googling for "VM Thread" and various java keywords doesn't seem to find much documentation, though most folks claim it has GC and other maintenance-related duties. I am not totally sure what it does.

In fact, if you look at the top 10 pids, none of them are actually anything that runs the application code - they are all management threads (thread pool, socket listener, garbage collection) outside of my application.

32149 [email protected] Thread Pool-2999 - Acceptor0 [email protected]:2080
28993 VM Thread
28995 Finalizer
32151 pool-7-thread-1
29143 pool-3-thread-1
28990 GC task thread#5 (ParallelGC)
28989 GC task thread#4 (ParallelGC)
28992 GC task thread#7 (ParallelGC)
28986 GC task thread#1 (ParallelGC)
28988 GC task thread#3 (ParallelGC)
To be fair to the rest of this app, there are 3000 worker threads that each have around 40 minutes (or less) of cpu time, so the application is doing useful work. Still, it's important to observe where you are wasting time - in this case, the jetty accept thread. The GC-related data may be an indication that garbage collection could be tuned or avoided (through thread-local storage, etc).

Lastly, since each of these IDs is an LWP ID, you can use other standard tools (strace/truss, systemtap/dtrace, etc) on them.

Knowing the accept thread is consuming lots of time should make you run tcpdump if you haven't already, and in this case I see a few hundred new connections per second, which isn't very high and leads us further down the road of debugging.

Apache httpd config to cache the main maven repo

At work, I'm experimenting with Maven as a way to manage java package builds and dependencies.

The maven mirror documentation says pretty explicitly "Do not rsync the entire repo" - it recommends the use of caching proxies. We can do this trivially with apache httpd.

# httpd.conf:
ProxyPass /maven/proxy

# What http path to cache for
CacheEnable disk /maven/proxy
# Where on disk to store the cached data
CacheRoot   /srv/repo/maven/proxy

CacheDirLength 2
CacheDirLevels 3 

# Override default cache expiration and control
CacheDefaultExpire 2419200
CacheMaxExpire 2419200

# Ignore requests to not serve from cache. Maven data never changes.
CacheIgnoreCacheControl On

# Default max file size is 64K. Set to 1GB.
CacheMaxFileSize 1073741824
The above config will take requests to http://yourserver/maven/proxy/... and proxy them through to the main maven repo, caching fetched files locally on your webserver so future fetches are served locally.

You tell maven to use your local repo in ~/.m2/settings.xml:

        <name>Local maven mirror</name>
        <!-- Replace 'repo.local' with whatever your webserver's name is -->
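The snippet above is only a fragment; a complete mirror entry in Maven's standard settings.xml format would look roughly like this (the id and the repo.local URL here are assumptions matching the proxy config above):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>local-mirror</id>
      <name>Local maven mirror</name>
      <!-- Replace 'repo.local' with whatever your webserver's name is -->
      <url>http://repo.local/maven/proxy</url>
      <!-- Route requests for the central repo through our caching proxy -->
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```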
Now all my maven dependency fetches are going through a local repo and files get cached to disk for future requests.

Since I already had a repo server for local rpm and rubygems, pushing this 5 line httpd config change with puppet was practically a no-op in terms of implementation.

SSL handshake latency and HTTPS optimizations.

At work today, I started investigating the latency differences for similar requests between HTTP and HTTPS. Historically, I was running with the assumption that higher latency on HTTPS (SSL) traffic was to be expected since SSL handshakes are more CPU intensive. I didn't really think about the network consequences of SSL until today.

It's all in the handshake.

The TCP handshake is a 3-packet event. The client sends 2 packets, the server sends 1. Best case, you're looking at one round-trip to establish your connection. We can show this empirically by comparing ping and tcp connect times:

% fping -q -c 5 : xmt/rcv/%loss = 5/5/0%, min/avg/max = 112/115/123
Average is 115ms for ping round-trip. How about TCP? Let's ask curl how long tcp connect takes:
% seq 5 | xargs -n1 curl -so /dev/null -w "%{time_connect}\n"
There's your best case. This is because when you (the client) receive the 2nd packet in the handshake (SYN+ACK), you reply with ACK and consider the connection open. Exactly 1 round-trip is required before you can send your http request.

What about when using SSL? Let's ask curl again:

% curl -kso /dev/null -w "tcp:%{time_connect}, ssldone:%{time_appconnect}\n"
tcp:0.117, ssldone:0.408

# How about to google?
% curl -kso /dev/null -w "tcp:%{time_connect}, ssldone:%{time_appconnect}\n"
tcp:0.021, ssldone:0.068

That's a 3.5x jump in latency just from adding SSL to the mix, and this is before we've even sent the http request.

The reason for this is easily shown with tcpdump. For this test, I'll use tcpdump to sniff https traffic and then use openssl s_client to simply connect to the http server over ssl and do nothing else. Start tcpdump first, then run openssl s_client.

terminal1 % sudo tcpdump -ttttt -i any 'port 443 and host'

terminal2 % openssl s_client -connect

Tcpdump output trimmed for content:

# Start TCP Handshake
00:00:00.000000 IP snack.home.40855 > Flags [S] ...
00:00:00.114298 IP > snack.home.40855: Flags [S.] ...
00:00:00.114341 IP snack.home.40855 > Flags [.] ...
# TCP Handshake complete.

# Start SSL Handshake.
00:00:00.114769 IP snack.home.40855 > Flags [P.] ...
00:00:00.226456 IP > snack.home.40855: Flags [.] ...
00:00:00.261945 IP > snack.home.40855: Flags [.] ...
00:00:00.261960 IP > snack.home.40855: Flags [P.] ...
00:00:00.261985 IP snack.home.40855 > Flags [.] ...
00:00:00.261998 IP snack.home.40855 > Flags [.] ...
00:00:00.273284 IP snack.home.40855 > Flags [P.] ...
00:00:00.398473 IP > snack.home.40855: Flags [P.] ...
00:00:00.436372 IP snack.home.40855 > Flags [.] ...

# SSL handshake complete, ready to send HTTP request. 
# At this point, openssl s_client is sitting waiting for you to type something
# into stdin.

Summarizing the above tcpdump data for this ssl handshake:
  • 12 packets for SSL, vs 3 for TCP alone
  • TCP handshake took 114ms
  • Total SSL handshake time was 436ms
  • Number of network round-trips was 3.
  • SSL portion took 322ms (network and crypto)
The server tested above has a 2048 bit ssl cert. Running 'openssl speed rsa' on the webserver shows it can do a signature in 22ms:
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.022382s 0.000542s     44.7   1845.4
Anyway. The point is, no matter how fast your SSL accelerators (hardware loadbalancer, etc), if your SSL endpoints aren't near the user, then your first connect will be slow. As shown above, the crypto piece of the SSL handshake took 22ms, which means roughly 300ms of the SSL portion above was network latency and other overhead.
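Working out that 300ms figure from the measurements above:

```shell
# Times from the tcpdump and 'openssl speed' measurements above, in ms.
total=436   # full handshake (TCP + SSL), from the last tcpdump timestamp
tcp=114     # TCP handshake round-trip
sign=22     # RSA 2048 signature time from 'openssl speed rsa'
ssl=$(( total - tcp ))
echo "ssl portion: ${ssl} ms; non-crypto overhead: $(( ssl - sign )) ms"
# prints: ssl portion: 322 ms; non-crypto overhead: 300 ms
```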

Once SSL is established, though, it switches to a block cipher (3DES, etc) which is much faster and the resource (network, cpu) overhead is pretty tiny by comparison.

Summarizing from above: Using SSL incurs a 3.5x latency overhead for each handshake, but afterwards it's generally fast like plain TCP. If you accept this conclusion, let's examine how this can affect website performance.

Got firebug? Open any website. Seriously. Watch the network activity. How many HTTP requests are made? Can you tell how many of those that go to the same domain use http pipelining (or keepalive)? How many initiate new requests each time? You can track this with tcpdump by looking for 'syn' packets if you want (tcpdump 'tcp[tcpflags] == tcp-syn').

What about the street wisdom for high-performance web servers? HAProxy's site says:

"If a site needs keep-alive, there is a real problem. Highly loaded sites often disable keep-alive to support the maximum number of simultaneous clients. The real downside of not having keep-alive is a slightly increased latency to fetch objects. Browsers double the number of concurrent connections on non-keepalive sites to compensate for this."
Disabling keep-alive on SSL connections means every single http request is going to take 3 round-trips before even asking for data. If your server is 100ms away, and you have 10 resources to serve on a single page, that's 3 seconds of network latency before you include SSL crypto or resource transfer time. With keep-alive, you could pay that handshake cost only once instead of 10 times.
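A quick back-of-envelope for that claim, using the numbers from the scenario above:

```shell
rtt_ms=100      # one network round-trip to a server 100ms away
round_trips=3   # TCP + SSL handshake round-trips per new connection
resources=10    # resources fetched for the page
echo "without keep-alive: $(( resources * round_trips * rtt_ms )) ms"
echo "with keep-alive:    $(( round_trips * rtt_ms )) ms"
# prints:
# without keep-alive: 3000 ms
# with keep-alive:    300 ms
```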

Many browsers will open multiple simultaneous connections to a given webserver when they need to fetch multiple resources. The idea is that parallelism gets you more tasty http resources in a shorter time. If the browser opens two connections in parallel, you'll still incur many sequential SSL handshakes that slow your resource fetching down. More SSL handshakes in parallel also means a higher CPU burden, and memory (per open connection) scales more cheaply than CPU time does: above, one handshake cost 22ms, most of it CPU, which is far more expensive than the memory an idle kept-alive connection holds (it's easier to grow memory than cpu).

For some data, Google and Facebook both permit keep-alive:

% URL=
% curl  -w "tcp: %{time_connect} ssl:%{time_appconnect}\n" -sk -o /dev/null $URL -o /dev/null $URL
tcp: 0.038 ssl:0.088
tcp: 0.000 ssl:0.000

% URL=
% curl  -w "tcp: %{time_connect} ssl:%{time_appconnect}\n" -sk -o /dev/null $URL -o /dev/null $URL
tcp: 0.054 ssl:0.132
tcp: 0.000 ssl:0.000
The 2nd line of output reports zero time spent in tcp and ssl handshaking. Further, if you tell curl to output response headers (curl -D -), you'll see "Connection: keep-alive". This is data showing that at least some of the big folks with massive qps are using keep-alive.

Remember that new handshakes are high cpu usage, but established SSL connections generally aren't, as they use a cheaper block cipher after the handshake. Disabling keep-alive ensures that every request will incur an SSL handshake, which can quickly overload a moderately-used server without SSL acceleration hardware if you have a large ssl key (2048 or 4096 bit).

Even if you offload SSL to special hardware, you're still incurring the higher network latency, which faster hardware can't compensate for. Frankly, in most cases it's more cost effective to buy a weaker SSL certificate (1024 bit) than it is to buy SSL hardware - see Google's Velocity 2010 talk on SSL.

By the way, on modern hardware you can do a decent number of SSL handshakes per second with 1024bit keys, but 2048bit and 4096bit keys are much harder:

# 'openssl speed rsa' done on an Intel X5550 (2.66GHz)
                  sign    verify    sign/s verify/s
rsa 1024 bits 0.000496s 0.000027s   2016.3  36713.2
rsa 2048 bits 0.003095s 0.000093s    323.1  10799.2
rsa 4096 bits 0.021688s 0.000345s     46.1   2901.5

Fixing SSL latency is not totally trivial. The CPU-intensive part can be handled by special hardware if you can afford it, but the only sure way to solve network round-trip latency is to be closer to your user and/or to work on minimizing the total number of round-trips. You can be further from your users if you don't force things like keep-alive to be off, which can save you money in the long run by letting you have better choices of datacenter locations.

xdotool 2.20100601 release

Thanks to some early testing and feedback from the previous xdotool release, I've put together a new release that fixes a few minor problems.

Download: xdotool-2.20100601.2912.tar.gz

As usual, if you find problems or have feature requests, please file bugs or send an email to the list.

Changelist since previous announcement:

  - Add --sync and --clearmodifiers support to mousemove_relative
  - Fix bug in mousemove_relative --polar (Reported by Paul S via mailing list)
  - Change polar coordinates to be 'north'-oriented (0 is up, 90 is right...)
    (Requested by Paul S via mailing list)
  - Changed xdotool search flags. '--title' now means '--name' to match the
    window name (shown in the window manager title bar for the window).
    --title still works, but you will get a warning about deprecation.
  - Walked through all commands and tried to make sure the manpage reflects
    reality and has more detail where needed.

  - Skip certain tests when the requirements aren't met (i.e. no matching
    window manager available, etc). Reported by Daniel Kahn Gillmor.

New eventmachine-tail features

I added block support to EM::file_tail in the eventmachine-tail gem. This lets you give a handler to the tail instead of a class, which could mean less code to write for the common case (handle lines of input):
require "rubygems"
require "eventmachine-tail"

def main(args)
  if args.length == 0
    puts "Usage: #{$0} <path> [path2] [...]"
    return 1
  end

  EventMachine.run do
    args.each do |path|
      EventMachine::file_tail(path) do |filetail, line|
        # filetail is the 'EventMachine::FileTail' instance for this file.
        # line is the line read from the file.
        # This block is invoked for every line read.
        puts line
      end # EventMachine::file_tail
    end # args.each
  end # EventMachine.run
end # def main

exit main(ARGV)

You can use gem to install the latest eventmachine-tail.

You can also pull the latest from github

xdotool 2.20100524 release

New xdotool for your enjoyment.


Major changes:

  • xdotool getmouselocation now supports --shell for outputting something you can simply eval into your shell
  • Added 'sync' support to several commands. The --sync flag, if supported, will wait until the requested operation has completed before exiting.
  • Added --polar (polar coordinate) support to mousemove
  • Fixed some keymap bugs

As usual, if you find problems or have feature requests, please file bugs or send an email to the list.

Changelist since previous announcement:

  - Add --sync flag to many commands. This flag will make any supporting command
    block until the operation completes. For example, a 'windowmove --sync' will
    only exit once the window has moved. Commands supported:
    * windowactivate
    * windowfocus
    * windowmap
    * windowmove
    * windowunmap
  - Fix keymap handling: multiple keyboard layouts are now supported better.
    Added unit tests to ensure consistency in typing across releases.
  - Added polar coordinate support to mouse movement. 'mousemove' and
    'mousemove_relative' support this. Flag is --polar. 'x' becomes angle (degrees)
    and 'y' becomes distance. The default origin is screen center. If you specify
    a window (to mousemove) then the origin is the center of that window.
    Requested by Paul S on the xdotool-users mailing list.
  - internal: Reworked testing to be less flakey
  - internal: split commands into one per C file
  - API changes that required a version bump.
    * xdo_keysequence and friends now take a delay.
  - New libxdo functions:
    * xdo_window_wait_for_map_state
    * xdo_window_wait_for_active
    * xdo_get_window_location
    * xdo_window_wait_for_focus

  - Add support for searching classnames (not just class)
  - Support switched xkbmaps. We do this by checking whether the keycode found
    for a keysym has that keysym as its first or second (normal or shifted)
    mapping; if not, we pretend we didn't find it and request that it be bound
    as if the keysym weren't mapped to any key.

    This is a hack to avoid knowing what other modifiers are required to type
    keysyms mapped at indexes 2 or beyond (0 is normal, 1 is shift, 2 and beyond
    are unknown).
    Verified fixes this:
    Should fix some of this:
  - Add --shell flag to getmouselocation. This will output data you can safely
    eval in your shell.
  - Add platform detection for Darwin when building.
    Should fix:

  - Fix bug in search where --name wasn't working. Test written to cover this.

  - Bug fixes only this release
  - Rewrote tests to use ruby and will now test against Xvfb and Xephyr using:
    no window manager, openbox, and then gnome.
  - Fix build problems on Arch linux (or gentoo?)
  - Fix searching for only visible windows.
  - Fix xdotool search flags: --title, --name, --class
  - Add include for keysym.h to fix compile errors reported on the mailing
    list.