TF2 performance on wine+linux

I recently gave up windows 8 (which is horrible, by the way) to install Fedora on my workstation at home.

I wanted to still play TF2, so while I wait for the steam linux/tf2 beta, I figured wine would work.

I used the fedora wine packages as well as 'winetricks' to install steam (winetricks is awesomesauce). Basically, with winetricks, you just run 'winetricks steam' to install steam. Bam! Done.

To run steam after winetricks installs it, you'll want to do this crazy business, because winetricks installs steam to its own "wine prefix":

WINEPREFIX=$HOME/.local/share/wineprefixes/steam/ wine "C:\\Program Files (x86)\\Steam\\Steam.exe"
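
To save typing that every time, you could drop it into a tiny launcher script (a hypothetical helper of my own, not something winetricks creates):

#!/bin/sh
# ~/bin/steam-wine - launch steam out of the winetricks wine prefix
export WINEPREFIX=$HOME/.local/share/wineprefixes/steam/
exec wine "C:\\Program Files (x86)\\Steam\\Steam.exe" "$@"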

When running TF2, I noticed framerates were pretty crappy. Most of my googling found practically no useful information, except for one or two forum posts indicating CPU affinity as the most likely cause. This made sense given that frame rates were the same regardless of graphical settings (resolution, or features like model quality, etc).

The first step is to enable 'multicore rendering'.

Then, when tf2 is up, run the script below. It looks for the most cpu-hungry threads in hl2.exe (tf2) as well as the steam/wine/xorg processes, then pins them to specific CPUs.

(
  # Get the top cpu-using threads for the 'hl2.exe' process (tf2),
  # so we can pin each to a separate CPU and give it elevated scheduling priority.
  top -bn1 -Hp $(pgrep hl2.exe) \
  | awk '$NF == "hl2.exe" { split($(NF - 1), t, ":"); cputime = t[1] * 60.0 + t[2]; print cputime, $1 }'  \
  | sort -n | tail -3 | awk '{print $2}'

  # Also grab the steam, wineserver, and Xorg processes.
  pgrep 'Steam|wineserver|Xorg'
) \
| awk '{print NR, $1}' \
| sudo xargs -tn2 sh -c 'taskset -p $((1 << ($1 - 1))) $2; renice -n -2 $2' -
Once I run this while tf2 is up, the frame rate doubles (to around 60) and is much more consistent (fewer bursty drops in framerate).
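
If you want to verify the pinning stuck, something like this should show which CPU each hl2.exe thread last ran on (the 'psr' column) and its nice value; the thread id given to taskset is just a placeholder, use one from the ps output:

# Show each hl2.exe thread's last CPU (psr) and nice value
ps -Lo tid,psr,ni,comm -p $(pgrep hl2.exe)
# Or check the affinity mask of one thread id from that list
taskset -p 12345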

Making iptables changes atomically and not dropping packets.

I'm working on rolling out iptables rules to all of our servers at work. It's not a totally simple task, as many things can go wrong.

The first problem is the one where you can shoot yourself in the foot. Install a new set of rules for testing on a remote server, and suddenly your ssh session stops responding. I covered how to work around that in a previous post.

Another problem is ensuring you make your firewall changes atomically: all rules pushed in a single step. In linux, if you have a script with many 'iptables' invocations, running it will make one rule change per iptables command. And what if you write your rules like this?

# Flush rules so we can install our new ones.
iptables -F

# First rule, drop input by default
iptables -P INPUT DROP

# Other rules here...
iptables -A INPUT ... -j ACCEPT
iptables -A INPUT ... -j ACCEPT
If your server is highly trafficked, the delay between setting the default 'DROP' policy and adding the accept rules can mean dropped traffic. That sucks. This is a race condition. There's a second race condition earlier in the script, too: right after the flush, depending on the existing default policy for INPUT, we may briefly accept or drop all traffic. Bad.

One other problem I thought could occur was a state tracking problem with conntrack. If we weren't previously using conntrack, what happens to existing connections when I set a default deny and only allow established connections? Something like this:

iptables -P INPUT DROP
iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 22 --syn -j ACCEPT
I did some testing with this, and I may be wrong here, but it does not drop my existing sessions as I had predicted it would. This is a good thing. It turns out that when this runs, the conntrack table gets populated with existing connections from the network stack, which further helps us avoid dropping traffic when pushing new firewall rules. You can view the current conntrack table in the file /proc/net/ip_conntrack.
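
If you want to double-check before flipping the default policy, you can look for your own ssh session in that table first (on newer kernels the file is /proc/net/nf_conntrack instead):

# Existing ssh connections should show up as ESTABLISHED entries
grep 'dport=22' /proc/net/ip_conntrack | grep ESTABLISHED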

What options do we have for atomically applying a bunch of rules so we don't drop traffic? The iptables tool set comes with 'iptables-save' which lets you save your existing iptables rules to a file. I was unable to find any documentation on the exact format of this file, but it seems easy enough to read. The output includes rules and counters for each table and chain. Counters are optional.
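
For reference, here's a hand-written sketch of what that file looks like (the counters in brackets are made up; the rules are the same ones from above):

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1024:98765]
-A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 --syn -j ACCEPT
COMMIT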

All the documentation I've read indicates that using 'iptables-restore' will apply all of the rules atomically. This lets us set a pile of rules all at once without any race conditions.

So I generate an iptables-restore file and use iptables-restore to install it. No traffic dropped. I generate the file with a shell script, which had one gotcha. I basically take iptables commands and write them to a file with a shell function I wrote, called 'addrule'. However, I have some rules like this:

addrule -A INPUT -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level debug
I quoted the argument in the addrule invocation, but we need to also produce a quoted version in our iptables-restore rule file, otherwise --log-prefix will get set to 'Denied' and we'll also fail because 'TCP:' is not an option iptables expects. It appears to be safe to quote all arguments in an iptables-restore file except for lines declaring chain counters (like ':INPUT ACCEPT [12345:987235]'), defining tables (like '*filter'), or the 'COMMIT' command. Instead of quoting everything, I just quote any argument that contains spaces.

The fix makes my 'addrule' function look like this:

rulefile="$(mktemp)"

addrule() {
  while [ $# -gt 0 ] ; do
    # If the current arg has a space in it, output "arg"
    if echo "$1" | grep -q ' '  ; then
      echo -n "\"$1\""
    else
      echo -n "$1"
    fi
    [ $# -gt 1 ] && echo -n " "
    shift
  done >> $rulefile
  echo >> $rulefile
}

# So this:
#   addrule -A INPUT -j LOG --log-prefix "Hello World"
# will output this to the $rulefile
#   -A INPUT -j LOG --log-prefix "Hello World"
So now the quoted arguments stay quoted. All of that madness is in the name of being able to simply replace 'iptables' with 'addrule' and be good to go. No extra formatting changes necessary.

One last thing I did was to make sure iptables-restore didn't reject my file, and if it did, to tell me:

if iptables-restore -t $rulefile ; then
  echo "iptables restore test successful, applying rules..."
  iptables-restore -v $rulefile
  rm $rulefile
else
  echo "iptables test failed. Rule file:" >&2
  echo "---" >&2
  cat $rulefile >&2
  rm $rulefile
  exit 1
fi
Throw this script into puppet and we've got automated firewall rule management that won't accidentally drop traffic on rule changes.

Shebang (#!) fix.

Most shebang implementations seem to behave contrary to my expectations.

As an example, prior to today, I would have expected the following script to output 'debug: true'

#!/usr/bin/env ruby -d
puts "debug: #{$DEBUG}"
Running it, I get this:
% ./test.rb
/usr/bin/env: ruby -d: No such file or directory
This is because the 'program' executed is '/usr/bin/env' and the first argument passed is 'ruby -d', exactly as if you had run: /usr/bin/env "ruby -d"
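
A quick way to see this single-argument behavior for yourself is a toy script that uses /bin/echo as the interpreter. If '-n' and '-e' were passed as separate arguments, echo would swallow them as options; because the kernel passes them as one argument, echo prints them literally (on Linux, anyway):

% cat demo.sh
#!/bin/echo -n -e
% ./demo.sh
-n -e ./demo.sh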

My expectation was that the above would behave exactly like this:

% /usr/bin/env ruby -d test.rb
debug: true
It doesn't. The workaround, however, is pretty trivial. It's only a few lines of C to get me a program that works as I want. I call the program 'shebang'. Why is it C and not a script? Because most platforms have a requirement that the program being executed from the shebang line be a binary, not another script.

#!/usr/local/bin/shebang ruby -d
puts "debug: #{$DEBUG}"
Now run the script again, with our new shebang line:
% ./test.rb
debug: true
Simple and works perfectly.

Subversion 1.5 on Fedora 9

jls(~) % sudo yum install subversion
Loaded plugins: refresh-packagekit
Setting up Install Process
Parsing package install arguments
Package subversion-1.4.6-7.x86_64 already installed and latest version
I had hoped (hope is not a strategy) Fedora would have given me svn 1.5 by now. Nope.

To get svn 1.5, rather than ask fedora or google, I just built it myself. I needed to 'yum install neon-devel' and used './configure --with-neon=/usr --with-ssl --with-zlib=/usr/lib' to configure subversion. Otherwise the build/install went fine.
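
For the record, the whole thing was roughly this (a sketch; the exact tarball name depends on which 1.5.x release you grab):

sudo yum install neon-devel
tar -zxf subversion-1.5.x.tar.gz
cd subversion-1.5.x
./configure --with-neon=/usr --with-ssl --with-zlib=/usr/lib
make
sudo make install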

Huzzah!

Mounting partitions within a disk image in Linux

When you create a loop device from a disk image with losetup, it doesn't bother reading the partition table from the disk image, so you don't get nice, easy access to, for example, /dev/loop0p1 for partition 1.

FreeBSD seems to get this right, as I recall, but Linux does not.

fdisk outputs these devices, but they don't exist:

% sudo fdisk -l /dev/loop0 | grep '^/'
/dev/loop0p1   *           1        1043     8377866    7  HPFS/NTFS
/dev/loop0p2            1044        2088     8393962+   7  HPFS/NTFS
Linux's mount(8) command gives you the '-o offset=XXX' option. The offset is a byte offset, and lets you decide how far into your disk image you want to start. However, fdisk doesn't output byte offsets; it outputs cylinders (or sectors).

Not to worry, it helpfully outputs the conversion between the units and bytes:

% sudo fdisk -l /dev/loop0 | grep Units
Units = cylinders of 16065 * 512 = 8225280 bytes
Knowing this, let's use awk to generate the offsets for us:
% sudo fdisk -l /dev/loop0 \
  | awk '/^Units/ { bytes=$(NF-1) } /^\// { print $1 "[" $NF "]: mount -o offset=" $3 * bytes }'
/dev/loop0p1[HPFS/NTFS]: mount -o offset=8225280
/dev/loop0p2[HPFS/NTFS]: mount -o offset=17174384640
Now simply mount them with 'mount -t ntfs -o loop,offset=XXXX mydiskimage /mnt' or whatever you want :)

VMware Server 2.0 Beta

I upgraded my vmware machine from vmware 1.3 to vmware 2.0 beta. The install process was much nicer than the last two releases, for simple reasons: I didn't have to hack the perl installer script to keep it from misbehaving, and I didn't have to mess around compiling or finding my own vmware kernel modules. Everything Just Worked during the install.

On the downside, vmware-server-console is deprecated. Vmware Server 2.0 uses Vmware Infrastructure, which appears to be tomcat+xmlrpc and other things. The New Order seems to be that you manage your vms with the web browser, which isn't a bad idea. However, we must remember that Good Ideas do not always translate into Good Implementations.

The web interface looks fancy, but the code looks like it's from 1998. The login window consists of layers and layers of nested tables and a pile of javascript, all in the name of getting the login window centered in the browser. You can see the page align itself upon rendering, even on my 2GHz workstation with Firefox. Horrible.

Once you log in, you're presented with a visually-useful-but-still-runs-like-shit interface. The interface itself appears useful and nice, but again fails to respond quickly presumably due to the piles of poorly written javascript involved.

Since VMware thought this was a fresh install, it didn't know about any of my old virtual machines. Adding them using the web interface caused vmware to crash. Oops. So, I found a vmware infrastructure client executable randomly in the package; "find ./ -name '*.exe'" will find it for you. I copied this to my windows box, installed it, and used it to re-add my old vmware machines.

Unfortunately, "raw disks" are disabled in this free version of vmware server. I'm not sure why. My Solaris VM uses raw disks for its zfs pool, so this was a problem. Luckily, this is purely a gui limitation and not a vmware limitation. To repair my Solaris VM, I created a new virtual machine with the same features and told it where its first disk lived (the first disk was a normal file-backed vmware disk image). After that, I looked at the old vm's .vmx file and copied the lines detailing the raw drives into the new .vmx file:

scsi0:1.present = "true"
scsi0:1.filename = "zfs-sdb.vmdk"
scsi0:1.deviceType = "rawDisk"
scsi0:2.present = "true"
scsi0:2.filename = "zfs-sdc.vmdk"
scsi0:2.deviceType = "rawDisk"

Everything's back up and running sanely now in vmware. Hurray :)

Ubuntu 64bit / vmware server

Now that I have all the hardware/bios problems fixed on this system, I've started installing virtual machines. However, getting vmware to go was no small task.

  • The install script failed to build the vmmon kernel module, so I hacked the script to not do it.
  • Ubuntu has packages for vmmon and vmnet, but installs them in /lib/modules/.../vmware-server/ instead of /lib/modules/.../misc/ where the vmware init.d script expects them; the init script also looks for 'foo.o' while the ubuntu package provides 'foo.ko'. Hacked both with symlinks (sketched below, after this list).
  • I couldn't verify my license key because vmware-vmx would fail to run with an error of "No such file or directory". Turns out this really means "You are running a 32 bit binary and I can't find the libraries it needs". The solution is to apt-get install ia32-libs and possibly others.
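
Roughly what that symlink hack looked like, reconstructed from memory (the exact kernel version path and module locations may differ on your system):

# Make the ubuntu-packaged .ko modules visible where the vmware init script
# expects them, under the .o names it looks for.
sudo mkdir -p /lib/modules/$(uname -r)/misc
sudo ln -s /lib/modules/$(uname -r)/vmware-server/vmmon.ko /lib/modules/$(uname -r)/misc/vmmon.o
sudo ln -s /lib/modules/$(uname -r)/vmware-server/vmnet.ko /lib/modules/$(uname -r)/misc/vmnet.o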
There are probably other hacks I had to do, but it's 5am and I don't remember them right now.

Booting from SATA on ASUS K8N-DL.

So my new fancy computer is here. Turns out I originally bought the wrong form factor motherboard, because I had a silly moment.

Either way, I've now got the system running, but not without some serious battle scars.

Ubuntu happily installed (very slow to partition/newfs stuff though). However, upon reboot, the bios clearly couldn't see the boot drive. My SATA drives are plugged into the on-board Silicon Image RAID controller with no raid configurations set up.

Guessing, I told the raid controller to create a 1-disk concatenation with the disk I wanted to boot from. Voila, the BIOS sees the one disk now and I can boot from it. Linux finds the other two SATA drives when booting.

Sigh..

Also, when Ubuntu says "Computing the new partitions" it really means "I'm creating a new partition right now. Go get something to eat, I'm going to be here for a while." Large partitions, for some reason, take quite some time to create.

Fedora's package manager

-bash-3.1# yum install django
No Match for argument: django
Nothing to do

-bash-3.1# yum install Django
Downloading Packages:
(1/1): Django-0.95.1-1.fc 100% |=========================| 1.5 MB    00:02
Ahh. Clearly.