Networking upgrade - WRT54GL

A few days ago, I noticed that for whatever reason my net4501 box was seemingly incapable of routing packets from Comcast to the local LAN faster than 800KB/sec. If I removed some PF rules (such as scrubbing) I could get 900KB/sec, but even then 100% of the CPU was consumed by interrupts (as reported by top(1)).

So, I went to Fry's and picked up a WRT54GL. Got it home, installed dd-wrt on it, and am now happy. PowerBoost (Comcast's "here's lots of bandwidth for the first few seconds of your packet stream") works, and I can download at full 8mbit (1MB/sec) as I am supposed to. Sweet.

This frees up my Soekris net4501 box for new projects. Bonus :)

Interesting FreeBSD rc.conf network option

In rc.conf, I can put:
And do /etc/rc.d/netif restart bge0

and we get:

% ifconfig bge0
        inet netmask 0xffffff00 broadcast
        inet6 fe80::20a:e4ff:fe3f:92ee%bge0 prefixlen 64 scopeid 0x1 
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffff00 broadcast
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffffff broadcast
        inet netmask 0xffffffff broadcast
        ether 00:0a:e4:3f:92:ee
        media: Ethernet autoselect (none)
        status: no carrier
Neat. That's one way to take an entire subnet.
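One way to grab a whole subnet's worth of aliases like that is rc.conf's ipv4_addrs range syntax; a hypothetical example (these addresses are made up):

```
ipv4_addrs_bge0="192.168.0.1-254/24"
```

The first address in the range comes up with the real /24 netmask and the rest come up as /32 aliases, which is consistent with the ifconfig output above.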

Routing all traffic through VPN

So, I have a pptp vpn server running in my apartment. I desire this setup:
I VPN to my apartment. *All* traffic will go through this vpn
PPP can negotiate IP-level information such as DNS servers and "Here's your IP" using IPCP, but it doesn't seem to be able to share routes. My local ppp.conf can say add default HISADDR, and suddenly all my traffic wants to go through the vpn. However, once I do this, I lose all connectivity to the vpn itself because it is off-subnet: my machine forgets how to route data to the vpn server. Oops!

Is there a way to have ppp add an additional route that I want? Specifically, I want to take the existing known gateway (say, my wifi gateway) and do: add [vpnhostname] [currentroute] and then add a default route for the tunnel. This will allow all traffic to want to go through the tunnel, but still allow the OS to know how to *get* to that tunnel.

A hacky solution involves some pre-vpn discovery. I need to figure out what my default route is. Once I know that, I can simply add a single line in my ppp.conf and I have all traffic routing through my apartment.

 add myvpnhostname mycurrentdefaultroute
 add default HISADDR
These two lines create two routes. The first keeps the system aware of how to reach the vpn server; the second makes the vpn gateway the default route.

While this is suboptimal, it is easy to automate. My vpn script can simply generate a new ppp.conf and grab the default route with:

nightfall# netstat -rn -finet | awk '/^default/ { print $2 }'
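A minimal sketch of that automation, with the netstat output simulated so the extraction logic is visible (the hostname is a stand-in; on a real box you'd pipe netstat in):

```shell
# Stand-in for `netstat -rn -finet` output.
routes='default 10.0.0.1 UGS 0 0 em0'

# Pull out the current default gateway.
defroute=$(printf '%s\n' "$routes" | awk '/^default/ { print $2 }')

# Emit the two ppp.conf route lines.
printf ' add myvpnhostname %s\n add default HISADDR\n' "$defroute"
```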

Apartment networking, v1

I've finally got non-free internet access. Prior to that, I was using Google's free wifi. Turns out there's a wireless node quite close to my apartment. To get online, I used my soekris net4501 w/ wireless card to associate to google's wifi. Google wifi rocks, it's so nice. Internally, I used dhcp and nat to provide multiple machines with network access through the soekris box, and thus google wifi. This worked quite well.

Now that I have Comcast, I can use the wireless card in the soekris as an access point, rather than a client. The setup is as follows:

  • wired subnet: (gateway on soekris)
  • wireless subnet: (gateway on soekris)
  • vpn subnet: (gateway is vpn server)
  • vpn/dhcp/dns server running in FreeBSD on vmware on Windows
  • dhcprelay on soekris relaying dhcp requests from wifi to wired.
  • nat everything through the soekris box, which connects to Comcast
  • dhcp with ddns so I don't have to remember IP addresses
So far, everything's working well. My new Dell (2.8GHz/1GB) runs vmware well. With Candice's help, I was able to get a poptop server going quite easily. Now I can vpn into my apartment from Windows and FreeBSD, which is good if I want an easy, secure connection while I'm on wifi. I'll post a howto about poptop+freebsd later.

The next step is to "secure" wireless. I don't care to block people, because someone will just get around it. I plan on filtering unauthorized wireless access, limiting it to ssh/http/https/icmp/dns and little else. Bandwidth-limited, of course. My traffic is more important than yours!

After that, I'd like to automate network maintenance. That is, have a single script that will push changes to wherever is necessary: firewall, dhcp, dns, vpn, whatever. Then, perhaps some network optimizations such as a transparent squid proxy, etc.

I'm hoping that I can work on my pam_captcha research soon, too, now that I have a machine with a real IP online.

Doing this network setup has been quite the refresher on DNS, DHCP, et al. I'd prefer having this kind of crap documented, so I'll hopefully get around to writing an article about it.

OpenLDAP authenticating against saslauthd

I've been doing research on the internets trying to get OpenLDAP to allow simple binds that can authenticate against Kerberos. Turns out the default SASL support only handles GSSAPI when talking to Kerberos V servers. This means you can only authenticate if you have a kerberos TGT.

The problem with SASL/GSSAPI is that address book clients aren't going to support much beyond simple authentication over SSL. Thusly, we need a way to use a simple bind over SSL and still authenticate against Kerberos.

The LDAP Server's slapd.conf has the following to translate between gssapi auth DNs and real user object DNs:

authz-policy from
authz-regexp "^uid=([^,]+),cn=gssapi,cn=auth" "uid=$1,ou=Users,dc=csh,dc=rit,dc=edu"
This allows GSSAPI authentication:
nightfall(~) % ldapwhoami
SASL/GSSAPI authentication started
ldap_sasl_interactive_bind_s: Local error (-2)
        additional info: SASL(-1): generic failure: GSSAPI Error:  Miscellaneous failure (see text) (open(/tmp/krb5cc_1001): No such file or directory)

nightfall(~) !1! % kinit psionic
[email protected]'s Password: 
kinit: NOTICE: ticket renewable lifetime is 0
nightfall(~) % ldapwhoami
SASL/GSSAPI authentication started
SASL username: [email protected]
SASL installing layers
Result: Success (0)
I have a TGT and can authenticate to LDAP over SASL (my ldap.conf defaults to sasl+ssl). However, when you try to do a simple bind:
nightfall(~) % ldapwhoami -x -D 'uid=psionic,ou=users,dc=csh,dc=rit,dc=edu' -W 
Enter LDAP Password: 
ldap_bind: Invalid credentials
On the slapd side, you'll see errors like:
==> bdb_bind: dn: uid=psionic,ou=users,dc=csh,dc=rit,dc=edu
SASL Canonicalize [conn=0]: authcid="[email protected]"
SASL Canonicalize [conn=0]: authcid="[email protected]"
SASL [conn=0] Failure: Could not open db
SASL [conn=0] Failure: Could not open db
Googling around will tell you that lots of people put the following in their "slapd.conf" files:
pwcheck_method: saslauthd
saslauthd_path: /var/run/saslauthd/mux
Now, you'll recognize that this format of "token: value" doesn't match the normal slapd.conf. There's a reason for this: This isn't OpenLDAP's slapd.conf file. It's the config file for SASL! From what I gather, saslauthd is working similarly to the way PAM works, in that it sees a service trying to use it and uses that particular service's config file. I believe saslauthd comes with a standard Sendmail.conf for example usage.

It took me several hours to find this fact out. My slapd SERVER config file is located here:

While the SASL "slapd.conf" authentication config file is located here:

If you can't find the right place, look for 'Sendmail.conf' somewhere near where you installed sasl or saslauthd.

Anyway, once we've added this magic config file and told it to use saslauthd, the SASL library that slapd uses will attempt communication with saslauthd over the /var/run/saslauthd/mux socket. I ran saslauthd like this:

/usr/local/sbin/saslauthd -a kerberos5 -d
The directory /var/run/saslauthd was owned by 'cyrus', so I changed the group ownership to 'ldap' so that the slapd server (running as 'ldap') could access the socket. This is a necessary step; otherwise you will get Permission Denied errors when permissions disallow slapd from accessing the socket or its directory.

Now, once we have saslauthd running, the sasl library slapd.conf, and the moons aligned, we can perform a simple bind:

nightfall(~) % ldapwhoami -x -D 'uid=psionic,ou=users,dc=csh,dc=rit,dc=edu' -W 
Enter LDAP Password: 
Result: Success (0)
saslauthd, in debug mode, will say something meaningful such as:
saslauthd[28910] :do_auth         : auth success: [user=psionic] [service=ldap] [realm=CSH.RIT.EDU] [mech=kerberos5]
saslauthd[28910] :do_request      : response: OK
Whew! My LDAP Authentication journey is complete. I'll post my configs later once I have a full directory system implemented here. For now, I am elated that gssapi and simple binds both work for eventually authenticating against our Kerberos server.

ISTS4 wrapup

This year's ISTS security competition has come and gone. My team was robbed of victory.

Our defensive strategy this year was some trivial security through obscurity combined with some very clever hardening. Using FreeBSD on all of our machines, we ran all of our services on one machine leaving the remaining 3 machines for attacking and forensics.

All services ran inside a single jail. The creation of the jail was done mostly with rsync to copy the freebsd base system. Inside this jail, we ran sshd, ftpd (via inetd), sendmail, popd, and apache. The jail had several mechanisms to limit malicious user activity. These include pseudo-quotas, login.conf user limits, etc.

The Plan

Run all services inside a jail. Use arpd to spoof unallocated addresses in our /24 network. Use a firewall to redirect all connections to any spoofed addresses on any ports to the real SSH server. That means we'll have almost 13 million ssh "servers" that appear to be running. One of these "servers" is the real one you can actually get a login shell for. The plan was wrapped around the assumption that the latest versions of apache, popd, and sendmail are not going to be exploitable. Generally this is a safe assumption especially in a small-scale competition like this one. So, the tools/software we used here were as follows:
  • FreeBSD 6.0
  • Sendmail 8.13.4 (freebsd 6 default)
  • popd 2.2.2a
  • default ftpd run from inetd
  • default sshd (openssh 4.2p1)
  • Apache 1.3.34

The Firewall

I am most familiar with PF, so that's the firewall we used. The pf config was pretty short.

# Redirect real-service ports first
rdr inet proto tcp to $web_ip port 80 -> port 80
rdr inet proto tcp to $mail_ip port 25 -> port 25
rdr inet proto tcp to $mail_ip port 110 -> port 110
rdr inet proto tcp to $ftp_ip port 21 -> port 21
rdr inet proto tcp to $ftp_ip port 20 -> port 20

# The REAL ssh "server"
rdr inet proto tcp to $ssh_ip port 31975 -> port 29

# Pretend everything else is ssh
rdr inet proto tcp port 1:49152 -> port 22

# Make everything pingable, too
rdr inet proto icmp ->
You'll notice the "real ssh server" is directed to port 29. We'll cover sshd next and why this is important.

The SSH Server (inside the jail)

There's nothing *too* special about our ssh config. I set it to listen on port 22 and port 29. Port 29 is used to verify that you are connecting to the "real" server.

Network Configuration

The actual machine only had 2 IPs assigned to it. The host address,, and the private address that the jail ran from, The rest of the network was spoofed using arpd:
arpd -d
So now anyone who tries to touch our network will get a response from any ip they hit. This is similar to a honeyd or labrea approach, but better. Labrea can successfully tarpit people who don't know how to tell real hosts from fake ones, but you can almost always easily distinguish labrea-tarpitted (faked) hosts from real ones. Another solution might have involved honeyd, but I wanted a real, usable service. Since the only damage I felt anyone could incur would be from the shell, I wanted to keep users busy by baiting them with working ssh services that they simply didn't have real shells on.

Now, SPARSA's rules stated that no arp poisoning was allowed. I don't consider this arp poisoning because I could easily accomplish the same thing arpd supplied with this one-liner, and without true spoofing:

jot 253 1 | xapply -fv 'ifconfig em0 alias 10.102.1.%1 netmask 0xffffffff' -
It's not spoofing if I actually have all of the IPs on the network, now is it?
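For reference, since xapply isn't in the base system, here's a plain-sh equivalent of that one-liner. It prints the commands instead of running them, so you'd pipe it to sh to actually add the aliases:

```shell
# Generate the 253 ifconfig alias commands without jot or xapply.
cmds=$(i=1
while [ "$i" -le 253 ]; do
  echo "ifconfig em0 alias 10.102.1.$i netmask 0xffffffff"
  i=$((i + 1))
done)
printf '%s\n' "$cmds"
```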

The Shell

#!/bin/sh
PORT=`echo "$SSH_CONNECTION" | awk '{print $4}'`
trap - INT

if [ "$PORT" -eq 22 ]; then
        # Fake server: print some useless output, then exit.
        sl -laF
        cat /root/mario
        sleep 3
else
        # Real server: hand out a login shell.
        /bin/tcsh -l
fi

This script was called 'happyshell' and each team's account had this shell. The script looks at SSH_CONNECTION for the port they sshed into the machine with. If it's 22 (the "fake" server), then they get some useless output printed to their screen, and it quits. If they hit the real server, it gives them tcsh. Very simple. If you want to know what '/root/mario' contained, check it out here: mario ascii picture

If you attempted to login via ssh to any IP:PORT other than you would get a message that would annoy you instead of a shell. Perfect, but you could still write a script to attempt to login and find the real ssh server, so we needed something else to slow you down.

Enter pam_captcha

pam_captcha was an idea Rusty, Dan, and I came up with as a solution to prevent scripts from attempting logins, as well as to annoy and deter users who wanted to log in and be naughty. We needed it to be difficult-to-impossible to script logins to find our real ssh server.

We initially thought a 10 second sleep delay would be sufficient, but as we discussed it further we realized we could ask the authenticating user questions to verify that they were human. The technical term for this kind of challenge-response authenticator is "captcha" - read about captchas on wikipedia.

I spent a few hours the weekend before the competition (last weekend) writing pam_captcha. There are currently 3 kinds of captchas. The first is identifying a random string that's been run through figlet, turning it into ASCII art. The second captcha is a simple math problem, also run through figlet; users must solve this math problem to continue. The third captcha has no real practical uses, but in the context of the competition it would be both annoying to users and hilarious for me. It involves users performing physical activities, such as stealing a competitor's hat or singing a song. Verification was done by a human who then alerts the computer that this person is a human. I called this 3rd captcha "Dance Dance Authentication" or DDA.

I had to turn off DDA after the first 2 hours due to complaints. This was fine because I only wrote it for humor's sake. The other two captchas stayed enabled throughout the competition.

Outside of this competition, pam_captcha will prevent, or at least deter, script kiddies from bruteforcing login attempts via ssh. So if you're interested in preventing this, go ahead and use it. It works on Linux and FreeBSD.


The Pseudo-Quotas

I say 'pseudo' because they aren't actual quotas. I used FreeBSD's mdconfig to create several vnode-backed (file-backed) disks, one for each user's home directory. The script I wrote to do this automatically can be viewed here. It created a 30 meg filesystem backed by a file for each team, initialized the file system, and unmounted it for later use. Another script let me mount a user's home directory. Quick and simple, this separated every user's home directory from everyone else's, as well as from the rest of the file systems on the computer. This means if a user fills his/her homedir, the other file systems don't care.
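A sketch of what that setup looks like with mdconfig (this is not the actual script; the size, paths, and team name are assumptions):

```
# Create a 30MB file to back one team's home directory.
dd if=/dev/zero of=/quota/team1.img bs=1m count=30

# Attach the file as a memory disk; mdconfig prints the unit name (e.g. md0).
unit=`mdconfig -a -t vnode -f /quota/team1.img`

# Make a filesystem on it and mount it as the team's homedir.
newfs /dev/$unit
mount /dev/$unit /home/team1

# Unmount for later use; a second script remounts it on demand.
umount /home/team1
```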

The Competition: Defense

At this point, we had millions of potentially valid ssh servers and a means by which to prevent competitors from using brute force scripts to find the real server. Perfect.

During the 5-hour setup period of the competition, I installed the primary services on our system and finished some last minute testing before the attack-and-defend section began. 15 minutes before the end of the setup period, we were ready to go. 3 machines left to perform attacks and forensics, 1 to do services. So far so good, right?

Yep. Several teams attempted to find our ssh server, gave up after about 10 login attempts, and moved on to easier targets (other teams). No one bothered attacking via web, ftp, or mail. Our series of tricks tied to our SSH server worked extremely well until around 6pm (1 hour left in the competition), when everyone got their collective panties in a bunch and demanded that I stop being so tricksy. The rules for the competition did not cover what ports you had to run your services on, so naturally I protested. Finally, after about 20 minutes of hearing people bitch and moan, I updated the firewall rules to direct port 22 on the "real" ssh server to the actual ssh server (, remember?). Only 3 (out of 6) teams attempted to login after that. Two teams attempted to fill the file system and failed. Another team attempted to starve CPU, but every team's default nice level was 19, making their processes run only when nothing else needed to.

The Competition: Attack

The only attacks I did were resource starvation: fork bombs, memory hogging, cpu hogging, file system filling, inode exhaustion, etc. My teammates attempted exploits and other attacks, sometimes these were successful.

The inode exhaustion attempts had a somewhat funny side-effect. The one-liner I used to do this was this:

while :; do
  touch $a $a$a $RANDOM $RANDOM$RANDOM $RANDOM$a
done
I just wanted to create lots and lots of files. I did this typically in my homedir or /tmp on another team's ssh server. After running for a significant amount of time, running ls in the directory became quite sluggish as it read all of the files. At one point, I had created well over 300000 files in /tmp. The team affected finally noticed this and tried to do "rm *" which inevitably failed with an error of "Too many arguments" - even attempting to do "rm 1*" failed due to too many arguments. Wonderful! Eventually they figured it out and ran 'find /tmp | xargs rm' - but this machine was also running X11 which had sockets in /tmp. They got blown away and X crashed. Whoops ;)
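For reference, a cleanup that sidesteps both problems: -type f spares sockets (like the X11 ones), and -delete never hits the argument-count limit that broke rm *. Sketched here against a scratch directory instead of /tmp:

```shell
# Build a scratch directory with a pile of files, like the /tmp flood.
d=$(mktemp -d)
i=0
while [ "$i" -lt 1000 ]; do
  : > "$d/file$i"
  i=$((i + 1))
done

# Delete only regular files; sockets and directories survive.
find "$d" -type f -delete
```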

Beyond that, I ran multiple resource starvation attacks (that can be easily prevented) on several teams multiple times.

The Problems

Some of the SPARSA members who were running the event were, for the most part, very, very stupid and arrogant. I say this in a mean way because they've well earned that label.
  • I made several mentions of teams not providing usable services, almost all of which were ignored. One particular service a team was running was a 15-line perl script that "served" web pages. It did not conform to any part of the HTTP standard. It responded with a default page, no matter what you requested, immediately after receiving the 'GET' request line. Thusly, it ignored headers and the path. I argued that this was not a webserver, and a SPARSA alumnus (the fool mentioned later) asserted that "Well it works for him to serve a webpage, therefore it is a valid webserver." I mentioned that even simple requests like POST and HEAD were reported as invalid, and that the path of the request did not change the page served. I was ignored.
  • One SPARSA guy was arguing with me about how ssh is ONLY a "user and password authenticating protocol" and that my "hacked server" was illegal for the competition. He further argued that having users enter any information other than a username and a password was illegal. None of this was in the rules for the competition.
  • Another SPARSA guy, who is an RIT alumnus, began arguing with me that we should follow DNS standards and provide services on the ports DNS was supposedly advertising for our service. I tried briefly to reeducate him that DNS only provides name-to-address mappings, not port information, but he insisted. Eventually, I said, "Look. It's obvious you have no clue what you're talking about. DNS does not serve ports. Do not argue with me about this, you will lose. Thanks." Another competitor there assisted by asserting that DNS could indeed not serve port information, and asked SPARSA to provide proof. No proof manifested itself. (Yes, SRV records serve port information, but no ssh clients use them.)
  • Much later in the competition, everyone eventually started complaining that they could not login successfully to our SSH server. This caused a split in the SPARSA group. I asked some of them to confirm that they could indeed login. Half confirmed, the other half said they couldn't login. I replied, "You are attempting to login on the wrong port and/or ip if you do not get a shell after authenticating." Many of the SPARSA guys went crazy with rage, insisting that I was breaking rules (that weren't written anywhere?) and that I had to run ssh on the standard port. Others kept quiet or commented positively on the strategy. After much arguing, one of the SPARSA folks wrote the port number that the real ssh server could be accessed through. A few minutes later I switched it to port 22, the standard port. The argument obviously enraged many of the SPARSA folks and put me in a bad light with them, I guess.
There were many more incidents that I may document later, but the overall point is that we came for a fun competition, and while it was fun, I did not expect to have to correct fallacies or arrogance on behalf of the SPARSA group. I do not claim to know everything, but it is ignorance to deny the truth when presented with it. Cluelessness is not a fault, because you can always learn. Arrogance becomes a problem when I have to deal with it.

The Scoring Conspiracy

The rules stated you had a 2 minute grace period for any downtime before it would be counted against you. Our service machine was only taken down once - by a forkbomb from one of the SPARSA members (a limitation I forgot to place on the sparsa account). A quick reboot and all services came back online within 60 seconds of going down. However, something was strange with the network, and Nagios only showed that our web server was online. I tested it, my teammates tested it. All of our services were online, available, and working. One SPARSA member attempted to verify this and said that our services were down. Another SPARSA member also attempted to verify this and said that our services were UP. Conflicting answers? Neat. I didn't touch anything, and 5-10 minutes after the first downtime, nagios magically decided our services were back online.

We insisted we be given back points for Nagios' false positives on our downtime, but we were turned away. Other teams complained about Nagios falsely reporting downtime and were awarded points back for false downtime. That's nice. It's good that the judges were fair to all teams.

No other teams were able to take down our services. Nagios, at random times, would decide that one of our services (usually at random) was offline. Every time this happened, we verified all of our services and they were indeed online. Thusly, there were only about 60 seconds during the entire competition where any of our services were down. Other teams had critical failures during the competition and were also attacked and taken offline by other teams. Therefore, we had almost 100% availability; other teams were not so lucky. The only way to lose points in the competition is downtime beyond 2 minutes, or so the rules state. You gain points by attacking others and partaking in the SPARSA challenge and the forensics challenge.

Long story short, we did OK with the forensics and OK on the sparsa challenge, but not great.

I'm not angry that we didn't win (or place, for that matter). I'm angry with the behavior of the members of SPARSA who made up random, unsubstantiated rules on the spot for no reason and applied them to some teams and not others. Many of them were entirely unprofessional and down-right rude and arrogant.

Dan, a friend of mine on another team, overheard several of the judges talking about my team and how they should just dock an arbitrary amount of points from us. What kind of crap is that? We paid $40 to attend this competition to get treated like this? I'm considering lobbying for a refund. We'll see. I'm extremely annoyed that there are student organizations seemingly bent around supporting their own arrogant members. If you're at RIT and are looking to join SPARSA, don't. Join CSH instead. We suck less.

At any rate, the competition itself was pretty fun, if for nothing else than demonstrating pam_captcha and the better-than-labrea tarpitting-with-real-services tricks. I feel we should've been given style points for unique solutions such as hiding services, and for pam_captcha. Style points weren't in the rules, but style should certainly not cause anger. Instead, the SPARSA judges just got angry. Feature? Hate the game, not the player.

Oh well... I'm done with student organizations in May (graduation!).

Shoutcast stream 'lame' proxy

Many of my mp3s are of such a high bitrate that they saturate my crappy 30k/s DSL connection. To solve that problem, I wrote a proxy that connects to the real shoutcast server and essentially pipes the output through lame before sending it to you. Doing this, I can easily transcode any mp3 stream down to something more reasonable for streaming, such as 64kbit.

If you want to take a look at it, it's only 38 lines of python.

This is probably going to become a part of Pimp itself. Instead of accessing the normal '/stream/happystream' you could do '/stream/happy?bitrate=128' and it will lame-it up for you. I've *always* wanted this feature in Pimp since version 2 (version 1 wasn't networked).

On a funnier note, it seems like the other media guys are catching on to "networked is good" - XMMS2 is being written from scratch so I hear. It's going to sport a client/server model. So far there's MPD2, XMMS2, and Gstreamer that are well known. Whatever, as far as I can tell they all have the library and player in the same place. Pimp abstracts it one more level: a control client, a server, and a media client. In this case, the control client is Firefox and media client is your favorite mp3 player. Those other projects will eventually catch up to me I suppose ;)

migrating from nis to ldap, round 1

We at CSH need to move from nis, and the many other user information datastores we use, to LDAP. To that end, I have started working on merging our data. The first step is importing NIS (passwd/group) information into ldap.

I wrote a script, passwd2ldif, to use NIS passwd information and put it in ldap.

ypcat passwd | ./passwd2ldif > cshusers.ldif
ldapadd -D "cn=happyrootuserthinghere,dc=csh,dc=rit,dc=edu" -f cshusers.ldif
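passwd2ldif itself isn't reproduced here; a minimal sketch of that kind of filter (assuming the posixAccount schema and our dc=csh,dc=rit,dc=edu base - not the actual script) might look like:

```shell
# Hypothetical passwd-to-LDIF filter; reads passwd(5)-format lines on stdin.
passwd2ldif_sketch() {
  awk -F: '{
    print "dn: uid=" $1 ",ou=Users,dc=csh,dc=rit,dc=edu"
    print "objectClass: account"
    print "objectClass: posixAccount"
    print "uid: " $1
    print "cn: " $5
    print "uidNumber: " $3
    print "gidNumber: " $4
    print "homeDirectory: " $6
    print "loginShell: " $7
    print ""
  }'
}

# Example: convert one NIS passwd entry.
echo 'psionic:*:1001:1001:Jordan Sissel:/u9/psionic:/usr/bin/tcsh' \
  | passwd2ldif_sketch
```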
Wait a while, and all users from NIS show up in ldap. I have my laptop looking at ldap for user information using nss_ldap:
nightfall(~) [690] % finger -m psionic
Login: psionic                          Name: Jordan Sissel
Directory: /u9/psionic                  Shell: /usr/bin/tcsh
Never logged in.
No Mail.
No Plan.
Pretty simple stuff, so far. Next step is going to involve creating a new schema to support all of the information we currently store in "member profiles." Member profiles is a huge mess of a single mysql table with lots of columns such as "rit_phone," "csh_year," "aol_im," and others. All of that can go to ldap. I'll post more on this later when I figure out what kind of schema we want.

Using PF/ALTQ to make slow connections better

ALTQ is a quality of service packet scheduler for OpenBSD's pf (pf works in FreeBSD too). I'm at home right now on DSL. DSL is just fine when the only thing I'm doing is ssh and light web usage. However, once I start a download, all of the bandwidth I've got ends up being used by that download. The problem, then, is that my ssh sessions become unnecessarily sluggish because everything now competes for the transmission queue.

If only there were a way to give things like ACKs and ssh sessions higher priority? Oh wait, there is! PF/ALTQ to the rescue. With very minimal effort, you can effectively make your ssh sessions usable once again even though you're downloading or uploading enough to fill your pipe.

My pf.conf is as follows:


# Make a priority queue with 3 members: q_ack, q_pri, and q_def
altq on $ext_if priq bandwidth 100% queue { q_ack, q_pri, q_def }

# Give priorities
queue q_ack priority 10
queue q_pri priority 7
queue q_def priority 1 priq(default)

# ACKs get high priority
pass out on $ext_if proto tcp from $ext_if to any flags S/SA keep state queue (q_def, q_ack)
pass in  on $ext_if proto tcp from any to $ext_if flags S/SA keep state queue (q_def, q_ack)

# SSH sessions also want priority
pass out on $ext_if proto tcp from any to any port 22 keep state queue q_pri
You'll need the ALTQ and ALTQ_PRIQ options in your kernel for this to actually work. ALTQ cannot be built as a module under FreeBSD due to the way it is implemented.
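For reference, those kernel config lines are:

```
options ALTQ
options ALTQ_PRIQ
```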

If I turn pf on, and start a long file transfer (up or down), my ssh sessions won't lag anymore.

vpn + pf

Rather than doing a simple vpn+nat-style setup, I decided that my local server ( needs to be available to the world. The machine I vpn into (kenya) currently has a nat rule in pf.conf so I can get to the world from whack (which is now in my room on a roadrunner line behind a nat box). I changed the nat rule to a binat rule and added an IP alias to kenya, and now you can ssh to '' from anywhere and reach the box here on roadrunner. Furthermore, all my traffic comes "from", so it's as if I were on csh's network. Go go gadget vpn.
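A sketch of the pf.conf change (the interface and address macros are made up; the real rules use actual addresses):

```
# Before: one-way translation. whack can reach out, nothing can reach in.
# nat on $ext_if from $whack_ip to any -> $alias_ip

# After: bidirectional translation. Inbound connections to $alias_ip
# now reach whack directly, and whack's outbound traffic still appears
# to come from $alias_ip.
binat on $ext_if from $whack_ip to any -> $alias_ip
```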

This all seems quite neat to me, I didn't expect it to be so easy...