libc's tree functions, function pointers, and bdb.

Whoever wrote the tsearch/tfind/tdelete/twalk functions for libc clearly stopped thinking about how they would be used. The only way I can see to iterate the tree is twalk, which doesn't let you pass any extra arguments to the action function you provide.

This sucks if, for example, you wanted to get a list of all entries in the tree in a threadsafe or multiplicity-safe way.

Some workarounds include:

  • Every time you insert, add the same structure to an array.
  • Use something that supports sane iteration (bdb, for example).
I looked into using bdb for some things, but the tree I most wanted to iterate over held structures that contained, among other things, a function pointer. Function pointers are magical and special and live in a place in memory you can't simply make a copy of. If you try to store a function pointer in an in-memory BDB database, the value that comes out of your query will be different from the function pointer you put in.

This bdb code sample attempts to store a function pointer in bdb. The output is:

stored: 607a70
actual: 4005e8
The value changed because copying a function pointer doesn't work.

There's a workaround here that might be useful - dlopen(). Any function I want to store in bdb, I would store by its string name and fetch the actual function pointer with dlsym().

This dlopen example shows how to dlopen yourself and fetch a function by string name.
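
The example linked above is C. For a rough idea of the same trick from Python (just a sketch; CDLL(None) is roughly dlopen(NULL), and attribute lookup on the handle is roughly dlsym()):

# Python sketch of the dlopen(NULL)/dlsym() trick: fetch a function by its
# string name at runtime instead of trying to store its pointer.
import ctypes

# CDLL(None) loads the symbols already present in the current process on
# most unix-like systems, much like dlopen(NULL).
current_process = ctypes.CDLL(None)

# Look up a well-known libc function by name, dlsym()-style.
getpid = current_process.getpid
print("pid via name lookup: %d" % getpid())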

Fun with pointers.

C vs Python with Berkeley DB

I've got a stable, threaded version of this fancydb tool I've been working on. However, the performance of insertions is less than optimal.

Then again, how much should insert performance matter on a monitoring tool? For data that trickles in gradually, speed doesn't matter much. For bulk inserts, speed matters if you want to get your work done quickly. I haven't decided if bulk insertion is a necessary use case for this tool. Despite that, I'm still interested in what the limits are.

I have experimented with many different implementations of parallelism, buffering, caching, etc., in the name of making insertion into a fancydb with 10 rules fast. The fastest I've gotten was 10000/sec, but that was on an implementation that wasn't threadsafe (and used threads).

My most recent implementation (which should be threadsafe) can do reads and writes at 30000/sec. With evaluation rules, the write rate drops to about 10000/sec.

The next task was to figure out what I was doing wrong. For comparison, I wrote two vanilla bdb-accessing programs, one in C and one in Python. The output of these two follows:

# The args for each program are: insertions page_size cache_size
% sh runtest.sh
Running: ./test 2000000 8192 10485760
  => 2000000 inserts + 1 fullread: 209205.020921/sec
Running: ./py-bsddb.py 2000000 8192 10485760
  => 2000000 inserts + 1 fullread: 123304.562269/sec
As expected, C clearly outperforms Python here, but the margin is smaller than I expected (C is about 70% faster for this test). Given the 120000/sec rate from Python, the poor input rate of my tool seems to be my own fault. Is my additional code really the reason I can only write at 30000 per second? I may need to revisit how I'm implementing things in python; I'm not clear right now where I'm losing so much throughput.
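
For reference, the Python side of such a test is roughly the sketch below. This is not the exact py-bsddb.py linked at the end of this post; the file name and argument handling are only illustrative, but the pagesize/cachesize knobs and the cursor read are the interesting parts:

#!/usr/bin/env python
# Sketch of a bsddb insert + full-read benchmark taking the same
# "insertions page_size cache_size" arguments as the tests above.
import sys
import time
from bsddb import db      # the bsddb3 package on newer installs

def run(inserts, pagesize, cachesize):
    d = db.DB()
    d.set_pagesize(pagesize)
    d.set_cachesize(0, cachesize)     # (gbytes, bytes)
    d.set_get_returns_none(2)         # cursor methods return None at the end
    d.open("/tmp/speedtest.db", None, db.DB_BTREE, db.DB_CREATE)

    start = time.time()
    for i in xrange(inserts):
        d.put("key%020d" % i, "value%d" % i)

    # One full sequential read using a plain while loop over a cursor.
    cursor = d.cursor()
    rec = cursor.first()
    while rec is not None:
        rec = cursor.next()

    elapsed = time.time() - start
    print("  => %d inserts + 1 fullread: %f/sec" % (inserts, inserts / elapsed))
    d.close()

if __name__ == "__main__":
    run(int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3]))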

So I used hotshot (the python standard library profiler) and found that most of the time is spent in my iterator method. This method is a generator: it uses yield and loops over a cursor.

It's important to note that my python bdb 'speed test' above did not use generators; it used a plain while loop over the cursor. So, I wrote another test that uses generators. First, let's try just inserts, no reading of data:

Running: ./test 1000000 8192 10485760
  => 1000000 inserts: 261096.605744/sec
Running: ./py-bsddb.py 1000000 8192 10485760
  => 1000000 inserts: 166389.351082/sec
Now let's try with 3 different python reading methods: while loop across a cursor, generator function (using yield), and an iterator class (implementing __iter__):
Running: ./py-bsddb.py 4000000 8192 10485760
  => 1 fullread of 4000000 entries: 8.660000
Running: ./py-bsddb_generator.py 4000000 8192 10485760
  => 1 fullread of 4000000 entries: 9.124000
Running: ./py-bsddb_iterable_class.py 4000000 8192 10485760
  => 1 fullread of 4000000 entries: 13.130000
I'm not sure why implementing an iterator class is so much slower (in general) than a yield-based generator. Seems strange; perhaps my testing code is busted. Either way, I'm not really closer to finding the slowness.
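
The actual test scripts are linked below; as a sketch, the three read styles look roughly like this (assuming a DB handle d opened as in the earlier sketch, so cursor methods return None at the end of the data):

# Sketch of the three read styles compared above. 'd' is an open bsddb DB
# handle; names are illustrative, not the benchmark's actual code.

def read_while(d):
    # Plain while loop over a cursor (the fastest variant above).
    cursor = d.cursor()
    rec = cursor.first()
    while rec is not None:
        key, value = rec
        rec = cursor.next()

def read_generator(d):
    # Generator function using yield.
    cursor = d.cursor()
    rec = cursor.first()
    while rec is not None:
        yield rec
        rec = cursor.next()

class ReadIterable(object):
    # Iterator class implementing __iter__ (the slowest variant above).
    def __init__(self, d):
        self.cursor = d.cursor()
        self.rec = self.cursor.first()

    def __iter__(self):
        return self

    def next(self):                   # iterator protocol
        if self.rec is None:
            raise StopIteration
        rec, self.rec = self.rec, self.cursor.next()
        return rec

All three walk the same cursor; the only difference is how the loop is driven, e.g. for key, value in read_generator(d) versus iterating a ReadIterable(d) instance.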

get this code here

fancydb performance

Various trials with basically the same input set: 2.5 million row entries, at most 1 entry per second. The insertion rate drops by 60% when rule evaluations are added, which is an unfortunate performance loss. I'll work on making rules less invasive. Since python threading will never run on two processors at once, I can't gain significant performance by sharding rule processing across separate threads; most unfortunate. Maybe fork+ipc is necessary here, but I am somewhat loath to do that.

The slowdown when rules are present is due to the record keeping done to note that a rule should be evaluated again (rule evaluations are queued). Basically, the 'is this row being watched by a rule' loop is the slowdown. I'll try attacking this first; a sketch of one possible approach follows the trial numbers below.

With 2 rules (unoptimized rules):
    hits.minute => hits.mean.1hour @ 60*60
    hits.minute => hits.mean.1day @ 60*60*24
  insertion rate = 7600/sec

With 2 rules (optimized chaining):
    hits.minute => hits.mean.1hour @ 60*60
    hits.mean.1hour => hits.mean.1day @ 60*60*24
  insertion rate = 12280/sec

With 9 rules (optimized chaining):
  insertion rate: 10000/sec

With 0 rules:
  trial 1: 40000/sec
  trial 2: 26700/sec
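
One possible attack on that 'is this row being watched by a rule' check is to index rules by the row they watch, so each insert costs a dict lookup instead of a scan over every rule. A hypothetical sketch (not the current fancydb code; rule.source_row is an assumed attribute name):

# Hypothetical sketch: map row name -> rules watching it, so the per-insert
# "is this row watched" question is a single dict lookup.

class RuleIndex(object):
    def __init__(self):
        self.rules_by_row = {}       # row name -> list of rules

    def add_rule(self, rule):
        # rule.source_row is assumed to name the row the rule watches.
        self.rules_by_row.setdefault(rule.source_row, []).append(rule)

    def rules_watching(self, row):
        # Constant-time on average, instead of looping over all rules.
        return self.rules_by_row.get(row, [])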

Storage utils, eventdb, etc.

Spent lots of time over thanksgiving playing with bdb in python.

Again, I still don't have release-worthy code, but here's a snippet of rrdtool-like behavior from this system:

% ./evtool.py create /tmp/webhits.db
% ./evtool.py addrule /tmp/webhits.db http.hit agg.http.hit.daily total $((60*60*24)) time
% time cat webhits.data | ./evtool.py update /tmp/webhits.db -
11.10s user 0.80s system 94% cpu 12.627 total
% time ./evtool.py graph /tmp/webhits.db agg.http.hit.daily  
0.49s user 0.11s system 96% cpu 0.624 total
The result is exactly the same graph as in my previous post. Speed so far is pretty good. The input was 125000 entries in 12.6 seconds, which works out to roughly 10000 updates per second. That kind of QPS seems pretty reasonable.

The primary difference today is that the aggregates are computed as data enters the system. 'Addrule' tells the database to schedule an aggregation for specific timestamps.

The goal is to be able to chain rules, and have N:M relationships between rule input and output. Those will happen soon. Chaining would've happened tonight, but I'm having some locking problems due to it being quite late ;)

The database code itself is designed to be reusable elsewhere. There are two primary classes: SimpleDB and FancyDB. SimpleDB lets you store and retrieve data based on row+timestamp => value pairs. FancyDB wraps SimpleDB and gives you operation listeners such as the rule used in the above example.
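
As a hypothetical sketch of that two-layer shape (illustrative only; the real SimpleDB and FancyDB sit on top of bdb and their method names may differ):

# Illustrative stand-ins for the two classes described above, not the real
# code: a simple store of row + timestamp => value, and a wrapper that
# notifies listeners on every write.

class SimpleStore(object):
    def __init__(self):
        self.data = {}               # (row, timestamp) -> value

    def set(self, row, timestamp, value):
        self.data[(row, timestamp)] = value

    def get(self, row):
        # Entries for one row, in timestamp order.
        return sorted((ts, v) for (r, ts), v in self.data.items() if r == row)

class FancyStore(object):
    def __init__(self, store):
        self.store = store
        self.listeners = []          # callables taking (row, timestamp, value)

    def add_listener(self, listener):
        self.listeners.append(listener)

    def set(self, row, timestamp, value):
        self.store.set(row, timestamp, value)
        for listener in self.listeners:
            listener(row, timestamp, value)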

I've already used SimpleDB elsewhere; in the sms traffic tool I mentioned in my last post, I cache geocode data and traffic requests with this same database tool.

Python, event tracking, and bdb.

In previous postings, I've put down thoughts on monitoring and data aggregation. Last night, I started prototyping a python tool to record arbitrary data. Basically it aims to solve the problem of "I want to store data" in a simple way, rather than having to set up a mysql database, user access, tables, a web server, etc., all in the name of viewing graphs and data reports.

The requirements are pretty simple and generic:

  • Storage will be mapping key+timestamp => value
  • Timestamps should be 64bit values, so we can use microsecond resolution (the current unix epoch in microseconds needs about 51 bits)
  • Access method is assumed to be random access by key, but reading multiple timestamp entries for a single key is expected to be sequential.
  • Key names must be arbitrary length
  • Storage must be space-efficient on key names
  • Values are arbitrary.
  • Minimal setup overhead (aka, you don't have to set up mysql)
The goal of this is to provide a simple way to store and retrieve timestamped key-value pairs. It's a small piece of the hope that there can be a monitoring tool set that is trivial to configure, easy to manipulate, easy to extend, and dynamic. I don't want to pre-declare data sources and data types (rrdtool, cacti), or make it difficult to add new collection sources, etc. I'm hoping that by relying on the unix methodology (small tools that do one or two things well), this can be achieved. The next steps in this adventure of a monitoring system are:
  • a graphing system
  • aggregation functions
  • templating system (web, email, etc?)
Space efficiency on key names is achieved with a secondary store containing key-to-keyid mappings. Key IDs are 64bit values; the first value is 1. We could use a hash function here, but I want a guarantee of zero collisions. However, this means that keys are stored as key ids in insertion order, not lexicographical order.

This may become a problem if we want to read keys sequentially. However, if we scan the secondary database (the one mapping key => 64bit_keyid) we can get keys in lexicographical order for free. So iterating over all keys starting with the string 'networking' is still possible, but it will result in random-access reads on the primary database. This may be undesirable, so I'll have to think about whether or not this use case is necessary. There are some simple solutions to this, but I'm not sure which one best solves the general case.
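
A sketch of that key-name => key-id mapping (plain dicts stand in for the bdb databases here; the real secondary database would be a btree, which is what gives the lexicographic prefix scan for free):

# Sketch of the secondary key-name -> 64bit key-id mapping described above.
import struct

class KeyIdMap(object):
    def __init__(self):
        self.name_to_id = {}         # secondary db: key name -> 64bit id
        self.next_id = 1             # the first key id is 1

    def get_id(self, name):
        # Return the 64bit id for a key name, allocating one if needed.
        if name not in self.name_to_id:
            self.name_to_id[name] = self.next_id
            self.next_id += 1
        return self.name_to_id[name]

    def packed_id(self, name):
        # 8-byte big-endian form, suitable as a fixed-size key in the
        # primary database.
        return struct.pack(">Q", self.get_id(name))

    def names_with_prefix(self, prefix):
        # Lexicographic prefix scan; a btree cursor would do this without
        # the sort.
        return sorted(n for n in self.name_to_id if n.startswith(prefix))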

Arbitrary key length is a requirement because I found the limitations of RRDTool annoying, where data source names cannot be more than 19 characters - lame! We end up more space-efficient (8 bytes per name) for any length of data source name, at the cost of a lookup to find the 64bit key id for a name.

I have some of the code written, and a demo that runs 'netstat -s' once a second for 5 seconds and records total inbound ip packets. The key is 'ip.in.packets':

ip.in.packets[1195634167358035]: 1697541
ip.in.packets[1195634168364598]: 1698058
ip.in.packets[1195634169368830]: 1698566
ip.in.packets[1195634170372823]: 1699079
ip.in.packets[1195634171376553]: 1699590
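
A rough sketch of what that demo loop might look like (assuming Linux-style netstat -s output with a 'total packets received' line; printing stands in for writing to the real storage backend):

#!/usr/bin/env python
# Poll 'netstat -s' once a second for 5 seconds and record total inbound
# ip packets under the key 'ip.in.packets'.
import subprocess
import time

def ip_in_packets():
    output = subprocess.Popen(["netstat", "-s"],
                              stdout=subprocess.PIPE).communicate()[0]
    for line in output.splitlines():
        line = line.strip()
        if line.endswith("total packets received"):
            return int(line.split()[0])
    return None

for _ in range(5):
    timestamp = int(time.time() * 1000000)   # microsecond timestamp
    print("ip.in.packets[%d]: %s" % (timestamp, ip_in_packets()))
    time.sleep(1)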