Rusty Russell's Coding Blog | Stealing From Smart People

28 Jan 2011

The LCA Dinner Rule

Dinner was great as only a room full of well-fed FOSS geeks can be, but I felt that my time on-stage was too long and too chaotic; I apologise.  The LCA team have done such a thorough job of recovering from events that it’s a surprise when things don’t magically come together.

So I propose an “LCA Dinner Rule” for future conferences: dinner activities should span no more than one hour, preferably combining wit with brevity.

(Oh, and I have a strong feeling that next year’s dinner is going to be a superb event…)


Putting “Advanced” in the title did not have the desired effect of scaring people off.  Nonetheless, it went well: maybe because I was on the wired network so no one could access my server to find the bugs :)

There wasn’t anywhere obvious to place a link about my talk, so here’s the git repository.


12 Dec 2010

On C Linked Lists

There are two basic styles of doubly-linked lists in C; I’ll call them ring style and linear style.

The Linux kernel uses ring style, defined in include/linux/list.h.  You declare a ‘struct list_head’ as the head, and everyone who wants to be in the list puts a ‘struct list_head list’ in their structure (‘struct list_node’ in CCAN’s version).  This forms a ring of pointers in both the forward and reverse directions.

SAMBA uses linear style, defined in lib/util/dlinklist.h.  You simply declare a ‘struct foo *’ as your list head, and everyone who wants to be in the list puts ‘struct foo *prev, *next’ pointers in their structure.  This forms a NULL-terminated list in the forward direction (the reverse direction recently became a ring to facilitate tail access).  (Linux’s hlist.h data structure is similar, only done in list.h style.)
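
For concreteness, the two declarations look roughly like this (a sketch with made-up element types, not the exact kernel or SAMBA definitions):

      /* Ring style: the head and the embedded nodes have the same shape,
       * linked into one ring (the head points to itself when empty). */
      struct list_node {
          struct list_node *next, *prev;
      };

      struct foo {
          int value;
          struct list_node list;    /* embedded in every element */
      };

      struct list_node head;

    vs:

      /* Linear style: the head is just a pointer to the first element;
       * the forward direction is NULL-terminated. */
      struct foo {
          struct foo *prev, *next;
          int value;
      };

      struct foo *head = NULL;      /* NULL when empty */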

Points in favor of ring-style lists:

  1. Insertion and deletion are branchless, as the elements and head are homogeneous (insertion shown here; a deletion sketch follows this list):
     head->next->prev = new;
     new->next = head->next;
     new->prev = head;
     head->next = new;

    vs:

    if (!head) {
        new->prev = head = new;
        new->next = NULL;
    } else {
        new->prev = head->prev;
        head->prev = new;
        new->next = head;
        head = new;
    }
  2. list_add, list_del are inline functions, not macros, since the types are known.
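
For comparison, deletion shows the same contrast (a rough sketch; SAMBA’s actual DLIST_REMOVE macro also clears the removed element’s pointers):

      elem->prev->next = elem->next;
      elem->next->prev = elem->prev;

    vs:

      if (elem == head) {
          head = elem->next;
          if (head)
              head->prev = elem->prev;      /* new head inherits the tail pointer */
      } else if (elem->next) {
          elem->prev->next = elem->next;    /* interior element */
          elem->next->prev = elem->prev;
      } else {
          elem->prev->next = NULL;          /* elem was the tail */
          head->prev = elem->prev;
      }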

Points in favor of linear-style lists:

  1. They’re typesafe, since the head pointer and internal pointers are all the correct type.
  2. Forward iteration is simpler, since the list ends at NULL rather than back at the head. In theory this could free a register, but the bigger difference is that it’s often useful to have NULL in your iterator once the loop is done.
  3. As a corollary, iteration, initialization and emptiness testing don’t need tricky macros:
      struct foo *head = NULL;
      if (head == NULL) ...
      for (i = head; i; i = i->next) ...
    

    vs

      LIST_HEAD(head);
      if (list_empty(&head)) ...
      list_for_each(i, head, list) ...
    
  4. The head uses slightly less space (one pointer vs. two).

So how important is efficiency of insert and delete in a real project?  To provide some data on this, I first annotated the Linux kernel so each list.h operation would increment a counter, which I could dump out every so often.  Then I booted the kernel on my laptop and ran it as normal for three days.
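
The annotation amounted to something like this (a hypothetical sketch, not the actual patch):

      /* Hypothetical sketch: wrap each list.h operation so it bumps a counter,
       * and dump the totals periodically.  Races on the counters don't matter
       * for rough statistics. */
      enum list_op { LIST_OP_EMPTY, LIST_OP_DEL, LIST_OP_ADD, LIST_OP_ITER, LIST_OP_MAX };

      static unsigned long list_op_count[LIST_OP_MAX];

      static inline void list_add_counted(struct list_head *new, struct list_head *head)
      {
          list_op_count[LIST_OP_ADD]++;
          list_add(new, head);
      }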

Operation            Frequency
Empty Test           45%
Delete               25%
Add                  23%
Iterate Start        3.5%
Iterate Next         2.5%
Replace              0.76%
Other Manipulation   0.0072%

Firstly, I was surprised that we add to and remove from lists much more than we look through them. On reflection, this makes sense: if lookups are really common we don’t use a linked list. And note how often we check whether the list is empty: this looks like a “list as a slow path” pattern. I wonder if SAMBA’s (or other projects’) list usage looks the same… anyone?

Secondly, we delete more than we add, but we could be “deleting” initialized-but-unadded elements. I didn’t chase this beyond re-reading and enhancing my patch to measure “other manipulations” to see if they could explain it (no).

Thirdly, when we iterate it’s short: the list is empty or we break out on the first element 28% of the time (I didn’t differentiate). I wonder if a singly-linked list would be an interesting micro-space-optimization for many of these lists.

Summary: I like the safety of SAMBA’s lists, but there’s clearly real merit in eliminating those branches for add and delete.  It’s a genuine performance-usability trade-off, so I think we’ll still have both in CCAN…


6 Dec 2010

On Conference Harassment…

The recent attention given to harassment at conferences was sparked by the sexual assault described by Noirin Shirley at ApacheCon; her particular attacker’s actions were deranged and criminal, but it’s clearly a variation on an ongoing theme of harassment.

This issue raises two questions for future conferences: how do we prevent an atmosphere which encourages this, and how do we make sure everyone knows that we don’t have such an atmosphere at the conference?  The two are related, but we need both.

Atmosphere matters; let’s not discount its power because it’s intangible.  It is the atmosphere at linux.conf.au which inspires new projects and enlivens existing ones among the attendees.  So let’s ensure it’s a positive one, and let’s talk about it.  I’m confident the much-harried LCA organizers will integrate an anti-harassment policy, but I encourage them to do so boldly, loudly and soon. [Correction: They already have. Front page, first paragraph has "LCA2011 is dedicated to a harassment-free conference experience for everyone. See our anti-harassment policy for details."]

It is worth expending serious effort addressing this problem.  I’ve only experienced prolonged negative sexual stereotyping once; the only help was someone who was unrelentingly positive and set a clear example of welcome, which others followed.  Let’s all try to be like that.

There are two things I promise to try to do this time around:

  1. Assume everyone is a delegate; a far lesser error than being the tenth person who assumes you are a tech-uninterested partner.
  2. Welcome a newcomer, ask about what they hack on and listen, introduce them to someone else, then leave them to it. When I do this, I always learn something.

(This post inspired by Alex, who is encouraging me to be more self-aware, by example).


17 Nov 2010

Not Buying Another HTC

Sent back my HTC Magic for warranty repair (wouldn’t charge) and they upgraded firmware; I can’t install cyanogenmod on it any more, making it fairly useless to me.  I spent two days of my holiday time trying and failing the goldcard lottery.

I just dropped my other HTC Magic and shattered the glass, so I’m in the market for a new Android phone via ebay.  Given my phones only last about 6-12 months, any advice for older cyanogenmod-friendly phones?  Or should I just give in and pay $500 for a Samsung Galaxy S?


11 Nov 2010

The Art of Community (TAoC)

Fuzzy topics annoy me, and hurt my brain as I struggle to quantify them.  People who talk on these topics grate on my nerves; it’s a kind of geek gag reflex.

So I really didn’t like Jono Bacon’s The Art of Community.  It was like a counseling session where all the faults you half-suspected you had get laid out in a massive TODO list with cogent reasoning which makes you squirm.  By the second chapter, I really wanted to knee Jono in the nuts.  My mind kept returning to CCAN, my half forgotten could-be-great-oneday project.  It got so distracting that by halfway through the fourth chapter I put it down and haven’t picked it up again.

But I will return.  And I will re-read it from the start, for one reason: it reshaped my thinking about building community projects, and that has led directly to the increase in activity around CCAN in the past months.  Jono has thought really hard and long about this topic, and provides a framework the rest of us can map our projects onto.  That alone is a serious intellectual feat; the clear, friendly prose is just a bonus.

Next time, I just hope I can tick enough of TAoC’s boxes that I don’t feel like I’ve doomed my pet project like a puppy chained to a submarine.

[Footnote: Jono's one of those annoying "doers".  I find his http://openrespect.org/ self-evident: I want to be respected while corrected, but I only recently figured that out and I suspect others might have missed it too.]


I spent some of Friday and yesterday working on my CCAN hashtable module (renamed to htable); I wrote a set of benchmarks and ran them until my laptop steamed and I had burnished the implementation into a sleek shining bundle of awesome.

Then it was time to run the same benchmark on Judy Arrays; code I had only become aware of via Yenga‘s comment on my blog.  A few weeks back I wrote some friendly wrappers for libJudy, since I found the raw interface made my eyes bleed.  Fortunately, that proved quite simple, but it’s worth making a note for those who (like me three weeks ago) aren’t familiar with Judy Arrays.  The theory is that different data structures should be used for different key layouts: high density or sparse, many or few, always concentrating on reducing cache impact.  The result is a serious amount of code for all the different cases: a risky, full-frontal assault on the problem rather than the normal cautious approach of using a single data structure then ignoring or special-casing a few corner cases.
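
To show what “eyes bleed” means, here is the raw JudyL insert/lookup pattern; the wrapper calls at the end are illustrative of the style I wrapped it in, not necessarily the exact ccan/jmap API:

      #include <Judy.h>

      void raw_judyl_example(void)
      {
          Pvoid_t judy = NULL;         /* an empty JudyL array */
          Word_t *pvalue;
          Word_t key = 42;

          JLI(pvalue, judy, key);      /* insert: get a pointer to the value slot */
          *pvalue = key;               /* store some payload (here, just the key) */

          JLG(pvalue, judy, key);      /* lookup: pvalue is NULL if key is absent */
      }

      /* vs. the friendlier style (illustrative names):
       *     jmap_add(map, key, obj);
       *     obj = jmap_get(map, key);
       */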

Anyway, my initial benchmark was designed to push hashtables pretty hard, particularly as I used delete markers in my original implementation.  That tends to make performance suck after a great deal of churn, so I churn the hashtable in the middle of the benchmark.  It used consecutive keys, but I switched to spacing them out some more after that proved to be incredibly performant with Judy (which is ordered, unlike a hashtable).

I compared the JudyL implementation with my htable implementation; in fact I ran it at 75% full (dense), 37% full (sparse), and 37% full with bitstealing disabled (dumb).  75% is high for a typical hashtable, but my implementation seems quite happy operating at that point.  I was interested in mapping an int to an object: for the hash table the object contained the int key as well as a payload pointer (to itself, for consistency checking).  For Judy the object simply contained a payload pointer; a Judy array itself contains the key.  In both cases I used 50,000,000 objects, and divided the total time by the number of objects to give nanoseconds per operation.
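
Roughly, the objects and the timing looked like this (a sketch of the setup, not the exact benchmark code; do_pass is a hypothetical stand-in for “insert all”, “look up all”, etc.):

      #include <sys/time.h>

      #define NUM_OBJS 50000000

      struct obj {
          unsigned int key;     /* hash-table case only; Judy keeps the key itself */
          void *self;           /* payload pointer, checked on every lookup hit */
      };

      /* Cost per operation: time a whole pass, divide by the object count. */
      static double ns_per_op(void (*do_pass)(void))
      {
          struct timeval start, end;

          gettimeofday(&start, NULL);
          do_pass();
          gettimeofday(&end, NULL);
          return ((end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_usec - start.tv_usec) * 1e3) / NUM_OBJS;
      }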

The Good: Ordered, Dense Keys

First, doing things Judy is good at: the initial insertion of keys 0 to 49,999,999 in order, looking them up in order, and looking up keys 50,000,000 to 99,999,999 (misses) in order. Note that to be fair, you have to add about 150M and 300M to the hashtable memory figures to get peak usage, since the hashtables double-allocate-and-copy, then free the old table.

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Memory                420M   640M         895M          895M
Initial insert (ns)   137    332          471           482
Linear hit (ns)       19     174          176           214
Linear miss (ns)      14     234          184           286

The main story here is that Judy in order is unbeatably cache-friendly. It’s nice to see that the bitstealing optimization pays off for hashtables as fewer buckets need to be dereferenced, and similarly with lower density, but Judy wins hands down here.

OK, now we look up random elements and access them: the access matters, since it’s more realistic and the hash has to access the element to verify the key anyway. And since our elements are allocated as one huge array, accessing them at random (even via a hash table) costs more. Oh, and our “random” is actually “10007 apart”, meaning it’s harder on the caches than true random would be, which might occasionally be cache-hot.
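
In other words, the “random” key sequence was a fixed stride, something like this (a guess at the shape of it, not the exact code):

      /* Step through the key space with a large prime stride: successive
       * lookups land far apart, defeating the cache, but the sequence
       * isn't truly random. */
      key = (key + 10007) % NUM_OBJS;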

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Random hit (ns)       361    330          357           375

The hashtable doesn’t often cache-miss, and the denser hash seems to win more than it loses from having to look at more buckets, but with this kind of key density Judy is very competitive anyway.

Now we delete everything and reinsert them all (in order):

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Delete (ns)           199    171          200           172
Reinsert (ns)         148    191          197           175

As my hashtable doesn’t shrink it gets an artificial benefit, yet Judy again stays very competitive. Our dumb hash does surprisingly well here, too; there’s no reason for it to be different from the “smart” sparse hash, but it was a repeatable effect. Perhaps the CPU is doing the noop bit manipulations (anding with all 1s, etc) faster?

The Bad: Unordered Sparse Keys

Now let’s look at churn. Fresh data structures are easy, but over time we want to see what happens as elements are added and deleted. In my case, I iterate through the keys and delete each object then re-add it with the key increased by 50,000,000. With a hashtable using delete markers, this can reveal a weakness in the way they are garbage collected. Then I iterate one last time and space the keys out 9 values apart. The objects themselves are still dense, but the keys are far less so (ranging from 0 to 449,999,991: I chose 9 because I was originally playing with a few hundred million objects and didn’t want to overflow).

So with sparser keys and a worn-in datastructure, we do our three kinds of lookups again:

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Memory                607M   640M         895M          895M
Linear hit (ns)       61     215          202           265
Linear miss (ns)      60     301          223           278
Random hit (ns)       736    405          386           445

Note that Judy consumes a little more memory at this point, though it’s still ahead of the densest hash.

But finally we see a case where a good hashtable is clearly superior: random hits on a full range of keys. The hashtable has suffered a little (there are quite a few deleted records there after all the churn), but 10-20% penalty for churn is not as bad as Judy’s 100% penalty for sparser keys.

Last of all, we delete every second object and then re-add with modified keys:

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Delete half (ns)      266    120          103           103
Add half (ns)         205    200          103           105

Judy is still suffering from those sparse keys, though add is competitive with the dense hash (made even denser by those deleted elements).

Summary

Judy allows ordered access and lets you keep the key external rather than putting it into your structure. If those matter, the choice is clear. The same goes if you don’t want a latency spike as the hash table grows and rehashes, or if you’re concerned about malicious insertion patterns bombing a hash chain: both again favor Judy arrays.

If you have adjacent access patterns, Judy will win. If you have dense keys, Judy will be competitive. If you have random access and sparse keys, Judy lookups and deletes could be twice as slow as an optimized hash table, but using 20% to 100% less memory.

And one final note on random access: if it’s not quite random, such as if every second lookup is near the previous one, time drops from 736 to 410 nanoseconds, back competitive with the hashtable result. It’s quite impressive.

[Here are the URLs: the Judy arrays homepage (but your distro probably has a package, Ubuntu does), the ccan wrapper module, and the ccan htable module.]


7 Nov 2010

A nomenclature anecdote

“…an author, with thesaurus in hand, chooses the names of variables carefully…” — Knuth, Literate Programming

The Elements of Style‘s invaluable advice for prose (“Omit needless words”) is mapped into C coding as “Curt names win”: I crave descriptive, short, punchy names. Though perhaps not to the extent of K&R, whose approach might be described as “Omit needless vowels”.

Sometimes though, I get baffled by my own bullshit.  Yesterday I was benchmarking Judy arrays vs hash tables (more in another post) and I noticed that I had named two equivalent functions “htable_find” vs “jmap_get”.  In my mind find has overtones of searching and get has more direct connotations: say, a simple dereference. So why did I name them differently?  I had to implement that htable_find myself, whereas for jmap_get I only implemented a one-line wrapper :)

Since the operations are very fast, jmap is already a published CCAN module, and get is one letter shorter than find, I renamed htable_find to htable_get.  Consistent naming has benefits of its own…


I’ve been quietly lobbying Linux Weekly News to offer a $500 – $1000 “Supporter”-level subscription option; LWN has saved my sanity for the last two years as my conference schedule has been severely restricted.

But one problem with LWN is that they spend all their time writing and editing quality articles, instead of the boring stuff: ie. taking money from people.  The base rate finally increased this year after 8 years at $5/month (to $7), but I worry that Jon and co will look back at the years of service and lack of retirement savings and wish they’d had secure well-paying jobs like the rest of us.

After numerous monthly reminder emails and face-to-face lobbying at LinuxCon Japan, I’m giving up on “quietly” and asking for your help.  Please: next time you comment on LWN, could you add some sig-line lobbying for the Supporter-level subscription?  Even if you have no intention of ever buying such a thing, at least you know one person will…

What Do We Want? LWN Supporter Subscription!
When Do We Want it? Before LCA 2011!


14 Oct 2010

On C Library Implementation

I recently came across the Judy library, which seems like an impressive set of data structures for implementing mappings (think variable-length arrays or hashes indexed by a simple key).  I also looked through the codec2 sources recently and saw some of the same API issues, so I thought I’d crystallize some of them here:

  1. Packaging code libraries is a pain. There’s an obscene amount of clutter around the code itself. Some can be blamed on autoconf, but common practices like putting the source under src/ make it worse. In Judy there are 29 actual core code files, but 33 infrastructure files (automake et al) and AUTHORS, ChangeLog, INSTALL, COPYING and 13 README files.
  2. Typedefs which are always the same are silly. Don’t typedef long or void * or const void *; you’re confusing documentation with real type safety. Undefined structs are your friend. If your functions take a context, make it a “struct mylib” (or union) and don’t expose the definition in the public header. Your code will be clearer and more typesafe, and so will the library users’.
  3. Context creation and destruction are common patterns, so stick with “mylib_new()” and “mylib_free()” and everyone should understand what to do (a minimal sketch follows this list).  There’s plenty of bikeshedding over the names, but these are the shortest ones with clear overtones to the user.
  4. If your library needs initialization use “mylib_init()” or “mylib_setup()”.
  5. Try to make your API impossible to misuse (ie. no mylib_init() at all if you can manage it), but if that fails assert(), abort() or crash. You are not doing developers a favor by making misuse easy to ignore. If you worry about crashing on users, and can handle an API error gracefully, put that handling under #ifdef NDEBUG, like assert does. But this is not where you should be spending your development effort!
  6. If your only possible error is out-of-memory, consider not handling it at all. It’s pretty certain none of your library users will handle it (remember Tridge’s Law: untested code is broken code). If it’s convenient to return NULL, do so. Otherwise offer a way of registering an error function and let the user deal with setjmp etc. Do not complicate your API to handle this case.
  7. Always return the error. If you return a pointer, NULL is standard. Magic pointers for different errors can work too, but at that point some other method might be preferable.
  8. Think hard before implementing multiple levels of interface. It’s better to have everyone understand your interface and have 10% not use it (or hack it!) because it’s incomplete, than have 50% reinvent it because yours is too complicated.
  9. There’s a standard naming for “I know what I’m doing” low-level alternate functions: the single-underscore prefix (eg. _exit()).  That’s your one extra level of interface, and if you need more see the previous point.  Perhaps another library is in order, on top of this one? This is a standard violation: see Glen’s comment below.
  10. Don’t worry about namespace issues too much.  Avoid the standard namespace and settle on a short prefix (eg. opt_, judy_, talloc_), but there’s little you can do if your name hits someone else’s so don’t make your code a pain to use just to try to uniquify the names (ccan_rusty_russell_opt_ prefixes, for example).
  11. Your header(s) should be readable, and in a readable order.  I consider extracting comments a complete waste of time, but kerneldoc and doxygen et al. provide a consistent model for documenting your API in the headers, and should be used as such.
  12. Don’t sweat portability too much.  Try not to paint yourself into a corner, but if you’re not going to test on other platforms leave the serious effort until someone does.
  13. I do consider use of C++ keywords in headers overly unfriendly to our C++ cousins, but in the code it’s fine; I avoid putting extern “C” in my headers, as C++ people can do that (and whatever else they need) before #including the header.  Again, if you’re not going to test it, don’t work up a sweat about it.
  14. There’s a standard naming scheme for C, and it’s all lower case with underscores as separators.  Don’t BumpyCaps.
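
Points 2 and 3 in practice: a minimal sketch, assuming a made-up library called “mylib” (not any particular project’s API):

      /* mylib.h -- public header: the struct is declared but never defined
       * here, so callers get type safety without seeing the internals. */
      struct mylib;

      struct mylib *mylib_new(void);
      void mylib_free(struct mylib *ctx);
      int mylib_frob(struct mylib *ctx, const char *name);

      /* mylib.c -- the definition stays private to the implementation. */
      struct mylib {
          int fd;
          char *last_error;
      };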

On some level, each of these influenced CCAN as we thought “how do we encourage people to reuse this code?”   Many are specific applications of the CCAN Golden Rule (aka. “our code should not be ugly”) which drives that attempt to make code reusable.

Packaging cruft in CCAN is sidestepped entirely; since people are expected to include ccan/ in their own code, we leave that to them, with some guidelines on how to use the code:

  • You supply a “config.h” somewhere we can #include it, with various #define HAVE_FOO in it (a sketch appears after this list).
  • The _info file is actually a C program: the header contains (trivial-to-parse, kerneldoc-style) metainformation such as author, description, and example program(s).  These are aimed at human consumption; compiling and executing the code gives the dependencies in a parsable form (still, only the ccan/ dependencies are really amenable to machine use).
  • The (main?) header will be the same name as the module, with “.h” appended, eg “ccan/foo/foo.h”.  The kerneldoc-style comments are the module API documentation.
  • The test/ subdir contains tests of standard forms (eg. run-* tests are to be executed).
  • DEBUG can be defined for “give me extra debugging on this module, performance cost be damned”.  This is separate from normal sanity checks which would be disabled by defining NDEBUG.
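
For example, a supplied config.h might look something like this (a sketch; which HAVE_* macros matter depends on the modules you use):

      /* config.h -- hand-written or generated by the user's own build system;
       * each CCAN module tests only the HAVE_* macros it cares about. */
      #define HAVE_TYPEOF 1
      #define HAVE_STATEMENT_EXPR 1
      #define HAVE_BUILTIN_EXPECT 1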

In other words, you see the code first and foremost when you look in a CCAN module.

We also have a “namespacize” tool (very primitive, needs love) which is designed to rewrite the code in case of namespace clashes (currently it prepends ccan_, and scans dependent modules and fixes them up too).  This allows us to ignore namespace issues :)

The “ccanlint” tool is CCAN’s always-too-primitive checker/scorer tool, which ideally would judge a module on all of the complaints above.  It currently runs the test cases, checks their coverage and the like, but of course it’ll never be sophisticated enough to judge interfaces.  However, it does check for the existence of kerneldoc comments, and even extracts and compiles Example: code fragments (using some fun heuristics) to make sure they haven’t bitrotted.  And I keep adding new checks which make it more useful.

This probably explains why I stay up late writing CCAN modules: I can concentrate on writing the code (and tests!) without the other cruft.  And I don’t even need to stick to an API or ABI for my modules (api* tests indicate an API guarantee, and only three modules have those!).  And because it’s source-level, you don’t have to stretch your interface to cover all the cases; users with weird needs can simply use the code as a basis for their own.

