Summary of “Advanced C Coding For Fun!”

Perhaps there was too much fun, and not enough advanced C coding, as one attendee implied.  My original intent was to walk through a real implementation in the order I coded it, warts and all, but over 50% got cut for time.  After all, it took me 15 minutes in my BoF session just to run through the implementation of ccan/foreach.  (Hi to the three people who attended!)

So I ended up doing a fair bit of waving at other code (yes, mainly in CCAN: if I have a useful trick, I tend to put it there).  Here’s the bullet-point version of my talk with links:

  • CCAN is a CPAN-wannabe project for snippets of C code.
  • Your headers should be a readable and complete reference on your API.
  • Code documentation should be human readable and machine processable (eg. kerneldoc), but extracting it is a waste of time.  See above.
  • Your headers should contain example code, and this should be compile tested and even executed (ccanlint does this).
  • Perl’s TAP (Test Anything Protocol) has a C implementation which is easy to use.
  • You can write a better ARRAY_SIZE(arr) macro than “sizeof(arr)/sizeof((arr)[0])”, using gcc extensions to warn if the argument is actually a pointer, not an array (see the first sketch after this list).
  • I got bitten by strcmp()’s backwards-reading return value (0 means a match) after coding in C for ten years.  I suggest defining a streq() macro (included in the sketch below).
  • It is possible, though quite difficult, to implement a fixed-values iterator macro, aka. foreach.  It’s even efficient if you have C99.
  • Making functions return false rather than exit, even if the caller can’t really handle the failure, makes for easier testing.
  • Making your functions use errno is a bonus, though its semantic limitations are definitely a two-edged sword.
  • A common mistake is to call close, fclose, unlink or free in error paths, not realizing that they can alter errno even if they succeed.
  • Never assume you’ve written malloc-fail-proof code without testing it thoroughly; otherwise you haven’t written malloc-fail-proof code.
  • You can test such “never-happen” failure paths automatically by forking; make sure you give a nice way to get a debugger to the fail point though, and terminate failing tests as early as possible.
  • There are libraries to make option parsing easier than getopt; popt and ccan/opt are two.
  • You can use macros to provide typesafe callbacks rather than forcing callbacks to take void * and cast internally; the compiler will warn you if you change the type of the callback or callback parameter so they no longer match.
  • Do not rely on the user to provide zero’d terminators to tables: use a non-zero value so you’re much more likely to catch a missing terminator.
  • Use talloc for allocation.
  • Don’t return a void * as a handle, even if you have to make up a type.  Your callers’ code will be more typesafe that way.
  • Don’t use global variables in routines unless it’s clearly a global requirement: keep everything in the handle pointer.
  • Valgrind is awesome.  Valgrind with failtesting is invaluable for finding use-after-free and similar exit-path bugs.
  • Fixing a test doesn’t mean your program doesn’t suck.  I “fixed” a one-client-dies-while-another-is-talking-to-it failure by grabbing another client; that’s stupid, though my test now passes.
  • Don’t do anything in a signal handler; write to a nonblocking pipe and handle it in your event loop (see the self-pipe sketch after this list).
  • The best way to see why your program is getting larger over time is to use talloc_report() and see your allocation tree (you can use gdb if you need to, à la Carl Worth).
  • You might want to do something time-consuming like that in a child; remember to use _exit() in the child to avoid side-effects.
  • There are at least two tools which help you dump and restore C structures: genstruct and cdump (coming soon, it’s in the talk’s git tree for the moment).  Both are very limited, though cdump is still being developed.
  • You can use a dump/exec/restore pattern to live-upgrade processes; forking a child to test dump and restore is recommended here!
  • If your restore code is well-defined for restoring fields that weren’t dumped, you can make significant code modifications using this pattern.
  • You can use C as a scripting language with a little boilerplate.  Use “#if 0” as the first line, followed by the code to recompile and exec, then “#else” followed by the actual code.  Make it executable, and the shell will do the right thing (see the example after this list).
  • You can use gdb to do just about anything to a running program; script it if you can’t afford to have it stopped for long.
  • The best hash algorithm to use is the Jenkins lookup3 hash (there’s a ccan/hash convenient wrapper too).
  • The best map/variable array algorithm to use is Judy arrays (much nicer with the ccan/jmap wrapper).
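
A couple of those points are easier to show than to tell.  First, a gcc-only sketch in the spirit of ccan/array_size (the real module guards the extensions with configure tests), plus the streq() macro; BUILD_ASSERT_OR_ZERO and the example array are mine, purely for illustration:

    #include <string.h>

    /* Evaluates to 0, or breaks the build (negative array size) if cond is false. */
    #define BUILD_ASSERT_OR_ZERO(cond) (sizeof(char [1 - 2*!(cond)]) - 1)

    /* Like sizeof(arr)/sizeof((arr)[0]), but refuses to compile if arr is really
     * a pointer: an array and a pointer to its first element differ in type. */
    #define ARRAY_SIZE(arr)                                                    \
        (sizeof(arr) / sizeof((arr)[0])                                        \
         + BUILD_ASSERT_OR_ZERO(!__builtin_types_compatible_p(typeof(arr),     \
                                                              typeof(&(arr)[0]))))

    /* strcmp() returns 0 on a match, which reads backwards inside an if(). */
    #define streq(a, b) (strcmp((a), (b)) == 0)

    static const char *days[] = { "Mon", "Tue", "Wed" };

    int main(void)
    {
        /* Fine: days is an array, so ARRAY_SIZE(days) is 3.
         * With "const char **p = days;", ARRAY_SIZE(p) would fail to compile. */
        return streq(days[ARRAY_SIZE(days) - 1], "Wed") ? 0 : 1;
    }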
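
The signal-handler rule is the classic self-pipe trick.  Here’s a minimal sketch (not code from the talk; the fd names and the choice of SIGCHLD are just illustrative): the handler only writes a byte to a non-blocking pipe, and the event loop reads it whenever it gets around to it.

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int sigpipe_fd[2];   /* [0] read end for the event loop, [1] write end */

    static void handle_sigchld(int signum)
    {
        char c = (char)signum;

        /* Only async-signal-safe work in here: one write, ignoring a full pipe. */
        (void)write(sigpipe_fd[1], &c, 1);
    }

    int main(void)
    {
        struct sigaction sa;

        if (pipe(sigpipe_fd) != 0)
            return 1;
        fcntl(sigpipe_fd[0], F_SETFL, O_NONBLOCK);
        fcntl(sigpipe_fd[1], F_SETFL, O_NONBLOCK);

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_sigchld;
        sigaction(SIGCHLD, &sa, NULL);

        /* In a real program, sigpipe_fd[0] joins the poll()/select() set and the
         * actual SIGCHLD handling happens in the event loop, not the handler. */
        printf("watch fd %d in your event loop\n", sigpipe_fd[0]);
        return 0;
    }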
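
And the “#if 0” trick in full, as a hedged example (the compiler flags and temp-file handling are just one way to do it).  The shell sees “#if 0” as a comment and runs the next two lines, which recompile the file and exec the result; the C compiler skips those lines because of the #if 0 and only builds what follows the #else:

    #if 0
    TMP=$(mktemp) && cc -x c -o "$TMP" "$0" && exec "$TMP" "$@"
    exit 1
    #else
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        printf("Hello from a C \"script\" (%d args)\n", argc - 1);
        return 0;
    }
    #endif

chmod +x the file and run it directly: with no shebang the shell falls back to interpreting it itself, which is what kicks off the recompile-and-exec.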

That was all I had room for; there was none for questions, and even the last two points were squished onto the final “Questions?” slide.

The LCA Dinner Rule

Dinner was great as only a room full of well-fed FOSS geeks can be, but I felt that my time on-stage was too long and too chaotic; I apologise.  The LCA team have done such a thorough job of recovering from events that it’s a surprise when things don’t magically come together.

So I propose an “LCA Dinner Rule” for future conferences: dinner activities should span no more than one hour, preferably combining wit with brevity.

(Oh, and I have a strong feeling that next year’s dinner is going to be a superb event…)

On C Linked Lists

There are two basic styles of double-linked lists in C; I’ll call them ring style and linear style.

The Linux kernel has ring-style, defined in include/linux/list.h.  You declare a ‘struct list_head’ and everyone who wants to be in the list puts a ‘struct list_head list’ in their structure (struct list_node in CCAN’s version).  This forms a ring of pointers in both the forward and reverse directions.

SAMBA uses linear-style, defined in lib/util/dlinklist.h.  You simply declare a ‘struct foo *’ as your list, and everyone who wants to be in the list puts a ‘struct foo *prev, *next’ in their structure.  This forms a NULL-terminated list in the forward direction (the reverse direction became a ring recently to facilitate tail access).  (Linux’s hlist.h datastructure is similar, only done in list.h style).
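
To make that concrete, here’s a sketch of the two declaration styles (the struct and field names are made up; the ring-style node is the shape of list.h’s struct list_head):

    /* Ring style (Linux list.h / CCAN list): head and members embed the same node. */
    struct list_head {
        struct list_head *next, *prev;
    };

    struct child {
        const char *name;
        struct list_head list;      /* this element's place in the ring */
    };

    struct list_head children;      /* the head: just another node in the ring */

    /* Linear style (SAMBA dlinklist.h): the head is a plain pointer to the first
     * element, and the members point at each other directly. */
    struct kid {
        const char *name;
        struct kid *prev, *next;    /* next is NULL-terminated going forward */
    };

    struct kid *kids;               /* the head: NULL when the list is empty */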

Points in favor of ring-style lists:

  1. Insertion and deletion are branchless, as the elements and head are homogeneous.  For example, inserting new just after head:
     /* Ring style: head and elements share a type, so no NULL checks needed. */
     head->next->prev = new;
     new->next = head->next;
     new->prev = head;
     head->next = new;

    vs:

    /* Linear style: inserting at the front needs a branch for the empty list. */
    if (!head) {
        new->prev = head = new;   /* prev ring gives cheap tail access */
        new->next = NULL;
    } else {
        new->prev = head->prev;
        head->prev = new;
        new->next = head;
        head = new;
    }
  2. list_add, list_del are inline functions, not macros, since the types are known.

Points in favor of linear-style lists:

  1. They’re typesafe, since the head pointer and internal pointers are all the correct type.
  2. Forward iteration is simpler, since the list ends at NULL rather than back at the head. In theory this could free a register, but the bigger difference is that it’s often useful to have NULL in your iterator once the loop is done.
  3. As a corollary, iteration, initialization and emptiness testing don’t need some tricky macros:
      struct foo *head = NULL;
      if (head == NULL) ...
      for (i = head; i; i = i->next) ...
    

    vs

      LIST_HEAD(head);
      if (list_empty(&head)) ...
      list_for_each_entry(i, &head, list) ...
    
  4. Uses slightly less space for the head pointer (one pointer, vs two).

So how important is efficiency of insert and delete in a real project?  To provide some data on this, I first annotated the linux kernel so that each list.h operation would increment a counter, which I could dump out every so often.  Then I booted the kernel on my laptop and ran as normal for three days.
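
The instrumentation was of this general shape (a sketch only, not the actual patch): one counter per operation, bumped inside each list.h helper, with the totals dumped out periodically.

    /* Sketch: the real change was made inside include/linux/list.h itself. */
    enum list_op {
        LIST_OP_EMPTY_TEST,
        LIST_OP_DELETE,
        LIST_OP_ADD,
        LIST_OP_ITER_START,
        LIST_OP_ITER_NEXT,
        LIST_OP_REPLACE,
        LIST_OP_OTHER,
        LIST_OP_MAX
    };

    static unsigned long list_op_count[LIST_OP_MAX];

    static inline int counted_list_empty(const struct list_head *head)
    {
        list_op_count[LIST_OP_EMPTY_TEST]++;
        return head->next == head;
    }

The table below shows the operation mix after those three days.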

Operation            Frequency
Empty Test           45%
Delete               25%
Add                  23%
Iterate Start        3.5%
Iterate Next         2.5%
Replace              0.76%
Other Manipulation   0.0072%

Firstly, I was surprised that we add and remove from lists much more than we look through them. On reflection, this makes sense: if lookups are really common we don’t use a linked list. And note how often we check for whether the list is empty: this looks like a “list as a slow path” pattern. I wonder if SAMBA’s (or other projects’) list usage looks the same… anyone?

Secondly, we delete more than we add, but we could be “deleting” initialized-but-unadded elements. I didn’t chase this beyond re-reading and enhancing my patch to measure “other manipulations” to see if they could explain it (no).

Thirdly, when we iterate it’s short: the list is empty or we break out on the first element 28% of the time (I didn’t differentiate). I wonder if a single-linked list would be an interesting micro-space-optimization for many of these lists.

Summary: I like the safety of SAMBA’s lists, but there’s clearly real merit in eliminating those branches for add and delete.  It’s a genuine performance-usability trade-off, so I think we’ll still have both in CCAN…

On Conference Harassment…

The recent attention given to harassment at conferences was sparked by the sexual assault described by Noirin Shirley at ApacheCon; her particular attacker’s actions were deranged and criminal, but it’s clearly a variation on an ongoing theme of harassment.

This issue raises two questions for future conferences: how do we prevent an atmosphere which encourages this, and how do we make sure everyone knows that we don’t have such an atmosphere at the conference?  The two are related, but we need both.

Atmosphere matters; let’s not discount its power because it’s intangible.  It is the atmosphere at linux.conf.au which inspires new projects and enlivens existing ones among the attendees.  So let’s ensure it’s a positive one, and let’s talk about it.  I’m confident the much-harried LCA organizers will integrate an anti-harassment policy, but I encourage them to do so boldly, loudly and soon. [Correction: They already have. Front page, first paragraph has “LCA2011 is dedicated to a harassment-free conference experience for everyone. See our anti-harassment policy for details.”]

It is worth expending serious effort addressing this problem.  I’ve only experienced prolonged negative sexual stereotyping once; the only help was someone who was unrelentingly positive and set a clear example of welcome, which others followed.  Let’s all try to be like that.

There are two things I promise to try to do this time around:

  1. Assume everyone is a delegate; that’s a far lesser error than being the tenth person to assume you are a tech-uninterested partner.
  2. Welcome a newcomer, ask about what they hack on and listen, introduce them to someone else, then leave them to it. When I do this, I always learn something.

(This post inspired by Alex, who is encouraging me to be more self-aware, by example).

Not Buying Another HTC

Sent back my HTC Magic for warranty repair (wouldn’t charge) and they upgraded firmware; I can’t install cyanogenmod on it any more, making it fairly useless to me.  I spent two days of my holiday time trying and failing the goldcard lottery.

I just dropped my other HTC Magic and shattered the glass, so I’m in the market for a new Android phone via ebay.  Given my phones only last about 6-12 months, any advice for older cyanogenmod-friendly phones?  Or should I just give in and pay $500 for a Samsung Galaxy S?

The Art of Community (TAoC)

Fuzzy topics annoy me, and hurt my brain as I struggle to quantify them.  People who talk on these topics grate on my nerves; it’s a kind of geek gag reflex.

So I really didn’t like Jono Bacon’s The Art of Community.  It was like a counseling session where all the faults you half-suspected you had get laid out in a massive TODO list with cogent reasoning which makes you squirm.  By the second chapter, I really wanted to knee Jono in the nuts.  My mind kept returning to CCAN, my half forgotten could-be-great-oneday project.  It got so distracting that by halfway through the fourth chapter I put it down and haven’t picked it up again.

But I will return.  And I will re-read it from the start, for one reason: it reshaped my thinking about building community projects, and that has led directly to the increase in activity in the past months around CCAN.  Jono has thought really hard and long about this topic, and provides a framework which the rest of us can map our projects onto.  That alone is a serious intellectual feat; the clear, friendly prose is just a bonus.

Next time, I just hope I can tick enough of TAoC’s boxes that I don’t feel like I’ve doomed my pet project like a puppy chained to a submarine.

[Footnote: Jono’s one of those annoying “doers”.  I find his http://openrespect.org/ self-evident: I want to be respected while corrected, but I only recently figured that out and I suspect others might have missed it too.]

Hashtables vs Judy Arrays, Round 1

I spent some of Friday and yesterday working on my CCAN hashtable module (renamed to htable); I wrote a set of benchmarks and ran them until my laptop steamed and I had burnished the implementation into a sleek shining bundle of awesome.

Then it was time to run the same benchmark on Judy Arrays; code I had only become aware of via Yenga‘s comment on my blog.  A few weeks back I wrote some friendly wrappers for libJudy, since I found the raw interface made my eyes bleed.  Fortunately, that proved quite simple, but it’s worth making a note for those who (like me three weeks ago) aren’t familiar with Judy Arrays.  The theory is that different datastructures should be used for different key layouts: high density or sparse, many or few, always concentrating on reducing cache impact.  The result is a serious amount of code for all the different cases: a risky, full-frontal assault on the problem rather than the normal cautious approach of using a single data structure then ignoring or special-casing a few corner cases.
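
For those who haven’t met it, this is roughly what the raw JudyL interface looks like (a sketch with error checking omitted; JLI/JLG/JLD/JLFA are the real macros, the surrounding code is just illustrative), which should explain the appeal of a thin wrapper:

    #include <stdio.h>
    #include <Judy.h>

    int main(void)
    {
        Pvoid_t judy = (Pvoid_t)NULL;   /* the whole array is one opaque pointer */
        Word_t index = 42, freed;
        Word_t *pvalue;
        int rc;

        /* Insert: JLI hands back a pointer to the value slot for this index. */
        JLI(pvalue, judy, index);
        *pvalue = 1066;

        /* Lookup: pvalue comes back NULL if the index isn't present. */
        JLG(pvalue, judy, index);
        if (pvalue)
            printf("judy[%lu] = %lu\n", (unsigned long)index,
                   (unsigned long)*pvalue);

        /* Delete one index, then free the whole array. */
        JLD(rc, judy, index);
        JLFA(freed, judy);
        return rc == 1 ? 0 : 1;
    }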

Anyway, my initial benchmark was designed to push hashtables pretty hard, particularly as I used delete markers in my original implementation.  That tends to make performance suck after a great deal of churn, so I churn the hashtable in the middle of the benchmark.  It used consecutive keys, but I switched to spacing them out some more after that proved to be incredibly performant with Judy (which is ordered, unlike a hashtable).

I compared the JudyL implementation with my htable implementation; in fact I ran it at 75% full (dense), 37% full (sparse) and 37% full with bitstealing disabled (dumb).  75% is high for a typical hashtable, but my implementation seems quite happy operating at that point.  I was interested in mapping an int to an object: for the hash table the object contained the int key as well as a payload pointer (to self, for consistency checking).  For Judy the object simply contained a payload pointer; a Judy array itself contains the key.  In both cases I used 50,000,000 objects, and divided the total time by the number of objects to give nanoseconds per operation.

The Good: Ordered, Dense Keys

First, doing things Judy is good at: the initial insertion of keys 0 to 49,999,999 in order, looking them up in order, and looking up keys 50,000,000 to 99,999,999 in order. Note that to be fair, you have to add about 150M and 300M to the hashtable memory usage to get the peak usage, since the tables double-allocate-and-copy, then free the old table.

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Memory                420M   640M         895M          895M
Initial insert (ns)   137    332          471           482
Linear hit (ns)       19     174          176           214
Linear miss (ns)      14     234          184           286

The main story here is that Judy in order is unbeatably cache-friendly. It’s nice to notice that bitstealing optimization pays off for hashtables as fewer buckets need to be dereferenced, and similarly with lower density, but Judy wins hands down here.

OK, now we look up random elements and access them: the access matters, since it’s more realistic and the hash has to touch the element to verify the key anyway. Since our elements are allocated as one huge array, accessing them at random (even via a hash table) costs us more. Oh, and our “random” is actually “10007 apart”, which is harder on the caches than true randomness would be, since genuinely random access might occasionally be cache hot.

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Random hit (ns)       361    330          357           375

The hashtable doesn’t often cache miss, and the denser hash seems to win more than it loses from having to look at more buckets, but with this kind of key density Judy is very competitive anyway.

Now we delete everything and reinsert them all (in order):

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Delete (ns)           199    171          200           172
Reinsert (ns)         148    191          197           175

As my hashtable doesn’t shrink it gets an artificial benefit, yet Judy again stays very competitive. Our dumb hash does surprisingly well here, too; there’s no reason for it to be different from the “smart” sparse hash, but it was a repeatable effect. Perhaps the CPU is doing the noop bit manipulations (anding with all 1s, etc) faster?

The Bad: Unordered Sparse Keys

Now let’s look at churn. Fresh datastructures are easy, but over time we want to see what happens as elements are added and deleted. In my case, I iterate through the keys and delete each object then re-add it with the key increased by 50,000,000. With a hashtable using delete markers, this can reveal weaknesses in the way they are garbage collected. Then I iterate one last time and space the keys out 9 values apart. The objects themselves are still dense, but the keys are far less so (ranging from 0 to 449,999,991: I chose 9 because I was originally playing with a few hundred million objects and didn’t want to overflow).

So with sparser keys and a worn-in datastructure, we do our three kinds of lookups again:

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Memory                607M   640M         895M          895M
Linear hit (ns)       61     215          202           265
Linear miss (ns)      60     301          223           278
Random hit (ns)       736    405          386           445

Note that Judy consumes a little more memory at this point, though it’s still ahead of the densest hash.

But finally we see a case where a good hashtable is clearly superior: random hits on a full range of keys. The hashtable has suffered a little (there are quite a few deleted records there after all the churn), but 10-20% penalty for churn is not as bad as Judy’s 100% penalty for sparser keys.

Last of all, we delete every second object and then re-add with modified keys:

Measure               Judy   Dense hash   Sparse hash   Dumb hash
Delete Half (ns)      266    120          103           103
Add Half (ns)         205    200          103           105

Judy is still suffering from those sparse keys, though add is competitive with the dense hash (made even denser by those deleted elements).

Summary

Judy allows ordered access and lets you keep the key external rather than putting it into your structure. If those matter, the choice is clear. Likewise, if you can’t afford a latency spike as the hash table grows and rehashes, or you’re worried about malicious insertion patterns bombing a hash chain, Judy arrays again have the edge.

If you have adjacent access patterns, Judy will win. If you have dense keys, Judy will be competitive. If you have random access and sparse keys, Judy lookups and deletes could be twice as slow as an optimized hash table, but using 20% to 100% less memory.

And one final note on random access: if it’s not quite random, such as if every second lookup is near the previous one, time drops from 736 to 410 nanoseconds, back competitive with the hashtable result. It’s quite impressive.

[Here are the URLs: the Judy arrays homepage (but your distro probably has a package, Ubuntu does), the ccan wrapper module, and the ccan htable module.]

A nomenclature anecdote

“…an author, with thesaurus in hand, chooses the names of variables carefully…” — Knuth, Literate Programming

The Elements of Style‘s invaluable advice for prose (“Omit needless words”) is mapped into C coding as “Curt names win”: I crave descriptive, short, punchy names. Though perhaps not to the extent of K&R, whose approach might be described as “Omit needless vowels”.

Sometimes though, I get baffled by my own bullshit.  Yesterday I was benchmarking Judy arrays vs hash tables (more in another post) and I noticed that I had named two equivalent functions “htable_find” vs “jmap_get”.  In my mind find has overtones of searching and get has more direct connotations: say, a simple dereference. So why did I name them differently?  I had to implement that htable_find myself, whereas for jmap_get I only implemented a one-line wrapper :)

Since the operations are very fast, jmap is already a published CCAN module, and get is one letter shorter than find, I renamed htable_find to htable_get.  Consistent naming has benefits of its own…

LWN Supporter Subscription: Take My Money!

I’ve been quietly lobbying Linux Weekly News to offer a $500 – $1000 “Supporter”-level subscription option; LWN has saved my sanity for the last two years as my conference schedule has been severely restricted.

But one problem with LWN is that they spend all their time writing and editing quality articles, instead of the boring stuff: ie. taking money from people.  The base rate finally increased this year after 8 years at $5/month (to $7), but I worry that Jon and co will look back at the years of service and lack of retirement savings and wish they’d had secure well-paying jobs like the rest of us.

After numerous monthly reminder emails and face-to-face lobbying at LinuxCon Japan, I’m giving up on “quietly” and asking you for your help.  Please: next time you comment on LWN, could you add some sig-line lobbying for the Supporter-level subscription?  Even if you have no intention of ever buying such a thing, at least you know one person will…

What Do We Want? LWN Supporter Subscription!
When Do We Want it? Before LCA 2011!