Rusty Russell's Coding Blog | Stealing From Smart People

I was delighted that Jon Corbet pinged me to say he was finally implementing a supporter option for LWN.  It’s been about 12 months since I started asking about it, and 6 since I started asking publicly.  When it finally arrived, in classical FOSS brand-suicide style, it was named the “Maniacal supporter” option.  I don’t think Jon believed anyone would actually pay more “for nothing”, but curiosity finally won out.

But he’s wrong: people want the consistent commentary and in-depth analysis that only dedicated experts like Jon can provide.  And we know that if they don’t get enough money, they’ll have to stop writing and take day jobs; this is not some abstract charity.  I want Jon to be comfortable and LWN financially secure, able to concentrate on what they do best, which seems to be a rare skill in our community.  This is a start in that direction; I welcome your suggestions on what to do next…


Jokes aside, I don’t prepare my conference talks the night before.  I took a week off work to prepare my linux.conf.au talk this year (two weeks before the conference, and I still spent a couple of extra work days finishing it the following week).  That kind of spontaneity takes preparation!

Here’s a rough calculator of preparation time for an LCA-style talk.  Make sure you finish at least a week before, to allow slippage!

Preparation Time for Standard Talk (~45 minutes)

If you have given it before:

  • If you were happy with it and not changing it: 15 minutes to re-check, change conference name and date.
  • If you were happy with it and changing it slightly: +1 hour for a complete run-through.
  • If you were a little unhappy with it, but content will not change: +5 hours for reviewing previous video and googling for feedback and taking notes, then running through changed sections twice and complete run-through once.

Prior work:

  • If you’re not the world expert on what you’re talking about, allow at least a week of research time.
  • If the topic is vague allow at least a month of mulling time, where the topic sits in your brain.  For longer periods I recommend jotting down your ideas.  (I did this for an entire year before my OLS keynote, and I knitted a theme from the contents of that text file full of thoughts and examples).
  • One hour to a day to plan your talk structure: what are your main points, what’s the extra magic?

Writing the talk:

  • 10 minutes per basic slide.  Usually I’d expect 25 slides, so say 4 hours.
  • 30 minutes per diagram (five minutes of this will be trying to figure out if you really need a diagram: you probably do!).  I’d expect five to ten diagrams, so say 5 hours.
  • Five hours per demo.  Not just setting it up in the first place, but making it robust and repeatable and practicing switching to and from your presentation adds time.
  • Two hours per run-through (since you tend to stop and mod the first few times).  You’ll want at least one more of these than you have time for, but I’d expect 8 hours for this.

Additional overheads:

  • Using new software: +4 hours if you’re on your own, +1 hour if you have an expert available.
  • Any project work where you have to document the steps for your talk: double your normal project time for the overhead, to ensure it’s comprehensible and maximally useful to the audience (rather than merely working).
  • Any new presentation technique you haven’t used before, add 4 hours for two additional run-throughs.

Preparation Time for A Tutorial (~1.5 hours)

Similar calculations, but you’ll have many more demos so it’ll be more than twice as long.  The real killer is that you have to practice timings with real people who are similar to your target audience.  This means in addition to everything else, you’ll want to give it for some local group at least twice, because there’s no way you can know what the timing will be like until you’ve done that.  Say +2 hours to organize the practice sessions, and +6 hours to run them (this includes transport, setup, testing and time overruns for each one).

Testing Time

Testing time happens standing in the venue, at the podium with your setup ready to go.  I allow 5 minutes for video setup.  If you’ve not presented on the laptop before, +15 minutes.  If you’re not always running 1024×768, +10 minutes.  If you want audio, +5 minutes. If you have a video to show, +5 minutes. If you have an interactive demo, +5 minutes to find a practice volunteer, +20 minutes to run through the demo with them.

In general, allow half your testing time the day before (i.e. you’ll need access to the venue), and the rest in the space before your talk.

An Example

So, this gives a preparation time for my LCA 2011 talk as:

  • 1 day planning.
  • 6 hours for 35 basic slides.
  • 2 hours for 4 diagrams.
  • 15 hours for 3 demos.
  • 8 hours for run-throughs.
  • 4 hours for messing with svgslides, even though I didn’t really use it in the end.
  • 3 days for coding up the example project, and documenting that code.
  • 4 hours for additional run-throughs because I hadn’t presented using a side-bar and emacs before.

Giving a total of 71 hours (assuming 8-hour days).  That’s probably about right.  And the required 30+ minutes of testing time explains why I didn’t end up having people telnet into my laptop for the demos; if I’d tested that the day before, we might have been able to organize something.


Perhaps there was too much fun, and not enough advanced C coding, as one attendee implied.  My original intent was to walk through a real implementation in the order I coded it, warts and all, but over 50% got cut for time.  After all, it took me 15 minutes in my BoF session just to run through the implementation of ccan/foreach.  (Hi to the three people who attended!).

So I ended up doing a fair bit of waving at other code (yes, mainly in CCAN: if I have a useful trick, I tend to put it there).  Here’s the bullet-point version of my talk with links:

  • CCAN is a CPAN-wannabe project for snippets of C code.
  • Your headers should be a readable and complete reference on your API.
  • Code documentation should be human-readable and machine-processable (e.g. kerneldoc), but extracting it is a waste of time.  See above.
  • Your headers should contain example code, and this should be compile tested and even executed (ccanlint does this).
  • Perl’s TAP (Test Anything Protocol) has a C implementation which is easy to use.
  • You can write a better ARRAY_SIZE(arr) macro than “sizeof(arr)/sizeof((arr)[0])”, using gcc extensions to warn if the argument is actually a pointer, not an array.
  • I got bitten by strcmp()’s return value (0, i.e. false, when the strings match) even after ten years of coding in C.  I suggest defining a streq() macro.
  • It is possible, though quite difficult, to implement a fixed-values iterator macro, aka. foreach.  It’s even efficient if you have C99.
  • Making functions return false rather than exit, even if the caller can’t really handle the failure, makes for easier testing.
  • Making your functions use errno is a bonus, though its semantic limitations are definitely a two-edged sword.
  • A common mistake is to call close, fclose, unlink or free in error paths, not realizing that they can alter errno even if they succeed.
  • Never believe you’ve written malloc-fail-proof code without testing it thoroughly; until you have, you haven’t.
  • You can test such “never-happen” failure paths automatically by forking; make sure you give a nice way to get a debugger to the fail point though, and terminate failing tests as early as possible.
  • There are libraries to make option parsing easier than getopt; popt and ccan/opt are two.
  • You can use macros to provide typesafe callbacks rather than forcing callbacks to take void * and cast internally; the compiler will warn you if you change the type of the callback or callback parameter so they no longer match.
  • Do not rely on the user to provide zero’d terminators to tables: use a non-zero value so you’re much more likely to catch a missing terminator.
  • Use talloc for allocation.
  • Don’t return a void * as a handle, even if you have to make up a type.  Your callers’ code will be more typesafe that way.
  • Don’t use global variables in routines unless it’s clearly a global requirement: keep everything in the handle pointer.
  • Valgrind is awesome.  Valgrind with failtesting is invaluable for finding use-after-free and similar exit-path bugs.
  • Fixing a test doesn’t mean your program doesn’t suck.  I “fixed” a one-client-dies-while-another-is-talking-to-it by grabbing another client; that’s stupid, though my test now passes.
  • Don’t do anything in a signal handler; write to a nonblocking pipe and handle it in your event loop.
  • The best way to see why your program is getting larger over time is to use talloc_report() and see your allocation tree (you can use gdb if you need to, à la Carl Worth).
  • You might want to do something time-consuming like that in a child; remember to use _exit() in the child to avoid side-effects.
  • There are at least two tools which help you dump and restore C structures: genstruct and cdump (coming soon, it’s in the talk’s git tree for the moment).  Both are very limited, though cdump is still being developed.
  • You can use a dump/exec/restore pattern to live-upgrade processes; forking a child to test dump and restore is recommended here!
  • If your restore code is well-defined for restoring fields that weren’t dumped, you can make significant code modifications using this pattern.
  • You can use C as a scripting language with a little boilerplate.  Use “#if 0” as the first line, followed by shell code to recompile and exec the result, then “#else” followed by the actual code, ending with “#endif”.  Make the file executable, and the shell will do the right thing.
  • You can use gdb to do just about anything to a running program; script it if you can’t afford to have it stopped for long.
  • The best hash algorithm to use is the Jenkins lookup3 hash (there’s a convenient ccan/hash wrapper too).
  • The best map/variable array algorithm to use is Judy arrays (much nicer with the ccan/jmap wrapper).
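Two of those snippets are small enough to show inline.  A sketch (in the spirit of ccan/array_size and the suggested streq(), not their exact code; the pointer check relies on GCC/Clang extensions):

```c
#include <string.h>

/* streq(): say what you mean; strcmp() returns 0 (false) on a match. */
#define streq(a, b) (strcmp((a), (b)) == 0)

/* An ARRAY_SIZE() that breaks the build if handed a pointer: the
 * bitfield width goes negative when the argument's type matches a
 * pointer to its own first element (GCC/Clang only).  The sizeof is
 * multiplied by 0, so the check adds nothing to the count. */
#define must_be_array(a) \
    (0 * sizeof(struct { int _check : 1 - 2 * __builtin_types_compatible_p( \
        __typeof__(a), __typeof__(&(a)[0])); }))

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + must_be_array(arr))
```

With a real array, `ARRAY_SIZE(arr)` compiles and counts; with a pointer, the negative bitfield width is a compile error rather than a silently wrong answer.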

That was all I had room for; there was none for questions, and even the last two points were squished onto the final “Questions?” slide.
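The errno-clobbering pitfall from the list above is worth a concrete sketch; load_file() is a made-up example, but the save-and-restore pattern is the point:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Made-up example: on the error path, fclose() and free() can change
 * errno even when they succeed, so save it across the cleanup. */
static int load_file(const char *name, char **bufp)
{
    FILE *f = fopen(name, "r");
    char *buf;

    if (!f)
        return -1;              /* errno set by fopen */

    buf = malloc(65536);
    if (!buf || fread(buf, 1, 65536, f) == 0) {
        int saved_errno = errno;
        free(buf);              /* may clobber errno... */
        fclose(f);              /* ...and so may this */
        errno = saved_errno;    /* restore for the caller */
        return -1;
    }

    *bufp = buf;
    fclose(f);
    return 0;
}
```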


28 Jan 2011

The LCA Dinner Rule

Dinner was great as only a room full of well-fed FOSS geeks can be, but I felt that my time on-stage was too long and too chaotic; I apologise.  The LCA team have done such a thorough job of recovering from events that it’s a surprise when things don’t magically come together.

So I propose an “LCA Dinner Rule” for future conferences: that dinner activities span no more than one hour, preferably combining wit with brevity.

(Oh, and I have a strong feeling that next year’s dinner is going to be a superb event…)


Putting “Advanced” in the title did not have the desired effect of scaring people off.  Nonetheless, it went well: maybe because I was on the wired network, so no one could access my server to find the bugs :)

There wasn’t anywhere obvious to place a link about my talk, so here’s the git repository.


12 Dec 2010

On C Linked Lists

There are two basic styles of double-linked lists in C; I’ll call them ring style and linear style.

The Linux kernel has ring-style, defined in include/linux/list.h.  You declare a ‘struct list_head’ and everyone who wants to be in the list puts a ‘struct list_head list’ in their structure (struct list_node in CCAN’s version).  This forms a ring of pointers in both the forward and reverse directions.

SAMBA uses linear-style, defined in lib/util/dlinklist.h.  You simply declare a ‘struct foo *’ as your list, and everyone who wants to be in the list puts a ‘struct foo *prev, *next’ in their structure.  This forms a NULL-terminated list in the forward direction (the reverse direction became a ring recently to facilitate tail access).  (Linux’s hlist.h datastructure is similar, only done in list.h style).

Points in favor of ring-style lists:

  1. Insertion and deletion are branchless, as the elements and head are homogeneous:
     head->next->prev = new;
     new->next = head->next;
     new->prev = head;
     head->next = new;

    vs:

    if (!head) {
        new->prev = head = new;
        new->next = NULL;
    } else {
        new->prev = head->prev;
        head->prev = new;
        new->next = head;
        head = new;
    }
  2. list_add, list_del are inline functions, not macros, since the types are known.

Points in favor of linear-style lists:

  1. They’re typesafe, since the head pointer and internal pointers are all the correct type.
  2. Forward iteration is simpler, since the list ends at NULL rather than back at the head. In theory this could free a register, but the bigger difference is that it’s often useful to have NULL in your iterator once the loop is done.
  3. As a corollary, iteration, initialization and emptiness testing don’t need tricky macros:
      struct foo *head = NULL;
      if (head == NULL) ...
      for (i = head; i; i = i->next) ...
    

    vs

      LIST_HEAD(head);
      if (list_empty(&head)) ...
      list_for_each(i, head, list) ...
    
  4. Uses slightly less space for the head pointer (one pointer, vs two).
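To make the comparison concrete, here’s a minimal self-contained sketch of both styles (names invented for illustration; the real APIs are list.h and dlinklist.h):

```c
#include <stddef.h>

/* Ring style (list.h-like): head and members are the same node type,
 * so insertion after the head is branchless. */
struct list_node {
    struct list_node *next, *prev;
};

static void ring_init(struct list_node *head)
{
    head->next = head->prev = head;
}

static void ring_add(struct list_node *head, struct list_node *new)
{
    head->next->prev = new;
    new->next = head->next;
    new->prev = head;
    head->next = new;
}

static int ring_empty(const struct list_node *head)
{
    return head->next == head;
}

/* Linear style (dlinklist.h-like): the head is just a typed pointer,
 * NULL when empty, so insertion needs a branch.  prev forms a ring to
 * the tail, as SAMBA's does, to make tail access cheap. */
struct foo {
    struct foo *next, *prev;
    int value;
};

static void linear_add(struct foo **headp, struct foo *new)
{
    struct foo *head = *headp;

    if (!head) {
        new->prev = *headp = new;
        new->next = NULL;
    } else {
        new->prev = head->prev;
        head->prev = new;
        new->next = head;
        *headp = new;
    }
}
```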

So how important is the efficiency of insert and delete in a real project?  To provide some data on this, I first annotated the Linux kernel so that each list.h operation would increment a counter, which I could dump out every so often.  Then I booted the kernel on my laptop and ran it as normal for three days.
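The annotation itself needs nothing fancy; a hypothetical sketch of the counting (presumably the real patch wrapped the list.h inlines in much this way):

```c
#include <stdio.h>

/* Hypothetical sketch: one counter per wrapped list.h operation,
 * dumped every so often. */
enum list_op { OP_EMPTY_TEST, OP_ADD, OP_DELETE, OP_ITERATE, OP_MAX };

static unsigned long op_count[OP_MAX];

#define COUNT_OP(op) (op_count[op]++)

static void dump_op_counts(void)
{
    static const char *const names[OP_MAX] = {
        "Empty Test", "Add", "Delete", "Iterate",
    };
    int i;

    for (i = 0; i < OP_MAX; i++)
        printf("%-12s %lu\n", names[i], op_count[i]);
}
```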

Operation            Frequency
Empty Test           45%
Delete               25%
Add                  23%
Iterate Start        3.5%
Iterate Next         2.5%
Replace              0.76%
Other Manipulation   0.0072%

Firstly, I was surprised that we add and remove from lists much more than we look through them. On reflection, this makes sense: if lookups are really common we don’t use a linked list. And note how often we check for whether the list is empty: this looks like a “list as a slow path” pattern. I wonder if SAMBA (or other projects) list usage looks the same… anyone?

Secondly, we delete more than we add, but we could be “deleting” initialized-but-unadded elements. I didn’t chase this beyond re-reading and enhancing my patch to measure “other manipulations” to see if they could explain it (no).

Thirdly, when we iterate it’s short: the list is empty or we break out on the first element 28% of the time (I didn’t differentiate). I wonder if a single-linked list would be an interesting micro-space-optimization for many of these lists.

Summary: I like the safety of SAMBA’s lists, but there’s clearly real merit in eliminating those branches for add and delete.  It’s a genuine performance-usability trade-off, so I think we’ll still have both in CCAN…


6 Dec 2010

On Conference Harassment…

The recent attention given to harassment at conferences was sparked by the sexual assault described by Noirin Shirley at ApacheCon; her particular attacker’s actions were deranged and criminal, but it’s clearly a variation on an ongoing theme of harassment.

This issue raises two questions for future conferences: how do we prevent an atmosphere which encourages this, and how do we make sure everyone knows that we don’t have such an atmosphere at the conference?  The two are related, but we need both.

Atmosphere matters; let’s not discount its power because it’s intangible.  It is the atmosphere at linux.conf.au which inspires new projects and enlivens existing ones among the attendees.  So let’s ensure it’s a positive one, and let’s talk about it.  I’m confident the much-harried LCA organizers will integrate an anti-harassment policy, but I encourage them to do so boldly, loudly and soon.  [Correction: they already have.  Front page, first paragraph: "LCA2011 is dedicated to a harassment-free conference experience for everyone. See our anti-harassment policy for details."]

It is worth expending serious effort addressing this problem.  I’ve only experienced prolonged negative sexual stereotyping once; the only help was someone who was unrelentingly positive and set a clear example of welcome, which others followed.  Let’s all try to be like that.

There are two things I promise to try to do this time around:

  1. Assume everyone is a delegate; that’s a far lesser error than being the tenth person to assume someone is a tech-uninterested partner.
  2. Welcome a newcomer, ask about what they hack on and listen, introduce them to someone else, then leave them to it. When I do this, I always learn something.

(This post inspired by Alex, who is encouraging me to be more self-aware, by example).


17 Nov 2010

Not Buying Another HTC

Sent back my HTC Magic for warranty repair (wouldn’t charge) and they upgraded firmware; I can’t install cyanogenmod on it any more, making it fairly useless to me.  I spent two days of my holiday time trying and failing the goldcard lottery.

I just dropped my other HTC Magic and shattered the glass, so I’m in the market for a new Android phone via ebay.  Given my phones only last about 6-12 months, any advice for older cyanogenmod-friendly phones?  Or should I just give in and pay $500 for a Samsung Galaxy S?


11 Nov 2010

The Art of Community (TAoC)

Fuzzy topics annoy me, and hurt my brain as I struggle to quantify them.  People who talk on these topics grate on my nerves; it’s a kind of geek gag reflex.

So I really didn’t like Jono Bacon’s The Art of Community.  It was like a counseling session where all the faults you half-suspected you had get laid out in a massive TODO list with cogent reasoning which makes you squirm.  By the second chapter, I really wanted to knee Jono in the nuts.  My mind kept returning to CCAN, my half forgotten could-be-great-oneday project.  It got so distracting that by halfway through the fourth chapter I put it down and haven’t picked it up again.

But I will return.  And I will re-read it from the start, for one reason: it reshaped my thinking about building community projects, and that has led directly to the increase in activity around CCAN in the past months.  Jono has thought long and hard about this topic, and provides a framework onto which the rest of us can map our projects.  That alone is a serious intellectual feat; the clear, friendly prose is just a bonus.

Next time, I just hope I can tick enough of TAoC’s boxes that I don’t feel like I’ve doomed my pet project like a puppy chained to a submarine.

[Footnote: Jono's one of those annoying "doers".  I find his http://openrespect.org/ self-evident: I want to be respected while corrected, but I only recently figured that out and I suspect others might have missed it too.]


I spent some of Friday and yesterday working on my CCAN hashtable module (renamed to htable); I wrote a set of benchmarks and ran them until my laptop steamed and I had burnished the implementation into a sleek shining bundle of awesome.

Then it was time to run the same benchmark on Judy Arrays; code I had only become aware of via Yenga’s comment on my blog.  A few weeks back I wrote some friendly wrappers for libJudy, since I found the raw interface made my eyes bleed.  Fortunately, that proved quite simple, but it’s worth making a note for those who (like me three weeks ago) aren’t familiar with Judy Arrays.  The theory is that different datastructures should be used for different key layouts: high density or sparse, many or few, always concentrating on reducing cache impact.  The result is a serious amount of code for all the different cases: a risky, full-frontal assault on the problem rather than the normal cautious approach of using a single data structure then ignoring or special-casing a few corner cases.

Anyway, my initial benchmark was designed to push hashtables pretty hard, particularly as I used delete markers in my original implementation.  That tends to make performance suck after a great deal of churn, so I churn the hashtable in the middle of the benchmark.  It used consecutive keys, but I switched to spacing them out some more after that proved to be incredibly performant with Judy (which is ordered, unlike a hashtable).

I compared the JudyL implementation with my htable implementation; in fact I ran the latter at 75% full (dense), 37% full (sparse), and 37% full with bitstealing disabled (dumb).  75% is high for a typical hashtable, but my implementation seems quite happy operating at that point.  I was interested in mapping an int to an object: for the hash table the object contained the int key as well as a payload pointer (to self, for consistency checking).  For Judy the object simply contained a payload pointer; a Judy array itself contains the key.  In both cases I used 50,000,000 objects, and divided the total time by the number of objects to give nanoseconds per operation.

The Good: Ordered, Dense Keys

First, doing things Judy is good at: the initial insertion of keys 0 to 49,999,999 in order, looking them up in order, then looking up keys 50,000,000 to 99,999,999 in order.  Note that to be fair, you have to add about 150M and 300M respectively to the hashtable memory usage to get the peak usage, since the tables double-allocate-and-copy, then free the old table.

Measure              Judy   Dense hash   Sparse hash   Dumb hash
Memory               420M   640M         895M          895M
Initial insert (ns)  137    332          471           482
Linear hit (ns)      19     174          176           214
Linear miss (ns)     14     234          184           286

The main story here is that Judy in order is unbeatably cache-friendly. It’s nice to notice that bitstealing optimization pays off for hashtables as fewer buckets need to be dereferenced, and similarly with lower density, but Judy wins hands down here.

OK, now we look up random elements and access them: the access is important, since it’s more realistic, and the hash has to access the element to verify the key anyway.  Since our elements are allocated as one huge array, accessing them randomly (even via a hash table) costs us more.  Oh, and our “random” is actually “10007 apart”, which is harder on the caches than true random access would be, since the latter might occasionally be cache-hot.

Measure          Judy   Dense hash   Sparse hash   Dumb hash
Random hit (ns)  361    330          357           375

The hashtable doesn’t often cache-miss, and the denser hash seems to win more than it loses from having to look at more buckets; but with this kind of key density Judy is very competitive anyway.

Now we delete everything and reinsert them all (in order):

Measure        Judy   Dense hash   Sparse hash   Dumb hash
Delete (ns)    199    171          200           172
Reinsert (ns)  148    191          197           175

As my hashtable doesn’t shrink it gets an artificial benefit, yet Judy again stays very competitive. Our dumb hash does surprisingly well here, too; there’s no reason for it to be different from the “smart” sparse hash, but it was a repeatable effect. Perhaps the CPU is doing the noop bit manipulations (anding with all 1s, etc) faster?

The Bad: Unordered Sparse Keys

Now let’s look at churn. Fresh datastructures are easy, but over time we want to see what happens as elements are added and deleted. In my case, I iterate through the keys and delete each object then re-add it with the key increased by 50,000,000. With a hashtable using delete markers, this can reveal a weaknesses in the way they are garbage collected. Then I iterate one last time and space the keys out 9 values apart. The objects themselves are still dense, but the keys are far less so (ranging from 0 to 449,999,991: I chose 9 because I was originally playing with a few hundred million objects and didn’t want to overflow).

So with sparser keys and a worn-in datastructure, we do our three kinds of lookups again:

Measure          Judy   Dense hash   Sparse hash   Dumb hash
Memory           607M   640M         895M          895M
Linear hit (ns)  61     215          202           265
Linear miss (ns) 60     301          223           278
Random hit (ns)  736    405          386           445

Note that Judy consumes a little more memory at this point, though it’s still ahead of the densest hash.

But finally we see a case where a good hashtable is clearly superior: random hits on a full range of keys. The hashtable has suffered a little (there are quite a few deleted records there after all the churn), but 10-20% penalty for churn is not as bad as Judy’s 100% penalty for sparser keys.

Last of all, we delete every second object and then re-add with modified keys:

Measure          Judy   Dense hash   Sparse hash   Dumb hash
Delete half (ns) 266    120          103           103
Add half (ns)    205    200          103           105

Judy is still suffering from those sparse keys, though add is competitive with the dense hash (made even denser by those deleted elements).

Summary

Judy allows ordered access and lets you keep the key external, rather than putting it into your structure.  If those matter, the choice is clear.  Other times you don’t want a latency spike as the hash table grows and rehashes, or you’re concerned about malicious insertion patterns bombing a hash chain; that again favors Judy arrays.

If you have adjacent access patterns, Judy will win. If you have dense keys, Judy will be competitive. If you have random access and sparse keys, Judy lookups and deletes could be twice as slow as an optimized hash table, but using 20% to 100% less memory.

And one final note on random access: if it’s not quite random, such as if every second lookup is near the previous one, time drops from 736 to 410 nanoseconds, back competitive with the hashtable result. It’s quite impressive.

[Here are the URLs: the Judy arrays homepage (but your distro probably has a package, Ubuntu does), the ccan wrapper module, and the ccan htable module.]

