1 Week to Go, and Rusty Goes Offline

Just as the Linux kernel merge window closes, I’m going offline.  My wedding is exactly a week away, and until then I’ll be entertaining guests and doing final preparations.  I’ll be back from our honeymoon and wading through mail on 7 May.

Alex’s “A Bald Target” campaign to raise awareness for TimeForKids has been a huge success, even though we’re currently far short of the hair-shaving goal.  She’s been on one of the local radio stations, with newspaper coverage expected this weekend; two local TV stations want to cover the actual shave if it happens.  The charity is delighted with the amount of publicity they have received; given that they need local people to volunteer to mentor the disadvantaged children, that’s worth at least as much as the money.

Special thanks to the couple of people who donated directly to the charity, to avoid causing baldness!  And yes, if we were starting again, having competing “shave” vs “save” campaigns would be awesome…

Sources of Randomness for Userspace

I’ve been thinking about a new CCAN module for getting a random seed.  Clearly, /dev/urandom is your friend here: on Ubuntu and other distributions it’s saved and restored across reboots, but traditionally server systems have lacked sources of entropy, so it’s worth thinking about other sources of randomness.  Assume for a moment that we mix them well, so any non-randomness is irrelevant.

There are three obvious classes of randomness: things about the particular machine we’re on, things about the particular boot of the machine we’re on, and things which will vary every time we ask.

The Machine We’re On

Of course, much of this is guessable if someone has physical access to the box or knows something about the vendor or the owner, but it might be worth seeding this into /dev/urandom at install time.

On Linux, we can look in /proc/cpuinfo for some sources of machine info: for the 13 x86 machines my friends on IRC had in easy reach, we get three distinct values for cpu cores, three for siblings, two for cpu family, eight for model, six for cache size, and twelve for cpu MHz.  These values are obviously somewhat correlated, but it’s a fair guess that we can get 8 bits here.

Ethernet addresses are unique, so I think it’s fair to say there’s at least another 8 bits of entropy there, though often devices have consecutive numbers if they’re from the same vendor, so this doesn’t just multiply by number of NICs.

The amount of RAM in the machine is worth another two bits, and other kinds of devices (eg. found by trawling /sys/devices) can be expected to give another few bits, even in machines which have fairly standard hardware configurations, like laptops.  Alternatively, we could get this information indirectly by looking at /proc/modules.

Installed software gives a maximum of three bits, since we can assume a recent version of a mainstream distribution.  Package listings are also fairly standard, but most people install some extra things, so we might assume a few more bits here.  Ubuntu systems ask for your name to base the system name on, so there might be a few bits there (though my laptop is predictably “rusty-x201”).

So, let’s have a guess at 8 + 7 + 2 + 3 + 3 + 2 + 2, ie. 27 bits from the machine configuration itself.
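As a sketch of how such a seed might be gathered (the function names here are my own illustration, not an actual CCAN API), we can simply fold whatever machine-specific files are readable into one value.  Since we assumed good mixing, any non-random content is harmless; FNV-1a is used here purely as an example mixer:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch, not a real CCAN module: fold the contents of
 * machine-specific files into a 64-bit value with FNV-1a. */
static uint64_t mix_bytes(uint64_t h, const void *data, size_t len)
{
    const unsigned char *p = data;
    while (len--) {
        h ^= *p++;
        h *= 0x100000001b3ULL;  /* FNV-1a 64-bit prime */
    }
    return h;
}

static uint64_t mix_file(uint64_t h, const char *path)
{
    char buf[4096];
    size_t n;
    FILE *f = fopen(path, "r");

    if (!f)
        return h;  /* a missing source simply contributes no bits */
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        h = mix_bytes(h, buf, n);
    fclose(f);
    return h;
}
```

Seeding is then a chain of calls like h = mix_file(h, "/proc/cpuinfo") and h = mix_file(h, "/proc/modules"), with the final value written into /dev/urandom at install time.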

Information About This Boot

I created an upstart script to reboot (and had to hack grub.conf so it wouldn’t set the timeout to -1 for the next boot), and let it loop for a day: just under 2000 reboots in all.  I eyeballed graphs of each stat I gathered against the others, and there didn’t seem to be any surprising correlations.  /proc/uptime gives a fairly uniform spread of values within a 1-second range, so there are at least 6 bits there (every few dozen boots we get an fsck, which gives a different range of values, but the same amount of noise).  /proc/loadavg is pretty constant, unfortunately.  bogomips on CPU1 was fairly constant, but for the boot CPU it looks like a normal distribution within 1 bogomip, in increments of 0.01: say another 7 bits there.

So for each boot we can extract 13 bits from uptime and /proc/cpuinfo.
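A rough sketch of grabbing those uptime bits (assuming the usual “seconds.centiseconds” format of /proc/uptime; the function name is made up):

```c
#include <stdio.h>

/* Sketch: the sub-second part of /proc/uptime is where the boot-time
 * noise lives; the whole seconds are largely predictable. */
static unsigned long uptime_centisecs(void)
{
    FILE *f = fopen("/proc/uptime", "r");
    unsigned long secs = 0, centisecs = 0;

    if (f) {
        if (fscanf(f, "%lu.%lu", &secs, &centisecs) != 2)
            centisecs = 0;
        fclose(f);
    }
    return centisecs;  /* 0-99: a bit over 6 bits of range */
}
```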

Things Which Change Every Time We Run

The pid of our process will change every time we’re run, even when started at boot.  My pid was fairly evenly distributed over the values between 1220 and 1260, so there are five bits there.  Unfortunately, on both 64 and 32-bit Ubuntu, pids are restricted to 32768 by default.

We can get several more bits from simply timing the other randomness operations.  Modern machines have so much going on that you can probably count on four or five bits of unpredictability over the time you gather these stats.

So another 9 bits every time our process runs, even if it’s run from a boot script or cron.


We can get about 50 bits of randomness without really trying too hard, which is fine for a random server on the internet facing a remote attacker without any inside knowledge, but only about five of these bits (from the process’ own timing) would be unknown to an attacker who has access to the box itself.  So /dev/urandom is still very useful.

On a related note, Paul McKenney pointed me to a paper (abstract, presentation, paper) indicating that even disabling interrupts and running a few instructions gives an unpredictable value in the TSC, and inserting a usleep can make quite a good random number generator.  So if you have access to a high-speed, high-precision timing method, this may itself be sufficient.

Oh, BTW, I Am Engaged!

A few of my friends saw the LWN coverage of http://baldalex.org, and sent me a note of congratulations.  This reveals how incredibly slack I am in maintaining connections with my disparate and distributed friends.

Alex and Rusty (2009)

So: I met a wonderful lady, we fell in love, and I proposed on 8th April last year, at Mt Lofty Gardens overlooking the scenery.  She was speechless, delighted, and said yes!  In related news, one year later we are set to marry: 5th April 2012, at 3pm at the McLaren Vale Visitor’s Centre.  All welcome!  (Yes, that’s Easter Thursday).

So I’ll be offline for April, except briefly to post pictures if we meet the target!

A Plea For Help: Charity

En-haired Fiancée Alex with my daughter Arabella

I’m getting married in just over five weeks!

My fiancée is raising money for charity; if we raise $50,000 by the big day, she will shave her head at the wedding.  Alexandra has had long hair all her life: she’s terrified but determined, so I’m determined to help.

We’re already asking for donations in lieu of wedding presents, but if you’ve ever wanted to buy me a beer for ipchains, iptables, netfilter, module-init-tools, lguest, CCAN, Rusty’s Unreliable Guides, CALU, or any other reason, I’ll take a $100/$20/$5 donation here instead :)

(Compulsory Facebook page here).

Why Everyone Must Oppose The Merging of /usr and /

As co-editor of the last edition of the File Hierarchy Standard before it merged into the Linux Standard Base, I’ve been following the discussion about combining the directories  /bin, /sbin and /lib into /usr/bin, /usr/sbin and /usr/lib respectively.  You can follow it too, via the LWN discussion.

To summarize, there are two sides to the debate.  The “pro” side points out:

  1. Nothing will really change for users, as symlinks will make old stuff still work.
  2. There are precedents in Solaris and Fedora.
  3. The weak reasoning previously used to separate / and /usr no longer applies.
  4. Separate /usr has become increasingly unsupported anyway.
  5. Moving to /usr will enable genuine R/O root filesystem sharing.

The “anti” side, however, raises very salient points:

  1. Lennart Poettering supports it.
  2. Lennart Poettering is an asshole.

Fellow Anti-mergers, I understand the pain and anguish that systemd has caused you personally, and your families.  Your hopes and dreams crushed, by someone with all the charm of a cheese grater across the knuckles.  Your remaining life tainted by this putrescent subhuman who forced himself upon your internet.

Despite the privation we have all endured, please find strength to stop this nightmarish ravaging of our once-pure filesystems.  For if he’s not stopped now, what hope for  /usr/sbin vs /usr/bin?

The Power of Undefined Values

Tools shape the way we work, because they change where we perceive risk when we write code.  If common compilers warn about something, I’ll code in a way that will trigger the warning in case of mistakes.  e.g. instead of:

    int err = -EINVAL;
    if (something())
         goto out;
    err = -ENOSPC;
    if (something_else())
         goto cleanup_something;
    return err;

I would now set err in every branch:

    int err;
    if (something()) {
        err = -EINVAL;
        goto out;
    }
    if (something_else()) {
        err = -ENOSPC;
        goto out;
    }
Because when I add another clause to the initialization and forget to set err, gcc will warn me about it being uninitialized.  This bit me once, and it can be hard to spot the problem when you’re only reviewing a patch, not the code as a whole.

These days, we have valgrind, and despite its fame as a use-after-free debugger, it really shines at telling you when you rely on the results of an uninitialized field.  So, I’ve adapted to lean on it.  I explicitly don’t initialize structure members I don’t use in a certain path.  I avoid calloc(): while 0 is often less harmful than any other value, I’d much rather know that I’ve thought about and set up every field I actually use.  When changing code this is particularly important, and I spend a lot of my time changing code.  I have even changed to doing malloc() in some cases where I previously used on-stack or file-scope variables.  Valgrind doesn’t track on-stack usage very well, and static variables are defined to be zeroed, so valgrind can’t tell when I wander into the weeds.  I think these days, that’s a misfeature.
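As a sketch of that habit (the struct and its field names are invented for illustration):

```c
#include <stdlib.h>

/* Illustrative sketch: malloc() rather than calloc(), and only the
 * fields this code path uses get initialized.  If a later change reads
 * c->timeout without setting it, valgrind's memcheck reports
 * "Conditional jump or move depends on uninitialised value(s)" instead
 * of the code silently relying on zero. */
struct conn {
    int fd;
    int flags;
    int timeout;
};

static struct conn *new_conn(int fd)
{
    struct conn *c = malloc(sizeof(*c));

    if (!c)
        return NULL;
    c->fd = fd;
    c->flags = 0;
    /* c->timeout deliberately left uninitialized on this path. */
    return c;
}
```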

So, if I were designing a C-like language today, I’d bake in the concept of undefined values, knowing that the tools to leverage it are widely available.  10 years ago, I’d have said 0-by-default is safest, but times change.  I think Go chose wrong here, but it may not be as bad as C for other reasons.  I’d have to code in it for a few years to really tell.

PLUG: Coding: let’s have fun!

The Perth Linux User Group are flying me across next month to speak, complete with on-couch accommodation! But since I can’t find an easily-linkable synopsis of my talk, here are the details:

It’s hard to describe the joy of coding if you haven’t experienced it, but this talk will try to capture some of it.  Free/Open Source lets us remove the cruft which forecloses on the joy of coding: seize this chance!

I’ll talk about some of my favourite projects over the last 15 years of Free Software coding: what I did, how much fun it was, and some surprising results which came from it.  I’ll also discuss some hard lessons learned about joyless coding, so you can avoid it.  There will also be a sneak peek from my upcoming linux.conf.au talk.

There’ll be some awesome code to delight us.  And if you’re not a programmer, we’ll take you on our journey and show you the moments of brilliance which keep us coding.

linux.conf.au: Hacking your badge for lca2012

Someone mentioned that you had to look at the source code if you wanted to hack your badge this year; I would have considered that cheating if I hadn’t known.  (It’s been a few years since I last hacked my badge).  But it helps if you look in the right place: http://lca2012.blogspot.com/2011/09/feeling-silly.html

Thanks to Tony Breeds for pointing me at that after I’d given up with the github upstream source…

Speeding CCAN Testing (By Not Optimizing)

So, ccanlint has accreted into a vital tool for me when writing standalone bits of code; it does various sanity checks (licensing, documentation, dependencies) and then runs the tests, offers to run the first failure under gdb, etc.   With the TDB2 work, I just folded in the whole TDB1 code and hence its testsuite, which made it blow out from 46 to 71 tests.  At this point, ccanlint takes over ten minutes!

This is for two reasons: firstly because ccanlint runs everything serially, and secondly because ccanlint runs each test four times: once to see if it passes, once to get coverage, once under valgrind, and once with all the platform features it tests turned off (eg. HAVE_MMAP).  I balked at running the reduced-feature variant under valgrind, though ideally I’d do that too.

Before going parallel, I thought I should cut down the compile/run cycles.  A bit of measurement gives some interesting results (on the initial TDB2 with 46 tests):

  1. Compiling the tests takes 24 seconds.
  2. Running the tests takes 12 seconds.
  3. Compiling the tests with coverage support takes 32 seconds.
  4. Running the tests with coverage support takes 32 seconds.
  5. Running the tests under valgrind takes 204 seconds (a 17x slowdown).
  6. Running the tests with coverage under valgrind takes 326 seconds.

It’s no surprise that valgrind is the slowest step, but I was surprised that compiling is slower than running the tests.  This is because CCAN “run” tests actually #include the entire module source so they can do invasive testing.

So the simple approach of compiling once, with -fprofile-arcs -ftest-coverage, and running that under valgrind to get everything in one go is much slower (from 326 up to 407 seconds!).  The only win is to skip running the tests without valgrind, shaving 11 seconds off (about 2%).

One easy thing to do would be to compile with optimization to speed up the tests.  The valgrind documentation (and my testing) confirms that using “-O” doesn’t affect the results on any CCAN module, so that should make them run faster, for very little effort.  When I actually measured, total test time increased from 407 seconds to 495, because compiling with optimization is so slow.  Here are the numbers:

  1. Compiling the tests with optimization (-O/-O2/-O3) takes 54/77/130 seconds.
  2. Running the tests with optimization takes 11/11/11 seconds.
  3. Running the tests under valgrind with optimization takes 201/208/208 seconds.

So no joy there. Time to go and fix up my tests to run faster, and make ccanlint run (and compile!) them in parallel…

Professional Photographers and Licensing: Copyright Sucks

So, Alex scoured through wedding photographers, we chose one, met them, got the contract… and it stipulates that they own the copyright, and will license the images to us “for personal use”.  So you pay over $3,000 and don’t own the images at the end (without a contract, you would).  That means no Wikipedia, of course, but also no Facebook, since Facebook is definitely a commercial organization.  No blogs with ads.  In the unlikely event that Alex or I change careers and want to use a shot for promotional materials, and the photographer has died, gone out of business or moved overseas, we’re out of luck even if we’re prepared to pay for it.

The usual answer (as always with copyright) is to ignore it and lie when asked.  But despite my resolution a few years ago to care less about copyright, this sticks in my craw.  So I asked: it’s another $1,000 for me to own the copyright.  I then started emailing other photographers, and that seems about standard.  But why?  Ignoring the obvious price-differentiation for professional vs amateur clients, photographers are in a similar bind to me: they want to use the images for promotion, say, in a collage in a wedding magazine.  And presumably, the magazine insists they own the copyright.  Since the photographers I emailed had varying levels of understanding of copyright, I can totally understand that simplification.

Fortunately, brighter minds than I have created a solution for this already: Creative Commons licensing.  On recommendation of one of Alex’s friends, we found a photographer who agreed to license the images to us under Creative Commons Attribution without additional charge; in fact, he was delighted to find out about CC, since the clear deeds make it easier for him to explain to his clients what rights they have.  All win!