Kernel Compilation Times

David S. Miller complains that CONFIG_MODULE_SIG slows down builds, and he does hundreds of allmodconfig builds every day.

This complaint falls apart fairly quickly in the comments; he knows he can simply turn it off, but what about the others he tells to test with allmodconfig?  One presumes they are not doing hundreds of kernel builds a day.

linux-next had the same issue, and a similar complaint.  I had less sympathy there: I suggested they might also want to turn off CONFIG_DEBUG_INFO if they were worried about compile speed, and found out Stephen already did.  Now they turn off CONFIG_MODULE_SIG, too.

Here are some compile times on my i386 laptop, using v3.7-rc1-1-g854e98a as I turn options off:

  • allmodconfig: 52 minutes
  • … without CONFIG_MODULE_SIG: 45 minutes
  • … without CONFIG_DEBUG_INFO: 40 minutes
  • … without CONFIG_MODVERSIONS and CONFIG_MODULE_SRCVERSION_ALL: 38 minutes
  • … without CONFIG_KALLSYMS: 37 minutes
  • … using -O1 instead of -Os: 24 minutes (not a good idea, since we lose CONFIG_DEBUG_STRICT_USER_COPY_CHECKS).

In summary, the real problem is that people don’t really want ‘allmodconfig’.  They want something which would compile a kernel with as much coverage as possible with no intention of actually booting it; say ‘allfastconfig’?

Latinoware 2012

I’m keynoting at http://latinoware.org in Brazil in two weeks (assuming I get my visa in time!).  Looking forward to my first trip to South America, as well as delivering a remix of some of my favourite general hacking talks.  And of course, catching up with maddog!

What Can I Do To Help?

Enthusiasm is a shockingly rare resource, anywhere.  The reason enthusiasm is a rare resource is because it’s fragile; I’ve seen potentially-great ideas abandoned because the initial response was a litany of reasons why they won’t work.  It’s not the criticism which kills, it’s the scorn.

So when someone emails or approaches you with something they’re excited about, please reply thinking “What can I do to help?”  Often I just provide an encouraging and thoughtful response: a perfectly acceptable minimum commitment.  If you offer pointers or advice, take extra care to fan that delicate flutter of enthusiasm without extinguishing it. Other forces will usually take care of that soon enough, but let it not be you.

FAQ: CCAN as a shared library?

This was asked of me again, by Adam Conrad of Canonical: “Why isn’t CCAN a shared library?”.  Distributions prefer shared libraries, for simplicity of updating (especially for security fixes), so I thought I’d answer canonically, once.

  • Most CCAN modules are small; many are just headers.
  • You can’t librify something which doesn’t have a stable API or ABI.
  • CCAN’s alternative is not a library, it’s cut-n-paste.

To illustrate what I mean, consider ccan/hash: it’s a convenient wrapper around the Jenkins lookup3 code.  It could be a library, but in practice it’s not.  For something that simple, people cut & paste the lookup3 code into their own source; it already exists in two places in Samba, for example.  It’s this level of code snippet which is served beautifully by CCAN: you drop it into a ccan/ dir in your project and you get nice, documented and tested code, with optional updates if you want them later.
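For the curious, here’s roughly what that drop-in usage looks like.  This is a sketch from memory, so check ccan/hash/hash.h for the exact interface before copying it:

    /* After copying ccan/hash/ into your tree, you just build it
     * with the rest of your project; no shared library involved. */
    #include <stdio.h>
    #include <string.h>
    #include <ccan/hash/hash.h>

    int main(void)
    {
        const char *name = "example";

        /* Mix a buffer down to 32 bits, seeded with a base value. */
        printf("%08x\n", hash(name, strlen(name), 0));
        return 0;
    }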

You could still make ccan/hash into a shared library.  But if the upstream doesn’t do it for you, you have to check the ABI and update the version number every time it changes.  This, unfortunately, means you can no longer share it: if library A uses version 11 and library B uses version 12, you can’t link against both library A and library B.  Yet there’s nothing wrong with either: you have to change them because you librified it.

This kind of pain isn’t worth it for small snippets of code, so people cut & paste instead, and that makes our code worse, not better.  That’s what CCAN tries to fix.

Now, there may one day be modules which could be shared libraries: that’s a good thing, if the maintainer is prepared to support the ABI and API.  I’m not going to kick a module out of CCAN for being too successful.  But I’d like to explicitly label such a module, and make sure ccanlint does the appropriate checks for ABI compatibility and symbol hiding.

1 Week to Go, and Rusty Goes Offline

Just as the Linux kernel merge window closes, I’m going offline.  My wedding is exactly a week away, and until then I’ll be entertaining guests and doing final preparations.  I’ll be back from our honeymoon, wading through mail, on 7 May.

Alex’s “A Bald Target” campaign to raise awareness for TimeForKids has been a huge success, even though we’re currently far short of the hair-shaving goal.  She’s been on one of the local radio stations, with newspaper coverage expected this weekend; two local TV stations want to cover the actual shave if it happens.  The charity is delighted with the amount of publicity they have received; given that they need local people to volunteer to mentor the disadvantaged children, that’s worth at least as much as the money.

Special thanks to a couple of people who donated direct to the charity, to avoid causing baldness!  And yes, if we were starting again, having competing “shave” vs “save” campaigns would have been awesome…

Sources of Randomness for Userspace

I’ve been thinking about a new CCAN module for getting a random seed.  Clearly, /dev/urandom is your friend here: on Ubuntu and other distributions it’s saved and restored across reboots, but traditionally server systems have lacked sources of entropy, so it’s worth thinking about other sources of randomness.  Assume for a moment that we mix them well, so any non-randomness is irrelevant.

There are three obvious classes of randomness: things about the particular machine we’re on, things about the particular boot of the machine we’re on, and things which will vary every time we ask.
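To make “mix them well” concrete, here’s a toy sketch of the kind of helper I have in mind.  The mix_bytes() below is just FNV-1a-style folding (a real module would use a proper hash), and mix_file() slurps pseudo-files like /proc/cpuinfo into the seed:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy mixer: fold bytes into a 64-bit seed (FNV-1a style). */
    static void mix_bytes(uint64_t *seed, const void *p, size_t len)
    {
        const unsigned char *c = p;
        while (len--) {
            *seed ^= *c++;
            *seed *= 1099511628211ULL;
        }
    }

    /* Mix in the contents of a (pseudo-)file, if it exists. */
    static void mix_file(uint64_t *seed, const char *path)
    {
        char buf[4096];
        size_t n;
        FILE *f = fopen(path, "r");

        if (!f)
            return;
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
            mix_bytes(seed, buf, n);
        fclose(f);
    }

The sketches in the following sections assume these two helpers.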

The Machine We’re On

Of course, much of this is guessable if someone has physical access to the box or knows something about the vendor or the owner, but it might be worth seeding this into /dev/urandom at install time.

On Linux, we can look in /proc/cpuinfo for some sources of machine info: for the 13 x86 machines my friends on IRC had in easy reach, we get three distinct values for cpu cores, three for siblings, two for cpu family, eight for model, six for cache size, and twelve for cpu MHz.  These values are obviously somewhat correlated, but it’s a fair guess that we can get 8 bits here.

Ethernet addresses are unique, so I think it’s fair to say there’s at least another 8 bits of entropy there, though devices from the same vendor often have consecutive numbers, so this doesn’t just multiply by the number of NICs.

The amount of RAM in the machine is worth another two bits, and the other kinds of devices (eg. trolling through /sys/devices) can be expected to give another few bits, even in machines with fairly standard hardware configurations like laptops.  Alternatively, we could get this information indirectly by looking at /proc/modules.

Installed software gives a maximum of three bits, since we can assume a recent version of a mainstream distribution.  Package listings are also fairly standard, but most people install some extra things, so we might assume a few more bits there.  Ubuntu systems ask for your name to base the system name on, so there might be a few bits in that too (though my laptop is predictably “rusty-x201”).

So, let’s have a guess at 8 + 7 + 2 + 3 + 3 + 2 + 2, ie. 27 bits from the machine configuration itself.
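Gathering all of that is mostly file-slurping.  Here’s a sketch using the mix_file() helper above; the paths are illustrative (eg. the MAC address path assumes an interface named eth0):

    static uint64_t machine_seed(void)
    {
        uint64_t seed = 0;

        mix_file(&seed, "/proc/cpuinfo");   /* CPU model, cache, MHz */
        mix_file(&seed, "/proc/meminfo");   /* amount of RAM */
        mix_file(&seed, "/proc/modules");   /* proxy for devices */
        mix_file(&seed, "/sys/class/net/eth0/address"); /* MAC */
        mix_file(&seed, "/etc/hostname");   /* often the owner's name */
        return seed;
    }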

Information About This Boot

I created an upstart script to reboot (and had to hack grub.conf so it wouldn’t set the timeout to -1 for the next boot), and let it loop for a day: just under 2000 reboots in all.  I eyeballed graphs of each stat I gathered against each other, and there didn’t seem to be any surprising correlations.  /proc/uptime gives a fairly uniform spread of uptime values within a 1-second range: at least 6 bits there (every few dozen boots we get an fsck, which gives a different range of values, but the same amount of noise).  /proc/loadavg is pretty constant, unfortunately.  bogomips on CPU1 was fairly constant, but for the boot CPU it looks like a normal distribution within 1 bogomip, in increments of 0.01: say another 7 bits there.

So for each boot we can extract 13 bits from uptime and /proc/cpuinfo.
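In code, the per-boot sources are the same kind of slurp, again using the toy mix_file() from above:

    static uint64_t boot_seed(void)
    {
        uint64_t seed = 0;

        mix_file(&seed, "/proc/uptime");  /* ~6 bits of sub-second noise */
        mix_file(&seed, "/proc/cpuinfo"); /* boot CPU bogomips: ~7 bits */
        return seed;
    }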

Things Which Change Every Time We Run

The pid of our process will change every time we’re run, even when started at boot.  My pid was fairly evenly distributed across the values between 1220 and 1260, so there’s five bits there.  Unfortunately, on both 64 and 32-bit Ubuntu, pids are restricted to 32768 by default, so even in the best case there’s a hard ceiling of 15 bits.

We can get several more bits from simply timing the other randomness operations.  Modern machines have so much going on that you can probably count on four or five bits of unpredictability over the time you gather these stats.

So another 9 bits every time our process runs, even if it’s run from a boot script or cron.
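A sketch of those per-run sources, reusing the toy mix_bytes() from above; the timing trick is just to read a high-resolution clock before and after gathering everything else:

    #include <time.h>
    #include <unistd.h>

    static uint64_t run_seed(void)
    {
        uint64_t seed = 0;
        struct timespec before, after;
        pid_t pid = getpid();

        clock_gettime(CLOCK_MONOTONIC, &before);
        mix_bytes(&seed, &pid, sizeof(pid));
        /* ... gather the machine and boot sources here ... */
        clock_gettime(CLOCK_MONOTONIC, &after);

        /* The low nanosecond bits of when we ran, and of how long
         * the gathering took, are hard to predict. */
        mix_bytes(&seed, &before.tv_nsec, sizeof(before.tv_nsec));
        mix_bytes(&seed, &after.tv_nsec, sizeof(after.tv_nsec));
        return seed;
    }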

Conclusion

We can get about 50 bits of randomness without really trying too hard, which is fine for a random server on the internet facing a remote attacker without any inside knowledge, but only about five of these bits (from the process’ own timing) would be unknown to an attacker who has access to the box itself.  So /dev/urandom is still very useful.

On a related note, Paul McKenney pointed me to a paper (abstract, presentation, paper) indicating that even disabling interrupts and running a few instructions gives an unpredictable value in the TSC, and inserting a usleep can make quite a good random number generator.  So if you have access to a high-speed, high-precision timing method, this may itself be sufficient.
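Here’s a quick sketch of that raw ingredient on x86, assuming the compiler’s __rdtsc() intrinsic (gcc and clang provide it via x86intrin.h).  The paper’s construction is more careful; this just shows the jitter:

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <x86intrin.h>

    int main(void)
    {
        int i;

        for (i = 0; i < 8; i++) {
            uint64_t t1 = __rdtsc();

            usleep(100);
            /* The low bits of the elapsed cycle count jitter
             * unpredictably from iteration to iteration. */
            printf("%u\n", (unsigned)(__rdtsc() - t1) & 0xff);
        }
        return 0;
    }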

Oh, BTW, I Am Engaged!

A few of my friends saw the LWN coverage of http://baldalex.org, and sent me a note of congratulations.  This reveals how incredibly slack I am in maintaining connections with my disparate and distributed friends.

Alex and Rusty (2009)

So: I met a wonderful lady, we fell in love, and I proposed on April the 8th last year, at Mt Lofty Gardens overlooking the scenery.  She was speechless, delighted, and said yes!  In related news, one year later we are set to marry: 5th April 2012, at 3pm at the McLaren Vale Visitor’s Centre.  All welcome!  (Yes, that’s Easter Thursday.)

So I’ll be offline for April, except briefly to post pictures if we meet the target!

A Plea For Help: Charity

En-haired Fiancée Alex with my daughter Arabella

I’m getting married in just over five weeks!

My fiancée is raising money for charity; if we raise $50,000 by the big day, she will shave her head at the wedding.  Alexandra has had long hair all her life: she’s terrified but determined, so I’m determined to help.

We’re already asking for donations in lieu of wedding presents, but if you’ve ever wanted to buy me a beer for ipchains, iptables, netfilter, module-init-tools, lguest, CCAN, Rusty’s Unreliable Guides, CALU, or any other reason, I’ll take a $100/$20/$5 donation here instead :)

(Compulsory Facebook page here).

Why Everyone Must Oppose The Merging of /usr and /

As co-editor of the last edition of the File Hierarchy Standard before it merged into the Linux Standard Base, I’ve been following the discussion about combining the directories /bin, /sbin and /lib into /usr/bin, /usr/sbin and /usr/lib respectively.  You can follow it too, via the LWN discussion.

To summarize, there are two sides to the debate.  The “pro” side points out:

  1. Nothing will really change for users, as symlinks will make old stuff still work.
  2. There are precedents in Solaris and Fedora.
  3. The weak reasons used previously to separate / and /usr no longer apply.
  4. Separate /usr has become increasingly unsupported anyway.
  5. Moving to /usr will enable genuine R/O root filesystem sharing.

The “anti” side, however, raises very salient points:

  1. Lennart Poettering supports it.
  2. Lennart Poettering is an asshole.

Fellow Anti-mergers, I understand the pain and anguish that systemd has caused you personally, and your families.  Your hopes and dreams crushed, by someone with all the charm of a cheese grater across the knuckles.  Your remaining life tainted by this putrescent subhuman who forced himself upon your internet.

Despite the privation we have all endured, please find strength to stop this nightmarish ravaging of our once-pure filesystems.  For if he’s not stopped now, what hope for /usr/sbin vs /usr/bin?

The Power of Undefined Values

Tools shape the way we work, because they change where we perceive risk when we write code.  If common compilers warn about something, I’ll code in a way that triggers the warning when I make a mistake.  eg. instead of:

    int err = -EINVAL;
    if (something())
        goto out;
    err = -ENOSPC;
    if (something_else())
        goto cleanup_something;
    ...
cleanup_something:
    undo_something();
out:
    return err;

I would now set err in every branch:

    int err;
    if (something()) {
        err = -EINVAL;
        goto out;
    }
    if (something_else()) {
        err = -ENOSPC;
        goto out;
    }

Because when I add another failure test and forget to set err, gcc will warn me about it possibly being used uninitialized.  This bit me once, and it can be hard to spot the problem when you’re only reviewing a patch, not the code as a whole.

These days, we have valgrind, and despite its fame as a use-after-free debugger, it really shines at telling you when you rely on the results of an uninitialized field.  So, I’ve adapted to lean on it.  I explicitly don’t initialize structure members I don’t use in a certain path.  I avoid calloc(): while 0 is often less harmful than any other value, I’d much rather know that I’ve thought about and set up every field I actually use.  When changing code this is particularly important, and I spend a lot of my time changing code.  I have even changed to doing malloc() in some cases where I previously used on-stack or file-scope variables.  Valgrind doesn’t track on-stack usage very well, and static variables are defined to be zeroed, so valgrind can’t tell when I wander into the weeds.  I think these days, that’s a misfeature.
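To make that concrete, here’s a minimal sketch of the discipline (the names are invented for illustration): only set the fields this path uses, and let valgrind catch anyone who reads the rest.

    #include <stdlib.h>

    struct conn {
        int fd;
        int retries;  /* only meaningful on the reconnect path */
    };

    static struct conn *new_conn(int fd)
    {
        /* Deliberately malloc, not calloc: if some path reads
         * ->retries without setting it first, valgrind reports an
         * uninitialized-value use instead of silently seeing 0. */
        struct conn *c = malloc(sizeof(*c));

        c->fd = fd;
        return c;
    }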

So, if I were designing a C-like language today, I’d bake in the concept of undefined values, knowing that the tools to leverage it are widely available.  10 years ago, I’d have said 0-by-default is safest, but times change.  I think Go chose wrong here, but it may not be as bad as C for other reasons.  I’d have to code in it for a few years to really tell.