Rusty Russell's Coding Blog | Stealing From Smart People


28 Feb 2012

A Plea For Help: Charity

[Photo: en-haired fiancée Alex with my daughter Arabella]

I’m getting married in just over five weeks!

My fiancée is raising money for charity; if we raise $50,000 by the big day, she will shave her head at the wedding.  Alexandra has had long hair all her life: she’s terrified but determined, so I’m determined to help.

We’re already asking for donations in lieu of wedding presents, but if you’ve ever wanted to buy me a beer for ipchains, iptables, netfilter, module-init-tools, lguest, CCAN, Rusty’s Unreliable Guides, CALU, or any other reason, I’ll take a $100/$20/$5 donation here instead :)

(Compulsory Facebook page here).


As co-editor of the last edition of the Filesystem Hierarchy Standard before it merged into the Linux Standard Base, I’ve been following the discussion about merging the directories /bin, /sbin and /lib into /usr/bin, /usr/sbin and /usr/lib respectively.  You can follow it too, via the LWN discussion.

To summarize, there are two sides to the debate.  The “pro” side points out:

  1. Nothing will really change for users, as symlinks will make old stuff still work.
  2. There are precedents in Solaris and Fedora.
  3. The weak reasoning previously used to separate / and /usr no longer applies.
  4. Separate /usr has become increasingly unsupported anyway.
  5. Moving to /usr will enable genuine R/O root filesystem sharing.

The “anti” side, however, raises very salient points:

  1. Lennart Poettering supports it.
  2. Lennart Poettering is an asshole.

Fellow Anti-mergers, I understand the pain and anguish that systemd has caused you personally, and your families.  Your hopes and dreams crushed, by someone with all the charm of a cheese grater across the knuckles.  Your remaining life tainted by this putrescent subhuman who forced himself upon your internet.

Despite the privation we have all endured, please find strength to stop this nightmarish ravaging of our once-pure filesystems.  For if he’s not stopped now, what hope for /usr/sbin vs /usr/bin?


21 Nov 2011

The Power of Undefined Values

Tools shape the way we work, because they change where we perceive risk when we write code.  If common compilers warn about something, I’ll code in a way that triggers the warning when I make a mistake.  For example, instead of:

    int err = -EINVAL;
    if (something())
        goto out;
    err = -ENOSPC;
    if (something_else())
        goto cleanup_something;
    ...
    cleanup_something:
        undo_something();
    out:
        return err;

I would now set err in every branch:

    int err;
    if (something()) {
        err = -EINVAL;
        goto out;
    }
    if (something_else()) {
        err = -ENOSPC;
        goto out;
    }

Because if I later add another failure branch and forget to set err, gcc will warn me that it may be used uninitialized.  This bit me once, and the problem is hard to spot when you’re reviewing only a patch, not the code as a whole.

These days, we have valgrind, and despite its fame as a use-after-free debugger, it really shines at telling you when you rely on the results of an uninitialized field.  So, I’ve adapted to lean on it.  I explicitly don’t initialize structure members I don’t use in a certain path.  I avoid calloc(): while 0 is often less harmful than any other value, I’d much rather know that I’ve thought about and set up every field I actually use.  When changing code this is particularly important, and I spend a lot of my time changing code.  I have even changed to doing malloc() in some cases where I previously used on-stack or file-scope variables.  Valgrind doesn’t track on-stack usage very well, and static variables are defined to be zeroed, so valgrind can’t tell when I wander into the weeds.  I think these days, that’s a misfeature.
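To make that concrete, here’s a minimal sketch of the style (the struct and field names are invented for illustration): allocate with malloc() rather than calloc(), set only the fields this path uses, and let valgrind complain if anything else gets read.

    #include <stdio.h>
    #include <stdlib.h>

    struct config {
        int verbose;     /* used on every path */
        int max_retries; /* only meaningful on the network path */
    };

    int main(void)
    {
        /* malloc, not calloc: unused fields stay undefined. */
        struct config *cfg = malloc(sizeof(*cfg));
        if (!cfg)
            exit(1);
        cfg->verbose = 1;
        /* max_retries deliberately left unset: this path never uses it. */

        if (cfg->verbose)
            printf("verbose mode\n");

        /* If a later change sneaks in a read like this, valgrind reports
         * "Conditional jump or move depends on uninitialised value(s)".
         * calloc() would have silently handed us 0 instead. */
        if (cfg->max_retries > 3)
            printf("giving up early\n");

        free(cfg);
        return 0;
    }

(For static or reused buffers, memcheck’s valgrind/memcheck.h also offers VALGRIND_MAKE_MEM_UNDEFINED() to mark memory undefined by hand, at the cost of a build-time dependency.)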

So, if I were designing a C-like language today, I’d bake in the concept of undefined values, knowing that the tools to leverage it are widely available.  10 years ago, I’d have said 0-by-default is safest, but times change.  I think Go chose wrong here, but it may not be as bad as C for other reasons.  I’d have to code in it for a few years to really tell.


22 Sep 2011

PLUG: Coding: let’s have fun!

The Perth Linux User Group are flying me across next month to speak, complete with on-couch accommodation! But since I can’t find an easily-linkable synopsis of my talk, here are the details:

It’s hard to describe the joy of coding if you haven’t experienced it, but this talk will try to capture some of it.  Free/Open Source lets us remove the cruft which forecloses on the joy of coding: seize this chance!

I’ll talk about some of my favourite projects over the last 15 years of Free Software coding: what I did, how much fun it was, and some surprising results which came from it.  I’ll also discuss some hard lessons learned about joyless coding, so you can avoid it.  There will also be a sneak peek from my upcoming linux.conf.au talk.

There’ll be some awesome code to delight us.  And if you’re not a programmer, we’ll take you on our journey and show you the moments of brilliance which keep us coding.


Someone mentioned that you had to look at the source code if you wanted to hack your badge this year; I would have considered that cheating if I hadn’t known.  (It’s been a few years since I last hacked my badge).  But it helps if you look in the right place: http://lca2012.blogspot.com/2011/09/feeling-silly.html

Thanks to Tony Breeds for pointing me at that after I’d given up with the github upstream source…


So, ccanlint has accreted into a vital tool for me when writing standalone bits of code; it does various sanity checks (licensing, documentation, dependencies) and then runs the tests, offers to run the first failure under gdb, etc.   With the TDB2 work, I just folded in the whole TDB1 code and hence its testsuite, which made it blow out from 46 to 71 tests.  At this point, ccanlint takes over ten minutes!

This is for two reasons: firstly because ccanlint runs everything serially, and secondly because ccanlint runs each test four times: once to see if it passes, once to get coverage, once under valgrind, and once with all the platform features it tests turned off (eg. HAVE_MMAP).  I balked at running the reduced-feature variant under valgrind, though ideally I’d do that too.

Before going parallel, I thought I should cut down the compile/run cycles.  A bit of measurement gives some interesting results (on the initial TDB2 with 46 tests):

  1. Compiling the tests takes 24 seconds.
  2. Running the tests takes 12 seconds.
  3. Compiling the tests with coverage support takes 32 seconds.
  4. Running the tests with coverage support takes 32 seconds.
  5. Running the tests under valgrind takes 204 seconds (a 17x slowdown).
  6. Running the tests with coverage under valgrind takes 326 seconds.

It’s no surprise that valgrind is the slowest step, but I was surprised that compiling is slower than running the tests.  This is because CCAN “run” tests actually #include the entire module source so they can do invasive testing.
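For those who haven’t seen one, here’s a sketch of what such a run test looks like (the module name foo and its functions are invented).  The #include of the .c file is what lets the test reach the module’s internals, and also what makes every test a full recompile of the module:

    /* test/run.c: include the implementation, not just the header, so
     * static functions and internal state are visible to the test. */
    #include <ccan/foo/foo.c>
    #include <ccan/tap/tap.h>

    int main(void)
    {
        plan_tests(2);
        ok1(foo_count(NULL) == 0);      /* public API */
        ok1(foo_hash_internal(0) != 0); /* static helper, reachable only
                                         * because we #included foo.c */
        return exit_status();
    }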

So the simple approach of compiling up once, with -fprofile-arcs -ftest-coverage, and running that under valgrind to get everything in one go is much slower (from 325 up to 407 seconds!).  The only win is to skip running the tests without valgrind, shaving 11 seconds off (about 2%).

One easy thing to do would be to compile the tests with optimization to speed them up.  The valgrind documentation (and my testing) confirms that “-O” doesn’t affect the results on any CCAN module, so that should make the runs faster for very little effort.  Yet when I actually measured, total test time increased from 407 seconds to 495, because compiling with optimization is so slow.  Here are the numbers:

  1. Compiling the tests with optimization (-O/-O2/-O3) takes 54/77/130 seconds.
  2. Running the tests with optimization takes 11/11/11 seconds.
  3. Running the tests under valgrind with optimization takes 201/208/208 seconds.

So no joy there. Time to go and fix up my tests to run faster, and make ccanlint run (and compile!) them in parallel…
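The parallel part, at least, is conceptually simple.  Here’s a minimal sketch (not ccanlint’s actual code) which forks one child per pre-built test binary and reaps them all; a real version would cap concurrency at the number of cores and capture each child’s output:

    /* parrun.c: run pre-built test binaries concurrently.
     * Usage: ./parrun test/run-01 test/run-02 ... */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        int failures = 0;

        /* Fork one child per test; they all run at once. */
        for (int i = 1; i < argc; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                exit(1);
            }
            if (pid == 0) {
                execl(argv[i], argv[i], (char *)NULL);
                _exit(127); /* exec failed */
            }
        }

        /* Reap every child; abnormal or non-zero exit is a failure. */
        for (int i = 1; i < argc; i++) {
            int status;
            if (wait(&status) < 0)
                break;
            if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
                failures++;
        }
        printf("%d test(s) failed\n", failures);
        return failures ? 1 : 0;
    }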


So, Alex scoured through wedding photographers, we chose one, met them, got the contract… and it stipulates that they own the copyright, and will license the images to us “for personal use”.  So you pay over $3,000 and don’t own the images at the end (without a contract, you would).  That means no Wikipedia of course, but also no Facebook; they’re definitely a commercial organization.  No blogs with ads.  In the unlikely event that Alex or I change careers and want to use a shot for promotional materials, and the photographer has died, gone out of business or moved overseas, we’re out of luck even if we’re prepared to pay for it.

The usual answer (as always with copyright) is to ignore it and lie when asked.  But despite my resolution a few years ago to care less about copyright, this sticks in my craw.  So I asked: it’s another $1,000 for me to own the copyright.  I then started emailing other photographers, and that seems about standard.  But why?  Ignoring the obvious price-differentiation for professional vs amateur clients, photographers are in a similar bind to me: they want to use the images for promotion, say, in a collage in a wedding magazine.  And presumably, the magazine insists they own the copyright.  Since the photographers I emailed had varying levels of understanding of copyright, I can totally understand that simplification.

Fortunately, brighter minds than I have created a solution for this already: Creative Commons licensing.  On recommendation of one of Alex’s friends, we found a photographer who agreed to license the images to us under Creative Commons Attribution without additional charge; in fact, he was delighted to find out about CC, since the clear deeds make it easier for him to explain to his clients what rights they have.  All win!


21 Jul 2011

License Boilerplates

CCAN is supposed to be about the code, so I’ve avoided the standard GPL boilerplate comment at the top of each source file.  I reluctantly include a symlink to the full license text in each directory now, since lawyers approached me to clarify the single “License:” line in _info.  A useful discussion on the samba-technical mailing list has reinforced my view that it’s marginal clutter, but most CCAN modules now have a one-line courtesy comment such as “/* Licensed under LGPLv2.1+ – see LICENSE file for details */” at the top of each .c and .h file.
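For context, that “License:” line lives in the documentation comment of each module’s _info file.  Here’s a sketch of the convention (module name and dependency invented):

    /**
     * foo - a hypothetical example module
     *
     * A sentence or two describing the module goes here; ccanlint and
     * the web tools parse this comment, including the line below.
     *
     * License: LGPL (v2.1 or any later version)
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        /* _info is also an executable: "depends" prints prerequisites. */
        if (argc == 2 && strcmp(argv[1], "depends") == 0) {
            printf("ccan/typesafe_cb\n");
            return 0;
        }
        return 1;
    }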

Please make a conscious choice here: if license enforcement is a high priority for your project you probably want copyright assignments, license boilerplates and click-through agreements for everyone who downloads your source code.  But if you’re spending significant time or effort on legal issues for your little coding project, you’re probably doing it wrong…

(ccanlint now scans for common license boilerplates, as well as those comments; this means we can also detect use of incompatible licenses inside modules, or in dependent modules.  The former test noticed that I’d labelled the md4 module as LGPL, yet it’s actually GPL.  The latter spotted that ccan/likely (LGPL) depends on ccan/htable (GPL): legal, since the combination is then effectively GPL, but misleading, as Michael Adam noted.  Automating this stuff is a clear win for a project like CCAN.  I also re-licensed a bunch of useful-but-trivial modules from LGPL to public domain, as I want the BSD modules to be able to use them.)


15 Jul 2011

Bitcoin for something useful.

I wanted a scalable version of this poster, so I offered 1 BTC on the bitcoin forum, and someone produced a version I can print and hang on the wall of my home office.

Here is the SVG.  Enjoy!


I like bitcoins.  A simple open source client, a well-run developer community, clever algorithms, a decentralized assurance model, and of course near-zero transaction fees.  For all the economic arguments (some of which sound like early anti-Wikipedia arguments, though I hesitate to argue by analogy), when I first used it to tip a website, I fell in love.  It took me about 3 minutes to transfer $50 from my bank account to a stranger’s via my bank’s website, with two days’ latency.  It took me about 5 seconds to send 0.1 BTC to a stranger, with about 10 minutes’ latency.  It was a revelation.

But, while bitcoins rock, volatility doesn’t.  It’s currently a bit hard to get bitcoins, and speculation has sent the price rocketing.  It’s no longer just a cute FOSS project; it’s becoming a real market, and surely those piling in now are susceptible to hacking, scams and the inevitable hiccups that go with any project ramping up.  It’s all going to come crashing down to earth.

Good.  My hope is that after the GBC (the Great Bitcoin Crash) the speculators will move on.  Volatility will settle.  The boring work of accreting trust in this new tool will continue.  I fervently hope that we will appreciate its true utility once the dust has settled and the “Man Loses A Million Dollars and House in Virtual Currency” headlines have faded.

[Yes, you can tip me in bitcoins!  I think it's a good habit, fun to do, and better than ads as I get a warm fuzzy feeling of appreciation.  I'll be using any tips I get to tip other sites which take BTC].
