Code review: libreplace

libreplace is the SAMBA library (also used in ctdb) to provide working implementations of various standard(ish) functions on platforms where they are missing or flawed.  It was initially created in 1996 by Andrew Tridgell based on various existing replacement hacks in utils.c (see commit 3ee9d454).

The basic format of replace.h is:

    #ifndef HAVE_STRDUP
    #define strdup rep_strdup
    char *rep_strdup(const char *s);
    #endif

If configure fails to identify a given function X, rep_X is used in its place. replace.h has some such declarations, but most have migrated to the system/ include directory, which loosely groups functions into categories such as dir.h, select.h, time.h, etc. This works around the “which header(s) do I include” problem as well as guaranteeing specific functions.

Other than reading this code for a sense of Unix-like paleontology (and it’s so hard to tell when to remove any of these helpers that cleanups are rare), we can group replacements into three categories:

  1. Helper functions or definitions which are missing, eg. strdup or S_IRWXU.
  2. “Works for me” hacks for platform limitations, which make things compile but are not general, and
  3. Outright extensions, such as #define ZERO_STRUCT(x) memset((char *)&(x), 0, sizeof(x)) or the Linux-kernel-inspired likely().

Since it’s autoconf-based, it uses the standard #ifdef instead of #if (a potential source of bugs, as I’ve mentioned before).  I’ll concentrate on the insufficiently-general issues which can bite users of the library, and a few random asides.

  • #ifndef HAVE_VOLATILE? I can’t believe Samba still compiles on a compiler that doesn’t support volatile (this just defines volatile away altogether). If it did no optimizations whatsoever, volatile might not matter, but I’m suspicious…
  • typedef int bool; is a fairly common workaround for the lack of bool, but since pointers implicitly convert to a real bool yet can get truncated when passed as an int, it’s a theoretical trap: (bool)0x1234567800000000 == true, but (int)0x1234567800000000 == 0.
  • #if !defined(HAVE_VOLATILE) is the same test as above, repeated. It’s still as bad an idea as it was 186 lines before :)
  • ZERO_STRUCT, ZERO_ARRAY and ARRAY_SIZE are fairly sane, but could use gcc extensions to check their args where available. I implemented this for ARRAY_SIZE in the Linux kernel and in CCAN (see the sketch after this list). Making sure an arg is a struct is harder, but we could figure something…
  • #define PATH_MAX 1024 assumes that systems which don’t define PATH_MAX probably have small path limits. If it’s too short though, it opens up buffer overruns. Similarly for NGROUPS_MAX and PASSWORD_LENGTH.
  • The dlopen replacement is cute: it uses shl_load where available (Google says HPUX), but dlerror simply looks like so:
        #ifndef HAVE_DLERROR
        char *rep_dlerror(void)
        {
         	return "dynamic loading of objects not supported on this platform";
        }
        #endif

    This cute message for runtime failure allows your code to compile, but isn’t helpful if dlopen was a requirement. Also, this should use strerror for shl_load.

  • havenone.h is (I assume) a useful header for testing all the replacements at once: it undefines all HAVE_ macros. Unfortunately it hasn’t been updated, and so it isn’t complete (unused code is buggy code).
  • inet_pton is credited to Paul Vixie, 1996. It’s written in K&R non-prototype style, returns an int instead of a bool, and doesn’t use strspn (instead: (pch = strchr(digits, ch)) != NULL) or (better) atoi. But it checks for exactly 4 octets and for numbers > 255, and carefully doesn’t write to dst unless it succeeds. I would have used sscanf(), which wouldn’t have caught too-long input like “1.2.3.4.5”. OTOH, it would catch “127…1”, which this would allow. But making input checks more strict is a bad way to be popular…
  • Tridge’s opendir/readdir/telldir/seekdir/closedir replacement in repdir_getdents.c is a replacement for broken telldir/seekdir in the presence of deletions, and a workaround for (older?) BSD’s performance issues. It is in fact never used, because the configure test has had #error _donot_use_getdents_replacement_anymore in it since at least 2006 when the Samba4 changes were merged back into a common library!
  • repdir_getdirents.c is the same thing, implemented in terms of getdirents rather than getdents; it’s still used if the telldir/delete/seekdir test fails.
  • replace.c shows some of the schizophrenia of approaches to replacement: rep_ftruncate #errors if there’s no chsize or F_FREESP ioctl which can be used instead, but rep_initgroups returns -1/ENOSYS in a similar case. Best would be not to provide a replacement at all when none can be implemented, so the compile will fail if it’s used.
  • rep_pread and rep_pwrite are classic cases of the limitations of replacement libraries like this. Since pread is not supposed to affect the file offset, and file offsets are shared with children and dup’d fds, there’s no sane general way to implement it; in fact tdb has to test for this in tdb_reopen_internal. I would implement a read_seek/write_seek which are documented not to have these guarantees (sketched after this list). I remember Tridge ranting about glibc doing the same kind of unsafe implementation of pselect :)
  • snprintf only rivals qsort-with-damn-priv-pointer for pain of “if only they’d done the original function right, I wouldn’t have to reimplement the entire thing”. I’ll avoid the glibc-extracted strptime as well.
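
Since I mentioned it above, here’s roughly what the gcc-extension check for ARRAY_SIZE looks like (the same idea as the kernel/CCAN versions, written from memory rather than copied, so treat it as a sketch): passing a pointer instead of an array breaks the build.

    /* Evaluates to 0, or fails to compile (negative array size) if arr
     * is actually a pointer rather than an array. */
    #ifdef __GNUC__
    #define _must_be_array(arr) \
            (sizeof(char[1 - 2 * __builtin_types_compatible_p(typeof(arr), \
                                                    typeof(&(arr)[0]))]) - 1)
    #else
    #define _must_be_array(arr) 0
    #endif

    #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + _must_be_array(arr))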

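And since I also mentioned read_seek/write_seek: here is a minimal sketch of what I mean (my naming; nothing like this exists in libreplace). It makes the offset change explicit rather than pretending to be pread:

    #include <sys/types.h>
    #include <unistd.h>

    /* Like pread(), but explicitly documented to move the file offset,
     * so there's no promise we can't keep on platforms without pread(). */
    static ssize_t read_seek(int fd, void *buf, size_t count, off_t offset)
    {
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1)
            return -1;
        return read(fd, buf, count);
    }
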
I’m not sure Samba compiles on as many platforms as it used to; Perl is probably a better place for this kind of library to have maximum obscure-platform testing. But if I were to put this in CCAN, this would make an excellent start.

Code Reviewing: popt

I decided that every day I would review some code in ctdb.  I never spend enough time reading code, and yet it’s the only way to really get to know a project.  And I decided to start easy: with the popt library in ctdb.

popt is the command-line parsing library included with the RPM source, and used by SAMBA.  I vaguely recall Tridge telling me years ago how good it was.  I started with the man page, and it is an excellent and useful library: it does enough to make the simple cases take less code than getopt_long, yet allows flexibility for complex cases.

The idea is simple: similar to getopt_long, you create an array describing the possible options.  Unlike getopt, however, simple integers and flags are handled automatically.  So you set up the context with the commandline, then hand that context to poptGetNextOpt() to do the parsing.  That keeps parsing args until it hits an error or you’ve marked an argument to return a value for special handling.  The manual page is excellent and made me feel really comfortable with using the code.
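
To give the flavour, here’s roughly what the simple case looks like. This is a from-memory sketch against the popt API rather than anything lifted from the man page, so treat the details with suspicion:

    #include <stdio.h>
    #include <popt.h>

    int main(int argc, const char *argv[])
    {
        int verbose = 0, count = 1;
        struct poptOption options[] = {
            /* longName, shortName, argInfo, arg, val, descrip, argDescrip */
            { "verbose", 'v', POPT_ARG_NONE, &verbose, 0, "be verbose", NULL },
            { "count", 'c', POPT_ARG_INT, &count, 0, "how many times", "N" },
            POPT_AUTOHELP
            POPT_TABLEEND
        };
        poptContext pc = poptGetContext("example", argc, argv, options, 0);
        int rc;

        /* Options with val == 0 are filled in for us; we only see errors,
         * or options we gave a non-zero val for special handling. */
        while ((rc = poptGetNextOpt(pc)) > 0)
            ;
        if (rc < -1) {
            fprintf(stderr, "%s: %s\n",
                    poptBadOption(pc, POPT_BADOPTION_NOALIAS), poptStrerror(rc));
            poptFreeContext(pc);
            return 1;
        }

        printf("verbose=%d count=%d\n", verbose, count);
        poptFreeContext(pc);
        return 0;
    }

The win over getopt_long is that the integer and flag arguments above never need any explicit handling code.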

Now, the code itself.  Even before you notice the four-space tabs and bumpyCaps, you see the lint annotations and docbook-style comments cluttering the source.  They make the code annoying to read, breaking CCAN’s Golden Rule.  Typedefs of structs, typedefs to pointers, and several #ifdef __cplusplus blocks complete my minor gripes.

The code is old and cross-platform: the headers test for features we’d simply assume on a modern Linux.  It has a simple set of bitmap macros (taken from pbm, judging from the name) and a helper routine to find an executable in $PATH (using alloca!).  These are the kinds of things which would be in separate modules, were this in CCAN.

The definition of _free() as a (redundantly-NULL-taking) free() which always returns NULL is used to good effect throughout the code:

    defs = _free(defs);
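
The definition itself is roughly this (my reconstruction from the description; popt’s own version may differ in detail):

    #include <stdlib.h>

    /* Free p (NULL or not) and return NULL, so the caller can clear
     * the pointer in the same expression. */
    static void *_free(const void *p)
    {
        if (p != NULL)
            free((void *)p);
        return NULL;
    }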

Here is a trick I hadn’t seen before to zero an on-stack var, and one I really don’t recommend:

    poptDone done = memset(alloca(sizeof(*done)), 0, sizeof(*done));

The help-printing code is in a separate file, popthelp.c.  It’s actually about 1/3 of the code!  That’s mainly because it takes pains to pretty-print, and it does so by manually tracking the column number through the routines (aka ‘cursor’).  These days we’d asprintf into a temp buffer and strlen() it to figure out whether we should start a new line and indent before printing.  Or, even better, create a struct with the FILE * and the column number, and a fprintf variant which updates the column number and figures out wrapping for us. Routines like maxArgWidth(), which iterates the table(s) to figure out how much to indent, will still suck however: probably simplest to build all the help-table strings in memory and then dump them at the end.
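
Something like this is what I have in mind for that column-tracking fprintf variant (purely illustrative: the names are mine, it only does the column-tracking half rather than the wrapping, and it ignores output wider than its buffer):

    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>

    struct colfile {
        FILE *f;
        size_t col;     /* current output column */
    };

    static int col_printf(struct colfile *cf, const char *fmt, ...)
    {
        char buf[1024];
        va_list ap;
        const char *nl;
        int len;

        va_start(ap, fmt);
        len = vsnprintf(buf, sizeof(buf), fmt, ap);
        va_end(ap);
        if (len < 0)
            return len;

        fputs(buf, cf->f);

        /* Reset the column after a newline, otherwise advance by what we wrote. */
        nl = strrchr(buf, '\n');
        if (nl)
            cf->col = strlen(nl + 1);
        else
            cf->col += strlen(buf);
        return len;
    }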

I thought I found a buffer overrun, but a test program disproved it: singleOptionHelp() uses its maxLeftCol (plus some extra stuff) to size a buffer.  This works because maxLeftCol is previously calculated as the worst-case size of the argument help.  Now, the code is unclear (AFAICT maxLeftCol must always be sufficient, so the extra bytes are wasted), but it’s not buggy.

In summary, this is excellent code.  Talloc would help it, as would modern printf usage (%.*s for example), but generally the code is in really good shape.  I know that the popt code in RPM is slightly newer, but I can’t imagine that much has changed in such an obviously-stable codebase. The temptation to rewrite it is thus very low: who knows what the testsuite would miss?

linux.conf.au 2010

After slightly disastrous preparation  (my left knee in a brace as detailed for the perversely-curious here) my week went well.  I tried to get back to my hotel room early each evening to rest as per doctor’s orders (and managed it except Friday night), but polishing my Friday talk tended to take a few hours every day.

Sunday

The Newcomer’s Session was well attended, but Jacinta and I were slack with preparation so it was unbalanced for my tastes.  I raced to the post-session pub assuming my crutches would ensure I’d be the trailer, to find that I was wrong.  It would have been better to explicitly and immediately drag people towards the pub, because that’s (seriously) the most important part of the introduction to LCA.

Monday

Miniconf time, and I started in the Open Programming Languages miniconf.  There was some interestingly random language stuff there: it’s one of my few opportunities to get exposure to higher-level languages.  The miniconf talks were enthusiastic and unpolished, as such things are supposed to be.  Haskell, and all the wonderful things it doesn’t let you do by Stephen Blackheath was interesting, but lacked solid examples. Introducing Gearman — Distributed server for all languages by Giuseppe Maxia was a great short intro to an unknown project. vcs-pkg.org by Martin F. Krafft was a classic work-in-progress talk with insights into a mundane but critical infrastructure problem (standards and practices for coordinating upstream and across distributions using distributed revision control).

Die Flash Die – SVG has arrived by Andy Fitzsimon was a classic bling talk with a message about the animation potential of SVG.  Useful content, too, for those dealing with this, and I was blown away to hear of Gordon, a FOSS Flash™ runtime written in JavaScript.

How to Use FOSS Graphics Tools to Pay for College by Elizabeth Garbee was an insight into the US education system and a chance to find out what my friend Edale (I know she hates that meme!) was doing.  But her talk didn’t quite gel for this audience. Unfortunately using the words “did you spot the head-fake?” riles me.  You are suddenly telling me that you’ve been using your intellect to compete with me rather than to inform and enrich me.

Then came my own Talloc: Pick Up Your Own Garbage! talk, which was delayed by my miscalculation of transit time on crutches. A mediocre rehash of my previous talloc talks, but I wanted to put it in front of this group because it really offers a fresh view into a program’s data structures at runtime.

Writing Facebook Applications in Perl by Paul Fenwick was a nice little introduction to the FB API from a Perl point of view, but he kept his powder dry for his awesome lightning talk on Friday.

Tuesday

I peered in at the tail end of the keynote which was apparently excellent.  I woke a little early then did some more work on my presentation, and by the time I had breakfast I was incurably late. One person admitted to me that they watched the live stream from their hotel room, but I wasn’t that clever.

This day was all hallway track for me, catching up with many people I hadn’t seen since last year. Then the Speaker’s Dinner at the Museum of New Zealand: Te Papa Tongarewa. This is also a fun time to chat with everyone, but I was disappointed that my crutches forced me to forgo learning a traditional Haka. It was also the first chance to greet the two chief organizers, who had been sick the first two days of the conference.  Edale and I also had fun playing with their very-past-bedtime hyper two-year-old Brooke (until we were told not to stir her up any more!)

Wednesday

The keynote by Benjamin Mako Hill was a little chaotic but made his point about antifeatures well: how such things are only viable when consumers don’t have freedom of choice (in particular, ability to examine, modify and share the software they’re using).  I then headed to Introduction to game programming by Richard Jones, where I struggled with pyglet before giving up and paying half-attention.  I did learn some things though, and everyone who was active seemed to get great satisfaction from the tutorial.

Open Sourcing the Accountants by Jethro Carr lacked density.  It takes a great deal of work to give a thorough comparison of different accounting packages, and his insights into how accountants think were insufficient to make that the backbone of his talk either.

subunit: Testing across boundaries for fun and profit by Robert Collins was slightly familiar ground for me, but as I’m the libtap maintainer he wanted me to attend.  It was a good bread-and-butter talk, which perhaps could have benefited from a few more snazzy real-life examples (making testing sexy is hard though!).  He semi-seriously suggested I should take over the C output implementation for subunit; still thinking…

I caught the questions at Teaching FOSS at universities by Andrew Tridgell and Robert (Bob) Edwards, which I will watch seriously once the videos are uploaded.

Then came the one compulsory-attendance presentation of the week: The World’s Worst Inventions by Paul Fenwick. I had made a comment to Paul earlier in the week that I was concerned my talk lacked substance.  His reply was “I won’t comment how much substance is in my talk”.  And any conclusions were left to the minds of the audience as a fully-costumed Paul took us through a series of invention disasters.  I teased him about it later, but let’s be honest: if I could present like that I wouldn’t have worried about content either!

That evening was the HackOff.  I’ve never tried competitive programming, so when we came up with the plan of a SAMBA team, I heartily endorsed it :)  Intimidation is important at these events, and the tweet from Adam Harvey was promising: At the #lca2010 HackOff. There’s a table with Rusty, Tridge, Anthony Towns and Andrew Bartlett. We’re fucked. However, despite having the largest team (with 6 of us), we only just squeaked in by 2 minutes.  Subtract any one of the team and we wouldn’t have won, though with fewer we might not have tried to brute-force the final question.

Thursday

Glyn Moody’s keynote was excellent. Then I lost some more hallway time before emerging in The Elephant in the Room: Microsoft and Free Software by Jeremy Allison. I thought it was a worthwhile and balanced presentation; of course it had a few cheap laughs in it, but the examination of Microsoft’s actions wrt FOSS is a worthwhile exercise if we want to assess their potential impact.

I was a bit late to Building a Xapian index of Wikipedia in less time than this talk takes by Olly Betts, but it was too unprepared for my tastes and I went in not knowing what Xapian was (though I picked it up from context). Tux on the Moon: FOSS hardware and software in space by Jonathan Oxer was good, but another one I was late to (15 minutes between talks seems to give me enough time to start conversations, but not enough to finish them).

Simplicity Through Optimization by Paul McKenney was a good talk if you didn’t know your RCU.  Personally, I would have liked to hear more about what the various lines of code were doing (before they were excised by the optimized implementation).  But being deeply familiar with the theory and somewhat familiar with the practice, I’m probably in a minority.

By this stage I was exhausted, and Using Functional Programming Techniques In Your Favourite Language by Malcolm Tredinnick was in the same room, so I stayed.  This talk was a disappointment to me (and, I think, to Malcolm) because it didn’t quite contain the general insights he’d believed were there when he proposed the talk. It was nice to get a refreshing exposure to functional programming, though.

Dinner at an Indian restaurant with the SAMBA people, which meant I was right near the PDNS, so I dropped in briefly then returned to my hotel room for an early night.

Friday

Nat Torkington’s keynote contained the classic “heckle Rusty” factor and was delightfully punchy. He rolled over into a very, very strong set of lightning talks, a format which works so well at these geek-rich events.  Paul Fenwick’s “Unfriendly” Facebook app was an awesome way to close.

Patent defence for free software by Andrew Tridgell (late again!) was familiar ground for me, but I wanted to see how he presented such a dry area.  Lots of text: I would have included some more diagrams (claim dependencies are well represented by a tree, for example). But the audience were rapt, so I’m clearly too picky!

Last-minute prep for my talk: I had decided the previous night that I would use Notes View, only to find that no one could get it to work.  Both notes and presentation appeared on the projector screen; fortunately, just as I was about to give up and run without notes, someone suggested I simply drag the notes view back onto my laptop screen!  Sometimes the low-tech approaches evade our over-complicated minds.

FOSS Fun With A Wiimote by Rusty Russell was well-received. I didn’t go as far with the project as I had intended, due to personal time constraints and time lost wrangling with actual hardware, but sometimes that’s instructive too.

The presentation itself was flawed in three places.  Firstly, my intro slide appeared all at once rather than clicking in one point at a time, destroying my carefully practiced routine at that point.  Secondly, no one knew what LED throwies were (an open source graffiti technology developed at the Graffiti Research Lab), so that slide was completely lost.  Finally, I should have set up my replacement two-year-old on the stage where the audience and the cameras could see her clearly.

The closing announced Brisbane for lca2011, and I handed the virtual speakers’ gift to the organisers.  That done, I was ready to relax at the Penguin Dinner.  Most years I don’t even drink, knowing that I’ll have to do the auction.  But as there was no auction, I sat next to Nat Torkington to guarantee great conversation and was ready to chill. I did some singing, but didn’t try the Haka (again). I even got a cuddle with the organiser’s very well-behaved 5-month-old son Adam.

Unfortunately, events conspired against me and I was dragged into a pledging war for a prize I didn’t want to win (and at which my doctor would be aghast). I thought we could get more money from the Wenches For Winching, who were weasonably wealthy and weally wanted to win. Ted Ts’o had a similar idea. Unfortunately the prospect of crippled Rusty being “rescued” (after being dropped: that was the no way part) was too alluring for many, and I had to work hard to ensure I didn’t win.

A good time was had by all, though exhausting after a long week.

Saturday

Briefly peered into the Open Day, which was buzzing with setup and opening, before heading home, spent. I did find out that wild weather had wuined the winching of wenches; but there is a standing offer when they find themselves in Wellington again.

Summary

Absolutely on par with previous awesome conferences; there were no organisational disappointments for me the entire week. I was particularly happy to see people digging in and fixing things when they were wrong instead of simply complaining.

A great achievement, everyone!

More Linux-next Graphing

Mikey blogs about linux-next workload with pretty graphs.  Ideally, we should all have our patches marshalled before the freeze, and there should be no pre-merge-window peak.  I’ve gotten close to this in recent times, by being lazy and being content to wait for the next cycle if I miss one. Rushing things in tends to imply we skimped on review and testing.

So on that basis 2.6.30 looks like a winner:  it has the smallest “peak of crap”.

If you want to actually measure how much work Stephen is having to do, you would have to figure out what changes he is seeing.  Approximately, that would be the changes in Linus’ tree that did not come from linux-next, plus any new work entering linux-next itself.  Anyone?

Coding Fail: Rusty Breaks Booting

I will freely admit that kernel work has dropped in my priority list. But that didn’t excuse sloppy work like my ae1b22f6e46 commit which sought to sidestep an issue for lguest without me having to do much work.

There’s a 64-bit atomic compare-and-exchange instruction on x86 called cmpxchg8b.  It isn’t supported on pre-586 processors, so we have cmpxchg8b_emu, but that implementation isn’t paravirtualized and so won’t run under lguest.  That used to be OK: we never ran it except on machines which didn’t support the cmpxchg8b instruction, and I’ve never received a report.  Then at some point we started doing so; the easiest mod I could see was to switch emulation off except for kernels specifically compiled for 386 or 486.  But I missed Linus’ commit which had set the archs on which emulation was skipped:

Note, this patch only adds CMPXCHG8B for the obvious Intel CPU’s, not for others. (There was something really messy about cmpxchg8b and clone CPU’s, so if you enable it on other CPUs later, do it carefully.)

Now, I could blame Linus for putting that in a commit message, not in the Kconfig.cpu file where anyone changing it was going to see it.  But you should always double-check when you think you’re “fixing” something, and I didn’t.

(I get annoyed when developers don’t detail exactly what commit introduced a problem: it’s not just for convenient reading, but such research often prevents reintroducing subtle bugs! Like, say, Cyrix 6×86 not booting…)

Not Always Lovely Blooms…

So, with my recent evangelizing of Bloom Filters, Tridge decided to try to apply them to a problem he was having: an array of several thousand unsorted strings, each maybe 100 bytes, which needed checking for duplicates.  In the normal case, we’d expect few or no duplicates.

A Bloom Filter for this is quite simple: Wikipedia tells you how to calculate the optimal number of hashes to use and the optimal number of bits given (say) a 1 in a million chance of a false positive.
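
For concreteness, here are the standard sizing formulas from that article, plugged into code with made-up numbers (a few thousand strings and a one-in-a-million false-positive rate; these are assumptions, not Tridge’s exact workload):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double n = 10000;   /* expected number of strings (assumed) */
        double p = 1e-6;    /* acceptable false-positive probability */

        /* Optimal filter size in bits, and optimal number of hash functions. */
        double m = -n * log(p) / (log(2.0) * log(2.0));
        double k = (m / n) * log(2.0);  /* equivalently, -log2(p) */

        printf("%.0f bits (%.1f per string), %.1f hash functions\n",
               m, m / n, k);
        return 0;
    }

At one-in-a-million that works out to roughly 29 bits per string and about 20 hash functions, which is exactly where the trouble below comes from.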

I handed Tridge some example code and he put it in alongside a naive qsort implementation.  It’s in his junkcode dir.  The result?  qsort scales better, and is about 100x faster.  The reason?  Sorting usually only has to examine the first few characters, but creating N hashes means (in my implementation, using the always-awesome Jenkins lookup3 hash) passing over the whole string N/2 times.  That’s always going to lose: even if I coded a single-pass multi-hash, it would still have to look at the whole string.
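
The winning approach is embarrassingly small; something like this (my reconstruction, not Tridge’s actual junkcode):

    #include <stdlib.h>
    #include <string.h>

    static int cmpstr(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    /* Sort the array in place, then any duplicates are adjacent. */
    static int has_duplicates(const char **strings, size_t n)
    {
        size_t i;

        qsort(strings, n, sizeof(strings[0]), cmpstr);
        for (i = 1; i < n; i++)
            if (strcmp(strings[i - 1], strings[i]) == 0)
                return 1;
        return 0;
    }

Note that it reorders the array; if the original order matters, you’d sort an array of pointers or indices instead.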

Sometimes, simplicity and standard routines are not just clearer, but faster.

A Week With Android (HTC Magic)

I haven’t used an iPhone in anger so I can’t compare, but I got this so I could use Google Maps to navigate public transport: Adelaide’s integration is excellent, and as I have no car it’s important for Arabella and me.

The Good

  • Typing on anything that size is suboptimal, but the clever auto-predict and auto-correct are well done.
  • Google maps integration is nice: knowing my location makes getting public transport directions really sweet.
  • Since I pay 1.5c/MB for data on the phone network, grabbing onto my home WiFi network where possible is good.
  • Google Calendar and contacts storage means I fear obsolete/lost phone much less.
  • Plugging in as a USB mass-storage makes transferring music and pictures of Arabella from Linux really easy.
  • Some neat features, like turning a map search into a contact (and then easily using a contact when looking for directions).

The Bad

  • Intermittent bugs, such as no characters showing up for SMS when the keyboard is in landscape mode; I discovered these can be solved by a power-cycle.
  • My calendar notifications are completely silent.  I’ve played with every option, but when a calendar event comes due, there’s nothing but a glowing trackball to indicate it.  This seems to be a bug, but I’m reluctant to factory reset to see if that fixes it.
  • Calendar entries default to My Calendar: if you forget to flip that to your google account when first creating the entry, it won’t be shared.  I want everything mirrored to Google, so several times I’ve had to delete and recreate laboriously-entered appointments.
  • I wish the screen would flip faster; landscape makes a better kbd, but portrait often better browsing.
  • Sometimes slowness makes it think I held down a key (giving a special character) when I didn’t, and the autopredict/correct seems to give up if you held a key down.
  • I have access to Internode’s hotspots around the city, but as that needs a username/password logon, it’s not useful automatically.
  • Battery life (in this stage of intense use) is 1-2 days.

I got it from Portagadgets.com, who were efficient (A$487 + $36 shipping, done via local bank transfer).  Getting an account and new SIM from Exetel took longer.

Conclusion: it’s definitely usable by non-geeks, and has raised my expectations of future phones.  There are some things (such as writing this post) which are much easier on my laptop.  But for reading Facebook or Wikipedia, finding your way on Google Maps, or having SMS conversations it’s excellent.

Google Analytics For WordPress Upgrade Fail

Had an old copy of the “Google Analytics For WordPress” plugin lying around (which didn’t seem to put anything in my blog source), but after upgrading it, it kept saying “Google Analytics settings reset to default” whenever I tried to change anything.

See this thread which talks about the problem and waves at the solution.  Here’s what you need to do, if like me you’re not a WordPress/MySQL junkie and want simple instructions:

  1. Find your wordpress config file.  Mine, on a Debian system, was in /etc/wordpress/config-rusty.ozlabs.org.php
  2. It will contain lines like this:
    define('DB_NAME', 'rustyozlabsorg');
    define('DB_USER', 'rustyozlabsorg');
    define('DB_PASSWORD', 'g1812fbsa');
  3. You need to open the MySQL database; we’re going to do some brain surgery to remove the old cruft:
    $ mysql --user=rustyozlabsorg --password=g1812fbsa rustyozlabsorg
  4. This should give you a “mysql>” prompt, where you type the following:
    DELETE FROM wp_options where option_name='GoogleAnalyticsPP';
  5. It should say something like “Query OK, 1 row affected (0.00 sec)”. Then type
    quit;
  6. You’re done.  Reload your setting page and try again.

Hope that gets into Google and helps someone else who can’t figure out what’s going on!