I used bitcoin-iterate and gnumeric to render the current bitcoin blocksizes, and here are the results.
My First Graph: A Moment of Panic
This graph shows block sizes up to yesterday; I’ve asked gnumeric to derive an exponential trend line from the data (in black; the red one is linear).
That trend line hits 1,000,000 at block 363845.5, which we’d expect in about 32 days’ time! This is what is freaking out so many denizens of the Bitcoin subreddit. I also just saw a similar inaccurate [correction: misleading] graph reshared by Mike Hearn on G+ :(
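The “about 32 days” figure is just block arithmetic; here’s a quick sanity check. The current height used below (~359238) is an assumption chosen to match the post, not a quoted value:

```python
# Sanity check on the "32 days" figure: bitcoin targets one block every
# 10 minutes, i.e. 144 blocks per day. ASSUMED_HEIGHT is a hypothetical
# "today" height, not a number from the post.
TARGET_BLOCK = 363845.5   # where the exponential trend crosses 1,000,000 bytes
ASSUMED_HEIGHT = 359238   # assumed current height
BLOCKS_PER_DAY = 24 * 60 / 10  # 144

days_until_target = (TARGET_BLOCK - ASSUMED_HEIGHT) / BLOCKS_PER_DAY
print(round(days_until_target))  # → 32
```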
But Wait A Minute
That trend line says we’re on 800k blocks today, and we’re clearly not. Let’s add a 6-hour moving average:
In fact, if we cluster into 36 blocks (i.e. 6 hours’ worth), we can see how misleading the terrible exponential fit is:
Clearer Graphs: 1 week Moving Average
So, not time to panic just yet, though we’re clearly growing, and in unpredictable bursts.
I’ve been trying not to follow the Great Blocksize Debate raging on reddit. However, the lack of any concrete numbers has kind of irked me, so let me add one for now.
If we assume bandwidth is the main problem with running nodes, let’s look at average connection growth rates since 2008. Google led me to NetMetrics (who seem to charge) and Akamai’s State Of The Internet (who don’t). So I used the latter, of course:
I tried to pick a range of countries, and here are the results:
% Growth Over 7 Years
Countries with the best bandwidth grew about 17% a year, so I think that’s the best model for future growth patterns (China is now where the US was 7 years ago, for example).
If bandwidth is the main centralization concern, you’ll want block growth below 15%. That implies we could jump the cap to 3MB next year, and 15% thereafter. Or, if you’re less conservative, 3.5MB next year, and 17% thereafter.
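To see what those proposals mean over time, here’s the compound growth worked out (the ten-year horizon is just an illustration, not something the post commits to):

```python
# Projecting the proposed caps forward: start at 3MB (conservative) or
# 3.5MB (less conservative) next year, then compound yearly.
def cap_after(start_mb, rate, years):
    """Block size cap in MB after `years` of compound growth at `rate`."""
    return start_mb * (1 + rate) ** years

# Ten years out under each proposal:
conservative = cap_after(3.0, 0.15, 10)   # about 12.1 MB
aggressive = cap_after(3.5, 0.17, 10)     # about 16.8 MB
```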
Previously I discussed the use of IBLTs (on the pettycoin blog). Kalle and I got some interesting, but slightly different, results; before I revisited them I wanted some real data to play with.
Finally, a few weeks ago I ran 4 nodes for a week, logging incoming transactions and the contents of the mempools when we saw a block. This gives us some data to chew on when tuning any fast block sync mechanism; here are my first impressions looking at the data (which is available on github).
These graphs are my first look; in blue is the number of txs in the block, and in purple stacked on top is the number of txs which were left in the mempool after we took those away.
The good news is that all four sites are very similar; there’s little variance across these nodes (three are in Digital Ocean data centres and one is behind two NATs and a wireless network at my local coworking space).
The bad news is that there are spikes of very large mempools around block 352,800; a series of 731kb blocks which I’m guessing is some kind of soft limit for some mining software [EDIT: 750k is the default soft block limit; reported in 1024-byte quantities as blockchain.info does, this is 732k. Thanks sipa!]. Our ability to handle this case will depend very much on heuristics for guessing which transactions are likely candidates to be in the block at all (I’m hoping it’s as simple as first-seen transactions are most likely, but I haven’t tested yet).
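The unit conversion in that edit note checks out: a 750,000-byte soft limit, reported in 1024-byte units the way blockchain.info does, lands right at 732k:

```python
# Default soft block limit in bytes, reported in KiB (1024-byte units).
soft_limit_bytes = 750_000
reported_k = soft_limit_bytes / 1024
print(round(reported_k))  # → 732
```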
The key revelation of the paper is that we can have a network of arbitrarily complicated transactions, such that they aren’t on the blockchain (and thus are fast, cheap and extremely scalable), but at every point are ready to be dropped onto the blockchain for resolution if there’s a problem. This is genuinely revolutionary.
It also vindicates Satoshi’s insistence on the generality of the Bitcoin scripting system. And though it’s long been suggested that bitcoin would become a clearing system on which genuine microtransactions would be layered, it was unclear that we were so close to having such a system in bitcoin already.
Note that the scheme requires some solution to malleability to allow chains of transactions to be built (this is a common theme, so likely to be mitigated in a future soft fork), but Gregory Maxwell points out that it also wants selective malleability, so transactions can be replaced without invalidating the HTLCs which are spending their outputs. Thus it proposes new signature flags, which will require active debate, analysis and another soft fork.
There is much more to discover in the paper itself: recommendations for lightning network routing, the node charging model, a risk summary, the specifics of the softfork changes, and more.
I’ll leave you with a brief list of requirements to make Lightning Networks a reality:
A soft-fork is required, to protect against malleability and to allow new signature modes.
A new peer-to-peer protocol needs to be designed for the lightning network, including routing.
Blame and rating systems are needed for lightning network nodes. You don’t have to trust them, but it sucks if they go down, as your money is probably stuck until the timeout.
More refinements (e.g. a relative OP_CHECKLOCKTIMEVERIFY) to simplify and tighten timeout times.
Wallets need to learn to use this, with UI handling of things like timeouts and fallbacks to the bitcoin network (sorry, your transaction failed, you’ll get your money back in N days).
You need to be online every 40 days to check that an old HTLC hasn’t leaked, which will require some alternate solution for occasional users (shut down channel, have some third party, etc).
A server implementation needs to be written.
That’s a lot of work! But it’s all simply engineering from here, just as bitcoin was once the paper was released. I look forward to seeing it happen (and I’m confident it will).
In Part I I described how a Poon-Dryja channel uses a single in-blockchain transaction to create off-blockchain transactions which can be safely updated by either party (as long as both agree), with fallback to publishing the latest versions to the blockchain if something goes wrong.
In Part II I described how Hashed Timelocked Contracts allow you to safely make one payment conditional upon another, so payments can be routed across untrusted parties using a series of transactions with decrementing timeout values.
Now we’ll join the two together: encapsulate Hashed Timelocked Contracts inside a channel, so they don’t have to be placed in the blockchain (unless something goes wrong).
Revision: Why Poon-Dryja Channels Work
Here’s half of a channel setup between me and you where I’m paying you 1c (there’s always a mirror setup between you and me, so it’s symmetrical):
The system works because after we agree on a new transaction (e.g. to pay you another 1c), you revoke this one by handing me your private keys to unlock that 1c output. Now if you ever released Transaction 1, I could spend both the outputs. If we want to add a new output to Transaction 1, we need to be able to make it similarly stealable.
Adding a 1c HTLC Output To Transaction 1 In The Channel
I’m going to send you 1c now via an HTLC (which means you’ll only get it if the riddle is answered; if it times out, I get the 1c back). So we replace transaction 1 with transaction 2, which has three outputs: $9.98 to me, 1c to you, and 1c to the HTLC (once we agree on the new transactions, we invalidate transaction 1 as detailed in Part I):
Note that you supply another separate signature (sig3) for this output, so you can reveal that private key later without giving away any other output.
We modify our previous HTLC design so that you revealing sig3’s private key would allow me to steal this output. We do this the same way we did for that 1c going to you: send the output via a timelocked, mutually signed transaction. But there are two transaction paths in an HTLC: the got-the-riddle path and the timeout path, so we need to insert those timelocked, mutually signed transactions in both of them. First, let’s append a 1-day delay to the timeout path:
Similarly, we need to append a timelocked transaction on the “got the riddle solution” path, which now needs my signature as well (otherwise you could create a replacement transaction and bypass the timelocked transaction):
Remember The Other Side?
Poon-Dryja channels are symmetrical, so the full version has a matching HTLC on the other side (except with my temporary keys, so you can catch me out if I use a revoked transaction). Here’s the full diagram, just to be complete:
Closing The HTLC
When an HTLC is completed, we just update transaction 2, and don’t include the HTLC output.Â The funds either get added to your output (R value revealed before timeout) or my output (timeout).
Note that we can have an arbitrary number of independent HTLCs in progress at once, and open and/or close as many in each transaction update as both parties agree to.
Keys, Keys Everywhere!
Each output for a revocable transaction needs to use a separate address, so we can hand the private key to the other party. We use two disposable keys for each HTLC, and every new HTLC will change one of the other outputs (either mine, if I’m paying you, or yours, if you’re paying me), so that needs a new key too. That’s 3 keys, doubled for the symmetry, to give 6 keys per HTLC.
Adam Back pointed out that we can actually implement this scheme without the private key handover, and instead sign a transaction for the other side which gives them the money immediately.Â This would permit more key reuse, but means we’d have to store these transactions somewhere on the off chance we needed them.
Storing just the keys is smaller, but more importantly, Section 6.2 of the paper describes using BIP 32 key hierarchies so the disposable keys are derived: after a while, you only need to store one key for all the keys the other side has given you. This is vastly more efficient than storing a transaction for every HTLC, and indicates the scale (thousands of HTLCs per second) that the authors are thinking of.
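To see why derivation saves storage, here’s a toy stand-in for the idea (this is HMAC indexing, not the real BIP 32 derivation): if each disposable key is derived from a parent secret by index, then once the other side hands over the parent, every per-HTLC key it covers can be recomputed rather than stored.

```python
import hashlib
import hmac

# Toy model, NOT real BIP 32: derive the index'th disposable key from a
# parent secret. Handing over `parent` covers all derived keys at once.
def derive_key(parent_secret: bytes, index: int) -> bytes:
    return hmac.new(parent_secret, index.to_bytes(4, "big"),
                    hashlib.sha256).digest()

parent = b"counterparty's revealed parent secret"
# A thousand revoked HTLC keys, all recoverable from one stored secret:
keys = [derive_key(parent, i) for i in range(1000)]
assert derive_key(parent, 42) == keys[42]   # deterministic
assert len(set(keys)) == 1000               # all distinct
```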
My next post will be a TL;DR summary, and some more references to the implementation details and possibilities provided by the paper.
The new sighash types are fairly loose, and thus allow you to attach a transaction to a different parent if it uses the same output addresses. I think we could re-use the same keys in both paths if we ensure that the order of keys required is reversed for one, but we’d still need 4 keys, so it seems a bit too tricky.
In Part I, we demonstrated Poon-Dryja channels; a generalized channel structure which used revocable transactions to ensure that old transactions wouldn’t be reused.
A channel from me<->you would allow me to efficiently send you 1c, but that doesn’t scale, since it takes at least one on-blockchain transaction to set up each channel. The solution to this is to route funds via intermediaries; in this example we’ll use the fictitious “MtBox”.
If I already have a channel with MtBox’s Payment Node, and so do you, that lets me reliably send 1c to MtBox without (usually) needing the blockchain, and it lets MtBox send you 1c with similar efficiency.
But it doesn’t give me a way to force them to send it to you; I have to trust them. We can do better.
Bonding Unrelated Transactions using Riddles
For simplicity, let’s ignore channels for the moment. Here’s the “trust MtBox” solution:
What if we could bond these transactions together somehow, so that when you spend the output from the MtBox transaction, that automatically allows MtBox to spend the output from my transaction?
Here’s one way. You send me a riddle to which nobody else knows the answer, e.g. “What’s brown and sticky?”. I then promise MtBox the 1c if they answer that riddle correctly, and tell MtBox that you know it.
MtBox doesn’t know the answer, so it turns around and promises to pay you 1c if you answer “What’s brown and sticky?”. When you answer “A stick”, MtBox can pay you 1c knowing that it can collect the 1c off me.
The bitcoin blockchain is really good at riddles; in particular, “what value hashes to this one?” is easy to express in the scripting language. So you pick a random secret value R, then hash it to get H, then send me H. My transaction’s 1c output requires MtBox’s signature, and a value which hashes to H (i.e. R). MtBox adds the same requirement to its transaction output, so if you spend it, it can get its money back from me:
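The R/H riddle in miniature, using SHA-256 for illustration (bitcoin script has several hash opcodes, such as OP_SHA256 and OP_HASH160; which one is used doesn’t change the idea):

```python
import hashlib
import secrets

# You pick R, publish H. Anyone can verify a claimed answer against H,
# but nobody can recover R from H alone.
R = secrets.token_bytes(20)          # your 20-byte random secret
H = hashlib.sha256(R).digest()       # the riddle you send me

def answers_riddle(candidate: bytes, H: bytes) -> bool:
    """What the script's hash check does: does this value hash to H?"""
    return hashlib.sha256(candidate).digest() == H

assert answers_riddle(R, H)              # the real answer works
assert not answers_riddle(b"A stick", H) # guessing doesn't
```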
Handling Failure Using Timeouts
This example is too simplistic; when MtBox’s PHP script stops processing transactions, I won’t be able to get my 1c back if I’ve already published my transaction. So we use a familiar trick from Part I: a timeout transaction which, after (say) 2 days, returns the funds to me. This output needs both my and MtBox’s signatures, and MtBox supplies me with the refund transaction containing the timeout:
MtBox similarly needs a timeout in case you disappear. And it needs to make sure it gets the answer to the riddle from you within those 2 days, otherwise I might use my timeout transaction and it can’t get its money back. To give plenty of margin, it uses a 1-day timeout:
It’s fairly clear to see that longer paths are possible, using the same “timelocked” transactions. The paper uses 1 day per hop, so if you were 5 hops away (say, me <-> MtBox <-> Carol <-> David <-> Evie <-> you) I would use a 5-day timeout to MtBox, MtBox a 4-day one to Carol, etc. A routing protocol is required, but if some routing doesn’t work, two nodes can always cancel by mutual agreement (by creating a timeout transaction with no locktime).
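The decrementing timeouts can be computed mechanically: each hop’s timeout must exceed the next hop’s, so every node gets a full day to claim upstream after paying downstream. A small sketch:

```python
# Per-hop timeouts for a payment path, one day per hop as in the paper.
def hop_timeouts(path, days_per_hop=1):
    """Timeout (in days) each sender gives the next node along the path."""
    hops = len(path) - 1
    return {(path[i], path[i + 1]): (hops - i) * days_per_hop
            for i in range(hops)}

path = ["me", "MtBox", "Carol", "David", "Evie", "you"]
timeouts = hop_timeouts(path)
# I give MtBox 5 days; MtBox gives Carol 4; ...; Evie gives you 1.
```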
The paper refers to each set of transactions as contracts, with the following terms:
If you can produce to MtBox an unknown 20-byte random input data R from a known H, within two days, then MtBox will settle the contract by paying you 1c.
If two days have elapsed, then the above clause is null and void and the clearing process is invalidated.
Either party may (and should) pay out according to the terms of this contract in any method of the participants’ choosing and close out this contract early so long as both participants in this contract agree.
The hashing and timelock properties of the transactions are what allow them to be chained across a network, hence the term Hashed Timelock Contracts.
Next: Using Channels With Hashed Timelock Contracts.
The hashed riddle construct is cute, but as detailed above, every transaction would need to be published on the blockchain, which makes it pretty pointless. So the next step is to embed them into a Poon-Dryja channel, so that (in the normal, cooperative case) they don’t need to reach the blockchain at all.
I finally took a second swing at understanding the Lightning Network paper. The promise of this work is exceptional: instant, reliable transactions across the bitcoin network. The implementation is complex and the draft paper reads like a grab bag of ideas, but it truly rewards close reading! It doesn’t involve novel crypto, nor fancy bitcoin scripting tricks.
There are several techniques which are used in the paper, so I plan to concentrate on one per post and wrap up at the end.
Revision: Payment Channels
A Payment Channel is a method for sending microtransactions to a single recipient, such as me paying you 1c a minute for internet access. I create an opening transaction which has a $10 output, which can only be redeemed by a transaction input signed by you and me (or me alone, after a timeout, just in case you vanish). That opening transaction goes into the blockchain, and we’re sure it’s bedded down.
Then I send you a signed transaction which spends that opening transaction output, and has two outputs: one for $9.99 to me, and one for 1c to you. If you want, you could sign that transaction too, and publish it immediately to get your 1c.
Then a minute later, I send you a signed transaction which spends that same opening transaction output, and has a $9.98 output for me, and a 2c output for you. Each minute, I send you another transaction, increasing the amount you get every time.
This works because:
Each transaction I send spends the same output, so only one of them can ever be included in the blockchain.
I can’t publish them, since they need your signature and I don’t have it.
At the end, you will presumably publish the last one, which is best for you. You could publish an earlier one, and cheat yourself of money, but that’s not my problem.
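The three points above can be modelled in a few lines: each update is just a new split of the opening output (amounts in cents here), and the recipient naturally prefers the latest one because it pays them the most.

```python
# Toy model of the channel updates: each 1c payment produces a new
# (my_output, your_output) split of the $10.00 (1000c) opening output.
def channel_states(total_cents, payments):
    yours = 0
    states = []
    for amount in payments:
        yours += amount
        states.append((total_cents - yours, yours))
    return states

states = channel_states(1000, [1, 1, 1])   # three 1c payments
# After each minute: (999, 1), (998, 2), (997, 3).
best_for_you = max(states, key=lambda s: s[1])   # you'd publish the last
```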
Undoing A Promise: Revoking Transactions?
In the simple channel case above, we don’t have to revoke or cancel old transactions, as the only person who can spend them is the person who would be cheated.Â This makes the payment channel one way: if the amount I was paying you ever went down, you could simply broadcast one of the older, more profitable transactions.
So if we wanted to revoke an old transaction, how would we do it?
There’s no native way in bitcoin to have a transaction which expires. You can have a transaction which is valid after 5 days (using locktime), but you can’t have one which is valid only until 5 days have passed.
So the only way to invalidate a transaction is to spend one of its inputs, and get that input-stealing transaction into the blockchain before the transaction you’re trying to invalidate. That’s no good if we’re trying to update a transaction continuously (a la payment channels) without most of them reaching the blockchain.
The Transaction Revocation Trick
But there’s a trick, as described in the paper. We build our transaction as before (I sign, and you hold), which spends our opening transaction output, and has two outputs. The first is a $9.99 output for me. The second is a bit weird: it’s 1c, but needs two signatures to spend, mine and a temporary one of yours. Indeed, I create and sign such a transaction which spends this output, and send it to you, but that transaction has a locktime of 1 day:
Now, if you sign and publish that transaction, I can spend my $9.99 straight away, and you can publish that timelocked transaction tomorrow and get your 1c.
But what if we want to update the transaction? We create a new transaction, with a $9.98 output to me and a 2c output requiring both my signature and another temporary key of yours. I create and sign a transaction which spends that 2c output, has a locktime of 1 day and has an output going to you, and send it to you.
We can revoke the old transaction: you simply give me the temporary private key you used for that transaction. Weird, I know (and that’s why you had to generate a temporary address for it). Now, if you were ever to sign and publish that old transaction, I could spend my $9.99 straight away, and create a transaction using your key and my key to spend your 1c. Your transaction (1a below) which could spend that 1c output is timelocked, so I’ll definitely get my 1c transaction into the blockchain first (and the paper uses a timelock of 40 days, not 1).
So the effect is that the old transaction is revoked: if you were to ever sign and release it, I could steal all the money.Â Neat trick, right?
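The bookkeeping behind the trick can be sketched without any real crypto; here “keys” are just opaque tokens. Each state carries a fresh temporary key, revoking a state means handing its key over, and publishing a revoked state exposes you to theft because the other side then holds both keys on the revocable output:

```python
# Toy bookkeeping for the revocation trick (no real signatures).
class Channel:
    def __init__(self):
        self.states = []          # (state_id, temp_key) per agreed update
        self.revealed = set()     # temp keys handed over on revocation

    def update(self, temp_key):
        """Agree a new state; revoke the previous one by revealing its key."""
        if self.states:
            _, old_key = self.states[-1]
            self.revealed.add(old_key)
        self.states.append((len(self.states), temp_key))

    def safe_to_publish(self, state_id):
        """A state is safe only if its temp key was never handed over."""
        _, key = self.states[state_id]
        return key not in self.revealed

ch = Channel()
ch.update("temp-key-0")
ch.update("temp-key-1")   # revokes state 0 by revealing temp-key-0
# Publishing state 0 now lets the other side steal; only state 1 is safe.
```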
A Minor Variation To Avoid Timeout Fallback
In the original payment channel, the opening transaction had a fallback clause: after some time, it is all spendable by me. If you stop responding, I have to wait for this to kick in to get my money back. Instead, the paper uses a pair of these “revocable” transaction structures. The second is a mirror image of the first, in effect.
So the first output is $9.99, which needs your signature and a temporary signature of mine. The second is 1c for you. You sign the transaction, and I hold it. You create and sign a transaction which has that $9.99 as input, a 1-day locktime, and send it to me.
Since both your and my “revocable” transactions spend the same output, only one can reach the blockchain. They’re basically equivalent: if you send yours, you must wait 1 day for your money. If I send mine, I have to wait 1 day for my money. But it means either of us can finalize the payment at any time, so the opening transaction doesn’t need a timeout clause.
Now we have a generalized transaction channel, which can spend the opening transaction in any way we both agree on, without trust or requiring on-blockchain updates (unless things break down).
The next post will discuss Hashed Timelock Contracts (HTLCs) which can be used to create chains of payments…
I ignore transaction fees as an unnecessary distraction.
We need malleability fixes, so you can’t mutate a transaction and break the ones which follow. But I also need the ability to sign Transaction 1a without a complete Transaction 1 (since you can’t expose the signed version to me). The paper proposes new SIGHASH types to allow this.
[EDIT 2015-03-30 22:11:59+10:30: We also need to sign the other symmetric transactions before signing the opening transaction. If we released a completed opening transaction before having the other transactions, we might be stuck with no way to get our funds back (as we don’t have a “return all to me” timeout on the opening transaction)]
With the 1.0 virtio standard finalized by the committee (though minor non-material corrections and clarifications are still trickling in), Michael Tsirkin did the heavy lifting of writing the Linux drivers (based partly on an early prototype of mine).
But I wanted an independent implementation to test: both because OASIS insists on multiple implementations before standard ratification, and because I wanted to make sure the code which is about to go into the merge window works well.
Thus, I began the task of making lguest understand PCI. Fortunately, the osdev wiki has an excellent introduction on how to talk PCI on an x86 machine. It didn’t take me too long to get a successful PCI bus scan from the guest, and start implementing the virtio parts.
The final part (over which I procrastinated for a week) was to step through the spec and document all the requirements in the lguest comments. I also added checks that the guest driver was behaving sufficiently well, but now it’s finally done.
It also resulted in a few minor patches, and some clarification patches for the spec. No red flags, however, so I’m reasonably confident that 3.20 will have compliant 1.0 virtio support!
It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…
Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but it had pipes). POSIX even suggests this:
The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.
But write() is different. Presumably so simple UNIX filters didn’t have to check the return and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule. Quoting Linus:
Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.
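You can watch the rule in action; here in Python, but the underlying send/write semantics are the same syscalls. On a nonblocking socket, a large write returns a short count once the buffer fills, where a blocking one would wait:

```python
import socket

# A socketpair with one nonblocking end. 10MB is far more than any
# default socket buffer, so the first send must be a short write.
a, b = socket.socketpair()
a.setblocking(False)

data = b"x" * (10 * 1024 * 1024)
sent = a.send(data)          # buffer is empty, so some bytes go in...
assert 0 < sent < len(data)  # ...but nowhere near all of them

try:
    while True:
        sent += a.send(data[sent:])  # keep going until the buffer is full
except BlockingIOError:
    pass                             # EAGAIN: a blocking fd would hang here

a.close(); b.close()
```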
I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:
cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock
This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.
What’s worse is the Linux select() man page:
A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking.
... those in writefds will be watched to see if a write will not block ...
There are numerous C async I/O libraries; tevent being the one I’m most familiar with. Yet tevent has a very wide API, and programs using it inevitably descend into “callback hell”. So I wrote ccan/io.
The idea is that each I/O callback returns a “struct io_plan” which says what I/O to do next, and what callback to call. Examples are “io_read(buf, len, next, next_arg)” to read a fixed number of bytes, and “io_read_partial(buf, lenp, next, next_arg)” to perform a single read. You could also write your own, such as pettycoin’s “io_read_packet()” which read a length then allocated and read in the rest of the packet.
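The shape of the API can be sketched in a few lines; this is a Python stand-in for the plan-returning idea, not ccan/io itself, run against an in-memory buffer instead of a real fd:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Plan:
    op: str                   # "read" or "write"
    data: bytes               # bytes to write (unused for reads)
    nbytes: int               # how many bytes to read (unused for writes)
    next: Optional[Callable]  # callback returning the next Plan, or None

def run(plan, inbuf):
    """Tiny synchronous executor: feeds reads from inbuf, collects writes."""
    out = bytearray()
    while plan is not None:
        if plan.op == "read":
            chunk, inbuf = inbuf[:plan.nbytes], inbuf[plan.nbytes:]
            plan = plan.next(chunk) if plan.next else None
        else:  # "write"
            out += plan.data
            plan = plan.next(b"") if plan.next else None
    return bytes(out)

def echo_upper(chunk):
    # After reading, plan to write the uppercased bytes back, then stop.
    return Plan("write", chunk.upper(), 0, None)

result = run(Plan("read", b"", 5, echo_upper), b"hello world")
# result == b"HELLO": read 5 bytes, then the returned plan wrote them back.
```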
This should enable a convenient debug mode: you turn each io_read() etc. into a synchronous operation and now you have a nice call chain showing what happened to a file descriptor. In practice, however, debug mode was painful to use and a frequent source of bugs inside ccan/io, so I never used it for debugging.
And I became less happy when I used it in anger for pettycoin, but at some point you’ve got to stop procrastinating and start producing code, so I left it alone.
Now I’ve revisited it: 820 insertions(+), 1042 deletions(-), and the code is significantly less hairy, and the API a little simpler. In particular, writing the normal “read-then-write” loops is still very nice, while doing full-duplex I/O is possible, but more complex. Let’s see if I’m still happy once I’ve merged it into pettycoin…