Thanks to edmundedgar on reddit I have some more accurate data with which to update my previous bandwidth growth estimation post: OFCOM UK’s November 2014 report on average broadband speeds. Whereas Akamai’s numbers could be dragged down by the increase in mobile connections, this directly measures actual broadband speeds.
Extracting the figures gives:
- Average download speed in November 2008 was 3.6Mbit/s
- Average upload speed from November 2008 to April 2009 was 0.43Mbit/s
- Average download speed in November 2014 was 22.8Mbit/s
- Average upload speed in November 2014 was 2.9Mbit/s
So in 6 years, downloads went up by 6.333 times, and uploads went up by 6.75 times. That’s an annual increase of 36% for downloads and 37% for uploads; that’s good, as it implies we can use download speed factor increases as a proxy for upload speed increases (as upload speed is just as important for a peer-to-peer network).
This compares with my previous post’s Akamai’s UK numbers of 3.526Mbit in Q4 2008 and 10.874Mbit in Q4 2014: only a factor of 3.08 (26% per annum). Given how close Akamai’s numbers were to OFCOM’s in November 2008 (a year after the iPhone UK release, but probably too early for mobile to have significant effect), it’s reasonable to assume that mobile plays a large part of this difference.
If we assume Akamai’s numbers reflected real broadband rates prior to November 2008, we can also use them to extend the OFCOM data back a year: this is important, since according to Akamai there was almost no bandwidth growth from Q4 2007 to Q4 2008. Ignoring that period gives a rosier picture than my last post, and smells of cherrypicking data.
So, let’s say the UK went from 3.265Mbit in Q4 2007 (Akamai numbers) to 22.8Mbit in Q4 2014 (OFCOM numbers). That’s a factor of 6.98, or 32% increase per annum for the UK. If we assume that the US Akamai data is under-representing Q4 2014 speeds by the same factor (6.333 / 3.08 = 2.056) as the UK data, that implies the US went from 3.644Mbit in Q4 2007 to 11.061 * 2.056 = 22.74Mbit in Q4 2014, giving a factor of 6.24, or 30% increase per annum for the US.
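The compound growth arithmetic above is easy to reproduce; here’s a quick sketch using the OFCOM and Akamai figures already quoted:

```python
def cagr(start_mbit, end_mbit, years):
    """Compound annual growth rate implied by start -> end over `years`."""
    return (end_mbit / start_mbit) ** (1.0 / years) - 1.0

# UK downloads (OFCOM): 3.6Mbit/s (Nov 2008) -> 22.8Mbit/s (Nov 2014)
print(round(cagr(3.6, 22.8, 6) * 100))     # 36 (% per annum)
# UK uploads (OFCOM): 0.43Mbit/s -> 2.9Mbit/s
print(round(cagr(0.43, 2.9, 6) * 100))     # 37
# UK, extended back a year via Akamai: 3.265Mbit/s (Q4 2007) -> 22.8Mbit/s
print(round(cagr(3.265, 22.8, 7) * 100))   # 32
# US estimate: 3.644Mbit/s (Q4 2007) -> 22.74Mbit/s (Q4 2014)
print(round(cagr(3.644, 22.74, 7) * 100))  # 30
```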
As stated previously, China is now where the US and UK were 7 years ago, suggesting they’re a reasonable model for future growth for that region. Thus I revise my bandwidth estimates; instead of 17% per annum this suggests 30% per annum as a reasonable growth rate.
There’s a significant debate going on at the moment in the Bitcoin world; there’s a great deal of information and misinformation, and it’s hard to find a cogent summary in one place. This post is my attempt, though I already know that it will cause me even more trouble than that time I foolishly entitled a post “If you didn’t run code written by assholes, your machine wouldn’t boot”.
The Technical Background: 1MB Block Limit
The bitcoin protocol is powered by miners, who gather transactions into blocks, producing a block roughly every 10 minutes (though the interval varies a lot). They get a 25 bitcoin subsidy for this, plus whatever fees are paid by those transactions. This subsidy halves every 4 years: in about 12 months it will drop to 12.5.
Full nodes on the network check transactions and blocks, and relay them to others. There are also lightweight nodes which simply listen for transactions which affect them, and trust that blocks from miners are generally OK.
A normal transaction is 250 bytes, and there’s a hard-coded 1 megabyte limit on the block size. This limit was introduced years ago as a quick way of preventing a miner from flooding the young network, though the original code could only produce 200kb blocks, and the reference code still defaults to producing 750kb blocks.
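Those two numbers set bitcoin’s raw throughput ceiling; the back-of-envelope arithmetic looks like this:

```python
BLOCK_LIMIT = 1_000_000   # bytes: the hard-coded block size cap
TX_SIZE = 250             # bytes: a normal transaction
BLOCK_INTERVAL = 600      # seconds: the 10-minute target

txs_per_block = BLOCK_LIMIT // TX_SIZE
print(txs_per_block)                   # 4000 transactions per block
print(txs_per_block / BLOCK_INTERVAL)  # ~6.7 transactions/second, network-wide
```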
In the last few months there have been increasing runs of full blocks, causing backlogs for a few hours. More recently, someone deliberately flooded the network with normal-fee transactions for several days; any transactions paying lower fees than those had to wait hours to be processed.
There are 5 people who have commit access to the bitcoin reference implementation (aka. “bitcoin-core”), and they vary significantly in their concerns on the issue.
The Bitcoin Users’ Perspective
From the bitcoin users’ perspective, blocks should be infinite, and fees zero or minimal. This is the basic position of respected (but non-bitcoin-core) developer Mike Hearn, and it has support from bitcoin-core ex-lead Gavin Andresen. They work on the wallet and end-user side of bitcoin, and they see the issue as the most urgent. In an excellent post arguing why growth is so important, Mike raises the following points, which I’ve paraphrased:
- Currencies have network effects. A currency that has few users is simply not competitive with currencies that have many.
- A decentralised currency that the vast majority can’t use doesn’t change the amount of centralisation in the world. Most people will still end up using banks, with all the normal problems.
- Growth is a part of the social contract. It always has been.
- Businesses will only continue to invest in bitcoin and build infrastructure if they are assured that the market will grow significantly.
- Bitcoin needs users, lots of them, for its political survival. There are many people out there who would like to see digital cash disappear, or be regulated out of existence.
At this point, it’s worth mentioning another bitcoin-core developer: Jeff Garzik. He believes that the bitcoin userbase has been promised that transactions will continue to be almost free. When a request to change the default mining limit from 750kb to 1M was closed by the bitcoin lead developer Wladimir van der Laan as unimportant, Jeff saw this as a symbolic moment:
— Jeff Garzik (@jgarzik) July 21, 2015
What Happens If We Don’t Increase Soon?
Mike Hearn has a fairly apocalyptic view of what would happen if blocks fill. That was certainly looking likely when the post was written, but after episodes where blocks were full for days, wallet designers are (finally) starting to estimate the fees needed for timely processing (miners process higher-fee transactions first). Some wallets and services didn’t even have a way to change the fee setting, leaving users stranded during high-volume events.
It now seems that the bursts of full blocks will arrive with increasing frequency; proposals to let users increase a transaction’s fee after the fact are fairly mature now, which (if all goes well) could make for a fairly smooth transition from the current “fees are tiny and optional” mode of operation to a “there will be a small fee” one.
But even if this rosy scenario is true, it avoids the bigger question of how high fees can become before bitcoin becomes useless. 1c? 5c? 20c? $1?
So What Are The Problems With Increasing The Blocksize?
In a word, the problem is miners. As mining has transitioned from a geek pastime, to a semi-hobbyist activity, to large operations with cheap access to power, it has become more concentrated.
The only difference between bitcoin and previous cryptocurrencies is that instead of a centralized “broker” to ensure honesty, bitcoin uses an open competition of miners. Given bitcoin’s endurance, it’s fair to count this as a vital property of bitcoin. Mining centralization is the long-term concern of another bitcoin-core developer (and my coworker at Blockstream), Gregory Maxwell.
With control over half the block-producing power, you control who can use bitcoin, and can cheat anyone not running a full node themselves. With control over 2/3, you can force a rule change on the rest of the network by stalling it until enough people give in. Central control is also a single point at which to shut the network down, allowing others to apply legal or extra-legal pressure to restrict the network.
What Drives Centralization?
Bitcoin mining is more efficient at scale. That was to be expected. However, the concentration has come much faster than expected because of the invention of mining pools. These pools tell miners what to mine, in return for a small (or in some cases, zero) share of profits. They save setup costs, they’re easy to use, and miners get more regular payouts. This has caused bitcoin to reel from one centralization crisis to another over the last few years; by some measures the number of full nodes has declined precipitously, and it continues to fall.
Consider the plight of a miner whose network is further away from most other miners. They find out about new blocks later, and their blocks get built on later. Both these effects cause them to create blocks which the network ignores, called orphans. Some orphans are the inevitable consequence of miners racing for the same prize, but the orphan problem is not symmetrical. Being well connected to the other miners helps, but there’s a second effect: if you discover the previous block, you’ve a head-start on the next one. This means a pool which has 20% of the hashing power doesn’t have to worry about delays at all 20% of the time.
If the orphan rate is very low (say, 0.1%), the effect can be ignored. But as it climbs, the pressure to join a pool (the largest pool) becomes economically irresistible, until only one pool remains.
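A toy model (my own simplification, not anyone’s published formula) makes the head-start asymmetry concrete: a miner only races against its propagation delay when someone *else* found the previous block, so a larger pool suffers proportionally fewer orphans for the same delay:

```python
import math

def expected_orphan_rate(hashpower, delay, interval=600.0):
    """Toy model: chance a rival block appears during our propagation
    delay, discounted by the fraction of blocks we found ourselves
    (on those we have a head-start and don't race at all)."""
    race = 1.0 - math.exp(-delay / interval)
    return (1.0 - hashpower) * race

# Same 30-second propagation delay, different pool sizes:
print(expected_orphan_rate(0.01, 30))  # ~4.8% for a tiny miner
print(expected_orphan_rate(0.20, 30))  # ~3.9% for a 20% pool
```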
Larger Blocks Are Driving Up Orphan Rates
Large blocks take longer to propagate, increasing the rate of orphans. This has been happening as blocks increase. Blocks with no transactions at all are smallest, and so propagate fastest: they still get a 25 bitcoin subsidy, though they don’t help bitcoin users much.
Many people assumed that miners wouldn’t overly centralize, lest they cause a clear decentralization failure and drive the bitcoin price into the ground. That assumption has proven weak in the face of climbing orphan rates.
And miners have been behaving very badly. Mining pools orchestrate attacks on each other with surprising regularity; DDOS and block withholding attacks are both well documented. A large mining pool used their power to double spend and steal thousands of bitcoin from a gambling service. When it was noticed, they blamed a rogue employee. No money was returned, nor any legal action taken. It was hoped that miners would leave for another pool as they approached majority share, but that didn’t happen.
If large blocks can be used as a weapon by larger miners against small ones, it’s expected that they will be.
More recently (and quite by accident) it was discovered that over half the mining power aren’t verifying transactions in blocks they build upon. They did this in order to reduce orphans, and one large pool is still doing so. This is a problem because lightweight bitcoin clients work by assuming anything in the longest chain of blocks is good; this was how the original bitcoin paper anticipated that most users would interact with the system.
The Third Side Of The Debate: Long Term Network Funding
Before I summarize, it’s worth mentioning the debate beyond the current debate: long-term network support. The minting of new coins decreases with time; the plan of record (as suggested in the original paper) is that total transaction fees will rise to replace the current mining subsidy. The schedule for this is unknown, and so far the transition simply has not happened: free transactions still work.
The block subsidy as I write this is about $7000. If nothing else changes, miners would want $3500 in fees in 12 months when the block subsidy halves, or about $2 per transaction. That won’t happen; miners will simply lose half their income. (Perhaps eventually they form a cartel to enforce a minimum fee, causing another centralization crisis? I don’t know.)
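The “$2 per transaction” figure falls out of simple division; here’s the sketch (the transactions-per-block count is my assumption, roughly a 440kb average block of 250-byte transactions, not a figure stated above):

```python
subsidy_usd = 7000.0       # current block subsidy: 25 BTC at roughly $280
fees_needed = subsidy_usd / 2   # to replace the halved subsidy in 12 months
txs_per_block = 1750       # assumed: ~440kb average block / 250-byte transactions

print(fees_needed / txs_per_block)  # $2.00 per transaction
```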
It’s natural for users to try to defer the transition as long as possible, and the practice in bitcoin-core has been to aggressively reduce the default fees as the bitcoin price rises. Core developers Gregory Maxwell and Pieter Wuille feel that signal was a mistake; that fees will have to rise eventually and users should not be lulled into thinking otherwise.
Mike Hearn in particular has been holding out the promise that this transition may not be necessary: that some users would offer to pay more so other users can continue to pay less. On this he is not widely supported.
It’s worth noting that some bitcoin businesses rely on the current very low fees and don’t want to change; I suspect this adds bitterness and vitriol to many online debates.
The bitcoin-core developers who deal with users most feel that bitcoin needs to expand quickly or die, that letting fees emerge now will kill expansion, and that the infrastructure will improve over time if it has to.
Other bitcoin-core developers feel that bitcoin’s infrastructure is dangerously creaking, that fees need to emerge anyway, and that if there is a real emergency a blocksize change could be rolled out within a few weeks.
At least until this is resolved, don’t count on future bitcoin fees being insignificant, nor promise others that bitcoin has “free transactions”.
 http://blog.bettercrypto.com/?p=1131 “Block Withholding Attacks – Recent Research” N T Courtois 2014
 https://bitcointalk.org/index.php?topic=327767.0 “GHash.IO and double-spending against BetCoin Dice” mmtech et al. 2013
 https://www.reddit.com/r/Bitcoin/comments/3c8tq2/questions_about_the_july_4th_bip66_fork/cstgajp “Questions about the July 4th BIP66 fork”
 https://twitter.com/petertoddbtc/status/608475414449778688 “350,000 full nodes to 6,000 in two years…” P Todd 2015
 https://getaddr.bitnodes.io/dashboard/?days=365 “Reachable nodes during the last 365 days.” Bitnodes.io
 https://bitcointalk.org/index.php?topic=532.msg6306#msg6306 “Re: Scalability and transaction rate” Satoshi 2010
 http://firstname.lastname@example.org/msg08161.html “[Bitcoin-development] Mining centralization pressure from non-uniform propagation speed” Pieter Wuille 2015
Last night f2pool mined a 1MB block containing a single 1MB transaction. This scooped up some of the spam which has been going to various weakly-passworded “brainwallets”, gaining them 0.5569 bitcoins (on top of the normal 25 BTC subsidy). You can see the megatransaction on blockchain.info.
It was widely reported to take about 25 seconds for bitcoin core to process this block: this is far worse than my “2 seconds per MB” result in my last post, which was considered a pretty bad case. Let’s look at why.
How Signatures Are Verified
The algorithm to check a transaction input (of this form) looks like this:
- Strip the other inputs from the transaction.
- Replace the input script we’re checking with the script of the output it’s trying to spend.
- Hash the resulting transaction with SHA256, then hash the result with SHA256 again.
- Check the signature correctly signed that hash result.
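Here’s a toy sketch of steps 1-3 in Python (real bitcoin serializes the transaction in its wire format; the tuples and field layout here are simplified stand-ins). The key point is that the whole modified transaction gets re-hashed once per input:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def sighash_sketch(inputs, outputs, index, prev_output_script):
    """inputs: list of (outpoint, script) pairs; outputs: list of bytes.
    Returns the hash that input `index`'s signature must sign."""
    parts = []
    for i, (outpoint, _script) in enumerate(inputs):
        # Strip every input script except ours, which is replaced by
        # the script of the output being spent.
        script = prev_output_script if i == index else b""
        parts.append(outpoint + script)
    parts.extend(outputs)
    return double_sha256(b"".join(parts))

# Checking all signatures re-hashes ~the whole transaction per input:
inputs = [(bytes([i % 256]) * 36, b"sig") for i in range(5570)]
hashes = [sighash_sketch(inputs, [b"out"], i, b"pubkeyscript") for i in range(3)]
```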
Now, for a transaction with 5570 inputs, we have to do this 5570 times. And the bitcoin core code does this by making a copy of the transaction each time, and using the marshalling code to hash that; it’s not a huge surprise that we end up spending 20 seconds on it.
How Fast Could Bitcoin Core Be If Optimized?
Once we strip the inputs, the result is only about 6k long; hashing 6k 5570 times takes about 265 milliseconds (on my modern i3 laptop). We have to do some work to change the transaction each time, but we should end up under half a second without any major backflips.
Problem solved? Not quite….
This Block Isn’t The Worst Case (For An Optimized Implementation)
As I said above, the amount we have to hash is about 6k; if a transaction has larger outputs, that number grows (though we can then fit in fewer inputs). A simple simulation shows the worst case for a 1MB transaction has 3300 inputs and 406000 bytes of outputs: simply doing the hashing for the input signatures takes about 10.9 seconds. That’s only about two or three times faster than the naive bitcoind implementation.
This problem is far worse if blocks were 8MB: an 8MB transaction with 22,500 inputs and 3.95MB of outputs takes over 11 minutes to hash. If you can mine one of those, you can keep competitors off your heels forever, and own the bitcoin network… Well, probably not. But there’d be a lot of emergency patching, forking and screaming…
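These worst-case figures can be ballparked with nothing more than an assumed hash rate; I’m using ~125MB/s of single-core SHA256, roughly what the 265ms-for-5570-hashes-of-6k figure above implies, so treat it as an estimate:

```python
HASH_RATE = 125e6   # bytes/sec of SHA256; assumed, derived from the figures above

def sighash_seconds(n_inputs, bytes_hashed_per_input):
    # Each input requires hashing (roughly) the whole stripped transaction.
    return n_inputs * bytes_hashed_per_input / HASH_RATE

print(sighash_seconds(3300, 406_000))           # ~10.7s: the 1MB worst case
print(sighash_seconds(22_500, 3_950_000) / 60)  # ~11.9 minutes: the 8MB worst case
```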
Short Term Steps
An optimized implementation in bitcoind is a good idea anyway, and there are three obvious paths:
- Optimize the signature hash path to avoid the copy, and hash in place as much as possible.
- Use the Intel and ARM optimized SHA256 routines, which increase SHA256 speed by about 80%.
- Parallelize the input checking for large numbers of inputs.
Longer Term Steps
A soft fork could introduce an OP_CHECKSIG2, which hashes the transaction in a different order. In particular, it should hash the input script replacement at the end, so the “midstate” of the hash can be trivially reused. This doesn’t entirely eliminate the problem, since the sighash flags can require other permutations of the transaction; these would have to be carefully explored (or only allowed with OP_CHECKSIG).
This soft fork could also place limits on how big an OP_CHECKSIG-using transaction could be.
Such a change will take a while: there are other things which would be nice to change for OP_CHECKSIG2, such as new sighash flags for the Lightning Network, and removing the silly DER encoding of signatures.
Since I was creating large blocks (41662 transactions), I added a little code to time how long they take once received (on my laptop, which is only an i3).
The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear. (A 17MB block took 171 milliseconds).
Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block. At least it’s scaling linearly, but it’s just slow.
So, 16 Seconds Per 8MB Block?
I did some digging. Just invalidating and revalidating the 8MB block only took 1 second, so something about receiving a fresh block makes it worse. I spent a day or so wrestling with benchmarking…
Some digging showed that ConnectTip does the actual script evaluation: CheckBlock() only does a cursory examination of each transaction. I’m guessing bitcoin core is not smart enough to parallelize a chain of transactions like mine, hence the 2 seconds per MB. On normal transaction patterns even my laptop should be about 4 times faster than that (but I haven’t actually tested it yet!).
So, 4 Seconds Per 8MB Block?
But things are going to get better: I hacked in the currently-disabled libsecp256k1, and the time for the 8MB ConnectTip dropped from 18.6 seconds to 6.5 seconds.
So, 1.6 Seconds Per 8MB Block?
I re-enabled compiler optimization after my benchmarking, and the result was 4.4 seconds: that’s with libsecp256k1, for an 8MB block.
Let’s Say 1.1 Seconds for an 8MB Block
This is with some assumptions about parallelism; and remember this is on my laptop which has a fairly low-end CPU. While you may not be able to run a competitive mining operation on a Raspberry Pi, you can pretty much ignore normal verification times in the blocksize debate.
 I turned on -debug=bench, which produced impenetrable and seemingly useless results in the log.
So I added a print with a sleep, so I could run perf. Then I disabled optimization, so I’d get understandable backtraces with perf. Then I rebuilt perf, which is part of the kernel source package, because Ubuntu’s perf doesn’t demangle C++ symbols. (Are we having fun yet?) I even hacked up a small program to help run perf on just that part of bitcoind. Finally, after perf failed me (it doesn’t show 100% CPU, no idea why; I’d expect to see main in there somewhere…) I added stderr prints and ran strace on the thing to get timings.
Linux’s perf competes with early git for the title of least-friendly Linux tool. Because it’s tied to kernel versions, and the interfaces change fairly randomly, you can never figure out how to use the version you need to use (hint: always use -g).
But when it works, it’s very useful. Recently I wanted to figure out where bitcoind was spending its time processing a block; because I’m a cool kid, I didn’t use gprof, I used perf. The problem is that I only want information on that part of bitcoind. To start with, I put a sleep(30) and a big printf in the source, but that got old fast.
Thus, I wrote “perfme.c”. Compile it (requires some trivial CCAN headers) and link perfme-start and perfme-stop to the binary. By default it runs/stops perf record on its parent, but an optional pid arg can be used for other things (eg. if your program is calling it via system(), the shell will be the parent).
So I did some IBLT research (as posted to bitcoin-dev) and I lazily used SHA256 to create both the temporary 48-bit txids, and from them a 16-bit index offset. Each node has to produce these for every bitcoin transaction ID it knows about (ie. its entire mempool), which is normally less than 10,000 transactions, but we’d better plan for 1M given the coming blockpocalypse.
For txid48, we hash an 8-byte seed with the 32-byte txid; I ignored the 8-byte seed for the moment, and measured various implementations of SHA256 hashing 32 bytes on my Intel Core i3-5010U CPU @ 2.10GHz laptop (though note we’d be hashing 8 extra bytes for IBLT). (Implementations are in CCAN.)
- Bitcoin’s SHA256: 527.7+/-0.9 nsec
- Optimizing the block ending on bitcoin’s SHA256: 500.4+/-0.66 nsec
- Intel’s asm rorx: 314.1+/-0.3 nsec
- Intel’s asm SSE4: 337.5+/-0.5 nsec
- Intel’s asm RORx-x8ms: 458.6+/-2.2 nsec
- Intel’s asm AVX: 336.1+/-0.3 nsec
So, if you have 1M transactions in your mempool, expect it to take about 0.62 seconds of hashing to calculate the IBLT. This is too slow (though it’s fairly trivially parallelizable). However, we just need a universal hash, not a cryptographic one, so I benchmarked murmur3_x64_128:
- Murmur3-128: 23 nsec
That’s more like 0.046 seconds of hashing, which seems like enough of a win to add a new hash to the mix.
I like data. So when Patrick Strateman handed me a hacky patch for a new testnet with a 100MB block limit, I went to get some. I added 7 digital ocean nodes, another hacky patch to prevent sendrawtransaction from broadcasting, and a quick utility to create massive chains of transactions.
My home DSL connection is 11Mbit down, and 1Mbit up; that’s the fastest I can get here. I was CPU mining on my laptop for this test, while running tcpdump to capture network traffic for analysis. I didn’t measure the time taken to process the blocks on the receiving nodes, just the first propagation step.
1 Megabyte Block
Naively, it should take about 10 seconds to send a 1MB block up my DSL line from first packet to last. Here’s what actually happens, in seconds for each node:
The packet dump shows they’re all pretty much sprayed out simultaneously (bitcoind may do the writes in order, but the network stack interleaves them pretty well). That’s why it’s 67 seconds at best before the first node receives my block (a bit longer, since that’s when the packet left my laptop).
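The 67-second floor is roughly what you’d expect from feeding one full copy of the block to each of the 7 nodes at once over a 1Mbit uplink (the overhead attribution is my guess):

```python
uplink_bps = 1e6          # 1Mbit/s DSL uplink
block_bytes = 1_000_000
peers = 7                 # the digital ocean nodes above

raw_seconds = peers * block_bytes * 8 / uplink_bps
print(raw_seconds)  # 56 seconds of raw payload; TCP/IP and link-layer
                    # overhead plausibly account for the observed ~67s
```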
8 Megabyte Block
I increased my block size, and one node dropped out, so this isn’t quite the same, but the times to send to each node are about 8 times worse, as expected:
Using the rough formula of 1-exp(-t/600), I would expect orphan rates of 10.5% generating 1MB blocks, and 56.6% generating 8MB blocks; that’s a crippling handicap. There are a few ways to avoid this:
- Get a faster DSL connection. Though even an uplink 10 times faster would mean 1.1% orphan rate with 1MB blocks, or 8% with 8MB blocks.
- Only connect to a single well-connected peer (-maxconnections=1), and hope they propagate your block.
- Refuse to mine any transactions, and just collect the block reward. Doesn’t help the bitcoin network at all though.
- Join a large pool. This is what happens in practice, but raises a significant centralization problem.
- We need bitcoind to be smarter about ratelimiting in these situations, and stream serially. Done correctly (which is hard), it could also help bufferbloat which makes running a full node at home so painful when it propagates blocks.
- Some kind of block compression, along the lines of Gavin’s IBLT idea. I’ve done some preliminary work on this, and it’s promising, but far from trivial.
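The orphan estimates here come from modelling block discovery as a Poisson process: the chance someone finds a competing block during a propagation delay of t seconds is 1-exp(-t/600), with 600 seconds the average block interval. A quick sketch:

```python
import math

def orphan_rate(delay_seconds, block_interval=600.0):
    """Chance a competing block is found during our propagation delay."""
    return 1.0 - math.exp(-delay_seconds / block_interval)

print(orphan_rate(67))    # ~10.6%: sending a 1MB block up a 1Mbit link
print(orphan_rate(6.7))   # ~1.1%: the same block on a 10x faster uplink
print(orphan_rate(53.6))  # ~8.5%: an 8MB block on that faster uplink
```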
What happens if bitcoin blocks fill? Miners choose transactions with the highest fees, so low fee transactions get left behind. Let’s look at what makes up blocks today, to try to figure out which transactions will get “crowded out” at various thresholds.
Some assumptions need to be made here: we can’t automatically tell the difference between me taking a $1000 output and paying you 1c, and me paying you $999.99 and sending myself the 1c change. So my first attempt was very conservative: only look at transactions with two or more outputs which were under the given thresholds (I used a nice round $200 / BTC price throughout, for simplicity).
(Note: I used bitcoin-iterate to pull out transaction data, and rebuild blocks without certain transactions; you can reproduce the csv files in the blocksize-stats directory if you want).
Paying More Than 1 Person Under $1 (< 500000 Satoshi)
Here’s the result (against the current blocksize):
Let’s zoom in to the interesting part first, since there’s very little difference before block 220,000 (February 2013). You can see that only about 18% of transactions are sending less than $1 and getting less than $1 in change:
Paying Anyone Under 1c, 10c, $1
The above graph doesn’t capture the case where I have $100 and send you 1c. If we eliminate any transaction which has any output less than various thresholds, we’ll catch that. The downside is that we capture the “sending myself tiny change” case, but I’d expect that to be rarer:
This eliminates far more transactions. We can see that only 2.5% of the block size is taken by transactions with sub-1c outputs (the dark red line hugging the “current blocks” line), but the green line shows about 20% of the block used by transactions with sub-10c outputs. And about 45% of the block is transactions moving $1 or less.
Interpretation: Hard Landing Unlikely, But Microtransactions Lose
If the block size doesn’t increase (or doesn’t increase in time): we’ll see transactions get slower, and fees become the significant factor in whether your transaction gets processed quickly. People will change behaviour: I’m not going to spend 20c to send you 50c!
Because block finding is highly variable and many miners are capping blocks at 750k, we already see backlogs at times; these bursts will happen with increasing frequency from now on. This will put pressure on SatoshiDice and similar services, which will be highly incentivized to use StrawPay or roll their own payment-channel mechanism for off-blockchain microtransactions.
I’d like to know what timescale this happens on, but the graph shows that we grow (and occasionally shrink) in bursts. A logarithmic graph prepared by Peter R of bitcointalk.org suggests that we hit 1M mid-2016 or so; expect fee pressure to bend that graph downwards soon.
The bad news is that even if fees hit (say) 25c and that prevents all the sub-$1 transactions, we only double our capacity, giving us perhaps another 18 months. (At that point miners are earning $1000 from transaction fees as well as $5000 (@ $200/BTC) from block reward, which is nice for them I guess.)
My Best Guess: Larger Blocks Desirable Within 2 Years, Needed by 3
Personally I think 5c is a reasonable transaction fee, but I’d prefer not to see it until we have decentralized off-chain alternatives. I’d be pretty uncomfortable with a 25c fee unless the Lightning Network was so ubiquitous that I only needed to pay it twice a year. Higher than that would have me reaching for my credit card to charge my Lightning Network account :)
Disclaimer: I Work For BlockStream, on Lightning Networks
Lightning Networks are a marathon, not a sprint. The development timeframes in my head are even vaguer than the guesses above. I hope it’s part of the eventual answer, but it’s not the bandaid we’re looking for. I wish it were different, but we’re going to need other things in the mean time.
I hope this provided useful facts, whatever your opinions.
I used bitcoin-iterate and gnumeric to render the current bitcoin blocksizes, and here are the results.
My First Graph: A Moment of Panic
This is block sizes up to yesterday; I’ve asked gnumeric to derive an exponential trend line from the data (in black; the red one is linear)
That trend line hits 1000000 at block 363845.5, which we’d expect in about 32 days time! This is what is freaking out so many denizens of the Bitcoin Subreddit. I also just saw a similar inaccurate [correction: misleading] graph reshared by Mike Hearn on G+ :(
But Wait A Minute
That trend line says we’re on 800k blocks today, and we’re clearly not. Let’s add a 6 hour moving average:
In fact, if we cluster into 36 blocks (ie. 6 hours worth), we can see how misleading the terrible exponential fit is:
Clearer Graphs: 1-Week Moving Average
So, not time to panic just yet, though we’re clearly growing, and in unpredictable bursts.
I’ve been trying not to follow the Great Blocksize Debate raging on reddit. However, the lack of any concrete numbers has kind of irked me, so let me add one for now.
If we assume bandwidth is the main problem with running nodes, let’s look at average connection growth rates since 2008. Google led me to NetMetrics (who seem to charge), and Akamai’s State Of The Internet (who don’t). So I used the latter, of course:
I tried to pick a range of countries, and here are the results:
| Country | % Growth Over 7 Years | Per Annum |
Countries which had best bandwidth grew about 17% a year, so I think that’s the best model for future growth patterns (China is now where the US was 7 years ago, for example).
If bandwidth is the main centralization concern, you’ll want block growth below 15%. That implies we could jump the cap to 3MB next year, and 15% thereafter. Or if you’re less conservative, 3.5MB next year, and 17% thereafter.