Bizantine’s Law

Co-written by Cameron Sepahi

Moore’s Law

For those unfamiliar, Moore’s Law is the primary reason why iPhones today possess more processing power than the guidance computers that flew the Apollo moon missions. In 1965, Gordon Moore observed that the number of transistors on an integrated circuit doubled roughly every year, and he predicted the trend would continue for a decade. Moore ended up being slightly wrong: the number doubles roughly every two years, and the trend has now continued for over five decades.

As history shows, Moore didn’t need to predict the future perfectly; he only had to be directionally right. Because a chip’s computational power depends largely on how many transistors fit on it, Moore’s Law came to mean that computer processing power would double every two years.

Today, Moore looks like a genius, one whose name constantly pops up in articles like this one. When Moore published his prediction, he simply extrapolated a trend that already existed and, most importantly, was going to continue. An observable trend is useless on its own: absent strong underlying forces to sustain it, the trend is no more predictive than a random walk.

Bizantine’s Law

Today, we propose Bizantine’s Law, what we believe will one day become the Moore’s Law of the public blockchain field. Bizantine’s Law states: the efficiency of zero-knowledge proofs will double every year. Bizantine’s Law will ultimately mean to public blockchains what Moore’s Law has meant to computers: a public blockchain’s processing power will double every year. Zero-knowledge proofs are the most important cryptographic tool for the mainstream scaling of public blockchain technology [1].


Public blockchains can be thought of as state transition machines. A blockchain has an initial state; its validators apply a batch of transactions to that state, outputting a new state [2] [3]. This process is repeated frequently, resulting in a chain of states. Blockchains could really be called ‘state-chains’ instead of blockchains.
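The state-transition view can be sketched in a few lines of Python. The transaction rule below is a toy assumption, a simple balance transfer, not any particular chain’s logic:

```python
# A toy state-transition machine: state is a mapping of account -> balance.
def apply_transaction(state: dict, tx: dict) -> dict:
    """Apply one transfer to the state, returning the new state."""
    new_state = dict(state)
    assert new_state.get(tx["from"], 0) >= tx["amount"], "insufficient balance"
    new_state[tx["from"]] -= tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def apply_block(state: dict, txs: list) -> dict:
    """Apply a batch of transactions to a state, as validators do for each block."""
    for tx in txs:
        state = apply_transaction(state, tx)
    return state

genesis = {"alice": 10, "bob": 0}
block = [{"from": "alice", "to": "bob", "amount": 3}]
print(apply_block(genesis, block))  # {'alice': 7, 'bob': 3}
```

Chaining `apply_block` calls, each starting from the previous output, produces exactly the ‘state-chain’ described above.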

Blockchains, in their current form, are single-threaded state transition machines: every validator must apply each new batch of transactions (or some other form of computation) to the initial state to output the new state [4]. Zero-knowledge proofs enable blockchains to become multi-threaded state-transition machines [5].

Zero-knowledge Proofs, The Basics

There are two parties in every zero-knowledge proof scheme: the provers and the verifiers. The provers are the maintainers of the thread that exists outside of the public blockchain: they execute the thread’s computation and create a zero-knowledge proof that attests to the correctness of the computation.

The verifiers are the validators of the public blockchain: they verify the correctness of the proof provided by the provers, only updating the state of that thread if the proof is correct. Thus, instead of executing the full computation of a thread, a blockchain’s validators only need to verify the correctness of that thread’s computation. Verifying the correctness of computation is significantly less work than running the computation itself [6] [7].
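The prover/verifier asymmetry can be illustrated with a toy stand-in (this is not a real zero-knowledge scheme; it hides nothing and proves far less, but it shows how checking a result can be cheaper than producing it). Here the heavy computation is sorting, and the certificate check is linear-time:

```python
# Toy illustration of the prover/verifier asymmetry (not a real zk scheme):
# the prover does the heavy computation; the verifier only checks the result.
from collections import Counter

def prover(data: list) -> list:
    """Heavy work: actually run the computation (here, an O(n log n) sort)."""
    return sorted(data)

def verifier(data: list, claimed: list) -> bool:
    """Cheap check: is 'claimed' a sorted permutation of 'data'? Both checks are O(n)."""
    in_order = all(a <= b for a, b in zip(claimed, claimed[1:]))
    same_items = Counter(data) == Counter(claimed)
    return in_order and same_items

data = [3, 1, 2]
print(verifier(data, prover(data)))  # True
print(verifier(data, [1, 3, 2]))     # False: not in order
```

Real zero-knowledge proofs go much further: verification cost can be sublinear in the size of the computation, and the proof reveals nothing about the underlying data.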

In a zero-knowledge-proof-based, multi-threaded blockchain architecture, each thread only needs to be maintained by one computer [8], similar to how the internet functions today.

Transactions or other forms of computation can still happen on the blockchain itself, but they will be more expensive than those executed in off-chain threads: making thousands of computers perform your computation costs far more than making one (or one small subset) do it. Thus, the winning public blockchain ultimately becomes a mixture of a zero-knowledge proof verifier and an expensive transaction executor.

How much compacted computation [9] a public blockchain can verify is where Bizantine’s Law comes into play. We believe that a public blockchain will be able to verify at least double the amount of computation every year [10]. Zero-knowledge proof efficiency primarily comes down to four variables: the prover time, the verifier time, the size of the proof, and the Big-O complexity of each of those three (how they scale as the compacted computation scales). Bizantine’s Law is the result of analyzing these four variables.
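The doubling the law claims compounds quickly; a minimal projection (baseline units are arbitrary, purely for illustration) makes the implication concrete:

```python
def verifiable_computation(years: int, baseline: float = 1.0) -> float:
    """Project how much compacted computation a chain can verify,
    assuming efficiency doubles every year per Bizantine's Law."""
    return baseline * 2 ** years

# Annual doubling for a decade yields a 1024x increase over the baseline.
print(verifiable_computation(10))  # 1024.0
```

Note that this is a faster curve than Moore’s Law, which doubles every two years rather than every one.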

You can find the remainder of the article here, as Medium is not conducive to the formatting of the data we exhibit.


[1] As the zero-knowledge proof overhead decreases, the number of practical use cases of public blockchains rises. It becomes feasible for more applications, at greater scale, to use the blockchain as a shared, global, programmatic database.

[2] Each blockchain has a different name for the party that secures the chain, but ‘validators’ and ‘miners’ are the most frequently used.

[3] This process is repeated roughly every 13 seconds for Ethereum and roughly every 10 minutes for Bitcoin. The time between blocks is known as the block time.

[4] To be an honest, performant validator, you must process all of the transactions’ effects on the initial state. Validators who do not are either lazy or malicious and, whether they lazily follow a malicious block producer or act maliciously themselves, will be penalized, especially in Proof-of-Stake systems. Proof-of-Work has similar but weaker penalties for lazy and malicious validators.

[5] Multi-threading is the ability of a computer to provide multiple threads of execution, concurrently. In this case, the blockchain is the world computer.

[6] This depends on the type of computation performed but, for all relevant purposes, tends to be true. It will only become more true, as Bizantine’s Law predicts.

[7] To verify the zero-knowledge proof, a blockchain’s validators require: the zero-knowledge proof itself, some additional data attached to the proof, and the logic for the proof’s verification encoded in a smart contract already deployed on the public blockchain [7a]. The prover creates the proof, then submits it along with the additional data to the verification smart contract on the blockchain. The blockchain’s validators execute the verification smart contract, updating the state of the off-chain computation only if verification succeeds. As the efficiency of zero-knowledge proof protocols increases, more computation can be compacted (known as compacted computation) onto a public blockchain.
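The flow in this footnote can be sketched as a toy verifier contract. All names here are hypothetical, and `check_proof` is a hash-based stand-in for the real verification logic a deployed verifier would run; it is not a zero-knowledge proof (anyone who knows both states could forge it):

```python
# Toy on-chain verifier: accept a state update only if the proof checks out.
import hashlib

def check_proof(proof: bytes, old_state: bytes, new_state: bytes) -> bool:
    """Stand-in for real proof verification (hypothetical, NOT a zk check)."""
    expected = hashlib.sha256(old_state + new_state).digest()
    return proof == expected

class VerifierContract:
    """Mimics a verification smart contract holding the thread's latest state."""
    def __init__(self, initial_state: bytes):
        self.state = initial_state

    def submit(self, proof: bytes, new_state: bytes) -> bool:
        """Called by the prover; validators run this and update state on success."""
        if check_proof(proof, self.state, new_state):
            self.state = new_state
            return True
        return False

contract = VerifierContract(b"state0")
good_proof = hashlib.sha256(b"state0" + b"state1").digest()
print(contract.submit(good_proof, b"state1"))  # True: state advances
print(contract.submit(b"bogus", b"state2"))    # False: state unchanged
```

The key property mirrored here is that a failed verification leaves the thread’s state untouched.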

[7a] The additional data appended to the zero-knowledge proof tends to be the initial state, the set of transactions/state changes, and the final state, but the specific requirements depend on the type of zk rollup (namely if the data is appended on-chain or held off-chain by the zk rollup’s validators).

[8] By one computer, we mean one bookkeeper, similar to how Wells Fargo maintains all the debits and credits in its network. One bookkeeper can be a set of computers, just as Wells Fargo maintains the debits and credits in its network with a set of computers.

[9] ‘Compacted computation’ is computation executed outside the blockchain but represented on the blockchain via a zero-knowledge proof scheme.

[10] The blockchain can also verify computation via another type of off-chain, multi-threaded computation solution known as optimistic rollups. While improvements provided by optimistic rollups will play a role in this trend in the near-term, we view them as a short-term solution for complex execution schemes. Optimistic rollups rely on fraud proofs, which currently allow for more types of complex blockchain-native computation than zero-knowledge proofs. However, we believe that zk rollups will gradually replace optimistic rollups for all use cases, as zero-knowledge proofs become more efficient.


