
What Is the Nakamoto Coefficient and How Is It Calculated?

What is the Nakamoto coefficient?

The Nakamoto coefficient is a tool for gauging decentralisation. It was proposed in 2017 by Balaji Srinivasan, Coinbase’s former CTO and originator of the Network State concept, together with Leland Lee.

In their article Quantifying Decentralization, they likened excessive centralisation to economic inequality, combining two measures:

  • the Lorenz curve — a graph that shows the distribution of income or wealth; the further it bows from the line of equality, the greater the inequality. In blockchains it can reflect the distribution of computing power or tokens across participants;
  • the Gini coefficient — a summary statistic of inequality ranging from 0 (perfect equality) to 1 (maximal inequality).
The Gini coefficient on a Lorenz curve. Source: Medium.

The authors noted the absence of a quantitative measure of decentralisation. Their core idea is to:

  • list the key subsystems of a distributed system;
  • determine how many distinct elements must be compromised to control each of them;
  • use the minimum of those counts as the indicator of effective decentralisation.

The Nakamoto coefficient is the minimum number of organisations (mining pools, validators or other stakeholders) that would need to collude to disrupt or control a network. Calculating it for any chain indicates how hard the network is to attack.

What data feed the calculation?

Srinivasan and Lee stressed that the choice of subsystems determines the metric’s precision. To apply the concept to public blockchains, the system must first be broken down into its components.

For Bitcoin, six decentralisation subsystems are often cited:

  1. Mining. In PoW networks, miners confirm transactions; the broader the distribution of hashpower, the higher the decentralisation. In PoS systems, validators are assessed similarly.
  2. Software clients. Diversity of clients reduces single points of failure.
  3. Developers. A broad base of engineers working on upgrades protects against capture by a small group.
  4. Exchanges. Heavy concentration of tokens on a few venues raises the risk of manipulation.
  5. Nodes. Nodes spread across countries and operators make the network more resilient.
  6. Token ownership. The distribution of large BTC balances is evaluated.
Subsystems of public blockchains. Source: Medium.

The overall Nakamoto coefficient takes the minimum value across the subsystems studied. Centralisation in any one element drags down the network’s overall decentralisation.
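This rule can be sketched in a few lines of Python; the per-subsystem numbers below are purely illustrative, not real measurements:

```python
# Hypothetical per-subsystem Nakamoto coefficients (illustrative numbers only)
subsystems = {
    "mining": 3,       # pools needed to exceed 51% of hash rate
    "clients": 2,      # client implementations covering a majority of nodes
    "developers": 5,
    "exchanges": 4,
    "nodes": 6,
    "ownership": 7,
}

# The network-wide coefficient is the minimum across subsystems:
overall = min(subsystems.values())
print(overall)  # -> 2: the least decentralised subsystem caps the whole network
```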

What are the Nakamoto coefficients of Bitcoin and Ethereum?

The calculation proceeds in stages:

  1. Identify key actors: major mining pools, validators, node operators and large token holders.
  2. Assess control: analyse distribution of power, such as hash rate in PoW or stake share in PoS.
  3. Sum shares: order participants from largest to smallest and count how many it takes to pass the 51% threshold — the critical level for an attack on the network.

As an example, consider Bitcoin’s mining-pool structure as of May 1, 2025:

  • Foundry USA — 30.6% of hash rate;
  • AntPool — 17.1%;
  • ViaBTC — 15.4%;
  • F2Pool — 9.8%;
  • MARA Pool — 5.6%;
  • Others — 21.5%.
Shares of Bitcoin mining pools as of May 1, 2025. Source: Hashrate Index. 

Calculation:

  • Foundry USA = 30.6%;
  • AntPool (30.6% + 17.1% = 47.7%);
  • ViaBTC (47.7% + 15.4% = 63.1%).

Adding ViaBTC pushes the total past the 51% threshold to 63.1% — summation stops. Three pools control more than half the network, so Bitcoin’s Nakamoto coefficient is 3.
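The three-step calculation can be sketched in Python using the pool shares from the article; the function name is ours, not a standard API:

```python
# Bitcoin mining-pool hash-rate shares as of May 1, 2025 (percent, from the article)
pools = {
    "Foundry USA": 30.6,
    "AntPool": 17.1,
    "ViaBTC": 15.4,
    "F2Pool": 9.8,
    "MARA Pool": 5.6,
}

def nakamoto_coefficient(shares, threshold=51.0):
    """Smallest number of largest entities whose combined share passes `threshold`."""
    running = 0.0
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        running += share
        if running > threshold:
            return count
    return None  # the shares listed never reach the threshold

print(nakamoto_coefficient(pools.values()))  # -> 3 (30.6 + 17.1 + 15.4 = 63.1 > 51)
```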

Notwithstanding Bitcoin’s decentralised nature with a large number of active nodes, the concentration of hashpower among pools can create risks. The point of the Nakamoto coefficient is to flag such weak spots. 

Ethereum, the second-largest cryptocurrency by market capitalisation, despite an impressive node count, also falls short on decentralisation when staking shares under its PoS consensus are assessed.

ETH stakers by share as of May 1, 2025. Source: Dune.

On a similar calculation, Ethereum’s Nakamoto coefficient is 5. The 51% threshold is crossed (at a combined 51.2%) by adding the staking shares of Lido, Coinbase, Binance, Ether.fi and Kiln.

How else is the Nakamoto coefficient used?

PoS networks such as Sui and Aptos use DAG-based BFT consensus mechanisms. Finalising a block in such systems requires the agreement of two-thirds of validators. In other words, an entity controlling more than 66.6% of staked tokens effectively governs block production.
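A minimal sketch of how the attack threshold changes the count; the validator stake shares below are invented for illustration:

```python
def nakamoto_coefficient(shares, threshold):
    """Smallest number of validators whose combined stake passes `threshold` percent."""
    total = 0.0
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        total += share
        if total > threshold:
            return count
    return None  # no coalition of the listed validators reaches the threshold

# Hypothetical validator stake shares, in percent (illustrative only):
stakes = [20, 15, 12, 10, 8, 8, 7, 7, 7, 6]
majority = nakamoto_coefficient(stakes, 51.0)   # 51% majority control
bft = nakamoto_coefficient(stakes, 66.6)        # >2/3 control in a BFT system
print(majority, bft)  # -> 4 6
```

The same stake distribution yields a higher coefficient under the BFT threshold, since a larger coalition is needed to reach two-thirds than a simple majority.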

According to the analytics resource Nakaflow, Nakamoto coefficients across PoS networks vary widely. As of May 1, 2025, one of the lowest readings — just 4 — is seen on Polygon. Mid-range scores of 20–35 appear on Solana, Cardano, Avalanche, THORChain and Avail. The runaway leader is the Polkadot parachain network with a coefficient of 173.

Nakamoto coefficients of popular PoS blockchains. Source: Nakaflow.   

Some blockchain teams use the Nakamoto coefficient to guide improvements to their technology.

The Internet Computer team, for instance, published a study of the network’s decentralisation using a modified version of the metric.

The developers observed that, for their project, taking the minimum value across subsystems is not always the right way to assess risk. For example, grouping participants by continent is not sensible: collusion risk among node providers is not necessarily tied to geography. They therefore opted for a weighted average across subsystems.

Internet Computer’s subsystems were defined as:

  • dapps. Applications used by the community, governed either by the Network Nervous System (NNS) DAO or by individual organisations;
  • protocol governance. Overseen by the NNS and responsible for the code that runs on network nodes;
  • infrastructure layer. The physical layer of Internet Computer, reflecting node participation. Also governed by the NNS.

The developers argued that showing dynamics is more instructive. For instance, raising a subsystem’s coefficient from 1 to 2 is crucial because it removes a single point of failure; that is more significant than an increase from 10 to 11. 

Because a simple weighted average treats both changes the same, the Internet Computer team proposed using a weighted average of the logarithms of the coefficients to capture material shifts.
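The effect can be seen in a short sketch; the function name and the equal default weights are our assumptions, not Internet Computer's published code:

```python
import math

def log_weighted_score(coeffs, weights=None):
    """Weighted average of log2(coefficient) across subsystems.

    A sketch of the idea described above: logarithms make a jump from
    1 to 2 count for much more than a jump from 10 to 11.
    """
    if weights is None:
        weights = [1.0] * len(coeffs)
    total = sum(weights)
    return sum(w * math.log2(c) for w, c in zip(weights, coeffs)) / total

# Two subsystems with coefficients 1 and 10. Removing the single point of
# failure (1 -> 2) moves the score far more than improving 10 -> 11:
base = log_weighted_score([1, 10])
gain_small = log_weighted_score([2, 10]) - base   # = 0.5
gain_large = log_weighted_score([1, 11]) - base   # ~ 0.07
```

A plain weighted average would score both improvements identically (+0.5 each); the logarithm is what privileges the removal of the single point of failure.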

What are the tool’s shortcomings?

The Nakamoto coefficient is a simple way to estimate the minimum number of participants required to control a network. It helps illuminate the distribution of power and assess a blockchain’s security, reliability and resilience.

The metric informs developers, investors and users alike, encouraging improvements to governance models, consensus mechanisms and scaling solutions in the pursuit of greater decentralisation.

But it has several drawbacks:

  • it is static, capturing the network at a single point in time; participation changes constantly, making data stale quickly;
  • it considers only on-chain data; multiple validators may belong to a single owner;
  • it focuses on selected subsystems; it may cover only a subset of validators or miners while ignoring client diversity, geographic spread or token-ownership concentration;
  • it ignores the high cost of running a node, which indirectly affects decentralisation;
  • it requires adaptations for different consensus mechanisms;
  • it omits external forces; regulation, technological shifts and market dynamics can reshape decentralisation.
