As entire countries set an example of efficient mining, ForkLog’s editors wondered whether it is possible to go beyond the paradigm — to channel vast computing power for good, rather than confining the technology to hash calculation.
Having studied information compiled by Web3 enthusiast Danil Ivanov on the “useful” consensus mechanism Proof-of-Useful-Work (PoUW), we concluded this is the right direction of travel. Sergey Golubenko shares the findings.
A useful user of the 1990s
In January 1997, the RC5-56 cryptographic challenge began on distributed.net. The task was to find an encryption key for a specific algorithm. Over eight months, a group of participants cracked 56-bit encryption. After that success, a 64-bit key was targeted and found five years later, in 2002.
Alex Petrov, an expert in PoW mining and chip production and co-founder of HyperFusion, told ForkLog about his experience and the start of his career:
«I took part in RC5-56 and even led a team that ranked near the top. Back then two people nudged me to dive into analysing the RC, MD5 and SHA algorithms: Alex Biryukov, through his 1998 research, and the creator of the Equihash and Argon2 algorithms, Dmitry Khovratovich, who is still active today».
In 1999 the distributed-computing project SETI@home launched, letting people join the search for extraterrestrial intelligence; it later migrated to BOINC, the volunteer-computing platform developed at the University of California, Berkeley. The installed software processed small chunks of radio-astronomy data. People donated idle PC resources to science, much as DePIN applications do today, only without financial incentives.
In 2020 the project was put into hibernation; the Arecibo Observatory, its main source of data, shut down the same year.
Many BOINC programmes have helped researchers in astrophysics, including the modelling of three-dimensional dynamic maps of stellar streams.
Riding the trend, universities released their own software. In 2000 Stanford launched Folding@home, a protein-folding simulation project aimed at helping the fight against widespread diseases such as cancer and Alzheimer's.
Thousands of similar programmes were launched; many still run.
Crypto-hope and the foundations of PoUW
Former IOHK employee and Ergo founder Alexander Chepurnoy recalled the popularity of peer-to-peer (P2P) computer networks in those years.
«At the turn of the 1990s and 2000s there was a P2P boom, and the first virtual currency for rewarding participants appeared after the 2001 launch of the file-sharing network Mojo Nation. However, the startup failed, partly due to poorly designed rewards».
He added that problems usually lurk in the details. When it comes to tokens distributed for useful work, one must always balance supply and demand to avoid a death spiral.
Fast forward to 2013: bitcoin had existed for four years and Ethereum was still two years away. The young blockchain industry brimmed with crypto-enthusiasm, not yet fixated on profit.
Gridcoin and Primecoin were the first crypto-startups to marry utility with monetisation. The term PoUW did not yet exist, so the early protocols coined their own: the former opted for Proof-of-Research, the latter for a Proof-of-Work based on searching for prime numbers.
Gridcoin rewarded participants in scientific computations. Initially the system ran on PoW, but by 2014 it had moved to Proof-of-Stake (PoS), while keeping incentives for participating in BOINC projects. Consensus relied on staking, with scientific work layered on top to determine rewards.
Primecoin was the first experiment in which blockchain computation yielded scientific value. Its creator proposed abandoning classical hashing in favour of rare chains of prime numbers — Cunningham chains.
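The idea is easy to demonstrate. The sketch below (illustrative Python, not Primecoin's actual code; the function names are mine) searches for a Cunningham chain of the first kind, where each element is prime and the next is 2p + 1:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (adequate for small demo numbers)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def chain_length(p: int) -> int:
    """Length of the Cunningham chain of the first kind starting at p:
    p, 2p+1, 4p+3, ... while every element is prime."""
    length = 0
    while is_prime(p):
        length += 1
        p = 2 * p + 1
    return length

def find_origin(min_len: int, limit: int = 100_000):
    """First origin below `limit` whose chain has at least `min_len` primes."""
    for p in range(2, limit):
        if chain_length(p) >= min_len:
            return p
    return None

print(find_origin(6))  # 89 -> 179 -> 359 -> 719 -> 1439 -> 2879
```

The asymmetry Primecoin exploited is visible here: finding a long chain takes an open-ended search, while verifying one is just a handful of primality checks.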
That period brought a conceptual understanding of “useful” work — long before the term Proof-of-Useful-Work appeared.
In 2014 there were attempts to steer mining energy towards practical ends. One was Permacoin, where network participants engaged in distributed storage of valuable data. Correctness was checked via a Proof-of-Retrievability algorithm.
The next project, CureCoin, launched the same year, took part in the Folding@home initiative, with a cryptocurrency serving as a bridge between a decentralised economy and scientific tasks.
A clear formulation — and plenty of problems
The term PoUW was first used in 2017, in the paper Proofs of Useful Work by Marshall Ball, Alon Rosen, Manuel Sabin and Prashant Nalini Vasudevan; the work was expanded in 2021.
The authors set the main goal: to design a PoW scheme in which computational resources are not wasted but applied to meaningful tasks beyond the blockchain. The key criteria are:
- the task must be computationally hard;
- its solution must be fast and reliably verifiable;
- and the result must have practical value outside the cryptographic system.
Alexander Chepurnoy pointed out a raft of difficulties with this concept:
«For fairness, PoUW requires that any task chosen by a miner be indistinguishable from a randomly selected one, and that difficulty not vary from case to case. Otherwise miners will try to pick easy tasks and drop off after solving them. It is also necessary to ensure that private optimisations for specific GPU models, or improvements in FPGA/ASIC hardware, do not yield a large advantage».
He added that in bitcoin mining the AsicBoost optimisation was discovered at one point and exploited by some hardware makers. Many of these issues surfaced in Primecoin, whose network nevertheless set several world records for prime chains.
Web3 developer and Everstake founder Sergey Vasilchuk shared his view with ForkLog:
«To me, PoUW is more a label than a fundamentally new model. In essence, many solutions already use this approach without calling themselves PoUW. Look at Chainlink, Wormhole or Pyth — these are examples of real Proof-of-Useful-Work without marketing tags».
In the context of viability, he sees the main driver as the market and its users.
«Chances of success do not depend on the type of consensus. Validators will support any model if it makes economic sense. And economics emerge only where there is user activity. Only the model that addresses real market needs works — not the other way round», the Everstake CEO noted.
The multifaceted nature of PoUW invites reflection on all its aspects. Alex Petrov offered an ethical lens and a question of perception:
«I will leave aside the emotional label of a ‘meaningless cryptographic task’: it is meaningless only from the outside world’s point of view, while for the network it is useful and functional. In my view, all cryptography in a network like Bitcoin fulfils its single most important task: security. The moral-political dilemma of the ‘right’ or ‘wrong’ use of energy and resources, of goals and of tasks ‘useful’ to someone outside the network, is something politicians and sociologists can argue about endlessly. There are many examples where costs and benefits are not obvious or do not please everyone».
That still needs proving
Satoshi Nakamoto’s consensus mechanism was not chosen by accident, and attempts to replace or refine it can simply make things more complex.
Petrov noted the positives of PoW:
«Verification in PoW is extremely simple and fast — that is its beauty. A validator node receives block data, checks the hash, compares it with the target/difficulty and everything is clear — yes or no. Usually this is one or a few cryptographic operations and a simple comparison, performed even offline».
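The check Petrov describes can be sketched in a few lines (a simplified illustration using Bitcoin-style double SHA-256; this is not real header validation, and `mine`/`verify_pow` are hypothetical names):

```python
import hashlib

def block_hash(header: bytes) -> int:
    """Bitcoin-style double SHA-256, interpreted as a big-endian integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def verify_pow(header: bytes, target: int) -> bool:
    """A block is valid iff its hash does not exceed the target:
    two hash operations and one comparison."""
    return block_hash(header) <= target

def mine(prefix: bytes, target: int) -> bytes:
    """Brute-force a nonce: the expensive half of the asymmetry."""
    nonce = 0
    while True:
        header = prefix + nonce.to_bytes(8, "big")
        if verify_pow(header, target):
            return header
        nonce += 1

target = 2 ** 244                 # easy demo difficulty: ~1 in 4096 hashes wins
hdr = mine(b"demo-block", target)
assert verify_pow(hdr, target)    # verification is instant, mining was the work
```

The miner loops thousands of times; the validator runs `verify_pow` once. That lopsided cost is exactly what PoUW must somehow preserve while making the work useful.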
The next stage in PoUW’s development was REM, presented in 2017 by researchers from Cornell University. The system relied on trusted hardware (Intel SGX) to attest that a miner had really performed the claimed computations, producing verifiable proofs of useful work.
In 2022 a team from IOHK presented a PoUW protocol prototype. Ofelimos uses combinatorial-optimisation problems to select a leader. The developers replaced the traditional PoW puzzle with an optimisation task while preserving resistance to attacks and mathematical rigour.
According to Alexander Chepurnoy, Ofelimos reliably prevents miners from cherry-picking easy tasks. The protocol supports a broad class of problems, including ones popular in machine learning and in ZKP generation. Even so, it leaves many economic and implementation questions open.
«Useful tasks with rewards somehow have to appear in the network, while possible collusion between task submitters and miners must not affect consensus. Nor should we forget minimising private optimisations in software and hardware», he added.
A separate line of PoUW research involves cryptographic proofs — SNARKs and other forms of ZKP. Lately developers have used a specific term: zk-PoUW.
Such methods let one verify correctness without redoing the work itself.
In 2023, Brno-based researcher Richard Gazdik proposed a scheme in which miners not only perform computations but generate zk-SNARK proofs for specified tasks along the way. The architecture works like a marketplace: users submit tasks that require proof generation; miners solve them for a reward. The block is produced by the participant whose proof verifies.
A recent study by Samuel Oleksak demonstrated embedding SNARK proofs directly into the consensus layer of an experimental blockchain.
Crypto startup teams and modern blockchains are trying to implement PoUW in practice.
In the technical documentation for Internet Computer, “useful” work is described as an architectural component — the way Internet Computer Consensus (ICC) creates the blockchain. The Network Nervous System (NNS) DAO built atop this mechanism coordinates protocol upgrades.
Another project — Flux — lets users deploy applications and services in a Web3 cloud. It is more a hybrid of a compute platform and a blockchain.
This DePIN project uses PoW with the ZelHash algorithm, a fork of Equihash, on the main blockchain with FluxNodes masternode infrastructure. Blocks are created by PoW, while the “useful” work is performed by nodes processing users’ applications.
In a discussion of PoUW on the Internet Computer forum, a user nicknamed ZackDS mentioned Flux:
«Half the network is mining on GPUs, and the other half is staking tokens simply to be able to deploy anything via Docker. That in itself isn’t necessarily bad — I just don’t think it fits a blockchain».
Alex Petrov on the size of the PoUW-implementation challenge:
«First you need to check the correctness of the task formulation, to ensure it is legitimate and meets the network’s criteria. Verification of results in PoUW is significantly more complex than in PoW».
Proposed verification options for PoUW:
- full recomputation (rare). If the task is small, one can repeat the computation entirely. But that defeats the idea of saving on verification;
- partial checking — sampling some key points or aspects of the solution;
- cryptographic proofs such as zk-SNARKs/STARKs. This approach can and should be fast — proof generation by the miner is hard, and that is what ensures security;
- consensus-based checking by other verifiers (less decentralised) — several nodes verify and confirm the result;
- statistical methods, for tasks where the exact solution is not unique or where solution quality can be assessed comparatively.
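As an illustration of the "partial checking" and "statistical" options, Freivalds' algorithm checks a claimed matrix product C = A·B with O(n²) work per round instead of redoing the O(n³) multiplication; a wrong product slips past k rounds with probability at most 2⁻ᵏ. This is a textbook analogy for cheap verification, not a mechanism any of the projects above actually uses:

```python
import random

def freivalds(A, B, C, rounds: int = 20) -> bool:
    """Probabilistically check that C == A*B. Each round multiplies by a
    random 0/1 vector: O(n^2) work versus O(n^3) for recomputing the
    product. A wrong C survives one round with probability <= 1/2."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) against C*r: three matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # the correct product
bad = [[19, 22], [43, 51]]    # one wrong entry

assert freivalds(A, B, good)
assert not freivalds(A, B, bad)
```

The verifier never recomputes the product, yet a cheat is caught with overwhelming probability; this is the cost gap between doing the work and checking it that PoUW schemes try to achieve for arbitrary useful tasks.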
In his words, checking uniqueness is one of the hardest parts, since the “difficulty” of a useful task can be hard to formalise and to compare with a PoW puzzle.
In PoW, checking that a proof of work meets the blockchain's requirements is exactly what provides security and satisfies the consensus mechanism: the hash of the block must meet a fixed target, which is simple, fast and unambiguous to verify.
With PoUW the number of steps is far greater: they may be non-deterministic, require heavy computation, or involve interaction with other systems or nodes, all of which adds risk.
«Verification complexity depends directly on the type of useful task. The goal is to make verification require significantly fewer resources than performing the work itself, which is not always easy. Otherwise the load on nodes grows disproportionately, many times over. Resources will be spent multiply not only on ‘scientific mining’ (to say nothing of the difficulty of coordinating tasks) but also on the nodes themselves, since the same operations will be performed on thousands of network nodes», the expert concluded.
What next?
Technically, implementing PoUW has proved difficult. The idea is important and interesting and there are first steps — but what should come next?
In the blockchain industry, "useful" work beyond the network itself is already represented by DePIN infrastructure applications; perhaps the link lies there.
Alexander Chepurnoy offered an alternative plan along these lines:
«It makes sense to start with DePIN without consensus, as DeFi protocols where both a task and its solution can be submitted, building an economy around it. Then you can try to combine it with Ofelimos. Interest in PoUW will remain, but doing it properly will be hard».
Sergey Vasilchuk highlighted weaknesses of DePIN and such a pairing:
«Helium or Render can run on any consensus — and most users do not care. Mobile operators have long used distributed networks and data-integrity algorithms without blockchains. WeatherXM, for instance, is not accurate enough for critical domains — aviation, energy. Real power grids use only certified solutions approved by the regulator. We often overestimate the role of technologies and live under the illusion of their impact».
He also noted that the industry has far more L1/L2 blockchains than truly useful Web3 applications, and voiced interest in applications built on Proof-of-Useful-Work rather than new consensuses. Vasilchuk concluded:
«I am interested in watching for new technical solutions that PoUW can bring. Technology does not change people, but if PoUW yields examples of real benefit, that can gradually shift the industry’s focus from speculation to value creation».
Summing up, Alex Petrov said implementing PoUW is incomparably more complex than PoW. It requires solving fundamental problems in distributed systems, cryptography and game theory, as well as domain-specific knowledge for the “useful” tasks. The number of conceptual and engineering steps or modules is far larger.
He noted that many PoUW projects are still at the research-and-development stage, as building a truly working, secure and efficient system is highly ambitious:
«Instead of optimal devices finely tuned for one specific task [ASIC], we will again end up with a multitasking, expensive and inefficient processor. Given that useful workloads typically change every three to five years, only rarely lasting 20–30, the potential upside of directing huge computational power to useful goals makes this line of work very attractive for researchers, but economically unviable for most networks. And complicating network design with risks from external task sources is, in essence, the artificial grafting-on of extra mechanisms and tasks serving someone's ‘benefit’ from outside».
