Decentralizing the cloud: Acurast’s bet on smartphone-powered compute
- Bert van Kerkhoven

- Aug 28
- 8 min read

How the cloud came about
From transistors to integrated circuits
In 1957, eight engineers, later called the 'Traitorous Eight', walked out of Shockley Semiconductor Laboratory to start Fairchild Semiconductor. Their decision to venture out on their own was driven largely by frustration that the semiconductor space had stagnated ever since the transistor was invented at Bell Labs in 1947. What followed over the next few decades vindicated their vision:
1959: The invention of the planar process by Jean Hoerni, allowing transistors to be built directly into flat pieces of silicon, increasing their lifespan, decreasing costs and allowing for production at scale.
1961: The first commercial monolithic integrated circuits, built on Robert Noyce's invention of interconnecting multiple transistors on a single chip.
Mid-1960s: Annual Fairchild chip production scales into the millions.
In 1965, Gordon Moore, one of the Traitorous Eight and then Fairchild's Director of R&D, observed his famous law: the number of transistors on a chip would double at a regular cadence (initially every year, later revised to roughly every two years) while costs per transistor would fall as manufacturing became more efficient. Moore's Law has held up incredibly well and has transformed the world in every way possible: it enabled the rise of the cloud, shaped national defence capabilities (e.g. guided missiles), altered the balance of geopolitical power and underpinned trillions in global economic growth.
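As a quick sanity check of that compounding, here is a back-of-the-envelope sketch. The baseline (the 1971 Intel 4004 at roughly 2,300 transistors) and the fixed two-year doubling period are assumptions for illustration; the result is order-of-magnitude only.

```python
# Back-of-the-envelope check of Moore's Law: transistors per chip doubling every ~2 years.
# Baseline assumed for illustration: Intel 4004 (1971), roughly 2,300 transistors.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300

def projected_transistors(year: int, doubling_period_years: float = 2.0) -> float:
    """Project the transistor count of a leading-edge chip in a given year."""
    doublings = (year - BASELINE_YEAR) / doubling_period_years
    return BASELINE_TRANSISTORS * 2 ** doublings

# For 2022, the year Nvidia's H100 shipped with ~80 billion transistors,
# the naive projection gives ~1.1e11 -- the same order of magnitude.
print(f"{projected_transistors(2022):.1e}")
```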

The relentless persistence of Moore's Law and the rise of compute power is only part of the story. As compute plays an increasingly important role, power accrues to whoever owns the GPUs. Alongside the technological milestones described above, industry dynamics in semiconductors have oscillated between centralization and decentralization.
Centralization because of prohibitive costs
Initially, the dominant use-cases for integrated circuits were for military applications. In the early 1960s, chips cost thousands of dollars and these costs were only justifiable for purposes that served national security interests (like guided missiles & spacecraft). We were a long way off from the use of chips in consumer products. Compute capacity lived predominantly in mainframes (IBM System/360, 1964), costing millions of dollars — limiting their market to large corporations, research institutions and governments.
Decentralization because of Moore’s Law
It was only in 1981 that the first personal computers emerged, built by IBM and powered by Intel's 8088 microprocessor (with a transistor count of 29,000). Advanced compute could now sit on personal desktops. The decentralization of compute gained steam during the 1990s, and by the end of the decade transistor counts in consumer-grade PCs were in the millions (the Pentium III had 9.5m transistors and could be found in $2,000 PCs).
Centralization because of efficiency gains (Moore’s Law => idle compute at individual enterprises)
In the mid-2000s the opposite trend emerged and compute started to centralize again with the introduction of AWS (2006), Microsoft Azure (2010) and Google Cloud (2011). These so-called hyperscalers could provide compute on demand while amortizing capex across millions of users. This mattered because Moore's Law created upward supply shocks while demand struggled to keep up, with utilization rates at individual enterprises dropping below 20%. These compute factories use cutting-edge chips (Nvidia's H100 packs 80 billion transistors on a single chip), deploy industrial-scale cooling and draw tens of megawatts of power. A single facility can house as many as 100,000 servers.
Two decades in, cloud has turned into a massive and highly concentrated industry, grossing just shy of $100bn in the last quarter alone, with AWS, Microsoft and Google taking roughly 32%, 22% and 11% of the market respectively.

What the cloud is good for and where it falls short
Pros
Elastic backends & data plumbing: The cloud is a highly flexible way to cater to demand that varies both in the type of jobs and in usage intensity. This elasticity is well captured in the National Institute of Standards & Technology's definition of cloud computing.
AI training: Training runs for foundation models can only be performed efficiently on frontier GPUs (such as H100s) at scale, and the cloud has made that possible (see also Microsoft's collaboration with OpenAI).
Global reach & latency reduction: Cloud providers operate dozens of data centers across continents, allowing companies to serve customers worldwide with low latency and without the need to build out local infrastructure (e.g. Netflix uses AWS to serve customers in 190+ countries). This doesn't just drive cloud adoption; it also drives market concentration.
Resilience against disasters: Companies can replicate data across regions at a fraction of the cost of building physical disaster-recovery centers themselves. A survey by Flexera found that 63% of enterprises cite disaster recovery and business continuity as a top driver for cloud adoption.
Cost optimization: Cloud turns CAPEX into OPEX by shifting upfront hardware spending to a pay-as-you-go model. A CloudZero study estimates enterprises can reduce infrastructure costs by up to 40% through cloud migration.
Cons
Market power & switching friction: It’s no surprise that, despite costs falling under Moore’s Law, the concentration of market power is causing prices to go up (especially egress fees, as well documented by the UK government).
Single points of failure (and increased cyber-security risk): Concentrating compute across a few hyperscalers may be efficient, but it also makes the global internet more fragile (a good example was the AWS outage earlier this year, which caused disruptions at several centralised exchanges).
Physical & power limitations: The International Energy Agency projects global data-center consumption to reach 620–1,050 TWh by 2026. With the rise of AI (and the accompanying demand for GPUs), demand may outpace supply for the first time since the emergence of the cloud. With data-center power usage rivaling that of medium-sized countries, an increased focus on energy efficiency seems likely. That means more attention to the cooling overhead of large data centers (30–40% of their power goes to cooling the GPUs), to performance-per-watt, and to fit-for-purpose matching of tasks to the most efficient GPUs.
Trust & confidentiality gaps: The cloud is built on an assumption of trust, since companies don't own the compute they run on. Users with sensitive workloads (such as financial data, proprietary fund algorithms or healthcare data) are exposed to operators, insiders and state-level subpoenas.
Geopolitical risks: Hyperscalers are subject to the jurisdiction of their home governments (e.g., the U.S. CLOUD Act), which means data stored abroad can still be accessed under domestic law. This creates sovereignty concerns for regions like the EU and Asia, where tensions or sanctions could restrict access and expose sensitive information.
Enter Acurast, the serverless cloud
Similar to how the cloud solved the underutilization problem of the 2000s, Acurast addresses the new bottlenecks of the 2020s: monopolistic pricing, power constraints and confidentiality gaps. Acurast's bet is that smartphones will power the next great unlock of compute.
Smartphones as the new frontier
While smartphones are often used for trivial tasks, they pack impressive technology. The combined R&D spend behind smartphones rivals that of data centers (Apple alone spent roughly $30bn on R&D last year, much of it flowing into the iPhone). Each device ships with frontier chips (the Apple A17 Pro packs roughly 19 billion transistors). Additionally, they are increasingly equipped with hardware acceleration for matrix multiplication, making them well suited for inference jobs.
While smartphones have become incredibly powerful, the H100s that hyperscalers deploy are arguably overpowered for many of the workloads they actually run. Most modern smartphones come with 8 cores and 8GB of RAM, which could handle a large share of the workloads that enterprise customers currently offload to the cloud. Globally, there are now over 6.9 billion smartphones in use. The average replacement cycle is around 3.6 years, leaving hundreds of millions of older but still-capable devices idle in drawers. In fact, it's estimated that 5.3 billion phones were removed from use in 2022 alone, with most not being recycled.
Each device typically has 6–12 CPU cores and 4–12 GB of RAM. While smartphone compute cannot scale vertically to petabytes of memory the way servers can, their ubiquity and abundance make them the largest untapped compute cluster in the world.
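A rough sense of scale, taking the figures above (6.9 billion devices, roughly 8 cores and 8 GB of RAM each) at face value; this is an order-of-magnitude illustration, not a claim about usable capacity:

```python
# Order-of-magnitude sketch of the global smartphone "cluster",
# using the per-device figures quoted above (assumptions, not measurements).
devices = 6_900_000_000        # smartphones in use globally
cores_per_device = 8           # typical modern handset
ram_gb_per_device = 8          # typical modern handset

total_cores = devices * cores_per_device
total_ram_pb = devices * ram_gb_per_device / 1_000_000  # GB -> PB (decimal)

print(f"{total_cores / 1e9:.0f} billion CPU cores")  # ~55 billion cores
print(f"{total_ram_pb:,.0f} PB of RAM")              # ~55,200 PB (~55 EB)
```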
Confidentiality by design
Where cloud demands trust in the provider, Acurast eliminates it through confidential compute by default. Every workload executes inside a Trusted Execution Environment (TEE) already embedded in modern mobile chipsets (e.g. Apple Secure Enclave).
Two guarantees matter here:
Attestation: Each device cryptographically proves to the network that it is genuine hardware in a verified state. This prevents spoofing or tampering.
Black-box execution: Workloads inside the TEE cannot be inspected — not by the device owner, not by the operator, not even by Acurast itself. Sensitive data like financial trading algorithms or healthcare records remain opaque.
Where hyperscalers' "confidential computing" products still leave control of the trusted stack with the provider, Acurast makes confidentiality trustless and verifiable.
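To make the attestation guarantee concrete, here is a minimal, purely illustrative sketch of how a network might admit only genuine, verified devices. This is a toy model, not Acurast's actual API or protocol; the names (AttestationQuote, verify_attestation, vendor_key) are hypothetical, and real mobile attestation schemes (e.g. Android Key Attestation) use X.509 certificate chains and asymmetric signatures rooted in the chip vendor rather than the HMAC used here for brevity.

```python
# Toy, illustrative-only model of TEE attestation; NOT Acurast's real protocol
# or any vendor's actual attestation format.
import hmac, hashlib
from dataclasses import dataclass

@dataclass
class AttestationQuote:
    device_pubkey: bytes         # key generated inside the TEE
    firmware_measurement: bytes  # hash of the code/state the TEE reports running
    signature: bytes             # here: an HMAC standing in for a vendor-rooted signature

def verify_attestation(quote: AttestationQuote,
                       vendor_key: bytes,
                       expected_measurements: set[bytes]) -> bool:
    """Accept a device only if its quote is vendor-signed (toy: HMAC) and
    reports a known-good firmware/runtime measurement."""
    payload = quote.device_pubkey + quote.firmware_measurement
    expected_sig = hmac.new(vendor_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, quote.signature):
        return False  # quote is not rooted in genuine hardware
    return quote.firmware_measurement in expected_measurements

# A scheduler would only dispatch confidential jobs to devices whose quotes
# verify, and would encrypt job payloads to quote.device_pubkey so that only
# the attested TEE can decrypt them.
```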
Modular and elastic supply
Acurast aggregates smartphones into high-performance clusters. The catalyst that enables this is the evolution of USB technology (USB4 now supports up to 80 Gbps of throughput). Tasks can be delegated to clusters of individual node operators, or to interconnected devices managed by a single manager, where multiple devices effectively function as one compute unit.
This opens up new possibilities:
Running a 16bn-parameter LLM across pooled devices with sufficient RAM (a model of that size typically requires around 12 GB of RAM, and thus a cluster of interconnected devices; see the back-of-the-envelope sketch after this list)
Spinning up ad-hoc compute clusters for time-sensitive tasks (e.g. sentiment analytics)
Matching lightweight workloads to devices that can service them more efficiently in terms of performance-per-watt (relevant for companies optimizing for environmental impact)
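To see why a 16bn-parameter model calls for a small cluster of handsets rather than a single one, here is a back-of-the-envelope memory estimate. The precision levels and the ~6 GB of usable RAM per 8 GB device are assumptions for illustration; the ~12 GB figure mentioned above sits between the int8 and int4 rows, and activations and KV cache add further overhead.

```python
import math

# Back-of-the-envelope memory footprint for a 16bn-parameter model at common
# weight precisions (weights only; activations and KV cache add overhead).
PARAMS = 16_000_000_000
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}
USABLE_GB_PER_DEVICE = 6  # assume ~6 of 8 GB free after OS and runtime

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * bytes_per_param / 1e9
    devices = math.ceil(weights_gb / USABLE_GB_PER_DEVICE)
    print(f"{precision}: ~{weights_gb:.0f} GB of weights -> ~{devices} pooled devices")
```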
The economics of decentralized compute
Cloud economics are driven by hyperscalers' capex and opex: users pay not just for compute, but for power, cooling, land and the operators' margins. Smartphones, in stark contrast, already exist, already embody sunk R&D and are already paid for by end users. Acurast abstracts away the hardware layer, letting any device owner monetize idle capacity while giving enterprises an alternative to spiraling cloud bills, especially for tasks that were never a good fit for the cloud in the first place.
This inversion mirrors the rise of cloud itself: just as AWS capitalized on underutilized corporate servers in 2006, Acurast capitalizes on underutilized mobile devices in 2025.
Implications
The implications of this model are far-reaching:
Energy efficiency: Phones typically operate at 3–6 watts of active consumption, versus 300–700 watts for GPUs in a rack (a rough comparison follows this list). Matching suitable workloads to phones could meaningfully reduce data-center load, making Acurast a markedly more sustainable alternative to the cloud and allowing for better performance when energy is constrained.
Geopolitics of compute: Control shifts from three hyperscalers to a global mesh of device owners. That reduces the choke points where states or corporations can exert pressure.
Confidentiality guarantees: Sensitive workloads no longer rely on corporate policy or state jurisdiction but on cryptographic attestation.
Market structure: Instead of billion-dollar facilities locked by switching costs, compute becomes liquid and permissionless.
The rise of a decentralized supercomputer: Tapping into idle phones globally could unlock an aggregate compute resource larger than the biggest supercomputers ever built, effectively transforming electronic waste into a global distributed cloud.
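Taking the power figures from the energy-efficiency bullet above at face value, here is a rough comparison; it looks at electrical draw only, not at useful work per device.

```python
# Order-of-magnitude power comparison using the figures quoted above.
# This compares electrical draw only, not throughput per device.
phone_watts = 5          # midpoint of the 3-6 W range for an active handset
rack_gpu_watts = 500     # midpoint of the 300-700 W range for a data-center GPU

phones_per_gpu_budget = rack_gpu_watts / phone_watts
print(f"One rack GPU's power budget covers ~{phones_per_gpu_budget:.0f} active phones")
# Whether those ~100 phones do more or less useful work than the GPU depends
# entirely on the workload; for lightweight, parallelizable jobs the
# performance-per-watt argument favors the phones.
```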
Conclusion
We've seen compute migrate before: from mainframes to desktops, from desktops to the cloud. Each shift reshaped industries, created new giants and had a profound impact on access to technology. With power costs, confidentiality and market frictions now major constraints on the hyperscale model, it seems likely that the pendulum swings back toward decentralization, at least for the subset of use-cases that are better served by a decentralized network of idle smartphones.
Acurast is betting that the smartphone, the most ubiquitous and R&D-intensive device in history, will be the catalyst for this shift. If it succeeds, compute will no longer live only in distant data centers; many applications will live everywhere, in billions of households worldwide.
Disclaimer: This article is provided for informational purposes only and does not constitute an offer, solicitation, or recommendation to buy or sell any securities or financial instruments. The views expressed are those of the author and are not necessarily reflective of the views of MN Capital or its affiliates. Nothing in this article should be interpreted as financial, legal, tax, or investment advice. Always seek the advice of a qualified financial professional before making any investment decisions. MN Capital and its affiliates disclaim any liability for actions taken based on this information. MN Capital is an investor in Peaq Network and may hold positions in affiliated projects.



