As artificial intelligence (AI) grows more powerful, the infrastructure required to run it will reach its limits, and those limits could open the door for decentralized physical infrastructure networks (DePINs), said Trevor Harries-Jones, director at the Render Network Foundation.
Speaking with TheStreet Roundtable host Jackson Hinkle, Harries-Jones said decentralized GPU networks are not aiming to replace traditional data centers, but rather to complement them by solving some of AI’s most pressing scaling challenges.
Harries-Jones says DePIN isn't about replacing centralized infrastructure
In simple terms, DePIN lets people around the world share real-world network infrastructure in exchange for rewards, without dependence on or control by a centralized company.
One such project is the Render Network, a decentralized GPU rendering platform designed to democratize digital creation and free creators from reliance on centralized providers.
Hinkle pointed to recent examples from the centralized AI world, including OpenAI's release of the video-generation app Sora, whose usage had to be capped due to GPU constraints.
He asked whether decentralized models could eventually overtake centralized data centers.
Harries-Jones pushed back on the idea of an outright replacement.
“I don’t think it’s a question of replacing,” he said. “I actually think it’s a question of utilization of both.”
Centralized GPU clusters remain critical for training large AI models, which benefit from massive memory pools and tightly integrated hardware. But training, he noted, is only a fraction of the total computational workload in AI.
Harries-Jones explained that inference, the running of trained AI models, accounts for almost 80% of GPU work.
That distinction is where decentralized networks like Render come into play. While early versions of AI models are resource-heavy, Harries-Jones said they quickly become more efficient as engineers optimize and compress them.
Over time, models that once required massive infrastructure can run on far simpler devices like smartphones, he added.
"So we tend to see this on all models that come out," he said. "They start being really heavy and unrefined, and over a very short period, they get refined so that they can run on decentralized, simple devices."
From a cost perspective, that shift makes decentralized GPU networks increasingly attractive, Harries-Jones argued.
Instead of relying solely on expensive, high-end data centers, inference workloads can be distributed across idle GPUs around the world, he suggested.
"It's going to be cheaper to run them on decentralized idle consumer nodes than on centralized nodes."
Harries-Jones is bullish on the DePIN sector
Harries-Jones framed DePINs as a way to relieve growing AI bottlenecks across both compute and energy infrastructure.
When centralized power systems face strain, decentralized compute offers a parallel solution by tapping underutilized resources globally, he explained.
“So I'm very bullish on the sector as a whole.”
Harries-Jones underlined that global GPU demand far outstrips supply. "There aren't enough GPUs in the world today," he said.
The key, he proposed, is to put idle GPUs to work rather than fight over undersupplied high-end hardware.
According to Harries-Jones, the future of AI infrastructure isn't a choice between centralized networks and DePIN. Instead, it's a flexible use of both to meet explosive AI demand.