During the 2024 U.S. election campaign, a deepfake video spread across social media, falsely alleging voter fraud. Elsewhere, biased data in healthcare has warped AI outcomes, jeopardizing patient care. Opaque algorithms undermine decisions, destabilize markets, and erode trust in financial systems. AI’s risks are escalating, and its flaws are chipping away at public confidence.
The following is a guest post written by Charles Adkins, CEO, HBAR Foundation. He previously served as President of Hedera Hashgraph, LLC. Charles is a seasoned leader with years of experience in the blockchain and crypto space, having previously worked at Polygon Labs and Aptos.
We need governance that ensures AI serves humanity rather than harming it. But the scale and complexity of AI development are beyond human capacity alone. Enter Distributed Ledger Technology (DLT)—a decentralized system that records and verifies data across multiple nodes. DLT brings transparency, accountability, and integrity to AI, fostering trust, preventing monopolistic control, and encouraging ethical innovation.
Breaking Open the AI ‘Black Box’
AI often operates like a black box, relying on undisclosed data that hides how decisions are made. This opacity undermines trust, especially in industries like healthcare and finance, where transparency is non-negotiable. DLT changes the game by recording all data and updates on an immutable ledger—a permanent, tamper-evident record that makes every change traceable.
Take ProveAI, for instance. It uses DLT to secure and track AI training data and updates, ensuring compliance with ethical standards and regulations like the EU AI Act. This approach holds AI models accountable, creating a foundation for trust and fairness in their outcomes.
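To make the idea concrete, here is a minimal sketch of the kind of tamper-evident audit trail described above. This is not ProveAI's implementation or any particular ledger's API; it is an illustrative, simplified model in Python (the `AuditLedger` class and its entry format are invented for this example) showing how hash-chaining makes every recorded change to training data or models verifiable after the fact.

```python
import hashlib
import json
import time


def _hash(entry: dict) -> str:
    """Deterministically hash a ledger entry (sorted keys for stable JSON)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AuditLedger:
    """Toy append-only, hash-chained log of AI data/model updates.

    Each entry embeds the previous entry's hash, so altering any past
    record invalidates every hash after it -- the core property that
    makes an immutable ledger auditable.
    """

    def __init__(self):
        self.entries = []

    def record(self, event: str, payload: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "payload": payload, "prev": prev, "ts": time.time()}
        entry = {**body, "hash": _hash(body)}
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("event", "payload", "prev", "ts")}
            if e["prev"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True
```

In a real DLT deployment the chain is replicated across independent nodes rather than held in one process, which is what prevents a single party from quietly rewriting history.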
Improving Data Quality with DLT
Unfortunately, poor data quality remains a persistent challenge in AI development. A 2024 Precisely survey revealed that 64% of businesses find AI unreliable due to unverified or biased data. DLT addresses this by anchoring real-time data to decentralized networks, ensuring it is accurate, transparent, and immutable.
For AI models like those leveraging Retrieval Augmented Generation (RAG) to enhance responses with external data, DLT ensures only verified, tamper-proof information is used. This minimizes risks of misinformation or bias infiltrating outputs, advancing ethical AI governance.
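A rough sketch of how a RAG pipeline might gate its inputs on ledger-anchored fingerprints follows. The `register` and `retrieve_verified` functions and the in-memory `LEDGER_HASHES` set are hypothetical stand-ins for a real decentralized registry; the point is only the filtering step: a document is used for retrieval only if its hash matches one previously anchored on the ledger.

```python
import hashlib

# Hypothetical stand-in for an on-ledger registry of approved document hashes.
LEDGER_HASHES: set[str] = set()


def register(doc: str) -> None:
    """Anchor a document's fingerprint on the (simulated) ledger."""
    LEDGER_HASHES.add(hashlib.sha256(doc.encode()).hexdigest())


def retrieve_verified(docs: list[str]) -> list[str]:
    """Keep only documents whose hash matches a ledger-anchored fingerprint.

    Anything altered or never registered is silently excluded, so the
    generation step only ever sees verified, tamper-proof context.
    """
    return [d for d in docs if hashlib.sha256(d.encode()).hexdigest() in LEDGER_HASHES]
```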
Fetch.ai and Ocean Protocol are already showcasing the potential of this innovation. Fetch.ai uses oracles to access real-time external data, optimizing logistics and energy efficiency across the Web3 ecosystem. Similarly, Ocean Protocol secures tokenized data sharing, enabling AI systems to access high-quality datasets while protecting user privacy.
Tackling Misinformation With DLT
These capabilities are essential for addressing escalating challenges like misinformation, particularly amid the rise of deepfakes. Ofcom recently revealed that 43% of people aged 16+ encountered at least one deepfake online in the first half of 2024. Platforms like Truepic are already tackling this problem, combining blockchain with image authentication to timestamp and verify media at the moment of creation. By integrating verified data and media into RAG workflows, AI systems can more effectively fact-check their outputs, enhancing trust in the information they generate.
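The capture-time verification pattern can be sketched in a few lines. This is not Truepic's system; the `capture` and `authenticate` functions and the `provenance` dictionary are invented for illustration, simulating an on-chain record that pairs a media file's hash with a creation timestamp so any later edit is detectable.

```python
import hashlib
import time

# Simulated on-chain provenance record: media hash -> creation timestamp.
provenance: dict[str, float] = {}


def capture(media: bytes) -> str:
    """At the moment of creation, anchor the media's hash with a timestamp."""
    h = hashlib.sha256(media).hexdigest()
    provenance[h] = time.time()
    return h


def authenticate(media: bytes) -> bool:
    """A byte-identical file matches its anchored record; any edit fails."""
    return hashlib.sha256(media).hexdigest() in provenance
```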
Decentralized Governance for Ethical AI
Centralized governance models often struggle to manage the speed, complexity, and ethical challenges of AI development, hindering responsible innovation. Precisely’s global survey revealed that 62% of organizations see inadequate governance as a major obstacle to AI adoption.
Decentralized Autonomous Organizations (DAOs), powered by DLT, may provide a solution. DAOs automate governance and decision-making through smart contracts, enabling stakeholders—developers, users, and regulators—to vote transparently on proposals. Every decision is recorded on the blockchain, preventing unilateral control, aligning decisions with collective interests, and ensuring accountability and inclusivity.
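The voting mechanics described above can be illustrated with a toy model. The `MiniDAO` class below is a deliberate simplification (real DAOs run as smart contracts on-chain, not as Python objects), but it captures the essentials: only registered stakeholders may vote, every vote is appended to a permanent log, and outcomes follow a transparent majority rule rather than any single party's decision.

```python
from collections import Counter


class MiniDAO:
    """Toy DAO: stakeholders vote on proposals; every vote is logged.

    Illustrative only -- a real DAO encodes these rules in smart
    contracts and records the log on a blockchain.
    """

    def __init__(self, members: set[str]):
        self.members = members
        self.log = []    # append-only vote record, akin to an on-chain log
        self.votes = {}  # proposal -> {member: choice}

    def vote(self, member: str, proposal: str, choice: str) -> None:
        if member not in self.members:
            raise ValueError("only registered stakeholders may vote")
        self.votes.setdefault(proposal, {})[member] = choice
        self.log.append((member, proposal, choice))

    def result(self, proposal: str) -> str:
        """A proposal passes on a simple majority of 'yes' votes."""
        tally = Counter(self.votes.get(proposal, {}).values())
        return "passed" if tally["yes"] > len(self.members) / 2 else "rejected"
```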
SingularityNET showcases this potential, using a DAO framework to align AI projects with ethical principles. This decentralized approach not only fosters inclusivity but ensures governance reflects the public interest, laying the groundwork for scalable, ethical AI development.
Global Standards and the Path Forward
As AI increasingly depends on cross-border data, secure and transparent systems like DLT will be essential to build trust at scale. Many organizations are already exploring its potential. For instance, the MediLedger Network uses DLT to prevent data tampering in pharmaceutical supply chains, while the European Blockchain Services Infrastructure (EBSI) leverages DLT for secure information distribution, potentially providing a framework to help EU organizations comply with the recent EU AI Act.
But we need to go further.
Global regulatory alignment is crucial to prevent fragmentation and establish universal standards. Governments, enterprises, and civil society must collaborate to develop governance frameworks that prioritize public interest. DAOs, too, must evolve to provide flexible, collective oversight as AI technology advances.
This is not the time for complacency. If action isn’t taken now, AI’s risks will grow unchecked, leaving us powerless to address them. The future of ethical AI depends on bold decisions today. DLT can be the foundation for this future—transparent, accountable, and aligned with humanity’s best interests.