Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
In the famous opening scene of Blade Runner, a character named Holden administers a fictional interpretation of the Turing test to gauge if Leon is a replicant (a humanoid robot). For the test, Holden tells Leon a story to elicit an emotional reaction. “You’re in a desert, walking along in the sand, when all of a sudden you look down…you look down and see a tortoise, Leon. It’s crawling toward you…” As Holden keeps telling this hypothetical story, Leon gets more and more agitated until it’s obvious he is not human.
We’re not in Blade Runner territory yet in the real world, but as AI and machine learning get more integrated into our lives, we need assurances that the AI models we’re using are what they say they are.
This is where zero-knowledge proofs come in. At their core, ZK proofs enable one party to prove to another that a specific computation was executed correctly without exposing the underlying data, and they let the verifier check the result far more cheaply than redoing the computation (the succinctness property). Think of it like a Sudoku puzzle: solving it might be tough, but verifying a completed solution is much easier.
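The Sudoku analogy can be made concrete. Checking a claimed solution takes a handful of set comparisons, no matter how long the solver struggled to find it. A minimal sketch in plain Python (this illustrates the verify-is-cheap asymmetry only; it is not itself a ZK proof):

```python
def is_valid_sudoku(grid):
    """Return True if a completed 9x9 grid satisfies all Sudoku rules.

    Verification is fast and mechanical -- each row, column, and 3x3 box
    must contain exactly the digits 1 through 9 -- even though *solving*
    the puzzle from scratch may be hard.
    """
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in (0, 3, 6)
        for bc in (0, 3, 6)
    ]
    return all(group == digits for group in rows + cols + boxes)
```

A real ZK proof goes further: the verifier learns that a valid solution exists without ever seeing the grid.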
This property is especially valuable when computational tasks take place off-chain to avoid overwhelming a network and incurring high fees. With ZK proofs, these off-chain tasks can still be verified without burdening blockchains, which have strict computational limits since all nodes need to verify each block. In short, we need ZK cryptography to scale AI machine learning securely and efficiently.
ZK verifies ML models so we can scale AI safely
Machine learning, a subset of AI, is known for its heavy computational demands, requiring vast amounts of data processing to simulate human adaptation and decision-making. From image recognition to predictive analytics, ML models are gearing up to transform almost every industry—if they haven’t already—but they are also pushing the limits of computation. So how do we verify and attest that ML models are authentic using blockchains, where on-chain operations can be prohibitively expensive?
We need a provable way to trust AI models so that we know that the model we’re using hasn’t been tampered with or falsely advertised. When you make ChatGPT queries about your favorite sci-fi films, you probably trust the model being used, and it’s not the end of the world if the quality of responses goes down here and there. However, in industries like finance and healthcare, accuracy and reliability are critical. One mistake could cascade into negative economic effects around the world.
This is where ZK plays a pivotal role. By leveraging ZK proofs, ML computations can be executed off-chain while still being verified on-chain. This opens up new avenues for deploying AI models in blockchain applications. Zero-knowledge machine learning, or ZKML, allows for cryptographic verification of ML algorithms and their outputs while keeping the actual algorithms private, bridging the gap between AI’s computational demands and blockchain’s security guarantees.
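To make “verifying the model is what it says it is” concrete, here is a minimal sketch of the commitment half of that pattern, using a plain SHA-256 hash as a stand-in. The function names are illustrative, not from any particular framework; in a real ZKML system, a ZK proof would additionally attest that a given output was computed with the committed weights, without the verifier rerunning the model.

```python
import hashlib
import json

def commit_to_model(weights):
    """Hash-commit to model weights; the digest can be published on-chain.

    Canonical JSON serialization (sorted keys) ensures the same weights
    always produce the same digest.
    """
    payload = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_commitment(weights, expected_digest):
    """Check that the weights being served match the published commitment.

    This catches tampering with the weights themselves. A ZKML proof
    system builds on this: it proves an inference used the committed
    weights without revealing them or redoing the computation.
    """
    return commit_to_model(weights) == expected_digest
```

The commitment alone is not zero-knowledge; it is the anchor that a ZK proof refers back to.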
One of the most exciting ZKML applications is DeFi. Imagine a liquidity pool where an AI algorithm manages the rebalancing of assets to maximize yield while refining its trading strategies along the way. ZKML can execute these calculations off-chain and then use ZK proofs to prove that the rebalancing came from the stated ML model rather than from some other algorithm or another trader. At the same time, ZK can protect users’ trading data so that they retain financial confidentiality, even if the ML models they’re using to make trades are public. The result? Secure AI-driven DeFi protocols with ZK verifiability.
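A hypothetical verifier-side flow for such a pool might look like the sketch below. Here `verify_proof` is a placeholder for a real ZK verifier (for instance, one generated by a ZKML toolchain); every name and signature is illustrative, not a production API.

```python
def apply_rebalance(pool, rebalance, proof, model_commitment, verify_proof):
    """Apply an off-chain-computed rebalance only if the attached ZK proof
    shows it was produced by the committed ML model on the pool's current
    state. `verify_proof` is a hypothetical stand-in for a real verifier.
    """
    public_inputs = (model_commitment, pool["state"], rebalance)
    if not verify_proof(proof, public_inputs=public_inputs):
        raise ValueError("rejected: proof does not attest to the committed model")
    # Proof checks out: the new allocation came from the expected model.
    pool["state"] = rebalance
    return pool
```

The key design point is that the chain never reruns the model; it only checks a succinct proof against the public commitment.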
We need to know our machines better
As AI becomes more central to human activity, concerns about tampering, manipulation, and adversarial attacks only continue to grow. AI models, especially those handling critical decisions, must be resistant to attacks that would corrupt their outputs. Of course, we want AI applications to be safe, but it’s not just about AI safety in the traditional sense (i.e., ensuring models don’t behave unpredictably); it’s also about creating a trustless environment where the model itself is easily verifiable.
In a world where models proliferate, we’re essentially living our lives guided by AI. As the number of models grows, so too does the potential for attacks where the integrity of the model is undermined. This is particularly worrisome in scenarios where the output of an AI model might not be what it seems.
By integrating ZK cryptography into AI, we can start building trust and accountability in these models now. Like an SSL certificate or security badge in your web browser, there will likely be a symbol for AI verifiability—one that guarantees the model you’re interacting with is the one you expect.
In Blade Runner, the Voight-Kampff test aimed to distinguish replicants from humans. Today, as we navigate an increasingly AI-driven world, we face a similar challenge: distinguishing authentic AI models from potentially compromised ones. In crypto, ZK cryptography could stand as our Voight-Kampff test—a robust, scalable method to verify the integrity of AI models without compromising their inner workings. That way, we’re not just asking if androids dream of electric sheep but also ensuring that the AI shepherding our digital lives is exactly what it claims to be.
Rob Viglione is the co-founder and CEO of Horizen Labs, the development studio behind several leading web3 projects, including zkVerify, Horizen, and ApeChain. Rob is deeply interested in web3 scalability, blockchain efficiency, and zero-knowledge proofs. His work focuses on developing innovative solutions for zk-rollups to enhance scalability, create cost savings, and drive efficiency. He holds a Ph.D. in Finance, an MBA in Finance and Marketing, and a Bachelor’s degree in Physics and Applied Mathematics. Rob currently serves on the Board of Directors for the Puerto Rico Blockchain Trade Association.