Artificial intelligence faces a trust crisis. Decentralized privacy protection technology can solve this problem

Bnews editor
28 Mar 2025 10:13:28 AM

Decentralized privacy-preserving technologies can solve the AI trust deficit problem; verifiability and stronger data protection need not hold back the development of AI.

Artificial intelligence (AI) has been a mainstream topic since 2024, but users and companies still cannot fully trust it. Whether it involves finances, personal data, or medical decisions, hesitancy about the reliability and integrity of AI remains high.

This growing AI trust deficit has become one of the biggest barriers to widespread adoption. Decentralized, privacy-preserving technologies are increasingly recognized as viable solutions that provide verifiability, transparency, and stronger data protection without hindering AI's growth.

The growing AI trust deficit problem

AI was the second most popular category in the crypto space in 2024, attracting more than 16% of investor interest. Startups and multinational corporations have invested significant resources to develop AI and extend it into people's finances, health, and daily lives.

For example, the emerging DeFi x AI (DeFAI) space saw the launch of more than 7,000 projects and reached a peak market capitalization of $7 billion in early 2025, though it has since declined amid the broader market downturn. DeFAI demonstrates AI's transformative potential: making decentralized finance (DeFi) more user-friendly, executing complex multi-step operations through natural-language commands, and conducting sophisticated market research.

However, innovation alone does not address AI's core vulnerabilities: hallucinations, manipulation, and privacy issues.

In November 2024, a user convinced an AI agent on Base to send $47,000, even though the agent was programmed never to perform such a transfer. While the scenario was part of a game, it raises a real concern: Can we trust AI agents to handle financial operations autonomously?

Audits, bug bounties, and red teams help reduce these risks, but they cannot eliminate prompt injection, logic flaws, or unauthorized data use. According to a 2023 KPMG report, 61% of people are still hesitant to trust AI, and even industry professionals share this concern. A Forrester survey cited by Harvard Business Review found that 25% of analysts consider trust the biggest obstacle facing AI.

This skepticism remains strong. A poll at the Wall Street Journal's CIO Network Summit found that 61% of top U.S. IT leaders are still only experimenting with AI agents; the rest have yet to deploy them or are avoiding them altogether, citing concerns about reliability, cybersecurity risks, and data privacy.

Healthcare is particularly exposed to these risks. Sharing electronic health records (EHRs) with large language models (LLMs) holds great promise for improving outcomes, but without sound privacy protections the legal and ethical risks are severe.

The healthcare industry, for example, has already suffered badly from data privacy breaches, and the problem is exacerbated when hospitals share EHR data to train AI algorithms without protecting patient privacy.

Decentralized, privacy-preserving infrastructure

J.M. Barrie wrote in Peter Pan, “The world is made of trust, confidence, and pixie dust.” Trust is not just an add-on to AI—it’s foundational. Without trust, the $15.7 trillion in economic benefits that AI is expected to deliver by 2030 may never be realized.

This is where decentralized cryptographic systems come in, such as zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). These technologies offer a new path: allowing users to verify the correctness of AI decisions without revealing personal data or the inner workings of the model.

By applying privacy-preserving cryptographic techniques to machine learning infrastructure, AI can become auditable, trustworthy, and privacy-respecting, especially in fields such as finance and healthcare.

ZK-SNARKs rely on advanced cryptographic proof systems that allow one party to prove to another that a statement is true without revealing anything beyond the statement's validity. For AI, this means models can be verified without disclosing training data, input values, or proprietary logic.

Imagine a decentralized AI loan agent. Instead of reviewing full financial records, it checks encrypted credit score proofs to make autonomous loan decisions without accessing sensitive data. This protects user privacy and reduces institutional risk.
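The flow above can be sketched in code. The sketch below is not a real ZK-SNARK (those require a proving system such as Groth16 or PLONK and a dedicated library); it only illustrates the verify-without-revealing interface using a signed attestation from a hypothetical credit bureau. The names `CreditBureau` and `LoanAgent`, the threshold of 700, and the HMAC-based attestation are all illustrative assumptions: the loan agent approves or rejects based solely on a proof object and never sees the raw score.

```python
import hmac
import hashlib
import json

class CreditBureau:
    """Stands in for a ZK prover: attests that a predicate about the
    score holds, without disclosing the score itself."""
    def __init__(self, key: bytes):
        self._key = key
        self._scores = {}  # user -> raw score; never leaves the bureau

    def register(self, user: str, score: int):
        self._scores[user] = score

    def attest(self, user: str, threshold: int):
        """Return a signed claim 'score >= threshold', or None."""
        if self._scores.get(user, 0) < threshold:
            return None
        claim = json.dumps({"user": user, "predicate": f"score>={threshold}"})
        tag = hmac.new(self._key, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "tag": tag}

class LoanAgent:
    """Stands in for the verifier: decides from the proof alone."""
    def __init__(self, key: bytes):
        self._key = key

    def approve(self, proof) -> bool:
        if proof is None:
            return False
        expected = hmac.new(self._key, proof["claim"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, proof["tag"])

key = b"shared-verification-key"  # in a real SNARK: a public verifying key
bureau = CreditBureau(key)
bureau.register("alice", 742)
agent = LoanAgent(key)
print(agent.approve(bureau.attest("alice", 700)))  # True: valid proof
print(agent.approve(bureau.attest("alice", 800)))  # False: no proof exists
```

Note the design point: the agent's `approve` method takes only the proof, so sensitive data cannot leak through it. A real ZK-SNARK strengthens this further by removing the need to trust the attesting party at all.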

ZK technology also solves the black box problem of large language models. By using dynamic proofs, the accuracy of AI outputs can be verified while protecting data integrity and model architecture. This is a win-win for both users and companies — users no longer have to worry about data misuse, while companies can protect their intellectual property.

Decentralized AI

We are entering a new phase of AI where better models alone are no longer enough. Users demand transparency; businesses need resilience; and regulators expect accountability.

Decentralized, verifiable cryptography provides all three.

Technologies like ZK-SNARKs, threshold multi-party computation, and BLS-based verification systems are more than just “crypto tools” — they are becoming the foundation for trusted AI. Combined with the transparency of blockchain, they create a powerful new technology stack for privacy-preserving, auditable, and reliable AI systems.
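To make the threshold idea concrete, here is a minimal Shamir secret-sharing sketch: a secret (for example, a signing key) is split among n parties so that any k of them can reconstruct it, while fewer than k learn nothing useful. Real threshold multi-party computation and BLS-based systems build on much richer cryptography; this sketch, written under the assumption of a simple prime field and honest reconstruction, only illustrates the k-of-n principle.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = split(secret, n=5, k=3)
print(reconstruct(shares[:3]) == secret)  # True: any 3 shares suffice
print(reconstruct(shares[:2]))            # 2 shares yield garbage, not the secret
```

The same k-of-n structure underlies threshold signing: no single node holds the whole key, so no single compromised node can sign on behalf of the system.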

Gartner predicts that 80% of companies will use AI by 2026. But mass adoption will not be driven by hype or resources alone; it will depend on building AI systems that people and companies can truly trust.

And that starts with decentralization.

Opinion: Felix Xu, Co-founder of ARPA Network and Bella Protocol

This article is for general information only and should not be considered legal or investment advice. The opinions expressed in this article are solely those of the author and do not necessarily reflect or represent the views of Cointelegraph.