Subdivisions in Crypto×AI Worth Paying Attention to

Beginner · Mar 26, 2024
Vitalik has published "The promise and challenges of crypto + AI applications," discussing the ways blockchain and artificial intelligence can be combined and the potential challenges. The article presents four integration methods and introduces representative projects for each direction. There are differences in the core characteristics of AI and blockchain, so it's necessary to balance aspects such as data ownership, transparency, monetization capabilities, and energy costs when combining them. Currently, many AI applications are gaming-related, involving interaction with AI and training characters to better fit individual needs. At the same time, there are projects exploring the use of blockchain features to create better artificial intelligence. Decentralized computing power is also a popular direction but still faces challenges. Overall, the AI track needs to find projects with competitiveness and long-term value.

Forwarded from the original title: 'Metrics Ventures Research Report | Starting from Vitalik's Article: Which Crypto×AI Sub-tracks Are Worth Watching?'

1 Introduction: Four ways to combine Crypto with AI

Decentralization is the consensus that blockchain maintains, security is its core principle, and openness is the cryptographic foundation that lets on-chain behavior exhibit these characteristics. This framing has held across several waves of blockchain innovation over the past few years. However, when artificial intelligence gets involved, the situation changes.

Imagine designing the architecture of a blockchain or an application with artificial intelligence. In that case, the model needs to be open source, but open-sourcing it exposes it to adversarial machine learning; keeping it closed source sacrifices decentralization. Therefore, when introducing artificial intelligence into today's blockchains and applications, it is necessary to consider how, and to what extent, the integration should be carried out.

Source: DE UNIVERSITY OF ETHEREUM

In the article 'When Giants Collide: Exploring the Convergence of Crypto x AI' from DE UNIVERSITY OF ETHEREUM, the differences in core characteristics between artificial intelligence and blockchain are outlined. As shown in the figure above, the characteristics of artificial intelligence are:

  • Centralization
  • Low Transparency
  • Energy Consuming
  • Monopoly
  • Weak Monetization Attributes

Blockchain sits at the opposite end of each of these characteristics, and this contrast is the core premise of Vitalik's article. If artificial intelligence and blockchain are combined, the resulting applications must make trade-offs in data ownership, transparency, monetization capability, energy cost, and so on. Beyond that, the infrastructure needed to integrate the two effectively must also be considered.

Following the above criteria and his own thoughts, Vitalik categorizes applications formed by the combination of artificial intelligence and blockchain into four main types:

  • AI as a player in a game
  • AI as an interface to the game
  • AI as the rules of the game
  • AI as the objective of the game

Among them, the first three describe ways in which AI is introduced into the Crypto world, at three levels of increasing depth. In the author's understanding, this classification reflects how strongly AI influences human decision-making, and therefore how much systemic risk it introduces into the Crypto world:

  • Artificial intelligence as a participant in applications: Artificial intelligence itself does not influence human decisions and behavior, so it does not pose risks to the real human world. Therefore, it currently has the highest degree of practicality.
  • Artificial intelligence as an interface for applications: Artificial intelligence provides auxiliary information or tools for human decision-making and behavior, which improves user and developer experiences and lowers barriers. However, incorrect information or operations may introduce some risks to the real world.
  • Artificial intelligence as the rules of applications: Artificial intelligence fully replaces humans in making decisions and operations. Therefore, malicious behavior or failures of artificial intelligence will directly lead to chaos in the real world. Currently, in both Web2 and Web3, it is not possible to trust artificial intelligence to replace humans in decision-making.

Finally, the fourth category of projects aims to leverage the characteristics of Crypto to create better artificial intelligence. As mentioned earlier, centralization, low transparency, energy consumption, monopolistic tendencies, and weak monetization attributes can all, in principle, be mitigated by the properties of Crypto. Although many people are skeptical about whether Crypto can really shape the development of artificial intelligence, Crypto's most fascinating narrative has always been its ability to influence the real world through decentralization, and this direction has become the most heavily speculated part of the AI track thanks to its grand vision.

2 AI As A Participant

In mechanisms where AI participates, the ultimate source of incentives usually comes from a protocol with human inputs. Before AI can become an interface, or even the rules, we typically need to evaluate different AIs by letting them participate in a mechanism and receive rewards or penalties through an on-chain mechanism.

When AI acts as a participant, compared to being an interface or rule, the risks to users and the entire system are generally negligible. It can be considered as a necessary stage before AI deeply influences user decisions and behavior. Therefore, the cost and trade-offs required for the fusion of artificial intelligence and blockchain at this level are relatively small. This is also a category of products that Vitalik believes currently have a high degree of practicality.

In terms of breadth and implementation, many current AI applications fall into this category, such as AI-empowered trading bots and chatbots. The current level of implementation still makes it difficult for AI to serve as an interface or even a rule. Users are comparing and gradually optimizing among different bots, and crypto users have not yet developed habits of using AI applications. In Vitalik’s article, Autonomous Agents are also classified into this category.

However, in a narrower sense and from a long-term vision perspective, we tend to make more detailed distinctions for AI applications or AI agents. Therefore, under this category, representative subcategories include:

2.1 AI Games

To some extent, AI games can indeed be classified into this category. Players interact with AI and train their AI characters to better fit their personal preferences, such as aligning more closely with individual tastes or becoming more competitive within the game mechanics. Games serve as a transitional stage for AI before it enters the real world. They also represent a track with relatively low implementation risks and are the easiest for ordinary users to understand. Iconic projects in this category include AI Arena, Echelon Prime, and Altered State Machine.

  • AI Arena: A player-versus-player (PVP) fighting game where players can train and evolve their in-game characters using AI. The game aims to let more ordinary users interact with, understand, and experience AI through gaming, while also providing AI engineers with various AI algorithms to increase their income. Each in-game character is powered by an AI-enabled NFT, whose Core contains the AI model's architecture and parameters, stored on IPFS. The parameters in a new NFT are randomly generated, meaning the character initially performs random actions. Users improve their character's strategic abilities through imitation learning (IL); each time a user trains a character and saves progress, the updated parameters are written to IPFS (a minimal sketch of this loop appears after this list).

  • Altered State Machine: ASM is not an AI game but a protocol for rights verification and trading of AI agents. Positioned as a metaverse AI protocol, it is currently integrating with multiple games, including FIFA, introducing AI agents into games and the metaverse. ASM uses NFTs to verify and trade AI agents, with each agent consisting of three parts: Brain (the agent's intrinsic characteristics), Memories (storing the agent's learned behavior strategies and model training, linked to the Brain), and Form (character appearance, etc.). ASM has a Gym module, including a decentralized GPU cloud provider, to provide computational support for agents. Projects currently built on ASM include AIFA (AI soccer game), Muhammed Ali (AI boxing game), AI League (street soccer game in partnership with FIFA), Raicers (AI-driven racing game), and FLUF World's Thingies (generative NFTs).

  • Parallel Colony (PRIME): Echelon Prime is developing Parallel Colony, a game built on large language models (LLMs). Players can interact with and influence their AI avatars, which act autonomously based on their memories and life trajectories. Colony is currently one of the most anticipated AI games; the official whitepaper was recently released, and the announced migration to Solana has sparked another wave of excitement and price appreciation for PRIME.
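The imitation-learning loop that AI Arena describes can be illustrated with a minimal behavioral-cloning sketch. Everything here is an assumption made for illustration: the two-feature game state, the decision-tree policy, and the local JSON file standing in for the parameters that AI Arena pins to IPFS.

```python
# Minimal behavioral-cloning sketch of an imitation-learning (IL) loop:
# record the player's demonstrated actions, fit a policy to mimic them,
# then persist the updated parameters (AI Arena stores these on IPFS;
# a local file stands in here). All names and features are illustrative.
import json
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Demonstrations: game-state features (e.g., opponent distance, own health)
# paired with the action the player actually took.
states = np.array([[0.1, 0.9], [0.8, 0.4], [0.2, 0.7], [0.9, 0.3]])
actions = np.array(["block", "attack", "block", "attack"])

# "Training" the character = fitting the policy to the demonstrations.
policy = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# In-game, the character now acts by querying the cloned policy.
print(policy.predict(np.array([[0.15, 0.8]])))  # mimics the player: "block"

# "Saving progress" = serializing the updated parameters; AI Arena would
# pin them to IPFS and reference the content hash from the NFT's Core.
with open("character_params.json", "w") as f:
    json.dump({"max_depth": 3, "n_leaves": int(policy.get_n_leaves())}, f)
```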

2.2 Market/Contest Prediction

Predictive capability is the foundation of AI's future decisions and behavior. Before AI models are used for real-world predictions, prediction contests compare the performance of competing models at a meta level. By providing token incentives to data scientists and their models, this approach benefits the whole Crypto×AI field: it continuously fosters more efficient and higher-performing models and applications suited to the crypto world, creating higher-quality and safer products before AI deeply influences decision-making and behavior. As Vitalik stated, prediction markets are a powerful primitive that can be extended to many other kinds of problems. Iconic projects in this track include Numerai and Ocean Protocol.

  • Numerai: Numerai is a long-running data science competition in which data scientists train machine learning models to predict stock markets based on historical market data provided by Numerai, then stake their models and NMR tokens in tournaments. Well-performing models receive NMR rewards, while tokens staked on poorly performing models are burned. As of March 7, 2024, 6,433 models had been staked, and the protocol had paid out a total of $75,760,979 in rewards to data scientists. Numerai incentivizes global collaboration among data scientists to build a new type of hedge fund; the funds launched so far include Numerai One and Numerai Supreme. Numerai's path runs: market prediction competitions → crowdsourced prediction models → new hedge funds built on the crowdsourced models.
  • Ocean Protocol: Ocean Predictoor focuses on prediction, starting with crowdsourced predictions of cryptocurrency trends. Players can choose to run a Predictoor bot or a Trader bot. A Predictoor bot uses AI models to predict the price of a cryptocurrency (e.g., BTC/USDT) at the next time point (e.g., five minutes ahead) and stakes a certain amount of $OCEAN on the call; the protocol computes a stake-weighted global prediction. Traders buy the prediction feed and can trade on it, profiting when accuracy is high. Predictoors who predict incorrectly are penalized, while those who predict correctly receive a share of the slashed stake plus the purchase fees paid by Traders (a toy model of this payout logic is sketched below). On March 2nd, Ocean Predictoor announced its latest direction, the World-World Model (WWM), which begins exploring predictions for real-world scenarios such as weather and energy.
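To make the Predictoor mechanics concrete, here is a toy model of the stake-weighted aggregation and settlement described above. The numbers, the pro-rata split, and the fee amount are illustrative assumptions, not Ocean's actual parameters.

```python
# Toy model of Predictoor-style settlement: stake-weighted aggregation,
# slashing of wrong predictions, pro-rata rewards for correct ones.
predictions = [  # each predictoor stakes $OCEAN on a directional call
    {"id": "p1", "up": True,  "stake": 100.0},
    {"id": "p2", "up": False, "stake": 40.0},
    {"id": "p3", "up": True,  "stake": 60.0},
]
trader_fees = 10.0  # fees paid by traders who bought this prediction feed

# Global prediction = stake-weighted vote.
up_stake = sum(p["stake"] for p in predictions if p["up"])
down_stake = sum(p["stake"] for p in predictions if not p["up"])
print("aggregate call:", "UP" if up_stake > down_stake else "DOWN")

# Settlement: suppose the price actually went up.
outcome_up = True
losing_pool = down_stake if outcome_up else up_stake
winning_stake = up_stake if outcome_up else down_stake

for p in predictions:
    if p["up"] == outcome_up:  # correct: stake back + pro-rata share
        share = p["stake"] / winning_stake
        payout = p["stake"] + share * (losing_pool + trader_fees)
    else:                      # incorrect: stake is forfeited
        payout = 0.0
    print(p["id"], round(payout, 2))
```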

3 AI As An Interface

AI can assist users in understanding what is happening in the crypto world using simple and easy-to-understand language, acting as a mentor for users and providing alerts for potential risks to reduce the entry barriers and user risks in Crypto, thus improving user experience. The functionalities of products that can be realized are diverse, such as risk alerts during wallet interactions, AI-driven intent trading, AI chatbots capable of answering common user questions about crypto, and more. The audience for these services is expanding, including not only ordinary users but also developers, analysts, and almost all other groups, making them potential recipients of AI services.

Let's reiterate what these projects have in common: they do not yet replace humans in executing decisions and actions, but use AI models to provide information and tools that assist human decision-making and behavior. At this level, the risk of AI misbehavior starts to enter the system: incorrect information can distort human judgment. This aspect is analyzed thoroughly in Vitalik's article.

There are many and varied projects that can be classified under this category, including AI chatbots, AI smart contract audits, AI code generation, AI trading bots, and more. It can be said that the vast majority of AI applications are currently at this basic level. Representative projects include:

  • PaaL: PaaL is currently the leading AI chatbot project and can be seen as a ChatGPT trained on crypto-specific knowledge. Integrated with platforms like Telegram (TG) and Discord, PaaL offers token data analysis, token fundamentals and tokenomics analysis, text-to-image generation, and other features. The PaaL Bot can be added to group chats to respond automatically to certain messages. PaaL also supports customized personal bots, allowing users to build their own AI knowledge base and custom bots by feeding in datasets. PaaL is moving toward AI trading bots: on February 29th it announced PaalX, an AI-supported crypto research and trading terminal. According to the announcement, it can perform AI smart contract audits, aggregate news from Twitter and trade on it, and support crypto research and trading, with the AI assistant lowering the barrier to entry for users.

  • ChainGPT: ChainGPT relies on artificial intelligence to build a series of crypto tools, such as a chatbot, an NFT generator, news aggregation, smart contract generation and auditing, a trading assistant, a prompt marketplace, and AI cross-chain swaps. However, ChainGPT's current focus is on project incubation and its Launchpad; it has completed IDOs for 24 projects and 4 Free Giveaways.

  • Arkham: Ultra is Arkham's dedicated AI engine, designed to algorithmically match addresses with real-world entities and thereby increase transparency in the crypto industry. Ultra merges on-chain and off-chain data, both user-submitted and self-collected, into an expandable database whose output is ultimately presented in chart form. The Arkham documentation, however, does not discuss the Ultra system in detail. Arkham has recently attracted attention due to a personal investment from OpenAI CEO Sam Altman, and its price has risen roughly five-fold over the past 30 days.
  • GraphLinq: GraphLinq is an automated workflow management solution designed to let users deploy and manage all kinds of automation without programming. For example, a user can push the price of Bitcoin from CoinGecko to a TG bot every 5 minutes (a plain-Python version of exactly this flow is sketched after this list). GraphLinq visualizes automation as graphs, letting users create tasks by dragging nodes and executing them with the GraphLinq Engine. Although no coding is required, building a graph still has a learning curve for ordinary users, who must pick the right template and select and connect suitable logic blocks from hundreds of options. To address this, GraphLinq is introducing AI so that users can build and manage automation tasks through conversational AI and natural language, simplifying the process for less technical users.
  • 0x0.ai: 0x0's AI-related business spans three areas: AI smart contract auditing, AI anti-rug detection, and an AI developer center. Anti-rug detection aims to spot suspicious behaviors such as excessive taxes or liquidity drains to keep users from being scammed, while the developer center uses machine learning to generate smart contracts for no-code deployment. However, only the AI smart contract auditing has had a preliminary launch; the other two features are not yet fully developed.
  • Zignaly: Zignaly was founded in 2018 with the aim of enabling individual investors to choose fund managers to manage their cryptocurrency assets, similar to the logic of copy-trading. Zignaly is using machine learning and artificial intelligence technologies to establish a system for evaluating fund managers. The first product launched is called Z-Score. However, as an artificial intelligence product, it is still relatively basic in its current form.
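The GraphLinq example above (CoinGecko price pushed to Telegram every 5 minutes) can be written directly in a few lines of Python, which is essentially what the visual graph automates. The CoinGecko and Telegram endpoints are real public APIs; BOT_TOKEN and CHAT_ID are placeholders you would supply.

```python
# Fetch the BTC/USD price from CoinGecko and push it to a Telegram chat
# every 5 minutes -- the same flow GraphLinq expresses as a visual graph.
import time
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"  # placeholder: your Telegram bot token
CHAT_ID = "YOUR_CHAT_ID"      # placeholder: target chat

def fetch_btc_usd() -> float:
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": "bitcoin", "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["bitcoin"]["usd"]

def send_telegram(text: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    ).raise_for_status()

while True:
    try:
        send_telegram(f"BTC/USD: {fetch_btc_usd():,.2f}")
    except requests.RequestException as e:
        print("tick failed, will retry:", e)
    time.sleep(300)  # 5 minutes
```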

4 AI As The Game Rules

This is the most exciting part—enabling AI to replace human decision-making and behavior. Your AI will directly control your wallet, making trading decisions and actions on your behalf. In this category, the author believes it can mainly be divided into three levels: AI applications (especially those with the vision of autonomous decision-making, such as AI automated trading bots, AI DeFi yield bots), Autonomous Agent protocols, and zkML/opML.

AI applications are tools for making specific decisions in particular domains: they accumulate knowledge and data from different sectors and rely on AI models tailored to specific problems for decision-making. Note that AI applications appear in this article both as interfaces and as rules. In terms of development vision, AI applications should become independent decision-making agents, but at present neither the effectiveness of AI models nor the security of integrated AI meets that requirement; even calling them interfaces is a bit of a stretch. AI applications are still at a very early stage, with specific projects introduced earlier.

Autonomous Agents, mentioned by Vitalik, are classified in the first category (AI as participants), but this article categorizes them into the third category based on their long-term vision. Autonomous Agents use a large amount of data and algorithms to simulate human thinking and decision-making processes, executing various tasks and interactions. This article mainly focuses on the infrastructure of Agents, such as communication layers and network layers, which define the ownership of Agents, establish their identity, communication standards, and methods, connect multiple Agent applications, and enable them to collaborate on decision-making and behavior.

zkML/opML: these use cryptographic or economic methods to make outputs trustworthy by guaranteeing they came from a correct model inference process. Security issues are fatal when introducing AI into smart contracts: smart contracts rely on inputs to generate outputs and automate a series of functions, and if AI feeds them erroneous inputs it introduces significant systemic risk to the entire Crypto system. Therefore, zkML/opML and a series of potential solutions are the foundation for letting AI act and decide independently.

Finally, the three together constitute the three basic layers of AI as the rules: zkML/opML is the lowest-level infrastructure ensuring protocol security; Agent protocols establish the agent ecosystem, enabling collaborative decision-making and behavior; and AI applications, i.e., specific AI agents, will continuously improve their capabilities in specific domains and actually make decisions and act.

4.1 Autonomous Agent

The application of AI Agents in the crypto world is natural. From smart contracts to TG Bots to AI Agents, the crypto space is moving towards higher automation and lower user barriers. While smart contracts execute functions automatically through immutable code, they still rely on external triggers to activate and cannot run autonomously or continuously. TG Bots reduce user barriers by allowing users to interact with the blockchain through natural language, but they can only perform simple and specific tasks and cannot achieve user-centric transactions. AI Agents, however, possess a certain degree of independent decision-making capability. They understand natural language and autonomously combine other agents and blockchain tools to accomplish user-specified goals.

AI Agents are dedicated to significantly improving the user experience of crypto products, while blockchain technology can further enhance the decentralization, transparency, and security of AI Agent operations. Specific assistance includes:

  • Token incentives for developers to supply more agents.
  • NFT-based authentication to support fee- and transaction-based agent activity.
  • On-chain agent identity and registration mechanisms.
  • Immutable logs of agent activity, enabling timely tracing of and accountability for their actions.

The main projects in this track are as follows (a minimal sketch of the identity-plus-log pattern they share appears after the list):

  • Autonolas: Autonolas supports asset ownership and composability for agents and related components through on-chain protocols, enabling code components, agents, and services to be discovered and reused on-chain, while incentivizing developers with economic compensation. Developers register their code on-chain and receive NFTs representing ownership of the code after developing complete agents or components. Service owners collaborate with multiple agents to create a service and register it on-chain, attracting agent operators to execute the service, which users access by paying for its usage.
  • Fetch.ai: Fetch.ai has a strong team background and development experience in the field of AI, currently focusing on the AI agent track. The protocol consists of four key layers: AI agents, Agentverse, AI Engine, and Fetch Network. AI agents form the core of the system, while the others provide frameworks and tools to assist in building agent services. Agentverse is a software-as-a-service platform primarily used to create and register AI agents. The AI Engine aims to interpret user natural language inputs and translate them into actionable tasks, selecting the most suitable registered AI agent from Agentverse to execute the task. Fetch Network is the blockchain layer of the protocol, where AI agents must register in the on-chain Almanac contract to collaborate with other agents. It’s worth noting that Autonolas currently focuses on building agents in the crypto world and brings offline agent operations onto the blockchain, while Fetch.ai’s scope includes the Web2 world, such as travel bookings and weather forecasts.
  • Delysium: Delysium has transitioned from gaming to an AI agent protocol, primarily comprising two layers: the communication layer and the blockchain layer. The communication layer serves as the backbone of Delysium, providing a secure and scalable infrastructure for efficient communication between AI agents. The blockchain layer verifies agent identities and records agent behavior immutably through smart contracts. Specifically, the communication layer establishes a unified communication protocol among agents, facilitating seamless communication using standardized messaging systems. Additionally, it establishes service discovery protocols and APIs, enabling users and other agents to quickly discover and connect to available agents. The blockchain layer consists mainly of two parts: Agent ID and the Chronicle smart contract. Agent ID ensures that only legitimate agents can access the network, while Chronicle serves as an immutable log repository for all significant decisions and actions made by agents, ensuring trustworthy traceability of agent behavior.
  • Altered State Machine: Altered State Machine establishes standards for asset ownership and transactions for agents through NFTs. Although ASM primarily integrates with games at present, its foundational specifications also have the potential for expansion into other agent domains.
  • Morpheus: Morpheus is building an AI agent ecosystem network that aims to connect coders, compute providers, community builders, and capital providers, who respectively contribute AI agents, the compute that runs them, front-end and development tools, and funding. MOR will adopt a Fair Launch model to incentivize miners providing compute, stETH stakers, contributors to agent or smart contract development, and community development contributors.
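The protocols above converge on the same primitive: register an agent's identity on-chain, then keep an immutable record of what it does (Autonolas's on-chain registry, Fetch.ai's Almanac contract, Delysium's Agent ID plus Chronicle). Below is a minimal in-memory model of that pattern, with a hash-chained log standing in for smart contracts; all class and field names are illustrative.

```python
# Registry-plus-log pattern shared by the agent protocols above:
# identity gate on registration, append-only hash-chained action log.
import hashlib
import json
import time

class AgentRegistry:
    def __init__(self):
        self.agents = {}        # agent_id -> metadata (the "Agent ID" role)
        self.log = []           # append-only action log (the "Chronicle" role)
        self.last_hash = "0" * 64

    def register(self, agent_id: str, owner: str, service: str) -> None:
        if agent_id in self.agents:
            raise ValueError("agent already registered")
        self.agents[agent_id] = {"owner": owner, "service": service}

    def record(self, agent_id: str, action: dict) -> str:
        if agent_id not in self.agents:
            raise PermissionError("unregistered agent")  # identity gate
        entry = {"agent": agent_id, "action": action,
                 "ts": time.time(), "prev": self.last_hash}
        # Chaining each entry to the previous hash makes silent edits to
        # history detectable -- the property a blockchain gives for free.
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.log.append(entry)
        return self.last_hash

reg = AgentRegistry()
reg.register("agent-1", owner="0xabc...", service="travel-booking")
print(reg.record("agent-1", {"type": "quote", "route": "SFO-NRT"}))
```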

4.2 zkML/opML

Zero-knowledge proofs currently have two main application directions:

  • Proving on-chain, at low cost, that a computation was executed correctly (ZK-Rollups and ZKP cross-chain bridges leverage this property of ZK);
  • Privacy protection: the details of a computation need not be revealed, yet it can still be proven that the computation was executed correctly.

Similarly, the application of ZKP in machine learning can also be divided into two categories:

  • Inference verification: using ZK proofs to verify on-chain, at low cost, that the computation-heavy inference of an AI model was executed correctly off-chain.
  • Privacy protection: this splits into two cases. One is data privacy, i.e., running inference on a public model over private data, with zkML keeping the data confidential. The other is model privacy, i.e., concealing information such as model weights while computing outputs from public inputs.

The author believes that, for crypto, the most important of these is currently inference verification, so let us elaborate on its scenarios. From AI as a participant to AI as the rules of the world, we hope to integrate AI into on-chain processes. However, the high computational cost of AI model inference rules out direct on-chain execution, and moving inference off-chain means tolerating the trust issues of a black box: did the operator tamper with my input? Did they use the model I specified? By converting ML models into ZK circuits we can achieve: (1) on-chain storage of smaller models, since storing small zkML models directly in smart contracts addresses the opacity issue; (2) off-chain inference with ZK proof generation, with on-chain verification of the proof confirming the inference was performed correctly. The infrastructure consists of two contracts: the main contract (which uses the ML model's output) and the ZK-proof verification contract (a mock of this two-contract flow is sketched below).
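Here is a mock of that two-contract data flow. To stay self-contained it uses a hash commitment in place of a real ZK proof, which shows the plumbing but not the soundness: a genuine zkML system compiles the model into a circuit and produces a succinct proof that the whole inference computation was performed correctly, which a mere hash cannot guarantee. All names are illustrative.

```python
# Mock of the zkML two-contract pattern: inference + "proof" off-chain,
# verification on-chain before the main contract consumes the output.
# The hash commitment below is only a stand-in for a real ZK proof.
import hashlib
import json

MODEL_COMMITMENT = "a3f1..."  # placeholder hash of the agreed model weights

def commitment(x: list, y: float) -> str:
    blob = json.dumps({"model": MODEL_COMMITMENT, "x": x, "y": y})
    return hashlib.sha256(blob.encode()).hexdigest()

def run_inference_offchain(x: list) -> tuple[float, str]:
    y = sum(x) / len(x)         # stand-in for the real model's forward pass
    return y, commitment(x, y)  # real zkML: prove "y = Model(x)" in ZK

def verifier_contract(x: list, y: float, proof: str) -> bool:
    # On-chain role: a cheap check of the proof against the claimed (x, y).
    return proof == commitment(x, y)

def main_contract(x: list, y: float, proof: str) -> None:
    assert verifier_contract(x, y, proof), "invalid inference proof"
    print("accepted model output:", y)  # safe to act on, e.g. a price feed

y, proof = run_inference_offchain([1.0, 2.0, 3.0])
main_contract([1.0, 2.0, 3.0], y, proof)
```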

zkML is still very early and faces technical challenges in converting ML models into ZK circuits, along with high computational and cryptographic overhead. Similar to the development path of Rollups, opML offers an alternative from the economic angle: it adopts an AnyTrust assumption like Arbitrum's, meaning each claim has at least one honest node, so either the submitter or at least one verifier is honest. However, opML can only serve as an alternative for inference verification; it cannot provide privacy protection.

Current projects are building zkML infrastructure and exploring applications. Building applications matters just as much, because they must clearly demonstrate to crypto users the role zkML plays and prove that the ultimate value can outweigh its enormous costs. Some of these projects focus on ZK technology specific to machine learning (such as Modulus Labs), while others build more general ZK infrastructure. Related projects include:

  • Modulus is using zkML to bring artificial intelligence into on-chain inference processes. On February 27th, Modulus launched the zkML prover Remainder, achieving a 180x efficiency improvement over traditional AI inference on equivalent hardware. Modulus is also working with multiple projects to explore practical zkML use cases: with Upshot, it collects complex market data, appraises NFT prices with AI carrying ZK proofs, and posts the prices on-chain; with AI Arena, it proves that the avatar fighting in a match is the same one the player trained.
  • RISC Zero: by running machine learning models in RISC Zero's zkVM, it can be proven that the exact computations involved in the model were executed correctly, allowing the results to be used on-chain.
  • Ingonyama is developing specialized hardware for ZK technology, which may lower the barrier to entry for the ZK technology field. zkML could also be used in the model training process.

5 AI As A Goal

If the previous three categories focus more on how AI empowers Crypto, then “AI as a goal” emphasizes Crypto’s assistance to AI, namely how to utilize Crypto to create better AI models and products. This may include multiple evaluation criteria such as greater efficiency, precision, and decentralization. AI comprises three core elements: data, computing power, and algorithms, and in each dimension, Crypto is striving to provide more effective support for AI:

  • Data: Data serves as the foundation for model training, and decentralized data protocols incentivize individuals or enterprises to provide more private data while using cryptography to safeguard data privacy and prevent the leakage of sensitive personal information.
  • Computing Power: The decentralized computing power track is currently the hottest AI track. Protocols facilitate the matching of supply and demand in the market, promoting the pairing of long-tail computing power with AI enterprises for model training and inference.
  • Algorithms: Crypto’s empowerment of algorithms is the most crucial aspect of achieving decentralized AI, as described in Vitalik Buterin’s article “AI as a Goal.” By creating decentralized and trustworthy black box AI, issues such as adversarial machine learning can be addressed. However, this approach may face significant obstacles such as high cryptographic costs. Additionally, “using cryptographic incentives to encourage the creation of better AI” can be achieved without completely delving into the rabbit hole of cryptography.

The monopolization of data and computing power by large tech companies has led to a monopoly on the model training process, where closed-source models become key profit drivers for these corporations. From an infrastructure perspective, Crypto incentivizes the decentralized supply of data and computing power through economic means. Additionally, it ensures data privacy during the process through cryptographic methods. This serves as the foundation to facilitate decentralized model training, aiming to achieve a more transparent and decentralized AI ecosystem.

5.1 Decentralized Data Protocol

Decentralized data protocols primarily operate through crowdsourcing of data, incentivizing users to provide datasets or data services (such as data labeling) for enterprises to use in model training. They also establish Data Marketplaces to facilitate matching between supply and demand. Some protocols are also exploring incentivizing users through DePIN protocols to acquire browsing data or utilizing users’ devices/bandwidth for web data scraping.

  • Ocean Protocol: Tokenizes data ownership and allows users to create NFTs for data/algorithms in a codeless manner on Ocean Protocol, simultaneously creating corresponding datatokens to control access to the data NFTs. Ocean Protocol ensures data privacy through Compute To Data (C2D), where users can only obtain output results based on data/algorithms, without complete downloads. Established in 2017 as a data marketplace, Ocean Protocol naturally jumped on the AI bandwagon in this current trend.
  • Synesis One: This project is the Train2Earn platform on Solana, where users earn $SNS rewards by providing natural language data and data labeling. Users support mining by providing data, which is stored and placed on-chain after verification, then used by AI companies for training and inference. Miners are divided into three categories: Architects/Builders/Validators. Architects create new data tasks, Builders provide text data for specific tasks, and Validators verify the datasets provided by Builders. Completed datasets are stored in IPFS and their sources, along with IPFS addresses, are stored in an off-chain database for AI companies (currently Mind AI) to use.

  • Grass: Dubbed the decentralized data layer for AI, Grass is essentially a decentralized web-scraping market that obtains data for AI model training. Websites are a vital source of AI training data, and many, such as Twitter, Google, and Reddit, hold significant value, but they keep tightening restrictions on scraping. Grass uses the idle bandwidth of individual networks to scrape public websites from many different IP addresses, mitigating data blocking; it performs initial data cleaning and serves as a data source for model training. Currently in beta, Grass lets users earn points by providing bandwidth, redeemable against a potential airdrop.

  • AIT Protocol: AIT Protocol is a decentralized data-labeling protocol designed to supply developers with high-quality datasets for model training. Web3 lets a global workforce join the network quickly and earn incentives through labeling. AIT's data scientists pre-label the data, users then process it further, and after quality checks by the data scientists the validated data goes to developers (a minimal sketch of this label-and-validate flow appears below).
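The Builder/Validator flows that Synesis One and AIT describe reduce to the same pattern: crowdsourced labels are accepted only after enough validators approve them. Below is a minimal sketch; the quorum threshold, vote layout, and task fields are all illustrative assumptions.

```python
# Crowdsourced labeling with validator quorum: builders propose labels,
# validators vote, only approved submissions enter the dataset (which the
# real protocols would pin to IPFS and reward with tokens).
from collections import Counter

task = {"id": "t1", "text": "book me a flight to tokyo", "intent": None}

builder_labels = {"b1": "travel", "b2": "travel", "b3": "shopping"}

validator_votes = {  # validator -> (builder -> approve?)
    "v1": {"b1": True, "b2": True, "b3": False},
    "v2": {"b1": True, "b2": True, "b3": False},
    "v3": {"b1": True, "b2": False, "b3": False},
}

QUORUM = 2 / 3  # fraction of validators that must approve a submission

accepted = []
for builder, label in builder_labels.items():
    approvals = sum(votes[builder] for votes in validator_votes.values())
    if approvals / len(validator_votes) >= QUORUM:
        accepted.append(label)

# Majority label among accepted submissions becomes the dataset entry.
task["intent"] = Counter(accepted).most_common(1)[0][0]
print(task, accepted)  # intent resolves to "travel"
```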

In addition to the aforementioned data provisioning and data labeling protocols, established decentralized storage infrastructure such as Filecoin and Arweave will also contribute to a more decentralized data supply.

5.2 Decentralized Computing Power

In the era of AI, the importance of computing power is self-evident. Not only has NVIDIA's stock price soared, but in the crypto world decentralized computing power is arguably the hottest niche within the AI track: of the top 200 AI projects by market capitalization, 5 (Render/Akash/AIOZ Network/Golem/Nosana) focus on decentralized compute, and they have grown substantially in recent months. Decentralized computing platforms have also emerged among low-market-cap projects; although just getting started, they quickly gained momentum, especially on the wave of enthusiasm around the NVIDIA conference.

In terms of the track's characteristics, the basic logic of these projects is highly homogeneous: use token incentives to encourage individuals and enterprises with idle computing resources to supply them, dramatically lowering the cost of compute and establishing a supply-demand marketplace for computing power (a toy matching loop over such a marketplace is sketched below). The main supply currently comes from data centers, miners (especially after Ethereum's transition to PoS), consumer-grade hardware, and collaborations with other projects. Although homogeneous, this is a track where leading projects enjoy high moats; projects compete on computing resources, leasing prices, utilization rates, and other technical advantages. The leaders in this track include Akash, Render, io.net, and Gensyn.
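A compute marketplace of this kind is, at its core, an order-matching problem between idle supply and AI demand. The sketch below shows one naive greedy policy (cheapest adequate supply first); the providers, prices, and GPU models are made up, and real protocols add escrow, reputation, and verification on top.

```python
# Toy order matching for a decentralized compute marketplace:
# providers list idle GPUs with an hourly ask, jobs arrive with a bid,
# and the cheapest matching supply is leased first.
offers = [  # supply side (ask = tokens per GPU-hour)
    {"provider": "dc-1",    "gpu": "A100", "count": 8, "ask": 1.2},
    {"provider": "miner-7", "gpu": "4090", "count": 4, "ask": 0.5},
    {"provider": "dc-2",    "gpu": "A100", "count": 2, "ask": 0.9},
]
jobs = [  # demand side (bid = max tokens per GPU-hour)
    {"job": "llm-inference", "gpu": "A100", "count": 3, "bid": 1.0},
    {"job": "sd-render",     "gpu": "4090", "count": 2, "bid": 0.6},
]

for job in jobs:
    needed = job["count"]
    for offer in sorted(offers, key=lambda o: o["ask"]):  # cheapest first
        if needed == 0:
            break
        if offer["gpu"] == job["gpu"] and offer["ask"] <= job["bid"] and offer["count"] > 0:
            take = min(needed, offer["count"])
            offer["count"] -= take
            needed -= take
            print(f'{job["job"]}: leased {take}x {offer["gpu"]} '
                  f'from {offer["provider"]} @ {offer["ask"]}/h')
    if needed:  # bids too low or supply exhausted
        print(f'{job["job"]}: {needed} GPUs unfilled')
```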

By business direction, projects roughly divide into AI model inference and AI model training. Because training demands far more compute and bandwidth than inference, and the inference market is expanding rapidly, inference revenue is predictably going to be much higher than training revenue. Therefore, the vast majority of projects currently focus on inference (Akash, Render, io.net), with Gensyn focusing on training. Akash and Render were not originally built for AI computing: Akash started as general-purpose compute, and Render was primarily for video and image rendering. io.net was designed specifically for AI computing, but as AI has raised the level of compute demand, all of these projects have been trending toward AI.

The two most important competitive indicators remain the supply side (computing resources) and the demand side (utilization). Akash has 282 GPUs and over 20,000 CPUs, with more than 160,000 leases completed and GPU network utilization of 50-70%, a good figure for this track. io.net has 40,272 GPUs and 5,958 CPUs of its own, plus access to Render's 4,318 GPUs and 159 CPUs and a license to use 1,024 Filecoin GPUs, including about 200 H100s and thousands of A100s. io.net is attracting computing resources with extremely high airdrop expectations, and its GPU count is growing rapidly, so its ability to attract resources will need reassessment after its token lists. Render and Gensyn have not disclosed specific figures. Many projects also strengthen both sides through ecosystem collaborations: io.net taps Render's and Filecoin's compute to enlarge its own reserves, and Render established the Compute Client Program (RNP-004), which lets users access Render's compute indirectly through clients such as io.net, Nosana, FedML, and Beam, quickly extending Render from rendering into AI computing.

In addition, the verification of decentralized computing remains a challenge — how to prove that workers with computational resources correctly execute computing tasks. Gensyn is attempting to establish such a verification layer, ensuring the correctness of computations through probabilistic learning proofs, graph-based precise positioning protocols, and incentives. Validators and reporters jointly inspect computations in Gensyn, so besides providing computational support for decentralized training, its established verification mechanism also holds unique value. The computing protocol Fluence, situated on Solana, also enhances validation of computing tasks. Developers can verify if their applications run as expected and if computations are correctly executed by examining the proofs provided by the on-chain providers. However, the practical need still prioritizes feasibility over trustworthiness. Computing platforms must first have sufficient computational power to be competitive. Of course, for excellent verification protocols, there’s the option to access computational resources from other platforms, serving as validation and protocol layers to play a unique role.

5.3 Decentralized Model

The ultimate scenario described by Vitalik, as depicted in the diagram below, is still very distant. Currently, we are unable to achieve a trusted black-box AI created through blockchain and encryption technologies to address adversarial machine learning. Encrypting the entire AI process from training data to query outputs incurs significant costs. However, there are projects currently attempting to incentivize the creation of better AI models. They first bridge the closed-off states between different models, creating a landscape where models can learn from each other, collaborate, and engage in healthy competition. Bittensor is one of the most representative projects in this regard.

Bittensor: Bittensor facilitates the interconnection of various AI models, though note that Bittensor itself does not train models; it primarily provides AI inference services. Its 32 subnets focus on different service directions, such as data fetching, text generation, Text2Image, and so on, and models in different directions can collaborate to complete a task. Incentive mechanisms drive competition between and within subnets. Rewards are currently emitted at 1 TAO per block, roughly 7,200 TAO per day. The 64 validators in SN0 (the Root Network) determine how those rewards are distributed across subnets based on subnet performance, and subnet validators in turn determine the distribution among miners based on their evaluation of each miner's work (a back-of-the-envelope model of this two-level split follows). As a result, better-performing services and models receive more incentive, raising the overall quality of the system's inference.
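The two-level emission split described above can be made concrete with a few lines of arithmetic. The subnet weights, miner scores, and the assumption that a subnet's entire emission flows to miners are illustrative; the real network also allocates shares to validators and subnet owners.

```python
# Back-of-the-envelope model of Bittensor's two-level reward split:
# root validators weight subnets, subnet validators score miners,
# and the daily emission (~7,200 TAO at 1 TAO/block) flows down pro rata.
DAILY_EMISSION = 7_200  # TAO per day (per the figures above)

subnet_weights = {"text-gen": 0.5, "text2image": 0.3, "data-fetch": 0.2}
miner_scores = {  # per-subnet miner evaluations by that subnet's validators
    "text-gen":   {"m1": 0.7, "m2": 0.3},
    "text2image": {"m3": 1.0},
    "data-fetch": {"m4": 0.6, "m5": 0.4},
}

for subnet, weight in subnet_weights.items():
    subnet_emission = DAILY_EMISSION * weight
    total_score = sum(miner_scores[subnet].values())
    for miner, score in miner_scores[subnet].items():
        print(f"{subnet}/{miner}: {subnet_emission * score / total_score:,.0f} TAO/day")
```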

6 Conclusion: Meme Hype or Technological Revolution?

From Sam Altman’s moves driving the skyrocketing prices of ARKM and WLD to the Nvidia conference boosting a series of participating projects, many are adjusting their investment ideas in the AI field. Is the AI field primarily driven by meme speculation or technological revolution?

Apart from a few celebrity themes (such as ARKM and WLD), the overall AI field in crypto seems more like a “meme driven by technological narrative.”

On one hand, the overall speculation in the Crypto AI field is undoubtedly closely linked to the progress of Web2 AI. External hype led by entities like OpenAI will serve as the catalyst for the Crypto AI field. On the other hand, the story of the AI field still revolves around technological narratives. However, it’s crucial to emphasize the “technological narrative” rather than just the technology itself. This underscores the importance of choosing specific directions within the AI field and paying attention to project fundamentals. It’s necessary to find narrative directions with speculative value as well as projects with long-term competitiveness and moats.

Looking at the four potential combinations proposed by Vitalik, we see a balance between narrative charm and feasibility. In the first and second categories, represented by AI applications, we observe many GPT wrappers. While these products are quickly deployed, they also exhibit a high degree of business homogeneity. First-mover advantage, ecosystems, user base, and revenue become the stories told in the context of homogeneous competition. The third and fourth categories represent grand narratives combining AI with crypto, such as Agent on-chain collaboration networks, zkML, and decentralized reshaping of AI. These are still in the early stages, and projects with technological innovation will quickly attract funds, even if they are only in the early stages of implementation.

Disclaimer:

  1. This article is reprinted from [Metrics Ventures], forwarded from the original title 'Metrics Ventures Research Report | Starting from Vitalik's Article: Which Crypto×AI Sub-tracks Are Worth Watching?'. All copyrights belong to the original authors [@charlotte0211z, @BlazingKevin_, Metrics Ventures]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

Subdivisions in Crypto×AI Worth Paying Attention to

BeginnerMar 26, 2024
Vitalik has published "The promise and challenges of crypto + AI applications," discussing the ways blockchain and artificial intelligence can be combined and the potential challenges. The article presents four integration methods and introduces representative projects for each direction. There are differences in the core characteristics of AI and blockchain, so it's necessary to balance aspects such as data ownership, transparency, monetization capabilities, and energy costs when combining them. Currently, many AI applications are gaming-related, involving interaction with AI and training characters to better fit individual needs. At the same time, there are projects exploring the use of blockchain features to create better artificial intelligence. Decentralized computing power is also a popular direction but still faces challenges. Overall, the AI track needs to find projects with competitiveness and long-term value.
Subdivisions in Crypto×AI Worth Paying Attention to

Forward the Original Title:’Metrics Ventures研报 | 从V神文章出发,Crypto×AI有哪些值得关注的细分赛道?’

1 Introduction: Four ways to combine Crypto with AI

Decentralization is the consensus maintained by blockchain, ensuring security is the core principle, and openness is the key foundation from a cryptographic perspective to make on-chain behavior possess the aforementioned characteristics. This approach has been applicable in several rounds of blockchain revolutions in the past few years. However, when artificial intelligence gets involved, the situation undergoes some changes.

Imagine designing the architecture of blockchain or applications through artificial intelligence. In this case, the model needs to be open source, but doing so will expose its vulnerability in adversarial machine learning. Conversely, not being open source would result in losing decentralization. Therefore, it is necessary to consider in what way and to what extent the integration should be accomplished when introducing artificial intelligence into current blockchain or applications.

Source: DE UNIVERSITY OF ETHEREUM

In the article ‘When Giants Collide: Exploring the Convergence of Crypto x AI’ from @ueth">DE UNIVERSITY OF ETHEREUM, the differences in core characteristics between artificial intelligence and blockchain are outlined. As shown in the figure above, the characteristics of artificial intelligence are:

  • Centralization
  • Low Transparency
  • Energy Consuming
  • Monopoly
  • Weak Monetization Attributes

The characteristics mentioned above are completely opposite in blockchain when compared to artificial intelligence. This is the true argument of Vitalik’s article. If artificial intelligence and blockchain are combined, then applications born from it need to make trade-offs in terms of data ownership, transparency, monetization capabilities, energy costs, etc. Additionally, what infrastructure needs to be created to ensure the effective integration of both also needs to be considered.

Following the above criteria and his own thoughts, Vitalik categorizes applications formed by the combination of artificial intelligence and blockchain into four main types:

  • AI as a player in a game
  • AI as an interface to the game
  • AI as the rules of the game
  • AI as the objective of the game

Among them, the first three mainly represent three ways in which AI is introduced into the Crypto world, representing three levels of depth from shallow to deep. According to the author’s understanding, this classification represents the extent to which AI influences human decision-making, and thus introduces different levels of systemic risk to the entire Crypto world:

  • Artificial intelligence as a participant in applications: Artificial intelligence itself does not influence human decisions and behavior, so it does not pose risks to the real human world. Therefore, it currently has the highest degree of practicality.
  • Artificial intelligence as an interface for applications: Artificial intelligence provides auxiliary information or tools for human decision-making and behavior, which improves user and developer experiences and lowers barriers. However, incorrect information or operations may introduce some risks to the real world.
  • Artificial intelligence as the rules of applications: Artificial intelligence fully replaces humans in making decisions and operations. Therefore, malicious behavior or failures of artificial intelligence will directly lead to chaos in the real world. Currently, in both Web2 and Web3, it is not possible to trust artificial intelligence to replace humans in decision-making.

Finally, the fourth category of projects aims to leverage the characteristics of Crypto to create better artificial intelligence. As mentioned earlier, centralization, low transparency, energy consumption, monopolistic tendencies, and weak monetary attributes can naturally be mitigated through the properties of Crypto. Although many people are skeptical about whether Crypto can have an impact on the development of artificial intelligence, the most fascinating narrative of Crypto has always been its ability to influence the real world through decentralization. This track has also become the most intensely speculated part of the AI track due to its grand vision.

2 AI As A Participant

In mechanisms where AI participates, the ultimate source of incentives often comes from protocols inputted by humans. Before AI becomes an interface or even a rule, we typically need to evaluate the performance of different AIs, allowing AI to participate in a mechanism, and ultimately receive rewards or penalties through an on-chain mechanism.

When AI acts as a participant, compared to being an interface or rule, the risks to users and the entire system are generally negligible. It can be considered as a necessary stage before AI deeply influences user decisions and behavior. Therefore, the cost and trade-offs required for the fusion of artificial intelligence and blockchain at this level are relatively small. This is also a category of products that Vitalik believes currently have a high degree of practicality.

In terms of breadth and implementation, many current AI applications fall into this category, such as AI-empowered trading bots and chatbots. The current level of implementation still makes it difficult for AI to serve as an interface or even a rule. Users are comparing and gradually optimizing among different bots, and crypto users have not yet developed habits of using AI applications. In Vitalik’s article, Autonomous Agents are also classified into this category.

However, in a narrower sense and from a long-term vision perspective, we tend to make more detailed distinctions for AI applications or AI agents. Therefore, under this category, representative subcategories include:

2.1 AI Games

To some extent, AI games can indeed be classified into this category. Players interact with AI and train their AI characters to better fit their personal preferences, such as aligning more closely with individual tastes or becoming more competitive within the game mechanics. Games serve as a transitional stage for AI before it enters the real world. They also represent a track with relatively low implementation risks and are the easiest for ordinary users to understand. Iconic projects in this category include AI Arena, Echelon Prime, and Altered State Machine.

  • AI Arena: A player-versus-player (PVP) fighting game where players can train and evolve their in-game characters using AI. The game aims to allow more ordinary users to interact with, understand, and experience AI through gaming, while also providing AI engineers with various AI algorithms to increase their income. Each in-game character is powered by AI-enabled NFTs, with the Core containing the AI model’s architecture and parameters stored on IPFS. The parameters in a new NFT are randomly generated, meaning it will perform random actions. Users need to improve their character’s strategic abilities through imitation learning (IL). Each time a user trains a character and saves progress, the parameters are updated on IPFS. \

  • Altered State Machine: .ASM is not an AI game but a protocol for rights verification and trading for AI agents. It is positioned as a metaverse AI protocol and is currently integrating with multiple games including FIFA, introducing AI agents into games and the metaverse. ASM utilizes NFTs to verify and trade AI agents, with each agent consisting of three parts: Brain (the agent’s intrinsic characteristics), Memories (storing the agent’s learned behavior strategies and model training, linked to the Brain), and Form (character appearance, etc.). ASM has a Gym module, including a decentralized GPU cloud provider, to provide computational support for agents. Projects currently built on ASM include AIFA (AI soccer game), Muhammed Ali (AI boxing game), AI League (street soccer game in partnership with FIFA), Raicers (AI-driven racing game), and FLUF World’s Thingies (generative NFTs). \

  • Parallel Colony (PRIME): Echelon Prime is developing Parallel Colony, a game based on AI LLM (Large Language Models). Players can interact with their AI avatars and influence them, with avatars autonomously acting based on memories and life trajectories. Colony is currently one of the most anticipated AI games, and the official whitepaper has recently been released. Additionally, the announcement of migration to Solana has sparked another wave of excitement and increased value for PRIME.

2.2 Market/Contest Prediction

The predictive capability is the foundation for AI to make future decisions and behaviors. Before AI models are used for practical predictions, prediction competitions compare the performance of AI models at a higher level. By providing incentives in the form of tokens for data scientists/AI models, this approach has positive implications for the development of the entire Crypto×AI field. It continuously fosters the development of more efficient and high-performing models and applications suitable for the crypto world. Before AI deeply influences decision-making and behavior, this creates higher-quality and safer products. As Vitalik stated, prediction markets are a powerful primitive that can be expanded to many other types of problems. Iconic projects in this track include Numerai and Ocean Protocol.

  • Numerai: Numerai is a long-running data science competition where data scientists train machine learning models to predict stock markets based on historical market data provided by Numerai. They then stake their models and NMR tokens in tournaments, with well-performing models receiving NMR token rewards, while tokens staked on poorly performing models are burned. As of March 7, 2024, there have been 6,433 models staked, and the protocol has provided a total of $75,760,979 in rewards to data scientists. Numerai incentivizes global collaboration among data scientists to build a new type of hedge fund. The funds released so far include Numerai One and Numerai Supreme. The path of Numerai involves market prediction competitions→crowdsourced prediction models→ the creation of new hedge funds based on crowdsourced models.
  • Ocean Protocol: Ocean Predictor focuses on predictions, starting with crowdsourced predictions of cryptocurrency trends. Players can choose to run the Predictoor bot or Trader bot. The Predictor bot uses AI models to predict the price of cryptocurrencies (e.g., BTC/USDT) at the next time point (e.g., five minutes ahead) and stakes a certain amount of $OCEAN tokens. The protocol calculates a global prediction based on the staked amount. Traders buy prediction results and can trade based on them. When the prediction accuracy is high, Traders can profit from it. Predictors who make incorrect predictions will be penalized, while those who make correct predictions will receive a portion of the tokens staked as well as the purchasing fees from Traders as rewards. On March 2nd, Ocean Predictoor announced its latest direction, the World-World Model (WWM), which begins exploring predictions for real-world scenarios such as weather and energy.

3 AI As An Interface

AI can assist users in understanding what is happening in the crypto world using simple and easy-to-understand language, acting as a mentor for users and providing alerts for potential risks to reduce the entry barriers and user risks in Crypto, thus improving user experience. The functionalities of products that can be realized are diverse, such as risk alerts during wallet interactions, AI-driven intent trading, AI chatbots capable of answering common user questions about crypto, and more. The audience for these services is expanding, including not only ordinary users but also developers, analysts, and almost all other groups, making them potential recipients of AI services.

Let's reiterate what these projects have in common: they do not yet replace humans in executing decisions and actions, but use AI models to provide information and tools that assist human decision-making and behavior. At this level, the risk of AI malfeasance begins to surface in the system: providing incorrect information that interferes with human judgment. This aspect has been thoroughly analyzed in Vitalik's article.

There are many and varied projects that can be classified under this category, including AI chatbots, AI smart contract audits, AI code generation, AI trading bots, and more. It can be said that the vast majority of AI applications are currently at this basic level. Representative projects include:

  • PaaL: PaaL is currently the leading AI chatbot project and can be seen as a ChatGPT trained on crypto-specific knowledge. Integrated with platforms such as Telegram and Discord, PaaL offers token data analysis, token fundamentals and tokenomics analysis, text-to-image generation, and other features. The PaaL Bot can be added to group chats to respond automatically to certain messages. PaaL also supports customized personal bots, allowing users to build their own AI knowledge base and custom bots by feeding in datasets. PaaL is advancing towards AI trading bots: on February 29th it announced PaalX, its AI-supported crypto research and trading terminal, which reportedly performs AI smart contract audits, integrates and trades on news from Twitter, and supports crypto research and trading. The AI assistant can lower the barrier to entry for users.

  • ChainGPT: ChainGPT relies on artificial intelligence to develop a series of crypto tools, such as a chatbot, an NFT generator, news aggregation, smart contract generation and auditing, a trading assistant, a prompt marketplace, and AI cross-chain swaps. However, ChainGPT's current focus is on project incubation and its Launchpad; it has completed IDOs for 24 projects and 4 Free Giveaways.

  • Arkham: Ultra is Arkham’s dedicated AI engine designed to match addresses with real-world entities using algorithms, thereby increasing transparency in the crypto industry. Ultra merges on-chain and off-chain data provided by users and collected by itself, and outputs it into an expandable database, which is ultimately presented in chart form. However, the Arkham documentation does not provide detailed discussions on the Ultra system. Arkham has recently attracted attention due to personal investment from Sam Altman, the founder of OpenAI, and has experienced a five-fold increase in value over the past 30 days.
  • GraphLinq: GraphLinq is an automated workflow management solution designed to let users deploy and manage various types of automation without programming; for example, pushing the price of Bitcoin from Coingecko to a TG Bot every 5 minutes (a minimal sketch of such an automation appears after this list). GraphLinq visualizes automation processes as graphs, letting users create tasks by dragging nodes, with the GraphLinq Engine executing them. Although no coding is required, building a graph still has a learning curve for ordinary users, including selecting the right template and choosing and connecting suitable logic blocks from hundreds of options. To address this, GraphLinq is introducing AI so that users can build and manage automation tasks through conversational AI and natural language, simplifying the process for less technical users.
  • 0x0.ai: 0x0's AI-related business spans three areas: AI smart contract auditing, AI anti-rug detection, and an AI developer hub. Anti-rug detection aims to spot suspicious behaviors such as excessive taxes or liquidity drains before users are deceived. The developer hub uses machine learning to generate smart contracts, enabling no-code contract deployment. However, only the AI smart contract auditing has had a preliminary launch; the other two functions are not yet fully developed.
  • Zignaly: Zignaly was founded in 2018 with the aim of enabling individual investors to choose fund managers to manage their cryptocurrency assets, similar to the logic of copy-trading. Zignaly is using machine learning and artificial intelligence technologies to establish a system for evaluating fund managers. The first product launched is called Z-Score. However, as an artificial intelligence product, it is still relatively basic in its current form.
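
As a reference point for the GraphLinq example above, here is roughly what such an automation graph compiles down to, written as a minimal Python sketch. CoinGecko's public simple/price endpoint and the Telegram Bot API's sendMessage method are real; the bot token and chat ID are placeholders you would supply yourself, and this is of course not GraphLinq's engine.

```python
import time
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"
TG_TOKEN = "<your-bot-token>"   # placeholder: obtained from @BotFather
TG_CHAT_ID = "<your-chat-id>"   # placeholder: target group or user

def fetch_btc_price() -> float:
    # CoinGecko's public simple/price endpoint, no API key required
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": "bitcoin", "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["bitcoin"]["usd"]

def push_to_telegram(text: str) -> None:
    # Telegram Bot API sendMessage method
    requests.post(
        f"https://api.telegram.org/bot{TG_TOKEN}/sendMessage",
        json={"chat_id": TG_CHAT_ID, "text": text},
        timeout=10,
    )

if __name__ == "__main__":
    while True:
        push_to_telegram(f"BTC/USD: {fetch_btc_price():,.2f}")
        time.sleep(300)   # every five minutes
```

The value of a GraphLinq-style product is precisely that users assemble this loop from visual blocks, or soon from natural language, instead of writing it by hand.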

4 AI As The Game Rules

This is the most exciting part: enabling AI to replace human decision-making and behavior. Your AI will directly control your wallet, making trading decisions and taking actions on your behalf. In this category, the author believes there are mainly three levels: AI applications (especially those with a vision of autonomous decision-making, such as AI automated trading bots and AI DeFi yield bots), Autonomous Agent protocols, and zkML/opML.

AI applications are tools for making decisions in a particular field. They accumulate knowledge and data from different sectors and rely on AI models tailored to specific problems for decision-making. Note that AI applications are classified as both interfaces and rules in this article: in terms of vision they should become independent decision-making agents, but at present neither the effectiveness of AI models nor the security of integrated AI can meet that requirement, and even as interfaces they are something of a stretch. AI applications are still at a very early stage, with specific projects introduced earlier.

Autonomous Agents, mentioned by Vitalik, are classified in the first category (AI as participants), but this article categorizes them into the third category based on their long-term vision. Autonomous Agents use a large amount of data and algorithms to simulate human thinking and decision-making processes, executing various tasks and interactions. This article mainly focuses on the infrastructure of Agents, such as communication layers and network layers, which define the ownership of Agents, establish their identity, communication standards, and methods, connect multiple Agent applications, and enable them to collaborate on decision-making and behavior.

zkML/opML: Ensure that outputs provided through correct model reasoning processes are credible through cryptographic or economic methods. Security issues are fatal when introducing AI into smart contracts. Smart contracts rely on inputs to generate outputs and automate a series of functions. If AI provides erroneous inputs, it will introduce significant systemic risks to the entire Crypto system. Therefore, zkML/opML and a series of potential solutions are the foundation for enabling AI to act independently and make decisions.

Finally, the three together constitute the three basic levels of AI as rule operator: zkML/opML is the lowest-level infrastructure ensuring protocol security; Agent protocols establish the Agent ecosystem, enabling collaborative decision-making and behavior; and AI applications, which are themselves specialized AI Agents, will continuously improve their capabilities in specific domains and actually make decisions and take action.

4.1 Autonomous Agent

The application of AI Agents in the crypto world is natural. From smart contracts to TG Bots to AI Agents, the crypto space is moving towards higher automation and lower user barriers. Smart contracts execute functions automatically through immutable code, but they rely on external triggers and cannot run autonomously or continuously. TG Bots lower user barriers by letting users interact with the blockchain through natural language, but they can only perform simple, specific tasks and cannot carry out trades centered on a user's intent. AI Agents, however, possess a degree of independent decision-making capability: they understand natural language and autonomously combine other agents and blockchain tools to accomplish user-specified goals.

AI Agents are dedicated to significantly improving the user experience of crypto products, while blockchain technology can further enhance the decentralization, transparency, and security of AI Agent operations. Specific assistance includes the following (a minimal sketch of the identity and logging pieces follows the list):

  • By incentivizing developers with tokens to provide more agents.
  • NFT authentication to facilitate fee and transaction-based agent activities.
  • Providing on-chain agent identity and registration mechanisms.
  • Offering immutable agent activity logs for timely tracing and accountability of their actions.
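
The last two points, on-chain identity and immutable activity logs, reduce to a registry plus a hash-chained, append-only log. Below is a minimal Python sketch of that pattern; the class and field names are illustrative assumptions rather than any specific protocol's contracts.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                        # e.g., the address holding the agent's NFT
    log: list[dict] = field(default_factory=list)

    def append_action(self, action: str) -> str:
        """Append to a hash-chained log, so history cannot be rewritten silently."""
        prev = self.log[-1]["hash"] if self.log else "genesis"
        entry = {"ts": time.time(), "action": action, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.log.append(entry)
        return entry["hash"]

class AgentRegistry:
    """Registry binding agent identities to owners, in the spirit of an on-chain Agent ID contract."""
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner: str) -> AgentRecord:
        if agent_id in self._agents:
            raise ValueError("agent already registered")
        record = AgentRecord(agent_id, owner)
        self._agents[agent_id] = record
        return record

    def lookup(self, agent_id: str) -> AgentRecord:
        return self._agents[agent_id]
```

Projects like Delysium's Agent ID and Chronicle contracts, described below, implement exactly these two roles on-chain.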

The main projects of this track are as follows:

  • Autonolas: Autonolas supports asset ownership and composability for agents and related components through on-chain protocols, enabling code components, agents, and services to be discovered and reused on-chain, while incentivizing developers with economic compensation. Developers register their code on-chain and receive NFTs representing ownership of the code after developing complete agents or components. Service owners collaborate with multiple agents to create a service and register it on-chain, attracting agent operators to execute the service, which users access by paying for its usage.
  • Fetch.ai: Fetch.ai has a strong team background and development experience in the field of AI, currently focusing on the AI agent track. The protocol consists of four key layers: AI agents, Agentverse, AI Engine, and Fetch Network. AI agents form the core of the system, while the others provide frameworks and tools to assist in building agent services. Agentverse is a software-as-a-service platform primarily used to create and register AI agents. The AI Engine aims to interpret user natural language inputs and translate them into actionable tasks, selecting the most suitable registered AI agent from Agentverse to execute the task. Fetch Network is the blockchain layer of the protocol, where AI agents must register in the on-chain Almanac contract to collaborate with other agents. It’s worth noting that Autonolas currently focuses on building agents in the crypto world and brings offline agent operations onto the blockchain, while Fetch.ai’s scope includes the Web2 world, such as travel bookings and weather forecasts.
  • Delysium: Delysium has transitioned from gaming to an AI agent protocol, primarily comprising two layers: the communication layer and the blockchain layer. The communication layer serves as the backbone of Delysium, providing a secure and scalable infrastructure for efficient communication between AI agents. The blockchain layer verifies agent identities and records agent behavior immutably through smart contracts. Specifically, the communication layer establishes a unified communication protocol among agents, facilitating seamless communication using standardized messaging systems. Additionally, it establishes service discovery protocols and APIs, enabling users and other agents to quickly discover and connect to available agents. The blockchain layer consists mainly of two parts: Agent ID and the Chronicle smart contract. Agent ID ensures that only legitimate agents can access the network, while Chronicle serves as an immutable log repository for all significant decisions and actions made by agents, ensuring trustworthy traceability of agent behavior.
  • Altered State Machine: Altered State Machine establishes standards for asset ownership and transactions for agents through NFTs. Although ASM primarily integrates with games at present, its foundational specifications also have the potential for expansion into other agent domains.
  • Morpheus: Morpheus is building an AI agent ecosystem network that aims to connect coders, compute providers, community builders, and capital providers, who respectively contribute AI agents, the compute power supporting agent operations, front-end and development tools, and funding. MOR will adopt a Fair Launch model to incentivize miners providing compute power, stETH stakers, contributors to agent and smart contract development, and community development contributors.

4.2 zkML/opML

Zero-knowledge proof currently has two main application directions:

  • Verifying correct computation on-chain at a lower cost (ZK-Rollups and ZKP cross-chain bridges leverage this property of ZK);
  • Privacy protection: the details of a computation need not be revealed, yet it can still be proven that the computation was executed correctly.

Similarly, the application of ZKP in machine learning can also be divided into two categories:

  • Inference verification: using a ZK proof to show on-chain, at low cost, that the compute-intensive inference of an AI model was executed correctly off-chain.
  • Privacy protection: this splits into two further cases. One is data privacy, i.e., running inference on private data with a public model, where zkML keeps the data hidden. The other is model privacy, i.e., concealing specifics such as model weights while computing and deriving outputs from public inputs.

The author believes that, for crypto, the most important aspect currently is inference verification, so let us elaborate on its scenarios. From AI as a participant through AI as the rules of the world, we hope to integrate AI into on-chain processes. However, the high computational cost of AI model inference prevents direct on-chain execution, and moving the process off-chain means tolerating the trust issues of a black box: did the AI model operator tamper with my input? Did they use the model I specified for inference? By converting ML models into ZK circuits, we can achieve: (1) on-chain storage of smaller models, where storing small zkML models in smart contracts directly addresses the opacity issue; (2) off-chain inference accompanied by a ZK proof, with on-chain verification of that proof establishing the correctness of the inference process. The infrastructure comprises two contracts: the main contract (which uses the ML model to output results) and the ZK-proof verification contract.
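
Below is a minimal Python sketch of that two-contract pattern, purely to fix ideas: `prover.prove` and `verifier.verify` stand in for a real zkML proving system, and the commitment scheme is an assumption; no specific project's API is being depicted.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    model_commitment: str   # hash of the circuit/weights the proof commits to
    input_hash: str
    output: float
    blob: bytes             # opaque ZK proof bytes

def infer_and_prove(model, x, prover) -> Proof:
    """Off-chain: run inference, then prove THIS model produced THIS output on THIS input."""
    y = model(x)
    return prover.prove(model, x, y)      # hypothetical prover API

class VerifierContract:
    """On-chain side: stores only a commitment to the model, never the weights."""
    def __init__(self, model_commitment: str, verifier) -> None:
        self.model_commitment = model_commitment
        self.verifier = verifier          # hypothetical verifier API

    def submit(self, proof: Proof) -> float:
        assert proof.model_commitment == self.model_commitment, "wrong model"
        assert self.verifier.verify(proof), "invalid proof"
        return proof.output               # now safe for the main contract to consume
```

The design point is that the heavy inference never touches the chain; only the cheap proof verification does, which is exactly the asymmetry ZK-Rollups exploit.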

zkML is still in its very early stages and faces technical challenges in converting ML models into ZK circuits, along with high computational and cryptographic overhead. Mirroring the development path of Rollups, opML is the alternative solution from an economic-security perspective: it adopts Arbitrum's AnyTrust assumption, i.e., each claim has at least one honest node, ensuring that the submitter or at least one verifier is honest. However, opML can only serve as an alternative for inference verification and cannot provide privacy protection.
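
For contrast with the zkML sketch above, here is the optimistic flavor in minimal Python form: a result is accepted by default, a challenge window stays open, and any single honest verifier who recomputes a different output can slash the submitter's bond. The window length and dispute handling are simplified assumptions; real opML systems narrow disputes down to a single step via a bisection game.

```python
from dataclasses import dataclass

CHALLENGE_WINDOW = 100   # blocks; illustrative

@dataclass
class Claim:
    submitter: str
    input_hash: str
    claimed_output: float
    bond: float
    submitted_at: int     # block height

def challenge(claim: Claim, verifier_output: float, now: int) -> str:
    """AnyTrust flavor: one honest verifier recomputing the result is enough to catch fraud."""
    if now > claim.submitted_at + CHALLENGE_WINDOW:
        return "window closed; claim is final"
    # opML runs inference in a deterministic VM, so exact equality is meaningful
    if verifier_output != claim.claimed_output:
        claim.bond = 0.0   # slashed; a real system routes this through a bisection game
        return "claim disproved; bond slashed"
    return "claim upheld"
```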

Current projects are building zkML infrastructure and exploring its applications. Building applications matters just as much, because they must clearly demonstrate to crypto users the significant role of zkML and prove that the ultimate value can outweigh the enormous costs. Among these projects, some focus on ZK technology development specific to machine learning (such as Modulus Labs), while others focus on more general ZK infrastructure. Related projects include:

  • Modulus is using zkML to bring artificial intelligence into on-chain inference processes. On February 27th, Modulus launched the zkML prover Remainder, achieving a 180x efficiency improvement over traditional AI inference on equivalent hardware. It is also collaborating with multiple projects to explore practical zkML use cases: with Upshot, it collects complex market data, evaluates NFT prices using AI with ZK proofs, and transmits the prices on-chain; with AI Arena, it proves that the avatar fighting in a match is the same one the player trained.
  • RISC Zero: by running machine learning models in RISC Zero's zkVM, one can prove that the exact computations involved in the model were executed correctly.
  • Ingonyama is developing specialized hardware for ZK technology, which may lower the barrier to entry for the ZK field; zkML could also be applied within the model training process.

5 AI As A Goal

If the previous three categories focus more on how AI empowers Crypto, then “AI as a goal” emphasizes Crypto’s assistance to AI, namely how to utilize Crypto to create better AI models and products. This may include multiple evaluation criteria such as greater efficiency, precision, and decentralization. AI comprises three core elements: data, computing power, and algorithms, and in each dimension, Crypto is striving to provide more effective support for AI:

  • Data: Data serves as the foundation for model training, and decentralized data protocols incentivize individuals or enterprises to provide more private data while using cryptography to safeguard data privacy and prevent the leakage of sensitive personal information.
  • Computing Power: The decentralized computing power track is currently the hottest AI track. Protocols facilitate the matching of supply and demand in the market, promoting the pairing of long-tail computing power with AI enterprises for model training and inference.
  • Algorithms: Crypto’s empowerment of algorithms is the most crucial aspect of achieving decentralized AI, as described in Vitalik Buterin’s article “AI as a Goal.” By creating decentralized and trustworthy black box AI, issues such as adversarial machine learning can be addressed. However, this approach may face significant obstacles such as high cryptographic costs. Additionally, “using cryptographic incentives to encourage the creation of better AI” can be achieved without completely delving into the rabbit hole of cryptography.

The monopolization of data and computing power by large tech companies has led to a monopoly on the model training process, where closed-source models become key profit drivers for these corporations. From an infrastructure perspective, Crypto incentivizes the decentralized supply of data and computing power through economic means. Additionally, it ensures data privacy during the process through cryptographic methods. This serves as the foundation to facilitate decentralized model training, aiming to achieve a more transparent and decentralized AI ecosystem.

5.1 Decentralized Data Protocol

Decentralized data protocols primarily operate through crowdsourcing of data, incentivizing users to provide datasets or data services (such as data labeling) for enterprises to use in model training. They also establish Data Marketplaces to facilitate matching between supply and demand. Some protocols are also exploring incentivizing users through DePIN protocols to acquire browsing data or utilizing users’ devices/bandwidth for web data scraping.

  • Ocean Protocol: Tokenizes data ownership; users can create NFTs for data/algorithms code-free on Ocean Protocol, along with corresponding datatokens that control access to those data NFTs. Ocean ensures data privacy through Compute-to-Data (C2D): users obtain only the output of computations over the data/algorithms, never a full download (a minimal sketch of the C2D idea follows this list). Established in 2017 as a data marketplace, Ocean Protocol naturally boarded the AI bandwagon in the current trend.
  • Synesis One: This project is the Train2Earn platform on Solana, where users earn $SNS rewards by providing natural language data and data labeling. Users support mining by providing data, which is stored and placed on-chain after verification, then used by AI companies for training and inference. Miners are divided into three categories: Architects/Builders/Validators. Architects create new data tasks, Builders provide text data for specific tasks, and Validators verify the datasets provided by Builders. Completed datasets are stored in IPFS and their sources, along with IPFS addresses, are stored in an off-chain database for AI companies (currently Mind AI) to use.

  • Grass: Dubbed the decentralized data layer for AI, Grass is essentially a decentralized web-scraping market that obtains data for AI model training. Internet websites are vital sources of AI training data, and many sites, such as Twitter, Google, and Reddit, hold significant value, but they are continually tightening restrictions on scraping. Grass leverages unused bandwidth in individual networks, using different IP addresses to scrape data from public websites and mitigate data blocking; it performs initial data cleaning and serves as a data source for AI model training. Currently in beta, Grass lets users earn points by providing bandwidth, redeemable against a potential airdrop.

  • AIT Protocol: AIT Protocol is a decentralized data labeling protocol designed to provide developers with high-quality datasets for model training. Web3 enables a global labor force to quickly join the network and earn incentives through data labeling. AIT's data scientists pre-label the data, which users then refine; after quality checks by data scientists, the validated data is provided to developers.
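
Returning to the Compute-to-Data idea flagged under Ocean Protocol above, the following minimal Python sketch shows the core access pattern: approved algorithms travel to the data and only aggregate results leave, so the raw dataset is never downloadable. The class and the hash-based approval list are illustrative assumptions, not Ocean's implementation.

```python
class DataEnclave:
    """Holds the raw dataset; buyers may run approved code against it but never download it."""
    def __init__(self, dataset, approved_hashes: set[str]) -> None:
        self._dataset = dataset            # never exposed directly
        self._approved = approved_hashes   # algorithm hashes whitelisted by the data owner

    def compute(self, algorithm, algorithm_hash: str):
        if algorithm_hash not in self._approved:
            raise PermissionError("algorithm not approved by the data owner")
        return algorithm(self._dataset)    # only the result leaves the enclave

# usage: a buyer submits a mean computation and receives only the aggregate
enclave = DataEnclave(dataset=[3.0, 5.0, 7.0], approved_hashes={"0xabc"})
print(enclave.compute(lambda rows: sum(rows) / len(rows), "0xabc"))   # 5.0
```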

In addition to the data provisioning and data labeling protocols above, established decentralized storage infrastructure such as Filecoin and Arweave will also contribute to a more decentralized data supply.

5.2 Decentralized Computing Power

In the era of AI, the importance of computing power is self-evident. Not only has NVIDIA's stock price soared, but in the crypto world decentralized computing power is arguably the hottest niche in the AI track: of the top 200 AI projects by market capitalization, 5 (Render/Akash/AIOZ Network/Golem/Nosana) focus on decentralized computing power, and all have grown significantly over the past few months. Decentralized computing platforms are also emerging among lower-cap projects; although just getting started, they have quickly gained momentum, especially on the wave of enthusiasm around the NVIDIA conference.

From the characteristics of the track, the basic logic of projects in this direction is highly homogeneous: use token incentives to encourage individuals or enterprises with idle computing resources to supply them, significantly reducing usage costs and establishing a supply-demand market for compute. Currently, the main sources of computing power are data centers, miners (especially after Ethereum's transition to PoS), consumer-grade hardware, and collaborations with other projects. Although homogenized, this is a track where leading projects enjoy high moats; projects compete chiefly on computing resources, leasing prices, utilization rates, and other technical advantages. The leading projects in this track include Akash, Render, io.net, and Gensyn.
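
At its core, each of these protocols runs some variant of two-sided matching between idle GPU offers and compute jobs. The minimal Python sketch below shows the simplest possible version, the cheapest compatible offer within budget; real marketplaces layer on reputation, uptime, and bandwidth constraints, and all names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    gpu: str                # e.g., "A100"
    price_per_hour: float   # in protocol tokens

@dataclass
class Job:
    buyer: str
    gpu: str
    max_price: float
    hours: float

def match(offers: list[Offer], job: Job) -> Offer | None:
    """Cheapest compatible offer within the buyer's budget wins the lease."""
    candidates = [
        o for o in offers
        if o.gpu == job.gpu and o.price_per_hour <= job.max_price
    ]
    return min(candidates, key=lambda o: o.price_per_hour, default=None)
```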

By specific business direction, projects can be roughly divided into AI model inference and AI model training. Because the computing power and bandwidth requirements of training far exceed those of inference, and the inference market is expanding rapidly with more predictable revenue, the vast majority of projects currently focus on inference (Akash, Render, io.net), while Gensyn focuses on training. Notably, Akash and Render were not originally built for AI computing: Akash began as a general-purpose compute marketplace, and Render was primarily used for video and image rendering. io.net was designed specifically for AI computing, but as AI drove demand for compute upward, all of these projects have leaned toward AI.

The two most important competitive indicators remain the supply side (computing resources) and the demand side (compute utilization). Akash has 282 GPUs and over 20,000 CPUs, with more than 160,000 leases completed and GPU network utilization of 50-70%, a good figure for this track. io.net has 40,272 GPUs and 5,958 CPUs, which includes Render's 4,318 GPUs and 159 CPUs and a usage license for 1,024 Filecoin GPUs, among them about 200 H100s and thousands of A100s. io.net is attracting computing resources with extremely high airdrop expectations, and its GPU count is growing rapidly, so its ability to attract resources will need reassessing after its token lists. Render and Gensyn have not disclosed specific figures. Many projects also strengthen both sides of their market through ecosystem collaborations: io.net taps Render's and Filecoin's computing power to boost its own reserves, and Render has established its compute client program (RNP-004), allowing users to access Render's compute indirectly through clients such as io.net, Nosana, FedML, and Beam, thereby quickly expanding from rendering into AI computing.

In addition, verification remains a challenge for decentralized computing: how to prove that workers with computational resources actually executed their tasks correctly. Gensyn is attempting to build such a verification layer, ensuring correctness through probabilistic learning proofs, a graph-based pinpointing protocol, and incentives; validators and reporters jointly inspect computations in Gensyn, so beyond supporting decentralized training, its verification mechanism holds unique value of its own. The computing protocol Fluence, situated on Solana, also emphasizes validation of computing tasks: developers can check the proofs published on-chain by providers to verify that their applications ran as expected and that computations were executed correctly. In practice, however, feasibility still takes priority over trustworthiness: a computing platform must first have enough computational power to be competitive. Excellent verification protocols can, of course, tap computational resources from other platforms, playing their own distinctive role as validation and protocol layers.
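
The verification problem has a simple probabilistic core, sketched below in Python: recompute a random sample of the worker's claimed checkpoints, and a worker who faked a fraction f of them escapes detection with probability roughly (1-f)^k. This is a generic spot-check sketch, not Gensyn's actual protocol, which pinpoints the disputed operation far more efficiently than full recomputation.

```python
import random

def spot_check(claimed: list[bytes], recompute, k: int = 8) -> bool:
    """Recompute k randomly chosen checkpoints of a training/inference task.

    A worker who faked a fraction f of all checkpoints passes with
    probability roughly (1 - f) ** k, which shrinks fast as k grows.
    """
    for i in random.sample(range(len(claimed)), k=min(k, len(claimed))):
        if recompute(i) != claimed[i]:
            return False   # mismatch: raise a dispute and slash the worker's stake
    return True
```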

5.3 Decentralized Model

The ultimate scenario described by Vitalik, as depicted in the diagram below, is still very distant: we cannot yet use blockchain and cryptography to create a trusted black-box AI that defeats adversarial machine learning, and encrypting the entire AI pipeline from training data to query outputs incurs enormous costs. However, some projects are already attempting to use incentives to create better AI models: they first bridge the silos between different models, creating a landscape where models can learn from each other, collaborate, and compete healthily. Bittensor is one of the most representative projects in this regard.

Bittensor: Bittensor is facilitating the integration of various AI models, but it’s important to note that Bittensor itself does not engage in model training; rather, it primarily provides AI inference services. Its 32 subnets focus on different service directions, such as data fetching, text generation, Text2Image, etc. When completing a task, AI models belonging to different directions can collaborate with each other. Incentive mechanisms drive competition between subnets and within subnets. Currently, rewards are distributed at a rate of 1 TAO per block, totaling approximately 7200 TAO tokens per day. The 64 validators in SN0 (Root Network) determine the distribution ratio of these rewards among different subnets based on subnet performance. Subnet validators, on the other hand, determine the distribution ratio among different miners based on their work evaluation. As a result, better-performing services and models receive more incentives, promoting overall improvement in the quality of system inference.
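
The emission arithmetic described above can be made concrete with a small Python sketch: roughly 1 TAO per 12-second block is about 7,200 TAO per day, split across subnets by root-validator weights and within each subnet by validator scores of miners. The weights below are made up, and the sketch ignores the further split between miners, validators, and subnet owners that the real network applies.

```python
DAILY_EMISSION = 7_200.0   # ~1 TAO per 12-second block

def split_emission(subnet_weights: dict[str, float],
                   miner_scores: dict[str, dict[str, float]]) -> dict[str, dict[str, float]]:
    """Two-level pro-rata split: root validators weight subnets,
    subnet validators score the miners inside each subnet."""
    total_w = sum(subnet_weights.values())
    payouts: dict[str, dict[str, float]] = {}
    for subnet, w in subnet_weights.items():
        subnet_emission = DAILY_EMISSION * w / total_w
        scores = miner_scores[subnet]
        total_s = sum(scores.values())
        payouts[subnet] = {m: subnet_emission * s / total_s for m, s in scores.items()}
    return payouts

# e.g., two subnets weighted 60/40; miners split each subnet's share by score
print(split_emission({"text": 0.6, "image": 0.4},
                     {"text": {"m1": 2.0, "m2": 1.0}, "image": {"m3": 1.0}}))
```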

6 Conclusion: Just Meme Hype or a Technological Revolution?

From Sam Altman’s moves driving the skyrocketing prices of ARKM and WLD to the Nvidia conference boosting a series of participating projects, many are adjusting their investment ideas in the AI field. Is the AI field primarily driven by meme speculation or technological revolution?

Apart from a few celebrity themes (such as ARKM and WLD), the overall AI field in crypto seems more like a “meme driven by technological narrative.”

On one hand, the overall speculation in the Crypto AI field is undoubtedly closely linked to the progress of Web2 AI. External hype led by entities like OpenAI will serve as the catalyst for the Crypto AI field. On the other hand, the story of the AI field still revolves around technological narratives. However, it’s crucial to emphasize the “technological narrative” rather than just the technology itself. This underscores the importance of choosing specific directions within the AI field and paying attention to project fundamentals. It’s necessary to find narrative directions with speculative value as well as projects with long-term competitiveness and moats.

Looking at the four potential combinations proposed by Vitalik, we see a balance between narrative charm and feasibility. In the first and second categories, represented by AI applications, we observe many GPT wrappers. While these products are quickly deployed, they also exhibit a high degree of business homogeneity. First-mover advantage, ecosystems, user base, and revenue become the stories told in the context of homogeneous competition. The third and fourth categories represent grand narratives combining AI with crypto, such as Agent on-chain collaboration networks, zkML, and decentralized reshaping of AI. These are still in the early stages, and projects with technological innovation will quickly attract funds, even if they are only in the early stages of implementation.

Disclaimer:

  1. This article is reprinted from [Metrics Ventures]. Forward the Original Title: 'Metrics Ventures研报 | 从V神文章出发,Crypto×AI有哪些值得关注的细分赛道?'. All copyrights belong to the original author [@charlotte0211z, @BlazingKevin_, Metrics Ventures]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.