AI x Web3: Exploring the Emerging Industry Landscape and Future Potential

Intermediate · Jul 29, 2024
AI and Web3 may seem like independent technologies, each based on fundamentally different principles and serving different functions. However, a deeper exploration reveals that these two technologies have the opportunity to balance each other's trade-offs, with their unique strengths complementing and enhancing each other.

Part One

At first glance, AI and Web3 appear to be independent technologies, each based on fundamentally different principles and serving distinct functions. However, a deeper exploration reveals that these two technologies have the potential to balance each other’s trade-offs, with their unique strengths complementing and enhancing one another. Balaji Srinivasan eloquently articulated this concept of complementary capabilities at the SuperAI conference, sparking a detailed comparison of how these technologies interact.

Tokens emerged from a bottom-up approach, rising from the decentralized efforts of anonymous network enthusiasts and evolving over a decade through the collaborative efforts of numerous independent entities worldwide. In contrast, artificial intelligence has been developed through a top-down approach, dominated by a few tech giants that set the pace and dynamics of the industry. The barriers to entry in AI are determined more by resource intensity than by technical complexity.

These two technologies also have fundamentally different natures. Tokens are deterministic systems that produce immutable results: a hash function or a zero-knowledge proof, for example, yields the same output every time. This contrasts sharply with the probabilistic and often unpredictable nature of AI.
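To make this contrast concrete, here is a small Python sketch (illustrative only, using the standard library) that pits a deterministic hash against probabilistic sampling of the kind generative models perform:

```python
import hashlib
import random

# A hash function is fully deterministic: the same input always yields
# the same digest, which is what makes trustless verification possible.
digest_1 = hashlib.sha256(b"AI x Web3").hexdigest()
digest_2 = hashlib.sha256(b"AI x Web3").hexdigest()
assert digest_1 == digest_2  # always holds

# Generative AI is probabilistic: the same prompt can yield different
# outputs, mimicked here by sampling from a toy next-token distribution.
vocabulary = ["yes", "no", "maybe"]
weights = [0.5, 0.3, 0.2]
sample_1 = random.choices(vocabulary, weights=weights)[0]
sample_2 = random.choices(vocabulary, weights=weights)[0]
print(digest_1[:16], sample_1, sample_2)  # samples may differ run to run
```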

Similarly, cryptographic technology excels in validation, ensuring the authenticity and security of transactions and establishing trustless processes and systems, while AI focuses on generation, creating rich digital content. However, ensuring content provenance and preventing identity theft pose challenges in the creation of digital content.

Fortunately, tokens provide a counterpoint to digital abundance—digital scarcity. They offer relatively mature tools that can be applied to AI technologies to ensure content provenance and address identity theft issues.

A notable advantage of tokens is their ability to attract substantial hardware and capital into coordinated networks to serve specific goals. This capability is particularly beneficial for AI, which consumes large amounts of computing power. Mobilizing underutilized resources to provide more affordable computing power can significantly enhance AI efficiency.

By comparing these two technologies, we not only appreciate their individual contributions but also see how they can together pave new paths in technology and economics. Each technology can address the shortcomings of the other, creating a more integrated and innovative future. This blog post aims to explore the emerging AI x Web3 industry landscape, focusing on some new verticals at the intersection of these technologies.

Source: IOSG Ventures

Part Two

2.1 Computing networks

The industry landscape first introduces computing networks, which aim to address the limited supply of GPUs and explore various ways to reduce computing costs. Notable aspects include:

  • Non-uniform GPU interoperability: This ambitious attempt carries high technical risk and uncertainty, but if successful it could create significant scale and impact by making all computing resources interchangeable. The idea is to develop compilers and other prerequisites so that any hardware resource can be plugged in on the supply side, while the non-uniformity of that hardware is abstracted away on the demand side, letting a computing request be routed to any resource in the network (see the routing sketch after this list). This could reduce reliance on CUDA software, which currently dominates among AI developers. Despite the potential benefits, many experts are highly skeptical about the feasibility of this approach.
  • High-performance GPU aggregation: This approach focuses on integrating the world's most sought-after GPUs into a distributed, permissionless network, without tackling interoperability issues between non-uniform GPU resources.
  • Commodity consumer-grade GPU aggregation: This involves aggregating the lower-performance GPUs found in consumer devices, which are among the most underutilized resources on the supply side. It caters to users willing to trade performance and speed for cheaper, longer training runs.
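To illustrate the demand-side abstraction described in the first bullet, here is a minimal, hypothetical routing sketch in Python; the `GpuNode` type, its capability fields, and the cheapest-match rule are our own assumptions for illustration, not any project's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class GpuNode:
    node_id: str
    vram_gb: int          # available GPU memory
    backend: str          # e.g. "cuda", "rocm" - hidden from the requester
    price_per_hour: float

@dataclass
class ComputeRequest:
    min_vram_gb: int
    max_price_per_hour: float

def route(request: ComputeRequest, nodes: list[GpuNode]) -> GpuNode | None:
    """Pick the cheapest node meeting the request, regardless of backend.

    The requester never names a vendor or software stack; a compiler layer
    is assumed to translate the workload to whatever the chosen node runs.
    """
    eligible = [n for n in nodes
                if n.vram_gb >= request.min_vram_gb
                and n.price_per_hour <= request.max_price_per_hour]
    return min(eligible, key=lambda n: n.price_per_hour, default=None)

nodes = [
    GpuNode("a", vram_gb=80, backend="cuda", price_per_hour=2.50),
    GpuNode("b", vram_gb=24, backend="rocm", price_per_hour=0.60),
    GpuNode("c", vram_gb=24, backend="metal", price_per_hour=0.45),
]
print(route(ComputeRequest(min_vram_gb=24, max_price_per_hour=1.0), nodes))  # node "c"
```

The point is that matching happens purely on abstract capabilities and price, never on vendor identity.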

2.2 Training and inference

Computing networks serve two main functions: training and inference. Demand for these networks comes from both Web 2.0 and Web 3.0 projects. In the Web 3.0 space, projects like Bittensor utilize computing resources for model fine-tuning. On the inference side, Web 3.0 projects emphasize the verifiability of the process. This focus has led to the emergence of verifiable inference as a market vertical, with projects exploring how to integrate AI inference into smart contracts while upholding decentralization principles.
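As a rough illustration of the verifiable-inference idea, the hypothetical sketch below commits to a model's weights with a hash and attaches that commitment to every inference result, so a smart contract holding the expected commitment could reject outputs from the wrong model. The scheme and every name in it are illustrative assumptions; production systems lean on zero-knowledge or optimistic fraud proofs to verify the computation itself:

```python
import hashlib

def commit_to_model(weights: bytes) -> str:
    """Commitment to the exact weights (a plain SHA-256 here; real systems
    use zk proofs or optimistic fraud proofs to verify the computation too)."""
    return hashlib.sha256(weights).hexdigest()

def run_inference(weights: bytes, prompt: str) -> dict:
    output = f"answer to: {prompt}"  # stand-in for a real model call
    return {"prompt": prompt, "output": output,
            "model_commitment": commit_to_model(weights)}

def verify(result: dict, expected_commitment: str) -> bool:
    """A contract storing expected_commitment on-chain could accept only
    results whose commitment matches the registered model."""
    return result["model_commitment"] == expected_commitment

weights = b"placeholder-model-weights"
expected = commit_to_model(weights)
result = run_inference(weights, "what is 2 + 2?")
print(verify(result, expected))  # True
```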

2.3 Intelligent agent platform

Next is the intelligent agent platform. The core issues that startups in this category need to address are:

  • Agent interoperability, discovery, and communication: agents can discover and communicate with one another.
  • Agent cluster building and management: agents can form clusters and manage other agents.
  • AI agent ownership and marketplaces: providing ownership of, and markets for, AI agents.

These features emphasize the importance of flexible, modular systems that can be seamlessly integrated into a variety of blockchain and AI applications. AI agents have the potential to revolutionize how we interact with the internet, and we believe agents will rely on infrastructure to support their operations. We envision AI agents depending on infrastructure in the following ways (a staking-and-slashing sketch follows this list):

  • Accessing real-time web data through a distributed crawling network
  • Conducting inter-agent payments through DeFi channels
  • Requiring economic deposits, not only to penalize misconduct but also to enhance agent discoverability (i.e., using deposits as economic signals during discovery)
  • Using consensus to decide which events should lead to slashing
  • Adopting open interoperability standards and agent frameworks to support building composable collectives
  • Evaluating past performance against immutable data histories and selecting suitable agent collectives in real time
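As a toy illustration of the deposit mechanics above, the sketch below registers agents with staked deposits, ranks them by stake during discovery, and slashes a stake once a quorum of distinct reporters agrees on misconduct. The class, the quorum rule, and the 50% slash are all illustrative assumptions:

```python
class AgentRegistry:
    """Toy registry where deposits double as discovery signals and slashable bonds."""

    def __init__(self, quorum: int = 3):
        self.stakes: dict[str, float] = {}
        self.reports: dict[str, set[str]] = {}
        self.quorum = quorum  # distinct reporters required before slashing

    def register(self, agent_id: str, deposit: float) -> None:
        self.stakes[agent_id] = deposit
        self.reports[agent_id] = set()

    def discover(self, top_n: int = 5) -> list[str]:
        # Larger deposits rank higher: staking more is a costly signal of honesty.
        return sorted(self.stakes, key=self.stakes.get, reverse=True)[:top_n]

    def report_misconduct(self, agent_id: str, reporter_id: str) -> None:
        self.reports[agent_id].add(reporter_id)
        if len(self.reports[agent_id]) >= self.quorum:  # "consensus" = quorum here
            self.stakes[agent_id] *= 0.5  # slash half the bond (arbitrary rule)
            self.reports[agent_id].clear()

registry = AgentRegistry()
registry.register("agent-alpha", deposit=100.0)
registry.register("agent-beta", deposit=40.0)
print(registry.discover())  # ['agent-alpha', 'agent-beta']
for reporter in ("r1", "r2", "r3"):
    registry.report_misconduct("agent-beta", reporter)
print(registry.stakes["agent-beta"])  # 20.0 after the slash
```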

Source: IOSG Ventures

2.4 Data layer

In the integration of AI and Web3, data is a core component. Data is a strategic asset in AI competition, a key resource alongside compute. Yet this category is often overlooked, as most industry attention stays on the computing layer. In reality, Web3 primitives open up many interesting directions for value creation in data acquisition, mainly along the following two high-level directions:

  • Accessing public internet data
  • Accessing protected data

Accessing public internet data: This direction aims to build a distributed crawler network that can crawl the entire internet within a few days, acquiring massive datasets or accessing very specific internet data in real time. However, crawling large datasets places heavy demands on the network: at least a few hundred nodes are needed before meaningful work can begin. Fortunately, Grass, a distributed crawler node network, already has over 2 million nodes actively sharing internet bandwidth with the network, with the goal of crawling the entire internet. This demonstrates the great potential of economic incentives to attract valuable resources.
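For intuition, here is a minimal, hypothetical sketch of spreading a crawl frontier across bandwidth-sharing nodes; it is not Grass's actual protocol, and the round-robin assignment is a simplifying assumption:

```python
import itertools

def assign_crawl_jobs(urls: list[str], node_ids: list[str]) -> dict[str, list[str]]:
    """Spread a crawl frontier across nodes round-robin.

    A real network would also weight assignments by node bandwidth, location,
    and reputation, and verify returned content before rewarding nodes.
    """
    assignments: dict[str, list[str]] = {n: [] for n in node_ids}
    for url, node in zip(urls, itertools.cycle(node_ids)):
        assignments[node].append(url)
    return assignments

frontier = [f"https://example.com/page/{i}" for i in range(7)]
for node, jobs in assign_crawl_jobs(frontier, ["node-1", "node-2", "node-3"]).items():
    print(node, len(jobs), jobs)
```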

Although Grass levels the playing field for public data, the challenge of tapping latent data, namely access to proprietary datasets, remains. A large amount of data is still stored privately because of its sensitive nature. Many startups are building cryptographic tools that let AI developers build and fine-tune large language models on proprietary datasets while keeping the sensitive information private.

Technologies such as federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and multi-party computation offer different levels of privacy protection and different trade-offs. Bagel's research paper provides an excellent overview of these technologies. They not only protect data privacy during the machine learning process but also enable comprehensive privacy-preserving AI solutions at the computing layer.
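As one concrete instance from this toolbox, the sketch below computes an ε-differentially private mean using the Laplace mechanism; the toy dataset, clipping bounds, and ε value are illustrative assumptions:

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) sample, as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_mean(values: list[float], lower: float, upper: float, epsilon: float) -> float:
    """ε-differentially private mean of a bounded dataset (Laplace mechanism).

    Clipping each record to [lower, upper] caps any one individual's influence
    on the mean at (upper - lower) / n, the sensitivity that calibrates the noise.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

salaries = [52_000, 61_500, 48_200, 75_000, 58_300]  # toy sensitive dataset
print(dp_mean(salaries, lower=0, upper=100_000, epsilon=1.0))  # noisy, private mean
```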

2.5 Data and model provenance

Data and model provenance technologies aim to establish processes that assure users they are interacting with the intended models and data, and they provide guarantees of authenticity and provenance. For instance, watermarking, a type of model provenance technology, embeds a signature directly into the machine learning model, more specifically into its weights, so that at retrieval time it can be verified whether an inference originated from the intended model.
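Here is a minimal sketch of the weight-watermarking idea: derive a deterministic perturbation from a secret key, add it to the weights, and later test for its presence by correlation. The key-to-pattern scheme, amplitude, and detection threshold are illustrative assumptions, not a production watermarking method:

```python
import hashlib
import random
import struct

def watermark_pattern(secret_key: bytes, length: int, amplitude: float = 1e-2) -> list[float]:
    """Expand a secret key into a deterministic ±amplitude pattern."""
    pattern: list[float] = []
    counter = 0
    while len(pattern) < length:
        digest = hashlib.sha256(secret_key + struct.pack(">I", counter)).digest()
        for byte in digest:
            for i in range(8):
                pattern.append(amplitude if (byte >> i) & 1 else -amplitude)
        counter += 1
    return pattern[:length]

def embed(weights: list[float], secret_key: bytes) -> list[float]:
    """Add the key-derived pattern to the weights."""
    return [w + p for w, p in zip(weights, watermark_pattern(secret_key, len(weights)))]

def detect(weights: list[float], secret_key: bytes, threshold: float = 0.5) -> bool:
    """Correlate weights with the key's pattern; marked weights score near 1."""
    pattern = watermark_pattern(secret_key, len(weights))
    score = sum(w * p for w, p in zip(weights, pattern))
    return score / sum(p * p for p in pattern) > threshold

random.seed(0)
base = [random.gauss(0.0, 0.02) for _ in range(1024)]  # toy "model weights"
key = b"model-owner-secret"
print(detect(embed(base, key), key))  # expected: True (watermark present)
print(detect(base, key))              # expected: False (clean weights)
```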

2.6 Applications

In terms of applications, the design possibilities are limitless. In the industry landscape above, we have listed some of the most anticipated use cases for AI technology in the Web 3.0 field. Since these use cases are mostly self-explanatory, we will not comment on them further. It is worth noting, however, that the intersection of AI and Web 3.0 has the potential to reshape many verticals within the field, as these new primitives give developers more freedom to create innovative use cases and optimize existing ones.

Part Three

Summary

The integration of AI and Web3 brings a landscape full of innovation and potential. By leveraging the unique advantages of each technology, we can address various challenges and open up new technological pathways. As we explore this emerging industry, the synergy between AI and Web3 can drive progress, reshape our future digital experiences, and transform how we interact online.

The fusion of digital scarcity and digital abundance, the mobilization of underutilized resources to achieve computational efficiency, and the establishment of secure, privacy-protecting data practices will define the era of next-generation technological evolution.

However, we must recognize that this industry is still in its infancy, and the current landscape may quickly become outdated. The rapid pace of innovation means that today’s cutting-edge solutions might soon be replaced by new breakthroughs. Nevertheless, the fundamental concepts discussed—such as computational networks, agent platforms, and data protocols—highlight the immense possibilities of integrating AI with Web3.

Disclaimer:

  1. This article is reproduced from [深潮TechFlow], and the copyright belongs to the original author [IOSG Ventures]. If you have any objections to the reprint, please contact the Gate Learn team, which will handle the matter promptly according to the relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of the article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated articles may not be reproduced, distributed, or plagiarized.
