The Next Wave of Narratives in the Crypto AI Sector

Intermediate · Jun 04, 2024
Alex Xu, a research partner at Mint Ventures, analyses emerging narratives in the burgeoning crypto AI sector, discussing the catalytic pathways and logic behind these narratives, relevant project targets, as well as risks and uncertainties.

Introduction

As of now, this crypto bull market cycle has been the most lackluster in terms of commercial innovation, lacking the phenomenal hot tracks such as DeFi, NFT, and GameFi seen in the previous bull market. As a result, the market as a whole lacks industry hotspots, and growth in users, industry investment, and developers remains sluggish.

This stagnation is also reflected in current asset prices. Throughout the cycle, most altcoins have continued to lose value against BTC, including ETH. After all, the valuation of smart contract platforms is determined by the prosperity of applications. When innovation in application development is lackluster, the valuation of public blockchains is hard to elevate.

AI, as a relatively new commercial category in this cycle, still has the potential to bring considerable incremental attention to crypto AI sector projects, thanks to the explosive development speed and continuous hot topics in the external commercial world.

In the IO.NET report released by the author in April, the necessity of combining AI with Crypto was outlined. The advantages of crypto-economic solutions in terms of determinism, resource mobilization and allocation, and trustlessness could potentially address the three challenges of AI: randomness, resource intensity, and difficulty in distinguishing between humans and machines.

For the AI sector of the crypto economy, the author attempts in this follow-up article to discuss and deduce several important issues, including:

  • Emerging or potentially explosive narratives in the crypto AI sector
  • Catalytic pathways and logic behind these narratives
  • Relevant project targets associated with these narratives
  • Risks and uncertainties in narrative deduction

This article reflects the author’s thoughts as of the publication date, which may change in the future. The viewpoints are highly subjective and may contain errors in facts, data, and reasoning logic. Please do not take this as investment advice. Criticisms and discussions from peers are welcome.

Let’s get down to business.

The next wave of narratives in the crypto AI track

Before officially introducing the next wave of narratives in the crypto AI track, let’s first take a look at the main narratives of the current crypto AI. From the perspective of market value, those with more than 1 billion US dollars are:

  • Computing power: Render (RNDR, circulating market cap of $3.85 billion), Akash (circulating market cap of $1.2 billion), IO.NET (valued at $1 billion in its latest primary financing round)
  • Algorithm networks: Bittensor (TAO, circulating market cap of $2.97 billion)
  • AI agents: Fetch.ai (FET, market cap of $2.1 billion before the merger)

*Data as of May 24, 2024; all figures in US dollars.

Apart from the aforementioned sectors, which will be the next AI sector with a single project market value exceeding $1 billion?

The author believes that it can be speculated from two perspectives: the narrative of the “industrial supply side” and the narrative of the “GPT moment.”

First Perspective on AI Narrative: Opportunities in Energy and Data Sectors Behind AI from the Industrial Supply Side

From the industrial supply side, there are four driving forces for AI development:

  • Algorithms: High-quality algorithms can execute training and inference tasks more efficiently.
  • Computing Power: Both model training and inference require computing power provided by GPU hardware. This is the current primary bottleneck in the industry, as a chip shortage has led to high prices for mid-to-high-end chips.
  • Energy: AI data centers consume significant energy. Besides the electricity needed to power GPUs, cooling systems for large data centers can account for about 40% of total energy consumption.
  • Data: Improving large model performance requires expanding training parameters, which means a massive demand for high-quality data.

Among these four driving forces, there are crypto projects with a circulating market value exceeding $1 billion in the algorithm and computing power sectors. However, projects with similar market value have yet to appear in the energy and data fields.

In reality, the supply shortages of energy and data may soon emerge as new industry hotspots, potentially driving a surge in related crypto projects. Let’s start with energy.

On February 29, 2024, Elon Musk mentioned at the Bosch ConnectedWorld 2024 conference: “I predicted the chip shortage over a year ago. The next shortage will be electricity. I think there won’t be enough power to run all the chips next year.”

Looking at specific data, the AI Index Report published annually by the Stanford Institute for Human-Centered Artificial Intelligence, led by Fei-Fei Li, assessed in its 2022 report on the 2021 AI industry that AI’s energy consumption was only 0.9% of global electricity demand, posing limited pressure on energy and the environment. In 2023, the International Energy Agency (IEA) summarized that in 2022, global data centers consumed approximately 460 terawatt-hours (TWh) of electricity, accounting for 2% of global electricity demand. They predicted that by 2026, the global data center energy consumption would be at least 620 TWh and could reach up to 1050 TWh.

However, the IEA’s estimates are still conservative, as many AI projects are about to launch, with energy demands far exceeding their 2023 projections.

For example, Microsoft and OpenAI are planning the Stargate project. This project, expected to start in 2028 and be completed around 2030, aims to build a supercomputer with millions of dedicated AI chips, providing unprecedented computing power for OpenAI, particularly for its research in artificial intelligence and large language models. The project is expected to cost over $100 billion, 100 times the current cost of large data centers.

The energy consumption of the Stargate project alone is estimated to be 50 terawatt-hours.
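
To put these figures in rough perspective, here is a quick back-of-the-envelope calculation using only the numbers cited above (it holds global electricity demand at its 2022 level, which understates the denominator for 2026):

```python
# Back-of-the-envelope check using the figures cited above (2022 IEA data).
datacenter_2022_twh = 460          # global data-center consumption, 2022
share_of_global = 0.02             # ~2% of global electricity demand

implied_global_demand_twh = datacenter_2022_twh / share_of_global
print(f"Implied global electricity demand (2022): ~{implied_global_demand_twh:,.0f} TWh")
# -> ~23,000 TWh

# IEA's 2026 range for data centers
low_2026, high_2026 = 620, 1050
print(f"2026 data-center share of that demand: "
      f"{low_2026 / implied_global_demand_twh:.1%} - {high_2026 / implied_global_demand_twh:.1%}")

# Stargate alone, as estimated in the text
stargate_twh = 50
print(f"Stargate vs. 2022 data-center total: {stargate_twh / datacenter_2022_twh:.1%}")
# -> roughly 11% of all electricity used by data centers worldwide in 2022
```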

Due to this, OpenAI’s founder Sam Altman stated at the Davos Forum in January this year: “Future artificial intelligence needs an energy breakthrough because AI will consume far more electricity than people expect.”

Following computing power and energy, the next area of shortage in the rapidly growing AI industry is likely to be data.

Or rather, the shortage of high-quality data required by AI has already become a reality.

From the evolution of GPT, humans have basically grasped the growth pattern of large language model capabilities—by expanding model parameters and training data, the model’s capabilities can be exponentially improved—and this process currently shows no short-term technical bottleneck.

However, the issue is that high-quality and publicly available data may become increasingly scarce in the future. AI products might face supply and demand conflicts for data similar to those for chips and energy.

The first is an increase in disputes over data ownership.

On December 27, 2023, The New York Times filed a lawsuit against OpenAI and Microsoft in the U.S. District Court, accusing them of using millions of its articles without permission to train the GPT model. The lawsuit demands billions of dollars in statutory and actual damages for the “illegal copying and use of works of unique value” and calls for the destruction of all models and training data containing The New York Times’ copyrighted material.

At the end of March, The New York Times issued a new statement targeting not only OpenAI but also Google and Meta. The statement claimed that OpenAI transcribed a large number of YouTube videos into text using a speech recognition tool called Whisper, then used the text to train GPT-4. The New York Times asserted that it has become common practice for big companies to use sneaky methods to train AI models, pointing out that Google has also been converting YouTube video content into text for training its own large models, which essentially infringes on the rights of video content creators.

The lawsuit between The New York Times and OpenAI, labeled as the “first AI copyright case,” is complex and has far-reaching implications for the future of content and the AI industry. Given the complexity of the case and its potential impact, a quick resolution is unlikely. One possible outcome is an out-of-court settlement, with wealthy companies like Microsoft and OpenAI paying substantial compensation. However, future data copyright disputes will inevitably raise the overall cost of high-quality data.

Additionally, as the world’s largest search engine, Google has revealed that it is considering charging fees for its search functionality. The charges would not target the general public but rather AI companies.


Source: Reuters

Google’s search engine servers store an enormous amount of content; it could even be said that Google has stored everything that has appeared on web pages since the turn of the 21st century. Current AI-driven search products, such as Perplexity overseas and Kimi and Secret Tower in China, all process search results through AI before outputting them to users. If search engines begin charging AI companies, the cost of data acquisition will inevitably rise.

In fact, in addition to public data, AI giants are also eyeing non-public internal data.

Photobucket is an established image and video hosting site that had 70 million users and nearly half of the U.S. online photo market in the early 2000s. With the rise of social media, its user base has shrunk dramatically: only about 2 million active users remain, each paying a high fee of US$399 per year. Under the terms of service and privacy policy users agree to at registration, accounts unused for more than a year are reclaimed, and Photobucket retains the right to use the photos and videos users have uploaded. Photobucket CEO Ted Leonard has revealed that the company’s 1.3 billion photos and videos are extremely valuable for training generative AI models. He is in talks with multiple technology companies to sell the data, with offers ranging from 5 cents to $1 per photo and more than $1 per video, and he estimates that the data Photobucket can provide is worth more than $1 billion.
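
As a rough sanity check on that valuation, the per-item offers quoted above can be combined with an assumed photo/video split (the article only gives the 1.3 billion combined total, so the split below is hypothetical):

```python
# Rough valuation check using the per-item offers quoted above.
# The photo/video split is an assumption for illustration; the article only
# gives a combined total of 1.3 billion items.
total_items = 1_300_000_000
assumed_video_share = 0.05                      # hypothetical: 5% videos, 95% photos

photos = total_items * (1 - assumed_video_share)
videos = total_items * assumed_video_share

low_estimate  = photos * 0.05 + videos * 1.00   # $0.05/photo, $1/video (lower bound)
high_estimate = photos * 1.00 + videos * 1.00   # $1/photo, using $1 as a floor for video

print(f"Low estimate:  ${low_estimate/1e9:.2f} B")
print(f"High estimate: ${high_estimate/1e9:.2f}+ B")
# The low end is on the order of $100M+; the high end comfortably exceeds
# the ~$1 billion figure cited by Photobucket's CEO.
```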

EPOCH, a research team focused on AI development trends, published a 2022 report on the data required for machine learning, titled “Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning”, based on trends in data usage and new data generation, and taking into account the growth of computing resources. The report concluded that high-quality text data will be exhausted sometime between February 2023 and 2026, and image data between 2030 and 2060. If the efficiency of data utilization cannot be significantly improved, or new data sources do not emerge, the current trend of large machine learning models relying on massive datasets may slow down.

Judging from the current situation, in which AI giants are purchasing data at high prices, free high-quality text data has essentially been exhausted; EPOCH’s prediction from two years ago appears to have been fairly accurate.

At the same time, solutions to the demand for “AI data shortage” are also emerging, namely: AI data provision services.

Defined.ai is a company that provides customized, authentic, high-quality data to AI companies.

Examples of data types that Defined.ai can provide: https://www.defined.ai/datasets

Its business model works as follows: AI companies submit their data requirements to Defined.ai. For image quality, for instance, resolution should be as high as possible, free of blurring and overexposure, and the content must be authentic. For content, AI companies can specify themes tied to their own training tasks, such as night scenes featuring traffic cones, parking lots, and signs, to improve AI recognition rates at night. The public can then take on these photo-collection tasks; the company reviews the submissions, and payment is settled based on the number of items that meet the requirements. Prices are roughly US$1-2 for a high-quality photo, US$5-7 for a short video of more than ten seconds, US$100-300 for a high-quality video of more than ten minutes, and US$1 per thousand words of text. Contributors who take on the subcontracted tasks receive about 20% of the fee. Data provision may become another crowdsourcing business following “data labeling”.
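
A minimal sketch of how such a crowdsourced payout might be computed, using the indicative prices above; the price-table midpoints, function name, and 20% contributor-share logic are illustrative, not Defined.ai’s actual system:

```python
# Illustrative payout calculator for a Defined.ai-style data crowdsourcing task.
# Prices are the indicative figures quoted above; the structure is hypothetical.

PRICE_TABLE = {
    "photo": 1.5,          # ~$1-2 per high-quality photo (midpoint)
    "short_video": 6.0,    # ~$5-7 per 10s+ clip (midpoint)
    "long_video": 200.0,   # ~$100-300 per 10min+ video (midpoint)
    "text_per_1k_words": 1.0,
}
CONTRIBUTOR_SHARE = 0.20   # contributors reportedly keep ~20% of the fee


def contributor_payout(accepted_items: dict) -> float:
    """Return the contributor's payout for items that passed review."""
    total = 0.0
    for item_type, count in accepted_items.items():
        total += PRICE_TABLE[item_type] * count
    return total * CONTRIBUTOR_SHARE


# Example: 40 accepted night-scene photos and 3 accepted short clips.
print(f"Payout: ${contributor_payout({'photo': 40, 'short_video': 3}):.2f}")
# -> Payout: $15.60
```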

Globally distributed crowdsourced tasks, economic incentives, the pricing and circulation of data assets, and privacy protection, all open to everyone, sound particularly well suited to a Web3 business paradigm.

AI Narrative Targets from the Industrial Supply Side

The attention brought by the chip shortage has permeated the crypto industry, making distributed computing power the hottest and highest market-cap AI track so far.

So, if the supply and demand conflicts in the AI industry’s energy and data sectors were to explode in the next 1-2 years, what narrative-related projects are currently present in the crypto industry?

Energy-related Targets

Energy-related projects that have been listed on major centralized exchanges (CEX) are rare, with Power Ledger (token: POWR) being the only notable example.

Power Ledger, established in 2017, is a blockchain-based comprehensive energy platform aimed at decentralizing energy trading. It promotes direct electricity transactions between individuals and communities, supports the widespread application of renewable energy, and ensures transparency and efficiency through smart contracts. Initially, Power Ledger operated on a consortium chain derived from Ethereum. In the second half of 2023, Power Ledger updated its whitepaper and launched its own comprehensive public chain, which is based on Solana’s technical framework to handle high-frequency micro-transactions in the distributed energy market. Currently, Power Ledger’s main businesses include:

  • Energy Trading: Allows users to buy and sell electricity directly, especially from renewable sources.
  • Environmental Product Trading: Facilitates trading in carbon credits and renewable energy certificates, as well as financing based on environmental products.
  • Public Chain Operation: Attracts application developers to build on the Power Ledger blockchain, with transaction fees paid in POWR tokens.

As of now, Power Ledger’s circulating market cap is $170 million, with a fully diluted market cap of $320 million.
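
For illustration, here is a minimal, hypothetical sketch of what settling a peer-to-peer electricity trade might look like in this model; the fee rate and structure are invented and are not Power Ledger’s actual contract logic:

```python
# Minimal, hypothetical sketch of a peer-to-peer electricity trade settlement,
# in the spirit of the energy-trading model described above. This is NOT
# Power Ledger's actual contract logic; names and fee values are illustrative.
from dataclasses import dataclass


@dataclass
class EnergyOffer:
    seller: str
    kwh: float
    price_per_kwh: float   # quoted in a stable unit, e.g. USD


def settle_trade(offer: EnergyOffer, buyer: str, kwh_bought: float,
                 network_fee_rate: float = 0.01):
    """Match a buyer against an offer and split the cost into the seller's
    payment and a network fee (which, on Power Ledger, would be paid in POWR)."""
    kwh = min(kwh_bought, offer.kwh)
    gross = kwh * offer.price_per_kwh
    fee = gross * network_fee_rate
    return {
        "buyer": buyer,
        "seller": offer.seller,
        "kwh": kwh,
        "seller_receives": gross - fee,
        "network_fee": fee,
    }


print(settle_trade(EnergyOffer("rooftop_solar_A", kwh=10, price_per_kwh=0.12),
                   buyer="household_B", kwh_bought=8))
# -> household_B buys 8 kWh for $0.96, of which ~$0.01 goes to the network.
```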

Data-related Targets

Compared to energy-related crypto targets, the data track has a richer variety of crypto targets. Here are the data track projects that I am currently watching, all of which are listed on at least one of the major CEXs such as Binance, OKX, or Coinbase, arranged in ascending order of their fully diluted valuation (FDV):

  1. Streamr – DATA

Value Proposition: Streamr aims to build a decentralized real-time data network that allows users to freely trade and share data while retaining full control over their data. Through its data marketplace, Streamr seeks to enable data producers to directly sell data streams to interested consumers without intermediaries, thereby reducing costs and increasing efficiency.

Source: https://streamr.network/hub/projects

In one practical collaboration, Streamr partnered with DIMO, another Web3 project built around in-vehicle hardware. DIMO hardware sensors installed in vehicles collect metrics such as temperature and air pressure, forming weather data streams that are transmitted to organizations that need them.
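
A hypothetical sketch of the sensor-to-stream pipeline this kind of collaboration implies; the stream id and payload schema are invented for illustration and are not Streamr’s or DIMO’s actual APIs:

```python
# Hypothetical sketch of a vehicle-sensor-to-data-stream pipeline, in the
# spirit of the Streamr x DIMO collaboration described above. The stream id
# and payload schema are invented for illustration only.
import time

weather_stream = []   # stand-in for a decentralized pub/sub stream


def publish_reading(vehicle_id: str, temperature_c: float, pressure_hpa: float):
    """Package one in-vehicle sensor reading and publish it to the stream."""
    message = {
        "stream": "vehicle-weather/helsinki",   # hypothetical stream id
        "vehicleId": vehicle_id,
        "timestamp": int(time.time()),
        "temperatureC": temperature_c,
        "pressureHpa": pressure_hpa,
    }
    weather_stream.append(message)              # real code would publish to the network
    return message


publish_reading("vehicle-42", temperature_c=18.5, pressure_hpa=1012.3)
print(f"{len(weather_stream)} message(s) in stream:", weather_stream[-1])
```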

Compared to other data projects, Streamr focuses more on IoT and hardware sensor data. Besides the aforementioned DIMO vehicle data, other projects include real-time traffic data streams in Helsinki. Due to this focus, Streamr’s project token, DATA, experienced a surge, doubling in value in a single day last December when the DePIN concept was at its peak.

Currently, Streamr’s circulating market cap is $44 million, with a fully diluted market cap of $58 million.

  2. Covalent – CQT

Unlike other data projects, Covalent provides blockchain data. The Covalent network reads data from blockchain nodes via RPC, processes and organizes this data, creating an efficient query database. This allows Covalent’s users to quickly retrieve the information they need without performing complex queries directly from blockchain nodes. This service is known as “blockchain data indexing.”
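
At its simplest, “blockchain data indexing” means pulling raw blocks from a node over JSON-RPC and reshaping them into a queryable structure. The sketch below uses standard Ethereum JSON-RPC methods with a placeholder endpoint; Covalent’s production pipeline is of course far more elaborate:

```python
# Minimal sketch of blockchain data indexing: pull raw blocks over JSON-RPC
# and reshape them into a simple queryable index. The RPC URL is a placeholder.
import json
import urllib.request

RPC_URL = "https://eth.example-rpc.invalid"   # replace with a real Ethereum RPC endpoint


def rpc_call(method: str, params: list):
    body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": method, "params": params})
    req = urllib.request.Request(RPC_URL, data=body.encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]


def index_block(block_number: int) -> dict:
    """Fetch one block (with full transactions) and index transfers by sender."""
    block = rpc_call("eth_getBlockByNumber", [hex(block_number), True])
    by_sender = {}
    for tx in block["transactions"]:
        by_sender.setdefault(tx["from"], []).append({
            "to": tx["to"],
            "value_wei": int(tx["value"], 16),
        })
    return by_sender


# Example (requires a working RPC endpoint):
# print(index_block(19_000_000))
```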

Covalent’s clients are primarily B2B, including Dapp projects like various DeFi applications, as well as many centralized crypto companies such as ConsenSys (the parent company of MetaMask), CoinGecko (a well-known crypto asset tracking site), Rotki (a tax tool), and Rainbow (a crypto wallet). Additionally, traditional financial giants like Fidelity and the Big Four accounting firm EY are also Covalent’s clients. According to Covalent’s official disclosures, the project’s revenue from data services has already surpassed that of the leading project in the same field, The Graph.

The Web3 industry, due to the completeness, openness, authenticity, and real-time nature of on-chain data, is poised to become a valuable source of high-quality data for specific AI scenarios and “small AI models.” As a data provider, Covalent has begun supplying data for various AI scenarios and has launched verifiable structured data specifically for AI.

Source: https://www.covalenthq.com/solutions/decentralized-ai/

For example, Covalent provides data to SmartWhales, an on-chain intelligent trading platform that uses AI to identify profitable trading patterns and addresses, while Entendre Finance uses Covalent’s structured data and AI processing for real-time insights, anomaly detection, and predictive analysis.

At present, the main scenarios for the on-chain data services provided by Covalent are still financial. However, with the generalization of Web3 products and data types, the usage scenarios of on-chain data will also be further expanded.

Covalent’s current circulating market cap is $150 million, and its fully diluted market cap is $235 million. Compared with The Graph, a blockchain data indexing project in the same track, it has a relatively clear valuation advantage.

  3. Hivemapper – HONEY

Among all types of data, video data often commands the highest unit price. Hivemapper can provide AI companies with data that includes video and map information. Hivemapper itself is a decentralized global mapping project that aims to create a detailed, dynamic, and accessible mapping system through blockchain technology and community contributions. Participants capture map data with a dashcam, add it to the open-source Hivemapper data network, and receive rewards in the project token HONEY based on their contributions. To improve network effects and reduce interaction costs, Hivemapper is built on Solana.
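
As an illustration of contribution-based rewards, here is a hypothetical sketch of an epoch reward split that favors never-mapped or stale road segments; the weighting scheme is invented and is not Hivemapper’s actual HONEY emission logic:

```python
# Hypothetical sketch of a contribution-weighted reward split for a
# crowdsourced mapping network like the one described above. The weighting
# scheme (favoring never-mapped or stale road segments) is invented for
# illustration; it is not Hivemapper's actual HONEY emission logic.
def reward_weights(segments: list[dict]) -> dict:
    """Weight each contributor's submissions: new segments count most,
    refreshes of stale coverage count more than re-mapping fresh roads."""
    weights = {}
    for seg in segments:
        if seg["first_time_mapped"]:
            w = 3.0
        elif seg["days_since_last_capture"] > 30:
            w = 1.5
        else:
            w = 0.5
        weights[seg["contributor"]] = weights.get(seg["contributor"], 0.0) + w
    return weights


def distribute(epoch_rewards: float, segments: list[dict]) -> dict:
    weights = reward_weights(segments)
    total = sum(weights.values())
    return {c: epoch_rewards * w / total for c, w in weights.items()}


example = [
    {"contributor": "alice", "first_time_mapped": True,  "days_since_last_capture": 0},
    {"contributor": "bob",   "first_time_mapped": False, "days_since_last_capture": 45},
    {"contributor": "bob",   "first_time_mapped": False, "days_since_last_capture": 2},
]
print(distribute(1000.0, example))   # e.g. {'alice': 600.0, 'bob': 400.0}
```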

Hivemapper, founded in 2015, initially aimed to create maps using drones. However, it soon realized that this model was difficult to scale, prompting a shift to using dashcams and smartphones to capture geographic data, significantly reducing the cost of global map production.

Compared to street view and mapping software like Google Maps, Hivemapper uses an incentivized network and crowdsourcing model to expand map coverage more efficiently, maintain the freshness of real-world maps, and improve video quality.

Before the AI-driven demand for data surged, Hivemapper’s primary clients included the automotive industry’s autonomous driving departments, navigation service companies, governments, insurance companies, and real estate firms. Today, Hivemapper can provide extensive road and environmental data for AI and large models through APIs. By continually updating streams of images and road feature data, AI and ML models can better translate this data into improved capabilities, performing tasks related to geographic location and visual judgment.


Data source: https://hivemapper.com/blog/diversify-ai-computer-vision-models-with-global-road-imagery-map-data/

Currently, Hivemapper’s HONEY has a circulating market cap of $120 million and a fully diluted market cap (FDV) of $496 million.

In addition to the three projects mentioned above, the data area also includes:

The Graph – GRT: With a circulating market cap of $3.2 billion and an FDV of $3.7 billion, The Graph provides blockchain data indexing services similar to Covalent’s.

Ocean Protocol – OCEAN: With a circulating market cap of $670 million and an FDV of $1.45 billion, Ocean Protocol is an open-source protocol aimed at facilitating the exchange and monetization of data and data-related services. It connects data consumers with data providers to share data while ensuring trust, transparency, and traceability. This project is set to merge with Fetch.ai and SingularityNET, with its token converting to ASI.

Second Perspective on the AI Narrative: The Arrival of AGI, Reminiscent of the GPT Moment

In the author’s view, the inaugural year of the “AI track” in the crypto industry was the remarkable year of 2023, marked by the advent of GPT, and the surge in crypto AI projects was more of a ripple effect from the explosive growth of the AI industry.

Although GPT-4 and GPT-4 Turbo continued to evolve after GPT-3.5, Sora showcased astonishing video-generation abilities, and large language models outside OpenAI developed rapidly, it is undeniable that the cognitive impact of AI advances on the general public is diminishing. People are gradually getting used to AI tools, and large-scale job displacement has yet to occur.

So, will the AI field witness another “GPT moment” in the future, where a leap in AI development astonishes the masses, making people realize that their lives and work will be changed as a result? This moment could be the advent of Artificial General Intelligence (AGI).

AGI refers to machines having comprehensive cognitive abilities similar to humans, capable of solving various complex problems beyond specific tasks. AGI systems possess high-level abstract thinking, extensive background knowledge, cross-domain common-sense reasoning, causal understanding, and cross-disciplinary transfer learning abilities. In terms of comprehensive capabilities, AGI’s performance is on par with the best humans, and even surpasses the collective abilities of the most outstanding human groups.

In fact, whether depicted in science fiction, games, or movies, or fueled by public expectations following the rapid proliferation of GPT, society has long anticipated the emergence of AGI surpassing human cognitive levels. It could be said that GPT itself is a precursor to AGI, a prophecy of general artificial intelligence.

The reason why GPT has such immense industrial energy and psychological impact is that its implementation speed and performance exceeded the expectations of the masses: people didn’t expect that an artificial intelligence system capable of passing the Turing Test would actually arrive, and arrive so quickly.

In reality, Artificial General Intelligence (AGI) may reprise the suddenness of the “GPT moment” within 1-2 years: people have just adapted to the assistance of GPT, only to discover that AI is not just an assistant anymore. It can even independently accomplish highly creative and challenging tasks, including those problems that have baffled top scientists for decades.

On April 8th this year, Musk was interviewed by Nicolai Tangen, Chief Investment Officer of the Norwegian Sovereign Wealth Fund, about the timing of AGI’s emergence.

He said, “If we define AGI as smarter than the smartest humans, I think it’s likely to happen around 2025.” In other words, by his estimation, AGI is at most about a year and a half away. Of course, he added the caveat that this assumes “power and hardware keep up.”

The benefits of AGI’s arrival are evident.

It means that humanity’s productivity will take a giant leap forward, and numerous scientific research problems that have plagued us for decades will be solved effortlessly. If we define “the smartest humans” as Nobel Prize winners, it means that as long as there is sufficient energy, computing power, and data, we can have countless tireless “Nobel Prize winners” delving into the most challenging scientific problems around the clock.

In reality, Nobel Prize winners are not as rare as one in several hundred million; most of them are on par with top university professors in ability and intelligence. What separates them is often probability and luck in choosing the right direction and persisting until results emerge; equally outstanding colleagues of similar caliber may well have won Nobel Prizes in the parallel universes of scientific research. Unfortunately, there are still not enough people with the abilities of top university professors participating in scientific breakthroughs, so the pace of “exploring all the correct directions in scientific research” remains slow.

With the advent of AGI, given sufficient supplies of energy and computing power, we could have an effectively unlimited number of Nobel-laureate-level AGIs exploring every possible direction of scientific breakthrough in depth, and the pace of technological advancement would increase dozens of times over. That progress would, over the next 10 to 20 years, multiply the supply of resources that are currently expensive and scarce, such as food production, new materials, new drugs, and high-quality education, by hundreds of times. The cost of obtaining these resources would fall exponentially, enabling us to support more people with fewer resources, and per capita wealth would rise rapidly.

Global GDP Trend (Source: World Bank)

This may sound a bit sensational. Let’s look at two examples, which have been discussed by the author in the IO.NET research report before:

  • In 2018, Frances Arnold, the Nobel laureate in Chemistry, said at the award ceremony: “Today, we can read, write, and edit any DNA sequence in practical applications, but we still cannot compose it.” Just five years later, in 2023, researchers from Stanford University and the Silicon Valley AI company Salesforce Research published a paper in Nature Biotechnology. Using a large language model fine-tuned from GPT-3, they generated one million new protein sequences from scratch and identified two proteins with drastically different structures, both with antimicrobial properties, that could potentially serve as solutions to bacterial resistance beyond antibiotics. In other words, with the help of AI, the bottleneck of protein “creation” has been overcome.
  • Before this, the artificial intelligence AlphaFold algorithm predicted the structures of almost all 214 million known proteins on Earth within 18 months, a result hundreds of times greater than the combined efforts of all previous structural biologists.

The revolution has already occurred, and the advent of AGI will further accelerate this process. On the other hand, the challenges brought by the advent of AGI are also enormous. AGI will not only replace a large number of cognitive workers, but also impact physical laborers who were previously considered to be “less affected by AI.” With the maturity of robotics technology and the development of new materials leading to a reduction in production costs, the proportion of labor positions replaced by machines and software will rapidly increase.

At that time, two seemingly distant issues will quickly come to the surface:

  1. The problem of employment and income for a large number of unemployed people.
  2. How to distinguish between AI and humans in a world where AI is ubiquitous.

Worldcoin/Worldchain is attempting to provide solutions to both: offering a Universal Basic Income (UBI) system to provide basic income to the public, and using iris-based biometrics to distinguish humans from AI.

In fact, UBI, which distributes money to everyone, is not just pie in the sky. Countries like Finland and England have experimented with universal basic income, and political parties in Canada, Spain, India, and elsewhere are actively proposing and promoting related experiments.

The benefit of using a biometric-identification-plus-blockchain model for UBI distribution lies in the global nature of the system, which gives it broader population coverage. It can also leverage the user network built through income distribution to develop other business models, such as financial services (DeFi), social networking, and crowdsourcing, creating synergies within the network.
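
A hypothetical sketch of what biometric-gated UBI distribution looks like in principle (one verified human, one recurring claim); the data structures and amounts are invented, and this is not Worldcoin’s actual protocol:

```python
# Hypothetical sketch of biometric-gated UBI distribution, illustrating the
# model described above (one verified human -> one recurring claim).
import time

CLAIM_INTERVAL = 7 * 24 * 3600        # e.g. one claim per week
GRANT_PER_CLAIM = 10.0                # illustrative amount of token units

verified_humans = set()               # ids proven unique via a biometric proof
last_claim_at: dict[str, float] = {}
balances: dict[str, float] = {}


def register(person_id: str, biometric_proof_valid: bool):
    """Admit a person only if their proof of unique personhood checks out."""
    if biometric_proof_valid:
        verified_humans.add(person_id)


def claim_ubi(person_id: str, now=None) -> bool:
    now = now or time.time()
    if person_id not in verified_humans:
        return False                                  # bots / unverified ids get nothing
    if now - last_claim_at.get(person_id, 0) < CLAIM_INTERVAL:
        return False                                  # already claimed this period
    balances[person_id] = balances.get(person_id, 0) + GRANT_PER_CLAIM
    last_claim_at[person_id] = now
    return True


register("alice", biometric_proof_valid=True)
print(claim_ubi("alice"), balances)    # True {'alice': 10.0}
print(claim_ubi("alice"))              # False (claimed too recently)
print(claim_ubi("bot-123"))            # False (never verified)
```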

The corresponding asset for the impact of AGI’s arrival is Worldcoin – WLD, with a circulating market cap of $1.03 billion and a fully diluted market cap of $47.2 billion.

The risks and uncertainties of narrative deduction

Unlike many of the project and track research reports previously released by Mint Ventures, this article involves greater subjectivity in its narrative deduction and prediction. Readers should treat its content as a divergent discussion rather than a prophecy of the future. The narrative extrapolation presented by the author faces many uncertainties that may lead to speculative errors. These risks and influencing factors include, but are not limited to:

  • Energy aspect: GPU updates causing a rapid decline in energy consumption

Despite the sharp increase in energy demand surrounding AI, chip manufacturers like Nvidia are delivering more computing power at lower power consumption through continuous hardware upgrades. In March of this year, Nvidia released the new-generation AI computing card GB200, which integrates two B200 GPUs and one Grace CPU. Its training performance is four times that of the previous flagship AI GPU, the H100, its inference performance is seven times the H100’s, and its energy consumption is only one quarter of the H100’s. Even so, AI’s appetite for power is far from satisfied: as unit energy consumption falls, total energy consumption may still rise as AI applications and demand expand further.
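
A quick calculation with the GB200 figures quoted above shows how per-unit energy efficiency improves even as the headline power draw falls:

```python
# Quick arithmetic on the GB200 vs. H100 figures quoted above: what happens
# to energy per unit of work if performance rises while power falls?
train_speedup = 4      # GB200 training throughput vs. H100
infer_speedup = 7      # GB200 inference throughput vs. H100
power_ratio   = 0.25   # GB200 energy use vs. H100 (one quarter)

energy_per_training_unit = power_ratio / train_speedup    # 1/16 of H100
energy_per_inference_unit = power_ratio / infer_speedup   # 1/28 of H100

print(f"Energy per unit of training work:  {energy_per_training_unit:.3f}x H100 "
      f"(~{1/energy_per_training_unit:.0f}x better)")
print(f"Energy per unit of inference work: {energy_per_inference_unit:.3f}x H100 "
      f"(~{1/energy_per_inference_unit:.0f}x better)")
# Per-unit efficiency improves ~16x / ~28x, yet if deployed workloads grow
# faster than that, total electricity demand still rises.
```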

  • Data aspect: The Q* project realizes “self-generated data”

There has long been a rumor about OpenAI’s “Q*” project, which has been mentioned in internal communications to OpenAI employees. According to Reuters, citing insiders at OpenAI, it may be a breakthrough in OpenAI’s pursuit of superintelligence / artificial general intelligence (AGI). Q* is said not only to be able to solve previously unseen mathematical problems through abstraction, but also to generate training data for large models without being fed real-world data. If this rumor is true, the bottleneck of AI model training being constrained by the lack of high-quality data would be broken.

  • AGI arrival: concerns surrounding OpenAI

AGI may or may not arrive by 2025 as Elon Musk suggests, but its arrival seems only a matter of time. However, Worldcoin, as a direct beneficiary of the AGI-arrival narrative, may face its biggest uncertainty from OpenAI itself, since it is widely regarded as “OpenAI’s shadow token.”

In the early hours of May 14, at its spring product launch event, OpenAI unveiled the latest GPT-4o along with a table of comprehensive task scores covering 19 other large language models. On that table, GPT-4o’s score of 1310 appears far ahead of the rest, but in percentage terms it is only 4.5% higher than second-place GPT-4 Turbo, 4.9% higher than fourth-place Google Gemini 1.5 Pro, and 5.1% higher than fifth-place Anthropic Claude 3 Opus.
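
For reference, the rival scores implied by those percentage gaps can be back-calculated from GPT-4o’s 1310:

```python
# Back-calculating rival scores from the percentage gaps quoted above.
gpt_4o = 1310
gaps = {"GPT-4 Turbo": 0.045, "Gemini 1.5 Pro": 0.049, "Claude 3 Opus": 0.051}

for model, gap in gaps.items():
    implied = gpt_4o / (1 + gap)
    print(f"{model}: ~{implied:.0f} (gap of {gap:.1%})")
# -> GPT-4 Turbo ~1254, Gemini 1.5 Pro ~1249, Claude 3 Opus ~1246,
#    i.e. the top models sit within a few dozen points of one another.
```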

Just over a year after the world-shaking debut of GPT-3.5, OpenAI’s competitors have already closed to within striking distance (even though GPT-5 has not yet been released and is expected to launch this year). Whether OpenAI can maintain its industry-leading position is becoming less certain. If OpenAI’s edge and dominance are diluted or surpassed, the narrative value of Worldcoin as OpenAI’s shadow token will decline as well.

Furthermore, beyond Worldcoin’s iris-authentication scheme, more and more competitors are entering this market. For example, the palm-scan ID project Humanity Protocol just announced a new financing round of $30 million at a $1 billion valuation, and LayerZero Labs announced that it will operate on Humanity and join its validator node network, using ZK proofs to authenticate credentials.

Conclusion

In conclusion, although the author has extrapolated narratives for the AI track, this track differs from crypto-native fields such as DeFi: it is largely a product of the AI boom spilling over into the crypto market. Many projects have not yet established solid business models, and many are closer to AI-themed memes (for example, RNDR as an Nvidia-proxy meme, or Worldcoin as an OpenAI-proxy meme). Readers should treat them with caution.

Statement:

  1. This article originally titled “The Next Wave of Narrative Deduction in the Crypto AI Sector: Catalysts, Development Pathways, and Related Projects” is reproduced from [mintventures]. All copyrights belong to the original author [Alex Xu]. If you have any objection to the reprint, please contact the Gate Learn team, the team will handle it as soon as possible.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

The Next Wave of Narratives in the Crypto AI Sector

IntermediateJun 04, 2024
Alex Xu, a research partner at Mint Ventures, analyses emerging narratives in the burgeoning crypto AI sector, discussing the catalytic pathways and logic behind these narratives, relevant project targets, as well as risks and uncertainties.
The Next Wave of Narratives in the Crypto AI Sector

Introduction

As of now, the current crypto bull market cycle is the most lackluster in terms of commercial innovation, lacking the phenomenal hot tracks like DeFi, NFT, and GameFi seen in the previous bull market. As a result, the overall market needs industrial hotspots, with sluggish growth in users, industrial investment, and developers.

This stagnation is also reflected in current asset prices. Throughout the cycle, most altcoins have continued to lose value against BTC, including ETH. After all, the valuation of smart contract platforms is determined by the prosperity of applications. When innovation in application development is lackluster, the valuation of public blockchains is hard to elevate.

AI, as a relatively new commercial category in this cycle, still has the potential to bring considerable incremental attention to crypto AI sector projects, thanks to the explosive development speed and continuous hot topics in the external commercial world.

In the IO.NET report released by the author in April, the necessity of combining AI with Crypto was outlined. The advantages of crypto-economic solutions in terms of determinism, resource mobilization and allocation, and trustlessness could potentially address the three challenges of AI: randomness, resource intensity, and difficulty in distinguishing between humans and machines.

In the AI sector of the crypto economy, the author attempts to discuss and deduce some important issues through another article, including:

  • Emerging or potentially explosive narratives in the crypto AI sector
  • Catalytic pathways and logic behind these narratives
  • Relevant project targets associated with these narratives
  • Risks and uncertainties in narrative deduction

This article reflects the author’s thoughts as of the publication date, which may change in the future. The viewpoints are highly subjective and may contain errors in facts, data, and reasoning logic. Please do not take this as investment advice. Criticisms and discussions from peers are welcome.

Let’s get down to the business.

The next wave of narratives in the crypto AI track

Before officially introducing the next wave of narratives in the crypto AI track, let’s first take a look at the main narratives of the current crypto AI. From the perspective of market value, those with more than 1 billion US dollars are:

  • Computing power: Render (RNDR, with circulating market value of 3.85 billion), Akash (1.2 billion circulating market value), IO.NET (the latest round of primary financing valuation is 1 billion)
  • Algorithm Network: Bittensor (TAO, 2.97 billion circulating market value)
  • AI agent: Fetchai (FET, 2.1 billion market capitalization before merger)

*Data time: 2024.5.24, currency units are US dollars.

Apart from the aforementioned sectors, which will be the next AI sector with a single project market value exceeding $1 billion?

The author believes that it can be speculated from two perspectives: the narrative of the “industrial supply side” and the narrative of the “GPT moment.”

First Perspective on AI Narrative: Opportunities in Energy and Data Sectors Behind AI from the Industrial Supply Side

From the industrial supply side, there are four driving forces for AI development:

  • Algorithms: High-quality algorithms can execute training and inference tasks more efficiently.
  • Computing Power: Both model training and inference require computing power provided by GPU hardware. This is the current primary bottleneck in the industry, as a chip shortage has led to high prices for mid-to-high-end chips.
  • Energy: AI data centers consume significant energy. Besides the electricity needed to power GPUs, cooling systems for large data centers can account for about 40% of total energy consumption.
  • Data: Improving large model performance requires expanding training parameters, which means a massive demand for high-quality data.

Among these four driving forces, there are crypto projects with a circulating market value exceeding $1 billion in the algorithm and computing power sectors. However, projects with similar market value have yet to appear in the energy and data fields.

In reality, the supply shortages of energy and data may soon emerge as new industry hotspots, potentially driving a surge in related crypto projects. Let’s start with energy.

On February 29, 2024, Elon Musk mentioned at the Bosch ConnectedWorld 2024 conference: “I predicted the chip shortage over a year ago. The next shortage will be electricity. I think there won’t be enough power to run all the chips next year.”

Looking at specific data, the AI Index Report published annually by the Stanford Institute for Human-Centered Artificial Intelligence, led by Fei-Fei Li, assessed in its 2022 report on the 2021 AI industry that AI’s energy consumption was only 0.9% of global electricity demand, posing limited pressure on energy and the environment. In 2023, the International Energy Agency (IEA) summarized that in 2022, global data centers consumed approximately 460 terawatt-hours (TWh) of electricity, accounting for 2% of global electricity demand. They predicted that by 2026, the global data center energy consumption would be at least 620 TWh and could reach up to 1050 TWh.

However, the IEA’s estimates are still conservative, as many AI projects are about to launch, with energy demands far exceeding their 2023 projections.

For example, Microsoft and OpenAI are planning the Stargate project. This project, expected to start in 2028 and be completed around 2030, aims to build a supercomputer with millions of dedicated AI chips, providing unprecedented computing power for OpenAI, particularly for its research in artificial intelligence and large language models. The project is expected to cost over $100 billion, 100 times the current cost of large data centers.

The energy consumption of the Stargate project alone is estimated to be 50 terawatt-hours.

Due to this, OpenAI’s founder Sam Altman stated at the Davos Forum in January this year: “Future artificial intelligence needs an energy breakthrough because AI will consume far more electricity than people expect.”

Following computing power and energy, the next area of shortage in the rapidly growing AI industry is likely to be data.

Or rather, the shortage of high-quality data required by AI has already become a reality.

From the evolution of GPT, humans have basically grasped the growth pattern of large language model capabilities—by expanding model parameters and training data, the model’s capabilities can be exponentially improved—and this process currently shows no short-term technical bottleneck.

However, the issue is that high-quality and publicly available data may become increasingly scarce in the future. AI products might face supply and demand conflicts for data similar to those for chips and energy.

The first is an increase in disputes over data ownership.

On December 27, 2023, The New York Times filed a lawsuit against OpenAI and Microsoft in the U.S. District Court, accusing them of using millions of its articles without permission to train the GPT model. The lawsuit demands billions of dollars in statutory and actual damages for the “illegal copying and use of works of unique value” and calls for the destruction of all models and training data containing The New York Times’ copyrighted material.

At the end of March, The New York Times issued a new statement targeting not only OpenAI but also Google and Meta. The statement claimed that OpenAI transcribed a large number of YouTube videos into text using a speech recognition tool called Whisper, then used the text to train GPT-4. The New York Times asserted that it has become common practice for big companies to use sneaky methods to train AI models, pointing out that Google has also been converting YouTube video content into text for training its own large models, which essentially infringes on the rights of video content creators.

The lawsuit between The New York Times and OpenAI, labeled as the “first AI copyright case,” is complex and has far-reaching implications for the future of content and the AI industry. Given the complexity of the case and its potential impact, a quick resolution is unlikely. One possible outcome is an out-of-court settlement, with wealthy companies like Microsoft and OpenAI paying substantial compensation. However, future data copyright disputes will inevitably raise the overall cost of high-quality data.

Additionally, as the world’s largest search engine, Google has revealed that it is considering charging fees for its search functionality. The charges would not target the general public but rather AI companies.


Source: Reuters

Google’s search engine servers store a large amount of content. It can even be said that Google stores all the content that has appeared on all Internet pages since the 21st century. The current AI-driven search products, such as overseas ones such as perplexity, and domestic ones such as Kimi and Secret Tower, all process the searched data through AI and then output it to users. Search engines’ charges for AI will inevitably increase the cost of data acquisition.

In fact, in addition to public data, AI giants are also eyeing non-public internal data.

Photobucket is an established image and video hosting website that had 70 million users and nearly half of the U.S. online photo market in the early 2000s. With the rise of social media, the number of Photobucket users has dropped significantly. Currently, there are only 2 million active users left (they pay a high fee of US$399 per year). According to the agreement and privacy policy signed by users when they registered, they have not been used for more than a year. The account will be recycled, and Photobucket’s right to use the pictures and video data uploaded by the user is also supported. Photobucket CEO Ted Leonard revealed that the 1.3 billion photo and video data it owns is extremely valuable for training generative AI models. He is in talks with multiple technology companies to sell the data, with offers ranging from 5 cents to $1 per photo and more than $1 per video, estimating that the data Photobucket can provide is worth more than $1 billion.

EPOCH, a research team focusing on the development trend of artificial intelligence, once published a report on the data required for machine learning based on the use of data and the generation of new data by machine learning in 2022, and considering the growth of computing resources. It once published a report on the state of data required for machine learning titled “ “Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning”. The report concluded that high-quality text data will be exhausted between February 2023 and 2026, and image data will be exhausted between 2030 and 2060. If the efficiency of data utilization cannot be significantly improved, or new data sources emerge, the current trend of large machine learning models that rely on massive data sets may slow down.

Judging from the current situation where AI giants are purchasing data at high prices, free high-quality text data has been exhausted. EPOCH’s prediction 2 years ago was relatively accurate.

At the same time, solutions to the demand for “AI data shortage” are also emerging, namely: AI data provision services.

Defined.ai is a company that provides customized, real and high-quality data for AI companies.

Examples of data types that Defined.ai can provide: https://www.defined.ai/datasets

Its business model is: AI companies provide Defined.ai with their own data needs. For example, in terms of picture quality, the resolution must be as high as possible to avoid blurring, overexposure, and the content should be authentic. In terms of content, AI companies can customize specific themes based on their own training tasks, such as night photos, night cones, parking lots, and signs, to improve the recognition rate of AI in night scenes. The public can take the task of taking the photo. Then, the company will review them and upload them. The parts that meet the requirements will be settled based on the number of photos. The price is about US$1-2 for a high-quality picture, US$5-7 for a short film of more than ten seconds. A high-quality video of more than 10 minutes costs US$100-300, and a text is US$1 per thousand words. The person who receives the subcontracting task can get about 20% of the fee. Data provision may become another crowdsourcing business after “data labeling”.

Global task crowdsourcing distribution, economic incentives, data asset pricing/circulation, and privacy protection are open to everyone, which sound particularly well-suited for a Web3 business paradigm.

AI Narrative Targets from the Industrial Supply Side

The attention brought by the chip shortage has permeated the crypto industry, making distributed computing power the hottest and highest market-cap AI track so far.

So, if the supply and demand conflicts in the AI industry’s energy and data sectors were to explode in the next 1-2 years, what narrative-related projects are currently present in the crypto industry?

Energy-related Targets

Energy-related projects that have been listed on major centralized exchanges (CEX) are rare, with Power Ledger (token: POWR) being the only notable example.

Power Ledger, established in 2017, is a blockchain-based comprehensive energy platform aimed at decentralizing energy trading. It promotes direct electricity transactions between individuals and communities, supports the widespread application of renewable energy, and ensures transparency and efficiency through smart contracts. Initially, Power Ledger operated on a consortium chain derived from Ethereum. In the second half of 2023, Power Ledger updated its whitepaper and launched its own comprehensive public chain, which is based on Solana’s technical framework to handle high-frequency micro-transactions in the distributed energy market. Currently, Power Ledger’s main businesses include:

  • Energy Trading: Allows users to buy and sell electricity directly, especially from renewable sources.
  • Environmental Product Trading: Facilitates trading in carbon credits and renewable energy certificates, as well as financing based on environmental products.
  • Public Chain Operation: Attracts application developers to build on the Power Ledger blockchain, with transaction fees paid in POWR tokens.

As of now, Power Ledger’s circulating market cap is $170 million, with a fully diluted market cap of $320 million.

Data-related Targets

Compared to energy-related crypto targets, the data track has a richer variety of crypto targets. Here are the data track projects that I am currently watching, all of which are listed on at least one of the major CEXs such as Binance, OKX, or Coinbase, arranged in ascending order of their fully diluted valuation (FDV):

  1. Streamr – DATA

Value Proposition: Streamr aims to build a decentralized real-time data network that allows users to freely trade and share data while retaining full control over their data. Through its data marketplace, Streamr seeks to enable data producers to directly sell data streams to interested consumers without intermediaries, thereby reducing costs and increasing efficiency.

Source: https://streamr.network/hub/projects

In a practical collaboration case, Streamr partnered with another Web3 onboard hardware project, DIMO. Through DIMO hardware sensors installed in vehicles, they collect data such as temperature, air pressure, and other metrics, forming weather data streams that are transmitted to organizations in need.

Compared to other data projects, Streamr focuses more on IoT and hardware sensor data. Besides the aforementioned DIMO vehicle data, other projects include real-time traffic data streams in Helsinki. Due to this focus, Streamr’s project token, DATA, experienced a surge, doubling in value in a single day last December when the DePIN concept was at its peak.

Currently, Streamr’s circulating market cap is $44 million, with a fully diluted market cap of $58 million.

  1. Covalent – CQT

Unlike other data projects, Covalent provides blockchain data. The Covalent network reads data from blockchain nodes via RPC, processes and organizes this data, creating an efficient query database. This allows Covalent’s users to quickly retrieve the information they need without performing complex queries directly from blockchain nodes. This service is known as “blockchain data indexing.”

Covalent’s clients are primarily B2B, including Dapp projects like various DeFi applications, as well as many centralized crypto companies such as ConsenSys (the parent company of MetaMask), CoinGecko (a well-known crypto asset tracking site), Rotki (a tax tool), and Rainbow (a crypto wallet). Additionally, traditional financial giants like Fidelity and the Big Four accounting firm EY are also Covalent’s clients. According to Covalent’s official disclosures, the project’s revenue from data services has already surpassed that of the leading project in the same field, The Graph.

The Web3 industry, due to the completeness, openness, authenticity, and real-time nature of on-chain data, is poised to become a valuable source of high-quality data for specific AI scenarios and “small AI models.” As a data provider, Covalent has begun supplying data for various AI scenarios and has launched verifiable structured data specifically for AI.

Source: https://www.covalenthq.com/solutions/decentralized-ai/

For example, it provides data to SmartWhales, an on-chain intelligent trading platform, and uses AI to identify profitable trading patterns and addresses; Entendre Finance uses Covalent’s structured data and AI processing for real-time insights, anomaly detection, and predictive analysis.

At present, the main scenarios for the on-chain data services provided by Covalent are still financial. However, with the generalization of Web3 products and data types, the usage scenarios of on-chain data will also be further expanded.

The current diluted market value of the Covalent project is $150 million, and the full diluted market value is $235 million. Compared with The Graph, a blockchain data index project in the same track, it has a relatively obvious valuation advantage.

  1. Hivemapper – Honey

Among all data materials, video data often has the highest unit price. Hivemapper can provide data including video and map information to AI companies. Hivemapper itself is a decentralized global mapping project that aims to create a detailed, dynamic and accessible mapping system through blockchain technology and community contributions. Participants can capture map data through a dashcam and add it to the open source Hivemapper data network, and receive rewards based on their contributions in the project token HONEY. In order to improve network effects and reduce interaction costs, Hivemapper is built on Solana.

Hivemapper, founded in 2015, initially aimed to create maps using drones. However, it soon realized that this model was difficult to scale, prompting a shift to using dashcams and smartphones to capture geographic data, significantly reducing the cost of global map production.

Compared to street view and mapping software like Google Maps, Hivemapper uses an incentivized network and crowdsourcing model to expand map coverage more efficiently, maintain the freshness of real-world maps, and improve video quality.

Before the AI-driven demand for data surged, Hivemapper’s primary clients included the automotive industry’s autonomous driving departments, navigation service companies, governments, insurance companies, and real estate firms. Today, Hivemapper can provide extensive road and environmental data for AI and large models through APIs. By continually updating streams of images and road feature data, AI and ML models can better translate this data into improved capabilities, performing tasks related to geographic location and visual judgment.


Data source: https://hivemapper.com/blog/diversify-ai-computer-vision-models-with-global-road-imagery-map-data/

Currently, Hivemapper’s Honey project has a diluted market cap of $120 million and a fully diluted market cap (FDV) of $496 million.

In addition to the three projects mentioned above, the data area also includes:

The Graph – GRT: With a diluted market cap of $3.2 billion and an FDV of $3.7 billion, The Graph provides blockchain data indexing services similar to Covalent.

Ocean Protocol – OCEAN: With a circulating market cap of $670 million and an FDV of $1.45 billion, Ocean Protocol is an open-source protocol aimed at facilitating the exchange and monetization of data and data-related services. It connects data consumers with data providers to share data while ensuring trust, transparency, and traceability. This project is set to merge with Fetch.ai and SingularityNET, with its token converting to ASI.

Second perspective on the AI narrative: The Arrival of AGI, reminiscent of the GPT Moment

In the author’s view, the inaugural year of the “AI track” in the crypto industry was the remarkable year of 2023, marked by the advent of GPT, and the surge in crypto AI projects was more of a ripple effect from the explosive growth of the AI industry.

Although capabilities like GPT4 and Turbo continued to evolve after GPT3.5, and Sora showcased astonishing video creation abilities, along with rapid developments in large language models outside of OpenAI, it is undeniable that the cognitive impact of AI technological advancements on the general public is diminishing. People are gradually starting to use AI tools, and large-scale job displacement seems to be yet to occur.

So, will the AI field witness another “GPT moment” in the future, where a leap in AI development astonishes the masses, making people realize that their lives and work will be changed as a result? This moment could be the advent of Artificial General Intelligence (AGI).

AGI refers to machines having comprehensive cognitive abilities similar to humans, capable of solving various complex problems beyond specific tasks. AGI systems possess high-level abstract thinking, extensive background knowledge, cross-domain common-sense reasoning, causal understanding, and cross-disciplinary transfer learning abilities. In terms of comprehensive capabilities, AGI’s performance is on par with the best humans, and even surpasses the collective abilities of the most outstanding human groups.

In fact, whether depicted in science fiction, games, or movies, or fueled by public expectations following the rapid proliferation of GPT, society has long anticipated the emergence of AGI surpassing human cognitive levels. It could be said that GPT itself is a precursor to AGI, a prophecy of general artificial intelligence.

The reason why GPT has such immense industrial energy and psychological impact is that its implementation speed and performance exceeded the expectations of the masses: people didn’t expect that an artificial intelligence system capable of passing the Turing Test would actually arrive, and arrive so quickly.

In reality, Artificial General Intelligence (AGI) may reprise the suddenness of the “GPT moment” within 1-2 years: people have just adapted to the assistance of GPT, only to discover that AI is not just an assistant anymore. It can even independently accomplish highly creative and challenging tasks, including those problems that have baffled top scientists for decades.

On April 8th this year, Musk was interviewed by Nicolai Tangen, head of Norway’s sovereign wealth fund, about the timing of AGI’s emergence.

He said, “If we define AGI as smarter than the smartest humans, I think it’s likely to happen around 2025.” In other words, by his estimate, AGI is at most about a year and a half away, though he added the caveat that this assumes “power and hardware keep up.”

The benefits of AGI’s arrival are evident.

It means that humanity’s productivity will take a giant leap forward, and numerous scientific research problems that have plagued us for decades will be solved effortlessly. If we define “the smartest humans” as Nobel Prize winners, it means that as long as there is sufficient energy, computing power, and data, we can have countless tireless “Nobel Prize winners” delving into the most challenging scientific problems around the clock.

In reality, Nobel Prize winners are not one-in-several-hundred-million talents; most are on a par with top university professors in ability and intelligence. What separates them is often probability and luck: choosing the right direction and persisting until results arrive. Colleagues of similar caliber may well have won the same prizes in parallel universes of scientific research. Unfortunately, there are still not enough people with the abilities of top university professors participating in scientific breakthroughs, so the pace of “exploring every correct research direction” remains slow.

With the advent of AGI, given sufficient supplies of energy and computing power, we could have an effectively unlimited number of Nobel-laureate-level AGIs exploring every promising direction of scientific breakthrough in depth. The pace of technological advancement would increase by dozens of times, and over the next 10 to 20 years resources that are currently expensive and scarce, such as food, new materials, new drugs, and high-quality education, could become hundreds of times more abundant. The cost of obtaining them would fall exponentially, allowing us to support more people with fewer resources and rapidly raising per capita wealth.

Global GDP Trend (Source: World Bank)

This may sound a bit sensational. Let’s look at two examples, which have been discussed by the author in the IO.NET research report before:

  • In 2018, Frances Arnold, Nobel laureate in Chemistry, said at the award ceremony: “Today, we can read, write, and edit any DNA sequence in practical applications, but we still cannot compose it.” Just five years later, in 2023, researchers from Stanford University and Salesforce Research, a Silicon Valley AI company, published a paper in Nature Biotechnology. Using a large language model fine-tuned from GPT-3, they generated one million new protein sequences from scratch and identified two structurally very different proteins, both with antimicrobial properties, which could offer a way to fight bacterial resistance beyond existing antibiotics. In other words, with the help of AI, the bottleneck in protein “creation” has been overcome.
  • Before this, the artificial intelligence AlphaFold algorithm predicted the structures of almost all 214 million known proteins on Earth within 18 months, a result hundreds of times greater than the combined efforts of all previous structural biologists.

The revolution has already occurred, and the advent of AGI will further accelerate this process. On the other hand, the challenges brought by the advent of AGI are also enormous. AGI will not only replace a large number of cognitive workers, but also impact physical laborers who were previously considered to be “less affected by AI.” With the maturity of robotics technology and the development of new materials leading to a reduction in production costs, the proportion of labor positions replaced by machines and software will rapidly increase.

At that time, two seemingly distant issues will quickly come to the surface:

  1. The problem of employment and income for a large number of unemployed people.
  2. How to distinguish between AI and humans in a world where AI is ubiquitous.

Worldcoin and its World Chain network are attempting to answer both questions: they offer a universal basic income (UBI) system to give the public a baseline income, and use iris-based biometrics to distinguish humans from AI.

In fact, UBI, paying everyone a basic income, is not pie in the sky. Countries such as Finland and England have run universal basic income experiments, and political parties in Canada, Spain, India, and elsewhere are actively proposing and promoting similar trials.

The advantage of combining biometric identification with a blockchain-based model for UBI distribution is that the system is global by nature and can reach a broader population. It can also leverage the user network built through income distribution to develop other business models, such as financial services (DeFi), social networking, and crowdsourcing, creating synergies within the network.
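
As a rough illustration of the claim logic such a system implies, here is a toy Python sketch of biometric-gated UBI distribution: one grant per verified unique identity per epoch. The epoch length, grant size, and the `proof_of_personhood` check are assumptions made for illustration, not Worldcoin’s actual contracts or parameters.

```python
# Toy sketch of biometric-gated UBI distribution logic (not Worldcoin's actual contracts).
# "proof_of_personhood" stands in for whatever uniqueness proof the system verifies.
import time

EPOCH_SECONDS = 30 * 24 * 3600   # assumed monthly distribution cycle
GRANT_PER_EPOCH = 10.0           # assumed token amount per verified human per epoch

class UBILedger:
    def __init__(self):
        self.verified = set()      # identity commitments of verified unique humans
        self.last_claim = {}       # identity -> timestamp of last successful claim
        self.balances = {}         # identity -> token balance

    def register(self, identity_commitment, proof_of_personhood) -> bool:
        """Admit an identity only if its uniqueness proof verifies and it is not a duplicate."""
        if not proof_of_personhood or identity_commitment in self.verified:
            return False
        self.verified.add(identity_commitment)
        return True

    def claim(self, identity_commitment, now=None) -> float:
        """Pay out at most one grant per epoch per verified identity."""
        now = now or time.time()
        if identity_commitment not in self.verified:
            return 0.0
        if now - self.last_claim.get(identity_commitment, 0) < EPOCH_SECONDS:
            return 0.0
        self.last_claim[identity_commitment] = now
        self.balances[identity_commitment] = self.balances.get(identity_commitment, 0.0) + GRANT_PER_EPOCH
        return GRANT_PER_EPOCH

ledger = UBILedger()
ledger.register("id-commitment-1", proof_of_personhood=True)  # hypothetical identity commitment
print(ledger.claim("id-commitment-1"))                        # 10.0 on the first claim of the epoch
```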

The asset most directly exposed to the AGI-arrival narrative is Worldcoin (WLD), with a circulating market cap of $1.03 billion and an FDV of $47.2 billion.

The risks and uncertainties of narrative deduction

Unlike many previous project and sector research reports released by Mint Ventures, this article is more subjective in its narrative deduction and prediction. Readers should treat its content as a divergent discussion rather than a prophecy of the future. The narrative extrapolation above faces many uncertainties that may lead to errors in its speculation. These risks and influencing factors include, but are not limited to:

  • Energy aspect: GPU updates causing a rapid decline in energy consumption

Despite the sharp rise in energy demand around AI, chip makers such as Nvidia keep delivering more computing power at lower power consumption through hardware upgrades. In March this year, for example, Nvidia released the GB200, a new-generation AI accelerator that integrates two B200 GPUs and one Grace CPU; its training performance is four times that of the previous flagship AI GPU, the H100, its inference performance is seven times higher, and its energy consumption is only one quarter of the H100’s. Even so, AI’s appetite for power is far from satisfied: as unit energy consumption falls, total energy consumption may still rise because AI applications and demand keep expanding.
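
A back-of-the-envelope calculation makes this point concrete. The sketch below takes the quoted figures at face value and reads the “one-quarter energy” claim as energy per unit of work, which is this sketch’s own interpretation rather than a vendor specification; it shows that once total AI workload grows more than roughly fourfold, aggregate energy use rises despite the efficiency gain.

```python
# Back-of-the-envelope sketch using the figures quoted above; treating "one-quarter
# energy" as energy per unit of work is this sketch's assumption, not a vendor spec.
ENERGY_PER_UNIT_WORK = 0.25   # GB200 relative to H100, per the quoted claim

def total_energy(workload_growth: float) -> float:
    """Total energy relative to today's H100 baseline for a given workload growth."""
    return workload_growth * ENERGY_PER_UNIT_WORK

for growth in (1, 4, 10, 40):
    print(f"{growth:>3}x workload -> {total_energy(growth):.1f}x energy")
# Once workloads grow more than ~4x, total energy consumption rises even though
# each unit of work needs only a quarter of the energy.
```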

  • Data aspect: The Q* project realizes “self-generated data”

Rumors about an internal OpenAI project called “Q*” have circulated for some time; the project was reportedly mentioned in communications to OpenAI employees. According to Reuters, citing OpenAI insiders, Q* may represent a breakthrough in OpenAI’s pursuit of superintelligence/AGI: it is said not only to solve previously unseen mathematical problems through abstraction, but also to generate training data for large models without being fed real-world data. If the rumor is true, the bottleneck of AI model training being limited by a shortage of high-quality data would be broken.

  • AGI arrival: concerns around OpenAI

AGI may or may not arrive by 2025 as Elon Musk predicts, but its arrival appears to be only a matter of time. Worldcoin, however, as a direct beneficiary of the AGI-arrival narrative, may face its biggest uncertainty in OpenAI itself, since WLD is widely regarded as “OpenAI’s shadow token.”

In the early hours of May 14th, OpenAI showcased the latest GPT-4o at its spring launch event. In a table of comprehensive task scores covering GPT-4o and 19 other large language models, GPT-4o scored 1310, seemingly well ahead of the rest. In terms of total score, however, it is only 4.5% higher than second-place GPT-4 Turbo, 4.9% higher than fourth-place Google Gemini 1.5 Pro, and 5.1% higher than fifth-place Anthropic Claude 3 Opus.

Just over a year after GPT-3.5’s world-shaking debut, OpenAI’s competitors have already closed the gap considerably (although GPT-5 has not been released yet and is expected to launch this year), and whether OpenAI can keep its industry-leading position is becoming less clear. If OpenAI’s edge and dominance are diluted or surpassed, the narrative value of Worldcoin as OpenAI’s shadow token will decline with it.

Furthermore, beyond Worldcoin’s iris-based authentication scheme, more and more competitors are entering this market. For example, the palm-scan identity project Humanity Protocol just announced a new $30 million financing round at a $1 billion valuation, and LayerZero Labs announced that it will operate on Humanity Protocol and join its validator network, which uses ZK proofs to authenticate credentials.

Conclusion

In conclusion, although the author has extrapolated narratives for the AI track, it differs from crypto-native fields such as DeFi: it is largely a spillover of the AI boom into the crypto market. Many projects have not yet established their business models, and many are closer to AI-themed memes (for example, Render as an Nvidia-adjacent meme and Worldcoin as an OpenAI-adjacent meme). Readers should treat them with caution.

Statement:

  1. This article originally titled “The Next Wave of Narrative Deduction in the Crypto AI Sector: Catalysts, Development Pathways, and Related Projects” is reproduced from [mintventures]. All copyrights belong to the original author [Alex Xu]. If you have any objection to the reprint, please contact the Gate Learn team, the team will handle it as soon as possible.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.
