Fred

Change the world by Web3 @RyzeLabs | alumni @THUBA_DAO

[In-Depth Analysis] What kind of sparks can AI and Web3 create?

Introduction: The Development of AI+Web3#

In recent years, the rapid development of artificial intelligence (AI) and Web3 technologies has attracted widespread attention globally. AI, as a technology that simulates and mimics human intelligence, has made significant breakthroughs in areas such as facial recognition, natural language processing, and machine learning. The rapid advancement of AI technology has brought about tremendous changes and innovations across various industries.

The market size of the AI industry reached $200 billion in 2023, with industry giants and outstanding players like OpenAI, Character.AI, and Midjourney emerging rapidly, leading the AI boom.

At the same time, Web3, as an emerging internet model, is gradually changing our understanding and usage of the internet. Based on decentralized blockchain technology, Web3 achieves data sharing and control, user autonomy, and the establishment of trust mechanisms through features such as smart contracts, distributed storage, and decentralized identity verification. The core idea of Web3 is to liberate data from centralized authoritative institutions, granting users control over their data and the right to share its value.

Currently, the market value of the Web3 industry has reached roughly $2.5 trillion, with networks like Bitcoin, Ethereum, and Solana and application-layer projects like Uniswap and Stepn continuously emerging with new narratives and scenarios, attracting more and more people to the Web3 industry.

It is easy to see that the combination of AI and Web3 is a field of great interest to builders and VCs from both the East and West, and how to effectively integrate the two is a question worth exploring.

This article will focus on the current state of AI+Web3 development, exploring the potential value and impact brought by this integration. We will first introduce the basic concepts and characteristics of AI and Web3, then discuss their interrelationship. Subsequently, we will analyze the current status of AI+Web3 projects and delve into the limitations and challenges they face. Through this research, we hope to provide valuable references and insights for investors and practitioners in related industries.

Ways AI and Web3 Interact#

The development of AI and Web3 is like the two sides of a balance, with AI bringing productivity improvements and Web3 bringing changes to production relationships. So what kind of sparks can AI and Web3 create together? We will first analyze the dilemmas and areas for improvement faced by the AI and Web3 industries, and then discuss how they can help solve these dilemmas.

Dilemmas Faced by the AI Industry#

To explore the dilemmas faced by the AI industry, we first need to look at the essence of the AI industry. The core of the AI industry revolves around three elements: computing power, algorithms, and data.

  1. First is computing power: Computing power refers to the ability to perform large-scale calculations and processing. AI tasks often require processing vast amounts of data and performing complex calculations, such as training deep neural network models. High-intensity computing power can accelerate model training and inference processes, improving the performance and efficiency of AI systems. In recent years, with the development of hardware technology, such as graphics processing units (GPUs) and dedicated AI chips (like TPUs), the enhancement of computing power has played a crucial role in the development of the AI industry. Nvidia, which has seen its stock soar in recent years, occupies a large market share as a GPU provider, earning substantial profits.

  2. What are algorithms: Algorithms are the core components of AI systems; they are mathematical and statistical methods used to solve problems and accomplish tasks. AI algorithms can be divided into traditional machine learning algorithms and deep learning algorithms, with deep learning algorithms achieving significant breakthroughs in recent years. The choice and design of algorithms are critical to the performance and effectiveness of AI systems. Continuously improving and innovating algorithms can enhance the accuracy, robustness, and generalization ability of AI systems. Different algorithms will yield different results, so enhancing algorithms is also crucial for task completion.

  3. Why is data important: The core task of AI systems is to extract patterns and rules from data through learning and training. Data is the foundation for training and optimizing models; through large-scale data samples, AI systems can learn to create more accurate and intelligent models. Rich datasets can provide more comprehensive and diverse information, allowing models to generalize better to unseen data, helping AI systems better understand and solve real-world problems.

After understanding the three core elements of AI, let's look at the dilemmas and challenges AI faces in these areas. First, regarding computing power, AI tasks typically require a significant amount of computational resources for model training and inference, especially for deep learning models. However, acquiring and managing large-scale computing power is an expensive and complex challenge: the cost, energy consumption, and maintenance of high-performance computing hardware are all issues, and it can be particularly difficult for startups and individual developers to obtain sufficient computing power.

In terms of algorithms, although deep learning algorithms have achieved great success in many fields, there are still some dilemmas and challenges. For example, training deep neural networks requires vast amounts of data and computational resources, and for certain tasks, the interpretability and explainability of the models may be insufficient. Additionally, the robustness and generalization ability of algorithms are also important issues, as models may perform inconsistently on unseen data. Among the many algorithms, finding the best algorithm to provide the best service is a process that requires continuous exploration.

Regarding data, while data drives AI, obtaining high-quality and diverse data remains a challenge. In some fields, data may be difficult to obtain, such as sensitive health data in the medical field. Furthermore, the quality, accuracy, and labeling of data are also issues; incomplete or biased data can lead to erroneous behaviors or biases in models. At the same time, protecting data privacy and security is also a significant consideration.

Moreover, there are issues such as explainability and transparency; the black-box nature of AI models is a public concern. For certain applications, such as finance, healthcare, and justice, the decision-making process of models needs to be explainable and traceable, while existing deep learning models often lack transparency. Explaining the decision-making process of models and providing reliable explanations remains a challenge.

Additionally, many AI projects have unclear business models, which leaves many AI entrepreneurs feeling lost.

Dilemmas Faced by the Web3 Industry#

In the Web3 industry, there are also many dilemmas that need to be addressed, whether it is on-chain data analysis, the poor user experience of Web3 products, or smart contract vulnerabilities and hacker attacks; there is much room for improvement. As a tool for enhancing productivity, AI has significant potential in these areas.

First, there is the need for improvement in data analysis and prediction capabilities: The application of AI technology in data analysis and prediction has brought significant impacts to the Web3 industry. Through intelligent analysis and mining using AI algorithms, Web3 platforms can extract valuable information from massive amounts of data and make more accurate predictions and decisions. This is particularly important for risk assessment, market forecasting, and asset management in decentralized finance (DeFi).

Additionally, improvements in user experience and personalized services can be achieved: The application of AI technology enables Web3 platforms to provide better user experiences and personalized services. By analyzing and modeling user data, Web3 platforms can offer personalized recommendations, customized services, and intelligent interactive experiences. This helps to enhance user engagement and satisfaction, promoting the development of the Web3 ecosystem. For example, many Web3 protocols integrate AI tools like ChatGPT to better serve users.

In terms of security and privacy protection, the application of AI also has profound implications for the Web3 industry. AI technology can be used to detect and defend against cyberattacks, identify abnormal behaviors, and provide stronger security guarantees. Additionally, AI can be applied to data privacy protection through technologies such as data encryption and privacy computing, safeguarding users' personal information on Web3 platforms. In the auditing of smart contracts, AI technology can be used for automated contract auditing and vulnerability detection, improving the security and reliability of contracts.

It is evident that AI can participate in and provide support in many aspects of the dilemmas and potential improvements faced by the Web3 industry.

Analysis of the Current State of AI+Web3 Projects#

Projects that combine AI and Web3 mainly focus on two major aspects: utilizing blockchain technology to enhance the performance of AI projects and using AI technology to improve Web3 projects.

Around these two aspects, a large number of projects have emerged exploring this path, including Io.net, Gensyn, Ritual, and various others. This article will analyze the current status and development of different sub-tracks of AI supporting Web3 and Web3 supporting AI.

Web3 Supporting AI#

Decentralized Computing Power#

Since OpenAI launched ChatGPT at the end of 2022, it has ignited a wave of interest in AI. Within five days of its launch, the user count reached one million, while Instagram took about two and a half months to reach the same milestone. Following that, ChatGPT's growth was rapid, reaching 100 million monthly active users within two months, and by November 2023, it had reached 100 million weekly active users. With the advent of ChatGPT, the AI field quickly exploded from a niche track into a highly regarded industry.

According to a Trendforce report, ChatGPT requires 30,000 NVIDIA A100 GPUs to operate, and future GPT-5 will require even more computational power. This has sparked an arms race among AI companies, as only those with sufficient computing power can secure enough momentum and advantage in the AI battle, leading to a shortage of GPUs.

Before the rise of AI, Nvidia, the largest GPU provider, had customers concentrated in the three major cloud services: AWS, Azure, and GCP. With the rise of artificial intelligence, a large number of new buyers emerged, including major tech companies like Meta and Oracle, as well as data platforms and AI startups, all joining the race to hoard GPUs for training AI models. Large tech companies like Meta and Tesla significantly increased their purchases for custom AI models and internal research, and foundational model companies like Anthropic and data platforms like Snowflake and Databricks also purchased more GPUs to help their clients provide AI services.

As Semi Analysis noted last year, there are "GPU rich" and "GPU poor" companies. A handful are GPU rich, with over 20,000 A100/H100 GPUs, allowing team members to use 100 to 1,000 GPUs for their projects; these companies either operate clouds or build their own LLMs, and include OpenAI, Google, Meta, Anthropic, Inflection, Tesla, Oracle, and Mistral.

However, most companies fall into the GPU-poor category, struggling with far fewer GPUs and spending considerable time and effort on work that does little to advance the ecosystem. This situation is not limited to startups; some of the most well-known AI companies (Hugging Face, Databricks/MosaicML, Together, and even Snowflake) have fewer than 20,000 A100/H100 GPUs. These companies have world-class technical talent but are constrained by GPU supply, putting them at a disadvantage in the AI competition compared to larger companies.

This shortage is not limited to the "GPU poor"; even by the end of 2023, the leading AI player OpenAI had to shut down paid registrations for weeks due to insufficient GPU availability while procuring more GPU supplies.

It is evident that the rapid development of AI has led to a severe mismatch between the demand and supply sides of GPUs, with the issue of supply not meeting demand becoming urgent.

To address this issue, some Web3 projects have begun to leverage the characteristics of Web3 technology to provide decentralized computing power services, including Akash, Render, Gensyn, and others. These projects share a common goal of incentivizing users to provide idle GPU computing power through tokens, becoming the supply side of computing power to support AI clients.

The supply side can be categorized into three main areas: cloud service providers, cryptocurrency miners, and enterprises.

Cloud service providers include large cloud service providers (like AWS, Azure, GCP) and GPU cloud service providers (like Coreweave, Lambda, Crusoe, etc.), where users can resell idle computing power from cloud service providers to earn income. Cryptocurrency miners, following Ethereum's transition from PoW to PoS, have also become an important potential supply side with idle GPU computing power. Additionally, large enterprises like Tesla and Meta, which have purchased large quantities of GPUs for strategic reasons, can also provide idle GPU computing power as a supply side.

Currently, players in this space can be roughly divided into two categories: those using decentralized computing power for AI inference and those using it for AI training. The former includes Render (which, while focused on rendering, can also provide AI computing power), Akash, Aethir, etc.; the latter includes io.net (which can support both inference and training) and Gensyn, with the main difference being the varying requirements for computing power.

Let's first discuss the projects focused on AI inference. These projects attract users to participate in providing computing power through token incentives, then offer the computing power network services to the demand side, thus matching idle computing power supply with demand. An introduction and analysis of these types of projects can be found in our previous DePIN research report from Ryze Labs.

The core point is that through a token incentive mechanism, projects first attract suppliers and then attract users, thus achieving the project's cold start and core operational mechanism, allowing for further expansion and development. In this cycle, the supply side receives more valuable token rewards, while the demand side benefits from cheaper and more cost-effective services. The project's token value aligns with the growth of participants on both the supply and demand sides, and as token prices rise, more participants and speculators are attracted to join, forming value capture.
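
As a rough illustration of this flywheel, the toy simulation below couples supplier growth, demand growth, and token price with made-up elasticities. It is not a model of any real project's tokenomics; it only shows how the incentive loop described above is meant to reinforce itself.

```python
# Toy flywheel simulation for a decentralized compute network.
# All starting values and elasticities are illustrative assumptions,
# not data from any real project.

def simulate_flywheel(rounds: int = 10) -> None:
    token_price = 1.0   # arbitrary starting token price
    suppliers = 100     # GPU providers currently in the network
    demand = 50         # compute jobs requested per round

    for r in range(1, rounds + 1):
        # Suppliers join when token rewards look attractive.
        rewards_per_supplier = demand * token_price / suppliers
        suppliers += int(20 * rewards_per_supplier)   # assumed elasticity

        # More suppliers -> cheaper, more available compute -> more demand.
        demand += int(0.05 * suppliers)               # assumed elasticity

        # Demand-side spending creates buy pressure on the token.
        token_price *= 1 + 0.02 * demand / suppliers

        print(f"round {r:2d}: suppliers={suppliers:4d} "
              f"demand={demand:4d} price={token_price:.2f}")

if __name__ == "__main__":
    simulate_flywheel()
```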

The other category uses decentralized computing power for AI training, such as Gensyn and io.net (which can support both AI training and inference). In fact, the operational logic of these projects is not fundamentally different from those focused on AI inference; they still attract supply side participation to provide computing power through token incentives for the demand side to use.

Among them, io.net, as a decentralized computing power network, currently has over 500,000 GPUs, performing exceptionally well among decentralized computing power projects. Additionally, it has integrated the computing power of Render and Filecoin, continuously developing its ecosystem.

Furthermore, Gensyn facilitates the allocation and rewarding of machine learning tasks through smart contracts to achieve AI training. The cost of machine learning training on Gensyn is reportedly around $0.40 per hour, significantly lower than the more than $2 per hour on AWS and GCP.

Gensyn's system includes four participants: submitters, executors, validators, and reporters (a simplified sketch of this workflow follows the list below).

  • Submitters: Demand users are consumers of tasks, providing tasks to be computed and paying for AI training tasks.
  • Executors: Executors perform the model training tasks and generate proofs of task completion for validators to check.
  • Validators: Validators link the non-deterministic training process with deterministic linear computations, comparing the executors' proofs with expected thresholds.
  • Reporters: Reporters check the validators' work and raise challenges to earn rewards when issues are found.
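
To make this division of labor concrete, here is a minimal, self-contained sketch in Python. It only mirrors the four roles described above; the data structures, the proof check, and the challenge logic are simplifying assumptions, not Gensyn's actual protocol.

```python
# Highly simplified sketch of a Gensyn-style task flow.
# Role names follow the article; everything else is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    payment: float          # paid by the submitter
    expected_metric: float  # threshold the validator compares against

@dataclass
class Proof:
    task_id: int
    reported_metric: float  # e.g. final training loss claimed by the executor

def executor_train(task: Task) -> Proof:
    # In reality this would run the training job and produce a verifiable proof.
    return Proof(task.task_id, reported_metric=0.42)

def validator_check(task: Task, proof: Proof) -> bool:
    # Compare the executor's claimed result against the expected threshold.
    return proof.reported_metric <= task.expected_metric

def reporter_challenge(validator_ok: bool, independent_ok: bool) -> bool:
    # The reporter re-checks the validator and raises a challenge on mismatch.
    return validator_ok != independent_ok

# Submitter posts a task and pays for it; the other roles process it in turn.
task = Task(task_id=1, payment=0.4, expected_metric=0.5)
proof = executor_train(task)
accepted = validator_check(task, proof)
challenged = reporter_challenge(accepted, independent_ok=True)
print(f"accepted={accepted}, challenged={challenged}")
```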

It is clear that Gensyn aims to become a large-scale, cost-effective computing protocol for global deep learning models. However, across this track, why do most projects choose to use decentralized computing power for AI inference rather than training?
Here, I will help those unfamiliar with AI training and inference understand the differences between the two:

  • AI Training: If we compare artificial intelligence to a student, training is akin to providing the student with a wealth of knowledge and examples, which can also be understood as the data we often refer to. The AI learns from these knowledge examples. Since learning inherently requires understanding and memorizing vast amounts of information, this process demands significant computational power and time.
  • AI Inference: So what is inference? It can be understood as using the knowledge learned to solve problems or take exams. During the inference phase, the AI uses the knowledge it has learned to answer questions, rather than acquiring new knowledge, so the computational requirements during inference are much lower.

It is evident that the computational power requirements for the two are vastly different. The feasibility of using decentralized computing power for AI inference and training will be further analyzed in the challenges section.
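
The gap can be made concrete with a rough back-of-the-envelope count of multiply-accumulate operations. The sketch below uses arbitrary layer sizes, dataset size, and epoch count purely for illustration; real workloads differ, but the orders of magnitude tell the story.

```python
# Back-of-the-envelope comparison of training vs. inference compute
# for a toy single-layer model y = x @ W. All sizes are arbitrary.

n_samples, d_in, d_out = 10_000, 512, 512   # dataset and layer sizes
epochs = 100                                # training passes over the data

forward_macs = n_samples * d_in * d_out     # multiply-accumulates per epoch
backward_macs = 2 * forward_macs            # grads w.r.t. weights and inputs
training_macs = epochs * (forward_macs + backward_macs)

inference_macs = d_in * d_out               # one query, one forward pass

print(f"training : ~{training_macs:,} MACs (all data, all epochs)")
print(f"inference: ~{inference_macs:,} MACs (a single query)")
print(f"ratio    : ~{training_macs / inference_macs:,.0f}x")
```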

Additionally, there are projects like Ritual that aim to combine distributed networks with model creators, maintaining decentralization and security. Its first product, Infernet, allows smart contracts on the blockchain to access AI models off-chain, enabling such contracts to access AI in a manner that preserves verification, decentralization, and privacy.

The coordinator of Infernet is responsible for managing the behavior of nodes in the network and responding to computation requests from consumers. When users utilize Infernet, tasks such as inference and proof are performed off-chain, with output results returned to the coordinator and ultimately passed to consumers on-chain through contracts.
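
A rough sketch of that request flow is shown below. All names and fields here are hypothetical and only illustrate the general pattern of an on-chain consumer obtaining off-chain inference through a coordinator; they are not Infernet's actual interfaces.

```python
# Hypothetical sketch of an off-chain coordinator bridging a contract and an AI model.

from dataclasses import dataclass

@dataclass
class ComputeRequest:
    request_id: int
    caller_contract: str   # on-chain consumer that wants the result
    payload: str           # e.g. a prompt or model input

def run_model_off_chain(payload: str) -> str:
    # Placeholder for real model inference executed by an off-chain node.
    return f"result-for:{payload}"

def coordinator_handle(request: ComputeRequest) -> dict:
    # 1. Assign the request to a node and run inference off-chain.
    output = run_model_off_chain(request.payload)
    # 2. Package the output (plus any proof) so it can be delivered back
    #    on-chain to the consumer contract via a callback transaction.
    return {
        "request_id": request.request_id,
        "deliver_to": request.caller_contract,
        "output": output,
    }

print(coordinator_handle(ComputeRequest(1, "0xConsumerContract", "classify this tx")))
```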

In addition to decentralized computing power networks, there are also decentralized bandwidth networks like Grass, which aim to enhance data transmission speed and efficiency. Overall, the emergence of decentralized computing power networks provides a new possibility for the supply side of AI computing power, propelling AI forward.

Decentralized Algorithm Models#

As mentioned in Chapter 2, the three core elements of AI are computing power, algorithms, and data. Since computing power can form a supply network through decentralization, can algorithms also adopt a similar approach to form a supply network for algorithm models?
Before analyzing the projects in this track, let’s first understand the significance of decentralized algorithm models. Many may wonder, since OpenAI already exists, why is there a need for a decentralized algorithm network?

Essentially, a decentralized algorithm network is a decentralized marketplace for AI algorithm services that connects many different AI models, each with its own expertise and skills. When users pose questions, the market selects the most suitable AI model to provide the answer. ChatGPT, for example, is a single AI model developed by OpenAI that can understand and generate human-like text.

In simple terms, ChatGPT is like a highly capable student helping to solve various types of problems, while a decentralized algorithm network resembles a school with many students helping to solve problems. Although this student is currently very capable, over a longer period, a school that can recruit students globally has tremendous potential.

Currently, in the field of decentralized algorithm models, there are also some projects attempting to explore this area. The representative project Bittensor will be used as a case study to help understand the development of this niche field.

In Bittensor, the supply side of algorithm models (or miners) contributes their machine learning models to the network. These models can analyze data and provide insights. Model providers are rewarded with the cryptocurrency token TAO for their contributions.

To ensure the quality of answers to questions, Bittensor employs a unique consensus mechanism to ensure the network reaches a consensus on the best answers. When a question is posed, multiple model miners provide answers. The validators in the network then begin their work to determine the best answer and send it back to the user.

The TAO token in Bittensor plays two main roles throughout the process: it incentivizes miners to contribute algorithm models to the network, and users must spend tokens to pose questions and have the network complete tasks.
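
The query-answer-score-reward loop can be sketched as follows. The scoring function and reward split are deliberately naive placeholders, not Bittensor's actual Yuma consensus or TAO emission schedule; the sketch only illustrates the flow described above.

```python
# Simplified sketch of the query -> answer -> ranking -> reward flow.
# Scoring and reward logic are illustrative assumptions only.

def miner_a(question: str) -> str:
    return "answer from model A"

def miner_b(question: str) -> str:
    return "a longer, more detailed answer from model B"

def validator_score(answer: str) -> float:
    # Stand-in quality metric; real validators evaluate answers far more carefully.
    return float(len(answer))

def ask_network(question: str, reward_pool: float = 1.0):
    miners = {"miner_a": miner_a, "miner_b": miner_b}
    answers = {name: fn(question) for name, fn in miners.items()}
    scores = {name: validator_score(ans) for name, ans in answers.items()}
    total = sum(scores.values())
    # Miners are paid from the reward pool in proportion to their scores.
    rewards = {name: reward_pool * s / total for name, s in scores.items()}
    best = max(scores, key=scores.get)
    return answers[best], rewards

best_answer, tao_rewards = ask_network("What is a rollup?")
print(best_answer)
print(tao_rewards)
```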

Since Bittensor is decentralized, anyone with internet access can join the network, either as a user posing questions or as a miner providing answers. This allows more people to utilize powerful artificial intelligence.

In summary, using networks like Bittensor as an example, the field of decentralized algorithm models has the potential to create a more open and transparent landscape in which AI models can be trained, shared, and utilized in a secure and decentralized manner. Decentralized algorithm model networks like BasedAI are attempting something similar; an interesting aspect of BasedAI is its use of ZK to protect users' data privacy when interacting with models, which is discussed further in the fourth subsection below.

As decentralized algorithm model platforms develop, they will enable smaller companies to compete with large organizations in utilizing top-tier AI tools, potentially having a significant impact across various industries.

Decentralized Data Collection#

For training AI models, a large supply of data is essential. However, most Web2 companies still monopolize user data, with platforms like X, Reddit, TikTok, Snapchat, Instagram, and YouTube prohibiting data collection for AI training. This has become a significant obstacle to the development of the AI industry.

On the other hand, some Web2 platforms sell user data to AI companies without sharing any profits with users. For example, Reddit reached a $60 million agreement with Google to allow Google to train AI models on its posts. This has led to the monopolization of data collection rights by large capital and data entities, pushing the industry towards an ultra-capital-intensive direction.

In response to this situation, some projects are leveraging Web3 to achieve decentralized data collection through token incentives. For instance, in PublicAI, users can participate in two roles:

  • One role is as AI data providers, where users can find valuable content on X, tag the official PublicAI account, and use #AI or #Web3 as classification tags to send content to the PublicAI data center for data collection.
  • The other role is as data validators, where users can log into the PublicAI data center to vote for the most valuable data for AI training.

In return, users can earn token incentives through these contributions, fostering a win-win relationship between data contributors and the AI industry.

In addition to projects like PublicAI that specifically collect data for AI training, many other projects are also engaging in decentralized data collection through token incentives. For example, Ocean collects user data through data tokenization to serve AI, Hivemapper collects map data through users' vehicle-mounted cameras, Dimo collects user vehicle data, and WiHi collects weather data. These projects that collect data through decentralization are also potential supply sides for AI training, so broadly speaking, they can also be included in the paradigm of Web3 supporting AI.

ZK Protecting User Privacy in AI#

In addition to the advantages of decentralization, blockchain technology also brings a significant benefit: zero-knowledge proofs. Through zero-knowledge technology, privacy can be protected while achieving information verification.

In traditional machine learning, data typically needs to be stored and processed centrally, which may lead to risks of data privacy breaches. On the other hand, methods for protecting data privacy, such as data encryption or data de-identification, may limit the accuracy and performance of machine learning models.

The technology of zero-knowledge proofs can help address this dilemma, resolving the conflict between privacy protection and data sharing. ZKML (Zero-Knowledge Machine Learning) allows for the training and inference of machine learning models without revealing the original data. Zero-knowledge proofs enable the features of data and the results of models to be proven correct without disclosing the actual data content.

The core goal of ZKML is to achieve a balance between privacy protection and data sharing. It can be applied in various scenarios, such as medical health data analysis, financial data analysis, and cross-organizational collaboration. By using ZKML, individuals can protect the privacy of their sensitive data while sharing it with others to gain broader insights and collaborative opportunities without worrying about the risk of data privacy breaches.

Currently, this field is still in its early stages, with most projects still exploring. For example, BasedAI has proposed a decentralized approach that seamlessly integrates FHE with LLM to maintain data confidentiality. By embedding privacy into its distributed network infrastructure using zero-knowledge large language models (ZK-LLM), it ensures that user data remains private throughout the network's operation.

Here, let me briefly explain what Fully Homomorphic Encryption (FHE) is. Fully homomorphic encryption is a type of encryption technology that allows computations to be performed on encrypted data without needing to decrypt it. This means that various mathematical operations (such as addition, multiplication, etc.) performed on FHE-encrypted data can be executed while keeping the data encrypted, yielding results equivalent to those obtained from performing the same operations on the original unencrypted data, thus protecting user data privacy.
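
The homomorphic idea is easy to demonstrate in miniature. The toy below uses textbook RSA, which is insecure and only multiplicatively homomorphic, so it is far weaker than FHE, but it shows the key property: operating on ciphertexts yields the encryption of the result you would get by operating on the plaintexts.

```python
# Toy demonstration of a homomorphic property using textbook RSA.
# This is NOT FHE and NOT secure; it only illustrates the idea that
# computing on ciphertexts can match computing on plaintexts.

p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n

# ...and the decryption equals the product of the original plaintexts.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))      # 77
```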

Additionally, beyond the four categories mentioned above, there are also blockchain projects like Cortex that support executing AI programs on-chain. Executing machine learning programs on traditional blockchains faces a fundamental challenge: their virtual machines are extremely inefficient at running even non-trivial machine learning models, so most people believe running AI on a blockchain is impossible. The Cortex Virtual Machine (CVM), however, utilizes GPUs to execute AI programs on-chain and is compatible with the EVM. In other words, the Cortex chain can execute all Ethereum DApps and integrate AI machine learning into those DApps, allowing machine learning models to run in a decentralized, immutable, and transparent manner, with network consensus verifying every step of AI inference.

AI Supporting Web3#

In the collision of AI and Web3, in addition to Web3 supporting AI, the assistance of AI to the Web3 industry is also worth noting. The core contribution of artificial intelligence lies in enhancing productivity, thus there have been many attempts in areas such as AI auditing of smart contracts, data analysis and prediction, personalized services, security, and privacy protection.

Data Analysis and Prediction#

Currently, many Web3 projects are beginning to integrate existing AI services (such as ChatGPT) or develop their own to provide data analysis and prediction services for Web3 users. The scope is very broad, including providing investment strategies through AI algorithms, on-chain analysis AI tools, price and market predictions, and more.

For example, Pond uses AI graph algorithms to predict which tokens may offer alpha, providing investment assistance to users and institutions. BullBear AI trains on users' historical data, price trends, and market movements to provide the most accurate information possible in support of price-trend predictions, helping users profit.

There are also platforms like Numerai, which is an investment competition platform where participants use AI and large language models to predict stock markets, utilizing the platform's free high-quality data to train models and submit predictions daily. Numerai calculates the performance of these predictions over the next month, and participants can stake NMR on their models to earn returns based on their performance.

Additionally, on-chain data analysis platforms like Arkham also integrate AI into their services. Arkham links blockchain addresses to entities such as exchanges, funds, and whales, surfacing key data and analysis to give users a decision-making edge. Its AI component, Arkham Ultra, matches addresses to real-world entities using algorithms developed over three years with support from core contributors of Palantir and founders of OpenAI.

Personalized Services#

In Web2 projects, AI has many application scenarios in search and recommendation fields to serve users' personalized needs. The same is true for Web3 projects, where many project teams optimize user experience by integrating AI.

For instance, the well-known data analysis platform Dune recently launched the Wand tool, which helps users write SQL queries using large language models. Through the Wand Create feature, users can automatically generate SQL queries based on natural language questions, making it very convenient for users who do not understand SQL to search.
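
The general pattern behind such natural-language-to-SQL assistants looks roughly like the sketch below. The `call_llm` function is a stub standing in for a hosted language model, and the table name is only an example; this is not Dune's actual Wand implementation.

```python
# General pattern of a "natural language to SQL" assistant.
# `call_llm` is a placeholder stub, not any platform's real API.

def call_llm(prompt: str) -> str:
    # In practice this would call a hosted large language model.
    # A canned response keeps the sketch self-contained and runnable.
    return ("SELECT date_trunc('day', block_time) AS day, COUNT(*) AS tx_count\n"
            "FROM ethereum.transactions\n"
            "WHERE block_time > now() - interval '7' day\n"
            "GROUP BY 1 ORDER BY 1;")

def question_to_sql(question: str, schema_hint: str) -> str:
    prompt = (
        "You are a SQL assistant. Using the schema below, write one SQL query "
        "that answers the user's question.\n"
        f"Schema: {schema_hint}\n"
        f"Question: {question}\n"
    )
    return call_llm(prompt)

sql = question_to_sql(
    "How many Ethereum transactions happened each day this week?",
    "ethereum.transactions(block_time timestamp, hash varchar, ...)",
)
print(sql)
```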

Moreover, some Web3 content platforms have begun to integrate ChatGPT for content summarization. For example, the Web3 media platform Followin integrates ChatGPT to summarize viewpoints and recent developments in a given field. The Web3 encyclopedia platform IQ.wiki, which aims to become the internet's primary source of objective, high-quality knowledge about blockchain technology and cryptocurrencies and to make blockchain more discoverable and accessible globally, integrates GPT-4 to summarize wiki articles. And the LLM-based search engine Kaito aims to become a Web3 search platform, changing how information is accessed in Web3.

In terms of content creation, there are projects like NFPrompt that reduce user creation costs. NFPrompt allows users to generate NFTs more easily through AI, thereby lowering the cost of creation and providing many personalized services in the creative process.

AI Auditing of Smart Contracts#

In the Web3 field, auditing smart contracts is also a very important task. Using AI to audit smart contract code can more efficiently and accurately identify vulnerabilities in the code.

As Vitalik has noted, one of the biggest challenges in the cryptocurrency field is the errors in our code. A promising possibility is that artificial intelligence could significantly simplify the use of formal verification tools to prove that code meets specific properties. If this can be achieved, we may eventually have an essentially bug-free EVM (Ethereum Virtual Machine). The more errors are reduced, the more secure the space becomes, and AI can be very helpful in achieving this.

For example, the 0x0.ai project offers an AI smart contract auditor, a tool that uses advanced algorithms to analyze smart contracts and identify potential vulnerabilities or issues that could lead to fraud or other security risks. Auditors use machine learning techniques to identify patterns and anomalies in the code, flagging potential issues for further review.
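
A minimal sketch of this kind of AI-assisted review is shown below: cheap pattern checks run first, then an LLM pass (stubbed out here) flags anything the rules miss. The patterns and the review text are illustrative assumptions, not 0x0.ai's tool.

```python
# Minimal sketch of AI-assisted contract review: rule-based pattern checks
# plus an LLM review step (stubbed). Illustrative only.

import re

RISK_PATTERNS = {
    r"\btx\.origin\b": "tx.origin used for auth (phishing risk)",
    r"\.call\{value:": "low-level call with value (check reentrancy guards)",
    r"\bselfdestruct\b": "selfdestruct present (funds can be destroyed)",
}

def rule_based_flags(source: str) -> list[str]:
    return [msg for pattern, msg in RISK_PATTERNS.items()
            if re.search(pattern, source)]

def llm_review(source: str) -> str:
    # Placeholder for a call to a large language model asking for a
    # vulnerability review of the contract source.
    return "LLM review: verify access control and reentrancy protection on withdraw()."

contract_source = """
function withdraw(uint amount) external {
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}
"""

for finding in rule_based_flags(contract_source) + [llm_review(contract_source)]:
    print("-", finding)
```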

In addition to the three categories mentioned above, there are other native cases of AI assisting the Web3 field. For example, PAAL helps users create personalized AI bots that can be deployed on Telegram and Discord to serve Web3 users, and the AI-driven multi-chain DEX aggregator Hera uses AI to find the best trading paths between the widest range of tokens and any token pair. Overall, AI's assistance to Web3 sits mainly at the tool layer.

Limitations and Challenges of AI+Web3 Projects#

Real Obstacles in Decentralized Computing Power#

Currently, many of the Web3 projects supporting AI are focusing on decentralized computing power, promoting global users to become the supply side of computing power through token incentives, which is a very interesting innovation. However, on the other hand, there are some real issues that need to be addressed:

Compared to centralized computing service providers, decentralized computing products typically rely on nodes and participants distributed around the globe to provide computational resources. Because the network connections between these nodes can suffer latency and instability, performance and stability may be lower than those of centralized computing products.

Additionally, the availability of decentralized computing products is affected by the degree of matching between supply and demand. If there are not enough suppliers or demand is too high, it may lead to resource shortages or an inability to meet user needs.

Finally, compared to centralized computing products, decentralized computing products usually involve more technical details and complexities. Users may need to understand and deal with distributed networks, smart contracts, and cryptocurrency payments, increasing the cost of understanding and using these products.

After in-depth discussions with many decentralized computing project teams, we found that current decentralized computing is still largely limited to AI inference rather than AI training.

Next, I will help everyone understand the reasons behind this through four small questions:

  1. Why do most decentralized computing projects choose to do AI inference rather than AI training?
  2. What exactly is Nvidia's strength? What are the reasons that make decentralized computing training difficult?
  3. What will the endgame of decentralized computing (Render, Akash, io.net, etc.) look like?
  4. What will the endgame of decentralized algorithm models (Bittensor) look like?

Now, let's unravel these layers one by one:

  1. Looking across this track, most decentralized computing projects choose to do AI inference rather than training, primarily due to the differing requirements for computing power and bandwidth.

To recap the student analogy from earlier: training is like feeding the student vast amounts of knowledge and examples (the data we often refer to), which demands enormous computing power and time, while inference is like using that learned knowledge to answer questions or sit an exam, which requires far less computation.

It is easy to see that the difficulty between the two fundamentally lies in the fact that large model AI training requires an enormous amount of data and has extremely high bandwidth requirements for fast data communication. Therefore, the current implementation of decentralized computing for training is extremely challenging, while inference has much lower data and bandwidth requirements, making it more feasible.

For large models, stability is paramount; if training is interrupted, it needs to be retrained, leading to high sunk costs. On the other hand, demands that require relatively lower computing power can be realized, such as the AI inference mentioned above, or some specific scenarios of small to medium-sized model training are possible. In decentralized computing networks, there are some relatively large node service providers that can serve these relatively large computing demands.

  2. So where are the bottlenecks in data and bandwidth? Why is decentralized training difficult to achieve?

This involves two key elements of large model training: single-card computing power and multi-card parallelism.

Single-card computing power: Currently, all centers that need to train large models can be referred to as supercomputing centers. To facilitate understanding, we can compare a supercomputing center to the human body, where the underlying unit, the GPU, is like a cell. If a single cell (GPU) has strong computing power, then the overall computing power (single cell × quantity) can also be strong.

Multi-card parallelism: Training a large model involves enormous amounts of data and parameters; a supercomputing center training a large model needs at least tens of thousands of A100 GPUs as a baseline, and all of these cards must be mobilized for training. However, training a large model is not simply a matter of training sequentially on the first A100 and then on the second; rather, different parts of the model are trained on different GPUs, and training part A may require results from part B, thus involving multi-card parallelism.
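
The need for multi-card parallelism, and for fast links between cards, can be illustrated with a toy example. The sketch below splits a single linear layer column-wise across two simulated "cards"; each computes its shard independently, and the results must then be exchanged and combined, which is exactly the communication step whose bandwidth NVLink exists to provide. Sizes are arbitrary.

```python
import numpy as np

# Toy tensor parallelism: one linear layer y = x @ W is split column-wise
# across two simulated "cards". Each card computes its shard independently,
# but the partial results must then be gathered -- that exchange is the
# inter-GPU communication step discussed above.

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512))        # a batch of activations
W = rng.standard_normal((512, 1024))     # full weight matrix

# Shard the weights across two devices.
W_card0, W_card1 = W[:, :512], W[:, 512:]

# Each "card" computes its partial output locally.
y_card0 = x @ W_card0
y_card1 = x @ W_card1

# Communication step: gather the partial results into the full output.
y_parallel = np.concatenate([y_card0, y_card1], axis=1)

assert np.allclose(y_parallel, x @ W)
print(y_parallel.shape)                  # (8, 1024)
```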

Why is Nvidia so powerful, with its market value soaring, while AMD and Chinese companies like Huawei and Horizon find it difficult to catch up? The core issue is not single-card computing power itself, but two other factors: the CUDA software ecosystem and NVLink multi-card communication.

On one hand, having a software ecosystem that can adapt to hardware is very important, such as Nvidia's CUDA system. Building a new system is challenging, akin to creating a new language; the replacement cost is very high.

On the other hand, multi-card communication essentially involves the input and output of information between cards, and how to parallelize and transmit data is crucial. Because NVLink is proprietary, Nvidia and AMD cards cannot be linked together; in addition, NVLink limits the physical distance between GPUs, requiring them to sit within the same supercomputing center. This makes it difficult for decentralized computing power distributed around the world to form a cohesive cluster for large model training.

The first point explains why AMD and Chinese companies like Huawei and Horizon find it difficult to catch up; the second explains why decentralized training is hard to achieve.

  3. What will the endgame of decentralized computing look like?
    Decentralized computing currently struggles to conduct large model training, primarily because stability is crucial for large model training. If training is interrupted, it requires retraining, leading to high sunk costs. The requirements for multi-card parallelism are high, and bandwidth is limited by physical distance. Nvidia achieves multi-card communication through NVLink; however, within a supercomputing center, NVLink restricts the physical distance between GPUs, making it difficult for dispersed computing power to form a computing cluster for large model training.

On the other hand, demands that require relatively lower computing power can be realized, such as AI inference or some specific scenarios of small to medium-sized model training. In decentralized computing networks, there are some relatively large node service providers that have the potential to serve these relatively large computing demands. Additionally, edge computing scenarios like rendering are also relatively easier to implement.

  4. What will the endgame of decentralized algorithm models look like?
    The endgame of decentralized algorithm models depends on the future of AI. I believe the future AI battle may feature one or two closed-source model giants (like ChatGPT) alongside a multitude of flourishing models. In this context, application layer products do not need to be tied to a single large model but can collaborate with multiple large models. In this regard, the model of Bittensor has significant potential.

The Combination of AI and Web3 is Relatively Rough, Not Achieving 1+1>2#

Currently, in projects that combine Web3 and AI, especially AI supporting Web3, most projects still only use AI superficially, without reflecting a genuinely deep integration of AI and cryptocurrency. This superficial application manifests mainly in two aspects:

First, whether AI is used for data analysis and prediction, in recommendation and search scenarios, or for code auditing, these integrations differ little from what Web2 projects do with AI. The projects merely use AI to improve efficiency and perform analysis, without demonstrating native integration or innovative solutions between AI and cryptocurrency.

Secondly, many Web3 teams' integration with AI is more about marketing, purely leveraging the concept of AI. They only apply AI technology in very limited areas and then begin to promote AI trends, creating a false impression of a close relationship between the project and AI. However, in terms of genuine innovation, these projects still have significant gaps.

Despite these limitations in current Web3 and AI projects, we should recognize that this is merely the early stage of development. In the future, we can expect more in-depth research and innovation to achieve a closer integration between AI and cryptocurrency, creating more native and meaningful solutions in areas such as finance, decentralized autonomous organizations, prediction markets, and NFTs.

Token Economics as a Buffer for AI Project Narratives#

As mentioned at the beginning, the commercial model dilemma of AI projects arises because more and more large models are gradually becoming open-source. Currently, many AI+Web3 projects often find it difficult to develop and raise funds in Web2, so they choose to overlay Web3 narratives and token economics to promote user participation.

However, the key question remains whether the integration of token economics genuinely helps AI projects address real needs, or is merely a superficial narrative chasing short-term value.

Currently, most AI+Web3 projects are far from being practical. It is hoped that more grounded and thoughtful teams can not only use tokens as a hype tool for AI projects but genuinely meet actual demand scenarios.

Conclusion#

Currently, many cases and applications of AI+Web3 projects have emerged. First, AI technology can provide more efficient and intelligent application scenarios for Web3. Through AI's data analysis and prediction capabilities, Web3 users can have better tools in investment decision-making scenarios; in addition, AI can audit smart contract code, optimize the execution process of smart contracts, and improve the performance and efficiency of blockchains. At the same time, AI technology can also provide more precise and intelligent recommendations and personalized services for decentralized applications, enhancing user experience.

Meanwhile, the decentralized and programmable characteristics of Web3 also provide new opportunities for the development of AI technology. Through token incentives, decentralized computing power projects offer new solutions to the dilemma of insufficient AI computing power, while Web3's smart contracts and distributed storage mechanisms provide broader space and resources for sharing and training AI algorithms. The user autonomy and trust mechanisms of Web3 also bring new possibilities for AI development, allowing users to choose to participate in data sharing and training, thereby improving the diversity and quality of data, further enhancing the performance and accuracy of AI models.

Although the current intersection of AI+Web3 projects is still in its early stages and faces many dilemmas, it also brings many advantages. For example, while decentralized computing power products have some drawbacks, they reduce reliance on centralized institutions, provide greater transparency and auditability, and enable broader participation and innovation. For specific use cases and user needs, decentralized computing power products may be a valuable choice; the same goes for data collection, where decentralized data collection projects also bring advantages, such as reducing reliance on single data sources, providing broader data coverage, and promoting data diversity and inclusivity. In practice, it is necessary to weigh these pros and cons and take appropriate management and technical measures to overcome challenges, ensuring that decentralized data collection projects have a positive impact on AI development.

In summary, the integration of AI+Web3 offers infinite possibilities for future technological innovation and economic development. By combining AI's intelligent analysis and decision-making capabilities with Web3's decentralization and user autonomy, we believe that a more intelligent, open, and fair economic and social system can be built in the future.

References#

https://docs.bewater.xyz/zh/aixcrypto/
https://medium.com/@ModulusLabs
https://docs.bewater.xyz/zh/aixcrypto/chapter2.html#_3-4-4-%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E6%8A%80%E6%9C%AF%E5%BC%80%E6%BA%90%E7%9A%84%E9%97%AE%E9%A2%98
https://docs.bewater.xyz/zh/aixcrypto/chapter4.html#_1-2-1-%E6%95%B0%E6%8D%AE
https://mirror.xyz/lukewasm.eth/LxhWgl-vaAoM3s_i9nCP8AxlfcLvTKuhXayBoEr00mA
https://www.galaxy.com/insights/research/understanding-intersection-crypto-ai/
https://www.theblockbeats.info/news/48410?search=1
https://www.theblockbeats.info/news/48758?search=1
https://www.theblockbeats.info/news/49284?search=1
https://www.theblockbeats.info/news/50419?search=1
https://www.theblockbeats.info/news/50464?search=1
https://www.theblockbeats.info/news/50814?search=1
https://www.theblockbeats.info/news/51165?search=1
https://www.theblockbeats.info/news/51099?search=1
https://www.techflowpost.com/article/detail_16418.html
https://blog.invgate.com/chatgpt-statistics
https://www.windowscentral.com/hardware/computers-desktops/chatgpt-may-need-30000-nvidia-gpus-should-pc-gamers-be-worried
https://www.trendforce.com/presscenter/news/20230301-11584.html
https://www.linkedin.com/pulse/great-gpu-shortage-richpoor-chris-zeoli-5cs5c/
https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini
https://news.marsbit.co/20230613141801035350.html
https://medium.com/@taofinney/bittensor-tao-a-beginners-guide-eb9ee8e0d1a4
https://www.hk01.com/%E7%B6%B2%E7%A7%91%E3%80%8A3.0%E3%80%8B/1006289/iosg-%E5%BE%9E-ai-x-web3-%E6%8A%80%E8%A1%93%E5%A0%86%E6%A3%A8%E5%B1%95%E9%96%8B-infra-%E6%96%B0%E6%95%B8%E4%BA%8B
https://arxiv.org/html/2403.01008v1
