AI DePIN Network: Distributed GPU Computing Boosts AI Development

The Intersection of AI and DePIN: The Rise of Distributed Computing Networks

Since 2023, AI and DePIN have become popular trends in the Web3 space, reaching market capitalizations of roughly $30 billion and $23 billion, respectively. This article focuses on their intersection and examines the development of the related protocols.

The Intersection of AI and DePIN

In the AI technology stack, DePIN networks give AI practical utility by supplying computing resources. The buildout by large tech companies has caused a GPU shortage, leaving other developers unable to obtain enough GPUs for their own workloads. Developers are often pushed toward centralized cloud providers, but the rigid long-term hardware contracts these require are inflexible and inefficient.

DePIN offers a more flexible and cost-effective alternative by incentivizing resource contributions with token rewards. In the AI sector, DePIN crowdsources GPU resources from individual owners and data centers, creating a unified supply for users who need access to hardware. This gives developers customized, on-demand access while earning GPU owners additional income.

There are many AI DePIN networks on the market. This article will explore the roles, goals, and achievements of each protocol to understand the differences between them.

Overview of AI DePIN Networks

Render is a pioneer among P2P networks providing GPU computing power. It initially focused on rendering for content creation and later expanded to AI compute tasks.

Key points:

  • Founded by the cloud graphics company OTOY
  • Its GPU network is used by major companies in the entertainment industry
  • Collaborates with Stability AI and others to integrate AI models into 3D rendering workflows
  • Has approved multiple compute clients so that GPUs from more DePIN networks can be integrated

Akash is positioned as a "super cloud" alternative that supports storage, GPU, and CPU computing. It offers developer-friendly tools such as container platforms and Kubernetes-managed compute nodes for seamless software deployment across environments.

Key points:

  • Handles a wide range of computing tasks, from general-purpose compute to web hosting
  • AkashML allows its GPU network to run over 15,000 models from Hugging Face
  • Hosts applications such as Mistral AI's LLM chatbot
  • Its Supercloud is used to build platforms for metaverse and AI deployment

io.net provides access to distributed GPU cloud clusters built specifically for AI and ML use cases. It aggregates GPUs from data centers, crypto miners, and other sources.

Key points:

  • The IO-SDK is compatible with frameworks such as PyTorch, and its multi-layer architecture can scale dynamically (see the sketch after this list)
  • Supports creating three different types of clusters, which can be launched within two minutes
  • Collaborates with Render, Filecoin, and others to integrate GPU resources
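
Frameworks such as PyTorch and Ray are typical of what such clusters run; the sketch below uses Ray purely to illustrate fanning GPU tasks out across a rented cluster. It is a generic example, not io.net code, and it assumes the cluster has already been provisioned.

```python
# Generic illustration of distributing GPU tasks across a cluster with Ray.
# Nothing here is io.net-specific; the cluster is assumed to already exist.
import ray

ray.init(address="auto")  # connect to an existing cluster

@ray.remote(num_gpus=1)   # request one GPU per task from the scheduler
def run_inference(batch):
    # Placeholder for real model inference on the assigned GPU.
    return [x * 2 for x in batch]

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
futures = [run_inference.remote(b) for b in batches]  # tasks spread across available GPUs
print(ray.get(futures))
```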

Gensyn provides GPU compute focused on machine learning and deep learning workloads. It claims to achieve a more efficient verification mechanism through concepts such as proof-of-learning.

Key points:

  • V100-equivalent GPUs cost about $0.40 per hour, a significant saving
  • Pre-trained base models can be fine-tuned for specific tasks
  • Aims to provide a decentralized, globally shared foundational model

Aethir is built specifically around enterprise-grade GPUs and focuses on compute-intensive fields such as AI, ML, and cloud gaming. Containers in its network act as virtual endpoints for executing cloud applications, delivering a low-latency experience.

Key points:

  • Expanded into cloud mobile services, launching a decentralized cloud smartphone in collaboration with APhone
  • Has established broad partnerships with major Web2 companies such as NVIDIA
  • Collaborates with Web3 projects such as CARV and Magic Eden

Phala Network serves as the execution layer for Web3 AI solutions. Its blockchain is a trustless cloud computing solution that addresses privacy concerns through trusted execution environments (TEEs); a simplified sketch of the attestation idea follows the list below.

Key points:

  • Acts as a co-processor protocol for verifiable computation, allowing AI agents to access on-chain resources
  • Its AI agent contracts can access top language models such as OpenAI's through Redpill
  • The roadmap includes multiple proof systems such as zk-proofs, MPC, and FHE
  • Plans future support for TEE GPUs such as the H100 to boost computing power
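
At a high level, TEE designs rest on remote attestation: the enclave produces a measurement of the code it runs, and a verifier checks that measurement against an expected value before trusting the output. The sketch below is a deliberately simplified, hypothetical illustration of that check; the report fields and keys are invented for clarity and are not Phala's actual attestation format.

```python
# Hypothetical, simplified remote-attestation check. The report layout and
# field names are invented for illustration; real TEE attestation involves
# vendor-signed quotes and certificate chains (e.g. Intel SGX DCAP).
import hashlib
import hmac

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-binary-v1").hexdigest()

def verify_attestation(report: dict, signing_key: bytes) -> bool:
    """Accept the enclave's output only if its code measurement and MAC check out."""
    measurement_ok = hmac.compare_digest(report["measurement"], EXPECTED_MEASUREMENT)
    payload = (report["measurement"] + report["output_hash"]).encode()
    expected_mac = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    mac_ok = hmac.compare_digest(report["mac"], expected_mac)
    return measurement_ok and mac_ok
```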

Project Comparison

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics rendering and AI | Cloud computing, rendering and AI | AI | AI | AI, cloud gaming and telecommunications | On-chain AI execution |
| AI Task Type | Inference | Training and inference | Training and inference | Training | Training | Execution |
| Work Pricing | Performance-based pricing | Reverse auction | Market pricing | Market pricing | Bidding system | Stake-based calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption and hashing | mTLS authentication | Data encryption | Secure mapping | Encryption | TEE |
| Work Fee | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve | Low fees | 20% per session | Proportional to staked amount |
| Security | Proof of Render | Proof of Stake | Proof of Computation | Proof of Stake | Proof of Rendering Capability | Inherited from relay chain |
| Proof of Completion | - | - | Time-locked proof | Proof of learning | Proof of rendering work | TEE proof |
| Quality Assurance | Dispute resolution | - | - | Verifiers and reporters | Checker nodes | Remote attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |

Importance

Availability of Clustering and Parallel Computing

Distributed computing frameworks implement GPU clusters to improve training efficiency and scalability. Training complex AI models demands substantial computing power and often relies on distributed computing. For example, OpenAI's GPT-4 model reportedly has over 1.8 trillion parameters and took 3-4 months to train on about 25,000 Nvidia A100 GPUs.
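
To put those figures in perspective, the short calculation below multiplies them out; the 100-day figure is simply the midpoint of the quoted 3-4 month window and is an assumption made only for this back-of-the-envelope estimate.

```python
# Back-of-the-envelope scale of GPT-4-class training, using the figures quoted above.
gpus = 25_000   # reported A100 count
days = 100      # assumed midpoint of the quoted 3-4 month window
gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} GPU-hours")                  # 60,000,000 GPU-hours
print(f"{gpu_hours / (24 * 365):,.0f} GPU-years")  # ~6,849 years on a single GPU
```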

Most of these projects have now integrated clusters for parallel computing. io.net, in collaboration with other projects, deployed more than 3,800 clusters in Q1. Although Render does not support clustering, it works similarly by decomposing a single frame so that multiple nodes process it simultaneously. Phala supports clustering of CPU workers.
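
For context on what such clusters actually run, the sketch below is a minimal data-parallel training loop using PyTorch's DistributedDataParallel, launched with one process per GPU (for example via `torchrun --nproc_per_node=4 train.py`). It is generic PyTorch with a toy model, not any project's SDK.

```python
# Minimal multi-GPU data-parallel training sketch (generic PyTorch, toy model).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")              # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(512, 512).cuda(rank), device_ids=[rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):                         # gradients sync across all GPUs each step
        x = torch.randn(32, 512, device=f"cuda:{rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```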

A cluster framework is important for AI workflow networks, but whether the number and type of clustered GPUs actually meet developers' needs is a separate question.

Data Privacy

AI model development requires large datasets, which may contain sensitive information. Samsung disabled ChatGPT internally over privacy concerns, and Microsoft's 38TB data leak further underscored the importance of AI security. A variety of data privacy methods is therefore crucial for giving data providers control over their data.

Most projects use some form of data encryption to protect privacy. Render uses encryption and hashing, io.net and Gensyn adopt data encryption, and Akash uses mTLS authentication.

io.net has partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing encrypted data to be processed without decryption. This better protects privacy than existing encryption technologies.
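
As a concrete illustration of the FHE idea, the sketch below uses the open-source TenSEAL library: arithmetic is performed directly on ciphertexts, and only the key holder can read the result. This is a generic CKKS example, not Mind Network's or io.net's actual integration, and the scheme parameters are standard tutorial values chosen for illustration.

```python
# Toy FHE example with TenSEAL (CKKS): a worker computes on encrypted data
# and never sees the plaintext. Generic illustration only.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

encrypted = ts.ckks_vector(context, [1.0, 2.0, 3.0])  # encrypted by the data owner
result = encrypted * 2.5 + encrypted                   # computed on ciphertext by a worker
print(result.decrypt())                                # ~[3.5, 7.0, 10.5], readable only with the secret key
```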

Phala Network introduces trusted execution environments (TEEs) to prevent external processes from accessing or modifying data. It also combines this with zk-proofs through its RiscZero zkVM integration.

Proof of Computation Completion and Quality Checks

Because these networks cover such a wide range of services, the delivered quality may not always meet user standards. A proof of completion shows that a GPU was actually used for the requested service, and quality checks benefit the users requesting that work.

Gensyn and Aethir generate proofs that work was completed, while io.net's proof shows that GPU performance was fully utilized without issues. Gensyn and Aethir also run quality checks: Gensyn uses validators to re-run parts of the generated proofs, with whistleblowers serving as an additional check, while Aethir uses checker nodes to assess service quality and penalizes substandard sessions. Render recommends a dispute-resolution process under which the review committee can slash problematic nodes. Phala generates TEE proofs to ensure that AI agents perform the required on-chain operations. A simplified sketch of the re-running (spot-check) idea follows.
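
The validator re-running described above can be understood as probabilistic spot-checking: re-execute a small random subset of the claimed work and reject the provider if anything disagrees. The sketch below is a simplified, generic illustration of that idea rather than any project's actual protocol; the task function, sample rate, and slashing decision are assumptions.

```python
# Simplified spot-check: a verifier re-runs a random sample of claimed results
# and flags the provider on any mismatch. Generic illustration, not a real protocol.
import random

def spot_check(claimed_results, task_fn, inputs, sample_rate=0.05, seed=None):
    """Re-run a random fraction of tasks and compare against the claimed outputs."""
    rng = random.Random(seed)
    sample_size = max(1, int(len(inputs) * sample_rate))
    for i in rng.sample(range(len(inputs)), sample_size):
        if task_fn(inputs[i]) != claimed_results[i]:  # tasks assumed deterministic
            return False                              # mismatch: reject / slash the provider
    return True                                       # sample agrees: accept with high confidence

# Hypothetical usage with a toy deterministic task.
inputs = list(range(1000))
honest_results = [x * x for x in inputs]
print(spot_check(honest_results, lambda x: x * x, inputs, seed=42))  # True
```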

Hardware Statistics

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5600 | 384 | 38177 | - | 40000+ | - |
| Number of CPUs | 114 | 14672 | 5433 | - | - | 30000+ |
| Number of H100/A100 | - | 157 | 2330 | - | 2000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (estimated) | $0.33 (estimated) | - |

Requirements for High-Performance GPUs

AI model training requires top-tier GPUs such as the Nvidia A100 and H100. The H100's inference performance is reported to be about four times that of the A100, making it the preferred choice, especially for large companies.

The decentralized GPU market must compete with Web2 not only on price but also by meeting actual demand. In 2023, Nvidia delivered 500,000 H100s to large tech companies, which makes equivalent hardware expensive and hard to source. It is therefore important to consider how much of this hardware these projects can bring in at low cost.

Different projects offer varying amounts of computing power. Akash has just over 150 H100s and A100s combined, while io.net and Aethir each have more than 2,000. Pre-trained LLMs typically require GPU clusters of anywhere from 248 to well over 2,000 GPUs, which makes the latter two projects better suited to large-model computation.

At present, decentralized GPU services cost far less than centralized ones. Gensyn and Aethir claim that A100-class hardware can be rented for under $1 per hour, though these claims still need to be borne out over time.

Although internet-connected GPU clusters offer large numbers of GPUs at low cost, their memory and interconnect bandwidth are limited compared with NVLink-connected GPUs. NVLink enables direct GPU-to-GPU communication, which suits LLMs with many parameters and large datasets.

Nevertheless, decentralized GPU networks still provide powerful computing capabilities and scalability for dynamic workload demands or users needing flexibility, creating opportunities to build more AI use cases.

Providing Consumer-Grade GPUs/CPUs

CPUs also play an important role in AI model training, handling everything from data preprocessing to memory management. Consumer-grade GPUs, meanwhile, can be used to fine-tune pre-trained models or to run small-scale training.
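
Fine-tuning fits on consumer hardware largely because parameter-efficient methods such as LoRA train only a small fraction of a model's weights. The sketch below uses the Hugging Face peft library with a small base model as a generic illustration; the model name and LoRA hyperparameters are arbitrary assumptions, not recommendations from the article.

```python
# Parameter-efficient fine-tuning sketch with LoRA via Hugging Face peft.
# Only the low-rank adapter weights are trainable, which is why a consumer GPU suffices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small base model, chosen arbitrarily
lora_config = LoraConfig(
    r=8,                        # low-rank adapter dimension (assumption)
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```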

Given that more than 85% of consumers' GPUs sit idle, projects such as Render, Akash, and io.net also serve this segment of the market. Doing so lets them focus on large-scale intensive compute, smaller-scale general rendering, or a mix of both.

Conclusion

The AI DePIN field is still relatively young and faces challenges. For example, io.net was accused of inflating its GPU count, an issue it later addressed by introducing proof-of-work verification.

Nevertheless, the number of tasks executed and the amount of hardware available on these networks have grown significantly, highlighting growing demand for alternatives to Web2 cloud providers. At the same time, the surge in hardware providers shows that this supply had previously gone underutilized. Together, this demonstrates product-market fit for AI DePIN networks, which effectively address challenges on both the demand and supply sides.

Looking ahead, AI is expected to become a booming trillion-dollar market, and these decentralized GPU networks will play a key role in providing developers with cost-effective computing alternatives. By continuously bridging the gap between demand and supply, these networks will make significant contributions to the future landscape of AI and computing infrastructure.
