The Manus model breaks through GAIA testing, triggering controversies over AI development paths and safety.


Recently, the Manus model achieved breakthrough results on the GAIA benchmark, outperforming large language models of the same tier. The result demonstrates Manus's ability to handle complex tasks such as multinational business negotiations, which involve contract analysis, strategy formulation, and proposal generation.

Manus's advantages lie mainly in three areas: dynamic goal decomposition, cross-modal reasoning, and memory-enhanced learning. It can break a complex task down into hundreds of executable subtasks, process multiple data modalities simultaneously, and use reinforcement learning to steadily improve decision-making efficiency while reducing error rates.

This development has once again sparked discussions in the industry about the path of AI development: will the future lead to Artificial General Intelligence (AGI) or Multi-Agent Systems (MAS)?

Manus's design concept suggests two possibilities:

  1. The AGI path: continuously enhancing the capabilities of a single intelligent system so that it gradually approaches human-level comprehensive decision-making.

  2. The MAS path: positioning Manus as a super-coordinator that directs thousands of specialized agents working in concert.

These two paths reflect a core tension in AI development: how to balance efficiency and safety. The closer a single system gets to AGI, the harder its decision-making process becomes to explain; multi-agent systems can distribute risk, but communication delays may cause them to miss critical decision windows.

Manus hints at the dawn of AGI, but AI safety deserves equal scrutiny

Manus's progress has also magnified the potential risks of AI development. In medical settings, an agent may need access to patients' sensitive genetic data; in financial negotiations, it may be exposed to undisclosed corporate financials. Algorithmic bias is another concern, for example unfair salary recommendations for particular groups during recruitment. In legal contract review, misjudgment rates may be higher for clauses in emerging industries. More seriously, attackers could embed crafted audio signals to interfere with Manus's judgment during negotiations.

These challenges highlight a grim reality: the smarter AI systems become, the larger their potential attack surface.

In Web3, security has always been a central concern. The "impossible triangle" (blockchain trilemma) articulated by Ethereum co-founder Vitalik Buterin, which holds that a blockchain network cannot simultaneously achieve security, decentralization, and scalability, has motivated a range of cryptographic technologies:

  1. Zero Trust Security Model: Emphasizes strict authentication and authorization for every access request.

  2. Decentralized Identity (DID): Allows entities to obtain verifiable identities without the need for centralized registration.

  3. Fully Homomorphic Encryption (FHE): Allows computation on encrypted data, protecting data privacy.
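Production FHE schemes are lattice-based and heavyweight, but the core idea, performing arithmetic directly on ciphertexts, can be illustrated with a much simpler additively homomorphic scheme. Below is a toy Paillier cryptosystem in Python; the primes are deliberately tiny demo values (an illustrative assumption, not a secure or production-grade configuration):

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic), for illustration only.
# Real deployments use primes of 1024+ bits; these demo primes are insecure.
p, q = 1789, 1931
n = p * q
n2 = n * n
g = n + 1                              # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)

def L(x):
    # The "L function" from Paillier's scheme: L(x) = (x - 1) / n
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a third party can compute a sum without ever seeing the inputs.
a, b = 1200, 345
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
```

Paillier supports only addition on ciphertexts; fully homomorphic schemes extend this to arbitrary computation, which is what makes them attractive for the AI use cases below.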

Among them, fully homomorphic encryption is considered a key technology to solve security issues in the AI era. It can play a role in the following aspects:

  • Data layer: all user inputs are processed in encrypted form, and even the AI system itself cannot decrypt the original data.

  • Algorithm layer: "encrypted model training" via FHE means that even developers cannot peer into the AI's decision-making process.

  • Collaboration layer: communication between agents uses threshold encryption, so that compromising a single node does not expose global data.
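Threshold schemes are typically built on secret sharing. As a sketch of the threshold idea only (not any specific project's implementation), here is a toy Shamir t-of-n secret sharing in Python: any t shares reconstruct the secret, while t-1 or fewer reveal nothing about it.

```python
import random

# Toy Shamir (t-of-n) secret sharing over a prime field.
# Demonstrates why compromising a single node leaks no global secret.
PRIME = 2**61 - 1          # a Mersenne prime; field must exceed the secret
t, n_shares = 3, 5         # any 3 of 5 shares reconstruct; 2 reveal nothing

def split(secret):
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789)
assert reconstruct(shares[:3]) == 123456789   # any t shares suffice
assert reconstruct(shares[2:]) == 123456789   # a different t-subset also works
```

A single share is a point on a random polynomial and is statistically independent of the secret, which is exactly the guarantee the collaboration layer relies on.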

Several Web3 security projects have explored these directions. uPort, launched on the Ethereum mainnet in 2017, was one of the earliest decentralized identity projects. NKN launched its mainnet in 2019 around a zero-trust security model. In the FHE space, Mind Network was the first to launch a mainnet and has established partnerships with several well-known institutions.

As AI technology develops rapidly, building a robust security defense system becomes ever more important. Fully homomorphic encryption not only addresses today's security issues but also lays the groundwork for a future era of strong AI. On the road to AGI, FHE is becoming an indispensable technical underpinning.
