Manus Sets a New Record on the GAIA Benchmark, Raising New Questions about AI Development and Web3 Security
Manus has made breakthrough progress on the GAIA benchmark, prompting reflection on the path of AI development.
Manus has set a new record on the GAIA benchmark, outperforming comparable large language models. In practice, this means Manus can independently complete complex tasks, such as a multinational business negotiation involving contract-clause analysis, strategy formulation, and proposal generation, even coordinating legal and financial teams. Its advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a complex task into hundreds of executable subtasks while handling multiple data types, and it uses reinforcement learning to continuously improve decision efficiency and reduce error rates.
Manus's breakthrough has once again sparked industry discussion about the development path of AI: will it converge on artificial general intelligence (AGI), or on the collaborative dominance of multi-agent systems (MAS)?
The design concept of Manus suggests two possibilities:
AGI Path: Continuously enhancing the capabilities of a single intelligent system to approach human-level comprehensive decision-making.
MAS Path: Using Manus as a super-coordinator to direct thousands of specialized agents working in concert.
The discussion of these two paths actually reflects a core issue in AI development: how to balance efficiency and safety? As a single intelligent system approaches AGI, the risk of opacity in its decision-making process also increases; while multi-agent collaboration can distribute risk, it may miss critical decision-making opportunities due to communication delays.
Manus's progress also magnifies the inherent risks of AI development. Data privacy is one example: in healthcare scenarios Manus needs access to patients' sensitive genomic data, and in financial negotiations it may handle undisclosed corporate financial information. Algorithmic bias is another, such as unfair salary suggestions for particular groups during recruitment, or high misjudgment rates on emerging-industry clauses during legal contract review. Security vulnerabilities are equally serious: attackers could inject adversarial audio signals to interfere with Manus's judgment.
These issues highlight a fact: the more intelligent the system, the wider its potential attack surface.
In the Web3 domain, security has always been a central concern. The blockchain "impossible triangle" (the difficulty of achieving security, decentralization, and scalability at the same time, also known as the blockchain trilemma) has driven the emergence of various cryptographic solutions:
Zero Trust Security Model: The core of this model is "trust no one, always verify", which involves strict authentication and authorization for every access request.
Decentralized Identity (DID): a decentralized identifier standard that lets entities obtain verifiable, persistent identities without relying on a centralized registry.
Fully Homomorphic Encryption (FHE): This is an advanced encryption technology that allows computation on encrypted data without decryption, particularly suitable for scenarios such as cloud computing and data outsourcing.
Fully homomorphic encryption in particular has the potential to become a key tool for the security problems of the AI era. It allows data to be processed while still encrypted, so even the AI system performing the computation never sees the underlying plaintext.
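To make "computing on encrypted data" concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. This is an illustration of the core idea only, not FHE itself (real FHE schemes such as BGV, CKKS, or TFHE support richer computation and are far more complex), and the tiny primes used here are purely for demonstration, not security.

```python
# Toy Paillier cryptosystem (additively homomorphic).
# Illustrative only: tiny primes, no security hardening.
import math
import random

def keygen(p=1789, q=1861):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                       # standard simple choice of generator
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:      # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def add_encrypted(pub, c1, c2):
    # Multiplying ciphertexts adds the underlying plaintexts.
    n, _ = pub
    return (c1 * c2) % (n * n)

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = add_encrypted(pub, c1, c2)
print(decrypt(pub, priv, c_sum))    # 100, computed without decrypting the inputs
```

The party holding only the public key (here, an AI service) can aggregate encrypted values without ever learning them; only the private-key holder can read the result.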
In practical applications, FHE can enhance the security of AI systems from multiple levels:
Data layer: all user inputs (including biometric data, voice, etc.) are processed in encrypted form, protecting user privacy.
Algorithm layer: models are trained on encrypted data via FHE, so even developers cannot directly observe the AI's decision-making process.
Collaboration layer: communication between agents uses threshold encryption, so compromising a single node does not leak global data.
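The threshold idea in the collaboration layer can be illustrated with Shamir secret sharing, the primitive underlying most threshold-encryption schemes: a key is split among n agents so that any t of them can reconstruct it, while any smaller group learns nothing. This is a minimal sketch over a prime field, not a full threshold-encryption protocol.

```python
# Shamir (t, n) secret sharing: split a secret among n parties so that
# any t shares reconstruct it and fewer reveal nothing. Toy sketch.
import random

P = 2**61 - 1                        # prime field modulus (secret must be < P)

def split(secret, n, t):
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, t=3)  # 5 agents, any 3 suffice
print(reconstruct(shares[:3]))       # 123456789
print(reconstruct(shares[2:]))       # 123456789 (a different subset also works)
```

In a multi-agent setting, this means no single compromised node holds the full key: an attacker must breach at least t agents before any protected data is exposed.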
Although Web3 security technologies may seem distant to ordinary users, they are closely related to everyone's interests. In this challenging digital world, only by continuously strengthening security measures can we truly protect user rights.
As AI technology continues to approach human intelligence, we need correspondingly stronger defenses. The value of FHE lies not only in addressing today's security issues but in preparing for a far more powerful AI era. On the road to AGI, FHE is no longer optional but a prerequisite for the safe development of AI.