Fully Homomorphic Encryption: The Security Shield of the AI Era and the Key to AGI Development
As AI security issues grow more prominent, fully homomorphic encryption may offer a solution.
With the rapid development of artificial intelligence, Manus has achieved groundbreaking results on the GAIA benchmark, outperforming large language models of the same tier. Manus has demonstrated the ability to independently complete complex tasks, such as multinational business negotiations that draw on expertise across multiple fields. Compared with traditional systems, Manus holds significant advantages in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning.
However, Manus's progress has also sparked heated debate in the industry over the development path of AI: will the future bring a single, unified artificial general intelligence (AGI), or a multi-agent system (MAS) of specialized agents working collaboratively? This debate reflects the trade-off between efficiency and security in AI development. As a single agent approaches AGI, the opacity of its decision-making process increases risk; collaboration among multiple agents can spread that risk, but communication delays may cause critical decision windows to be missed.
The development of Manus has inadvertently magnified the inherent security risks of AI. In medical scenarios, it requires access to patients' sensitive genomic data; in financial negotiations, it may touch undisclosed corporate financial information. AI systems also face problems such as algorithmic bias and adversarial attacks. For instance, a recruiting system may produce unfair salary suggestions for specific groups, or a legal-document reviewer may assess terms for emerging industries with low accuracy. More seriously, hackers may distort an AI system's judgment during negotiations by injecting crafted audio signals.
In the face of these challenges, the industry is exploring various security solutions. Among them, fully homomorphic encryption (FHE) technology is considered a powerful tool for addressing security issues in the AI era. FHE allows computations to be performed on encrypted data, enabling the processing of sensitive information without the need for decryption.
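To make the core property concrete, here is a minimal sketch of "computing on ciphertexts" using the Paillier cryptosystem, which is only additively homomorphic; true FHE schemes (e.g. BGV or CKKS) support arbitrary computation and rely on dedicated libraries. All parameters below are toy values chosen for illustration and are far too small to be secure:

```python
import random
from math import gcd

# Toy Paillier key generation — tiny primes, illustration only.
p, q = 2357, 2551
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    # The "L function" from Paillier decryption: L(u) = (u - 1) / n.
    return (u - 1) // n

# mu = (L(g^lam mod n^2))^(-1) mod n; pow(x, -1, n) needs Python 3.8+.
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    # c = g^m * r^n mod n^2, with r random and coprime to n.
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: the server multiplies ciphertexts without ever
# seeing the plaintexts; the product decrypts to the sum.
c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))  # 42
```

The key point for the AI setting is that the party performing the computation (the model host) only ever handles `c1` and `c2`; the decryption key stays with the data owner.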
At the data level, FHE ensures that everything a user inputs (including biometric features, voice, and so on) is processed in encrypted form, so even the AI system itself cannot recover the original data. At the algorithm level, encrypted model training implemented with FHE keeps the AI's decision-making process from being observed. In multi-agent collaboration, threshold encryption prevents the compromise of a single node from leaking global data.
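The threshold idea above can be sketched with Shamir secret sharing, the building block underlying most threshold encryption schemes: no single share reveals anything, while any t of n shares reconstruct the secret (here a key). The (3, 5) parameters and field modulus below are illustrative assumptions:

```python
import random

# Toy (t, n) Shamir secret sharing over a prime field — a sketch of the
# threshold principle, not a full threshold decryption protocol.
P = 2**61 - 1  # Mersenne prime used as the field modulus

def split(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```

An attacker who compromises one or two nodes holds fewer than t shares and learns nothing about the key, which is exactly the property that prevents a single breached agent from causing global data leakage.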
Although Web3 security technologies may seem distant to ordinary users, they are crucial for protecting user interests. In the blockchain industry, some projects are already exploring security technologies such as decentralized identity and zero-trust security models. Compared with hotter narratives, however, security projects are often overlooked by speculative capital.
As AI technology continues to approach human intelligence levels, establishing a robust defense system has become increasingly important. Security technologies such as FHE not only address current issues but also lay the foundation for a more powerful AI era in the future. On the road to AGI, these security technologies are no longer optional but are necessary conditions to ensure the reliable operation of AI systems.