Understanding Jensen Huang's Computex Speech in One Article: This Is Not a Product Launch, This Is a 'Mobilization Order for the AI Industrial Revolution'
Written by: Zhang Yaqi
Source: Wall Street Journal
On May 19, 2025, NVIDIA founder and CEO Jensen Huang delivered a two-hour keynote speech at Computex 2025.
From AI infrastructure and chip platforms to enterprise AI, robots, and digital twins, Huang depicted the rise of the era of the AI Factory.
In the past, data centers served traditional applications; today's AI data centers are no longer mere "data centers" but AI factories: a new kind of smart factory that takes electricity as input and produces tokens as output.
"NVIDIA is no longer just a technology company; we are now an AI infrastructure company."
He emphasized that this is the third infrastructure revolution after electricity and the internet: intelligent infrastructure.
Major Chip Release: Grace Blackwell GB200 and NVLink Architecture
Jensen Huang unveiled the NVLink Spine—this is a core interconnect module weighing 70 pounds, featuring two miles of cabling and a bandwidth of 130 TB/s. He stated, "The data throughput of this system is larger than the entire internet!"
The GB200 Grace Blackwell super chip uses a dual-chip package and connects 72 GPUs, resembling a "virtual giant chip." It is built on the latest NVLink Spine architecture, with a single node equivalent to the performance of the 2018 Sierra supercomputer.
Jensen Huang stated:
"This is not a server, this is an AI factory. You put energy into it, and it gives you tokens."
The keynote also brought an upgrade path for the GB200 system: NVIDIA plans to launch the GB300 in Q3, which improves inference performance by 1.5x, increases HBM memory capacity by 1.5x, and doubles network bandwidth, while remaining physically compatible with the previous generation and moving to 100% liquid cooling.
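Taken at face value, the stated generational multipliers can be applied to any normalized baseline. The sketch below is illustrative only; the baseline values are placeholders, not official GB200 specifications:

```python
# Illustrative only: apply the GB300 multipliers quoted in the keynote
# (1.5x inference, 1.5x HBM capacity, 2x network bandwidth) to a
# normalized, hypothetical GB200 baseline.
gb200_baseline = {
    "inference_perf": 1.0,  # normalized inference throughput
    "hbm_capacity": 1.0,    # normalized HBM capacity
    "network_bw": 1.0,      # normalized network bandwidth
}

gb300_multipliers = {
    "inference_perf": 1.5,
    "hbm_capacity": 1.5,
    "network_bw": 2.0,
}

# Element-wise product gives the GB300's relative position.
gb300 = {k: gb200_baseline[k] * gb300_multipliers[k] for k in gb200_baseline}
print(gb300)
# {'inference_perf': 1.5, 'hbm_capacity': 1.5, 'network_bw': 2.0}
```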
NVLink Fusion: Open Chip Interconnection Ecosystem
The most eye-catching announcement was the NVLink Fusion program.
The NVLink Fusion architecture can seamlessly connect CPUs, ASICs, and TPUs from other vendors with NVIDIA GPUs. The technology provides NVLink chiplets and interface IP, enabling freely combined "semi-custom infrastructure."
In simple terms, customers can pair their own CPUs with NVIDIA's AI chips, or pair NVIDIA's CPUs with AI accelerators from other vendors.
Analysts point out that NVLink is one of the key technologies behind NVIDIA's dominance in AI workloads: it addresses the communication bottleneck between GPUs and CPUs in AI servers, one of the biggest obstacles to scaling, which directly affects peak performance and energy efficiency. Compared with standard PCIe interfaces, NVLink offers higher bandwidth and lower latency, with a bandwidth advantage of up to 14 times.
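As a rough illustration of what that bandwidth gap means in practice, the sketch below compares the ideal time to move a large model's weights over a PCIe Gen5 x16 link versus a per-GPU NVLink connection. The bandwidth figures (~128 GB/s and ~1,800 GB/s) and the 140 GB payload are illustrative assumptions, not numbers from the keynote:

```python
# Rough illustration of the NVLink-vs-PCIe bandwidth gap.
# Bandwidth figures are illustrative assumptions (roughly PCIe Gen5 x16
# bidirectional, and a current-generation per-GPU NVLink link).
PCIE_GBPS = 128      # GB/s over PCIe (assumed)
NVLINK_GBPS = 1800   # GB/s over NVLink (assumed)

def transfer_seconds(payload_gb: float, bandwidth_gbps: float) -> float:
    """Ideal, zero-overhead time to move payload_gb at bandwidth_gbps."""
    return payload_gb / bandwidth_gbps

# e.g. a 70B-parameter model in FP16 (2 bytes per parameter) ~= 140 GB
weights_gb = 140

pcie_s = transfer_seconds(weights_gb, PCIE_GBPS)
nvlink_s = transfer_seconds(weights_gb, NVLINK_GBPS)
print(f"PCIe:    {pcie_s:.2f} s")              # ~1.09 s
print(f"NVLink:  {nvlink_s:.2f} s")            # ~0.08 s
print(f"Speedup: {pcie_s / nvlink_s:.1f}x")    # ~14.1x
```

Under these assumed link speeds, the ratio works out to roughly the 14x advantage the analysis cites.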
NVLink Fusion lets companies such as Fujitsu and Qualcomm use the interface with their own CPUs, with NVLink functionality integrated into a chiplet adjacent to the compute package. NVIDIA has also attracted custom-silicon designers such as MediaTek, Marvell, and Alchip, enabling other kinds of custom AI accelerators to work alongside NVIDIA's Grace CPU.
Jensen Huang joked:
"Of course, using everything from NVIDIA is best; that would make me the happiest. But even if you use only a little bit of NVIDIA's stuff, I would still be very happy."
Personal Supercomputing Era: DGX Spark and DGX Station
Jensen Huang announced that DGX Spark, the personal AI computer first previewed at CES as Project DIGITS, has fully entered production and will ship in the coming weeks.
DGX Spark is aimed at AI researchers who want a supercomputer of their own, with partner manufacturers setting their own prices. "Everyone can have one by Christmas," Huang said.
"Today, everyone can have their own AI supercomputer, and... it can be plugged into the kitchen socket."
Enterprise AI Reconstruction: From Hardware to Agentic AI
Jensen Huang also announced the RTX Pro enterprise AI server: it supports traditional x86, hypervisor, Windows, and other IT workloads, can run graphical AI agents, and can even run Crysis.
Jensen Huang stated that Agentic AI is the future "digital employee." Digital customer service representatives, digital marketing managers, digital engineers, and more will become part of the corporate workforce. NVIDIA will provide full-stack AI Ops support and collaborate with CrowdStrike, Red Hat, DataRobot, and others to promote enterprise AI deployment.
"We will need new HR to manage these AI employees."
New AI Storage Architecture: NVIDIA AIQ + NeMo + GPU Storage Frontend
Jensen Huang said that AI no longer just queries SQL; it needs to understand the semantics of unstructured data. Future storage systems will have GPUs built in for searching, sorting, embedding, and indexing.
NVIDIA will also release NeMo, NeMo Retriever, and AIQ as an open-source "AI semantic retrieval" framework, and collaborate with Dell, Hitachi, IBM, NetApp, and VAST to deploy enterprise-grade platforms.
Robots will become the next "trillion-dollar industry"
Jensen Huang said NVIDIA is advancing robotic systems in parallel with the automotive industry on the Isaac GR00T platform, powered by a new processor called Jetson Thor that is purpose-built for robotics, covering everything from autonomous vehicles to humanoid systems. NVIDIA's Isaac operating system handles all neural-network processing, sensor processing, and data pipelines, drawing on pre-trained models developed by a dedicated robotics team to enhance system capabilities.
"In the AI era, to train robots... you first have to use AI to teach AI."
Huang also said NVIDIA is applying its AI models to autonomous vehicles, launching a global fleet in collaboration with Mercedes-Benz that uses NVIDIA's end-to-end autonomous-driving technology and will be deployed this year.
He believes robots will become the next trillion-dollar industry, though getting there will take enormous effort. NVIDIA's robotics division is capable of achieving it, he said, above all because of its ability to scale.
Launch of the Physical AI Engine Newton
Jensen Huang said NVIDIA has collaborated with DeepMind and Disney Research to develop Newton, which he called the world's most advanced physics engine, with open-sourcing planned for July.
According to the presentation, Newton fully supports GPU acceleration, is highly differentiable, runs faster than real time, and can learn effectively from experience. NVIDIA is integrating the physics engine into its Isaac simulator, bringing these robots "to life" in a realistic way.