MiniMax open-sources its first reasoning model: a DeepSeek rival at a compute cost of only about $530,000.
According to a Gate News bot message, MiniMax announced on June 17 that it will release major updates for five consecutive days. Today's first release is MiniMax-M1, its first open-source reasoning model.
According to the official announcement, MiniMax-M1 benchmarks against open-source models such as DeepSeek-R1 and Qwen3 and approaches the most advanced models overseas.
The official blog also noted that, thanks to two major technological innovations, MiniMax-M1's training was efficient "beyond expectations": the reinforcement learning phase was completed in just three weeks on 512 H800 GPUs, at a compute rental cost of only $534,700, an order of magnitude less than initially expected.
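As a rough sanity check on the reported figures, the sketch below derives the implied per-GPU-hour rental rate. It assumes the three-week figure is wall-clock time and that all 512 H800 GPUs were rented for the full duration; the announcement does not spell out either detail.

```python
# Back-of-the-envelope check on the reported training cost.
# Assumptions (not stated in the announcement): exactly 3 weeks of
# wall-clock time, all 512 GPUs rented for the entire period.
num_gpus = 512
hours = 3 * 7 * 24                    # 504 hours in three weeks
gpu_hours = num_gpus * hours          # 258,048 GPU-hours
total_cost_usd = 534_700              # figure quoted in the announcement

implied_rate = total_cost_usd / gpu_hours
print(f"GPU-hours: {gpu_hours:,}")
print(f"Implied H800 rental rate: ${implied_rate:.2f} per GPU-hour")
# ~ $2.07 per GPU-hour under these assumptions
```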
Source: Jinshi