Microsoft Open-Sources New Version of Phi-4: Inference Efficiency Rises 10x, Can Run on Laptops
Jin10 Data reported on July 10 that Microsoft had open-sourced the latest member of the Phi-4 family, Phi-4-mini-flash-reasoning, on its official website that morning. The mini-flash version continues the Phi-4 family's hallmark of small parameter counts with strong performance, and is designed for scenarios constrained by compute, memory, and latency: it can run on a single GPU, making it suitable for edge devices such as laptops and tablets. Compared with the previous version, mini-flash adopts SambaY, Microsoft's self-developed architecture, boosting inference efficiency by up to 10 times and cutting average latency by 2-3x, a significant improvement in overall inference performance.
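To illustrate the "runs on a single GPU or laptop" claim, below is a minimal sketch of loading and querying such a small model with the Hugging Face transformers library. The repo id "microsoft/Phi-4-mini-flash-reasoning" is an assumption based on the model name in the report; check Microsoft's official release for the actual identifier.

```python
# Minimal sketch: running a small reasoning model locally with transformers.
# The model id below is assumed from the announcement, not verified here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"  # assumed Hugging Face repo id
device = "cuda" if torch.cuda.is_available() else "cpu"  # single GPU, or CPU fallback

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

prompt = "Solve step by step: what is the sum of the first 20 odd numbers?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a laptop without a discrete GPU, the same snippet falls back to CPU; half-precision weights and the model's small parameter count are what make single-device inference practical.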