Chinese AI Finally Codes Itself Stable Enough to Not Immediately Crash
KEY POINTS
- DeepSeek, a Chinese AI company, published a new training method on January 22, 2026, promising better stability in large language models.
- Analysts Wei Sun and Lian Jye Su praised the work as a 'striking breakthrough' and a sign of China's growing confidence despite shortages of advanced chips.
- DeepSeek's next flagship model, R2, remains delayed, but the new training technique may power upcoming releases even as distribution challenges limit the company's global reach.
China's DeepSeek, led by founder Liang Wenfeng, is again schooling the tech world, this time with 'Manifold-Constrained Hyper-Connections' (mHC), a training technique designed to keep gargantuan language models from melting down as they scale. Published in January 2026, a year after the R1 'Sputnik moment' of early 2025 rocked markets, DeepSeek's latest paper is essentially about letting a model's layers share information internally without descending into chaos. Analysts Wei Sun and Lian Jye Su call it a 'striking breakthrough' with potential ripple effects across the industry, though Sun cautiously predicts no standalone R2 model anytime soon, pointing to shortages of advanced chips and Liang's reported reluctance to ship a model he isn't satisfied with. Meanwhile, distribution woes keep DeepSeek far less visible in the West than ChatGPT.
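The article does not spell out how mHC actually works, so the following is only a minimal, speculative sketch of the general idea behind hyper-connection-style residual designs: several parallel residual streams whose learnable mixing weights are projected onto a constrained set (here, a softmax simplex) so the residual signal cannot compound and blow up with depth. Every name, the stream count, and the simplex constraint below are assumptions for illustration, not DeepSeek's published formulation.

```python
# Illustrative sketch only -- NOT DeepSeek's mHC implementation.
# Assumption: "manifold-constrained hyper-connections" is read here as
# multiple residual streams with mixing weights constrained to sum to 1.
import torch
import torch.nn as nn


class ManifoldConstrainedHyperConnection(nn.Module):
    """Mixes n residual streams into one layer input, then redistributes the
    layer output back across the streams using constrained weights."""

    def __init__(self, d_model: int, n_streams: int = 4):
        super().__init__()
        self.n_streams = n_streams
        # Unconstrained logits; the constraint is applied in forward().
        self.read_logits = nn.Parameter(torch.zeros(n_streams))              # streams -> layer input
        self.write_logits = nn.Parameter(torch.zeros(n_streams, n_streams))  # streams -> streams

    def forward(self, streams: torch.Tensor, layer: nn.Module) -> torch.Tensor:
        # streams: (n_streams, batch, seq, d_model)
        read = torch.softmax(self.read_logits, dim=0)           # weights on the probability simplex
        write = torch.softmax(self.write_logits, dim=-1)        # each row sums to 1
        layer_in = torch.einsum("s,sbtd->btd", read, streams)   # convex mix of the streams
        layer_out = layer(layer_in)
        # Row-stochastic remixing keeps the residual signal bounded instead of
        # letting it grow layer by layer; the layer output is spread evenly.
        mixed = torch.einsum("ij,jbtd->ibtd", write, streams)
        return mixed + layer_out.unsqueeze(0) / self.n_streams


if __name__ == "__main__":
    d, n = 64, 4
    block = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
    hc = ManifoldConstrainedHyperConnection(d, n)
    streams = torch.randn(n, 2, 8, d)
    print(hc(streams, block).shape)  # torch.Size([4, 2, 8, 64])
```

The constraint here (softmax rows) is just one plausible way to pin mixing weights to a manifold and keep activations from exploding; the actual paper may use a different constraint entirely.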
Source: Businessinsider | Published: 1/2/2026 | Author: Lee Chong Ming