
Chinese artificial intelligence firm DeepSeek is bringing forward the launch of its R2 AI model following the success of R1, which prompted a US$1 trillion sell-off in stocks and motivated the company to shorten the R2 timeline. Initially slated for May, the launch may now come sooner, according to sources, although no exact date has been confirmed.
DeepSeek’s Mixture-of-Experts (MoE) and multi-head latent attention (MLA) techniques enable its AI models to rival those of OpenAI and Google while operating at 20 to 40 times lower cost than OpenAI’s equivalent models. R2 is expected to improve coding efficiency and multilingual reasoning, broadening its usability beyond English.
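For readers curious how MoE keeps compute costs down, the sketch below shows generic top-k expert routing in PyTorch: a router activates only a few small "expert" networks per token instead of the whole model. It is a minimal illustration of the general technique, not DeepSeek's actual implementation, and the class and parameter names (TopKMoE, n_experts, k) are invented for the example.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only,
# not DeepSeek's implementation). A router scores each expert per token and
# only the top-k experts run, so just a fraction of the parameters are
# activated on each forward pass, which is where the cost savings come from.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # one score per expert, per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to individual tokens
        tokens = x.reshape(-1, x.size(-1))
        scores = self.router(tokens)                   # (n_tokens, n_experts)
        weights, chosen = scores.topk(self.k, dim=-1)  # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e            # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(tokens[mask])
        return out.reshape_as(x)

# Usage: only k of n_experts experts run per token, not all of them.
layer = TopKMoE(d_model=64, d_ff=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```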
DeepSeek's meteoric ascent has caught the eye of China's leadership. Its founder, billionaire Liang Wenfeng, was summoned to a VIP meeting with Premier Li Qiang, signaling Beijing's endorsement. DeepSeek's models have been quickly adopted by 13 Chinese city governments, 10 state-owned energy companies, and tech titans such as Baidu, Tencent, and Lenovo.
As the US emphasizes AI supremacy, DeepSeek's success could accelerate Western regulatory oversight and export bans on AI chips. Citing privacy concerns, countries such as South Korea and Italy have already removed DeepSeek's apps from their national app stores.
Despite the US ban on exports of high-end Nvidia chips, DeepSeek's early investment in computing power has kept it at the forefront of AI innovation. The company's low-cost, high-performance AI models could redefine the global AI landscape, compelling competitors to reconsider their pricing and strategies.