DeepSeek has reportedly delayed the launch of its R2 model owing to challenges in training it on Huawei chips.
DeepSeek – which made waves in the industry with its V3 model in December and its R1 model in January – has not introduced major updates to its products in recent months, aside from two minor revisions.
While the market had expected DeepSeek to introduce a new foundation model within months of R1’s release, the Hangzhou-based start-up – founded by computer scientist Liang Wenfeng as a side project of his quantitative trading firm – has yet to announce a schedule for the launch of the R2 model.
DeepSeek did not respond to a request for comment on Wednesday.
Developing and training an advanced model is an expensive and complex task, requiring substantial computing resources and training data, as well as sophisticated algorithms. It took OpenAI two and a half years to release GPT-5 after launching GPT-4 in March 2023.