At its highest capacity, DeepSeek’s V4 ‘redefines the state-of-the-art for open models’, according to the company.

Chinese AI darling DeepSeek has launched its long-awaited V4 large language model (LLM) in preview, as speculation around a possible first funding round swirls.

The latest open-source launch comes more than a year after the start-up released R1, whose cost-effectiveness and performance sent Silicon Valley leaders into a flurry and ignited accusations of theft. R1 was trained using lower-capacity Nvidia chips.

The V4 series comes in two versions, a ‘Pro’ version with 49bn activated parameters and a ‘Flash’ version with 13bn activated parameters, both supporting a context length of 1m tokens.

At its maximum capacity, the V4-Pro-Max model “redefines the state-of-the-art for open models, outperforming its predecessors in core tasks”, DeepSeek said.