DeepSeek says data in pre-training stage is mainly collected from publicly available online information and authorised third-party data

In a document published on Monday, the Hangzhou-based start-up said it “has always prioritised AI security” and decided to make its disclosure to help people use its models, at a time when Beijing is ramping up oversight over the industry.

The company said data in the pre-training stage was “mainly” collected from publicly available online information as well as authorised third-party data, and that it did not intend to collect personal data.

DeepSeek said it applied automated filters to remove raw data containing “hate speech, pornography, violence, spam and potentially infringing content”. It also combined algorithmic detection with human review to identify “inherent statistical biases in large-scale data sets” and mitigate their impact on model values.
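Automated filtering of raw training data typically means scanning text against rules or classifiers before it enters the corpus. The sketch below is a deliberately simplified illustration of the idea, using a hypothetical keyword blocklist; it is not DeepSeek's actual filtering pipeline or criteria.

```python
import re

# Hypothetical blocklist standing in for whatever categories a real
# pipeline screens for (hate speech, spam, etc.).
BLOCKLIST = {"spamword", "slurexample"}

def is_clean(text):
    """Return False if the text contains any blocklisted token."""
    tokens = set(re.findall(r"\w+", text.lower()))
    return BLOCKLIST.isdisjoint(tokens)

raw = ["a normal sentence", "buy spamword now"]
kept = [t for t in raw if is_clean(t)]
print(kept)  # only the clean sentence survives
```

Real systems use trained classifiers rather than keyword lists, precisely because keyword matching over-blocks legitimate text and misses paraphrases, which is why DeepSeek pairs automated detection with human review.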

The company, founded by computer scientist Liang Wenfeng, said it was committed to reducing the “hallucinations” of its models through research and techniques such as retrieval-augmented generation, but added that hallucination remained an “unavoidable” problem.
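Retrieval-augmented generation, the technique DeepSeek cites, grounds a model's answer in passages fetched from a trusted corpus rather than in the model's memorised training data alone. A minimal sketch, assuming a toy keyword-overlap retriever and an illustrative prompt format (none of which reflects DeepSeek's implementation):

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query.
    Real systems use dense vector search instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved passages so the model answers from them,
    reducing reliance on possibly hallucinated parametric memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "DeepSeek was founded by Liang Wenfeng.",
    "Hangzhou is a city in Zhejiang province.",
    "Model answers can be grounded in retrieved text.",
]
prompt = build_prompt("Who founded DeepSeek?", docs)
print(prompt)
```

The resulting prompt carries the relevant passage inline, so the model can quote or paraphrase it instead of guessing, which is why the technique reduces but does not eliminate hallucination.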