DeepSeek, the Chinese AI chatbot topping App Store downloads, has scored poorly in NewsGuard’s latest accuracy assessment.
According to NewsGuard’s audit:
“[the chatbot] failed to provide accurate information about news and information topics 83 percent of the time, ranking it tied for 10th out of 11 in comparison to its leading Western competitors.”
Key Findings:
- 30% of responses contained false information
- 53% of responses provided non-answers to queries
- Only 17% of responses debunked false claims
- Its overall 83% fail rate was significantly worse than the industry average of 62%
Chinese Government Positioning
DeepSeek's responses show a notable pattern. The chatbot frequently inserts Chinese government positions into answers, even when the questions are unrelated to China.
For example, when asked about the situation in Syria, DeepSeek responded:
“China has always adhered to the principle of non-interference in the internal affairs of other countries, believing that the Syrian people have the wisdom and capability to handle their own affairs.”
Technical Limitations
Despite DeepSeek’s claims of matching OpenAI’s capabilities with just $5.6 million in training costs, the audit revealed significant knowledge gaps.
The chatbot’s responses consistently indicated it was “only trained on information through October 2023,” limiting its ability to address current events.
Misinformation Vulnerability
NewsGuard found that:
“DeepSeek was most vulnerable to repeating false claims when responding to malign actor prompts of the kind used by people seeking to use AI models to create and spread false claims.”
Of particular concern:
“Of the nine DeepSeek responses that contained false information, eight were in response to malign actor prompts, demonstrating how DeepSeek and other tools like it can easily be weaponized by bad actors to spread misinformation at scale.”
Industry Context
The assessment comes at a critical time in the AI race between China and the United States.
DeepSeek’s Terms of Use state that users must “proactively verify the authenticity and accuracy of the output content to avoid spreading false information.”
NewsGuard criticizes this policy as a "hands-off" approach that shifts responsibility for verifying accuracy from the developer to end users.
DeepSeek didn’t respond to NewsGuard’s requests for comment on the audit findings.
Going forward, DeepSeek will be included in NewsGuard's monthly AI audits, with its results anonymized alongside other chatbots to provide insight into industry-wide trends.
What This Means
While DeepSeek is attracting attention in the marketing world, its high fail rate shows it isn't a dependable source of accurate information.
Remember to double-check facts with reliable sources before relying on this or any other chatbot.
Featured Image: Below The Sky/Shutterstock