The dark side of DeepSeek R1? OpenAI o1 rival alleged to be biased, insecure, foulmouthed and unethical
Chinese AI model can be tricked into writing terrorist training manuals and offering advice on developing chemical or biological weapons, analysts claim

The launch of DeepSeek's R1 set off a "bomb" in global stock markets, wiping out as much as $1 trillion in value as the world started to wonder whether the America-led AI boom was all hype and no substance.
Now a study has raised concerns about the safety and security of the model, which is a cheap and high-performing rival to OpenAI's o1.
Red teaming research by Enkrypt AI, an AI security and compliance platform, has allegedly uncovered "serious ethical and security flaws" in DeepSeek’s technology.
Analysts claimed the Chinese model is biased, prone to generating insecure code and has a troubling tendency to produce toxic or harmful content, including threats, hate speech and material alleged to be illegal or explicit.
The model was also found to be "vulnerable" to manipulation. Researchers alleged it could be tricked into assisting with the creation of chemical and biological weapons, posing significant global security concerns.
Trump advisor and tech venture capitalist Marc Andreessen famously described the launch of R1 as "AI’s Sputnik moment" - although really it has more in common with the proliferation of dangerous weapons than it does the relatively peaceful race to the moon.
"The cat is out of the bag"

We asked Enkrypt founder Sahil Agarwal, PhD if it was now impossible to stop large language models (LLMs) from being misused by bad guys.
"For villains, the cat is out of the bag," Agarwal told Machine. "We are already witnessing numerous attacks fueled by such LLMs. The ‘hope’ is that as newer models with even greater capabilities are developed, the model providers will incorporate some ethical safeguards to ensure the safe and reliable use of AI."
The study claimed that R1 was significantly more likely to generate harmful content than its rivals. Enkrypt alleged it is:
- Three times more biased than Claude-3 Opus.
- Four times more vulnerable to generating insecure code than OpenAI’s o1.
- Four times more toxic than GPT-4o.
- Eleven times more likely to generate harmful output than OpenAI’s o1.
- Just over three times more likely to produce chemical, biological, radiological, and nuclear (CBRN) content than OpenAI’s o1 and Claude-3 Opus.
Sahil Agarwal, CEO of Enkrypt AI, continued: "DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored.
"While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought."
The model reportedly exhibited the following risks during testing:
- Bias and discrimination: 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. Failures like these could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- Harmful content and extremism: 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- Toxic language: The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
- Inadequate cybersecurity: 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s o1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- Chemical and biological threats: DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
Agarwal continued: "As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy.
"However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention."
Say goodbye to OpenAI?

Within 48 hours of the launch of DeepSeek's new R1 model, the security platform Netskope saw a 1,052% increase in DeepSeek users across its 3,500 customers, including a third of the Fortune 100.
"Interest in DeepSeek is rapidly trending up, but the challenge with this or any popular or emerging generative AI app is the same as it was two years ago with ChatGPT: the risk its misuse creates for organizations that haven't implemented advanced data security controls," said Ray Canzanese, Director of Netskope Threat Labs.
“Controls that block unapproved apps, use Data Loss Prevention (DLP) tools to control data movement into approved apps, and leverage real-time user coaching to empower people to make informed decisions when using GenAI apps are currently among the most popular tools for limiting the GenAI risk.”
Mike Britton, CIO of the AI security firm Abnormal Security, told Machine that although DeepSeek's claims of "remarkably low costs" are "causing a stir in the industry", organisations must be cautious when deploying the model.
"Without a detailed view into costs and metrics, there remains uncertainty about whether their approach is truly faster or cheaper, or how the model was even trained," he said. "While it’s making waves now, a major backlash could follow as the more regulated parts of the world may be hesitant, or outright unwilling, to engage with it.
"Right now, much of the concern around DeepSeek is how it might threaten the current AI market with a competitive, cheaper alternative. But what’s also concerning, especially for the general public, is its potential for misuse. Bad actors are already using popular generative AI tools to automate their attacks. If they can gain access to even faster and cheaper AI tools, it could enable them to carry out sophisticated attacks at an unprecedented scale."
Dan Shiebler, Head of Machine Learning at Abnormal, also told us: "Smaller open source LLMs have been close on the heels of larger closed source LLMs since Meta's first release of Llama. The pace of closed source model performance has seriously plateaued since GPT-4, and it's not surprising to see smaller open source models catching up.
"It is certainly exciting that this model may have been trained with a tiny fraction of the compute required to train comparable models. I'm looking forward to seeing other labs replicate this training methodology.
"One thing to keep in mind is that benchmarks can be misleading. Although llama3 was announced as beating GPT4 when it was first released, very few users would describe llama3 as near-GPT4 parity at the context window lengths required for enterprise application. It remains to be seen whether Deepseek R1 is able to displace OpenAI O1 in real world workflows."
DeepSeek and the future of open-source AI

Dr Andrew Bolster, senior research and development manager (data science) at Black Duck, said the release of DeepSeek "undeniably showcases the immense potential of open-source AI".
"Making such a powerful model available under an MIT license not only democratises access to cutting-edge technology but also fosters innovation and collaboration across the global AI community," he said. "However, DeepSeek’s rumoured use of OpenAI Chain of Thought data for its initial training highlights the importance of transparency and shared resources in advancing AI.
Bolster said it was critical that the underlying training and evaluation data remain open, along with the initial architecture and the resulting model weights (which have been described as the subconscious of AI models).
"DeepSeek's achievement in AI efficiency (leveraging a clever reinforcement learning-based multi-stage training approach rather than the current trend of using larger datasets for bigger models) signals a future where AI is accessible beyond the billionaire classes," he added.
"Open-source AI, with its transparency and collective development, often outpaces closed-source alternatives in terms of adaptability and trust. As more organizations recognise these benefits, we could indeed see a significant shift towards open-source AI, driving a new era of technological advancement.”
The wisdom of taking a cautious approach to GenAI

Darren Guccione, CEO and co-founder of Keeper Security, advised leaders to carefully think about the potential pitfalls of deploying DeepSeek.
"Leveraging AI platforms like DeepSeek may seem like a step forward, but organisations must carefully consider the risks – particularly when these platforms operate within regulatory environments where data access and oversight are less transparent," he said.
"Inputting sensitive company information into these systems could expose critical data to state-controlled surveillance or misuse, creating a Trojan Horse into an organization and all of its employees. This also has significant BYOD (Bring Your Own Device) security and user-level implications since employees might download this application on their personal device which may then be used to transact on the organization’s website, applications and systems. This risk could have an exponential impact on data privacy and security if not properly managed."
He advised establishing strict data classification policies to ensure sensitive information isn’t shared with untrusted platforms, leveraging solutions such as Privileged Access Management (PAM) to add another layer of security through granular access controls and comprehensive monitoring of privileged activity, and ensuring compliance with regulations such as sanctions and export controls.
"Maintaining visibility into a supplier's compliance is equally critical," he continued. "Ensuring vendors adhere to recognised security certifications, such as SOC 2 Type 2 and ISO 27001, demonstrates their commitment to robust security practices and regulatory compliance. These certifications provide reassurance that vendors uphold high standards of security, including adherence to international regulations.
"Fostering an informed and vigilant workplace can significantly reduce risk. Educating employees about the hidden risks of foreign platforms and emphasizing cybersecurity best practices allows organizations to navigate the evolving AI landscape safely."
For James Fisher, Chief Strategy Officer at data analytics firm Qlik, DeepSeek's R1 model challenges the long-held belief that creating advanced AI models requires "astronomical investment".
"Companies like Nvidia, heavily tied to the AI infrastructure boom, have already felt the impact with significant stock fluctuations," he said. “While many anticipated the eventual commoditisation of AI training, few predicted it would happen this quickly or disruptively. DeepSeek’s success signals that the barriers to entry for developing sophisticated AI are falling at an unprecedented rate.
"Initially, the implications for enterprises may be limited, as questions around security and trustworthiness will undoubtedly arise. However, history tells us that these developments won’t remain isolated. DeepSeek’s principles will likely inspire forks of their model and spark a wave of open-source AI initiatives. The real takeaway here isn’t just about DeepSeek—it’s about the larger trend it represents: open source as the winning formula for mainstream AI use cases.
“The shift has begun. The costs of running advanced AI models are dropping dramatically, levelling the competitive playing field. This, in turn, pushes AI into its next phase, away from the infrastructure-heavy focus of training and into Applied AI—the era of putting AI to work in practical, scalable ways.
“As AI moves into this new phase, one thing is clear: openness and interoperability will be as crucial for AI platforms as they’ve been for data sources and cloud environments. The ability for AI systems to integrate seamlessly and collaborate across ecosystems will determine their effectiveness and long-term value. This new reality demands a shift in priorities—from building massive, closed models to creating platforms that are flexible, open, and built for collaboration.”
Have you got a story or insights to share? Get in touch and let us know.