With the launch of DeepSeek-R1 (DeepSeek’s reasoning model), the AI world witnessed a seismic shift. DeepSeek skyrocketed past ChatGPT, taking the No. 1 spot on the free app download charts in the U.S. Almost immediately, U.S. stock markets reacted: Nvidia and other AI-linked stocks dipped sharply on January 27, as investors questioned whether the valuations of U.S. AI leaders were justified. Why? Because DeepSeek, a Chinese startup, managed to build a model rivaling OpenAI’s top-tier AI at a fraction of the cost, reportedly training it on roughly 2,000 Nvidia H800 chips¹.
DeepSeek’s explosion in popularity was just the beginning. China is now connecting DeepSeek to everything: from Baidu’s chatbot and Tencent’s Weixin messaging app to Huawei’s cloud services and even smart vehicles made by BYD and Geely². The result? A growing ecosystem of AI-powered Chinese tech, all tied into DeepSeek.
DeepSeek proudly touts its “open-weight” nature — meaning that its core model weights are publicly available — while claiming performance on par with OpenAI’s reasoning models (o1 and o3), despite significantly lower training costs. But here’s the big question: Is DeepSeek truly a breakthrough, or is all this hype overstated? Let’s dig in.
After capturing headlines and shaking up the tech ecosystem, the real measure of DeepSeek R1’s impact lies in its technical performance. Positioned as a direct competitor to OpenAI’s reasoning models — specifically, o1 and o3 — DeepSeek now faces the challenge of proving that its groundbreaking claims hold up under rigorous testing. So, let’s take a closer look at the benchmark data.

| Benchmark | DeepSeek R1³ | OpenAI o3-mini (high reasoning) |
| --- | --- | --- |
| AIME 2024 (Math) | 79.8% | 87.3% |
| GPQA Diamond (Pass@1) | 71.5% | 79.7% |
| Codeforces (ELO) | 2029 | 2130 |
| SWE-bench Verified (Resolved) | 49.2% | 49.3% |
| MMLU (Pass@1) | 90.8% | 86.9% |
| MATH-500 (Pass@1) | 97.3% | 97.9% |
| SimpleQA | 30.1% | 13.8% |

On math and coding, DeepSeek R1 is neck-and-neck with OpenAI’s o3-mini-high. OpenAI edges it out in AIME (math competition problems), GPQA Diamond (graduate-level science questions), and Codeforces (competitive programming rankings). Yet DeepSeek outperforms in areas like MMLU (broad knowledge-based reasoning) and SimpleQA (short factual recall).
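Several of the rows above report Pass@1, i.e. the fraction of problems solved by the model’s first sampled answer. For readers unfamiliar with the metric, here is a minimal sketch of the standard unbiased pass@k estimator; the sample counts in the example are hypothetical, not figures from either model’s evaluation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generated answers of which
    c are correct, is correct. For k=1 this reduces to c / n."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers: 16 sampled answers per problem, 13 of them correct.
print(pass_at_k(n=16, c=13, k=1))  # 0.8125 -> would be reported as 81.25% Pass@1
```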
So, does this mean OpenAI’s o3 is unequivocally better? Not really. Benchmarks aren’t everything, and in real-world use cases these differences might be negligible for most users. The reality is that both models are excellent reasoning AIs, and for the average user, whether on ChatGPT (o3-mini) or DeepSeek’s R1, the experience will likely feel very similar (the APIs are a separate matter).
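For developers who do go the API route, DeepSeek advertises an OpenAI-compatible endpoint (see the “Models & Pricing” entry in the references), which makes side-by-side comparison straightforward. Here is a minimal sketch; the base URL and model name are taken from DeepSeek’s public docs and may change, and the environment variable name is just a placeholder:

```python
# Minimal sketch of calling DeepSeek R1 through its OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # your DeepSeek API key
    base_url="https://api.deepseek.com",     # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model, per DeepSeek's docs
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)
print(response.choices[0].message.content)
```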
While these benchmarks show that DeepSeek R1 holds its own against OpenAI’s reasoning models — delivering comparable performance for everyday users — the numbers only tell part of the story. Beneath its impressive technical prowess lies a deeper, more complex reality: one where questions of data privacy, transparency, and state-imposed censorship come to the forefront. As we transition from the raw performance metrics to the ethical and political dimensions, it becomes clear that the true impact of DeepSeek isn’t just about its reasoning capabilities but also about how much freedom, trust, and openness you can actually expect from an AI system developed under stringent regulatory oversight.

Ethics and Transparency: DeepSeek’s Open Secret

DeepSeek’s rapid ascent hasn’t come without controversy around transparency and ethics. The company likes to brag that its flagship model, R1, is an “open-weight” AI – essentially making the trained weights of its 671-billion-parameter network freely available. In theory, this openness lets researchers audit the system’s inner workings and build on the model, unlike the closed, black-box offerings from OpenAI and Anthropic. However, there’s a catch: running DeepSeek via the official cloud app is a different story. The service is hosted on Chinese soil, and even DeepSeek’s own documentation warns it’s “not recommended” for sensitive use cases, since all queries are subject to China’s data regulations (i.e. authorities could peek at your prompts). In short, DeepSeek preaches transparency with open weights, yet its deployment raises serious privacy flags.
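That distinction is worth making concrete. The released weights (including much smaller distilled variants) are published on Hugging Face, so the model can be run entirely on your own hardware, with nothing sent to the hosted service. Below is a minimal sketch, assuming the distilled checkpoint deepseek-ai/DeepSeek-R1-Distill-Qwen-7B (the full 671-billion-parameter R1 needs multi-GPU infrastructure) and the Hugging Face transformers library:

```python
# Minimal local-inference sketch using the published open weights.
# Assumes the distilled checkpoint below and a GPU with enough memory
# for a ~7B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # distilled open-weight variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

prompt = "Explain the difference between breadth-first and depth-first search."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Nothing in this snippet leaves your machine, which is the practical upside of open weights over the hosted app.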
This paradox feeds into a broader innovation-versus-privacy trade-off debate. On one hand, DeepSeek R1 delivers eye-popping performance – reportedly on par with OpenAI’s top models – at just a fraction of the cost. Such efficiency is hugely appealing: it suggests a future where advanced AI is affordable and not monopolized by Big Tech. But skeptics ask: at what price? DeepSeek’s success, while innovative, may have been turbocharged by practices that ride roughshod over user privacy. The startup gained global attention for its low-cost breakthroughs; however, cybersecurity experts quickly raised alarms that this meteoric rise might be built on aggressive data harvesting, particularly in its website and mobile apps. In other words, the same features that make DeepSeek innovative (cheap, open AI for everyone) could be coming at the expense of privacy protections we’ve come to expect from more tightly regulated players.
Those privacy concerns turned out to be more than just hypothetical. Investigations in South Korea uncovered that DeepSeek’s mobile app was covertly funneling user data to ByteDance – the Chinese company behind TikTok – without user consent¹⁰¹¹. In layman’s terms, every time someone opened the app, information was silently transmitted to ByteDance’s servers. This is a big deal given ByteDance’s fraught reputation in the West. TikTok’s parent has been under intense scrutiny, labeled an “unacceptable security risk” by U.S. officials amid debates over banning the app¹². Now imagine an AI chatbot doing something similar: DeepSeek users unknowingly feeding data into the very pipelines that American regulators worry China could exploit. It’s a privacy nightmare that cuts against DeepSeek’s narrative of openness.
The cybersecurity red flags don’t stop there. A detailed report by Security Scorecard delved into DeepSeek’s app and found direct integrations with ByteDance’s analytics infrastructure¹³. In plain English, the app wasn’t just sending crash reports or innocuous stats – it appeared capable of logging user behavior and device details and piping them off to ByteDance domains, some linked to Chinese state-owned entities¹³. This kind of data exposure understandably set off alarm bells in governments around the world. South Korea’s regulators yanked DeepSeek from local app stores and warned the public to steer clear¹⁴. South Korea also joined Australia and Taiwan in banning DeepSeek on all government devices¹⁵¹⁶, while other countries began scrutinizing the app as a potential national security threat. Even in the U.S., lawmakers have floated an outright ban of DeepSeek on government networks, fearing the chatbot could act as a Trojan horse for foreign surveillance. The irony is rich: an app that promised to break information free from traditional censorship might itself be ensnaring users’ information in unseen ways.
Stung by these revelations, DeepSeek has scrambled to repair trust through transparency. The company recently announced a week-long initiative to open-source five additional code repositories, framing it as a good-faith effort to address privacy concerns and invite community scrutiny¹⁷¹⁸. In a public pledge, the DeepSeek team proclaimed it would share its progress “with full transparency,” essentially throwing back the curtains on more of its tech stack¹⁹. This #OpenSourceWeek campaign is clearly aimed at calming the storm: it’s strategically timed to counter the mounting criticism and reassure skeptics that nothing nefarious is lurking under the hood²⁰. Open-sourcing core components could allow independent experts to audit how DeepSeek handles data and security – a welcome move for an outfit mired in secrecy accusations. Still, critics note that open code isn’t a silver bullet. Releasing some source code doesn’t automatically prove that the running services (the app, the cloud API) will stop phoning home to ByteDance or fully respect user privacy. DeepSeek’s late push for transparency is a step in the right direction, but it feels a bit like an apology tour after the fact.
Ultimately, this saga highlights a fundamental AI ethics dilemma: Can we trust “open” AI that operates in a closed environment? DeepSeek’s story is a cautionary tale of conflicting values – openness vs. oversight, innovation vs. privacy. The company delivered a groundbreaking model that lowers barriers to advanced AI, but in doing so it may have bulldozed over important privacy norms. Such trade-offs force us to reckon with tough questions about accountability in AI. Transparency isn’t just about open-sourcing code; it’s about being honest with users and handling their data ethically. As the DeepSeek case shows, an AI model can have open weights and still keep plenty of secrets. This prompts an overdue conversation about how much we’re willing to sacrifice privacy for progress, and whether we even need to make that choice at all.

Censorship: The Elephant in the Room

If the issues of privacy and transparency haven’t scared you off yet, let’s address (yet another) elephant in the room: censorship in DeepSeek R1. As amazing as this model is technically, many users (especially outside China) have noticed that something is off once they venture into certain topics: DeepSeek R1 (the web interface) has strong built-in filters and biases on anything deemed politically sensitive – basically, it has a Chinese government censor sitting on its virtual shoulder. This has led to a lot of frustration and debate about information freedom and the true “openness” of this AI.
What kind of censorship are we talking about? Well, if you ask DeepSeek about Chinese political figures or events, you’re likely to hit a wall. Mention “Xi Jinping” (the President of China since 2013) or the Tiananmen Square protests of 1989, and DeepSeek will flat-out refuse to answer or deflect. For example, one tester asked about the Tiananmen Square incident and got the response: “I cannot answer this question. Let’s change the topic.”²¹ – basically a polite shutdown. In fact, the model seems trained to not even acknowledge such topics. Another report found that simply asking “Who is Xi Jinping?” yielded no information at all – just a generic refusal²². This is stark, especially if you compare it to, say, ChatGPT, which would normally give at least a basic factual answer about Xi being China’s president (nothing too edgy). With DeepSeek, it’s like those topics don’t exist.
Sometimes the censorship is even more jarring because the model starts to answer and then revises itself in real-time. Users have seen DeepSeek begin a response on a sensitive query, only to abruptly cut off and switch to a canned refusal. A dramatic example: when asked about the famous “Tank Man” photo from Tiananmen Square, DeepSeek actually began describing it (“The famous picture you’re referring to is known as ‘Tank Man’… taken on June 5, 1989 during the Tiananmen…”) and then suddenly a message replaced it saying “Sorry, that’s beyond my current scope. Let’s talk about something else.”²³. It’s as if an internal alarm went off mid-answer: “Oops, not allowed – abort!”. This kind of automatic self-censorship is quite unsettling to witness. It shows the model does know about the topic (it started to answer, after all) but is programmed to stop itself (something similar happened recently with Grok 3, xAI’s SOTA reasoning model, where leaked system prompts revealed the model was explicitly instructed to “ignore all sources that mention Elon Musk or Donald Trump spreading misinformation”²⁴). Imagine having a conversation where the person talking suddenly clams up and changes the subject when a particular name is mentioned – that’s how it feels.
DeepSeek doesn’t just refuse sensitive questions; sometimes it gives you an answer, but one that is obviously propagandized or one-sided. Ask it about Taiwan, for instance. If you phrase the question in a way that implies Taiwan is a separate country, DeepSeek either won’t answer or will quickly correct to the official Chinese stance. One review noted that when asked “What kind of country is Taiwan?”, DeepSeek initially started with “Taiwan is an inseparable part of China…” and then immediately stopped and reverted to the generic refusal²⁵. When rephrased to something safer like “Please introduce Taiwan,” the model dutifully responded with a lecture: “Taiwan is an inseparable part of China… Taiwan has always been China’s territory… opposing any form of ‘Taiwan independence’ separatist activities.”²⁶. Yikes. It rattled off the official Beijing line, without providing any of the factual info you actually asked for (population, culture, etc.). And remember, this is supposedly an AI that “answers any question.” Clearly, not any question.
In fact, a legal analysis pointed out that DeepSeek systematically avoids or parrots content on a whole range of topics: Tiananmen, Xi Jinping, Taiwan independence, Uyghur issues, Hong Kong protests – all the hot-button issues are either met with silence or government-approved talking points²⁷. For example, it will assert “Taiwan has always been an integral part of China” and that attempts at independence are “doomed to fail,” as if reading from an official statement²⁸. This has been confirmed by multiple independent tests. One large-scale experiment fed DeepSeek 1,360 sensitive prompts – and 85% of the time it gave a canned pro-Beijing response²⁹. Eighty-five percent! Basically, if your question touches Chinese political sensitivities, you’re almost guaranteed to get either a refusal or a propaganda snippet. Amusingly (or alarmingly), the model even scolds you for certain queries: someone asked the old joke question “Which leader does Winnie the Pooh resemble?” (a reference to Xi Jinping), and DeepSeek responded with a mini-lecture about focusing on the great achievements of Chinese Communist Party leaders and not using inappropriate metaphors³⁰. It then waxed poetic about the Party’s people-centered philosophy. You can’t make this stuff up – the AI’s alignment to authority is that strong.
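To make that 85% figure concrete: audits like the one described above are typically scripted, sending each sensitive prompt to the model and flagging responses that match stock refusal phrases. Here is a hedged sketch of how the scoring step could look over already-collected responses; the file name, field names, and refusal phrases are illustrative assumptions, not details from the cited experiment:

```python
# Hedged sketch of scoring a refusal audit like the 1,360-prompt experiment above:
# given already-collected model responses, flag the ones matching stock refusal
# phrases and report the share flagged.
import json

REFUSAL_MARKERS = [
    "i cannot answer this question",
    "beyond my current scope",
    "let's change the topic",
]

def is_canned_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

# Hypothetical input: one JSON object per line, e.g. {"prompt": "...", "response": "..."}
with open("responses.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

refused = sum(is_canned_refusal(r["response"]) for r in records)
print(f"{refused}/{len(records)} prompts ({refused / len(records):.1%}) drew a canned refusal")
```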
So, what does this all mean for AI and information freedom? Here’s the opinionated take: DeepSeek R1’s censorship is a huge red flag and a disappointment. On one hand, we get it – the model is developed in China, under Chinese regulations that mandate adherence to “core socialist values” and censorship of certain content. It’s not surprising the developers put guardrails to avoid angering authorities. But the result is an AI that cannot be fully trusted to provide unbiased information. For users outside of China (and even within China), this is a deal-breaker in certain use cases. An AI model that refuses to acknowledge historical facts or filters political discourse raises serious ethical and practical concerns. It’s effectively rewriting reality by omission. Imagine a student asking about Tiananmen Square for a history project and being told, “Sorry, can’t discuss that” – that’s chilling, right?
The contrast with models like OpenAI’s o1 or o3 is stark here. OpenAI’s models have their own content filters (they won’t tell you how to build a bomb or spew hate speech, for example; not that such content can’t be coaxed out with some red teaming or “liberation,” looking at you Pliny, but that’s a separate issue), but they generally won’t lie about or evade factual political questions. DeepSeek’s style of censorship is more heavy-handed and obviously state-aligned. This suggests that as AI development spreads globally, the values and restrictions of each model’s country of origin will heavily influence its output. It’s a reminder that “open source” doesn’t automatically mean “open-minded.” DeepSeek R1 is open in code, but not open in content.
For the AI community, this is a contentious issue. Some argue that it’s better to have such a powerful model available (even if censored) because savvy users or open-source forks can remove those limits. In fact, almost immediately, folks began finding ways to jailbreak or fine-tune away the censorship (there’s even a tongue-in-cheek variant called “R1 1776”³¹ floating around that aims to give uncensored answers about those banned topics). Others point out that by acquiescing to censorship, we risk creating a fragmented internet of AIs – where Chinese models only tell the Chinese government’s version of reality, Western models tell another version, etc. That’s a disturbing prospect for the idea of a shared base of knowledge.
From a user perspective, the censorship in DeepSeek means you have to know its blind spots. If you’re using it for everyday fun or technical tasks, you’ll hardly notice this issue. But the moment you stray into certain news or political questions, be prepared for either frustration or a biased answer. It’s like having a super-genius friend who is incredibly helpful unless you ask about their family’s secrets – then they clam up or change the subject. You just learn not to go there with them.
In my view, it’s the biggest quirk/controversy that holds DeepSeek R1 back from being a fully trusted “GPT killer.” The technology is there; the willingness to inform freely is not. It’s an elephant in the room that we must talk about, because it sets a precedent. As AI becomes more powerful, who controls its knowledge and viewpoint? DeepSeek R1 shows one answer: the state and developers can heavily dictate an AI’s boundaries, and that should make us all pause. We want AI to augment human knowledge, not selectively muzzle it.
Bottom line: DeepSeek R1 is an amazing accomplishment – a model that can reason like a top-tier AI and is openly accessible – but it also comes with serious strings attached. Depending on what you ask and how you use it, it can be either a brilliant free assistant or a frustratingly censored mouthpiece. As a general audience user, you should be aware of both its strengths and its quirks. Enjoy the advanced reasoning and coding help it offers, marvel at the fact that it often can match the mighty GPT, but also keep your eyes open to the ways it might subtly (or not so subtly) steer away from certain topics.
In the end, DeepSeek R1 sparks an important conversation about AI freedom. Do we accept a powerful tool that’s partly gagged, or demand one that speaks the truth even if it’s inconvenient? It’s a tricky question. For now, if you need help with a math proof or coding script, DeepSeek R1 is your genius friend. If you need an unbiased account of a sensitive historical event – well, you might want to double-check with another source. The fact we have to say that reminds us that AI isn’t just about scores and benchmarks; it’s also about trust and principles. And that debate is only getting started.

References

AI News. “DeepSeek to open-source AGI research amid privacy concerns.” 2025.
Asia Times. “China connects everything to DeepSeek in nationwide plan.” 2025.
BBC. “DeepSeek ‘shared user data’ with TikTok owner ByteDance.” 2025.
DeepSeek AI. “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.” 2025.
DeepSeek API Docs. “Models & Pricing.” 2025.
Gigazine. “‘DeepSeek-R1’ refuses to answer 85% of sensitive topics about China, but points out that restrictions can be easily circumvented.” 2025.
MalwareBytes. “DeepSeek found to be sharing user data with TikTok parent company ByteDance.” 2025.
Medium. “Uncensored DeepSeek-R1: Perplexity-ai R1–1776.” 2025.
OpenAI. “OpenAI o3-mini.” 2025.
OpenAI API Docs. “API Pricing.” 2025.
Pearl Cohen. “Use of Deepseek AI Raises Censorship Concerns.” 2025.
People News. “DeepSeek Self-Censorship: Testing Reveals Even Xi Jinping’s Name is Off Limits.” 2025.
PetaPixel. “DeepSeek AI Refuses to Answer Questions About Tiananmen Square ‘Tank Man’ Photo.” 2025.
SecurityScorecard. “A Deep Peek at DeepSeek.” 2025.
TechCrunch. “Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk.” 2025.
Times of AI. “DeepSeek to Open-Source AGI Research Amid Privacy Concerns.” 2025.
The Arabian Post. “DeepSeek to Release Five Open-Source Repositories, Enhancing AI Transparency.” 2025.
The Law Reporters. “DeepSeek AI App Sparks Global Concerns Over Data Privacy.” 2025.
Zartis Team. “DeepSeek-R1: The Open-Source AI Challenger Rewriting the Rules of Enterprise AI.” 2025.