
Microsoft Officially Bans DeepSeek App Over Data Privacy and Propaganda Concerns


Microsoft has imposed a ban on the use of the popular DeepSeek application among its employees, citing serious concerns around data privacy, national security, and the potential for Chinese state influence through AI-generated content. The announcement was made by Brad Smith, Microsoft’s Vice Chairman and President, during a U.S. Senate hearing focused on artificial intelligence, cybersecurity, and foreign influence in the tech sector.

“At Microsoft, we don’t allow our employees to use the DeepSeek app,” Smith told lawmakers. “We’ve chosen not to list it in our app store either.”

This marks the first time the tech giant has publicly confirmed internal restrictions on DeepSeek, an AI tool from a Chinese startup that has gained international attention for its open-source model and user-friendly chatbot interface. Available on desktop and mobile, DeepSeek surged in popularity following the release of its "R1" model, which some compared to early versions of ChatGPT.

Security Concerns and Allegations of Propaganda

Microsoft’s ban stems from a confluence of concerns. Central among them is the fact that DeepSeek stores user data on servers located in China. This alarms privacy and national security experts because Chinese data laws require companies to cooperate with the country’s intelligence agencies upon request. Under that legal framework, user information — including prompts, conversations, and potentially sensitive business data — could be accessed by Chinese authorities.

Smith also pointed out a second, more ideological concern: that DeepSeek may be a vector for subtle Chinese propaganda. “There’s a real risk that answers generated by DeepSeek could be influenced by Chinese state narratives,” Smith warned.

This accusation echoes concerns already raised by watchdog groups and digital rights advocates. DeepSeek’s privacy policy openly states that its platform censors content deemed “sensitive” by the Chinese government. Topics such as the Tiananmen Square massacre, pro-democracy movements, and the situation in Xinjiang are typically filtered, if not outright blocked, on Chinese platforms. If these biases are baked into the AI’s training data, they could influence responses even when the platform is accessed abroad.

A Paradox: Azure Supports DeepSeek’s AI Model

Despite these concerns, Microsoft has paradoxically hosted DeepSeek’s R1 model on its Azure cloud service. The model was added to Azure shortly after it went viral in early 2025. This move drew some criticism, with observers questioning why Microsoft would support a tool it deemed unsafe for its own employees.

The key distinction, according to Smith, lies in the difference between hosting a model and endorsing an app. “DeepSeek is open source,” Smith noted. “That means developers can download the model, run it on their own secure servers, and use it without sending any data back to China.”

In short, while DeepSeek’s app is banned internally at Microsoft due to concerns over its default infrastructure and data flow, the AI model itself is still available to customers who want to deploy it in a more secure context.

Even so, experts warn that data privacy is just one piece of the puzzle. “Running the model locally doesn’t automatically mitigate risks like propaganda, unsafe code generation, or hallucinations,” said one AI ethics researcher. “Security isn’t just about where the data goes — it’s also about what the model does.”

Microsoft’s Modifications to the DeepSeek Model

In the Senate hearing, Smith revealed that Microsoft’s internal team had been able to go “inside” DeepSeek’s model and make changes to address what he described as “harmful side effects.” He did not elaborate on the specifics of those changes, and Microsoft declined to provide further details to TechCrunch, referring instead to Smith’s original remarks.

When the DeepSeek R1 model was added to Azure, Microsoft emphasized that it had undergone “rigorous red teaming and safety evaluations” — processes designed to test AI systems for bias, security vulnerabilities, and misinformation risks. However, without transparency about how Microsoft altered the model or what the red-teaming process uncovered, some critics remain skeptical.

Competitive Tensions and Platform Policy

While DeepSeek is seen by some as a competitor to Microsoft’s own Copilot — the AI assistant integrated with Bing and other Microsoft products — the ban on the DeepSeek app is not part of a blanket policy against AI chat apps. For instance, Perplexity, another AI search engine and chatbot tool, is currently available in the Microsoft Store. However, a search for Google’s Gemini or even its Chrome browser in the store returns no results — a subtle but notable reflection of the fierce competition between the tech giants.

This selective inclusion raises questions about how Microsoft curates its app ecosystem — and whether competitive advantage plays a role alongside security and privacy considerations. Smith emphasized that in DeepSeek’s case, the decision was purely based on security risks and potential propaganda influence, not business rivalry.

The Bigger Picture: AI, Sovereignty, and Risk

Microsoft’s stance on DeepSeek comes at a time of heightened scrutiny around AI tools developed in geopolitical hotspots. As the world’s leading tech companies race to integrate generative AI into consumer products, concerns about who controls the data, how models are trained, and what content is generated have taken center stage.

The U.S. government has already issued multiple advisories regarding the use of Chinese-made software, especially in critical industries. With AI models now playing a role in everything from customer support to medical research, decisions like Microsoft’s could influence broader corporate and government policies.

Conclusion

Microsoft’s internal ban on the DeepSeek app marks a significant moment in the intersection of technology, geopolitics, and corporate policy. By drawing a hard line against software it believes poses national security and ethical risks, Microsoft is positioning itself not just as a tech innovator, but as a gatekeeper of responsible AI usage.

As more international AI models enter the global marketplace, the choices made by platform providers — about what to include, what to alter, and what to exclude — will shape the future of how artificial intelligence is developed, accessed, and governed.
