Imagine waking up one morning to find a new AI model dominating the charts—one that rivals ChatGPT, but at a fraction of the cost. That’s exactly what happened with DeepSeek AI, a Chinese-developed AI assistant that surged in popularity almost overnight. With over a million downloads in its first week and a top spot on the Apple App Store charts, DeepSeek is more than just another AI app—it’s a potential cybersecurity and national security concern.
DeepSeek isn’t just making headlines for its performance. The LMG Security Research Team recently analyzed DeepSeek and shared their findings in our weekly Cyberside Chats podcast and video, so click either of those links to hear the full conversation. In this post, we’ll dig into the serious questions about DeepSeek AI’s data privacy, corporate security, and AI supply chain risks. Should organizations embrace this low-cost AI, or is the cybersecurity risk too great? Let’s break it down.
Why DeepSeek AI Is a Game-Changer
DeepSeek AI isn’t just popular, it’s disruptive. Here’s why:
It’s Powerful – Early reviews suggest DeepSeek’s quality rivals or even exceeds that of ChatGPT and Grok.
It’s Unlimited and Free – Unlike most AI services, DeepSeek offers free access with unlimited queries, making it attractive for businesses and individuals alike.
With this level of adoption and impact, cybersecurity professionals must take the risks of DeepSeek AI seriously and plan accordingly.
The Cybersecurity Risks of DeepSeek AI
As with any disruptive technology, DeepSeek brings substantial risks, especially for businesses:
Data Stored in China
When you use DeepSeek, where does your data go? The answer: China. DeepSeek’s privacy policy says, “We store the information we collect in secure servers located in the People’s Republic of China.” This raises immediate concerns about data privacy, regulatory compliance, and national security. If your employees are entering sensitive data into DeepSeek, it may be accessible to foreign entities. “What are you going to do when you find out that someone has uploaded your meeting transcripts or your sensitive source code? This happened to Samsung,” stated Sherri Davidoff, founder of LMG Security. “Is that a data breach for you? Does China now have your sensitive intellectual property?”
The Shadow AI Problem
Even if your organization bans DeepSeek, that doesn’t mean employees won’t use it. Shadow AI—the unauthorized use of AI tools—means employees may install DeepSeek on personal devices, potentially exposing corporate data. If your employees enter sensitive intellectual property, you risk handing that data over to the Chinese government. “There are doctors and nurses who are in a hurry, trying to get a medical record transferred, so they’ll just send it off to their personal Gmail account to look at on their phone while working with a patient,” stated Matt Durrin, LMG Security’s director of training and research. “Now you have a HIPAA compliance problem and a potential security breach if that doctor or nurse’s phone is using AI. This information could now be part of the AI database, which is a breach risk, and potentially be used in the AI’s responses to questions.” We see this happen all too often. With AI features automatically appearing in everything from your email client to your PDF reader, do you now have a data breach on your hands?
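One practical first step against shadow AI is simply looking for it in your egress traffic. Below is a minimal sketch that scans web proxy logs for requests to unapproved AI services. The CSV log format (`timestamp,user,domain`) and the domain list are illustrative assumptions; adapt both to your own proxy or firewall logging.

```python
import csv
import io

# Hypothetical blocklist of AI services your policy does not approve.
UNAPPROVED_AI_DOMAINS = {"deepseek.com"}

def find_shadow_ai_hits(log_text):
    """Return (user, domain) pairs for log rows that touch an
    unapproved AI domain. Expects CSV rows: timestamp,user,domain."""
    hits = []
    for row in csv.reader(io.StringIO(log_text)):
        if len(row) != 3:
            continue  # skip malformed rows
        timestamp, user, domain = row
        # Match the domain itself or any subdomain of it; suffix
        # matching with a leading dot avoids lookalikes such as
        # "deepseek.com.evil.example".
        if any(domain == d or domain.endswith("." + d)
               for d in UNAPPROVED_AI_DOMAINS):
            hits.append((user, domain))
    return hits

sample_log = (
    "2025-02-03T09:14:22,jsmith,chat.deepseek.com\n"
    "2025-02-03T09:15:01,adoe,intranet.example.com\n"
)
print(find_shadow_ai_hits(sample_log))  # -> [('jsmith', 'chat.deepseek.com')]
```

A report like this won’t stop determined users on personal devices, but it gives security teams early visibility into policy violations on the corporate network and concrete data for the employee education discussed below.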
AI-Powered Hacking
Researchers have already demonstrated that DeepSeek can be tricked into generating malware, phishing emails, and hacking instructions through prompt injection attacks. “We tested DeepSeek in the LMG Security lab and were able to get it to make phishing emails for us and create a handbook on how to design and execute a phishing campaign,” Durrin stated.
Password Reuse and Credential Leaks
Researchers report that DeepSeek has already been breached, exposing one million sensitive records, even though it only became a popular download last week. If users reuse passwords across services, future credential leaks could be even more damaging. The current industry sentiment is that DeepSeek’s security is weaker than that of many of today’s popular AI platforms, and it is relatively easy to jailbreak.
Supply Chain Vulnerabilities
Vendors and partners may already be using AI tools like DeepSeek without disclosing it. If your business relies on third-party vendors for critical functions, their AI choices could introduce unseen risks. “Even if your organization says, ‘We’re not going to use DeepSeek AI,’ what are your suppliers doing?” Davidoff continued, “Are they using AI tools that you haven’t vetted?” Read on for advice and watch our 5-minute video on how to adapt to AI security risks.
What Your Organization Must Do Now
Cybersecurity professionals can’t afford to ignore DeepSeek. Here’s what you should do immediately:
Decide on your AI policy now and communicate it clearly. Define which AI tools are approved for use and consider geographic concerns, including potential regulatory restrictions. If you already have an AI policy, update it to address these risks. (Check out LMG’s AI Readiness Checklist for more information.)
Educate leadership and employees on the risks of AI tools like DeepSeek. Ensure that executives understand the implications, and train employees about secure AI usage and the dangers of shadow AI.
Vet your vendors to determine which AI tools they use and where they store data. Update security questionnaires and contracts to ensure vendors handling sensitive data comply with your AI-related policies.
Update your incident response plan to include AI-related risks. Ensure that your organization is prepared to handle unauthorized data exposure and AI-generated cyber threats. Consider running tabletop exercises to test your response.
Stay informed because AI security risks are evolving rapidly. Keep monitoring developments and adjust your security policies accordingly.
Final Thoughts
“DeepSeek AI is more than just another chatbot—it’s a wake-up call for cybersecurity professionals,” Davidoff shared. “While it offers impressive capabilities at a low price, the risks of data leaks, Chinese acquisition of sensitive information, AI-powered cyberattacks, and supply chain vulnerabilities cannot be ignored.” You should immediately assess your AI policies, educate employees, and reinforce security measures to avoid disclosing valuable IP.
Want to learn more about adapting your cybersecurity strategy for AI threats? Check out LMG Security’s AI Readiness Checklist for essential cybersecurity updates.
We hope you find this information helpful! If you need help developing AI or cybersecurity policies and procedures, technical testing, training, or cybersecurity consulting support, please contact us. Our expert team is ready to help!