By Sherri Davidoff   /   Mar 6th, 2025

AI Readiness: The Top Cybersecurity Control of Q1 2025

The cybersecurity landscape has never been more volatile. AI-driven tools are revolutionizing security defenses, but they’re also exposing organizations to unprecedented risks. Just look at the recent DeepSeek data leak—a Chinese AI firm left over one million sensitive records exposed due to poor security controls. Meanwhile, deepfake technology is advancing at breakneck speed, enabling cybercriminals to generate hyper-realistic videos for phishing and social engineering attacks. Organizations worldwide are struggling to keep up. That’s why at LMG Security, we’ve selected AI Readiness as the top cybersecurity control of Q1 2025. AI Readiness is no longer a futuristic concept—it’s a business-critical priority.

Why AI Readiness Is the Top Cybersecurity Control of Q1 2025

Organizations are rushing to integrate AI into their operations. According to Netskope’s Cloud and Threat Report 2025, 94% of organizations now use generative AI apps, up from 81% a year ago. Yet security practices are not keeping pace. AI not only adds efficiency to business processes; it can also dramatically improve security by enabling automation, sharpening threat detection, and streamlining security operations.

At the same time, cybercriminals are leveraging AI for increasingly sophisticated attacks. Hackers are using AI to generate state-of-the-art malware, automate social engineering attacks, and exploit vulnerabilities faster than ever. Here are some of the biggest risks organizations face:

  • Data leaks from AI tools: Many AI applications store input data, which the provider may access or repurpose. Be cautious when entering sensitive information into AI-powered platforms (see the redaction sketch after this list).
  • AI-powered phishing scams: Hackers are using deepfake technology to create realistic videos of executives or public figures, tricking victims into sharing credentials.
  • Malicious AI models: Attackers can train AI models to generate harmful outputs or manipulate legitimate AI applications.
  • Rapid exploit generation: AI tools can analyze leaked or stolen source code and find vulnerabilities faster than human researchers.
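
To reduce the risk of the first item above, some organizations scrub obviously sensitive values from text before it ever reaches a third-party AI service. Here is a minimal, illustrative Python sketch; the patterns and labels are hypothetical starting points, not a complete data loss prevention solution:

```python
import re

# Hypothetical patterns -- tune these for your own sensitive data types
# (employee IDs, API keys, customer numbers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder tokens
    before the text is sent to any third-party AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this ticket: customer jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize this ticket: customer [EMAIL REDACTED], SSN [SSN REDACTED].
```

Simple pattern matching like this won't catch everything, which is why it belongs alongside, not instead of, the policy and governance controls discussed below.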

The DeepSeek Data Leak: A Wake-Up Call

A recent AI-related security breach underscores the urgent need for AI Readiness. On January 29, 2025, DeepSeek, a Chinese AI firm, suffered a major breach that exposed more than one million sensitive records, including chat logs, API keys, and internal operational data.

Cybersecurity researchers at Wiz Research discovered that DeepSeek had left a publicly accessible ClickHouse database open without authentication—a fundamental security failure. This incident highlights a growing concern: AI companies aren’t perfect, and AI models themselves have flaws. Even sophisticated AI-driven firms can make critical security mistakes, exposing vast amounts of sensitive data.
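
For context, ClickHouse exposes an HTTP interface (port 8123 by default) that accepts SQL via a `query` parameter, so an unauthenticated deployment can be spotted with a single request. The sketch below is a minimal illustration for auditing your own infrastructure; the hostname is a hypothetical placeholder, and you should only probe systems you are authorized to test:

```python
import requests

def clickhouse_exposed(host: str, port: int = 8123, timeout: int = 5) -> bool:
    """Return True if a ClickHouse HTTP endpoint answers a query
    without any credentials -- the misconfiguration behind the DeepSeek leak."""
    try:
        # ClickHouse's HTTP interface accepts SQL via the `query` parameter.
        resp = requests.get(
            f"http://{host}:{port}/",
            params={"query": "SHOW DATABASES"},
            timeout=timeout,
        )
        # An authenticated deployment returns an error status instead of results.
        return resp.status_code == 200 and bool(resp.text.strip())
    except requests.RequestException:
        return False

# Hypothetical host -- only scan systems you own or are authorized to test.
if clickhouse_exposed("analytics.internal.example.com"):
    print("WARNING: ClickHouse is reachable without authentication!")
```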

Beyond direct breaches, data uploaded into AI models can also be used by providers for training or other purposes. This is a particular concern with the DeepSeek AI tool, since Chinese companies have long been linked to cyber espionage and malware distribution. Organizations must be aware of these risks and adopt strict data governance policies to prevent unintended exposure.

How to Achieve AI Readiness

AI Readiness means having clear policies, security controls, and proactive defenses in place to mitigate AI-driven risks. Download LMG’s checklist, “Adapting to AI Risks: Essential Cybersecurity Program Updates,” for a full list of next steps. Here are our top recommendations:

  1. Implement AI Security Policies

A Darktrace survey found that 95% of organizations are discussing AI security policies, but only 45% have actually implemented them. It’s critical to establish policies covering:

    • AI model security: How AI models are protected from tampering.
    • Data governance: What data can be used to train AI models or submitted to third-party tools (see the sketch after this list).
    • Access control: Who can interact with AI systems and under what conditions.
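
As a simple illustration of the data governance point above, a policy can be expressed as an allow-list that maps data classifications to approved AI tools and is checked before anything is submitted. The tool names and classification tiers below are hypothetical placeholders, not recommendations:

```python
# Hypothetical policy table: which data classifications may be sent to which AI tools.
APPROVED_TOOLS = {
    "public":       {"enterprise-copilot", "public-chatbot"},
    "internal":     {"enterprise-copilot"},
    "confidential": set(),  # never leaves the organization
}

def may_submit(classification: str, tool: str) -> bool:
    """Gate AI submissions on the data's classification level."""
    return tool in APPROVED_TOOLS.get(classification, set())

assert may_submit("internal", "enterprise-copilot")
assert not may_submit("confidential", "public-chatbot")
```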
  2. Continuous Vulnerability Management

As AI rapidly identifies new vulnerabilities, organizations must continuously test and update their security posture. This includes:

    • Routine vulnerability scanning: Regularly scan your network for vulnerabilities and missed software updates, and remediate findings promptly.
    • AI-assisted threat detection: Using AI to monitor and predict attacks.
    • Automated patching: Ensuring vulnerabilities are fixed immediately.

(See our previous LMG Security Top Controls post on continuous vulnerability management.)
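
One lightweight way to support continuous monitoring is to poll NIST’s public NVD API for new high-severity CVEs that match products in your software inventory. The sketch below is illustrative, not a replacement for a full vulnerability scanner; note that the unauthenticated NVD API is rate-limited, and the keyword is a hypothetical example:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_high_severity_cves(keyword: str, max_results: int = 20) -> list[str]:
    """Query NIST's public NVD API for CVEs matching a product keyword and
    return the IDs of those with a CVSS v3.1 base score of 7.0 or higher."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
            if metric["cvssData"]["baseScore"] >= 7.0:
                findings.append(cve["id"])
                break
    return findings

# Hypothetical example -- in practice, feed this from your software inventory.
for cve_id in recent_high_severity_cves("clickhouse"):
    print(f"Review and patch: {cve_id}")
```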

  3. Conduct Regular Penetration Testing

Penetration testing is essential for identifying vulnerabilities that automated tools often miss. While AI-enhanced security tools can detect many threats, they have limitations—it can take weeks or even months for new vulnerabilities to become detectable by automated scanners. Additionally, many security gaps, such as business logic flaws, require human expertise to uncover. Our penetration testers often find that stringing together smaller security gaps achieves full system compromise—exactly the kind of finding automated testing misses. Regular penetration testing helps organizations stay ahead of emerging threats and validates AI-generated security reports.

  4. Monitor AI Supply Chain Risks

AI vendors often process sensitive data. Carefully vet third-party AI providers and require transparency about:

    • How AI models are trained
    • Where data is stored
    • What security measures are in place

In addition, monitor your other vendors’ use of AI. Understand which AI tools your third-party suppliers leverage, and know whether they use them to process your sensitive data. Our team regularly creates customized vendor security vetting policies and provides platform recommendations to help you reduce your risk.
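
A vendor vetting policy can also be captured in code. The sketch below shows one hypothetical way to record the three transparency questions above and flag vendors that need follow-up; the field names and the example vendor are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    """Hypothetical vetting record -- the fields mirror the questions above."""
    name: str
    trains_on_customer_data: bool          # How are AI models trained?
    data_stored_in_approved_region: bool   # Where is data stored?
    soc2_or_equivalent_audit: bool         # What security measures are in place?

    def acceptable(self) -> bool:
        return (not self.trains_on_customer_data
                and self.data_stored_in_approved_region
                and self.soc2_or_equivalent_audit)

vendor = AIVendorAssessment(
    name="ExampleAI",
    trains_on_customer_data=True,
    data_stored_in_approved_region=True,
    soc2_or_equivalent_audit=True,
)
if not vendor.acceptable():
    print(f"{vendor.name}: requires contract changes or an alternative provider")
```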

Final Takeaways: Engage in AI Readiness Now

AI Readiness is not optional—it’s an urgent necessity. The rapid rise of AI-driven cyber threats requires organizations to act now by establishing strict AI security policies, training employees on AI-powered cyber threats, implementing continuous vulnerability management, and carefully monitoring AI supply chain risks. The cybersecurity landscape is evolving at AI speed.

Please contact us if you need help updating your cybersecurity policies and procedures. Our expert team provides both broad cybersecurity best practices policy recommendations and more focused AI policy support.

About the Author

Sherri Davidoff

Sherri Davidoff is the Founder of LMG Security and the author of three books, including “Ransomware and Cyber Extortion” and “Data Breaches: Crisis and Opportunity.” As a recognized expert in cybersecurity, she has been called a “security badass” by the New York Times. Sherri is a regular instructor at the renowned Black Hat trainings and a faculty member at the Pacific Coast Banking School. She is also the co-author of “Network Forensics: Tracking Hackers Through Cyberspace” (Prentice Hall, 2012), and has been featured as the protagonist in the book “Breaking and Entering: The Extraordinary Story of a Hacker Called ‘Alien.’” Sherri is a GIAC-certified forensic examiner (GCFA) and penetration tester (GPEN) and received her degree in Computer Science and Electrical Engineering from MIT.
