Nine AI Security Policy Changes You Need to Make Today
AI is transforming how businesses operate—and how they’re attacked. Organizations that fail to adapt their cybersecurity policies risk being blindsided by AI-powered threats or mishandling AI-driven opportunities. An effective response doesn’t require a single “AI policy” but rather a comprehensive review of all cybersecurity policies to account for AI’s widespread impact. As we’ve seen from recent cases of AI-enabled phishing and misinformation, the stakes have never been higher. Let’s explore the top AI security policy updates your organization should implement today to safeguard against emerging risks and embrace AI responsibly.
Nine AI Security Policy Changes You Need to Make Immediately
1. Bring Your Own Device (BYOD) and Remote Work Policies
AI tools on personal devices introduce new vulnerabilities, particularly in remote work environments. These tools can capture sensitive data, creating pathways for cybercriminals. For example, recent reports have shown that AI-enabled personal assistants on smartphones can inadvertently store sensitive work information, leading to unintentional data leaks.
Policy Recommendations:
- Enforce stricter controls for AI-enabled personal devices accessing corporate networks.
- Mandate regular security assessments for personal devices used for work purposes.
- Prohibit the use of unauthorized AI applications in remote work settings (a sample automated check follows this list).
- Develop a framework for monitoring AI applications without infringing on employee privacy.
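To make the prohibition on unauthorized AI apps enforceable rather than aspirational, inventory checks can be automated. The Python sketch below is a minimal illustration, not a production control: it assumes a hypothetical MDM inventory export (`mdm_app_inventory.csv` with `device_id`, `bundle_id`, and `app_name` columns) and example allowlist entries, all of which you would replace with your own MDM's export format and your approved-tool list.

```python
import csv

# Hypothetical allowlist of approved AI application bundle IDs; in practice
# this would come from your MDM or policy repository.
APPROVED_AI_APPS = {
    "com.example.approved-assistant",
    "com.example.enterprise-copilot",
}

# Keywords that suggest an app is AI-enabled; tune these for your environment.
AI_KEYWORDS = ("ai", "gpt", "assistant", "copilot", "chatbot")

def flag_unapproved_ai_apps(inventory_csv_path: str) -> list[dict]:
    """Return inventory rows for AI-looking apps not on the allowlist.

    Assumes a CSV export with 'device_id', 'bundle_id', and 'app_name'
    columns; adjust the field names to match your MDM's actual export.
    """
    flagged = []
    with open(inventory_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["app_name"].lower()
            looks_like_ai = any(k in name for k in AI_KEYWORDS)
            if looks_like_ai and row["bundle_id"] not in APPROVED_AI_APPS:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_unapproved_ai_apps("mdm_app_inventory.csv"):
        print(f"{row['device_id']}: unapproved AI app {row['app_name']}")
```

Note that this approach inspects only installed-app metadata, not employee content, which helps keep monitoring inside the privacy boundaries the policy calls for.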
2. Acceptable Use Policies (AUP)
Employees often use personal AI applications to streamline their tasks, but these tools can inadvertently expose sensitive data or create compliance issues. For example, an employee using an unapproved AI tool for document generation may unintentionally upload proprietary information to an insecure cloud service.
Policy Recommendations:
- Clearly define which AI tools are permissible, ideally as a machine-readable allowlist (see the sketch after this list).
- Educate employees on the risks of unsanctioned AI tools, such as data leakage or security breaches.
- Include examples of real-world consequences to drive home the importance of compliance.
- Regularly review and update the policy as new AI tools emerge.
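One way to keep "permissible AI tools" unambiguous is to publish the allowlist as data that both humans and egress controls can consume. Here is a minimal sketch of that idea; the domains are made-up examples, and real enforcement would live in your secure web gateway or egress proxy rather than a standalone script.

```python
from urllib.parse import urlparse

# Hypothetical registry of sanctioned AI tools; in practice, publish this
# alongside the written AUP so the policy and the control stay in sync.
APPROVED_AI_DOMAINS = {
    "approved-ai.example.com",
    "enterprise-llm.example.net",
}

def is_sanctioned_ai_destination(url: str) -> bool:
    """Return True if the URL points at an approved AI service.

    A real deployment would enforce this at a web gateway or proxy;
    this function just demonstrates the policy-as-data idea.
    """
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS or any(
        host.endswith("." + d) for d in APPROVED_AI_DOMAINS
    )

print(is_sanctioned_ai_destination("https://approved-ai.example.com/chat"))  # True
print(is_sanctioned_ai_destination("https://random-gpt-tool.example.org"))   # False
```

Keeping the allowlist in one versioned file means the written AUP and the technical control can never silently drift apart.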
3. Vendor and Third-Party Management Policies
Vendors increasingly rely on AI systems, introducing potential vulnerabilities into your supply chain, and inadequate vetting can expose your organization to significant risk. Supplier breaches are a top cybersecurity threat and can set off chain-reaction breaches: if your supplier is breached, you, your customers, and even your customers’ customers may be at risk. Check our third-party risk management blog for more advice or visit our third-party risk management policy development page.
Policy Recommendations:
- Update vendor vetting processes to assess AI usage and security measures (a sample scoring approach is sketched after this list).
- Revise contracts to include clauses mandating AI security best practices and compliance with relevant regulations.
- Regularly audit vendors’ AI systems for security risks and ethical concerns.
- Incorporate termination clauses for vendors failing to meet AI-related standards.
- Engage in collaborative security exercises with key vendors to ensure alignment on AI risks.
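To make AI-aware vendor vetting repeatable, questionnaire answers can be scored consistently across vendors. The sketch below shows one possible structure; the questions, weights, and passing threshold are illustrative assumptions to adapt to your own risk appetite, not an established standard.

```python
from dataclasses import dataclass

# Hypothetical yes/no vetting questions; weights reflect your risk appetite.
AI_VETTING_QUESTIONS = {
    "documents_ai_use": 3,         # Does the vendor disclose where AI is used?
    "trains_on_customer_data": 5,  # Is customer data excluded from model training?
    "has_ai_incident_process": 4,  # Is there an AI-specific incident process?
    "allows_security_audits": 3,   # Will they permit audits of AI systems?
}

@dataclass
class VendorAssessment:
    name: str
    answers: dict[str, bool]

    def score(self) -> int:
        # Sum the weights of every question the vendor satisfies.
        return sum(w for q, w in AI_VETTING_QUESTIONS.items() if self.answers.get(q))

    def passes(self, threshold: int = 12) -> bool:
        return self.score() >= threshold

vendor = VendorAssessment(
    name="Acme Analytics",
    answers={"documents_ai_use": True, "trains_on_customer_data": True,
             "has_ai_incident_process": False, "allows_security_audits": True},
)
print(vendor.name, vendor.score(), "PASS" if vendor.passes() else "FAIL")
```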
4. Incident Response (IR) Plans
AI-driven threats, such as deepfake phishing or AI-enhanced ransomware, demand tailored responses. For example, in 2024, scammers used deepfake voice-cloning technology to convince an employee at a multinational company to transfer over $25 million in a sophisticated wire fraud scam. AI is also driving a surge in the volume of cyberattacks, with Amazon reporting that “it is seeing hundreds of millions more possible cyber threats across the web each day than it did earlier this year” as a result of AI. Organizations need to be prepared to handle a higher volume and new types of incidents as attackers continue to evolve. Updating your IR plan to include AI is essential, and IR plan development is one of the most valuable places to bring in experts to create a comprehensive plan that reflects today’s best practices.
Policy Recommendations:
- Expand IR plans to include AI-specific attack scenarios (an example checklist structure follows this list).
- Train teams to handle AI-driven incidents, such as countering deepfake impersonations.
- Conduct tabletop exercises focused on AI threats to identify gaps in your response strategy.
- Invest in tools capable of detecting AI-generated content in real-time.
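One lightweight way to bake AI scenarios into an IR plan is to encode first-response steps as data your team can drill against in tabletop exercises. The incident types and steps below are illustrative placeholders, not a complete playbook; adapt them to your environment and your retained IR experts' guidance.

```python
# Hypothetical mapping of AI-specific incident types to first-response steps.
# Treat this as a starting checklist to adapt, not a complete playbook.
AI_INCIDENT_PLAYBOOKS = {
    "deepfake_impersonation": [
        "Verify the request out-of-band via a known-good phone number",
        "Freeze any pending transactions initiated by the request",
        "Preserve the audio/video artifact for forensics",
    ],
    "ai_phishing_campaign": [
        "Pull the message from all mailboxes",
        "Reset credentials for any user who interacted with it",
        "Alert staff that the lure may be highly personalized",
    ],
    "model_data_leak": [
        "Suspend the affected AI integration",
        "Identify what data the model could access",
        "Assess regulatory notification obligations",
    ],
}

def first_response(incident_type: str) -> list[str]:
    """Return the checklist for an AI incident type, or an escalation cue."""
    return AI_INCIDENT_PLAYBOOKS.get(
        incident_type, ["Unrecognized AI incident type: escalate to IR lead"]
    )

for step in first_response("deepfake_impersonation"):
    print("-", step)
```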
5. Public Relations Plans
AI-enabled threats like deepfakes and disinformation can severely damage an organization’s reputation. LMG’s team recently launched a new tabletop exercise scenario in which an AI-generated video falsely implicates a corporation in unethical practices, causing a temporary stock dip and public relations chaos. When misinformation or reputational attacks arise, rapid and strategic communication is crucial, and AI’s ability to amplify false information can make these situations escalate faster than ever before.
Policy Recommendations:
- Create a communication plan to address AI-generated misinformation.
- Develop strategies to identify and counter AI-driven disinformation campaigns.
- Establish protocols to mitigate reputational damage from deepfakes.
- Partner with media monitoring services to track and address AI-generated misinformation (a sample triage filter follows this list).
- Build relationships with trusted news outlets to combat false narratives swiftly.
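As a starting point for the media-monitoring recommendation above, even a simple keyword triage over brand mentions can surface deepfake-related chatter for the communications team. This sketch assumes a hypothetical mention feed (a list of dicts with `text` and `source` fields); a real integration would consume your monitoring vendor's actual export format.

```python
# Keywords suggesting an AI-driven reputational attack; tune per organization.
MISINFORMATION_SIGNALS = ("deepfake", "fake video", "ai-generated", "fabricated")

def triage_mentions(mentions: list[dict], brand: str) -> list[dict]:
    """Flag brand mentions that pair your name with misinformation signals.

    Assumes each mention is a dict with 'text' and 'source' keys, e.g.
    rows from a media-monitoring export (this format is hypothetical).
    """
    brand = brand.lower()
    return [
        m for m in mentions
        if brand in m["text"].lower()
        and any(s in m["text"].lower() for s in MISINFORMATION_SIGNALS)
    ]

mentions = [
    {"text": "Leaked deepfake video shows ExampleCorp executive...", "source": "forum"},
    {"text": "ExampleCorp announces quarterly results", "source": "newswire"},
]
for hit in triage_mentions(mentions, "ExampleCorp"):
    print("Escalate to comms team:", hit["source"], "-", hit["text"][:60])
```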
6. Data Classification and Handling Policies
AI systems can infer sensitive insights from seemingly innocuous data, escalating the need for robust data management. For instance, an AI system trained on unclassified data can inadvertently generate profiles that reveal confidential business strategies.
Policy Recommendations:
- Revise data classification frameworks to address risks from AI processing.
- Implement stricter access controls for datasets used in AI models (a sample check that enforces classification ceilings follows this list).
- Define clear handling requirements for sensitive AI-generated data.
- Monitor data usage within AI systems to prevent misuse or overreach.
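One concrete control is to validate dataset classifications before any AI training job runs. The sketch below assumes a simple four-level labeling scheme and a per-model classification "ceiling"; both are illustrative conventions to replace with your own classification framework.

```python
# Classification levels in ascending sensitivity; the labels are illustrative.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def validate_training_data(datasets: dict[str, str], ceiling: str) -> list[str]:
    """Return dataset names whose classification exceeds the approved ceiling.

    Hypothetical rule: an AI training job may only consume data at or
    below the sensitivity ceiling approved for that model.
    """
    max_level = LEVELS[ceiling]
    return [name for name, label in datasets.items() if LEVELS[label] > max_level]

datasets = {
    "support_tickets": "internal",
    "mna_strategy_docs": "restricted",
    "public_faq": "public",
}
violations = validate_training_data(datasets, ceiling="internal")
print("Blocked from training:", violations)  # ['mna_strategy_docs']
```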
7. Access Control and Identity Policies
AI-powered impersonation attacks, such as deepfake-driven identity theft, require enhanced security measures. Experts warn that AI can be used to bypass biometric authentication in mobile applications, and security firm KnowBe4 reported a real-world scam in which North Korean hackers used AI to enhance their appearance in job interviews in order to get hired and conduct an insider attack. When our team conducts policy and procedure development for organizations, access control and acceptable use policies are among the most frequently requested services.
Policy Recommendations:
- Adopt multi-factor authentication (MFA) that is resistant to deepfake attacks.
- Strengthen privilege management to limit access to AI systems.
- Regularly monitor and audit access logs for AI systems (a simple review heuristic is sketched after this list).
- Implement biometric authentication measures where feasible to enhance identity verification.
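Regular audit-log review can start with simple heuristics. The sketch below flags off-hours access to AI systems for human review; the log record format and the business-hours window are assumptions to adapt to what your AI platform actually emits and to your own policy.

```python
from datetime import datetime

# Hypothetical log record format: (timestamp_iso, user, ai_system, action).
# Adapt the parsing to whatever your AI platform actually emits.
BUSINESS_HOURS = range(7, 19)  # 07:00-18:59 local; adjust to your policy

def flag_off_hours_access(records: list[tuple[str, str, str, str]]) -> list[tuple]:
    """Flag AI-system access outside business hours for human review.

    This is one simple heuristic; production monitoring would also watch
    privilege changes, unusual volumes, and unfamiliar source locations.
    """
    flagged = []
    for ts, user, system, action in records:
        if datetime.fromisoformat(ts).hour not in BUSINESS_HOURS:
            flagged.append((ts, user, system, action))
    return flagged

logs = [
    ("2025-03-03T14:12:00", "jdoe", "llm-gateway", "prompt"),
    ("2025-03-04T02:47:00", "jdoe", "llm-gateway", "export_history"),
]
for entry in flag_off_hours_access(logs):
    print("Review:", entry)
```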
8. Data Retention and Destruction Policies
AI models often retain learned insights even after the original data is deleted, creating potential compliance risks. Moreover, AI researchers demonstrated at last year’s Black Hat USA conference that even Microsoft’s Copilot could be “confused” into oversharing sensitive information.
Policy Recommendations:
- Define retention periods for datasets used in AI training (a sample expiry check follows this list).
- Ensure compliance with data deletion requirements.
- Develop procedures for retraining or retiring AI models as needed.
- Create a framework for auditing AI systems to verify compliance with retention policies.
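Retention periods only help if something actually checks them. This sketch computes which AI training datasets have outlived their retention window so that dependent models can be queued for retraining or retirement; the categories, periods, and dataset fields are illustrative assumptions.

```python
from datetime import date, timedelta

# Hypothetical retention policy: days each data category may be kept.
RETENTION_DAYS = {"customer_records": 730, "chat_transcripts": 365}

def expired_datasets(datasets: list[dict], today: date) -> list[dict]:
    """Return datasets past their retention window.

    Each dataset dict is assumed to carry 'name', 'category', and
    'collected' (a date). Models trained on expired datasets should be
    queued for retraining or retirement per policy.
    """
    out = []
    for ds in datasets:
        limit = timedelta(days=RETENTION_DAYS[ds["category"]])
        if today - ds["collected"] > limit:
            out.append(ds)
    return out

datasets = [
    {"name": "q1_chats", "category": "chat_transcripts", "collected": date(2023, 1, 15)},
    {"name": "crm_2024", "category": "customer_records", "collected": date(2024, 6, 1)},
]
for ds in expired_datasets(datasets, date.today()):
    print(f"Expired: {ds['name']} -- flag dependent models for retraining review")
```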
9. Training and Awareness Policies
Educating employees about AI-specific risks is critical to maintaining a strong security posture. For example, there has been a recent spike in “hyper-personalized phishing scams” targeting corporate executives, largely driven by attackers’ use of AI tools. Employees in a wide range of roles need to stay alert for increasingly personal and targeted attacks.
Policy Recommendations:
- Train employees to recognize AI-related threats, such as deepfake phishing.
- Provide regular updates on emerging AI risks and mitigation strategies.
- Foster a culture of ethical and secure AI usage across the organization.
- Leverage interactive training sessions, such as workshops and simulated attacks, to engage employees.
The Next Step
AI has redefined the cybersecurity landscape, presenting both unprecedented risks and transformative opportunities. Organizations must act now by reviewing their entire suite of policies, not just creating a standalone AI policy. By updating your AI security policy framework across all operational areas, you can protect your organization and harness AI’s potential responsibly.
Take proactive steps today to strengthen your policies and build a resilient foundation for the future. Review your AI security policy framework, involve your team, and ensure that your organization is prepared to meet the challenges and opportunities of AI head-on.
Please contact us if you need help creating policies and procedures, training your team, or testing your network. Our expert team is happy to help!