The Top 3 AI Scams and How to Protect Your Organization
The rapid increase in AI scams is alarming: attackers are leveraging sophisticated technologies like deepfakes, voice cloning, and AI-driven phishing to launch increasingly convincing attacks. In fact, Deloitte predicts that generative AI could push fraud losses to $40 billion a year by 2027. With generative AI making cyberattacks harder to detect and dramatically faster to execute, organizations face AI-assisted attacks at an unprecedented rate. Let’s dive into today’s most common AI-assisted attacks and how you can reduce your organization’s risk of becoming a victim.
The Top 3 AI Scams
“Evil AI” can help attackers with everything from writing malware and finding software vulnerabilities (watch our video on WormGPT for details) to generating step-by-step directions for launching cyberattacks, complete with startlingly realistic phishing copy and graphics for email and websites. Here are the top three methods:
- AI-Enhanced Phishing Attacks: Phishing has been a staple scam for over two decades, but AI has taken it to new heights. Using AI-powered tools, criminals can now create polished, realistic phishing emails and smishing (SMS phishing) attacks that appear nearly identical to authentic communications from trusted organizations. Generative AI produces messages with flawless grammar and official-looking graphics; WormGPT, for example, lets hackers craft realistic emails and even generates HTML code to make phishing emails appear as genuine Microsoft or Google login prompts. These advancements remove the once-reliable red flags, like spelling errors and unusual formatting, that many users relied on to identify scams. You also can’t always trust SSL certificates for websites these days; watch our short video on how hackers are abusing these certificates. (One domain-level signal you can still check automatically is DMARC; see the second sketch after this list.)
- Deepfake Video and Audio Manipulation Attacks: It’s now easy for attackers to create highly realistic deepfakes. Earlier this year, Ferrari narrowly avoided a deepfake scam in which attackers posed as the CEO on WhatsApp. The voice sounded slightly monotone, so a quick-thinking employee asked the caller about an earlier conversation and exposed the scam. Ferrari was lucky, but another multinational organization lost $25 million when an unsuspecting finance employee fell victim to a deepfake video conference call and transferred the money to the attackers. Watch our video to see real examples of deepfake videos.
AI scams involving voice cloning are also surging, especially in business email compromise (BEC), a type of fraud in which criminals use compromised or spoofed corporate email accounts to redirect funds or harvest sensitive information. Because attackers launch and track their communications from within a victim’s mailbox, they can now follow up with convincing voice calls. These tactics are particularly dangerous when combined with spoofed caller ID and social engineering.
Since attackers can create a convincing voice clone from only a few seconds of audio, they are using this tactic in a wide range of scams. From calls impersonating a relative who urgently needs bail money to voice-cloned follow-up calls in BEC attacks, these scams are happening now and are on the rise.
- Data Analysis and Productization: Attackers also use AI to analyze hacked email accounts and craft tailored BEC messages that blend seamlessly into the victim’s existing threads. Once a hacker has access to a company’s email system, AI can study conversation patterns and quickly identify high-value targets. Attackers can also download any accessible files and use AI to rapidly search, analyze, and package the contents into data products, like a list of names with credit card or Social Security numbers, to sell on the dark web. In short, AI lets attackers maximize the value of a breach by quickly sorting through internal data, email, and more; the first sketch below shows how little code this kind of harvesting requires.
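To make the “productization” point concrete, here is a minimal TypeScript sketch of the kind of pattern matching that turns a stolen mailbox into a sellable list. The regexes are illustrative and the Luhn checksum is standard; run by defenders instead, the same logic doubles as a rough data-inventory scan.

```typescript
// Illustrative only: simple pattern matching for card and SSN-like strings.
// Real tooling would handle more formats and reduce false positives.

// Luhn checksum weeds out random digit runs that are not valid card numbers.
function passesLuhn(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

function findPotentialPII(text: string): { cards: string[]; ssns: string[] } {
  const cardPattern = /\b(?:\d[ -]?){13,16}\b/g; // 13-16 digits, spaces/dashes allowed
  const ssnPattern = /\b\d{3}-\d{2}-\d{4}\b/g;   // US SSN format
  const cards = (text.match(cardPattern) ?? [])
    .map((m) => m.replace(/[ -]/g, ""))
    .filter(passesLuhn);
  return { cards, ssns: text.match(ssnPattern) ?? [] };
}

// Example: scanning one message body pulled from a compromised mailbox.
console.log(findPotentialPII("Card on file: 4111 1111 1111 1111, SSN 123-45-6789"));
// -> { cards: ["4111111111111111"], ssns: ["123-45-6789"] }
```

Attackers run exactly this kind of scan at scale; defenders can run it first to find and relocate sensitive data that should not be sitting in mailboxes or shared drives.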
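On the phishing side, one machine-checkable signal that still holds up is whether a sender’s domain publishes a DMARC policy. Below is a minimal Node.js sketch using the built-in DNS resolver; the domain is a placeholder, and a missing record is a caution flag, not proof of fraud.

```typescript
// Minimal sketch: look up a domain's DMARC policy via DNS.
// Requires Node.js; the domain below is a placeholder.
import { resolveTxt } from "node:dns/promises";

async function getDmarcPolicy(domain: string): Promise<string | null> {
  try {
    const records = await resolveTxt(`_dmarc.${domain}`);
    // TXT records arrive as arrays of string chunks; rejoin each record.
    return records.map((chunks) => chunks.join("")).find((r) => r.startsWith("v=DMARC1")) ?? null;
  } catch {
    return null; // no record published, or the lookup failed
  }
}

getDmarcPolicy("example.com").then((policy) =>
  console.log(policy ?? "No DMARC policy found; treat mail claiming this domain with extra care.")
);
```

A published policy of p=reject or p=quarantine tells receiving mail servers to discard or flag messages that spoof the domain.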
How to Protect Yourself and Your Organization
Combating the rapid evolution of AI scams requires both proactive defenses and a practiced response. Here are some practical steps you can take to safeguard your organization against AI scams:
- Careful Caller Verification: In a world of voice cloning, caller ID can no longer be trusted. Always verify any unusual or high-stakes requests by:
- Calling a known contact number for the person and verifying the request.
- Verifying over video using an internal connection on Slack or Teams.
- In your call centers, consider caller-verification tools like Okta and Caller Verify authentication apps. You can also use authentication solutions like Phoneprinting by Pindrop, which provides acoustic and behavioral analysis.
- Cybersecurity Awareness Training: Conduct regular cybersecurity awareness training for all employees, and make sure it covers the latest attack techniques. The best approach pairs general training for everyone with specialized training for IT (in prevention and response), accounting, and executives, since these groups are frequently targeted. Read our cybersecurity training blog for more detailed advice, and run regular social engineering simulations to test your team’s awareness and response.
- Transition to Strong Authentication Methods: Consider advanced multi-factor authentication (MFA) options such as passkeys, biometric verification, or hardware-based authentication tokens. Unlike traditional SMS or email-based MFA, these phishing-resistant methods are bound to the legitimate site, which defeats the adversary-in-the-middle kits behind AI scams that steal one-time codes and session cookies. Read our MFA blog for additional details, and see the passkey sketch after this list.
- Reduce Sensitive Data Exposure: Data is hazardous material. Use identity and access management controls to limit access to sensitive data, particularly in email and cloud systems, and regularly delete any data you no longer need; both practices reduce the damage if an attacker gains access to your systems. You should also encrypt your data and maintain a data and asset inventory to keep track of sensitive information, ensuring that only the necessary data is retained and readily accessible. The retention sketch after this list shows one way to automate the cleanup.
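To make the passkey recommendation concrete, here is a minimal browser-side registration sketch using the standard WebAuthn API. The relying-party and user details are hypothetical placeholders; in a real deployment, the challenge comes from your server, which also verifies and stores the resulting credential.

```typescript
// Minimal sketch: creating a passkey in the browser with the WebAuthn API.
// All names and IDs below are placeholders; servers normally supply them.
async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      // In production, the challenge is random bytes generated server-side.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Corp", id: "example.com" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // stable user handle in practice
        name: "jane.doe@example.com",
        displayName: "Jane Doe",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e., a passkey
        userVerification: "required", // device biometric or PIN
      },
    },
  });
  // Send the credential to the server for verification and storage.
  console.log("Created credential:", credential);
}
```

Because the browser binds the credential to the registered domain, a look-alike phishing site cannot replay it, which is what makes passkeys resistant to the AI-polished login pages described above.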
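And as one small way to automate the “delete unnecessary data” advice, here is a sketch of a retention sweep in Node.js that flags files older than a policy cutoff for review. The path and the 365-day window are assumptions; real retention periods depend on data class and legal requirements.

```typescript
// Minimal sketch: flag files older than a retention window for human review.
// Deliberately reports rather than deletes, so an owner confirms removal.
import { readdir, stat } from "node:fs/promises";
import { join } from "node:path";

const RETENTION_DAYS = 365; // hypothetical policy window

async function findStaleFiles(dir: string): Promise<string[]> {
  const cutoff = Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  const stale: string[] = [];
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      stale.push(...(await findStaleFiles(path)));
    } else if ((await stat(path)).mtimeMs < cutoff) {
      stale.push(path);
    }
  }
  return stale;
}

findStaleFiles("/srv/shared-drive").then((files) => console.log(files)); // placeholder path
```

Pairing a sweep like this with your data and asset inventory keeps the hazardous-material footprint small.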
As AI scams grow in sophistication, cybersecurity companies are developing AI-enhanced tools to counter them. For instance, major platforms like Microsoft and Google are incorporating AI algorithms to identify and block AI-generated content in real time. Attacks will continue to evolve, but by staying informed and implementing robust security practices, you can protect yourself and your organization from the far-reaching impact of today’s AI scams.
Please contact us if you need help with technical testing, advisory support, or training. Our expert team is ready to assist!