We’re All the Target: Generative AI and the Automation of Spear Phishing

Jim Downey
Published October 26, 2023

Not long ago, we could pick out phishing emails by their bad spelling, grammatical errors, and non-English syntax. We could spot widely used, generic ploys like the Nigerian prince scam. Most of us have not faced well-polished, targeted spear phishing because researching our backgrounds and crafting personalized messages has been too costly for criminals. With generative AI, that’s rapidly changing. As security professionals, we need to prepare for the consequences.

Generative AI enables the end-to-end automation of spear phishing, lowering its cost and broadening its use. Think of the work that an attacker must go through to craft an effective spear phishing message for a business email compromise (BEC). The attacker picks a target, researches their social media, discovers their closest connections, and picks out the target’s interests. With this information, the attacker crafts a personalized email in a tone of voice intended to avoid suspicion. The work requires patiently following leads and applying psychological intuition.

Could this work be automated? Certainly, attackers automate the scraping of social media content and use credential stuffing to take over accounts for information gathering. Through automation, attackers can build a knowledge graph about the life of a target.

With this knowledge graph, attackers can feed highly personal information into a ChatGPT-like service, one without ethical safeguards, to create targeted and effective spear phishing messages. The attacker could create entire sequences of messages that span multiple channels, from email to social media, with messages originating from multiple fake accounts, each with a persona tailored to the target’s trust propensities.

There are signs that this threat is imminent. Reports of new attack tools for sale on the dark web, including WormGPT and FraudGPT, indicate criminals have begun adapting generative AI to nefarious purposes, including phishing. While the use of this technology has not yet reached large-scale, end-to-end automation, the pieces are coming together, and the economic dynamics of cybercrime make the development nearly inevitable.

Within the economy of cybercrime, specialization drives innovation. The World Economic Forum (WEF) estimates that cybercrime is now the world’s third-largest economy, behind only the United States and China, with costs expected to reach $8 trillion in 2023 and $10.5 trillion in 2025. As in any large economy, the cybercrime economy includes vendors with specializations: vendors who sell stolen credentials, vendors who provide access to compromised accounts, and vendors offering IP address proxying across tens of millions of residential IP addresses.

Moreover, there are phishing-as-a-service providers offering complete toolkits, from email templates to real-time phishing proxy sites. (See Jay Kelley’s article on how phishing sites use valid TLS to spoof real sites.) As vendors compete to win the business of criminals, the biggest prizes will go to the organizations providing the lowest-cost end-to-end service, a dynamic likely to drive forward the automation of spear phishing. We can imagine vendors specializing in gathering data on targets, aggregating that data, or building LLMs tuned to specific industries or to distinct types of fraud.

Given the likelihood of increases in spear phishing to new targets, organizations need to bolster their existing anti-phishing practices.

Uplevel Phishing Awareness Training: It has long been important to regularly educate employees about the dangers of phishing, how to recognize suspicious emails, and what steps to take if they encounter a potential phishing attempt. However, many organizations train employees to recognize phishing emails by their spelling and grammar mistakes. Training must now go deeper, teaching people to scrutinize any request from a non-trusted, non-verified source. When conducting simulated phishing campaigns to test employees’ ability to identify phishing emails, use messages that are well-written, professional, targeted at specific employees, and sent from sources that appear legitimate.
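As one small illustration of the "verify the source" habit, here is a minimal sketch of flagging lookalike sender domains, the kind of signal a mail filter or a training exercise might surface. The allowlist and the 0.8 similarity threshold are invented for illustration, not a recommendation:

```python
import difflib

# Hypothetical allowlist of verified sender domains (illustrative only).
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def sender_risk(address: str) -> str:
    """Classify a sender address as trusted, lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # Flag domains that closely resemble a trusted one (e.g. examp1e.com).
    for trusted in TRUSTED_DOMAINS:
        if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8:
            return "lookalike"
    return "unknown"
```

A well-crafted spear phishing message will pass spelling and grammar checks, so domain-level signals like this matter more than message polish.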

Defend Against Real-Time Phishing Proxies: Attackers often use phishing to bypass multi-factor authentication (MFA) via real-time phishing proxies. The criminals use phishing to fool users into entering their credentials and one-time password into a site that they control, which they then proxy to the real application to gain access. (For more on MFA bypass and defenses, see the whitepaper: Does MFA Solve the Threat of Account Takeover?)
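To see why one-time passwords can be relayed through a phishing proxy while origin-bound credentials cannot, here is a deliberately simplified conceptual sketch. An HMAC stands in for the public-key signature a WebAuthn authenticator would produce, and all names and origins are illustrative:

```python
import hashlib
import hmac

# Illustrative stand-in for a per-device private key.
DEVICE_KEY = b"per-device secret"

def sign_assertion(challenge: bytes, browser_origin: str) -> bytes:
    # The authenticator mixes the origin the browser actually visited
    # into the signed payload.
    return hmac.new(DEVICE_KEY, challenge + browser_origin.encode(),
                    hashlib.sha256).digest()

def verify(challenge: bytes, assertion: bytes, expected_origin: str) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(assertion, expected)

# Legitimate login: the browser origin matches the real site.
ok = verify(b"c1", sign_assertion(b"c1", "https://bank.example"),
            "https://bank.example")

# Phishing proxy: the victim's browser saw the phishing origin, so the
# relayed assertion fails verification at the real site.
phished = verify(b"c1", sign_assertion(b"c1", "https://bank-login.example.net"),
                 "https://bank.example")
```

A one-time password is just a string the proxy can forward; an origin-bound assertion cannot be replayed from a domain the user never meant to trust, which is why phishing-resistant MFA is a stronger defense against these proxies.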

Defend More Rigorously Against Account Takeovers: Using bots, criminals conduct credential stuffing at scale, resulting in massive numbers of account takeovers. In addition to committing financial fraud, criminals scrape additional personal data from compromised accounts that they can use in further phishing attacks. Defending effectively against bots requires rich signal collection and machine learning.
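As a minimal sketch of one such signal (the window and threshold here are assumptions; real bot defense combines many richer signals and ML), a per-IP failed-login velocity check might look like this:

```python
from collections import defaultdict, deque

# Assumed thresholds, for illustration only.
WINDOW_SECONDS = 60
MAX_FAILURES = 5

# Timestamps of recent failed logins, keyed by source IP.
_failures: dict[str, deque] = defaultdict(deque)

def record_failure(ip: str, now: float) -> bool:
    """Record a failed login; return True if the IP looks like credential stuffing."""
    q = _failures[ip]
    q.append(now)
    # Drop failures that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES
```

On its own, a velocity check is easily evaded by attackers rotating through residential proxies, which is why the article stresses rich signal collection rather than any single heuristic.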

Use AI to Battle AI: With criminals exploiting generative AI to commit fraud, organizations should leverage AI in their defense. F5 partners with organizations to take advantage of rich signal collection and AI to battle fraud. F5 Distributed Cloud Account Protection monitors transactions in real time across the user journey to detect malicious activity and deliver accurate fraud detection. Detecting fraud within applications reduces the harm of phishing. (Inspecting traffic with AI requires decrypting it, which you can accomplish efficiently with TLS orchestration.)
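As a toy illustration of behavior-based scoring (this is not F5's method; the data and the threshold of 3 are invented), a transaction can be scored by how far it deviates from a user's history:

```python
import statistics

def anomaly_score(history: list[float], amount: float) -> float:
    """Return the |z-score| of a new transaction amount against history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(amount - mean) / stdev

history = [20.0, 25.0, 22.0, 30.0, 18.0]  # hypothetical past purchases
routine = anomaly_score(history, 24.0)    # low score: typical purchase
suspect = anomaly_score(history, 5000.0)  # high score: flag for review
```

Production systems score many signals across the user journey, not just amounts, but the principle is the same: even a fraudster who phished valid credentials behaves differently from the legitimate account holder.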

Conclusion

Generative AI poses a new set of security challenges. With the onset of automated spear phishing, we need to unlearn many of our heuristics of trust. While in the past we may have trusted based on the appearance of professionalism, we now need more rigorous protocols for determining the veracity of communications. We need to become more suspicious in this new age of misinformation campaigns, deepfakes, and automated spear phishing, and organizations will need to deploy AI in defense at least as rigorously as criminals use it against us.

To follow the evolution of the threat landscape, stay tuned to the latest from F5 Labs. If your organization is under attack, contact F5 for support.