
AI-powered Cyber Attacks

AI and Machine Learning can find the optimal cyberattack strategy by analyzing all possible vectors of attack.
December 30, 2020
5 min. read

“Those that fail to learn from history are doomed to repeat it.” Winston Churchill’s paraphrased wisdom rings true 72 years later as we brace ourselves for evolving cyber threats. Many companies have thousands of applications with long-lost source code written by developers from days gone by, and no solution in place to understand the risks that lie within. Applications have been exploited over and over again, but now—more than ever—is the time to truly understand the risks hidden deep within their code.

Enter the Artificial Intelligence-Powered Attacker

As the bad guys become more sophisticated, we need to prepare for attacks that use Artificial Intelligence (AI), Machine Learning, and evolutionary computation algorithms. But how are they different? Machine Learning, although widely considered a form of AI, is designed to let machines learn from data rather than from explicit programming. Its practical use is to predict outcomes, much as we recognize a red octagonal sign with white letters and know to stop. AI, on the other hand, can determine the best course of action: how to stop, when to stop, and so on. The difference, simply put: Machine Learning predicts, AI acts.1
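
To make that distinction concrete, here is a minimal, purely illustrative Python sketch. The function names, features, and thresholds are hypothetical: classify_sign stands in for a trained Machine Learning model that predicts what it is looking at, while decide_action plays the AI role of deciding what to do about it.

```python
# Purely illustrative sketch: Machine Learning predicts, AI acts.
# classify_sign() stands in for a trained model's predict() call;
# decide_action() is the decision layer that turns a prediction into an action.
# All names, features, and thresholds here are hypothetical.

def classify_sign(image_features):
    # A real system would run a trained classifier here.
    return "stop_sign" if image_features.get("red_octagon") else "unknown"

def decide_action(prediction, speed_kmh):
    # The "acting" part: choose how and when to stop based on the prediction.
    if prediction == "stop_sign":
        return "brake_hard" if speed_kmh > 40 else "brake_gently"
    return "continue"

prediction = classify_sign({"red_octagon": True, "white_letters": True})
print(decide_action(prediction, speed_kmh=55))  # -> brake_hard
```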

Evolutionary computation used within AI can take all the known information and solve problems with no known solutions using concepts from biology: inheritance, random variation, and selection.2 Applied to an attack, it could look at all the potential vectors, analyze them, predict the best option, and take action, all while learning how not to get caught. Like natural selection, an evolutionary algorithm solves a problem by initializing a population of candidate solutions, selecting the fittest, applying crossover and mutation (the genetic operators), selecting again, and terminating the weakest link (so to speak). In short, the strongest methods survive and the weakest die off.3 Darwinism at its best, but now baked into software. This is another powerful technique that can be brought to bear in future cyberattacks.
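
For readers unfamiliar with the mechanics, the sketch below is a minimal, self-contained evolutionary algorithm in Python. The fitness function is a deliberately harmless toy (maximize the number of 1-bits in a bitstring), standing in for whatever objective a real system would pursue; the population size, mutation rate, and other parameters are arbitrary choices for illustration.

```python
import random

# Minimal evolutionary-algorithm sketch (toy example, not an attack tool).
# The "fitness" is a harmless stand-in: the count of 1-bits in a bitstring.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.05

def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1s

def select(population):
    # Tournament selection: the fitter of two random candidates survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, GENOME_LEN - 1)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Initialize a random population, then repeatedly select, cross over, and
# mutate; weaker candidates simply fail to be selected and die off.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), best)
```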

With all three combined, we must anticipate attackers leveraging this advanced technology offensively against your company. It is possible they could design, or have already designed, a program “smart” enough to understand how to analyze all possible vectors of attack. Such a program could potentially select the best option, execute it successfully, and run 24x7, critically “thinking” about how to attack and, most importantly, how to remain undetected.

How and Where It’s Happening

We have present-day examples of such programs operating, using data, and making decisions that best apply to their “endgame,” which in cybersecurity would be to successfully breach a company’s defenses and steal, disrupt, or destroy its data.

In 2017, Facebook shut down an experiment with AI programs. These programs were designed to negotiate trades of random items such as balls, hats, and books between two chatbots.4 However, as the negotiations progressed, the bots began communicating with each other in a new, previously unseen language. It turned out that the programs had spontaneously created a simplified version of English. What confused researchers was that the bots were not programmed to create a new structured language to make negotiations easier. Yet they did it on their own.

At first, people believed that the researchers panicked and shut down the experiment because the chatbots were operating beyond their program. However, the researchers denied these claims. So then what really happened? Was the AI program spontaneously evolving? Were we about to face Judgement Day?5 Let’s explore.

Artificially Intelligent Cyber Attackers?

The concept of intelligent machines was originally theorized by Alan Turing 72 years ago when he explored the possibility of machinery showing intelligent behavior, and mankind’s unwillingness to believe it was possible.6

Machine Learning has already been used to find (and patch) security vulnerabilities, but can it go a step further? Until now, human beings have been the sole creators of malicious software, and malware capable of morphing on its own, hiding on its own, replicating on its own, and thinking on its own has been hard to imagine. Now, however, humans can design programs that “think” independently, with the capacity to attack hundreds of targets at a time. The impossible is now the inevitable. But how does this work?

What do AI-Powered Cyberattacks Look Like?

Now that we know intelligent machines are mathematically possible, we must anticipate a future software program smart enough to understand how to analyze all possible vectors of attack, select the best option, execute successfully, and remain undetected. And these programs could potentially be aimed at your organization. An evolutionary-algorithm-driven cyberattack program will run non-stop, 24x7, critically “thinking” about how to attack. Most importantly, it can evolve to avoid being detected.

Such a program, for example, could be smart enough to identify every employee who works or has ever worked for your company, perhaps by trawling through LinkedIn data. It could then attack each of their home networks and lie in wait for one of them to connect to the corporate network. No matter what authentication, VPNs, or firewalls are in place, the planted program simply rides along into the network and spreads its wings to find its target data. Disrupt, steal, destroy; no matter the objective, it could feasibly succeed.

Defending against AI-powered Cyberattacks

What will we do to protect ourselves from threats that have no morals, no boundaries, and no concern over the damage they deal? As Oscar in Armageddon said: “Okay, so the scariest environment imaginable. Thanks.”7

The good news is we understand our enemy. The next step is to plan your defense. Here are a few thoughts to consider:

  1. Know your code. Most importantly, given the easy target applications represent, you must ensure you analyze your software code for bugs, malware, and behavioral anomalies. Signature “scans” are not enough, as they only look for what is known. It is likely these new attacks will leverage techniques and tools previously unknown, so understanding the risks inside your code is more important than ever.
  2. Get back to basics. The weakest link is the human element, so let’s get back to basics. Policies need to be practiced and enforced. Some of the largest hacks have succeeded via painfully preventable methods, such as employees picking up random USB drives in a parking lot and plugging them into their company computers. Despite cybersecurity training, people keep clicking on phishing emails—it isn’t going to stop. But as long as companies employ people, you have to reduce the risk they introduce, so train them, train them, and train them again.
  3. Monitor your logs. You have to monitor your logs, detect threats, and look for behavioral anomalies. Many people say they monitor logs, but in practice it may happen only monthly, annually, or even never. As the 2020 Verizon Data Breach Investigations Report says, “Discovery in months or more still accounts for over a quarter of breaches.” So please, monitor your logs.
  4. Look for patterns. Finally, when monitoring your logs, use AI to fight AI. Machine Learning-based security log analysis is a great way to search for patterns and anomalies. It can incorporate a practically unlimited number of variables and produce predictive intelligence, which in turn drives predictive action (see the sketch that follows this list).
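
As an illustration of that last point, here is a minimal sketch of Machine Learning-driven log anomaly detection. It assumes scikit-learn is available and that each log entry has already been reduced to numeric features; the feature names and values below are hypothetical.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# Assumes scikit-learn; feature names and values are hypothetical examples.
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins_per_hour, bytes_out_mb, distinct_hosts_contacted]
baseline = [
    [1, 12.0, 3], [0, 9.5, 2], [2, 15.1, 4], [1, 11.2, 3],
    [0, 8.7, 2], [1, 13.4, 3], [2, 10.9, 4], [1, 12.6, 3],
]
new_events = [
    [1, 11.8, 3],      # looks like normal behavior
    [40, 950.0, 60],   # burst of failures plus large outbound transfer
]

# Learn what "normal" looks like from baseline activity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

for event, label in zip(new_events, model.predict(new_events)):
    # predict() returns 1 for inliers and -1 for anomalies.
    status = "anomaly - investigate" if label == -1 else "normal"
    print(event, "->", status)
```
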
Authors & Contributors
Kathie Miley (Author)
EVP
Footnotes

1 https://pubs.spe.org/en/twa/twa-article-detail/?art=3781

2 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5534026/

3 https://towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac

4 https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

5 https://en.wikipedia.org/wiki/The_Terminator

6 https://www.historyofinformation.com/detail.php?id=4289

7 https://en.wikipedia.org/wiki/Armageddon_(1998_film)
