The Malicious Use of Artificial Intelligence in Cybersecurity
23 Aug, 2018
Artificial intelligence (AI) is one of the most important and active areas of research in the field of computer science.
AI has gone from the realm of science fiction to being something that we see and interact with on a daily basis. Voice assistants are increasingly finding their way into our homes and our smartphones. These devices operate with a sophistication that would have been all but unthinkable, especially in a consumer product, just a few decades ago.
Moreover, for some time now, cybersecurity companies have been experimenting with how AI can improve the overall computing security experience, from powerful systems at large enterprises to endpoint protection on the laptops and desktops found at home.
However, like all powerful technologies, AI is a double-edged sword. It has the potential to improve our lives, yet it also presents a unique set of challenges regarding cybersecurity.
At the heart of the issue lies the simple fact that AIs have no ethical bias. Even if we can code in something resembling a conscience to simulate ethics in a machine, these would be the ethics of the designer, not an objective set of ‘good ethics.’ Thus, not surprisingly, the technology can be used for nefarious purposes as well as for well-intentioned ones.
Many have warned about the potential dangers that we might face regarding AI; however, these concerns are no longer theoretical. We now know that malicious actors, including nation-states, are using AI to undermine the cybersecurity of other nations and foreign businesses.
Machine Learning (ML) is a particular concern. Whereas AI is the broad field of making machines display something akin to intelligence, ML is the subset of AI in which a system learns from data, becoming more capable as it is exposed to more examples. Far fewer people are aware of ML than of AI, but this burgeoning technology is becoming ever more relevant to our everyday lives.
Machine learning involves feeding an algorithm a large set of correctly labelled examples of a problem. For example, if you want to develop an AI that can identify bananas, you would feed a machine learning algorithm hundreds of thousands of images, each labelled according to whether or not it contains a banana. The algorithm works out what the banana images have in common, and can then identify a banana in images it has never seen before.
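The banana example can be sketched in a few lines. The two numeric features below (a hypothetical 'yellowness' and 'elongation' score per image) and the tiny data set are invented for illustration; real systems use thousands of images and far richer models. This nearest-centroid sketch only shows the core idea of learning from labelled examples.

```python
# Toy supervised learning: classify fruit from two hypothetical
# features (yellowness, elongation), both scaled 0-1.

def train_centroids(samples):
    """'Learning' step: average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(centroids, features):
    """Assign the label whose learned centroid is closest (Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid)) ** 0.5
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Labelled training data: (yellowness, elongation) -> fruit
training = [
    ([0.90, 0.80], "banana"), ([0.85, 0.90], "banana"),
    ([0.60, 0.10], "apple"),  ([0.50, 0.20], "apple"),
]
model = train_centroids(training)
print(classify(model, [0.88, 0.85]))  # banana-like sample -> "banana"
```

The point is that nothing banana-specific was coded by hand; the decision rule emerged entirely from the labelled examples.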
Our most sophisticated defenses against cyber-attacks, the kind employed by large corporations and nation-states to protect vital systems and infrastructure, rely upon machine learning. By contrast, the standard anti-virus tools that most of us use employ databases of known threats to check for malicious files on a user’s computer. This is a reactive method of defense: it can protect against known threats, but not against new threats, also known as ‘zero-day attacks.’
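The reactive, signature-based approach can be sketched as a simple hash lookup. The 'payload' byte strings below are made up purely for illustration; the point is that a sample absent from the database, such as a zero-day variant, is simply never flagged.

```python
import hashlib

# Signature database: SHA-256 hashes of known-bad files.
# (Byte strings here are invented for illustration.)
known_bad = {hashlib.sha256(b"EVIL_PAYLOAD_V1").hexdigest()}

def is_known_threat(file_bytes: bytes) -> bool:
    """Reactive check: flag a file only if its hash matches a signature."""
    return hashlib.sha256(file_bytes).hexdigest() in known_bad

print(is_known_threat(b"EVIL_PAYLOAD_V1"))  # True: matches a known signature
print(is_known_threat(b"EVIL_PAYLOAD_V2"))  # False: the 'zero-day' variant slips through
```

This is why a trivially modified piece of malware evades signature matching, and why ML-based defenses, which generalise from examples rather than matching exact fingerprints, have become attractive.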
Machine learning is a powerful concept. It allows us to train AI to do things without us having to ‘explain’ (through coding) what those things are and how they work.
It was always inevitable that, sooner or later, such a powerful technology would be used for malicious purposes. Now, a number of academics and cybersecurity professionals have sounded the alarm over the increasing use of AI to undermine conventional cybersecurity systems.
One attack that can compromise ML is known as poisoning the well, or data poisoning. Malicious actors take advantage of the machine learning process and taint the data pool from which these systems learn to identify malicious code. By inserting fraudulent samples into the training data, attackers can cause a system to generate false positives, undermining its intended functionality.
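A minimal sketch of this attack, using a one-feature classifier. The feature (a hypothetical 'suspiciousness' score per file) and the data are invented for illustration. The clean model learns a sensible threshold; injecting mislabelled low-scoring samples drags the threshold down, so ordinary files start being flagged as malicious, exactly the false positives described above.

```python
def train_threshold(samples):
    """Learn a cutoff: midpoint between the mean score of each class."""
    benign = [s for s, lbl in samples if lbl == "benign"]
    malicious = [s for s, lbl in samples if lbl == "malicious"]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

def predict(threshold, score):
    return "malicious" if score >= threshold else "benign"

clean = [(0.1, "benign"), (0.2, "benign"),
         (0.8, "malicious"), (0.9, "malicious")]

# Attacker poisons the pool: low-scoring samples falsely labelled malicious.
poisoned = clean + [(0.15, "malicious")] * 8

t_clean = train_threshold(clean)        # 0.5: separates the classes well
t_poisoned = train_threshold(poisoned)  # 0.22: dragged toward the poison

print(predict(t_clean, 0.3))     # "benign"
print(predict(t_poisoned, 0.3))  # "malicious": a false positive
```

Real poisoning attacks are subtler, but the mechanism is the same: the model faithfully learns whatever the training data says, including the attacker's lies.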
Protecting Ourselves from Cyber Attacks
The rise of AI-driven approaches to cyber-attacks is of particular concern because most major nation-states are actively engaged in them. This carries the risk of normalizing the use of AI as an offensive weapon, and also increases the chances of powerful malware developed by nation-states making its way into the wild, where it might affect everyday computer users. There is no such thing as a perfect security solution. However, this should provide a fresh impetus for all of us to ensure that we are sufficiently protected against cyber-threats.
In addition to making sure that we have antivirus software installed on all our devices, it is also worth adding a virtual private network into the mix. Given the rise in cyber-attacks directed at specific individuals and systems, a VPN, which allows you to obscure your IP address and physical location, can go a long way toward enhancing your security setup. Businesses have long used enterprise VPN technology to secure their connections against malicious attackers.
Both artificial intelligence and machine learning are becoming essential technologies that benefit us in numerous ways. However, we are increasingly seeing them used for nefarious purposes. This is something we should all remain vigilant about, as these technologies also have the potential to undermine our common defenses.