Artificial intelligence and machine learning: the ultimate game-changers
While technology may seem to be on the brink of achieving perfect artificial intelligence, we are not quite there yet. Fortunately for us, an AI does not need to be irreproachable, only better than a human. Take connected cars, for instance. An AI-based driver may not be mistake-proof, but it is certainly less imperfect than a human driver.
This is very much the case in cybersecurity, where IT experts are changing the rules of the game using Machine Learning.
Machine Learning (ML) is far from being a new, ground-breaking concept; yet its use has grown greatly in popularity since the dawn of Big Data (see our previous article “The Big Question of Big Data” here). Although the terms “Artificial Intelligence” and “Machine Learning” are often used as synonyms, it should be noted that ML is a sub-domain of AI rooted in statistics and predictive analysis.
In other words, Machine Learning is a way to support the implementation of an AI. By now this is a well-known recipe, one meant to drastically increase our chances of detecting and preventing advanced and persistent cyber-attacks.
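To give a flavour of the statistics underlying such detection systems, here is a minimal, self-contained sketch in plain Python. It flags anomalous behaviour (here, a burst of login attempts) with a simple z-score test; the numbers, the threshold, and the function name are illustrative assumptions of ours, not part of any real product, and production ML-based detection is of course far more sophisticated.

```python
# Illustrative sketch: flag values that deviate sharply from the baseline.
# The data, threshold, and function name are hypothetical examples.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Return the indices of values lying more than `threshold`
    standard deviations from the mean of the series."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Hourly login attempts for one account; hour 5 hides a brute-force burst.
attempts = [12, 9, 11, 10, 13, 250, 12, 10]
print(find_anomalies(attempts))  # → [5]
```

A real system would learn its baseline from historical traffic and many features at once, but the principle is the same: model normal behaviour statistically, then treat large deviations as suspicious.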
Another question arises, though: if we came to this conclusion in four short paragraphs, do you honestly think criminal organizations won’t? Do you think that hackers will simply sit on the sidelines and wait for us to create this all-powerful weapon?
Artificial intelligence is a technology that promises to revolutionize cybersecurity, so why would hackers refrain from applying it to cybercrime? It is a scenario worth thinking about.
The cybersecurity of tomorrow could very well pave the way for a new era of hacking. Here is our supporting evidence:
The organizers of the 2016 Cyber Grand Challenge, a competition sponsored by the US defense agency DARPA, gave us a glimpse of a world where the power of an AI could thrive. During this edition, seven supercomputers competed against one another. The aim? To find software vulnerabilities faster than a human.
The technology is meant to be used for good, obviously, its purpose being to perfect code by purging it of exploitable defects. But what if, just what if, this power were to fall into the wrong hands?
Cybercriminals could use such capabilities to scan software, uncover previously unknown vulnerabilities, and then exploit them, all with the speed and accuracy of a machine. Attacks that were once incredibly time-consuming could become routine in this nightmare scenario.
And that is before we even factor in that, nowadays, the “as-a-service” model (see our previous article “Cybercrime-as-a-service at all time high” here) has become a very attractive business strategy for cybercriminals. Just imagine: you no longer need hacking skills to hijack someone’s data. A whole range of tools (from exploit kits to malware) is now available to the willing amateur… provided they pay the price.
What if these rent-a-hack services were one day to incorporate an AI? It is not that difficult to imagine a world where artificial intelligence could actually help cybercriminals build such powerful weapons.
So what’s keeping them from actually doing it?
One essential reason is cost: building an AI requires enormous resources, both human and material. Whoever attempts to develop such a technology with criminal intent would face this initial and very significant barrier. And while they’re at it, they will have to make sure their best developers stay put and don’t defect to the highest bidder.
Another reason for this stalling is need. For the moment, there is simply no need for hackers to invest that much in an AI. The Internet is full of undiscovered 0-days and backdoors that are just waiting to be found.
With time, however, this situation might change. The costs of IT innovation will inevitably diminish, and the supply of easily exploitable vulnerabilities will dry up. If that day does come, we should at least have taken notice of the issue and asked ourselves: are we ready for an AI-versus-AI cyber-war?