As the world of information security becomes more complex, every week brings new announcements and promises in the form of Machine Learning (ML) and Artificial Intelligence (AI) research. Whilst there is still a lot of marketing buzz, we are now seeing real development and real products on the market.

We are slowly, methodically developing more intelligent algorithms, and there is already a strong desire to increase the impact of SOAR (Security Orchestration, Automation and Response) through autonomic means. These new techniques may work to our advantage today, but they could soon become an even more powerful weapon in adversaries' hands. One of the most complicated aspects of the next generation of autonomous security systems is finding the right balance between intelligence and predictability.
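To make that trade-off concrete, here is a minimal sketch (plain Python, not any vendor's API) of what an autonomic response rule might look like: an ML detector supplies the intelligence, while hard-coded guardrails supply the predictability. The Alert fields, thresholds and action names are all illustrative assumptions.

```python
# A hypothetical sketch of an autonomic SOAR rule that balances
# ML-driven intelligence with deterministic guardrails. All fields,
# thresholds and actions are assumptions, not a real product's API.

from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    anomaly_score: float     # 0.0-1.0, output of some ML detector (assumed)
    is_critical_asset: bool

def choose_response(alert: Alert) -> str:
    """Map an ML score to an action, but keep the outcome predictable:
    high-impact actions on critical assets always require a human."""
    if alert.anomaly_score < 0.5:
        return "log_only"                 # low confidence: just record it
    if alert.anomaly_score < 0.8:
        return "quarantine_file"          # medium confidence: reversible action
    # High confidence: containment, but guardrail the critical assets.
    if alert.is_critical_asset:
        return "escalate_to_analyst"      # intelligence defers to predictability
    return "isolate_host"                 # full autonomic response

if __name__ == "__main__":
    print(choose_response(Alert("db01", 0.93, is_critical_asset=True)))
    # -> escalate_to_analyst: the model is confident, but the playbook
    #    refuses to take an unpredictable, high-impact action on its own.
```

The design choice is the point: the model decides how suspicious something is, but a deterministic policy decides what the system is allowed to do about it.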

Surely, it is only a matter of time before we see the rise of intelligent attacks: smart phishing (spam distributed at volume, with automated spear-phishing targeting), self-sustaining hivenets and swarmbots capable of carrying out attacks without any human supervision, or polymorphic malware that continually evolves over the course of an attack.

The social and ethical constraints that slow white hat researchers' progress on extreme AI use cases today will not apply to tomorrow's attackers and their methods. Dark AI will not be developed in a tightly controlled environment, and its development is certainly not going to be regulated. Building the most effective (and destructive) artificial intelligence will take priority over moral considerations or long-term implications. Nobody expects Dark AI to follow the Three Laws of Robotics.

You may think that the digital underground is not capable of producing the advanced algorithms Dark AI requires. Think again. Modern malware and ransomware are no longer developed by small hacker groups in which a single coder was tasked with writing the complete code for the malware kill chain. Today, thousands of marketplaces are littered across the Dark Web where you can buy specialised components or shop for the latest exploits, just as you would shop for books or clothes. Typical ransomware now consists of various third-party and open source modules specialised in encryption/decryption, payments, command & control infrastructure and more.

Today, creating complex algorithms still requires specialised coding skills, but it is only a matter of time (and money) before someone builds a commodity module that allows other bad actors to compile and deploy AI-infused malware. Large botnets, consisting of hundreds of thousands of infected machines and IoT devices, can provide the computing power required to fuel this next generation of attacks.

We should realistically expect algorithmically enhanced development at every stage of a typical malware attack, whether reconnaissance, infection, command & control, expansion or extortion. With the threat of modularised, intelligent malware and ransomware attacks, it is time to rip apart your static security playbook and implement an adaptive layer of combined security tools, as sketched below. Defenders need to disrupt or break the cyber kill chain at as many stages as possible.
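As a rough illustration of what that adaptive, multi-stage defence could look like, the hypothetical Python sketch below tags each detection with a kill-chain stage, applies a stage-specific disruption, and tightens its own thresholds after every confirmed hit instead of replaying a static rulebook. The stage names, actions and adaptation rule are assumptions chosen for illustration, not a reference implementation.

```python
# A hypothetical sketch of "breaking the kill chain at as many stages
# as possible": detections are tagged with a stage, each stage has its
# own disruption, and thresholds adapt as incidents are confirmed.
# Stage names, actions and the adaptation rule are illustrative only.

from collections import defaultdict

DISRUPTIONS = {
    "recon":           "rate_limit_external_scans",
    "infection":       "detonate_in_sandbox_and_block_hash",
    "command_control": "sinkhole_c2_domain",
    "expansion":       "segment_vlan_and_revoke_tokens",
    "extortion":       "freeze_backups_and_alert_ir_team",
}

class AdaptivePlaybook:
    def __init__(self, base_threshold: float = 0.7):
        # Every stage starts with the same detection threshold.
        self.thresholds = defaultdict(lambda: base_threshold)

    def handle(self, stage: str, confidence: float) -> str | None:
        """Trigger the stage-specific disruption once the detector's
        confidence clears this stage's (adaptive) threshold."""
        if confidence >= self.thresholds[stage]:
            # Adapt: after a confirmed hit, watch every stage more
            # aggressively rather than replaying a static rulebook.
            for s in DISRUPTIONS:
                self.thresholds[s] = max(0.4, self.thresholds[s] - 0.1)
            return DISRUPTIONS[stage]
        return None

playbook = AdaptivePlaybook()
print(playbook.handle("recon", 0.75))            # rate_limit_external_scans
print(playbook.handle("command_control", 0.65))  # fires: threshold has dropped
```

The second detection would have been ignored by a static 0.7 rulebook; here the earlier reconnaissance hit makes the playbook more suspicious across every subsequent stage.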

Instead of two huge armies clashing on a battlefield in a single large, decisive battle, we are going to see hybrid warfare consisting of thousands of isolated incidents and engagements, most of them without any human supervision. Orchestration, automation and the ability to protect all resources with adaptive security tools are going to define the digital battlefield of the future.

Christian Reilly will discuss “ripping up the security rulebook” in more detail at 12.05 p.m. on Thursday, March 22, at Cloud Expo Europe, ExCeL, London.