Artificial Intelligence (AI) is already being applied to diverse use cases, from consumer-oriented devices such as voice-controlled personal assistants and self-directed vacuum cleaners, through to groundbreaking business applications that optimise everything from drug discovery to financial portfolio management. Naturally, observes Delfina Chain, there’s growing interest within the information security community around how AI (which encompasses the concepts of machine learning and deep learning) might be leveraged to combat cyber threats.
The effectiveness and scalability of cyber security-related tasks, such as malware and spam detection, have already been enhanced by AI. Many commentators expect ongoing AI innovations to have a transformative impact on cyber defence capabilities. However, security practitioners must also recognise that the rise of AI presents a potent opportunity for cyber criminals to optimise their malicious activities.
Much like the rise of Cyber Crime-as-a-Service offerings in the underground economy, threat actor adoption of AI-focused technology is expected to reduce barriers to entry for less-skilled actors seeking to conduct advanced malicious operations.
A report from the Future of Humanity Institute emphasises the potential for AI to be used towards beneficial and harmful ends within the cyber realm, which is amplified by its efficiency, scalability and potential to exceed human capabilities. Potential uses of AI among cyber criminals could include the development of highly evasive malware, the ability for automated systems to exhibit human-like behaviour during Denial of Service attacks and the optimisation of activities such as vulnerability discovery and target prioritisation.
Fortunately, defenders have a head start on their adversaries in this arms race to harness the power of AI technology, largely due to the time- and resource-intensive nature of deploying AI at its current stage in development.
Implications for defenders
The purpose of intelligence is to inform a course of action. For defenders, this course of action should be guided by the level of risk (ie likelihood × potential impact) posed by a threat. The best way to evaluate how likely a threat is to manifest itself is by monitoring threat actor activity on the Deep and Dark Web forums, underground marketplaces and encrypted chat services on which they exchange resources and discuss their tactics, techniques and procedures.
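The likelihood × impact formula above can be sketched as a simple prioritisation routine. This is a minimal illustration only: the threat names, score scales and values below are invented for demonstration, not drawn from any Flashpoint methodology.

```python
# Hypothetical risk scoring: risk = likelihood x potential impact.
# Both factors use an illustrative 1-5 scale; all data below is invented.

def risk_score(likelihood: int, impact: int) -> int:
    """Return a simple risk score from likelihood and potential impact."""
    return likelihood * impact

# Example threats with (likelihood, impact) scores -- demonstration values only.
threats = {
    "AI-evasive malware": (2, 5),    # rare today, but high potential impact
    "credential phishing": (5, 3),   # very common, moderate impact
    "commodity spam": (5, 1),        # very common, low impact
}

# Rank threats by risk to guide the course of action.
ranked = sorted(threats.items(), key=lambda t: risk_score(*t[1]), reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk={risk_score(likelihood, impact)}")
```

In this toy ranking, credential phishing (risk 15) outranks AI-evasive malware (risk 10) despite the latter's higher impact, illustrating why monitoring threat actor activity matters: it is the likelihood factor that intelligence keeps current.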
Cyber criminal abuse of technology is nothing new, and by gaining visibility into adversaries’ ongoing efforts to develop more advanced tactics, techniques and procedures, defenders can better anticipate and defend against evolving attack methods.
Flashpoint’s analysts often observe cyber criminals abusing legitimate technologies in a number of ways, ranging from the use of pirated versions of the Cobalt Strike threat emulation software to elude server fingerprinting through to the use of tools designed to aid visually impaired or dyslexic individuals to bypass CAPTCHA in order to deliver automated spam.
Our analysts also observe adversaries adapting their tactics, techniques and procedures in response to evolving security technologies, such as the rise of ATM shimmers in response to EMV chip technology. In all of these instances, our analysts have provided customers with the technical and contextual details needed to take proactive action in defending their networks against these tactics, techniques and procedures.
When adversaries’ abuse of AI technology begins to escalate, their activity within the Deep and Dark Web and encrypted channels will be one of the earliest and most telling indicators. By establishing access to the resources needed to keep a finger on the pulse of the cyber criminal underground, defenders can lay the groundwork needed to be among the first to know when threat actors develop new ways of abusing AI and other emerging technologies.
Delfina Chain is Senior Associate for Customer Engagement and Development at Flashpoint