Artificial Intelligence: How Will It Define Cyber Security over the Coming Years?

by Brian Sims
Richard Menear

There has been much excitement about Artificial Intelligence (AI) and its application in cyber security, writes Richard Menear. Here at Burning Tree, we’re monitoring the technology closely to see where it can help our clients protect their business and data, and in which areas AI might improve and enhance information security.

Many of today’s organisations already have some AI cyber security solutions in place in the form of machine learning tools. While these are not true AI solutions (which would be capable of rewriting code to protect systems and shore up vulnerabilities), machine learning is a step closer to that scenario.

Currently, machine learning solutions are often used to monitor activity and take appropriate action when unusual behaviours are detected. These solutions ‘learn’ what’s normal, identify what isn’t and then, depending on predetermined rule sets, take remedial action. This might mean flagging the issue as a priority for a security analyst, blocking access for certain users or any other automated action chosen.
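That learn-the-norm, score-the-deviation, act-on-a-rule-set loop can be sketched in a few lines. This is purely illustrative: the baseline figures, thresholds and actions below are hypothetical, and a production tool would use far richer behavioural features than a simple event count.

```python
from statistics import mean, stdev

# Hypothetical baseline: events per hour observed for a user during normal activity
baseline = [12, 9, 14, 11, 10, 13, 12, 11]

def anomaly_score(observed: float, history: list) -> float:
    """Distance from the learned norm, in standard deviations (a z-score)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma if sigma else 0.0

def respond(score: float) -> str:
    """Predetermined rule set mapping the score to an automated action."""
    if score > 4:
        return "block access"
    if score > 2:
        return "flag for security analyst"
    return "no action"

print(respond(anomaly_score(55, baseline)))  # far outside the norm -> block access
print(respond(anomaly_score(12, baseline)))  # normal behaviour -> no action
```

The point of the rule set is exactly the division of labour described above: routine deviations are handled automatically, and only the cases that cross a threshold reach a human analyst.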

The key benefit of machine learning – and, ultimately, full AI cyber security solutions – is that data can be processed and analysed far more quickly than with many traditional tools. This means that breach detection times can be reduced significantly, in turn minimising the potential disruption a breach could cause.

It also means that information security team members can prioritise their work much more effectively. They may be alerted when an incident meets certain rule sets, but the rest of the time they can focus on more rewarding work.

AI and Identity and Access Management

Of particular interest to our team at Burning Tree is AI’s application in Identity and Access Management. We firmly believe that IAM should be at the very heart of a company’s cyber security and data protection strategies, at least in part to protect businesses from insider actors, but also to safeguard them from human error and from breaches brought about through social engineering and phishing attacks.

However, IAM does present some problems for organisations. While the ‘least privilege access’ strategy – whereby users are given access only to the minimum resources needed to perform their role – is best practice, it can be difficult both to implement and to manage. Hence the need for IAM tools and expert support.

One particular challenge arises when credentials are shared with the wrong people. That might be a user sharing logins with a colleague or with an external actor. In the first case, a user may want one-time access to a system, or may have forgotten their password and request access via a colleague’s account, for non-malicious reasons. However, as when sharing credentials with external parties, it could be a deliberate attempt by a malicious user to access data and systems for which they don’t have the right privileges.

Here, AI could help. Instead of checking a given user’s identity against predefined credentials, dynamic authentication tools using visual or aural cues could be employed. AI solutions could go beyond biometrics and genuinely learn what the user looks like, what they sound like and how they behave.

This application also has the potential to increase real-time security after a user has logged in. Is the person using the system the same person who logged in? Have they left their desk while someone else downloads files?
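One way to picture such an in-session check is to compare ongoing behaviour against a profile captured at login. The sketch below is a deliberately simplified assumption of how this might work, using average keystroke interval as a stand-in behavioural signal; the function name, feature and tolerance are all hypothetical.

```python
# Illustrative continuous-authentication check: compare in-session behaviour
# (here, average keystroke interval in milliseconds) against a profile
# captured when the user logged in. All names and thresholds are hypothetical.

def same_user(login_profile: dict, session_sample: dict,
              tolerance: float = 0.25) -> bool:
    """Return False (i.e. re-authenticate) if behaviour drifts beyond tolerance."""
    baseline = login_profile["keystroke_interval_ms"]
    observed = session_sample["keystroke_interval_ms"]
    return abs(observed - baseline) / baseline <= tolerance

profile = {"keystroke_interval_ms": 180.0}                   # learned at login
print(same_user(profile, {"keystroke_interval_ms": 190.0}))  # within tolerance
print(same_user(profile, {"keystroke_interval_ms": 320.0}))  # possible user switch
```

A real system would of course fuse many signals (camera, microphone, mouse dynamics) rather than a single metric, but the principle is the same: the session is continuously scored against the identity that authenticated.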

The scope for AI could go beyond monitoring user activity on the system. As well as using visual and aural cues, users could also be assessed on other factors, such as their social media profiles. Have they recently started engaging with competitors online, following company pages and making connections with people in those organisations? AI could then determine whether their behaviour, such as downloading certain files, might suggest a risk scenario for the host business. Perhaps they’re looking for a new job, or planning to sell data to a competitor?

We’re not quite there yet with AI, but it’s certainly a hot topic in IAM circles at the present time.

Richard Menear is CEO of Burning Tree
