Cyber security and AI: nothing to fear?

Written by Joseph Poppy on 21/09/2018

Artificial intelligence and its cousin machine learning are terms that keep cropping up in almost every industry at the moment: sometimes as buzzwords, sometimes attached to real innovation, but always with the outlook that both AI and ML are going to be employed a lot more across a whole spectrum of businesses.

Most of us accept that AI is going to play a role in our lives in the near future, but reports show that we also fear companies will take it too far. Some people have legitimate concerns about giving smart computers too much control over our lives. Some people fear that the use of AI will lead to a widespread loss of jobs and subsequently economic turmoil. Some people, it seems, have watched the Terminator films.


The debate

Ultimately, the real impact will be determined by how this technology is used rather than the technology itself. Whilst I’m sure we’re all in agreement that a network of unstoppable killer robots is an example of ‘too far’, there are certain areas and industries in which these technologies will be a veritable boon. That’s right, boon.

The debate over AI and machine learning will no doubt continue for some time yet, covering topics such as morality, society, human rights and beyond. There is a lot of uncertainty behind it and it’s a debate that we refuse to be drawn into (largely because we simply don’t have the time, it is a Friday after all). However, if there’s one industry where AI and machine learning can provide numerous benefits, it’s cyber security.

Before we get into that, let’s define some terms:


AI vs machine learning

Often, AI and machine learning are used interchangeably, but they are in fact slightly different terms. Keeping things simple (which isn’t easy with a topic like AI), AI is the branch of computer science focussing on building intelligent machines that are capable of performing tasks that mimic human thought processes. Machine learning is effectively a subset of AI and is what enables computers to act (and react) without being explicitly programmed. It could be said that machine learning is ultimately what will allow AI machines to actually function.

Machine learning is (basically) an alternative to programming every little detail. It is easier just to give a computer a number of smart algorithms and then grant it access to data sources (such as the internet, but more often private data sources) where it can work everything else out for itself and alter algorithms accordingly. Clever.
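To make that idea concrete, here's a deliberately tiny sketch of "learning from data" instead of programming every rule by hand. Everything here (the feature, the training data, the helper name) is invented for illustration: rather than hard-coding a spam threshold, we let the computer derive one from labelled examples.

```python
# Toy illustration: learn a decision threshold from labelled examples
# instead of programming it explicitly. The single feature here could be,
# say, the number of links in an email. All data is made up.

def learn_threshold(examples):
    """examples: list of (feature_value, is_spam) pairs.
    Returns the midpoint between the highest legitimate value and the
    lowest spam value - the simplest possible 'learned' decision rule."""
    legit = [x for x, spam in examples if not spam]
    spam = [x for x, spam in examples if spam]
    return (max(legit) + min(spam)) / 2

training_data = [(1, False), (2, False), (3, False), (8, True), (9, True)]
threshold = learn_threshold(training_data)
print(threshold)      # midpoint between 3 and 8 -> 5.5
print(7 > threshold)  # a new message with 7 links gets flagged: True
```

Feed it different training data and the rule changes by itself, with no reprogramming: that, in miniature, is the point.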


It's already here... sort of

Basic machine learning has already crept into our daily lives. Google uses it in a number of ways to enhance user experience. For example, if I type ‘tom’ into a fresh browser, I see this:


[Image: Google search results for ‘tom’]


Standard stuff. Now, if I search for soup, as I often do:


[Image: Google search results for ‘soup’]


I see some delicious soup. Now, if I go back and type in tom:


[Image: Google search results for ‘tom’ again, now showing soup-related suggestions]


Google has become aware of my love of soup. It has remembered that my last search was about soup, decided it’s highly likely I’m still on the subject, and tailored its results to me accordingly. Google notices patterns in my searches (and the searches of others) and makes a quick decision as to how to proceed.

Picking up on patterns and making decisions based on them lies at the very heart of machine learning. It’s also largely how the human brain works. This is just one example of the different ways machine learning is already making its way into our lives.


What about cyber security?

Machine learning is already working its way into cyber security. We’ve been pioneering our own machine learning module in our SIEM platform for a while, for example. To take a different example, ML could be used to detect and separate spam and phishing emails from legitimate mail. In such instances, security software which incorporates machine learning ‘learns’ the patterns and trends in the standard spam mail and also takes the sender, the destination and the geographic location of both into account. With this information, the software can make a judgement call as to whether it’s genuine or not. And that’s the point: unlike static spam filters, it’s making a judgement call and learning as it goes.

For example, someone in a customer service department emailing someone in the IT department would likely be a regular occurrence. However, Dave the marketing intern is unlikely to be sending multiple messages a day to Sharon the CEO. After monitoring the flow of emails, AI and machine learning could well work this out for itself and when Dave’s email is compromised, and a malicious email is suddenly sent to the CEO, it could be blocked straight away.
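As a sketch of how that judgement call might work under the hood, here's a minimal, hypothetical version of the idea: learn which sender-to-recipient flows are normal by counting them, then flag any flow that falls outside the learned norm. The names, traffic volumes, and threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical sketch: learn normal email flows from history, then flag
# flows we've rarely (or never) seen before. All data is invented.

def learn_flows(history):
    """Count how often each (sender, recipient) pair occurs."""
    return Counter(history)

def is_anomalous(flows, sender, recipient, min_seen=3):
    """A flow is suspicious if it has rarely (or never) been seen before."""
    return flows[(sender, recipient)] < min_seen

history = [("support", "it")] * 40 + [("dave.intern", "marketing")] * 12
flows = learn_flows(history)

print(is_anomalous(flows, "support", "it"))       # False - routine traffic
print(is_anomalous(flows, "dave.intern", "ceo"))  # True - never seen before
```

A real system would weigh many more signals (time of day, attachments, geography), but the principle is the same: the baseline is learned, not programmed.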

The interesting stuff, however, happens when ML works with SIEM platforms. A SIEM is all about seeing patterns in data and deciding on what is normal and, more critically, what falls beyond the norm. There’s rarely a substitute for human insight and ingenuity, so SOC analysts work tirelessly, analysing and investigating suspicious log files in order to work out which ones relate to actual security events and which ones are false positives. By incorporating AI to scan and sort the initial logs, and then having machine learning elements analyse them and make a judgement, a SIEM can automatically separate the false positives from the genuine threats. This of course makes the process a lot faster, seeing as machines can sort through data far more quickly than a human can.
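One simple way to picture that triage step is scoring each event against a learned baseline: everyday noise gets filed as a likely false positive, while genuine outliers are escalated to an analyst. The sketch below is illustrative only; the cut-off, the metric (failed logins per hour), and the data are assumptions, not how any particular SIEM implements it.

```python
import statistics

# Illustrative sketch: score a log event against a learned baseline.
# Events close to normal are filed as likely false positives; extreme
# outliers are escalated to a human analyst. Thresholds are invented.

def triage(baseline, observed, z_cutoff=3.0):
    """Return 'escalate' if the observed count is far above the baseline,
    otherwise 'likely false positive'."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (observed - mean) / stdev
    return "escalate" if z > z_cutoff else "likely false positive"

failed_logins_per_hour = [4, 6, 5, 7, 5, 6, 4, 5]  # a normal week
print(triage(failed_logins_per_hour, 6))   # likely false positive
print(triage(failed_logins_per_hour, 60))  # escalate
```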

To go a step further, machine learning can even take action against what it deems to be a threat and communicate as such to the end user. In a perfect world, AI and machine learning could automate practically every part of a SIEM. But, as should be obvious to anyone who’s ever tried to use the self-checkout machine at a supermarket, we do not live in a perfect world.


People are people

As previously mentioned, when it comes to the rise of the machines, one legitimate fear is that they will swallow up jobs otherwise meant for people. It happened in the industrial revolution, it happened in the mid-to-late 20th century with robots in factories, and it’ll happen again with AI. In terms of cyber security, however, history might play out differently.

Whilst AI and machine learning are very clever and work in a similar way to a human brain, they are not a human brain. We will very probably always need human brains (preferably still attached to a human body too, but who knows what the future holds there). AI is coming on apace, but machines lack the ability to think creatively and, whilst they can analyse patterns and group certain concepts together, they’re not infallible.

To take an example from a real-world use of ML and human insight, a smart machine may fail to pick up a threat if a malicious party is making use of IP spoofing. If someone tries the same attack multiple times, but with each attempt masks their IP to make it seem as though they were coming from a different location, then machine learning and AI alone might struggle to connect the attempts and (assuming there’s no history of malicious activity associated with the IPs) simply flag each one as a false positive. A human will have the initiative to look further into the data and find patterns that aren’t immediately obvious. The way we see it at the moment, ML will free up security analysts’ time to focus on threat hunting. The machines can sort out the obvious and the false positives, whilst the humans stand vigil against the more sophisticated and creative attacks.
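The analyst's trick in that scenario can be sketched in a few lines: ignore the (spoofed) source IP and correlate the attempts on something the attacker can't easily vary, such as the request payload. The events below are entirely invented for illustration.

```python
from collections import defaultdict

# Hypothetical sketch: group attack attempts by payload rather than by
# source IP, so spoofed addresses collapse into one campaign. Invented data.

events = [
    {"ip": "203.0.113.5",  "payload": "' OR 1=1 --"},
    {"ip": "198.51.100.7", "payload": "' OR 1=1 --"},
    {"ip": "192.0.2.99",   "payload": "' OR 1=1 --"},
]

by_payload = defaultdict(list)
for event in events:
    by_payload[event["payload"]].append(event["ip"])

# Three "different" attackers turn out to be one:
for payload, ips in by_payload.items():
    print(payload, "seen from", len(ips), "IPs")  # seen from 3 IPs
```

Spotting which feature to pivot on is exactly the kind of creative leap that, for now, still takes a human.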


I for one welcome our new robot overlords

In terms of cyber security, there really isn’t anything to fear from AI or machine learning. Quite the opposite, in fact, and our own machine learning enhancements to our SIEM service will continue to lead the way. It’s also unavoidable in the long run. Hackers will definitely be looking at this technology, so it's imperative that the good guys mature the technology first. So trust it or not, it looks all but inevitable that over the next 5 years or so, AI is going to become a very big thing indeed.


