The cyber threats caused by non-existent people
Written by Joseph Poppy, Communications Executive
The security implications of the non-existent
We already have issues with bots. Social media is crawling with them. Some argue they have been used to influence elections by disseminating fake news and drumming up support for certain figures. Bots can pop up and engage with users, perhaps even sending them tantalising links via instant messaging. Usually, they’re easy to spot. But imagine a world where a bot has its own profile page full of convincing GAN-generated photos. It suddenly looks far more plausible, and far harder for moderators to spot and remove. Malicious actors could theoretically use GANs to create faces for hundreds of bots, each spreading false information to its heart’s content.
There’s also the risk that, if convincing but fake people start to proliferate, we could enter a bizarre world where real people have their existence called into question. We spoke about bots before, and how CAPTCHAs and reCAPTCHAs tend to be the web’s go-to test for weeding them out. However, machine learning models can already solve many of these challenges, so the same broad technology could easily be used to circumvent such defences.
Suddenly, our CGI CEO doesn’t look all that unbelievable. An extreme case of whaling seems a not-too-distant reality. What’s to stop cyber criminals contacting a CFO over video conferencing with a convincing AI-generated copy of the CEO (built from publicly available images and videos of talks and the like) to authorise a payment?
How would this CGI CEO respond to questions? Well, with yet more AI: a more advanced form of the assistant that famously managed to book a haircut over the phone, for example.
This all sounds a bit tin foil hat
Of course, everything here should be taken with a pinch of salt and a few shakes of oregano too. The main takeaway is that, in the wrong hands, AI and machine learning can be dangerous tools. Fortunately, like all tools, they can also be used for good. The same algorithms can be (and often are) used to help detect malware. Whilst this may not be as fun as putting Steve Buscemi’s head on things, it is one of the more practical applications of the technology. Similar algorithms could help with fraud prevention.
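To make that concrete, here’s a minimal, hypothetical sketch of machine-learning-based malware classification in Python using scikit-learn. The three features and all of the data are synthetic stand-ins invented for illustration; real detection pipelines extract hundreds of signals from file headers, imports, byte patterns and sandbox behaviour.

```python
# Illustrative sketch only: a classifier trained on made-up static file
# features. Feature choices and distributions are assumptions, not a
# real malware dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features per file: [size_kb, entropy, imported_dll_count]
benign = np.column_stack([
    rng.normal(500, 150, 500),  # typical file sizes
    rng.normal(5.0, 0.8, 500),  # moderate entropy
    rng.normal(20, 5, 500),     # normal import counts
])
malicious = np.column_stack([
    rng.normal(300, 100, 500),
    rng.normal(7.5, 0.5, 500),  # packed/encrypted payloads skew high
    rng.normal(5, 3, 500),      # packers often strip visible imports
])

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point isn’t the specific model: it’s that once malicious samples leave statistical fingerprints, the same learning techniques that power the fakes can be trained to catch them.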
The same technology can, in theory, be used to vet and weed out the very bots it helps create. Effectively implemented in a managed SIEM system, machine learning can detect probing and incoming threats and raise alerts in a more organised fashion, allowing SOC analysts to prioritise their investigations. At Bulletproof, we have integrated machine learning algorithms into our SOC and are already seeing positive results. When used correctly, these are powerful automated tools that can help keep businesses secure. It’s an almost unavoidable fact that the more people use the technology for bad, the more we’ll have to use it for good.
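As a loose illustration of the SIEM idea, the sketch below uses an unsupervised anomaly detector to score connection-log features and flag a port-scan-like burst. The features, thresholds and data are assumptions made up for the example; they do not describe Bulletproof’s actual pipeline.

```python
# Illustrative sketch only: anomaly scoring over invented connection-log
# features, the kind of signal a SIEM could surface for SOC triage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic features per source IP:
# [requests_per_min, distinct_ports_touched, failed_logins]
normal_traffic = np.column_stack([
    rng.normal(10, 3, 1000),
    rng.normal(2, 1, 1000),
    rng.normal(0.5, 0.5, 1000),
])
# A port-scan-like burst: high request rate, many ports, failed logins
probe = np.array([[120.0, 60.0, 15.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

events = np.vstack([normal_traffic[:5], probe])
preds = model.predict(events)          # -1 = anomaly, +1 = inlier
scores = model.score_samples(events)   # lower = more anomalous
for features, pred, score in zip(events, preds, scores):
    flag = "ALERT" if pred == -1 else "ok"
    print(f"{flag:5s} score={score:.3f} features={np.round(features, 1)}")
```

Ranking events by anomaly score like this is one simple way alerts could be ordered for analysts, so the noisiest probes rise to the top of the queue.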
Trusted research to inform your 2022 strategy
For more unique insights into the threats your business is facing in 2022, and guidance on strengthening your security posture, download the Bulletproof Annual Cyber Security Report today.