Bot or not: AI-generated people on the web

Joseph Poppy
Security Blogger
15th March 2019

AI face generators for made-up people

Sort of. In a discussion earlier this year at Bulletproof, someone casually mentioned ‘thispersondoesnotexist.com’. It’s a fairly harmless experiment in which AI randomly generates an image of a person who does not exist, thus solving the mystery of the name. This has since prevented me from sleeping at night, not least because I have turned up on it more than once.

It doesn’t exactly have a 100% success rate, but I have definitely fallen in love with several of the non-existent people and imagined a life in a non-existent house with semi-existent children running around. Some are so convincing that the door of doubt blew ajar and I considered the possibility that the site simply has a database of stock imagery of people who do in fact exist. You should rarely take things at face value, especially faces.

However, a little research turns up reputable sources that seem to confirm it is indeed real, or at least that the technology behind it is. So, sleep will not come easy tonight.

thispersondoesnotexist.com generates a new image every time you refresh the page. Give it a go!

Generative Adversarial Networks (GANs)

The technology behind thispersondoesnotexist.com is known as Generative Adversarial Networks or, because that sounds vaguely threatening, GANs. It has been around for a while, having been discussed as early as 2014. How GANs work is incredibly complicated, but a dumbed-down explanation would be: the sort used to create these images consists of two algorithms, one generative and one discriminative.

Discriminative algorithms analyse input data and try to predict a label based on what they’ve been given. For example, they are often used to take the content of an email and, based on its features, assign it the label spam or not spam. Generative algorithms can be said to do the opposite: given a label, they attempt to guess the features.
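To make that distinction concrete, here is a minimal sketch of a discriminative algorithm: a classifier that learns to label emails as spam or not spam. The tiny dataset, features and labels are invented purely for illustration.

```python
# A minimal sketch of a discriminative algorithm: given an email's
# features, predict a label (spam or not spam). The dataset here is
# purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Claim your free prize now",
    "Meeting moved to 3pm, see agenda attached",
    "You have won a million pounds, click here",
    "Quarterly report draft for review",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Turn email text into numeric features (word counts)
vectoriser = CountVectorizer()
features = vectoriser.fit_transform(emails)

# Fit the classifier: features in, label out
classifier = LogisticRegression()
classifier.fit(features, labels)

test = vectoriser.transform(["Click here for your free prize"])
print("spam" if classifier.predict(test)[0] else "not spam")
```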

In the case of image creation, the generative algorithm churns out a number of images, whilst the discriminative algorithm decides which are fake, presumably after it has already been fed a number of images known to be ‘real’.

So, in the case of thispersondoesnotexist.com, the generative algorithm creates a lot of random faces, whereas the discriminative algorithm vets them for authenticity. The more ‘realistic’ ones get through, and the generative algorithm learns from what succeeded, becoming more consistent over time. The tech is by no means flawless: for about every ten scarily convincing pictures, you get one that looks as though it’s been pulled directly from the nightmares of the severely deranged.
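For the curious, here is a toy version of that adversarial loop. thispersondoesnotexist.com is built on NVIDIA’s far more sophisticated StyleGAN; this sketch merely shows the two algorithms competing, with the generator learning to mimic a simple one-dimensional distribution rather than faces.

```python
# A toy GAN: the generator learns to mimic samples from a normal
# distribution (mean 4.0) while the discriminator learns to tell
# real samples from the generator's fakes.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(3000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # the 'real' data
    fake = generator(torch.randn(64, 8))     # the generator's attempts

    # Train the discriminator: label real as 1, fake as 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator say 'real'
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The generator's output mean should have drifted towards 4.0
print(generator(torch.randn(1000, 8)).mean().item())
```

The key design choice is that each network’s loss is defined by the other’s output: the discriminator improves by catching fakes, and the generator improves by fooling the improved discriminator.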


CAPTCHA the bot

We already have issues with bots. Social media is crawling with them, and some argue they’ve been used to influence elections by disseminating fake news and drumming up support for certain figures. Bots could pop up and engage with users, maybe even send them tantalising links via instant messaging. Usually, they’re easy to spot. But imagine a world where each bot has its own profile page full of convincing images: they become that much more believable, and that much harder for moderators to spot and remove. Malicious actors could theoretically use GANs to create faces for hundreds of bots to spread false information to their hearts’ content.

There’s also the risk that, if we do see a rise of convincing but false people, we could enter a bizarre world where real people have their existence called into question. We spoke about bots before, and how CAPTCHAs and reCAPTCHAs tend to be the web’s go-to tests for weeding out bots. However, this sort of machine learning could easily be used to circumvent these kinds of defences.

In 2017, researchers used AI to generate a fake Barack Obama speech using deepfake technology.

CGI CEOs & the rise of the deepfake

During a team meeting, we got to discussing hypotheticals. At its most absurd point, our beloved MD suggested that one day we could see extreme cases of CFO/CEO fraud. Using CGI CEOs, criminals could take part in video calls and authorise the transfer of payments or demand certain files get installed on the company network. We all had a good chuckle at this nonsense and agreed it was good to laugh. But then, the whole world changed when we saw Steve Buscemi’s face on Jennifer Lawrence’s body discussing Real Housewives. There’s just no telling what lies around the corner.

So-called ‘deepfake’ videos have been around for a couple of years but have recently become much harder to detect, and it’s inevitable that this kind of technology will be used for more nefarious deeds. Deepfake videos could be used to damage people’s reputations and discredit them. They could be used to disseminate false information via the mouths of supposedly trustworthy people. They could also be used for blackmail. There’s a certain element of deniability too: rather than ‘fake news’, ‘deepfake’ will be the get-out clause of choice.

A presidential speech from Barack Obama... or is it?

Suddenly, our CGI CEO doesn’t look all that unbelievable. An extreme case of whaling seems a not-too-distant reality. What’s to stop cyber criminals from joining a video conference as a convincing AI-generated copy of the CEO (based on publicly available images and videos from talks and the like) and authorising a payment?

How would the CGI CEO respond to questions? With yet more AI, such as a more advanced form of the assistant that famously managed to book a haircut appointment over the phone.


You could have already been detected by AI.

Is AI dangerous?

Of course, everything here should be taken with a pinch of salt and a few shakes of oregano too. The main takeaway is that, in the wrong hands, AI and machine learning can be dangerous tools. Fortunately, like all tools, they can also be used for good. The same algorithms can be (and often are) used to help detect malware. Whilst this may not be as fun as putting Steve Buscemi’s head on things, it is one of the more practical applications of the technology. Similar algorithms could help with fraud prevention.

The same technology can, in theory, be used to vet and weed out the very bots it helped create. Effectively implemented in a managed SIEM system, complex AI can detect potential probing and incoming threats and raise alerts in a more organised fashion, allowing SOC analysts to prioritise their investigations. At Bulletproof we have integrated machine learning algorithms into our SOC and are already seeing positive results. When used correctly, these are powerful automated tools that can help keep businesses secure. It’s almost an unavoidable fact that the more people use the technology for bad, the more we’ll have to use it for good.
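As a flavour of what that can look like, here is a hedged sketch of anomaly detection over connection logs using scikit-learn’s IsolationForest. The features (bytes sent, failed logins, hour of day) and all the numbers are invented for illustration; this is not a description of our actual SOC tooling.

```python
# A sketch of ML-assisted alerting: flag unusual activity in
# connection logs so analysts can prioritise investigations.
# Feature choices and values here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline traffic: [bytes_sent_kb, failed_logins, hour_of_day]
normal = np.column_stack([
    rng.normal(500, 100, 1000),   # typical transfer sizes
    rng.poisson(0.2, 1000),       # occasional failed logins
    rng.integers(8, 18, 1000),    # office hours
])

# Fit the detector on baseline behaviour
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A 3am burst of data with repeated failed logins should stand out
suspicious = np.array([[9000, 12, 3]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```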
