Feb 21






What’s real? Practical Guide to Identifying Generative AI Content on the Internet

[Image: two close-up shots of AI-generated human faces, side by side]

It is 2024, and the idea of computers generating images of people who have never existed, from nothing but text prompts, no longer sounds as strange as it would have even five years ago. Ladies and gentlemen, welcome to the Artificial Intelligence (AI) revolution: the age of AI-generated content.

What is Generative Artificial Intelligence (AI), you may ask? The UN defines it as a type of artificial intelligence in which a computer program creates content that appears to have been made by humans.

It is interesting to note how far technology has come: from the days of EDGE and 2G networks, when we waited seconds or minutes to download files we would now consider tiny, to today's 5G, where we can type a prompt into a field and get AI-generated results that could easily pass as real to a significant number of people. There has never been an easier time to create and spread false information this quickly, easily, and convincingly; a full disinformation campaign can be assembled in minutes.

Before the age of the Internet and social media, information flow was largely one-way: organised media houses were tasked with researching and confirming the authenticity of content before sharing it publicly. This is no longer the case. Research suggests that a growing share of people get their news from social media, and the trend is set to continue. This is not a net negative for AI, however, as there are many positive use cases for technology like this. Whether you create Generative AI content for fun, use it to tackle more serious tasks, or simply want to be able to tell when something you are looking at might be AI-generated, here are a few pointers for identifying Generative AI content.


For ease, these pointers are broken down into three main categories: Source, Content, and Context.


Source (where is this information coming from?):

Establishing the source is critical, as it might be the easiest way to identify content that has been generated or manipulated by AI.

How can this play out on social media?
If someone shares a questionable image to your feed or timeline on social media, one of the quickest things you can do is check the profile of the account that originally posted it.

This can save you a lot of time as some profiles clearly indicate they are satire or that they post AI-generated content.

However, if there is no clear indication of that, other helpful steps include:

Cross-checking content against other sources: this might involve reverse-searching an image found on social media on platforms such as Yandex, Google Images, or Bing Visual Search, or typing the text of a particular article into search engines such as Google, Bing, or DuckDuckGo, to name a few.

This simple search exercise can point you to the original source of the content, which can provide further context on why or how it was created, and whether the source has built enough credibility to be trusted.
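The reverse-search step above can be partially automated. The sketch below builds reverse-image-search links from an image URL using only the Python standard library. Note the URL patterns are assumptions based on commonly observed public endpoints for these services, not documented APIs, and may change without notice.

```python
from urllib.parse import quote

# Commonly observed reverse-image-search URL patterns (assumptions,
# not documented APIs; the services may change them at any time).
REVERSE_SEARCH_TEMPLATES = {
    "Google Images": "https://www.google.com/searchbyimage?image_url={url}",
    "Bing Visual Search": "https://www.bing.com/images/search?view=detailv2&iss=sbi&q=imgurl:{url}",
    "Yandex": "https://yandex.com/images/search?rpt=imageview&url={url}",
}

def reverse_search_links(image_url: str) -> dict:
    """Build clickable reverse-search links for a publicly hosted image URL."""
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {name: tpl.format(url=encoded)
            for name, tpl in REVERSE_SEARCH_TEMPLATES.items()}

links = reverse_search_links("https://example.com/suspicious-photo.jpg")
for engine, link in links.items():
    print(engine, "->", link)
```

Opening each generated link in a browser shows where else the image appears online, which is often enough to trace it back to its original source.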

Content (the elements of the reviewed content):

Things that have long differentiated humans from machines include intuition, self-consciousness, and the ability to feel and accumulate experience.

Manipulated images are not exactly new in the media space. Photoshop and other image-editing software have existed for years and have been used extensively. The difference, however, was that producing anything convincing at scale required a certain level of expertise with such tools.

With the emergence of Generative AI, however, resources such as Midjourney, DALL·E, ChatGPT, and Deep AI make it unbelievably easy to generate images from text prompts, and even companies like Adobe (Photoshop) and Canva are beginning to incorporate these capabilities into their software for professionals to take advantage of.

Regardless of how easy creation has become, some problems remain consistent across most AI-generated content, specifically images containing human subjects.

Skin and texture: AI-generated images tend to have a smooth, textureless feel. Humans and other natural elements have texture, and we know what that should look like. If an image looks airbrushed, with an almost cartoonish feel to it, it is very likely you are looking at an image generated by Artificial Intelligence.

Facial features: from the days of manual editing in software like Photoshop (and I would know, having used Photoshop for over five years), facial features have been difficult to match correctly, and some AI models struggle with the same problem.
Important places to look are the eyes, teeth, and facial hair. Hair that dissolves into the skin or into surrounding elements? A lack of facial symmetry? Warped, inconsistent, or eerie facial features can be a telling sign that you are dealing with an AI-generated image.

Hands and fingers: for some reason, many Generative AI models struggle with hands too; often they are placed wrongly, or have more or fewer than five fingers on either or both hands. This can be another telling sign that you are looking at Generative AI content.

Accessories: this is probably the most fun for me. A great cue I have found helpful for identifying AI-generated images is inconsistent accessories. As humans, we understand that people often pair accessories such as jewellery with other items; in my observation, this concept has been a struggle for AI. Good examples are earrings that don't match on both sides, nose piercings and glasses that melt into the skin, and rings that differ across the fingers.

Background: AI struggles to generate convincing backgrounds outside an image's main focus, so be sure to check them carefully. Weird backgrounds, distorted people, elements floating in mid-air, or even a suspiciously plain, uneventful background are all cues to be wary.

Generally, the less detailed an image is, the more attention it deserves.
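The visual cues above can be treated as a manual checklist. The sketch below expresses them as a simple weighted score; the cue names and weights are my own illustrative choices, not an established scoring standard, and a human reviewer supplies the observations.

```python
# Illustrative weights for the visual cues discussed above (assumptions,
# not a standard); higher-weight cues are stronger tells in my experience.
CUES = {
    "airbrushed_skin": 2,        # smooth, textureless, cartoon-like skin
    "warped_facial_features": 3, # asymmetry, melting hair, odd eyes or teeth
    "wrong_finger_count": 3,     # misplaced hands, more or fewer than 5 fingers
    "mismatched_accessories": 2, # unpaired earrings, glasses melting into skin
    "distorted_background": 2,   # floating elements, warped bystanders
}

def suspicion_score(observed: set) -> int:
    """Sum the weights of the cues a reviewer observed in an image."""
    return sum(weight for cue, weight in CUES.items() if cue in observed)

score = suspicion_score({"airbrushed_skin", "wrong_finger_count"})
print(score)  # 2 + 3 = 5
```

The point of the exercise is not the number itself but the discipline: checking each cue deliberately rather than relying on a first impression.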

Context (what is the content saying?):
The idea behind this is simple but requires a little understanding of the context surrounding the content. For instance:

Timeline: one easy way to tell whether an image was AI-generated is through the timeline. An image of Michael Jackson hosting the 2020 Oscars is impossible, and so is an image of Elon Musk in the 1800s, as neither existed in those timelines.

An image of someone holding an iPhone in 2001 is impossible, as the first iPhone was released in 2007.

Context of the content: it is physically impossible to sit on a cloud, or to turn your head a full 180 degrees. Likewise, an image of the Pope in street fashion should raise suspicion, as there is a good chance it is AI-generated content.
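The timeline check above amounts to a simple lookup: an object cannot appear in a photo dated before that object existed. A minimal sketch, where the release years listed are well-known facts and the lookup table is my own illustrative construction:

```python
# Release years of a few recognisable objects (well-known facts;
# extend this illustrative table as needed).
RELEASE_YEARS = {
    "iPhone": 2007,
    "Tesla Model S": 2012,
}

def timeline_plausible(item: str, claimed_year: int) -> bool:
    """Return False if the photo's claimed year predates the item's release."""
    released = RELEASE_YEARS.get(item)
    if released is None:
        return True  # unknown item: cannot rule the image out on timeline alone
    return claimed_year >= released

print(timeline_plausible("iPhone", 2001))  # False: the first iPhone shipped in 2007
```

The same reasoning applies to people: an event dated outside a person's lifetime is an immediate red flag, no forensic tools required.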

To conclude, as I stated earlier, Generative AI content is going to get significantly better as we progress, and it will become even harder to make out what is real and what is not. As we continue to call for policies to guide the development of this technology, it is also important to arm ourselves with knowledge of how these technologies work, so we are better equipped to operate in an ever-changing world.

Having stated the above, I find it prudent to share some tools that can help sharpen our skills in identifying AI-generated content:


The writer is a Communications Officer at Paradigm Initiative
