It is 2024, and the idea of computers generating images of people who have never existed, from nothing but text prompts, does not sound as strange as it would have sounded as recently as five years ago. Ladies and gentlemen, welcome to the Artificial Intelligence (AI) revolution, the age of AI-generated content.
You may ask what Generative Artificial Intelligence (AI) is. The UN defines it as a type of AI in which a computer program produces content that appears to have been made by humans.
It is interesting how far we have come with technology. In the days of 2G and EDGE networks, we had to wait seconds or even minutes to download files of sizes we would now consider insignificant; today, on 5G, we can type prompts into a field and get AI-generated results that could easily pass as real to a significant number of people. There has never been a time when untrue information could be created and spread as quickly, easily and convincingly as now. It takes just a few minutes to build a full disinformation campaign.
Before the age of the Internet and social media, information flow was largely one-way: organised media houses compiled content and were tasked with researching and confirming its authenticity before sharing it publicly. That is no longer the case. Research data suggests that a growing percentage of people get their news from social media, a trend set to continue. This is not to say AI is a net negative, as there are many positive use cases for technology such as this. Whether you create Generative AI content for fun recreational purposes, use it to tackle more important tasks, or simply want to be able to tell when something you are looking at might be AI-generated, here are a few pointers for identifying Generative AI content.
For ease, they have been broken down into three main categories: Source, Content, and Context.
Source (where is this information coming from?):
Establishing the source is critical, as it might be the easiest way to identify AI-generated or manipulated content.
How can this play out on social media?
If someone shares an image that might be questionable on your social media feed or timeline, one of the quickest things you can do is check the profile of the individual who originally posted the content.
This can save you a lot of time, as some profiles indicate that they are satire accounts or that they post AI-generated content.
However, if there is no clear indication of that, other helpful steps might include:
Cross-checking content against other sources: This might involve reverse-searching an image found on social media on platforms such as Yandex, Google Images, or Bing Visual Search, or typing the text of a particular article into search engines such as Google, Bing, and DuckDuckGo, to name a few.
This simple search exercise can point you to the source of the content, which can provide further context on why or how it was created and if the source has built enough credibility to be trusted.
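For readers comfortable with a little code, one very crude form of cross-checking can even be automated. The sketch below is purely illustrative and is not what reverse image search engines do: it only detects byte-for-byte identical copies of a downloaded file by comparing cryptographic hashes, which can confirm that two downloads are the exact same file. Any resize, crop, or re-encode will change the hash, so a true reverse image search is still needed for everything else.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Two downloaded files are exact duplicates only if their digests match.
# Caveat: any re-encoding or edit defeats this check entirely, so treat it
# as a sanity check, not a substitute for reverse image search.
```

A matching digest tells you two files are identical copies; a mismatch tells you nothing about whether one is a manipulated version of the other, which is why the manual checks described in this article remain essential.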
Content (the elements of the reviewed content):
Among the things that have differentiated machines from humans for the longest time are intuition, self-consciousness, and the ability to feel and gather experiences.
Manipulated images are not exactly new in the media space. Photoshop and other image-editing software have existed for years and have been used extensively. The difference, however, is that one needs a certain level of expertise with such tools to produce anything that would convince people of its authenticity on a large scale.
With the emergence of Generative AI, however, many tools, such as Midjourney, DALL·E, ChatGPT, and Deep AI, have made it unbelievably easy to generate images from text prompts. Even companies like Adobe (Photoshop) and Canva are beginning to incorporate these capabilities into their software for professionals.
Regardless of how easy such content is to create, some problems have remained consistent across most AI-generated content, specifically, in this context, images that contain human characters.
Skin and Texture: AI-generated images tend to have a smooth, textureless feel. Humans and other natural elements have texture, and we know what that should look like; if you are looking at an image that appears airbrushed, with an almost cartoonish feel, it is very likely you are looking at an image generated by Artificial Intelligence.
Facial Features: From the days of manual editing in software like Photoshop (and I would know, having used Photoshop for over five years), facial features have been a struggle to get right, and some AI models suffer from the same problem.
Important places to look are the eyes, teeth, and facial hair. Hair that dissolves into the skin or into surrounding elements? A lack of facial symmetry? Warped, inconsistent or eerie facial features can be a telltale sign that you are dealing with an AI-generated image.
Hands and fingers: Many Generative AI models struggle with hands, too. Hands are often placed wrongly, or have more or fewer than five fingers on one or both of them. This can be another sign that you might be looking at Generative AI content.
Accessories: This is probably the most fun for me. One approach I have found genuinely helpful in identifying AI-generated images is spotting inconsistent accessories. As humans, we understand that people often pair their accessories, such as jewellery, with other items; in my observation, this concept has been a struggle for AI models. Good examples are earrings that don't match on both sides, nose piercings and glasses that melt into the skin, and rings that don't match across the fingers.
Background: AI struggles to generate background content outside an image's main focus, so be sure to check the background. Weird scenery, distorted people, elements simply floating, or even a suspiciously clean, uneventful background are all cues to be wary.
Generally, the less detailed an image is, the more attention one might want to pay to it.
Context (what is the content saying?):
The idea behind this is simple but requires a little understanding of the context that might surround the content, for instance:
Timeline: One easy way to tell whether an image is AI-generated is through its timeline. An image of Michael Jackson hosting the 2020 Oscars is technically impossible, and so is an image of Elon Musk in the 1800s, as neither existed in those timelines.
An image of someone holding an iPhone in 2001 is equally impossible, as the first iPhone was not released until 2007.
Context of the content: Sitting on a ball of cloud is physically impossible, and so is turning your head a full 180 degrees. An image of the Pope in street fashion wear should raise suspicion, as there is a good chance it is AI-generated content.
To conclude, as I stated earlier, it is important to point out that Generative AI content will only get better as the technology progresses, and it will become even harder to distinguish what is real from what is not. Alongside calling for policies to guide the development of this technology, we must arm ourselves with knowledge of how these technologies work, to better equip us to operate in an ever-changing world.
Having stated the above, I find it prudent to share some tools that can help sharpen our skills in identifying AI-generated content;
The writer is a Communications Officer at Paradigm Initiative.