This is how Adobe's new AI functions work to keep fake images from slipping through

Adobe has presented a large number of new developments in the field of generative artificial intelligence applied to images.

Oliver Thansan
Monday, 16 October 2023, 10:35

Adobe unveiled its new generative AI developments at Adobe Max, the annual event at which the makers of Photoshop also explained how they want to help us recognize the manipulated or AI-created images that circulate on the Internet.

Few companies know as much about image manipulation as Adobe. The company is currently rolling artificial intelligence into its Creative Cloud tools, Photoshop included, but it is also keen to contain the confusion that indiscriminate use of these functions is creating.

Content Credentials is the name of the set of functions that reveals how much an image has been manipulated and whether it was created by artificial intelligence. The technology works by attaching cryptographically signed metadata to the file, so that any later tampering with that record can be detected.
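To see why such signed metadata is tamper-evident, here is a minimal conceptual sketch in Python. It is not Adobe's implementation (Content Credentials follows the C2PA specification, with certificate chains rather than a bare key pair), but the underlying principle of a digital signature over a provenance record is the same, and the manifest fields below are hypothetical.

```python
# Conceptual sketch only: illustrates why signed provenance metadata
# is tamper-evident. Requires the third-party 'cryptography' package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signing tool (an editor or a camera) holds a private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical provenance record attached to an image file.
manifest = {
    "claim_generator": "ExampleEditor/1.0",
    "actions": ["opened", "adjusted colors", "exported"],
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# A verifier recomputes the check; the intact record passes.
public_key.verify(signature, payload)

# Any later edit to the record invalidates the signature.
tampered = payload.replace(b"adjusted colors", b"no edits made")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Metadata was altered after signing")
```

Note that the metadata itself stays readable by anyone; the signature only guarantees that changes cannot go unnoticed, which is why "signed" describes it better than "encrypted".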

The initiative is not Adobe's alone, although Adobe provides the technology. It is part of the Content Authenticity Initiative, an organization backed by numerous technology companies involved in image creation, such as Nikon and Leica, as well as news organizations such as Agencia EFE.

Content Credentials aims to put truthful news images and fictional ones each in their rightful place. Images created with Adobe's new Firefly 2 artificial intelligence engine (now available to test) already embed Content Credentials information, for example, and may optionally display a small logo that identifies them.

If this kind of virtual watermark catches on, we could stop playing forensic detective to determine whether an image is genuine. Content Credentials, still in beta, includes a tool for spotting forgeries. It is called Verify, and for now it is simply a website that reads an image's metadata when you upload the file, checking whether similar versions of the image exist and how the original was generated.

Tagging content with Content Credentials does not appear to undermine privacy: the metadata names the creation tool, not its author. We will have to see how this evolves. For now, creators who do want that protection can opt to embed authorship information in their works.

Remember that while anonymity is key to spreading false information, it is also crucial for certain images to circulate at all. To check how personal data is handled, we generated an image with Adobe's Firefly 2 (after registering on the site with our personal credentials).

When we ran the result through the online verification tool, the identity of the person who created the image never appeared, but we could see that it had been generated with artificial intelligence and that Adobe vouched for that record.
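As a rough illustration of the kind of conclusion Verify draws, the sketch below inspects an already-parsed provenance record for an AI-generation claim. The field names are hypothetical, loosely modeled on C2PA-style assertions (the IPTC digitalSourceType vocabulary does include a trainedAlgorithmicMedia value for AI-generated media), and are not the exact schema Adobe uses.

```python
# Illustrative only: hypothetical field names loosely modeled on
# C2PA-style assertions, not the exact schema Verify reads.
def summarize_provenance(manifest: dict) -> str:
    """Report how an image claims to have been produced."""
    generator = manifest.get("claim_generator", "an unknown tool")
    source = manifest.get("digitalSourceType", "")
    if source == "trainedAlgorithmicMedia":
        # No author field is consulted: the record names the tool,
        # not the person, matching what we observed with Verify.
        return f"Generated by AI using {generator}"
    return f"Captured or edited with {generator}"

# What a record for our Firefly test image might look like.
record = {
    "claim_generator": "Adobe Firefly",
    "digitalSourceType": "trainedAlgorithmicMedia",
}
print(summarize_provenance(record))  # Generated by AI using Adobe Firefly
```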

In reality, though, all of this is of little use unless it is supported by Meta, owner of WhatsApp and Instagram; Google, which owns YouTube; OpenAI, creator of DALL-E and ChatGPT; TikTok; and Elon Musk's social network X.

If tomorrow all these companies agreed to warn us whenever an image has been significantly altered or created with artificial intelligence, it would be a giant step against misinformation.

As always with a new technology, doubts remain to be resolved, such as those around privacy, or whether social networks could use the metadata that Content Credentials embeds for advertising purposes. In any case, Content Credentials is promising enough to be worth following on its journey.