Google is currently testing a digital watermarking approach aimed at detecting images produced by artificial intelligence (AI) as part of its efforts to combat misinformation. The initiative, developed by Google's AI division DeepMind, centers on a system called SynthID, which is designed to identify AI-generated images. The mechanism works by subtly modifying individual pixels within an image, making the watermark imperceptible to the human eye while keeping it detectable by computer algorithms.
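As a rough illustration of the underlying idea, the sketch below hides a bit pattern in the least significant bit of each pixel's red channel. To be clear, this is classic steganography, not SynthID's actual algorithm, which DeepMind has not published, and unlike SynthID it would not survive heavy editing; the function names and scheme are purely illustrative. It only demonstrates how pixel-level changes can be invisible to the eye yet trivially recoverable by software.

```python
# A minimal, hypothetical sketch of the general idea. SynthID's real
# algorithm is not public, so this uses least-significant-bit embedding
# purely to illustrate "invisible to eyes, detectable by code".
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide bits in the least significant bit of the red channel."""
    marked = image.copy()                # uint8 array, shape (H, W, 3)
    pixels = marked.reshape(-1, 3)       # view: one row per pixel
    n = min(len(bits), len(pixels))
    pixels[:n, 0] = (pixels[:n, 0] & 0xFE) | bits[:n]
    return marked

def extract(image: np.ndarray, n: int) -> np.ndarray:
    """Read the first n hidden bits back out of the red channel."""
    return image.reshape(-1, 3)[:n, 0] & 1

rng = np.random.default_rng(42)
original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
payload = rng.integers(0, 2, 96, dtype=np.uint8)

watermarked = embed(original, payload)
assert np.array_equal(extract(watermarked, 96), payload)

# Each pixel shifts by at most 1/255 in one channel: imperceptible.
print(int(np.abs(watermarked.astype(int) - original.astype(int)).max()))  # 1
```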
DeepMind acknowledges, however, that the technique is not entirely foolproof against extreme image manipulation. As the technology evolves, distinguishing genuine images from AI-generated ones is becoming increasingly challenging.
AI-powered image generation has become mainstream, as demonstrated by the popularity of tools like Midjourney, which boasts over 14.5 million users. Such tools allow users to create images quickly by inputting simple text instructions, raising questions about copyright and ownership in the process.
Imagen
Google also has its own image generator, Imagen, and DeepMind's watermarking method will initially apply only to images produced with that tool. Unlike traditional watermarks, which can easily be edited or cropped out, the DeepMind approach embeds an almost imperceptible watermark directly into the image itself.
Unlike the hashing techniques tech companies often use to create digital fingerprints of known videos, where even a small edit produces an entirely different fingerprint, this watermarking method remains effective even if the image is cropped, resized, or otherwise altered. That resilience is particularly valuable for identifying AI-generated images that may pass through many rounds of modification.
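To see why simple fingerprinting is so brittle, consider the toy comparison below: changing a single bit of a file yields a completely different SHA-256 digest, so a hash-matching system loses track of content the moment it is cropped or re-encoded. The file contents here are, of course, stand-ins.

```python
# Why fingerprint hashing breaks under edits: flipping one bit of a
# stand-in "media file" yields a completely different SHA-256 digest,
# so an altered copy no longer matches any known fingerprint.
import hashlib

original = bytes(range(256)) * 1000     # stand-in for a media file
edited = bytearray(original)
edited[0] ^= 1                          # a one-bit "edit"

print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(bytes(edited)).hexdigest()[:16])
# The two digests share nothing, so hash matching fails after even
# trivial alterations; a robust watermark is instead woven into the
# pixels themselves and is designed to survive such changes.
```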
Pushmeet Kohli, the head of research at DeepMind, emphasized that the watermarking technique is subtle enough that humans won't perceive the changes. The system has launched as an experiment, and its robustness will be gauged through real-world use and feedback.
In July, Google was among several prominent AI companies that voluntarily committed to ensuring the responsible development and utilization of AI. This included implementing watermarks to allow people to recognize AI-created images. While this move aligns with these commitments, Claire Leibowicz from the Partnership on AI noted the need for more coordination and standardization within the industry to effectively address the issue.
Other major tech players such as Microsoft and Amazon have also pledged to use watermarks for some AI-generated content. Additionally, Meta has revealed plans to apply watermarks to videos generated by its Make-A-Video tool to enhance transparency over AI-generated content.
China, for its part, has taken a harder line, banning AI-generated images that lack watermarks, a sign of the growing recognition that transparency around AI-generated content matters. The rule has prompted companies such as Alibaba to apply watermarks to images produced with its text-to-image tool, Tongyi Wanxiang, to ensure accountability and authenticity.