Here’s how Google will start helping you figure out which images are AI-generated


Google wants to be more transparent about whether a piece of content was created or modified using generative AI (GAI) tools. After joining the Coalition for Content Provenance and Authenticity (C2PA) earlier this year as a steering committee member, Google has now announced how it will begin implementing the group’s digital watermarking standard.

Alongside companies including Amazon, Meta, and OpenAI, Google has spent the past few months working on improving the technology used to watermark content created or modified by GAI. The company says it helped develop the latest version of Content Credentials, a technical standard for attaching metadata that describes how an asset was created, as well as what was modified and how. Google says this version of Content Credentials is more secure and tamper-resistant thanks to stricter validation methods.

In the coming months, Google will begin incorporating the current version of Content Credentials into some of its core products, starting with Google Search. If an image in the results contains C2PA metadata, you should be able to find out whether GAI was used to create or modify it. The same information will also be available in Google Photos, Lens, and Circle to Search.

The company is also exploring ways to use C2PA to let YouTube viewers know when footage was captured with a real camera. Expect to hear more about that later this year.

Google also plans to use C2PA metadata in its advertising systems. It didn’t reveal many details about its plans there, other than to say it will use “C2PA signals to inform how we apply key policies” and will do so gradually.

Of course, the effectiveness of all this depends on whether companies such as camera makers and GAI tool developers actually adopt the C2PA watermarking system. The approach also won’t stop someone from stripping an image’s metadata, which could make it harder for systems like Google’s to detect any GAI use.
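To make “stripping metadata” concrete: Content Credentials are ordinary embedded metadata, so a file either carries them or it doesn’t. Below is a minimal Python sketch (a hypothetical heuristic, not Google’s implementation and not a validator) that checks whether a JPEG appears to contain a C2PA manifest by looking for the APP11/JUMBF segments the standard uses. Real verification of signatures and hashes requires a proper C2PA tool or library.

```python
# Rough heuristic: does a JPEG appear to carry a C2PA (Content Credentials)
# manifest? C2PA embeds its manifest store in JPEG APP11 (0xFFEB) segments
# as JUMBF boxes labelled "c2pa". This only detects presence of those bytes;
# it does not verify anything. Re-encoding or scrubbing the file typically
# removes this metadata entirely.

import sys


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2  # markers with no length field
            continue
        if marker == 0xDA:  # start of scan: no more metadata segments
            break
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        segment = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with a C2PA JUMBF label
            return True
        i += 2 + length
    return False


if __name__ == "__main__":
    for name in sys.argv[1:]:
        status = "C2PA metadata found" if has_c2pa_manifest(name) else "no C2PA metadata"
        print(f"{name}: {status}")
```

An image exported from a tool that writes Content Credentials should report a manifest; re-save that same image through an editor or service that discards metadata and the check comes back empty, which is exactly the detection gap described above.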

Meanwhile, Meta has spent this year changing how it discloses whether images on Facebook, Instagram, and Threads were created with GAI. The company just made its labels less visible on images edited with AI tools. Starting this week, if C2PA metadata shows that someone (for example) used Photoshop’s GAI tools to modify an original image, the “AI info” label will no longer appear front and center. Instead, it’s buried in the post’s menu.


