
To insulate reality from Gen AI, Adobe’s content authenticity web app takes shape


“You can think of this as a nutrition label for digital content. Just the way people should be able to look at a package of food, see what’s in it and make up their own minds about whether they want to purchase it or not, we feel the same way about digital content” — this is how Andy Parsons, senior director of content authenticity at Adobe, describes the importance of Content Credentials in preventing misattribution and alleviating potential ownership issues for audio, video and photo content in an era of generative AI. Days ahead of its annual Adobe Max showcase, Adobe is building on efforts to bring the industry together on Content Credentials implementation, with a web app that gives creators and users some of that control.

For representational purposes only. (Image from Adobe)

Different stages of roll-out begin early next year for the Adobe Content Authenticity Web App, first for creators and then for everyone else. The idea is to allow individuals to apply Content Credentials to individual pieces of work or to a batch of work, be it images, audio or video files. Creators and users will have control over the information included in these attached Content Credentials, such as their name, website and social media accounts. Linking social media accounts should also make it easier to attach these credentials to content that has already been published.
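To give a sense of what “attaching” a Content Credential means in data terms, here is a minimal sketch in Python. It is an illustration only: the field names and structure below are invented for clarity, and are not the actual C2PA manifest schema or Adobe’s API.

```python
# Illustrative only: a simplified stand-in for the kind of information a
# Content Credential can carry alongside a file. The real C2PA manifest is a
# signed, structured container; the fields here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentCredential:
    creator_name: str                                          # name the creator chooses to attach
    website: str = ""                                          # optional website
    social_accounts: List[str] = field(default_factory=list)   # linked social media accounts
    ai_tools_used: List[str] = field(default_factory=list)     # e.g. generative tools involved
    edits: List[str] = field(default_factory=list)             # edit history entries

credential = ContentCredential(
    creator_name="Example Creator",
    website="https://example.com",
    social_accounts=["https://instagram.com/example"],
    ai_tools_used=["Adobe Firefly"],
    edits=["cropped", "colour graded"],
)
print(credential)
```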


Could this signal the end of the phenomenon dubbed ‘Taylor Swift AI’, after a wave of deepfakes swamped social media platforms earlier this year? Taylor Swift, Drake and other celebrities were not best impressed with their deepfake videos making the rounds on platforms including X. The timing may have been coincidental, but it was around then that social media giant Meta, as well as AI company OpenAI, confirmed they would begin to include labels or watermarks on images generated using AI.

Quite how well that is working out for Meta remains to be seen, because our Instagram timelines and explore suggestions are filled with what are clearly AI-generated images, with no visible labelling to suggest they are.

Adobe’s web app is in addition to Content Credentials support that’s already baked into Adobe’s apps, including Photoshop, Lightroom and Firefly. Parsons points out that the Adobe Content Authenticity Web App is and will remain free to use. “Content Credentials is totally open source. Adobe isn’t selling these tools and the UI that you see is free and available for anyone to use,” he says, in a briefing of which HT was a part.

“This will be based on an international standard that’s been officially vetted by the International Standards Organisation. This will be a global standard that’ll be free and realised in open-source code that my team has worked hard on with contributions from other companies and individuals,” Parsons adds.

There is, of course, the matter of Content Credentials being implemented not just by the generative tools or devices in use, but also by the platforms where a creator or user posts their content to showcase to the world. It is a complex chain, and camera makers including Leica, Sony, Fujifilm and Nikon are also part of the Adobe-led Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA) — their new cameras (and some earlier ones too, via software updates) will be able to bake a creator or user’s name and other details into the photographs they capture. These details will be carried forward as is, irrespective of how or where the photos are shared.


The Leica M11-P, announced late last year, is an example of this newer generation of cameras. Sony, too, has confirmed its 2024 camera line-up, including the new Alpha 9 III, will integrate this functionality from the outset, while its Alpha 1 and Alpha 7S III models will add it with firmware updates.

“With OpenAI joining, anything that’s produced in DALL-E 3 and Sora, the video generation model, will carry Content Credentials,” Parsons points out. Adobe has found success in getting tech companies to integrate this functionality. Since the turn of the year, Microsoft has confirmed it is adding this to all AI-generated images that emerge from Bing Image Creator. Chipmaker Qualcomm is adding this provenance tech at the chip level, something it has already embarked on with the Snapdragon 8 Gen 3 chips for flagship Android phones from late 2023.

However, not all platforms implement an ability to integrate or view the origins of any content that’s posted. To that effect, Adobe has also announced a new Content Authenticity extension for Google Chrome (and, by extension, any Chromium-based browser, including Microsoft Edge, Brave and Vivaldi, at some point). This will let users inspect the origin and ownership details of any content visible on a webpage at the time, and will also detail any edits, generative additions or other changes made over time.

Earlier this year, Google also announced it is joining the Adobe-led C2PA to support the push for Content Credentials with generated content. It joins Microsoft, Amazon, Intel, Truepic (Qualcomm’s partner for the chip-level implementation) and Meta, to name a few.


“Adobe is committed to responsible innovation centered on the needs and interests of creators. Adobe Content Authenticity is a powerful new web application that helps creators protect and get recognition for their work. By offering creators a simple, free and easy way to attach Content Credentials to what they create, we are helping them preserve the integrity of their work, while enabling a new era of transparency and trust online,” says Scott Belsky, Chief Strategy Officer and Executive Vice President, Design & Emerging Products at Adobe.

The company insists that Content Credentials, once attached to any audio, video or photo, are durable; in other words, they cannot be altered or stripped by someone else. This continues to rely on a three-pronged approach: secure metadata, a watermark undetectable to the human eye, and digital fingerprinting. Even if the content is cropped, a screenshot is shared, or other such methods are used by nefarious actors, one or more of these ways of retaining information will remain available for verification. The web app will carry on using this technique too.
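The fingerprinting leg of that approach is easiest to picture with a toy example. The sketch below is a minimal illustration, assuming a simple 8x8 average-hash fingerprint and a hypothetical credentials registry; the actual fingerprinting behind Content Credentials is far more robust and is not public in this form.

```python
# Toy illustration of the "digital fingerprinting" fallback: if metadata and
# watermark are stripped (say, via a crop or screenshot), a perceptual hash of
# the image can still be matched against a registry of known credentials.
from PIL import Image

def fingerprint(img: Image.Image) -> str:
    """Return a 64-bit average-hash fingerprint as a bit string."""
    small = img.convert("L").resize((8, 8))   # grayscale 8x8 thumbnail
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming(a: str, b: str) -> int:
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical registry mapping fingerprints to previously attached credentials.
registry = {}
original = Image.radial_gradient("L")                      # stand-in for a real photo
registry[fingerprint(original)] = {"creator": "Example Creator"}

# Simulate a re-share that strips metadata: crop the image.
cropped = original.crop((10, 10, 246, 246))
query = fingerprint(cropped)

# Look up the closest known fingerprint.
best = min(registry, key=lambda fp: hamming(fp, query))
print("bit difference:", hamming(best, query))
print("matched credentials:", registry[best])
```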

On the question of data that may or may not be used from the Adobe Content Authenticity Web App, the company confirms that all users will have an option to opt out of having their data used for AI training. If this is enabled, a user’s already uploaded and subsequently uploaded content will not be part of Adobe’s AI training models, including the Firefly generative AI.


Adobe’s push for an industry-wide consensus on what it had called ‘nutrition labels’ gained momentum in the summer of 2023. “As generative AI becomes more prevalent in everyday life, consumers deserve to know whether content was generated or edited by AI. Firefly content is trained on a unique dataset and automatically tagged with Content Credentials, bringing critical trust and transparency to digital content. Content Credentials are a free, open-source technology that serve as a digital “nutrition label” and can show information such as name, date and the tools used to create an image, as well as any edits made to that image,” the company had said at the time.

Last month, Adobe announced the Firefly Video Model, a generative tool for creating videos from text prompts. Yet, it hasn’t confirmed a release timeline, except to say the model will enter beta testing when it is “commercially safe”. Earlier in the year, OpenAI gave the world its first glimpse of Sora, with early demos showing off realistic generations that, at first glance, would be difficult to identify as AI-generated. OpenAI hasn’t shared a release timeline either.
