
Discerning users can counter Gen AI’s potential for misuse: Adobe’s Andy Parsons

By Vishal Mathur

May 28, 2024 01:20 PM IST

As elections take precedence in many countries, Adobe India’s latest survey illustrates fears about misinformation across various networks, as the tech giant pushes a case for Content Credentials

Can generative artificial intelligence (AI), and the increasing dangers posed by realistic deepfakes, sway the elections in a country? By the time 2024, dubbed the “election year”, draws to a close, voters in 64 countries, including India, the US, Pakistan, the UK, South Africa, Russia, and many in Europe, will have exercised their franchise; some of those elections have already concluded, with more to come. Generative AI, with powers of realistic content generation that make it difficult for a viewer or reader to tell the real and authentic from manipulation and misinformation, has the power to influence election results in India too, Adobe says.

Content Credentials can show important information such as the creator’s identity, creation date, and any AI use. (Official image.)

“As generative AI becomes more powerful, it’s becoming increasingly important for consumers to discern how online content has been created,” Andy Parsons, Senior Director for the Content Authenticity Initiative at Adobe, tells HT. The findings of the tech giant’s latest Future of Trust Study for India, which interviewed 2,056 Indian residents, albeit a small sample size, point to a desire among consumers for tools that can verify the trustworthiness of digital content. There is an “urgent need for proactive measures to address misinformation’s potential impact on election integrity in the country”, says the report.



It was in 2019 that Adobe co-founded the Content Authenticity Initiative, a first step in the global tech space towards distinguishing content created using generative AI, as well as misinformation and morphed content, from authentic media and posts. The first move was to build “Content Credentials” into anything that emerges from AI, an approach that has gained traction over the past year as generative AI tools and chatbots have become easily accessible.

“Like a ‘nutrition label’ for digital content, Content Credentials can show important information such as the creator’s identity, creation date, and any AI use, empowering consumers with crucial context to assess the trustworthiness of the content,” says Parsons. A lot of progress has been made since, and as of April, the Adobe-led Coalition for Content Provenance and Authenticity (C2PA), which includes Google, Microsoft, Intel, Leica, Nikon, Sony and Amazon Web Services, had begun to further push the case for Content Credentials with all generated content. Camera makers Sony and Nikon, for example, are looking to integrate Content Credentials into all photos captured using their cameras, to distinguish them from generated content.

On the possibility of AI-generated content being consumed on social media networks without viewers realising it isn’t real or genuine, Adobe’s report indicates that around 86% of Indians believe harmful deepfakes will impact future elections, if they haven’t already. A large number of respondents opine that candidates should be prohibited from using generative AI tools to create campaign messaging. However, it may be a monumental challenge to implement that on the ground, and at scale, in a country such as India.


This February in Munich, big tech and AI companies including Adobe, Google, Microsoft, Meta, TikTok, OpenAI, IBM, Amazon and Anthropic, signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections – the focus here was also on the availability of metadata for viewers and consumers to be able to get information about when, where and how the content they see came about. “Once we know we can’t trust what we see and hear digitally, we won’t trust anything, even if it is true. And that has never been as important as in 2024, with more than four billion voters expected to participate in more than 40 elections around the world,” Adobe’s executive vice-president, general counsel and chief trust officer Dana Rao had said at the time.

The latest Future of Trust Study for India also points to a growing concern among social media users, with lingering suspicion that the content they consume online could be altered to fuel misinformation. Around 81% of respondents say “it is becoming difficult to verify whether the content they are consuming online is trustworthy.”

Adobe is pushing for Content Credentials as the nutrition label that tells consumers and viewers whether a piece of content was generated or created. These labels, or nuggets of information, need support from the different platforms on which content is likely to be shared, such as social media networks and messaging apps. Thorn and All Tech is Human, both non-profits, had in April managed to bring AI companies to the table in an attempt to create new AI safety standards, particularly for the safety of children. At this time, 11 tech companies have signed up, including Meta, Google, Anthropic, Microsoft, OpenAI, Stability.AI and Mistral AI.


“We are excited about the potential for generative AI to enhance creativity and productivity, but it is also a transformational technology that demands thoughtful consideration of its societal impact. Our Future of Trust Study underscores the urgent need for media literacy campaigns to educate consumers about the dangers of deepfakes and to empower them with tools to discern fact from fiction,” says Prativa Mohapatra, vice-president and managing director at Adobe India, pushing the case for adoption of Content Credentials.

Content Credentials are a combination of cryptographic metadata and watermarking, which ensures this information remains securely linked to the content irrespective of where and how it is shared. This includes important information such as the creator’s name, the date an image was created or generated, details of the tools used for creation, and any edits made since.
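The underlying idea can be illustrated with a simplified sketch. This is not the actual C2PA specification, which uses public-key signatures and a formally defined manifest format; the field names, the HMAC-based signing, and the demo key below are assumptions chosen purely to show how binding a provenance manifest to the content makes tampering with either one detectable:

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a real signing credential held by the creator's tool.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(content: bytes, manifest: dict) -> str:
    """Bind a provenance manifest to the content bytes with an HMAC signature."""
    payload = hashlib.sha256(content).hexdigest() + json.dumps(manifest, sort_keys=True)
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_manifest(content: bytes, manifest: dict, signature: str) -> bool:
    """Recompute the signature; a mismatch means the content or manifest changed."""
    return hmac.compare_digest(sign_manifest(content, manifest), signature)

image_bytes = b"\x89PNG...demo image data"
manifest = {
    "creator": "Jane Photographer",        # hypothetical creator name
    "created": "2024-05-28",
    "tool": "ExampleCam 1.0",              # hypothetical capture tool
    "ai_generated": False,
    "edits": ["crop", "exposure +0.3"],
}

sig = sign_manifest(image_bytes, manifest)
print(verify_manifest(image_bytes, manifest, sig))   # prints True: untampered
manifest["ai_generated"] = True                      # someone alters the label
print(verify_manifest(image_bytes, manifest, sig))   # prints False: detected
```

The design point the sketch captures is that the signature covers both the pixels and the label, so neither the “AI use” flag nor the edit history can be quietly rewritten after the fact; the real standard additionally uses watermarking so the link survives re-sharing and format conversion.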



  • ABOUT THE AUTHOR

    Vishal Mathur is Technology Editor for Hindustan Times. When not making sense of technology, he often searches for an elusive analog space in a digital world.

