Meta will label images generated by artificial intelligence from OpenAI, Google and other companies

Meta Platforms will begin detecting and tagging images produced by other companies’ AI services in the coming months, using a set of invisible tags embedded in the files, its chief policy executive said Tuesday.

Meta will apply the labels to any tagged content posted on its Facebook, Instagram and Threads services, in an effort to signal to users that the images, which in many cases resemble real photos, are in fact digital creations, the company's president of global affairs, Nick Clegg, wrote in a blog post.

The company already labels any content created using its AI tools.

Once the new system is up and running, Meta will do the same for images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock and Alphabet’s Google, Clegg said.

The announcement offers an early glimpse into an emerging system of standards that technology companies are developing to mitigate the potential harms associated with generative AI techniques, which can spit out fake but realistic-looking content in response to simple prompts.

This approach is based on a model created over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.

Audio and video classification technology is still under development

In an interview, Clegg told Reuters that he feels confident in the companies’ ability to reliably classify AI-generated images at this stage, but he said tools for identifying audio and video content are more complex and still under development.

“Although the technology is not yet fully mature, especially when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow suit,” Clegg said.

WATCH | How AI-generated videos can be used as a weapon in elections:

Can you spot a deepfake? How does artificial intelligence threaten elections?

Artificial intelligence-generated fake videos are being used in online scams and gags, but what happens when they are created to interfere in elections? CBC's Katherine Toney explains how the technology is being weaponized and looks at whether Canada is ready for a deepfake election.

In the meantime, Meta will begin requiring people to label their own altered audio and video content and will apply penalties if they fail to do so, he added. Clegg did not describe the penalties.

He added that there is currently no viable mechanism for labeling written text generated by AI tools like ChatGPT.

“That ship has sailed,” Clegg said.

A Meta spokesperson declined to say whether the company would apply labels to AI content shared on encrypted messaging service WhatsApp.

Meta’s independent oversight board on Monday criticized the company’s policy on misleadingly manipulated videos, saying it was too narrow and that the content should be labeled rather than removed. Clegg said he generally agreed with those criticisms.

He said the board was right that Meta’s current policy is “simply not fit for purpose in an environment where you’re going to have more synthetic content and mixed content than before.”

He cited the new labeling partnership as evidence that Meta was indeed moving in the direction suggested by the board.

