
Meta to start labeling AI-generated images from companies like OpenAI, Google

Photo: Reuters

Meta Platforms will begin detecting and labelling images generated by other companies' artificial intelligence services in the coming months, using a set of invisible markers built into the files, its top policy executive said on Tuesday.
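
The "invisible markers" in question are machine-readable provenance signals, such as IPTC metadata tags and invisible watermarks, that image-generation tools embed in the file itself. As a rough, hypothetical illustration only, and not a description of Meta's actual detection system, the sketch below scans a file's raw bytes for the IPTC "trainedAlgorithmicMedia" digital-source-type string, one such metadata marker; production detectors parse the embedded metadata properly rather than string-matching.

    # Illustrative sketch only: check whether an image file's bytes contain the
    # IPTC "trainedAlgorithmicMedia" digital-source-type string, one form of
    # embedded AI-provenance metadata. This is not Meta's detection pipeline.
    from pathlib import Path

    AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated media

    def has_ai_provenance_tag(image_path: str) -> bool:
        """Return True if the raw file bytes contain the AI digital-source-type tag."""
        return AI_SOURCE_TYPE in Path(image_path).read_bytes()

    if __name__ == "__main__":
        import sys
        for path in sys.argv[1:]:
            found = has_ai_provenance_tag(path)
            print(f"{path}: {'AI-provenance tag found' if found else 'no tag found'}")

A check like this is trivially defeated by stripping a file's metadata, which is one reason the industry effort described here also involves watermarks embedded in the image pixels themselves.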

Meta will apply the labels to any content carrying the markers that is posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images - which in many cases resemble real photos - are actually digital creations, the company's president of global affairs, Nick Clegg, wrote in a blog post.

The company already labels any content generated using its own AI tools.

Once the new system is up and running, Meta will do the same for images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet's Google, Clegg said.

The announcement provides an early glimpse into an emerging system of standards technology companies are developing to mitigate the potential harms associated with generative AI technologies, which can spit out fake but realistic-seeming content in response to simple prompts.

The approach builds on a template established over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.

In an interview, Clegg told Reuters he felt confident the companies could label AI-generated images reliably at this point, but said tools to mark audio and video content were more complicated and still being developed.

"Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow," Clegg said.

In the interim, he added, Meta would start requiring people to label their own altered audio and video content and would apply penalties if they failed to do so. Clegg did not describe the penalties.

He added there was currently no viable mechanism to label written text generated by AI tools like ChatGPT.

"That ship has sailed," Clegg said.

A Meta spokesman declined to say whether the company would apply labels to generative AI content shared on its encrypted messaging service WhatsApp.

Meta's independent oversight board on Monday rebuked the company's policy on misleadingly doctored videos, saying it was too narrow and the content should be labelled rather than removed.

Clegg said he broadly agreed with those critiques.

The board was right, he said, that Meta's existing policy "is just simply not fit for purpose in an environment where you're going to have way more synthetic content and hybrid content than before."

He cited the new labelling partnership as evidence that Meta was already moving in the direction the board had proposed.
