Meta Is Labeling More AI-Built Video, Audio and Images

Meta — owner of Facebook, Instagram, WhatsApp and Threads — said Friday that it plans to expand its efforts to label content that’s been manipulated or generated by artificial intelligence. The move builds on earlier work and puts Meta’s platforms among a growing number of services, including YouTube and TikTok, responding to the issue.

Meta said it will label video, audio and images as “Made with AI” either when its systems detect AI involvement, or when creators disclose it during an upload. The company also said it may add a more prominent label if the content has “a particularly high risk of materially deceiving the public on a matter of importance.”

The company said it came to its decision while balancing transparency against the need to avoid unnecessarily restricting freedom of expression online.

“This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere,” Monika Bickert, Meta’s VP of content policy, wrote in a blog post.

Read more: AI Atlas: Your Guide to Today’s Artificial Intelligence

The move marks another way the tech industry is responding to growing concerns about the pervasiveness of AI-generated content and its risk to the public. Videos generated by AI technology like OpenAI’s Sora look increasingly lifelike. And though that tool hasn’t been made widely available to the public, other AI technologies have already begun to cause public confusion and chaos. 

Earlier this year, a political consultant made mass-scale robocalls using President Joe Biden’s voice, re-created by AI, encouraging people in New Hampshire not to vote in the primary election. Experts say more AI disinformation is likely on the way, particularly with the upcoming 2024 presidential election.

Meta isn’t the only social media company working to identify AI-powered content. TikTok said last year that it will launch a tool to help creators label manipulated content, noting that it also prohibits “deepfakes” — videos, images or audio created to mislead viewers about real events or people. Meanwhile, Google-owned YouTube began requiring creators to disclose AI-manipulated videos last month, citing examples such as “realistic” likenesses of people or scenes, as well as altered footage of real events or places.

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI

Meta said it still intends to enforce its existing content rules. The company also cited a survey it conducted with more than 23,000 respondents in 13 countries, in which 82% favored labels on AI-generated content “that depicts people saying things they did not say.”

“We will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards,” Bickert said in Meta’s Friday blog post.

Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.
