According to the company, “AI-generated content is also eligible to be fact-checked.”

Meta will introduce new standards for AI-generated content on Facebook, Instagram, and Threads in the coming months, according to a company blog post published on January 6.

Content that is identified as AI-generated, either through metadata or other intentional watermarking, will be labeled. Users on Meta platforms will be able to flag unlabeled content that appears to be generated by artificial intelligence.

If any of this sounds familiar, it’s because it corresponds to Meta’s early content moderation practices. Before the era of AI-generated content, the company (then Facebook) created a user-friendly system for reporting content that violated the platform’s terms of service.

Fast forward to 2024, and Meta is again providing users with tools to flag content on its social networks, tapping into what could be the world’s largest consumer crowd-sourcing force.

This also means that creators on the company’s platforms must label their work as AI-generated whenever possible, or face consequences.

According to the blog post:

“We may impose penalties if people fail to use this disclosure and label tool when posting organic content that includes a photorealistic video or realistic-sounding audio that was digitally created or altered.”

Detecting AI-generated content

Meta claims that whenever its built-in tools are used to create AI-generated content, the content is watermarked and labeled to clearly indicate its origin. However, not all generative AI systems incorporate these safeguards.
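The detection flow described here (label automatically when provenance metadata is found, otherwise fall back to creator disclosure or user flagging) can be sketched roughly as follows. The `classify` helper and its dictionary keys are illustrative assumptions, not Meta's actual code; the provenance values are drawn from the IPTC "Digital Source Type" vocabulary that industry provenance standards use to mark AI-generated media.

```python
# Hypothetical sketch of metadata-based AI-content labeling, as the blog
# post describes it. This is NOT Meta's implementation; the function name,
# metadata keys, and return values are illustrative assumptions.

# Values from the IPTC Digital Source Type vocabulary used by provenance
# standards to mark AI-generated or AI-edited media.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",                # fully AI-generated
    "compositeWithTrainedAlgorithmicMedia",   # partly AI-generated
}

def classify(metadata: dict, creator_disclosed: bool) -> str:
    """Decide how a piece of content should be labeled.

    metadata: embedded provenance fields extracted from the file
    creator_disclosed: whether the creator used the disclosure tool
    """
    # 1. Embedded watermark/metadata detected: label automatically.
    if metadata.get("digital_source_type") in AI_SOURCE_TYPES:
        return "auto-label"
    # 2. No signal, but the creator disclosed: apply the creator's label.
    if creator_disclosed:
        return "creator-label"
    # 3. Nothing detected or disclosed: content stays unlabeled and is
    #    eligible for user flagging (and penalties if it should have
    #    been disclosed).
    return "unlabeled"
```

For example, `classify({"digital_source_type": "trainedAlgorithmicMedia"}, False)` returns `"auto-label"`, while content with no provenance signal and no disclosure returns `"unlabeled"` and would rely on user flags.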

The company says it is collaborating with other companies through consortium partnerships, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, and will continue to develop methods for detecting invisible watermarks at scale.

Unfortunately, these methods may only work on AI-generated images. According to the blog post, "industries are beginning to incorporate signals into their image generators, but not into AI tools that produce audio and video on the same scale."

According to the post, Meta is currently unable to detect AI-generated audio and video at scale, including deepfake technology.