Meta, the parent company of Facebook, is taking a firmer stance against manipulated media, introducing stricter guidelines that target deepfakes and other altered content. The move comes ahead of the upcoming US elections, a period historically vulnerable to the spread of misinformation.

The new policy focuses on two key aspects: flagging and labeling. Meta will now identify and label content that has been significantly altered using artificial intelligence (AI) tools. This includes deepfakes, realistic fabricated videos that depict people saying or doing things they never did. Additionally, content that has been manipulated in a way that could mislead users, even if it was not created with AI, will be flagged for further review.

These changes stem from concerns raised by Meta’s own Oversight Board, which previously criticized the company’s handling of manipulated media. The board argued that the existing rules were too vague and allowed potentially harmful content to remain online.

The stricter guidelines are expected to take effect in May, with AI-generated videos, images, and audio receiving “Made with AI” labels. These labels are intended to help users make informed judgments about the authenticity of the content they encounter.

While it remains to be seen exactly how these guidelines will be enforced, the move represents a significant step toward tackling the spread of misinformation on social media platforms. Deepfakes and other manipulated media pose a growing threat to public discourse, and Meta’s efforts to address the issue will be closely watched by policymakers and users alike.