Meta, the parent company of Facebook and Instagram, has tweaked its approach to labeling photos potentially modified with artificial intelligence (AI). In a shift sparked by user feedback, the company is replacing the “Made with AI” label with a more nuanced “AI info” tag.
The initial “Made with AI” label aimed to promote transparency and help users distinguish genuine photos from AI-generated content. However, it drew criticism for inaccurately tagging photos that had received only minor edits in common tools such as Adobe Photoshop. The result was confusion and frustration among photographers whose human-made work was flagged as “AI.”
Meta acknowledges that the previous label “wasn’t always aligned with people’s expectations.” The new “AI info” tag offers a broader scope. Clicking on it will provide users with additional details about how AI might have been involved in the photo. This could include anything from basic editing tools powered by AI to more extensive modifications like background removal or object manipulation.
The update reflects Meta’s ongoing effort to strike a balance between transparency and user experience. While informing users about potential AI involvement remains crucial, the new label aims to be more precise and avoid misleading classifications.
This change comes amid a growing conversation about the increasing use of AI in photo editing and the potential for deepfakes, highly realistic AI-generated images, audio, or video that can deceive viewers. By providing clearer labeling, Meta hopes to empower users to make informed decisions about the content they encounter on its platforms.
However, challenges remain. Reliably distinguishing human-edited photos from AI-modified ones is still an open problem. Additionally, educating users on the nuances of AI-powered editing tools and the potential for misuse will be crucial as these technologies evolve.