OpenAI has announced a significant upgrade to its Moderation API: a new multimodal moderation model, omni-moderation-latest, built on GPT-4o. The model supports both text and image inputs, giving developers a more comprehensive and accurate tool for content moderation.
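To make this concrete, here is a minimal sketch of a multimodal moderation call using the official openai Python SDK. The placeholder text and image URL are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single request can mix text and image inputs,
# and the model scores them together.
response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        {"type": "text", "text": "...user-generated text to screen..."},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/user-upload.png"},
        },
    ],
)

result = response.results[0]
print(result.flagged)  # True if any category crossed the model's threshold
```

Each result also carries per-category verdicts and confidence scores, which the sketch further below uses to build a simple moderation policy.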
One of the key improvements in the new model is its ability to handle non-English content more effectively, which is particularly valuable for developers serving multilingual user bases. By detecting harmful content accurately across many languages, OpenAI aims to create a safer online environment for everyone.
In addition to enhanced accuracy, the new moderation model detects harm in two new categories: illicit (advice or instructions for carrying out wrongdoing) and illicit/violent (the same, where the wrongdoing involves violence or procuring a weapon). These additional categories give developers a more granular understanding of potentially problematic content, enabling them to tailor their moderation strategies accordingly.
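As a sketch of how those per-category outputs might drive a tiered policy, the snippet below blocks one category outright and queues everything else that was flagged for human review. The routing rule is an illustrative assumption, and illicit_violent is the Python-safe field name the SDK uses for the illicit/violent category:

```python
from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",
    input="Some user-generated text to screen",
)

result = response.results[0]
scores = result.category_scores.model_dump()  # category -> score in [0, 1]

for category, flagged in result.categories.model_dump().items():
    if flagged:
        # Illustrative policy: block illicit/violent content outright,
        # send anything else the model flagged to a human review queue.
        action = "block" if category == "illicit_violent" else "review"
        print(f"{category}: score={scores[category]:.3f} -> {action}")
```

Because the scores are continuous, developers can also set their own per-category thresholds rather than relying solely on the boolean verdicts.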
Importantly, the Moderation API remains free to use. This commitment ensures that content creators and businesses of all sizes can benefit from these advanced moderation capabilities at no additional cost.
By introducing this new multimodal moderation model, OpenAI is demonstrating its ongoing commitment to developing AI tools that promote safety and inclusivity online. As the model continues to evolve, developers can expect even more robust and effective content moderation capabilities.