OpenAI, the creator of the popular chatbot ChatGPT, has released a new blog post outlining its global strategy for the 2024 elections.

Its primary goals are to increase transparency, improve access to accurate voting information, and prevent the misuse of artificial intelligence (AI). Emphasizing that protecting the integrity of elections is a collaborative effort, OpenAI wants to ensure that its AI services are “not used in a way that could undermine this process.”

The company says it cannot achieve this alone: keeping its technology from undermining the integrity of elections requires everyone’s involvement.

“We want to make sure that the design, implementation, and use of our AI systems are safe,” the company wrote. “These tools, like all new technologies, have both advantages and disadvantages.”

OpenAI claims to have a “cross-functional effort” dedicated specifically to election-related work, which will quickly investigate and address potential abuses.

These efforts include preventing “misleading deepfakes,” chatbots impersonating candidates, and scaled influence operations. One measure already in place is a guardrail on DALL·E that rejects requests to generate images of real people, including political candidates.

In August 2023, US regulators began considering rules for political deepfakes and AI-generated ads ahead of the 2024 presidential election.

OpenAI also stated that building applications for political campaigning and lobbying is currently prohibited. Meanwhile, a politician running for the United States Congress is already using an AI-powered campaign caller to reach more potential voters.

The AI developer also said that ChatGPT is continually being updated to provide accurate information drawn from real-time news reporting around the world, and that it will direct voters to official voting websites for additional information.

AI’s impact on elections has already been a hot topic, with Microsoft even publishing a report on how AI usage on social media has the potential to sway voter sentiment.

Microsoft’s Bing AI chatbot has already come under fire after European researchers found that it provided misleading information about elections.

Google has taken a particularly proactive stance on artificial intelligence and elections. In September, it made AI disclosure mandatory in political campaign ads and limited how its Bard AI and generative search tools respond to election-related queries.
