With pivotal elections approaching in 2024 across major democracies, OpenAI has outlined its strategy for safeguarding its powerful large language and image models from being weaponized.
The artificial intelligence (AI) lab behind the hugely popular generative AI products ChatGPT and DALL-E has more than 180 million users, and that figure continues to grow rapidly. The tools available in OpenAI’s GPT Store include software that could be used nefariously to influence election campaigns, and synthetic media such as AI-generated images, videos and audio can erode public trust and spread virally on social platforms.
With great power comes great responsibility, and on Monday (Jan. 15) the company outlined in a blog post how it would tackle the multitude of elections happening around the world this year.
Preventing abuse of OpenAI’s systems
A core focus is preemptively hardening AI systems against exploitation by bad actors: testing models extensively, gathering user feedback during development, and encoding guardrails directly into the foundation of the models. For DALL-E, the image generator, strict policies decline any request to generate images of real people, including political candidates.
“We work to anticipate and prevent relevant abuse—such as misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates,” wrote OpenAI.
Strict usage rules also prohibit ChatGPT applications for propaganda, voter suppression tactics, or political impersonation bots.
Snapshot of how we’re preparing for 2024’s worldwide elections:
• Working to prevent abuse, including misleading deepfakes
• Providing transparency on AI-generated content
• Improving access to authoritative voting information https://t.co/qsysYy5l0L
— OpenAI (@OpenAI) January 15, 2024
Humans brought into the fold
Here’s something you don’t read every day: humans are going to replace AI. Well, specifically, OpenAI will use them as fact-checkers, supported by new transparency features that trace an AI creation back to its origins. Digital watermarking and fingerprinting will verify DALL-E images, while news links and citations will appear more visibly within ChatGPT search responses. This expands on OpenAI’s earlier partnership with Axel Springer, which enables ChatGPT to summarize select news content from the media publisher’s outlets.
The world’s flagship AI company hopes voters will benefit directly from its collaboration with nonpartisan voting organizations such as the National Association of Secretaries of State (NASS) in the US. Furthermore, when US users ask ChatGPT practical questions about the nation’s voting process, it will surface official registration and polling details from CanIVote.org to cut through the misinformation clutter.
Few would argue against any of these measures. The reality, however, is that as long as these tools exist, bad actors will attempt to abuse them for electoral purposes.
OpenAI is at least positioning itself to respond dynamically to the challenges it faces during election cycles. Collaboration across Big Tech and with governments may be one of the only sustainable paths forward for tackling AI fakes and propaganda.
Featured Image: Unsplash