Safeguarding Generative AI Models from Spreading Misinformation with Transparently Sourced Data
The rapidly advancing capabilities of AI models require transparently sourced data inputs, both for fine-tuning and for applying guardrails during post-processing of outputs. NewsGuard for AI addresses this misinformation problem, which has undermined trust in AI.
By Veena McCoole and Elan Kane | Published on May 24, 2023
The propensity for AI to amplify harmful narratives and even infiltrate “news” websites is already well documented. NewsGuard’s own reports have found that ChatGPT-3.5 generated misinformation and false narratives 80% of the time when prompted to do so, a proportion that increased to 100% when its successor, ChatGPT-4, was tested.
As the AI industry acknowledges the threats it poses in the hands of malign actors, ethically minded leaders are seeking ways to govern the development of this nascent technology.
For example, during a May 2023 hearing before a US Senate subcommittee, Sam Altman, CEO and co-founder of OpenAI, welcomed regulation of AI, calling for independent audits, a licensing regime, and “warnings akin to nutritional labels on food.” The latter parallels NewsGuard’s Nutrition Labels for news and information sources, which can be used as post-processing guardrails and for self-regulation in AI.
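To make the guardrail idea concrete, here is a minimal sketch of how source-reliability ratings might be applied as a post-processing check on a model's output. The ratings table, domain names, threshold, and function names below are illustrative assumptions for this example only; they are not NewsGuard's actual API, schema, or scoring data.

```python
# Hypothetical sketch: flag generated answers that cite low-reliability sources.
# All ratings, domains, and thresholds here are invented for illustration.
import re

# Assumed mapping of news domains to a 0-100 reliability score.
SOURCE_RATINGS = {
    "example-reliable-news.com": 95,
    "example-dubious-site.com": 20,
}

TRUST_THRESHOLD = 60  # assumed cutoff below which a cited source is flagged


def extract_domains(text: str) -> set:
    """Pull bare domains out of any URLs cited in the model's answer."""
    return set(re.findall(r"https?://(?:www\.)?([\w.-]+)", text))


def apply_guardrail(model_output: str) -> dict:
    """Flag output that cites sources rated below the trust threshold."""
    flagged = [
        domain
        for domain in extract_domains(model_output)
        if SOURCE_RATINGS.get(domain, TRUST_THRESHOLD) < TRUST_THRESHOLD
    ]
    return {
        "output": model_output,
        "low_reliability_sources": flagged,
        "needs_review": bool(flagged),
    }


if __name__ == "__main__":
    answer = "See https://example-dubious-site.com/story for details."
    print(apply_guardrail(answer))
```

In practice, a check like this would sit after generation and before the response reaches the user, either blocking the answer, attaching a warning, or routing it for human review when a flagged source appears.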