Seven tech giants have made a “voluntary commitment” to the Biden administration that they will work to reduce the risks involved in artificial intelligence.
US President Joe Biden met with Google, Microsoft, Meta, OpenAI, Amazon, Anthropic and Inflection on July 21. They agreed to emphasize “safety, security and trust” when developing AI technologies. More specifically:
- Safety: The companies agreed to “testing the safety and capabilities of their AI systems, subjecting them to external testing, assessing their potential biological, cybersecurity, and societal risks and making the results of those assessments public.”
- Security: The companies also said they will safeguard their AI products “against cyber and insider threats” and share “best practices and standards to prevent misuse, reduce risks to society, and protect national security.”
- Trust: One of the biggest agreements secured was for these companies to make it easy for people to tell whether images are original, altered or generated by AI. They also pledged to ensure that AI doesn't promote discrimination or bias, to protect children from harm, and to use AI to tackle challenges like climate change and cancer.
The arrival of OpenAI’s ChatGPT in late 2022 was the beginning of a stampede of major tech companies releasing generative AI tools to the masses. OpenAI’s GPT-4 launched in mid-March. It’s the latest version of the large language model that powers the ChatGPT AI chatbot, which among other things is advanced enough to pass the bar exam. Chatbots, however, are prone to spitting out incorrect answers and sometimes sources that don’t exist. As adoption of these tools has exploded, their potential problems have gained renewed attention — including spreading misinformation and deepening bias and inequality.
What the AI companies are saying and doing
Meta said it welcomed the White House agreement. Earlier this month, the company launched the second generation of its AI large language model, Llama 2, making it free and open source.
“As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society,” said Nick Clegg, Meta’s president of global affairs.
The White House agreement will “create a foundation to help ensure the promise of AI stays ahead of its risks,” Brad Smith, Microsoft vice chair and president, said in a blog post.
Microsoft is a partner on Meta's Llama 2. Earlier this year it also launched an AI-powered Bing search that makes use of ChatGPT, and it's bringing more and more AI tools to Microsoft 365 and its Edge browser.
The agreement with the White House is part of OpenAI’s “ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance,” said Anna Makanju, OpenAI vice president of global affairs. “Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion.”
Amazon supports the voluntary commitments "as one of the world's leading developers and deployers of AI tools and services," Tim Doyle, Amazon spokesperson, told CNET in an emailed statement. "We are dedicated to driving innovation on behalf of our customers while also establishing and implementing the necessary safeguards to protect consumers and customers."
Anthropic said in an emailed statement that all AI companies “need to join in a race for AI safety.” The company said it will announce its plans in the coming weeks on “cybersecurity, red teaming and responsible scaling.”
“There’s a huge amount of safety work ahead. So far AI safety has been stuck in the space of ideas and meetings,” Mustafa Suleyman, co-founder and CEO of Inflection AI, wrote in a blog post Friday. “The amount of tangible progress versus hype and panic has been insufficient. At Inflection we find this both concerning and frustrating. That’s why safety is at the heart of our mission.”
The agreement "is a milestone in bringing the industry together to ensure that AI helps everyone," Kent Walker, Google's president of global affairs, said in a blog post. "These commitments will support efforts by the G7, the OECD, and national governments to maximize AI's benefits and minimize its risks."
Google, which launched its chatbot Bard in March, previously said it would watermark AI content. The company’s AI model Gemini will identify text, images and footage that have been generated by AI. It will check the metadata integrated in content to let you know what’s unaltered and what’s been created by AI.
Image software company Adobe is similarly ensuring it’s tagging AI-generated images from its Firefly AI tools with metadata indicating they’ve been created by an AI system.
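The approaches above hinge on attaching provenance metadata to content and verifying it later. As a rough illustration of that idea only, here is a minimal Python sketch that records an "AI-generated" flag alongside a file and checks it on read. The sidecar-file scheme and all field names here are hypothetical; real systems like Adobe's Content Credentials embed cryptographically signed provenance directly in the file under the C2PA standard, and Google's approach also uses imperceptible watermarks.

```python
import hashlib
import json
import os
import tempfile

def tag_ai_content(path: str, generator: str) -> str:
    """Write a sidecar JSON file marking a piece of content as AI-generated.

    Hypothetical scheme for illustration; production systems embed signed
    provenance in the file itself rather than a separate sidecar.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    meta = {"ai_generated": True, "generator": generator, "sha256": digest}
    sidecar = path + ".provenance.json"
    with open(sidecar, "w") as f:
        json.dump(meta, f)
    return sidecar

def is_ai_generated(path: str) -> bool:
    """Report whether the sidecar flags the file AND its hash still matches,
    so edits after tagging invalidate the provenance claim."""
    sidecar = path + ".provenance.json"
    if not os.path.exists(sidecar):
        return False
    with open(sidecar) as f:
        meta = json.load(f)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return bool(meta.get("ai_generated")) and meta.get("sha256") == digest

# Demo with a stand-in "image" file.
tmpdir = tempfile.mkdtemp()
img_path = os.path.join(tmpdir, "image.png")
with open(img_path, "wb") as f:
    f.write(b"fake image bytes")
tag_ai_content(img_path, "example-model-v1")
print(is_ai_generated(img_path))  # True
```

Hashing the content ties the metadata to one exact version of the file, which is why altering a tagged image (or stripping its metadata) breaks the verification, a key weakness the embedded-watermark approaches try to address.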
You can read the entire voluntary agreement between the companies and the White House here. It follows more than 1,000 people in tech, including Elon Musk, signing an open letter in March urging labs to take at least a six-month pause in AI development due to "profound risks" to society from increasingly capable AI engines. In June, OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, along with other scientists and notable figures, also signed a statement warning of the risks of AI. And in May, Microsoft released a 40-page report saying AI regulation is needed to stay ahead of potential risks and bad actors.
The Biden-Harris administration is also developing an executive order and seeking bipartisan legislation “to keep Americans safe” from AI. The US Office of Management and Budget is additionally slated to release guidelines for any federal agencies that are procuring or using AI systems.
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.