Industry leaders commit to safe, secure, and transparent development of artificial intelligence in an agreement with the Biden administration.
Jason Nelson
Leading AI companies, including OpenAI, Google, and Microsoft, have committed to developing safe, secure, and transparent AI technology, the White House said on Friday.
"Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the Biden Administration said, adding that the aim is to make the most of AI's potential and encourage the highest standards.
Additional companies committing to AI safety, the White House said, include Amazon, Anthropic, Meta, and Inflection.
“None of us can get AI right on our own,” Kent Walker, Google's President of Global Affairs, said. “We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices.”
Some of the commitments include pre-release security testing of AI systems, sharing best practices for AI safety, investing in cybersecurity and insider threat safeguards, and facilitating third-party reporting of AI system vulnerabilities.
“Policymakers around the world are considering new laws for highly capable AI systems," Anna Makanju, OpenAI VP of Global Affairs, said in a statement. "Today’s commitments contribute specific and concrete practices to that ongoing discussion."
United States lawmakers introduced a bipartisan bill in June that seeks to establish a commission on AI and address questions raised by the rapidly growing industry.
The Biden administration says it is also working with international partners, including Australia, Canada, France, Germany, India, Israel, Italy, Japan, Nigeria, the Philippines, and the UK, to establish a global framework.
"By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders," Microsoft President Brad Smith said.
Global leaders, including the United Nations Secretary-General, have already sounded the alarm on generative AI and deepfake technology and their potential misuse in conflict zones.
In May, the Biden administration met with artificial intelligence leaders to lay the groundwork for ethical AI development. The administration also announced a $140 million investment by the National Science Foundation towards AI research and development.
"These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI," the administration said.