Google, OpenAI, Microsoft and Anthropic Form Coalition for Responsible AI: Frontier Model Forum
President Biden met with seven artificial intelligence companies last week to seek voluntary safeguards for AI products and discuss future regulatory prospects.
Now, four of those companies have formed a new coalition aimed at promoting responsible AI development and establishing industry standards in the face of increasing government and societal scrutiny.
“Today, Anthropic, Google, Microsoft, and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models,” a statement on OpenAI’s website read.
The Frontier Model Forum will leverage the technical and operational expertise of its member companies for the benefit of what it calls the “entire AI ecosystem,” the statement said. The group seeks to advance technical standards, including benchmarks and evaluations, and to develop a public library of solutions.
Core objectives of the coalition include:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The Forum describes “frontier models” as large-scale machine learning models that outperform the capabilities of current top-tier models in accomplishing a diverse array of tasks.
The group plans to tackle responsible AI development with a focus on three key areas: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments.
“The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection,” it said.
Other planned actions include establishing an advisory board to guide the group’s strategy and priorities, as well as putting in place key institutional arrangements, including a charter, governance, and funding, with a working group and executive board to lead these efforts. The Forum indicated its support for existing industry efforts like the Partnership on AI and MLCommons and plans to consult with civil society and governments on “meaningful ways to collaborate.”
Though the coalition has just four members, it is open to organizations that develop and deploy frontier models, demonstrate a strong commitment to frontier model safety through both technical and institutional approaches, and are willing to advance the Forum’s objectives by participating in joint initiatives, according to a list of membership requirements.
“The Forum welcomes organizations that meet these criteria to join this effort and collaborate on ensuring the safe and responsible development of frontier AI models,” the group wrote.
These goals are still somewhat nebulous, as were the results of the White House meeting last week. In his remarks regarding the meeting, President Biden acknowledged the transformative impact of AI and the need for responsible innovation with safety as a priority. He also noted the need for bipartisan legislation to regulate the collection of personal data, safeguard democracy, and manage the potential disruption of jobs and industries caused by advanced AI. To achieve this, Biden said a common framework to govern AI development is needed.
"Social media has shown us the harm that powerful technology can do without the right safeguards in place. And I've said at the State of the Union that Congress needs to pass bipartisan legislation to impose strict limits on personal data collection, ban targeted advertisements to kids, and require companies to put health and safety first," the President said, noting that we must remain "clear-eyed and vigilant about the threats emerging technologies can pose – don't have to, but can pose – to our democracy and our values."
Critics have said these attempts at self-regulation on the part of the major AI players could be creating a new generation of technology monopolies.
Shane Orlick, president of AI writing platform Jasper, told EnterpriseAI in an email that ongoing, in-depth engagement between the government and AI innovators is needed.
“AI will affect all aspects of life and society—and with any technology this comprehensive, the government must play a role in protecting us from unintended consequences and establishing a single source of truth surrounding the important questions these new innovations create, including what the parameters around safe AI actually are,” Orlick said.
He continued: “The Administration’s recent actions are promising, but it’s essential to deepen engagement between government and innovators over the long term to put and keep ethics at the center of AI, deepen and sustain trust over the inevitable speed bumps, and ultimately ensure AI is a force for good. That also includes ensuring that regulations aren’t defusing competition, creating a new generation of tech monopolies — and instead invites all of the AI community to responsibly take part in this societal transformation.”