Microsoft, Anthropic, Google, and OpenAI Initiate Frontier Model Forum to Safeguard Frontier AI Development
On July 26, 2023, a landmark initiative was unveiled by four leading AI organizations – Microsoft, Anthropic, Google, and OpenAI. They announced the formation of the Frontier Model Forum, an industry consortium dedicated to promoting the safe and responsible development of cutting-edge AI models, known as frontier AI models. The Frontier Model Forum plans to leverage the collective technical and operational prowess of its founding companies to pioneer best practices and standards in the AI sector and contribute to the AI ecosystem as a whole.
Frontier AI models, as defined by the Forum, are large-scale machine-learning models that exceed the capabilities of today's most advanced existing models and can accomplish a broad array of tasks. These advanced models hold significant potential, but they also carry substantial risks that necessitate careful management and oversight.
The Frontier Model Forum's key objectives are:
- Advancing AI safety research to promote responsible development and minimize the potential risks associated with frontier models. This will involve enabling independent, standardized evaluations of frontier models' capabilities and safety.
- Identifying best practices for the development and deployment of frontier models. The aim is to help the public comprehend the capabilities, limitations, and impact of these technological advancements.
- Collaborating with policymakers, academics, civil society, and fellow companies to spread knowledge about trust and safety issues in AI.
- Supporting initiatives to harness AI for addressing major societal challenges, such as mitigating and adapting to climate change, early cancer detection and prevention, and countering cyber threats.
Membership in the Frontier Model Forum is open to organizations that develop and deploy frontier models and demonstrate a strong commitment to frontier model safety. Member organizations should be willing to contribute to advancing the Forum's efforts, participate in joint initiatives, and support the initiative's development and functioning. The Forum actively encourages organizations that meet these criteria to join the effort and collaborate to ensure the safe and responsible development of frontier AI models.
As the global recognition of AI's enormous promise grows, so does the understanding that appropriate safeguards are necessary to manage its risks. Multiple stakeholders, including the governments of the U.S. and UK, the European Union, the OECD, and the G7 (through the Hiroshima AI process), have already made significant contributions to these efforts.
The Frontier Model Forum aims to build on these existing efforts, focusing on three key areas over the coming year to facilitate the safe and responsible development of frontier AI models:
- Identifying best practices: The Forum intends to promote knowledge sharing and best practices among industry, governments, civil society, and academia. This will focus on safety standards and safety practices to mitigate a broad range of potential risks.
- Advancing AI safety research: The Forum will support the AI safety ecosystem by pinpointing the most critical open research questions on AI safety. Initial efforts will be concentrated on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models. Other research areas will include adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection.
- Facilitating information sharing among companies and governments: The Forum aims to establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks, following best practices in responsible disclosure.
The founding members of the Frontier Model Forum expressed their commitment and enthusiasm for this initiative. Kent Walker, President of Global Affairs at Google & Alphabet, said: "We're excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone."
In the months ahead, the Frontier Model Forum will establish an Advisory Board representing a broad spectrum of backgrounds and perspectives to guide its strategy and priorities. The founding companies will also set up key institutional arrangements, including a charter, governance, and funding structures. They plan to consult with civil society and governments in the coming weeks on the Forum's design and on meaningful ways to collaborate.
The Frontier Model Forum is keen to support and contribute to existing government and multilateral initiatives, such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the U.S.-EU Trade and Technology Council. The Forum also aims to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams. Initiatives such as the Partnership on AI and MLCommons continue to make important contributions across the AI community, and the Frontier Model Forum will explore ways to collaborate with and support these and other valuable multi-stakeholder efforts.
In conclusion, the Frontier Model Forum marks a crucial step towards safe and responsible AI development. By fostering a collective approach among the major AI players, it hopes to steer the course of frontier AI towards a future that is not only innovative and transformative but also ethical, secure, and beneficial to all of society.