OpenAI announced that, as AI models' capabilities in the life sciences continue to improve, the company is systematically strengthening its biosafety safeguards and establishing cooperation mechanisms with scientific research and security institutions worldwide to prevent potential biological risks.
OpenAI said it plans to hold a global biosafety summit in July 2025, inviting experts from government, research institutions, and industry to discuss the boundaries and security frameworks for applying AI in biological research and experimentation. The summit is intended to be a key part of OpenAI's push for AI safety governance.
According to reports, OpenAI has partnered with specialized institutions such as Los Alamos National Laboratory in the United States to systematically evaluate the potential applications of AI models in biological laboratories and the risks that accompany them. The company has also launched multiple safety mechanisms to curb the risk of the technology being used to create biological threats.
These mechanisms include the model's automatic refusal of dangerous requests, a real-time risk-detection system, an enhanced human-review process, and "red team" attack-and-defense testing involving internal and external experts. For models identified as high-risk, OpenAI will apply stricter release and access-control policies to ensure that model use remains safe, transparent, and controllable.
OpenAI said it will continue to work with governments, academia, and industry to build a global biosafety defense system, so that artificial intelligence can accelerate innovation in the life sciences while effectively guarding against potential misuse.
The statement is OpenAI's latest response to AI safety governance concerns amid the rapid growth of large-model capabilities, and it echoes the international community's mounting attention to AI applications in sensitive fields.