
June 9, 2025

Security
Scaling coordinated vulnerability disclosure

OpenAI’s approach to reporting vulnerabilities in third-party software, built on integrity, cooperation, and scale.

We are publishing an Outbound Coordinated Disclosure Policy that we will follow when disclosing vulnerabilities to third parties.

At OpenAI, we are committed to advancing a secure digital ecosystem. That’s why we’re introducing our Outbound Coordinated Disclosure Policy, which lays out how we responsibly report security issues we discover in third-party software. We're doing this now because we believe coordinated vulnerability disclosure will become a necessary practice as AI systems become increasingly capable of finding and patching security vulnerabilities. Systems developed by OpenAI have already uncovered zero-day vulnerabilities in third-party and open-source software, and we are taking this proactive step in anticipation of future discoveries.

Whether surfaced through ongoing research, targeted audits of open source code we leverage, or automated analysis using AI tools, our goal is to report vulnerabilities in a way that’s cooperative, respectful, and helpful to the broader ecosystem.

What the policy covers

This policy lays out how we disclose issues found in open-source and commercial software through automated and manual code review, as well as discoveries arising from internal usage of third-party software and systems.

It explains:

- How we validate and prioritize findings
- How we contact vendors and the disclosure mechanics we follow
- When and how we go public (non-public first, unless the details demand otherwise)
- Our principles, which include being impact-oriented, cooperative, discreet by default, high-scale and low-friction, and providing attribution when relevant

We take an intentionally developer-friendly stance on disclosure timelines and have elected to leave timelines open-ended by default. This approach reflects the evolving nature of vulnerability discovery, particularly as AI systems become more effective at reasoning about code, its strengths and weaknesses, and generating reliable patches to increase code security. We anticipate our models detecting a greater number of bugs of increasing complexity, which may require deeper collaboration and more time to resolve sustainably. We’ll continue working with software maintainers to develop disclosure norms that balance urgency with long-term resilience. We still reserve the right to disclose when we determine there is, for example, public interest in doing so.
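The flow the policy describes — validate a finding, contact the vendor privately, coordinate, and only then (if ever) go public — can be sketched as a minimal state model. This is purely illustrative: the stage names, transitions, and `Finding` type below are assumptions for the sketch, not part of the actual policy.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """Illustrative stages loosely mirroring the policy's steps."""
    FOUND = auto()            # surfaced by research or automated analysis
    VALIDATED = auto()        # finding confirmed and prioritized
    VENDOR_NOTIFIED = auto()  # private report sent to the maintainer
    PUBLIC = auto()           # disclosed publicly, only after coordination


# Allowed transitions: non-public first, public only after vendor contact.
_ALLOWED = {
    Stage.FOUND: {Stage.VALIDATED},
    Stage.VALIDATED: {Stage.VENDOR_NOTIFIED},
    Stage.VENDOR_NOTIFIED: {Stage.PUBLIC},
    Stage.PUBLIC: set(),
}


@dataclass
class Finding:
    """A single vulnerability report moving through the pipeline."""
    title: str
    stage: Stage = Stage.FOUND
    history: list = field(default_factory=list)

    def advance(self, to: Stage) -> None:
        """Move to the next stage, rejecting out-of-order transitions."""
        if to not in _ALLOWED[self.stage]:
            raise ValueError(f"cannot go from {self.stage.name} to {to.name}")
        self.history.append((self.stage, to))
        self.stage = to


f = Finding("hypothetical zero-day in an open-source parser")
f.advance(Stage.VALIDATED)
f.advance(Stage.VENDOR_NOTIFIED)
print(f.stage.name)  # VENDOR_NOTIFIED: still non-public by default
```

Note that the model deliberately has no timer forcing `PUBLIC`: that mirrors the open-ended-by-default stance on timelines described below, where public disclosure is a coordinated decision rather than an automatic deadline.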

Looking ahead

We will keep improving this policy as we learn. If you have questions about our disclosure practices, reach out to us at outbounddisclosures@openai.com.

Security is a journey defined by continuous improvement. We’re thankful to the vendors, researchers, and community members who walk that road with us. We hope that transparent communication around our approach supports a healthier, more secure ecosystem for everyone.

Policies and Procedures, 2025
Author: OpenAI