Vulnerability Disclosure Policy
The public-facing policy and process for receiving vulnerability reports: disclosure infrastructure, triage, and the mistakes to avoid.
Public-facing
Security researchers find vulnerabilities in your product. They will find them whether or not you give them a way to report them. The vulnerability disclosure policy is the difference between researchers reporting to you (where you can fix quietly) and reporting publicly (where you find out from social media). Setting up the policy properly is one of the highest-leverage security investments a company can make.
What public-facing disclosure infrastructure looks like:
- security.txt at the well-known URL: The /.well-known/security.txt file (standardized as RFC 9116) is the standard for advertising your security contact info. It includes the email or form to report vulnerabilities, the encryption key for sensitive reports, the policy URL, and the languages your team supports. Researchers know to look here first; a sample file follows this list.
- /security URL with the full policy: A page on your main domain documenting the disclosure policy: what is in scope, what is out of scope, what to expect on response time, what kinds of testing are authorized, and what is forbidden. The page is public and easy to find from your main navigation.
- Documented contact path: A dedicated email (security@yourcompany.com) or a vetted submission form. Generic info@ addresses bury security reports under marketing pings; the dedicated channel routes directly to the team that can act.
- PGP key for sensitive reports: A published PGP key lets researchers encrypt sensitive details before sending. Most reports do not need encryption, but offering the option signals to researchers that the program is serious.
- Researchers know how to report: The whole point of the public-facing infrastructure is to lower the friction for someone trying to do the right thing. A researcher who finds a bug at 2 AM should be able to file a report in 10 minutes without hunting through your website for the right contact.
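As a concrete reference, here is a minimal security.txt of the kind RFC 9116 describes. The domain, mailbox, and URLs are placeholders; the Contact and Expires fields are required by the RFC, the rest are optional.

```
Contact: mailto:security@yourcompany.com
Expires: 2026-12-31T23:59:59Z
Encryption: https://yourcompany.com/pgp-key.txt
Policy: https://yourcompany.com/security
Preferred-Languages: en
Canonical: https://yourcompany.com/.well-known/security.txt
```

A quick `curl https://yourcompany.com/.well-known/security.txt` from outside your network confirms the file is actually served where researchers will look for it.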
Public-facing disclosure infrastructure is the cheapest part of the program and the most consequential. The cost is a few hours of setup; the benefit is that vulnerabilities reach you instead of social media.
Triage
Reports arrive. The next question is what happens to them. A program that solicits reports but takes weeks to respond loses the trust of the research community. The triage process is what turns the policy from a marketing artifact into a real security mechanism.
- Dedicated team: Vulnerability reports go to a specific person or team responsible for triage. Not "whoever sees it first." The team has the technical skill to evaluate reports and the authority to engage engineering for remediation.
- SLA on initial response: Acknowledge every report within 24 hours, even if just "we received this and are looking." Initial triage decision (valid, duplicate, out-of-scope) within 5 business days. Researchers compare programs on response time; slow response is a reputation cost.
- Severity assessment within 7 days: Once the report is validated, assess severity using a standard scheme (CVSS). The severity drives the remediation timeline and the bounty amount if there is one. Communicate the severity back to the researcher within a week.
- Remediation SLA per severity: Critical: fixed within 7 days. High: 30 days. Medium: 90 days. Low: 180 days or accepted as risk. The SLA is documented in the policy and held to; researchers respect programs that honor their commitments. A sketch of how these deadlines can be tracked follows this list.
- Coordinated disclosure on the timeline: When the fix ships, the researcher is notified. Public disclosure (CVE, blog post, conference talk) happens on a coordinated timeline (typically 90 days from initial report, or sooner if the researcher prefers). Surprise public disclosure is what happens when the program loses the researcher's trust.
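A minimal sketch of what tracking these deadlines might look like, using the SLA numbers above. The CVSS cutoffs are the standard v3 qualitative bands; the Report class and its field names are hypothetical, not any particular tracker's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Remediation SLAs from the policy above, keyed by severity.
REMEDIATION_SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}
DISCLOSURE_WINDOW_DAYS = 90  # coordinated public disclosure window

def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3 base score to the standard qualitative band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

@dataclass
class Report:
    received: date      # when the report arrived
    cvss_score: float   # base score assigned at triage

    @property
    def severity(self) -> str:
        return severity_from_cvss(self.cvss_score)

    @property
    def fix_due(self) -> date:
        # Remediation deadline per the severity SLA table.
        return self.received + timedelta(days=REMEDIATION_SLA_DAYS[self.severity])

    @property
    def disclosure_due(self) -> date:
        # Default coordinated disclosure date: 90 days from initial report.
        return self.received + timedelta(days=DISCLOSURE_WINDOW_DAYS)

# Example: a CVSS 9.1 report received today must be fixed within 7 days
# and is publicly disclosed on the 90-day coordinated timeline.
r = Report(received=date.today(), cvss_score=9.1)
print(r.severity, r.fix_due, r.disclosure_due)
```

The point of computing the dates at intake rather than ad hoc is that every report carries its own deadlines, which is what makes SLA compliance measurable per report.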
Quality of triage matters more than speed. A fast but inaccurate triage that mis-categorizes a critical issue as low is worse than a slower triage that gets it right. The team has to be technically competent, not just responsive.
Avoid
The fastest way to destroy a vulnerability disclosure program is to mistreat the researchers reporting to it. The behaviors to avoid are well-known and the cost of falling into them is permanent reputation damage.
- Punitive actions on good-faith research: Threatening legal action against researchers who report vulnerabilities responsibly is the single biggest mistake a program can make. The story spreads through the security community within days, and your program effectively shuts down because nobody trusts you to respond well.
- Aggressive bug-bounty scope reduction after the fact: Researchers who find serious bugs are sometimes told "that's actually out of scope" after they have reported. This is a reputation-destroyer. The scope must be clear before the report; changing it afterward is bad faith.
- Silent fixes without acknowledgment: Patching the issue without crediting the researcher (or paying the bounty if there is one) is the second-fastest way to lose trust. The reporter sees their finding fixed and not acknowledged; they post about it; future researchers stop reporting to you.
- Slow or absent communication: A researcher who reports a critical bug and hears nothing for two weeks files publicly to force action. The program that does not communicate is the program that creates its own zero-day disclosures.
- Reputation matters: The security research community is small and well-connected. A program that treats researchers well builds trust over years; a program that treats them badly burns trust in days. The asymmetry is the reason the policy must be defended carefully.
A vulnerability disclosure policy is one of those investments where the operational discipline matters as much as the technical setup. Nova AI Ops integrates with the disclosure intake (security.txt URL, dedicated email, submission form), tracks SLA compliance per report, and surfaces the program's response-time metrics so the security team knows whether their reputation in the research community is being earned or burned.