Penetration Testing Cadence
Pen tests find what scanners miss. The cadence is what makes them useful.
Annual
Penetration testing finds the issues that automated scanners miss. A scanner can identify "this dependency has a known CVE"; a pen tester can identify "this combination of three innocuous-looking issues lets an attacker escalate from anonymous to admin." The two layers complement each other; the cadence at which each runs is the discipline that makes them useful.
What annual pen testing should cover:
- External pen test, broad scope.: Once a year, an external firm performs a comprehensive penetration test against your entire externally-facing attack surface. Web apps, APIs, network perimeter, cloud configurations, identity provider, source control, anything an outsider could reach. The scope is broad because this engagement is the one time each year the full surface gets tested at depth.
- Compliance and discovery.: Most compliance frameworks (SOC 2, PCI DSS, HIPAA in some contexts, ISO 27001) require annual penetration testing. The annual test satisfies the compliance requirement and produces actionable findings. The two purposes overlap; doing both with one engagement is efficient.
- Independent firm, fresh perspective.: The pen test team is external. They have not seen your architecture, do not know your internal politics, and are not constrained by what your team thinks is important. They find the things insiders missed because insiders are too close.
- Methodology disclosed.: The pen test report includes the methodology: which standards (OWASP, NIST, MITRE ATT&CK) were used, what was in scope, what was out of scope, what level of access the testers had. The methodology section is what makes the findings reproducible and the report defensible later.
- Findings rated by severity.: Each finding is rated using a standard scheme (CVSS, OWASP Risk Rating). Severity is a function of impact and exploitability. Findings rated critical or high get immediate attention; medium and low go into the backlog.
The annual test is the discovery mechanism for the issues your scanners and your internal team missed. It is the deepest assessment your security program runs and the highest-leverage one.
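The severity banding described above can be sketched as a small mapping. This sketch assumes the CVSS v3.1 qualitative rating scale (the score thresholds come from that spec); the triage split between "immediate" and "backlog" reflects this document's policy, not anything in CVSS itself:

```python
def cvss_band(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity band."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base score must be 0.0-10.0, got {score}")
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score >= 0.1:
        return "low"
    return "none"  # a score of 0.0 means no impact


def triage(score: float) -> str:
    """Critical/high get immediate attention; medium/low go to the backlog."""
    return "immediate" if cvss_band(score) in ("critical", "high") else "backlog"
```

The banding is what makes ratings comparable across engagements: two different firms scoring the same finding with the same scheme should land in the same band.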
Quarterly
Once a year is too infrequent for many engineering organizations. Architecture changes, new services launch, dependencies update. The pen test from January does not reflect what you are running in September. Quarterly targeted tests fill the gap with focused scope.
- Targeted internal scope.: Each quarter, a smaller pen test focuses on a specific surface that has changed since the last engagement. The new payments integration. The recently-redesigned admin console. The cloud migration that moved data to a new region. The scope is narrow but the depth is real.
- Recently changed surfaces.: The surfaces most likely to have new vulnerabilities are the ones that changed recently. Quarterly testing concentrates the effort there, which is more efficient than retesting the entire surface every quarter.
- Internal team or smaller external engagement.: Quarterly testing can be done by an internal red team if you have one, or by a smaller external engagement (a few days of effort rather than a few weeks). The cost is materially lower than the annual test; the cumulative coverage across four quarterly tests can exceed that of the annual.
- Faster turnaround.: Quarterly tests have shorter scope and shorter report cycles. Findings land within a week or two of the test, not a month or two. Remediation can ship while the issue is still fresh in the team's mind.
- Coordinated with major releases.: Schedule quarterly tests to coincide with major releases or significant architecture changes. The test runs while the new surface is hot, which catches issues before they have been in production long enough to be exploited.
The quarterly cadence is what keeps pen testing useful for an engineering org that ships continuously. The annual is necessary; quarterly is what makes the practice keep up with the rate of change.
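Picking the quarter's scope amounts to ranking surfaces by how recently they changed. A minimal sketch of that selection follows; the surface names, change dates, and the `quarterly_scope` helper are all invented for illustration, not taken from any real tracker:

```python
from datetime import date

# Hypothetical inventory: surface name -> date of last significant change.
surfaces = {
    "payments-integration": date(2024, 8, 20),
    "admin-console": date(2024, 7, 2),
    "legacy-reporting": date(2022, 11, 5),
    "cloud-migration": date(2024, 9, 1),
}


def quarterly_scope(surfaces: dict, since: date, limit: int = 2) -> list:
    """Return the most recently changed surfaces since the last engagement."""
    changed = {name: d for name, d in surfaces.items() if d >= since}
    return sorted(changed, key=changed.get, reverse=True)[:limit]
```

With a last engagement in early June, this would scope the quarter to the cloud migration and the payments integration, and skip the reporting system untouched since 2022.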
Respond
The pen test report is not the deliverable; the remediations are. A pen test that produces a 40-page report and zero closed findings is a compliance artifact, not a security improvement. The discipline that turns findings into improvements is the response process.
- Findings tracked, not just filed.: Every finding becomes a tracked ticket in the engineering backlog. The ticket has an owner (the team responsible for the affected surface), a severity (matching the pen test rating), and a target close date (driven by an SLA per severity).
- SLA per severity.: Critical findings: 30 days. High findings: 60 days. Medium: 90 days. Low: 180 days or accepted as risk. The SLA is in writing, agreed by engineering and security leadership, and enforced. Findings past SLA escalate.
- Demonstrate progress.: The pen test response is itself a metric. Number of findings closed per month. Average time-to-close per severity. Number of findings past SLA. The numbers go on the security team's dashboard. Trends over multiple test cycles tell whether the practice is maturing.
- Verify closure.: When a finding is marked closed, the security team verifies the fix. Sometimes the fix is partial, sometimes it addresses the symptom rather than the root cause, sometimes it introduces a new issue. Verification is the gate; "engineering says it's fixed" is not enough.
- Re-test in next cycle.: The next pen test re-tests the previously-found issues to confirm they remain fixed. Regression of a closed finding is itself a serious finding because it suggests the fix was not durable. The cycle is closed-loop.
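The SLA clock and the dashboard numbers above can be sketched with the per-severity SLAs this section specifies (30/60/90/180 days). The ticket shape and helper names here are illustrative, not a specific tracker's API:

```python
from datetime import date, timedelta

# Per-severity remediation SLAs, in days (from the written policy above).
SLA_DAYS = {"critical": 30, "high": 60, "medium": 90, "low": 180}


def due_date(opened: date, severity: str) -> date:
    """Target close date: the day the finding was opened plus its SLA."""
    return opened + timedelta(days=SLA_DAYS[severity])


def past_sla(opened: date, severity: str, today: date) -> bool:
    """A finding past its target close date escalates."""
    return today > due_date(opened, severity)


def avg_days_to_close(findings: list) -> float:
    """Average time-to-close over (opened, closed) pairs; open findings
    (closed is None) are excluded from the average."""
    durations = [(c - o).days for o, c in findings if c is not None]
    return sum(durations) / len(durations) if durations else 0.0
```

A critical finding opened January 1 is due January 31; on February 5 it is past SLA and escalates, while a high finding opened the same day still has weeks of runway. Trending `avg_days_to_close` per severity across test cycles is what shows whether the practice is maturing.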
Penetration testing with annual broad coverage, quarterly targeted depth, and a tight response cycle is the discipline that produces real security improvements rather than compliance theater. Nova AI Ops integrates with the security team's finding tracker, surfaces the SLA status across all open findings, and tracks the close rate over time so the security investment is visibly maturing.