Network Segmentation Patterns
Segment networks to bound breach impact.
VPC segmentation
Network segmentation is the security architecture pattern of dividing your network into smaller, controlled segments where traffic between segments is restricted by default. The flat-network alternative (everything reaches everything) is the architecture that makes lateral movement easy after a breach. Segmentation is what bounds the blast radius.
What VPC-level segmentation looks like:
- Per environment: Production, staging, and dev each live in separate VPCs (or separate accounts in AWS multi-account architectures). Traffic between them is restricted to specific paths through specific endpoints. A compromise in dev cannot reach prod via implicit network connectivity.
- Per service tier: Within an environment, the data plane lives in one VPC, the control plane in another, and public-facing workloads in a third. The tiers communicate via specific endpoints with specific firewall rules. The DMZ pattern applies at modern cloud scale.
- Limits lateral movement: When an attacker compromises a pod in the public-facing tier, they cannot reach the data plane through ambient network connectivity. They have to traverse a controlled path that has authentication, audit logging, and rate limiting. Each segment crossed produces a signal.
- VPC peering and Transit Gateway with discipline: Inter-VPC connectivity uses peering or Transit Gateway, with route tables that explicitly allow specific traffic patterns. The default is "no traffic"; specific rules open specific paths. This is the network-layer equivalent of default-deny IAM.
- Per-tenant or per-customer segmentation: Multi-tenant SaaS that holds high-sensitivity data sometimes segments per customer or per tenant tier. The blast radius of a single-tenant compromise is bounded to that tenant; cross-tenant reach requires a separate compromise.
VPC segmentation is the macro layer of network security. It does not catch every attack but it caps the damage of the attacks that succeed. Done correctly, it is the difference between "an incident affecting one service" and "an incident affecting the whole platform."
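The "default is no traffic" discipline can be checked mechanically. Below is a minimal sketch of a route-table audit: it compares deployed routes against an allowlist of intended paths and flags anything extra. The VPC names, CIDRs, and the `INTENDED_PATHS` structure are all illustrative; in practice the route tables would come from a cloud API such as `ec2:DescribeRouteTables`.

```python
# Sketch: audit VPC route tables against an intended-path allowlist.
# Route tables here are plain dicts; all names and CIDRs are illustrative.

INTENDED_PATHS = {
    # (source VPC, destination CIDR) pairs the architecture allows.
    ("vpc-frontend", "10.1.0.0/16"),   # frontend tier -> API tier
    ("vpc-api", "10.2.0.0/16"),        # API tier -> data plane
}

def audit_route_tables(route_tables):
    """Return routes that open paths the architecture does not declare."""
    violations = []
    for table in route_tables:
        for route in table["routes"]:
            # Local routes within a VPC's own CIDR are implicit; skip them.
            if route.get("target") == "local":
                continue
            path = (table["vpc"], route["destination"])
            if path not in INTENDED_PATHS:
                violations.append(path)
    return violations

route_tables = [
    {"vpc": "vpc-frontend", "routes": [
        {"destination": "10.1.0.0/16", "target": "pcx-api"},
        {"destination": "10.2.0.0/16", "target": "pcx-data"},  # undeclared
    ]},
]

print(audit_route_tables(route_tables))  # flags the frontend -> data-plane route
```

The useful property is the direction of the check: the allowlist is the source of truth, and every deployed route must justify itself against it, not the other way around.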
Namespace policies
Below the VPC layer, Kubernetes provides namespace-level segmentation through NetworkPolicy resources. The namespace is the natural boundary; the NetworkPolicy declares what crosses the boundary. Done correctly, namespace policies provide pod-to-pod segmentation at fine granularity.
- NetworkPolicy in Kubernetes: The standard mechanism for declaring pod-level network rules. Ingress (what can reach this pod) and egress (where this pod can reach). Policies are additive rather than ordered: a namespace can pair a default-deny policy covering all pods with specific allow policies, and pods get exactly the connectivity that some policy declared.
- Pod-to-pod control: Within a cluster, pods that should not talk to each other do not have network connectivity. The frontend can reach the API; the API can reach the database; the frontend cannot reach the database directly. The architecture is reflected in network policy.
- Default-deny per namespace: Each namespace gets a default policy that denies all ingress and egress. Then specific rules open specific paths. The default-deny is the security floor; deviations require explicit allowlist entries.
- Cross-namespace selectors: Network policies can reference labels on pods in other namespaces. The team can express "pods labeled tier=api in namespace=prod can be reached from pods labeled tier=frontend in namespace=prod" precisely, without naming individual pods.
- Egress to external services: Pods that need to reach external services (third-party APIs, external dependencies) declare those endpoints in their egress policy. Anything not declared is blocked. This is the SSRF defense and the data-exfiltration defense at the network layer.
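The default-deny-plus-allow pattern above looks like the following pair of manifests. This is a sketch, not a drop-in policy: the namespace, labels, and port are illustrative, and the exact selectors would depend on how the workloads are labeled.

```yaml
# Sketch: default-deny for a namespace plus one specific allow rule.
# Namespace, labels, and port are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes: [Ingress, Egress]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels:
      tier: api              # applies to the API pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Because policies are additive, the allow rule does not need to be "ordered above" the deny: any pod matched by `allow-frontend-to-api` gets that ingress path, and pods matched only by `default-deny-all` get nothing.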
Namespace policies are the micro layer of network security. Together with VPC segmentation, they produce defense in depth from the cluster perimeter to the individual pod.
Review
Network segmentation drifts. New services get added with permissive default rules. Old services stop being used but their network rules persist. The map of "what can reach what" diverges from the map the team thinks they have. The fix is regular review.
- Quarterly review: Each quarter, review the segmentation rules. Which segments still make sense? Which paths are no longer needed? Which new services should have been added but were not? The output is a list of changes; the changes go through normal change management.
- Are segments still meaningful?: A segment that was created for a use case that no longer exists should be retired. A segment that has accumulated so many exception rules that everything reaches everything is no longer providing segmentation; it needs restructuring or replacement.
- Drift over time: The team that does not review will find their segmentation slowly degrading into a flat network that calls itself segmented. The quarterly review catches the drift while it is still tractable to fix.
- Audit against documented architecture: The intended architecture is documented. The actual rules are deployed. The review compares the two and flags discrepancies. For each discrepancy, either the architecture is wrong or the rules are wrong; the team decides which.
- Test the segmentation: Periodically verify that the segmentation actually works. From a pod in segment A, attempt connections to segment B. The expected denials should occur; the expected allows should succeed. Testing the segmentation is what catches the cases where the rules look right but do not work.
Network segmentation done right is the architecture pattern that bounds the damage of any single compromise. Nova AI Ops audits VPC and NetworkPolicy configurations, surfaces the cases where segments have drifted from the intended architecture, and runs continuous segmentation tests so the team has confidence the network defenses are actually working.