Kubernetes Practical, by Samson Tanimawo, PhD. Published Sep 14, 2025.

Pod Overhead Calculation

Pods have overhead beyond app resources.

Sources of pod overhead

Sidecar containers: service mesh proxies, log forwarders, metrics agents. Each adds CPU, memory, and startup time.

kube-proxy and CNI plugins: networking overhead for every pod. CPU is spent on each packet traversing the pod boundary.

Container runtime overhead: containerd or CRI-O baseline. Not zero; not negligible at small pod sizes.

Typical overhead percentages

Without service mesh: 10-15% per pod. Mostly sidecar agents (logging, metrics).

With service mesh: 20-35% per pod. Envoy or similar mesh proxy adds CPU and memory.

Smaller pods feel the overhead more: 50m of overhead on a 100m app pod means a third of the 150m total is overhead; the same 50m on a 1000m pod is under 5%.
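That arithmetic can be sketched directly (the 50m sidecar cost is the illustrative figure from above, not a measured constant):

```python
def overhead_fraction(app_millicores: int, overhead_millicores: int) -> float:
    """Fraction of the pod's total CPU that goes to overhead."""
    total = app_millicores + overhead_millicores
    return overhead_millicores / total

# The same 50m of sidecar overhead hits small pods much harder.
print(f"{overhead_fraction(100, 50):.0%}")   # small pod: a third of the total
print(f"{overhead_fraction(1000, 50):.0%}")  # large pod: under 5%
```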

Planning capacity around overhead

Provision based on pod requests including overhead. If the app needs 200m CPU, request roughly 240m to cover a 20% overhead margin.
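A minimal padding helper, assuming the 20% factor from the 200m/240m example above; pick a factor matching your actual sidecar mix:

```python
def padded_request(app_millicores: int, overhead_factor: float = 0.20) -> int:
    """Pod CPU request including an estimated overhead margin."""
    return round(app_millicores * (1 + overhead_factor))

print(padded_request(200))        # 240m, as in the example above
print(padded_request(200, 0.35))  # heavier margin for a service mesh
```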

Reserve capacity per node for system overhead. Beyond pod-level overhead, kubelet, kube-proxy, and CNI consume node resources.

Standard reservation: 0.5 cores and 1GB per node for system components, expressed via the kubelet's system-reserved and kube-reserved settings. Larger nodes amortise this; smaller nodes feel it.
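Putting the reservations together, a rough pods-per-node estimate (node sizes are hypothetical; real clusters should read the kubelet's reported allocatable instead of computing it by hand):

```python
def pods_per_node(node_cores: float, pod_request_millicores: int,
                  system_reserved_cores: float = 0.5) -> int:
    """How many pods of a given CPU request fit after system reservation."""
    allocatable_m = (node_cores - system_reserved_cores) * 1000
    return int(allocatable_m // pod_request_millicores)

# Larger nodes amortise the fixed 0.5-core system reservation.
print(pods_per_node(2, 240))   # small node
print(pods_per_node(16, 240))  # large node
```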

Reducing overhead

Sidecar consolidation. Multiple agents fold into one. Fewer processes, less memory, simpler operation.

DaemonSet for shared functionality. One log forwarder per node instead of one per pod. Massive overhead reduction.
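A back-of-the-envelope comparison of per-pod sidecars versus one DaemonSet agent per node (the memory figures and pod count are illustrative assumptions, not measurements):

```python
def sidecar_memory_mib(pods_per_node: int, sidecar_mib: int) -> int:
    """Total memory for one log-forwarder sidecar in every pod on a node."""
    return pods_per_node * sidecar_mib

def daemonset_memory_mib(agent_mib: int) -> int:
    """One shared forwarder per node, regardless of pod count."""
    return agent_mib

# Hypothetical: 30 pods/node, 50 MiB per sidecar vs one 150 MiB node agent.
print(sidecar_memory_mib(30, 50))  # memory spent across per-pod sidecars
print(daemonset_memory_mib(150))   # memory spent by the DaemonSet agent
```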

Service mesh sampling. Reduce overhead by sampling traces and metrics rather than capturing every request.

What to monitor

Node-level resource utilisation versus pod requests. Gap is overhead plus actual headroom.

Per-pod actual versus requested. Overhead shows up as a systematic difference between request and use.

Pod startup time. Heavy sidecars dominate. Optimise the slowest sidecar first.
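The request-versus-usage comparison can be sketched as a simple aggregation (pod names and numbers are made up; in practice the inputs come from your metrics pipeline, e.g. metrics-server or Prometheus):

```python
# Requested vs actually-used CPU (millicores) per pod.
requests = {"web-1": 240, "web-2": 240, "worker-1": 1050}
usage    = {"web-1": 180, "web-2": 175, "worker-1": 940}

# A consistent positive gap across pods points at systematic overhead
# padding rather than one misbehaving workload.
gaps = {pod: requests[pod] - usage[pod] for pod in requests}
mean_gap = sum(gaps.values()) / len(gaps)
print(gaps)
print(f"mean gap: {mean_gap:.0f}m")
```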