Problem
The default sidecar proxy resource requests (100m CPU, 128Mi memory) are deliberately generous, but they become expensive at scale.
Current cost impact:
- Per sidecar: ~$39/year
- 50-pod mesh: ~$1,950/year
- 100-pod mesh: ~$3,900/year
For teams running Istio in dev/staging environments or with modest traffic, these defaults significantly increase infrastructure costs.
Analysis
I analyzed sidecar resource usage across various deployment sizes using Wozz and observed:
| Scenario | Pods | Current Cost | Optimized Cost | Savings |
|---|---|---|---|---|
| Dev cluster | 20 | $787/yr | $394/yr | $393/yr (50%) |
| Small prod | 50 | $1,968/yr | $984/yr | $984/yr (50%) |
| Medium prod | 100 | $3,936/yr | $1,968/yr | $1,968/yr (50%) |
Proposal
Reduce the default sidecar resource requests to values closer to typical usage:
# Current defaults
resources:
  requests:
    cpu: 100m
    memory: 128Mi

# Proposed defaults
resources:
  requests:
    cpu: 50m     # Still safe for most workloads
    memory: 64Mi # Adequate for typical traffic
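To trial the proposed values on an existing mesh before any default changes, one option is an IstioOperator overlay. This is only a sketch and assumes an IstioOperator/istioctl-based install; the resource name is a placeholder:

# IstioOperator overlay (sketch): trial the proposed requests mesh-wide
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: sidecar-right-sizing    # hypothetical name
spec:
  values:
    global:
      proxy:
        resources:
          requests:
            cpu: 50m            # proposed default
            memory: 64Mi        # proposed default

Because sidecar resources are stamped in at injection time, this should only affect newly injected sidecars; existing pods keep their current requests until they are restarted and re-injected.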
Evidence
Typical sidecar usage patterns:
- Idle state: 5-20m CPU, 30-50Mi memory
- Light traffic (<10 rps): 20-40m CPU, 40-60Mi memory
- Moderate traffic (<100 rps): 40-80m CPU, 60-80Mi memory
The proposed values (50m/64Mi) still provide headroom for moderate traffic while reducing waste.
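For anyone who wants to check their own sidecars against these ranges before relying on lower requests, one option (my suggestion, not something the Istio docs prescribe) is to run the Kubernetes Vertical Pod Autoscaler in recommendation-only mode against the istio-proxy container. The workload name below is hypothetical:

# VPA in recommendation-only mode (sketch): observe istio-proxy usage without evicting pods
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-istio-proxy-usage   # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                   # hypothetical workload
  updatePolicy:
    updateMode: "Off"             # recommendations only, no pod evictions
  resourcePolicy:
    containerPolicies:
    - containerName: istio-proxy  # track only the sidecar
      controlledResources: ["cpu", "memory"]
    - containerName: "*"
      mode: "Off"                 # ignore application containers

Running kubectl describe vpa myapp-istio-proxy-usage then shows lower-bound/target/upper-bound recommendations to compare against the proposed 50m/64Mi.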
Precedent:
- Linkerd defaults: 50m CPU, 64Mi memory
- AWS App Mesh: 50m CPU, 64Mi memory
- Other service meshes use similar conservative-but-reasonable defaults
Backwards Compatibility
Users who still need the higher values can override the new defaults in either of two ways:
- Global override (all sidecars):
  # values.yaml
  global:
    proxy:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
- Per-pod annotation (a full pod-template sketch follows this list):
  sidecar.istio.io/proxyCPU: "100m"
  sidecar.istio.io/proxyMemory: "128Mi"
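For reference, a minimal sketch of where the per-pod annotations go; the Deployment name and image are hypothetical, and only the two annotations themselves come from the override mechanism described above:

# Per-pod override (sketch): annotations belong on the pod template, not the Deployment metadata
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.istio.io/proxyCPU: "100m"       # restore the current default request
        sidecar.istio.io/proxyMemory: "128Mi"
    spec:
      containers:
      - name: myapp
        image: myapp:1.0      # hypothetical image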