Network Isolation

Network isolation is the practice of controlling which systems can communicate with which other systems. The goal is to ensure that a compromise in one part of the infrastructure doesn't grant access to everything else. A web server that gets exploited shouldn't be able to reach the database directly. A compromised container shouldn't be able to query the metadata service.

The foundational concepts — VPCs, segmentation, zero-trust — are covered in networking. This article focuses on the practical implementation: how to design and enforce network boundaries across the infrastructure we build.

Design Principles

Default deny. Every network boundary starts closed. Traffic is blocked unless an explicit rule allows it. This applies at every layer: host firewalls, security groups, network policies, and application-level access controls. The cost of a forgotten port left open is much higher than the inconvenience of explicitly opening one you need.

Least privilege by network path. A service should only be reachable by the services that actually call it. A background worker that processes queue messages has no reason to accept inbound HTTP connections. A database should only accept connections from the application servers that query it — not from the entire VPC.

Defense in depth. Network isolation is one layer of many. Even if a firewall rule is misconfigured, mTLS between services prevents unauthorized connections. Even if mTLS is compromised, application-level authorization rejects unauthorized requests. No single layer is trusted to be the only one that works.

Zones

Divide your infrastructure into zones based on trust level and exposure. The boundaries between zones are where you enforce the strictest controls.

Public Zone

The outermost layer — load balancers, reverse proxies, and CDN endpoints that accept traffic from the internet. Nothing else belongs here. The public zone terminates TLS, applies rate limiting, and forwards validated requests inward.

Internet → Load Balancer → Reverse Proxy (Caddy)

Only ports 80 and 443 are open inbound. The reverse proxy forwards to application services in the private zone over internal networking. See caddy for reverse proxy configuration.
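As a sketch of that pattern, a minimal Caddyfile might look like the following (the site name and upstream address are hypothetical; adapt them to your environment):

```caddyfile
# Caddyfile sketch — hostname and upstream are illustrative
example.com {
    # Caddy terminates TLS here; certificates are obtained automatically
    # Forward validated requests to an application server in the private zone
    reverse_proxy app.internal:8080
}
```

Caddy only listens on 80 (for ACME challenges and HTTPS redirects) and 443 here, matching the public zone's inbound rules.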

Application Zone

Application services live here — API servers, web backends, background workers. They accept connections from the public zone's reverse proxy and make outbound connections to databases and external APIs. They do not accept connections directly from the internet.

Intra-zone communication (service-to-service calls) should use mTLS or signed tokens. Even within a private network, authenticate every connection. See authentication-authorization for service-to-service patterns.

Data Zone

Databases, caches, message brokers, and object storage. This is the most restricted zone. Only specific application services connect here, and access is controlled by both network rules and application-level credentials.

Application Server → PostgreSQL (port 5432, source-restricted)
Application Server → Redis (port 6379, source-restricted)

No direct access from the public zone. No SSH access except through a bastion host or VPN.
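The bastion path can be expressed directly in an SSH client configuration, so operators never connect to data-zone hosts without hopping through the bastion. A sketch with hypothetical hostnames:

```
# ~/.ssh/config sketch — hostnames are illustrative
Host bastion
    HostName bastion.example.com
    User admin

Host db-*
    # Reach data-zone hosts only via the bastion (OpenSSH ProxyJump)
    ProxyJump bastion
```

Combined with firewall rules that accept SSH only from the bastion's address, this makes the bastion the single audited entry point.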

Implementation

Host Firewalls

Every host runs its own firewall, even within a private network. See host-config for nftables configuration. The host firewall is the last line of defense if network-level controls fail — and they do fail, through misconfiguration, infrastructure changes, or cloud provider bugs.
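A default-deny inbound ruleset in nftables might look like the following sketch. The subnets and ports are illustrative assumptions, not prescriptive values:

```
# /etc/nftables.conf sketch — addresses and ports are examples
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # Allow replies to connections this host initiated
        ct state established,related accept
        # Allow loopback traffic
        iif "lo" accept
        # Explicit allows — everything else is dropped by the policy
        tcp dport 22 ip saddr 10.0.0.0/24 accept comment "SSH from bastion subnet"
        tcp dport 8080 ip saddr 10.0.1.0/24 accept comment "app traffic from LB subnet"
    }
}
```

The `policy drop` on the chain is what makes this default deny: every accept is an explicit exception.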

Security Groups and Network ACLs

In cloud environments, security groups provide stateful firewall rules at the instance level. Network ACLs provide stateless rules at the subnet level. Use both:

  • Security groups for service-specific rules — "this application server accepts connections on port 8080 from the load balancer security group"
  • Network ACLs for broad zone-level rules — "nothing in the data subnet accepts inbound connections from the public subnet"

Reference security groups by group ID rather than IP address. This keeps rules stable as instances scale up and down.
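In Terraform for AWS, for example, a group-referencing rule looks like this sketch (the resource names are hypothetical):

```hcl
# Terraform sketch — resource names and ports are illustrative
resource "aws_security_group_rule" "app_from_lb" {
  type              = "ingress"
  from_port         = 8080
  to_port           = 8080
  protocol          = "tcp"
  security_group_id = aws_security_group.app.id

  # Reference the load balancer's security group, not its IP addresses,
  # so the rule stays valid as load balancer instances come and go
  source_security_group_id = aws_security_group.lb.id
}
```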

Kubernetes Network Policies

In Kubernetes, pods can communicate freely by default — every pod can reach every other pod. Network policies override this with explicit allow rules.

# Allow only the API server to reach the database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: app
spec:
  podSelector:
    matchLabels:
      role: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api-server
      ports:
        - protocol: TCP
          port: 5432

Start with a default-deny policy in every namespace, then add specific allow rules for each legitimate communication path:

# Default deny all ingress in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress

This mirrors the host firewall philosophy: deny by default, allow explicitly.
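The same posture can be extended to egress, so a compromised pod cannot initiate arbitrary outbound connections either. A sketch — note that required outbound paths such as DNS must then be re-allowed explicitly, or pods cannot resolve service names:

```yaml
# Sketch: default deny for both directions in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```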

Service Mesh and mTLS

For service-to-service encryption without modifying application code, a service mesh (Linkerd, Istio) or Caddy's internal TLS can automatically encrypt all traffic between services with mutual TLS. See networking for the conceptual foundation and caddy for Caddy's internal CA capabilities.

The value of mTLS goes beyond encryption — it provides identity. Each service presents a certificate, and the receiving service verifies it. A compromised container that doesn't hold the right certificate can't impersonate a legitimate service, even if it has network access.
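As an illustration of certificate-based identity with Caddy, the `tls` directive can require clients to present a certificate signed by a trusted CA. A sketch — the hostname, upstream port, and CA path are hypothetical:

```caddyfile
# Caddyfile sketch — names and paths are illustrative
api.internal {
    tls internal {
        # Reject clients that cannot present a certificate
        # signed by this internal CA
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /etc/caddy/internal-ca.pem
        }
    }
    reverse_proxy localhost:8080
}
```

A caller on the network that lacks a valid certificate is rejected at the TLS handshake, before any application code runs.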

Documenting Network Boundaries

Every open port, every allowed connection path, and every firewall rule exception should be documented. A table works well:

Source          Destination   Port  Protocol  Purpose
Load balancer   API server    8080  TCP       HTTP reverse proxy
API server      PostgreSQL    5432  TCP       Application database
API server      Redis         6379  TCP       Session cache
Prometheus      All services  9090  TCP       Metrics scraping
Fluent Bit      Loki          3100  TCP       Log shipping

This table becomes a living document — reviewed when services are added or removed, and audited periodically against actual firewall rules to detect drift.
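The periodic audit can be partially automated by diffing the documented table against the rules actually in effect. A minimal Python sketch — the `audit_drift` helper and the rule tuples are illustrative, and how you extract the "actual" set (cloud API, `nft list ruleset`, etc.) is environment-specific:

```python
# Sketch: flag drift between documented connection paths and enforced rules.
# Each rule is modeled as a (source, destination, port, protocol) tuple.

def audit_drift(documented: set, actual: set) -> dict:
    """Return rules that exist but are undocumented, and documented
    rules that are no longer enforced (stale)."""
    return {
        "undocumented": actual - documented,  # open but missing from the table
        "stale": documented - actual,         # in the table but not enforced
    }

documented = {
    ("load-balancer", "api-server", 8080, "tcp"),
    ("api-server", "postgres", 5432, "tcp"),
}
actual = {
    ("load-balancer", "api-server", 8080, "tcp"),
    ("api-server", "redis", 6379, "tcp"),  # open, but never documented
}

report = audit_drift(documented, actual)
print(report["undocumented"])  # rules to document or close
print(report["stale"])         # rules to re-create or prune
```

Running such a check on a schedule turns the table from a static artifact into an enforced contract.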

References