Kubernetes Networking — Services, Ingress, NetworkPolicy, and DNS

A practical guide to Kubernetes networking: Service types (ClusterIP, NodePort, LoadBalancer), Ingress controllers, NetworkPolicy for security isolation, and CoreDNS for service discovery.

Introduction

Kubernetes networking is one of the most confusing parts of the platform — and one of the most important to understand. Every Pod gets its own IP address, Pods can communicate directly with each other without NAT, and Services provide stable endpoints for dynamic Pod sets. This guide walks through the entire networking stack from Pod-to-Pod to external traffic ingress.

The Kubernetes Networking Model

Kubernetes mandates a flat network model with three rules:

  • Every Pod has a unique IP address across the cluster
  • Pods on any node can communicate with all Pods on all nodes without NAT
  • Agents on a node (e.g., kubelet) can communicate with all Pods on that node

This flat model is implemented by CNI (Container Network Interface) plugins: Calico, Flannel, Cilium, Weave, etc. The specific plugin you choose affects performance, NetworkPolicy support, and observability.
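The flat model is easy to check from the command line — a quick sketch, assuming you have cluster access and at least one running Pod (the Pod name `pod-a` and the target IP here are placeholders; substitute real values):

```shell
# Every Pod's IP is visible cluster-wide; -o wide adds the IP and NODE columns
kubectl get pods --all-namespaces -o wide

# From any Pod, another Pod's IP is directly reachable — no NAT in between
# (pod-a and 10.244.1.5 are hypothetical)
kubectl exec pod-a -- wget -qO- http://10.244.1.5:8080/health
```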

Services — Stable Endpoints for Dynamic Pods

Pods are ephemeral — they get new IPs when they restart. A Service provides a stable virtual IP (ClusterIP) that load-balances across matching Pods via label selectors.

ClusterIP — Internal Access Only

apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  type: ClusterIP        # default — accessible only within the cluster
  selector:
    app: api             # matches Pods with this label
    tier: backend
  ports:
    - name: http
      port: 80           # service port (what clients connect to)
      targetPort: 8080   # pod port (what the container listens on)
      protocol: TCP

# Verify service is routing to correct pods
kubectl get endpoints api-service -n production
kubectl describe service api-service -n production

# Test connectivity from within the cluster
kubectl run tmp --image=busybox --rm -it --restart=Never -- \
    wget -qO- http://api-service.production.svc.cluster.local

NodePort — External Access via Node IP

apiVersion: v1
kind: Service
metadata:
  name: api-nodeport
spec:
  type: NodePort
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # 30000-32767 range, optional (auto-assigned if omitted)

Accessible at http://<any-node-ip>:30080. Used primarily for development and debugging, not production traffic: there is no managed load balancer in front, so clients must pick a node themselves, and node IPs are exposed directly.
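One related knob worth knowing: externalTrafficPolicy: Local routes NodePort (and LoadBalancer) traffic only to Pods on the node that received it, preserving the client source IP at the cost of dropping traffic on nodes without a matching Pod. A sketch reusing the app: api selector from above (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-nodeport-local
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only nodes running a matching pod answer;
                                 # preserves the client source IP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```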

LoadBalancer — Cloud Load Balancer

apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    # AWS: internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # GCP: internal load balancer
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - port: 443
      targetPort: 8080

Provisions a cloud provider load balancer (AWS Classic ELB/NLB, GCP Load Balancer, Azure Load Balancer). Each LoadBalancer Service gets its own external IP/DNS — expensive at scale, which is why Ingress exists.
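The external address appears in the Service's status once the cloud provider finishes provisioning — a quick way to watch for it (the jsonpath output is empty until the load balancer is ready, and some providers report a hostname instead of an ip):

```shell
# Watch until EXTERNAL-IP flips from <pending> to a real address
kubectl get service api-lb --watch

# Or pull just the address from the status field
kubectl get service api-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```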

ExternalName — DNS Alias

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ExternalName
  externalName: mydb.us-east-1.rds.amazonaws.com
  # Pods access "database.default.svc.cluster.local"
  # and get a CNAME to the RDS endpoint

Ingress — HTTP Routing at the Edge

An Ingress exposes HTTP/HTTPS routes to Services, consolidating many services behind a single load balancer. You need an Ingress Controller (nginx, Traefik, HAProxy, the AWS Load Balancer Controller) installed — Ingress resources do nothing without a controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
        - app.example.com
      secretName: tls-secret          # cert-manager creates this
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1-service
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2-service
                port:
                  number: 80
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80

# Install nginx ingress controller (cloud-provider manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# Check ingress controller pods
kubectl get pods -n ingress-nginx

# Check ingress address
kubectl get ingress -n production
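Before DNS for the hosts is live, you can exercise the routing rules directly against the controller's address by overriding the Host header — a sketch, with <ingress-ip> standing in for the address reported above:

```shell
# Hits the api.example.com rule → api-v1-service
curl -H "Host: api.example.com" http://<ingress-ip>/v1

# Hits the app.example.com rule → frontend-service
curl -H "Host: app.example.com" http://<ingress-ip>/
```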

CoreDNS — Service Discovery

CoreDNS is the cluster DNS server. Every Service automatically gets a DNS record in the format:

<service-name>.<namespace>.svc.cluster.local

# Examples:
api-service.production.svc.cluster.local
database.default.svc.cluster.local

# Within the same namespace, short name works:
http://api-service/health

# From another namespace, use full name:
http://api-service.production.svc.cluster.local/health

# Debug DNS resolution
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
    --rm -it --restart=Never -- nslookup api-service.production.svc.cluster.local

# Check CoreDNS config
kubectl describe configmap coredns -n kube-system

# Check CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
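The ConfigMap holds a Corefile, CoreDNS's own configuration syntax. A sketch of the default layout, plus a hypothetical extra zone (corp.example.com and the 10.0.0.10 resolver are placeholders) showing how queries for an internal domain can be forwarded to a separate DNS server:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf   # everything else goes to the node's resolver
    cache 30
}
corp.example.com:53 {
    forward . 10.0.0.10          # hypothetical internal DNS server
}
```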

NetworkPolicy — Firewall Rules for Pods

By default, all Pods can communicate with all other Pods (and the internet). NetworkPolicy restricts traffic at the Pod level. Requires a CNI plugin that supports NetworkPolicy (Calico, Cilium, Weave Net — NOT Flannel by default).

# Default-deny all ingress and egress for a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}      # applies to ALL pods in namespace
  policyTypes:
    - Ingress
    - Egress

# Allow frontend to reach api, api to reach database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api          # this policy applies to api pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only from frontend pods
      ports:
        - protocol: TCP
          port: 8080

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api        # only from api pods
      ports:
        - protocol: TCP
          port: 5432

# Allow egress to DNS and specific external services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53   # DNS
        - protocol: TCP
          port: 53
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8   # internal services only
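Policies can also select by namespace. A sketch allowing Prometheus scrapes into production from a monitoring namespace — the name: monitoring namespace label, the app: prometheus pod label, and port 9090 are all assumptions for illustration. Note that combining namespaceSelector and podSelector in one `from` entry means both must match:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector: {}               # all pods in production
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring  # hypothetical namespace label
          podSelector:
            matchLabels:
              app: prometheus   # hypothetical scraper pod label
      ports:
        - protocol: TCP
          port: 9090            # hypothetical metrics port
```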

Headless Services — Direct Pod Addressing

A headless Service (clusterIP: None) does not allocate a virtual IP. DNS returns the actual Pod IPs. Used for stateful applications (databases, Kafka) where clients need to connect to specific instances.

apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None      # headless
  selector:
    app: postgres
  ports:
    - port: 5432
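Paired with a StatefulSet whose serviceName points at this Service, each replica additionally gets its own stable per-Pod DNS record, so clients can target a specific instance (assuming a StatefulSet named postgres in the default namespace):

```
postgres-0.postgres-headless.default.svc.cluster.local
postgres-1.postgres-headless.default.svc.cluster.local
```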

Debugging Network Issues

# Test pod-to-pod connectivity
kubectl exec -it pod-a -- curl http://pod-b-ip:8080/health

# Test service connectivity
kubectl exec -it pod-a -- curl http://api-service.production.svc.cluster.local/health

# Check iptables rules (on the node)
sudo iptables -t nat -L KUBE-SERVICES

# Trace network path with Cilium (if installed)
cilium connectivity test

# Check endpoints (are pods being selected?)
kubectl get endpoints api-service -n production

# Port forward for local debugging
kubectl port-forward svc/api-service 8080:80 -n production
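For Pods built from minimal images with no shell or curl, kubectl debug can attach an ephemeral container that does have networking tools — a sketch using the community nicolaka/netshoot image and a hypothetical pod-a:

```shell
# Attach an ephemeral debug container sharing pod-a's network namespace
kubectl debug -it pod-a --image=nicolaka/netshoot --target=pod-a

# Then run dig, tcpdump, curl, etc. from inside it
```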


Summary

Kubernetes networking in one mental model: Pods communicate directly via flat IPs, Services provide stable virtual IPs with load balancing, Ingress routes external HTTP traffic to services, and NetworkPolicy restricts traffic like a firewall. The key pieces:

  • ClusterIP — internal service discovery (default)
  • NodePort — debug access via node IPs
  • LoadBalancer — cloud load balancer, one per service
  • Ingress — HTTP routing, one load balancer for many services
  • NetworkPolicy — Pod-level firewall rules (default-deny is best practice)
  • CoreDNS — automatic DNS for all services
