QCecuring - Enterprise Security Solutions

TLS Certificates in Kubernetes

Amarjeet Shukla

Key Takeaways

  • Kubernetes uses certificates at three layers: cluster infrastructure (API server, kubelet), ingress (external TLS termination), and service-to-service (mTLS between pods)
  • TLS certificates are stored as Kubernetes Secrets of type `kubernetes.io/tls` — containing `tls.crt` and `tls.key`
  • Cluster CA certificates (API server, kubelet) are managed by kubeadm and expire after 1 year by default — missed renewal breaks the entire cluster
  • cert-manager automates certificate lifecycle for application workloads but doesn't manage cluster infrastructure certificates

Kubernetes uses TLS certificates extensively — for securing the API server, authenticating kubelets, encrypting etcd communication, terminating TLS at ingress controllers, and enabling mTLS between services. Certificates exist at multiple layers with different management mechanisms: cluster infrastructure certificates are managed by kubeadm (or the managed K8s provider), while application certificates are typically managed by cert-manager or the service mesh. Understanding which certificates exist where is critical because an expired certificate at any layer can take down the entire cluster or break application connectivity.


Why it matters

  • Cluster integrity — the Kubernetes API server, kubelet, etcd, and controller-manager all authenticate each other with TLS certificates. If any cluster certificate expires, components can’t communicate and the cluster becomes unmanageable.
  • Ingress security — external traffic enters the cluster through ingress controllers that terminate TLS. These certificates are what users see in their browsers — expiry means a public-facing outage.
  • Service-to-service encryption — in zero-trust architectures, pod-to-pod traffic must be encrypted. This requires certificates for every service, managed at scale.
  • Secret management — TLS private keys stored in Kubernetes Secrets are base64-encoded (not encrypted) by default. Anyone with Secret read access has the private key.
  • Multi-cluster complexity — each cluster has its own CA. Cross-cluster communication requires explicit trust configuration between cluster CAs.
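The "base64 is not encryption" point above is worth seeing concretely. A minimal sketch (simulated without a cluster — the value below stands in for what `kubectl get secret app-tls -o jsonpath='{.data.tls\.key}'` would return):

```shell
# Secret data values are only base64-encoded. Anyone who can read the
# Secret can trivially recover the private key with one command.
encoded=$(printf '%s' '-----BEGIN PRIVATE KEY-----' | base64)
echo "$encoded"                       # looks opaque, but is not encrypted
printf '%s' "$encoded" | base64 -d    # the plaintext comes straight back
```

This is why Secret read access (RBAC) and etcd encryption-at-rest matter: base64 provides transport safety, not confidentiality.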

How it works

Layer 1: Cluster infrastructure certificates

  • API server serving certificate (clients connect to API server)
  • API server client certificates (API server authenticates to kubelet, etcd)
  • Kubelet client/server certificates (kubelet authenticates to API server and serves metrics)
  • etcd peer and client certificates (etcd cluster communication)
  • Front-proxy certificate (API aggregation layer)
  • Cluster CA certificates (kubeadm actually maintains three CAs — cluster CA, etcd CA, and front-proxy CA — which sign the certificates above)
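The signing relationship can be sketched with plain openssl. This is a simplified imitation of what kubeadm does under /etc/kubernetes/pki (the file names mirror kubeadm's; the subject names are illustrative and real certs carry SANs for the API server's names and IPs):

```shell
# Generate a "cluster CA", then sign an API server serving cert with it.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes"

openssl req -newkey rsa:2048 -nodes \
  -keyout apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"

openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out apiserver.crt

# Every component trusts ca.crt, so the chain must verify against it:
openssl verify -CAfile ca.crt apiserver.crt   # apiserver.crt: OK
```

Clients (kubectl, kubelets) are handed ca.crt and accept any certificate it signed — which is also why an expired CA is so much worse than one expired leaf certificate.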

Layer 2: Ingress certificates

  • Stored as kubernetes.io/tls Secrets
  • Referenced by Ingress resources or Gateway API routes
  • Ingress controller loads the certificate and terminates TLS
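A sanity check worth running before packaging a certificate into a kubernetes.io/tls Secret: confirm the cert and key actually belong together, since a mismatched pair is a common cause of ingress TLS failures. A sketch (the self-signed pair here is for demonstration; in practice tls.crt and tls.key come from your CA or ACME issuer, and the kubectl names are illustrative):

```shell
# Demo pair -- replace with your real cert and key.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout tls.key -out tls.crt -subj "/CN=app.example.com"

# A matching cert and key embed the same public key:
crt_pub=$(openssl x509 -in tls.crt -noout -pubkey)
key_pub=$(openssl pkey -in tls.key -pubout)
[ "$crt_pub" = "$key_pub" ] && echo "cert and key match"

# Then create or update the Secret (names are illustrative):
# kubectl create secret tls app-tls --cert=tls.crt --key=tls.key -n production
```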

Layer 3: Application/service certificates

  • Issued by cert-manager, service mesh (Istio/Linkerd), or SPIRE
  • Used for mTLS between pods, webhook servers, and internal APIs
  • Stored as Secrets, mounted into pods as volumes
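For cert-manager-issued application certificates, the declarative shape is a Certificate resource. A minimal sketch — the issuer name, namespace, and DNS name below are assumptions for illustration:

```yaml
# cert-manager issues and renews the certificate, writing the result
# into a kubernetes.io/tls Secret that pods or webhooks can mount.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-api-tls
  namespace: production
spec:
  secretName: internal-api-tls    # Secret cert-manager creates/updates
  dnsNames:
  - internal-api.production.svc.cluster.local
  issuerRef:
    name: internal-ca-issuer      # assumed Issuer in the same namespace
    kind: Issuer
```

cert-manager renews the certificate before expiry and rewrites the Secret in place; consumers must reload it (see "Where it breaks" below).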

In real systems

TLS Secret for Ingress:

apiVersion: v1
kind: Secret
metadata:
  name: app-tls
  namespace: production
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate + chain>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80

Checking cluster certificate expiry (kubeadm):

# View all cluster certificate expiry dates
kubeadm certs check-expiration

# Output:
# CERTIFICATE                EXPIRES                  RESIDUAL TIME
# admin.conf                 Mar 15, 2027 10:00 UTC   364d
# apiserver                  Mar 15, 2027 10:00 UTC   364d
# apiserver-etcd-client      Mar 15, 2027 10:00 UTC   364d
# apiserver-kubelet-client   Mar 15, 2027 10:00 UTC   364d
# etcd-healthcheck-client    Mar 15, 2027 10:00 UTC   364d
# etcd-peer                  Mar 15, 2027 10:00 UTC   364d
# etcd-server                Mar 15, 2027 10:00 UTC   364d

# Renew all cluster certificates
kubeadm certs renew all
# Then restart control plane components

Managed Kubernetes (EKS, GKE, AKS):

# Cluster infrastructure certificates are managed by the provider
# You never see or manage API server, kubelet, or etcd certs
# Your responsibility: ingress certificates and application certificates only

# EKS: API server cert managed by AWS, rotated automatically
# GKE: cluster CA rotated via: gcloud container clusters update --start-credential-rotation
#      (finish with --complete-credential-rotation)
# AKS: cluster certificates auto-rotated (check with az aks show)

Where it breaks

Cluster certificates expire (kubeadm) — kubeadm-managed clusters have certificates that expire after 1 year. If nobody runs kubeadm certs renew before expiry, the API server can’t authenticate kubelets, kubectl stops working, and the cluster is effectively dead. Recovery requires manual certificate regeneration on control plane nodes. Managed K8s (EKS, GKE, AKS) handles this automatically — self-managed clusters do not.

Secret not updated after certificate renewal — cert-manager renews a certificate and updates the Secret. But the ingress controller (or application pod) has the old certificate cached in memory. Nginx-ingress watches Secrets and hot-reloads, but some controllers or applications require a pod restart to pick up the new certificate. Verify that your ingress controller supports dynamic certificate reloading.

TLS Secret in wrong namespace — an Ingress resource in namespace production references a Secret app-tls, but the Secret was created in namespace default. Ingress can only reference Secrets in the same namespace (unless using cross-namespace features in Gateway API). The Ingress silently falls back to the default certificate or fails TLS entirely.


Operational insight

The most dangerous Kubernetes certificate failure is cluster infrastructure expiry on self-managed clusters. Unlike application certificates (where expiry causes one service to fail), cluster certificate expiry breaks kubectl, prevents deployments, stops scaling, and makes the cluster unmanageable — you can’t even fix the problem through normal K8s operations because the API server is unreachable. Set a monitoring alert for cluster certificate expiry at 60 days, and automate renewal with a cron job or systemd timer that runs kubeadm certs renew all monthly. Or better: use a managed Kubernetes service where the provider handles infrastructure certificate rotation.
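The 60-day alert above can be sketched with openssl alone: `-checkend N` exits non-zero when a certificate expires within N seconds, which a cron job or monitoring check can turn into an alert. (The 30-day demo certificate here is generated for illustration; in practice you would point the check at, e.g., /etc/kubernetes/pki/apiserver.crt or a cert extracted from a Secret.)

```shell
# Demo certificate that expires in 30 days -- so the 60-day check fires.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout demo.key -out demo.crt -subj "/CN=demo"

# Exit status is non-zero if expiry falls within the next 60 days:
if ! openssl x509 -in demo.crt -noout -checkend $((60*24*3600)); then
  echo "ALERT: certificate expires within 60 days"
fi
```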

