Using GKE DNS-Based Endpoints for Secure Cluster Access

2025-03-02

TL;DR

  • DNS-based GKE endpoints change how public and private control planes can be accessed externally and internally in Google Cloud.
  • Private GKE endpoints with internal IPs can now be accessed externally using a DNS-based endpoint—no need for bastion hosts or VPNs.
  • Public GKE endpoints can be hardened by layering Cloud IAM authorization on API server requests.
  • No other major cloud provider currently offers IAM-gated external access to private cluster endpoints.
  • DNS-based access works today via gcloud or Terraform on new/existing GKE clusters.

Background: GKE Cluster Endpoint Models

Traditionally, GKE API servers are accessed via IP-based endpoints:

  • Public IP: Globally accessible, optionally restricted via Master Authorized Networks.
  • Private IP: Internal-only, routable within VPC. Requires VPN, Interconnect, or bastion host.
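For contrast, the traditional IP-based flow looks like this; a minimal sketch, assuming a cluster named my-cluster (illustrative) in europe-west2:

```shell
# Traditional IP-based access: get-credentials writes a kubeconfig entry
# whose server field is the cluster's endpoint IP. For a private cluster,
# that IP is only reachable from inside the VPC (VPN/Interconnect/bastion).
gcloud container clusters get-credentials my-cluster \
  --location europe-west2

# The resulting server entry is an IP address, not an FQDN:
kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
```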

Introducing DNS-Based Endpoints

DNS-based GKE endpoints offer access to the control plane using a cluster-unique FQDN, e.g.:

gke-xxxxx-xxxxx.europe-west2.gke.goog

This FQDN resolves to a Google Cloud IP, which fronts the GKE API server. This adds an authorization layer via Cloud IAM before requests hit the Kubernetes API server.

Works with both public and private GKE endpoints.

Architectural Implications

Before:

  • Private endpoints required private networking (VPN/Interconnect/Bastion).
  • Public endpoints exposed clusters to external traffic directly.

After:

  • Private GKE clusters can now be accessed externally via FQDN + IAM auth.
  • Public GKE clusters benefit from IAM-based filtering before RBAC applies.

Authorisation via IAM

Traditional IP-based GKE access:

  • Requests go straight to the API server.
  • Access restricted via IP + RBAC.

DNS-based endpoint:

  • Requests first go through a Google Cloud API.
  • IAM checks are enforced, e.g., container.clusters.connect permission.
  • Then forwarded to the GKE control plane for RBAC enforcement.

# Grant IAM access
$ gcloud projects add-iam-policy-binding devops-mo \
  --member=serviceAccount:gke-priv-access@devops-mo.iam.gserviceaccount.com \
  --role=roles/container.developer

# Retrieve credentials via DNS endpoint
$ gcloud container clusters get-credentials gke-dns-private \
  --dns-endpoint \
  --location europe-west2

IAM failure example

$ kubectl cluster-info
Error: Permission 'container.clusters.connect' denied on resource

Benefits of DNS Endpoint Access

Private Clusters

  • No VPNs or bastion hosts required
  • Access internal-only GKE API servers using external connectivity
  • Reduces attack surface and infrastructure overhead

Public Clusters

  • Enforce IAM on public endpoint traffic
  • Block unauthenticated users from reaching /healthz, /version, and discovery APIs

Without DNS Endpoint:

$ curl -k https://<public-ip>/readyz
$ curl -k https://<public-ip>/version
# Both return 200 OK

With DNS Endpoint:

$ curl -k https://gke-<hash>.europe-west2.gke.goog/readyz
$ curl -k https://gke-<hash>.europe-west2.gke.goog/version
# Both return 403 Forbidden

Example: Deploying a Private GKE Cluster

Create Private GKE Cluster with DNS Endpoint

gcloud container clusters create-auto gke-dns-private \
  --enable-dns-access \
  --enable-private-nodes \
  --enable-private-endpoint \
  --location europe-west2 \
  --enable-master-authorized-networks

Inspect Cluster Networking

# Shows internal IP only
$ gcloud container clusters describe gke-dns-private --location europe-west2 --format="value(endpoint)"
10.154.0.13

# Shows FQDN
$ gcloud container clusters describe gke-dns-private --location europe-west2 --format="value(controlPlaneEndpointsConfig.dnsEndpointConfig.endpoint)"
gke-<hash>.europe-west2.gke.goog

# Resolve DNS
$ dig +short gke-<hash>.europe-west2.gke.goog
216.239.32.27

Try Direct Internal IP (fails externally)

$ kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
https://10.154.0.13
$ kubectl get ns
# fails: timeout or unreachable

Use FQDN with External Access

$ gcloud container clusters get-credentials gke-dns-private \
  --dns-endpoint \
  --location europe-west2

$ kubectl cluster-info
Kubernetes control plane is running at https://gke-<hash>.europe-west2.gke.goog
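A quick local way to tell which endpoint type kubectl is using: the DNS-based endpoint always ends in .gke.goog, while the IP-based endpoint is a bare address. A minimal sketch (the FQDN is an illustrative value, not a real cluster):

```shell
# Classify a kubeconfig server entry as DNS-based or IP-based.
# In practice, take the value from:
#   kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
server="https://gke-abc123.europe-west2.gke.goog"   # illustrative value

case "$server" in
  https://*.gke.goog) echo "dns-endpoint" ;;
  *)                  echo "ip-endpoint" ;;
esac
# Prints: dns-endpoint
```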

Public Endpoint Abuse vs IAM Gatekeeping

Problem:

Even restricted public endpoints expose cluster version and health APIs to unauthenticated users.

# Anonymous access
$ curl -k https://<public-ip>/readyz
$ curl -k https://<public-ip>/version

Authenticated (but external) users can still reach discovery APIs:

$ kubectl auth whoami
Username: mo@devops.com
$ kubectl get --raw='/apis'
# Returns API groups

With DNS endpoint enabled:

$ curl -k https://<fqdn>/version
Error: Permission 'container.clusters.connect' denied

Migrating Existing Clusters

gcloud container clusters update example-auto-priv \
  --enable-dns-access \
  --enable-private-nodes \
  --enable-private-endpoint \
  --location europe-west2

Or via Terraform:

resource "google_container_cluster" "dns_private" {
  name             = "gke-priv-dns"
  location         = "europe-west2"
  enable_autopilot = true

  control_plane_endpoints_config {
    dns_endpoint_config {
      allow_external_traffic = true
    }
  }

  master_authorized_networks_config {}

  private_cluster_config {
    enable_private_nodes = true
    enable_private_endpoint = true
  }
}
# Non-destructive: Can be added to existing clusters.
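After applying either path, you can confirm that external DNS access is enabled; a sketch using the Terraform cluster name above and the field names from the current controlPlaneEndpointsConfig schema:

```shell
# Verify the DNS endpoint allows external traffic.
# Prints True when external DNS-based access is enabled.
gcloud container clusters describe gke-priv-dns \
  --location europe-west2 \
  --format="value(controlPlaneEndpointsConfig.dnsEndpointConfig.allowExternalTraffic)"
```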

Wrap Up

DNS-based GKE endpoints:

  • Enable external access to private clusters without VPNs/bastions
  • Secure public endpoints by adding IAM authorization
  • Reduce infra complexity and operational overhead
  • Eliminate attack surfaces from bastions and public IPs

A clear evolution from IP-based perimeter security to identity-based secure access using Google Cloud’s APIs.

  • Similar to Identity-Aware Proxy (IAP), DNS-based endpoints shift access control to the cloud perimeter.

Use DNS endpoints now to modernise your GKE networking and improve your security posture.
