Private AKS Cluster with Twingate: Secure API Access Without a Public Endpoint
2024-05-06
Introduction
Running Kubernetes clusters privately is a growing best practice. In this blog, I'll walk you through deploying a private AKS cluster on Azure with no public API endpoint, and enabling secure access via Twingate, a zero-trust access solution that provides identity-based access without opening up your network.
This setup is ideal if:
- You want private networking in AKS (via Azure Private Link)
- You need granular access control over the cluster
- You want to avoid managing full VPN appliances or bastion hosts
What We'll Build
- A private AKS cluster with no public API server
- A Twingate Connector running in the same VNet
- Twingate configured with the AKS API server as a protected resource
- Remote access to the cluster using kubectl via Twingate
Infrastructure Setup
- Provision a private AKS cluster
You can use the az CLI or Terraform (recommended):
az aks create \
  --name dev-private \
  --resource-group dev-private_group \
  --enable-private-cluster \
  --vnet-subnet-id /subscriptions/.../subnets/aks-subnet \
  --node-count 2 \
  --generate-ssh-keys
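Provisioning takes a few minutes. Before moving on, it's worth confirming the cluster reached a healthy state (cluster and resource group names as above):
# Should print "Succeeded" once the cluster is ready
az aks show --name dev-private --resource-group dev-private_group --query provisioningState -o tsv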
Terraform:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "rg" {
name = "rg-private-aks"
location = "UK South"
}
resource "azurerm_virtual_network" "vnet" {
name = "vnet-private-aks"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
}
resource "azurerm_subnet" "aks_subnet" {
name = "snet-aks"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "private-aks-cluster"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "privateaks"
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.aks_subnet.id
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "azure"
dns_service_ip = "10.0.2.10"
service_cidr = "10.0.2.0/24"
docker_bridge_cidr = "172.17.0.1/16"
}
api_server_access_profile {
enable_private_cluster = true
}
tags = {
Environment = "Private"
}
}
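With the configuration above saved as main.tf, the standard workflow provisions everything:
terraform init   # download the azurerm provider
terraform plan   # review what will be created
terraform apply  # create the VNet, subnet, and private AKS cluster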
The --enable-private-cluster flag (private_cluster_enabled in Terraform) ensures the Kubernetes API server is only reachable over the VNet.
To check the API server endpoint:
az aks show --name dev-private --resource-group dev-private_group --query privateFqdn
You'll get something like:
dev-private-dns-xxxxxx.privatelink.uksouth.azmk8s.io
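This name resolves only inside the VNet, via the cluster's privatelink DNS zone. A quick sanity check, assuming you have any VM already running in the VNet (a hypothetical jumpbox):
# From a machine inside the VNet: the private FQDN should resolve to a private IP
nslookup dev-private-dns-xxxxxx.privatelink.uksouth.azmk8s.io
# From your laptop (outside the VNet, not yet on Twingate) the same lookup fails,
# which is exactly the behaviour a private cluster should exhibit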
Push Twingate Connector Image to Azure Container Registry (ACR)
If using a private ACR:
az acr login --name <your-acr>
docker pull --platform linux/amd64 twingate/connector:1
docker tag twingate/connector:1 <your-acr>.azurecr.io/twingate/connector:1
docker push <your-acr>.azurecr.io/twingate/connector:1
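Before wiring the image into ACI, confirm the push landed in the registry:
# List the tags now present in the connector repository
az acr repository show-tags --name <your-acr> --repository twingate/connector -o table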
Deploy Twingate Connector as Azure Container Instance
az container create \
  --name twingate-connector \
  --image <your-acr>.azurecr.io/twingate/connector:1 \
  --resource-group dev-private_group \
  --vnet dev-private_group-vnet \
  --subnet twingate \
  --cpu 1 \
  --memory 2 \
  --environment-variables \
    TWINGATE_NETWORK="mo-demo" \
    TWINGATE_TIMESTAMP_FORMAT="2" \
    TWINGATE_LABEL_DEPLOYED_BY="azure" \
  --secure-environment-variables \
    TWINGATE_ACCESS_TOKEN="<your-access-token>" \
    TWINGATE_REFRESH_TOKEN="<your-refresh-token>"
The access and refresh tokens are secrets, so they are passed via --secure-environment-variables rather than --environment-variables, keeping them out of the container group's visible properties.
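Once the container is up, check that the connector started cleanly and has registered with Twingate:
# Container group state should report Running
az container show --name twingate-connector --resource-group dev-private_group \
  --query instanceView.state -o tsv
# The connector logs should show it connecting to your Twingate network
az container logs --name twingate-connector --resource-group dev-private_group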
Terraform:
resource "azurerm_container_group" "twingate_connector" {
name = "twingate-connector"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
os_type = "Linux"
container {
name = "twingate"
image = "youracrname.azurecr.io/twingate-connector:1"
cpu = "1"
memory = "1.5"
environment_variables = {
TWINGATE_NETWORK = "your_network"
TWINGATE_ACCESS_TOKEN = "your_access_token"
TWINGATE_REFRESH_TOKEN = "your_refresh_token"
TWINGATE_LABEL_DEPLOYED_BY = "terraform"
}
}
ip_address_type = "Private"
subnet_ids = [azurerm_subnet.aks_subnet.id]
}
Configure Twingate
- Set up a Twingate account
- Create a new remote network in Twingate
Add the AKS API Server as a Twingate Resource
In the Twingate Admin Console:
- Create a new Remote Network for your VNet (e.g., aks-vnet)
- Deploy the Connector to that Remote Network (done above)
- Add a Resource with the private AKS API DNS name
- e.g., dev-private-dns-xxxxxx.privatelink.uksouth.azmk8s.io
- Port: 443
- Assign the Resource to a group (e.g., engineering)
Access the Cluster via Twingate
Once the Connector is live and you're authenticated via the Twingate client:
az aks get-credentials --resource-group dev-private_group --name dev-private
kubectl get nodes
It works because your local traffic is tunneled securely through Twingate and routed to the private API server over the VNet!
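If you want to see the tunnel at work, resolve the API server's private FQDN from your own machine while the Twingate client is connected (the address returned is Twingate's mapped address for the resource, not the real private IP):
# Resolves only while the Twingate client is connected and you're authorized
nslookup dev-private-dns-xxxxxx.privatelink.uksouth.azmk8s.io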
You can also run commands against the cluster through Azure's control plane (no Twingate connection required) using:
az aks command invoke \
  --resource-group dev-private_group \
  --name dev-private \
  --command "kubectl get pods -A"
Bonus: Ingress Whitelisting
If you're using an NGINX ingress controller and want to restrict access to known IPs, annotate the Ingress with a CIDR (or a comma-separated list of CIDRs):
nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.4.5/32
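For example, to apply this to a hypothetical Ingress named my-app in the default namespace (both names are placeholders):
# Restrict my-app to a single source address, in CIDR notation
kubectl annotate ingress my-app -n default \
  nginx.ingress.kubernetes.io/whitelist-source-range=10.0.4.5/32 --overwrite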
Summary
With this setup:
- Your AKS cluster is completely private
- Access is secured and identity-aware via Twingate
- No public exposure, no bastion, no hassle