AWS Controllers for Kubernetes
2024-05-06
Kubernetes as a Cloud Control Plane: Deep Dive into AWS ACK with kind
Kubernetes isn't just about container orchestration anymore — it's become the de facto control plane for everything. With Custom Resource Definitions (CRDs) and controllers, we can now describe and manage virtually any resource declaratively — whether it's an app, an S3 bucket, or an RDS instance. That’s where AWS Controllers for Kubernetes (ACK) come in.
ACK lets us manage AWS resources like RDS or EC2 the same way we manage Deployments and Services: using YAML manifests. The idea is powerful — replace scattered IaC tooling with a unified K8s-native approach. And if you’re already deploying everything through Argo CD, Flux, or Helm, why not include your cloud infra?
But does it actually hold up?
That’s what this post explores — we’ll walk through an end-to-end demo running ACK on a local kind cluster.
⚙️ Environment Setup
mkdir ack-demo
cd ack-demo
🐳 Start Docker and Create Your Local Cluster
Ensure Docker is running. We’ll use it to spin up a local Kubernetes cluster using kind.
kind create cluster --config kind.yaml
kind.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dot
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |-
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 8080
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
dot.nu:
#!/usr/bin/env nu

source scripts/common.nu
source scripts/kubernetes.nu
source scripts/ack.nu
source scripts/crossplane.nu

def main [] {}

def "main setup" [] {
    rm --force .env
    main create kubernetes aws
    kubectl create namespace a-team
    kubectl --namespace a-team apply --filename rds-password.yaml
    main apply ack
    main apply crossplane --preview true --provider aws
    (
        kubectl apply --filename crossplane-providers/cluster-role.yaml
    )
    (
        kubectl apply --filename crossplane-providers/provider-kubernetes-incluster.yaml
    )
    kubectl apply --filename dot-sql-config.yaml
    main wait crossplane
    main print source
}

def "main destroy" [] {
    main destroy kubernetes aws
    main delete ack
}
devbox shell
chmod +x dot.nu
./dot.nu setup
source .env
The kind.yaml file defines the networking and node settings for the local cluster, while dot.nu wraps the rest of the setup: the a-team namespace, the RDS password Secret, and the ACK and Crossplane installations.
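Before creating any AWS resources, it's worth a quick sanity check that the local cluster is up. This is a manual check, not part of the setup script, and it assumes the kind cluster defined in kind.yaml (named dot):
kubectl config current-context   # should print kind-dot
kubectl get nodes                # the control-plane node should be Ready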
🧬 ACK CRDs: AWS Infra as K8s Resources
Once the ACK controllers are installed, Kubernetes gains a bunch of new CRDs that represent AWS services — from RDS and S3 to EC2 subnets and VPCs.
Let’s take a look:
kubectl get crds | grep k8s.aws
You’ll see CRDs like:
dbinstances.rds.services.k8s.aws
vpcs.ec2.services.k8s.aws
subnets.ec2.services.k8s.aws
securitygroups.ec2.services.k8s.aws
...
Each CRD represents an AWS resource you can now define using a Kubernetes manifest. Think of it as Terraform, but via kubectl apply.
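Because these are ordinary CRDs with OpenAPI schemas, the usual discovery tooling applies. For instance, kubectl explain can show what a DBInstance accepts (assuming only the ACK RDS controller registers a dbinstances resource, so the short name is unambiguous):
# Inspect the spec schema the RDS controller registered
kubectl explain dbinstances.spec
# Drill into a single field
kubectl explain dbinstances.spec.masterUserPassword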
🚀 Deploying the ACK RDS Controller (locally with kind)
Since we’re using kind, not EKS, there’s no IRSA (IAM Roles for Service Accounts) — so we authenticate with static AWS credentials injected into a Kubernetes Secret.
You already did this with ./dot.nu setup, but for clarity:
kubectl create ns ack-system
kubectl create secret generic ack-creds \
  -n ack-system \
  --from-literal=AWS_ACCESS_KEY_ID=<key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret>
Install the RDS ACK controller with Helm:
helm install ack-rds-controller \
  oci://public.ecr.aws/aws-controllers-k8s/rds-chart \
  --version <chart-version> \
  -n ack-system \
  --set aws.region=us-east-1 \
  --set aws.secret.name=ack-creds
Now your cluster has a controller watching DBInstance resources and creating matching RDS instances in AWS.
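Before defining any AWS resources, it's worth confirming the controller pod is running and its CRDs are registered (pod and CRD names can vary slightly between chart versions):
kubectl -n ack-system get pods
kubectl get crds | grep rds.services.k8s.aws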
🏗️ Defining an RDS PostgreSQL Instance (and dependencies)
Creating a DB in AWS isn’t one resource — it’s VPCs, Subnets, Gateways, RouteTables, and finally the DBInstance.
Here’s a peek from rds.yaml:
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: my-db
spec:
  cidrBlock: 11.0.0.0/16
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: InternetGateway
metadata:
  name: my-db
spec:
  vpcRef:
    from:
      name: my-db
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: my-db-a
spec:
  cidrBlock: 11.0.1.0/24
  availabilityZone: us-east-1a
  vpcRef:
    from:
      name: my-db
...
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: my-db
  annotations:
    services.k8s.aws/region: us-east-1
spec:
  dbInstanceClass: db.t3.micro
  engine: postgres
  engineVersion: "16.3"
  allocatedStorage: 20
  masterUsername: masteruser
  masterUserPassword:
    name: my-db-password
    key: password
  dbInstanceIdentifier: my-db
  publiclyAccessible: true
  dbSubnetGroupRef:
    from:
      name: my-db
  vpcSecurityGroupRefs:
    - from:
        name: my-db
Also, create a Secret with the DB password:
apiVersion: v1
kind: Secret
metadata:
  name: my-db-password
stringData:
  password: MySuperSecretPassw0rd!
Apply all resources:
kubectl -n a-team apply -f rds-password.yaml
kubectl -n a-team apply -f rds.yaml
Check the instance status:
kubectl -n a-team get dbinstances.rds.services.k8s.aws
Once it's STATUS=available, your RDS instance is live.
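Instead of polling by hand, you can block until the instance reports available. This is a sketch that assumes kubectl 1.23 or newer (for JSONPath waits) and that the resources live in the a-team namespace:
kubectl -n a-team wait dbinstances.rds.services.k8s.aws my-db \
    --for=jsonpath='{.status.dbInstanceStatus}'=available \
    --timeout=30m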
🔍 Observability in ACK: Where’s the Feedback?
Once you start working with ACK, you’ll quickly hit a snag: visibility into resource states is... primitive at best.
You might try this:
kubectl -n a-team get \
    vpcs.ec2.services.k8s.aws,internetgateways.ec2.services.k8s.aws,routetables.ec2.services.k8s.aws,securitygroups.ec2.services.k8s.aws,subnets.ec2.services.k8s.aws,dbsubnetgroups.rds.services.k8s.aws,dbinstances.rds.services.k8s.aws
Output:
NAME    ID                         STATE
my-db   vpc-07cdab5bba559b994      available
...
my-db   subnet-0340...             available
...
my-db   db-2NOHJMPDDGYBPY6MH...    creating
You get barebones info like STATE or STATUS, but it’s inconsistent and often misleading. Some resources use STATE, others use STATUS, and many don’t update accurately.
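One workaround is to define your own columns with kubectl's custom-columns output, at least for the kinds you care about. A sketch for DBInstances, relying on the .status.dbInstanceStatus field discussed below; other ACK kinds expose differently named status fields, so each type needs its own column list:
kubectl -n a-team get dbinstances.rds.services.k8s.aws \
    -o custom-columns=NAME:.metadata.name,STATUS:.status.dbInstanceStatus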
😡 Case Study: Broken Subnet
kubectl -n a-team describe subnet.ec2.services.k8s.aws my-db-x
Yields:
Message:  api error InvalidParameterValue: Value (us-east-1x) for parameter availabilityZone is invalid
Type:     ACK.Terminal
That’s actually helpful — the AZ doesn’t exist. But now try describing a working DBInstance:
kubectl -n a-team describe dbinstance.rds.services.k8s.aws my-db
You’ll get a dump of AWS metadata with no intuitive indication of what’s actually happening. You have to hunt down .status.dbInstanceStatus and hope it says something like available or configuring-enhanced-monitoring.
No Ready condition. No events. No unified status format. ACK controllers don’t expose a standard Ready condition, only their own ACK-specific condition types. That’s not just annoying — it breaks integrations with Argo CD, Crossplane, and others.
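What the controllers do set are their own condition types, such as ACK.Terminal in the broken-subnet example above and ACK.ResourceSynced once a resource matches its AWS counterpart. If you're willing to script around that, kubectl can wait on those ACK-specific conditions; a sketch, assuming the condition type string ACK.ResourceSynced and the a-team namespace:
kubectl -n a-team wait dbinstances.rds.services.k8s.aws my-db \
    --for=condition=ACK.ResourceSynced \
    --timeout=30m
That helps in one-off scripts, but it doesn't fix the underlying problem: tools that expect a conventional Ready signal still need custom health checks for every ACK kind.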