Container Networking Deep Dive Part 1: Single Network Namespace on a VM
2024-04-14
Introduction
This is Part 1 of our Container Networking Deep Dive series. In this hands-on deep dive, we show how to simulate container-like networking using Linux primitives: ip netns, veth pairs, and routing tables.
We're building it all from scratch. Think of it like plugging an Ethernet cable between two interface cards — one inside a Linux network namespace (our "container") and the other in the host.
Why Should You Care?
Understanding the low-level mechanics and first principles behind container networking — without relying on Docker or Kubernetes — gives you ultimate debugging power. This series is designed to demystify how containers, pods, bridges, overlays and CNI plugins actually work under the hood.
Scenario Overview
Prerequisites:
- Make sure you have multipass installed (if on macOS), or a VM set up via UTM/VirtualBox.
We’ll set up:
- A VM using Multipass
- A network namespace (what containers are under the hood)
- A veth pair connecting host and namespace
- IP addresses on both ends
- Routing so they can talk
Goal: ping the namespace from the host and vice versa.
Deep Dive: Step-by-Step Setup
Provisioning the VM
Create a Makefile to provision the VM and transfer the files to it.
NAME=netns-lab
IMAGE=22.04
SCRIPT=scenario1.sh

up:
	@if multipass info $(NAME) >/dev/null 2>&1; then \
		echo "$(NAME) already exists. Skipping launch."; \
	else \
		echo "Launching $(NAME) VM..."; \
		multipass launch $(IMAGE) --name $(NAME) --cloud-init cloud-init.yaml --memory 1G --disk 5G; \
	fi
	@chmod +x env.sh scenario1.sh test.sh
	@echo "Transferring files to VM..."
	@for file in env.sh scenario1.sh test.sh; do \
		multipass transfer $$file $(NAME):/home/ubuntu/ || echo "Failed to transfer $$file"; \
	done
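The Makefile references a cloud-init.yaml that the post doesn't show. A minimal sketch (assumed contents, not the original file; adapt it to your needs) could be:

```
# Assumed minimal cloud-init for the lab VM; the original file is not
# shown in this post.
package_update: true
packages:
  - iproute2   # provides the ip command (normally preinstalled on Ubuntu)
```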
env.sh is a simple script that sets the environment variables used by the other scripts.
CON="netns1"
IP="10.200.1.1"
scenario1.sh does the actual setup:
#!/bin/bash -e
. /home/ubuntu/env.sh

echo "[+] Creating the namespace"
sudo ip netns add $CON

echo "[+] Creating the veth pair"
sudo ip link add veth1 type veth peer name veth2

echo "[+] Moving one end to namespace"
sudo ip link set veth2 netns $CON

echo "[+] Assigning IP inside namespace"
sudo ip netns exec $CON ip addr add $IP dev veth2

echo "[+] Bringing up interfaces"
sudo ip netns exec $CON ip link set veth2 up
sudo ip netns exec $CON ip link set lo up
sudo ip link set veth1 up

echo "[+] Routing setup"
sudo ip route add $IP/32 dev veth1 || true
# veth2 is a point-to-point link, so the default route needs no gateway
# (the namespace's own address cannot be used as one)
sudo ip netns exec $CON ip route add default dev veth2 || true
The script:
- Creates a namespace called netns1
- Creates a veth pair, veth1 and veth2
- Moves one end (veth2) into the namespace
- Assigns an IP address to the interface inside the namespace
- Brings up the interfaces, including loopback
- Adds a host route to the namespace's IP via veth1
- Adds a default route inside the namespace via veth2
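To see what those steps actually created, a small inspection helper can dump the state inside the VM. This is a hypothetical convenience script (the show wrapper is my addition, not part of the post's scripts):

```shell
#!/bin/bash
# Hypothetical helper: summarise what scenario1.sh created.
# Run inside the VM; the ip commands need root.
NS=netns1

show() {            # print a header, then run the given command
  echo "== $1 =="
  shift
  "$@" || true
}

if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null; then
  show "Namespaces"              ip netns list
  show "Host side (veth1)"       ip -brief link show veth1
  show "Namespace side (veth2)"  ip netns exec "$NS" ip -brief addr show veth2
  show "Host route to namespace" ip route show 10.200.1.1
fi
```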
Test connectivity
#!/bin/bash
NS=netns1
IP=10.200.1.1

check() {
  echo "[TEST] $1"
  eval "$2"
  echo ""
}

fail_if_empty() {
  # eval so pipelines in $1 are interpreted by the shell
  if [ -z "$(eval "$1")" ]; then
    echo "[FAIL] $2"
    exit 1
  else
    echo "[PASS] $2"
  fi
}
check "Namespace exists" "sudo ip netns list | grep -q $NS && echo 'Found namespace $NS'"
fail_if_empty "sudo ip netns list | grep $NS" "Namespace '$NS' exists"
check "Interface in namespace" "sudo ip netns exec $NS ip link show veth2"
check "Interface on host" "sudo ip link show veth1"
check "Route on host to $IP" "sudo ip route | grep $IP"
check "Ping from host to namespace" "sudo ping -c 3 $IP"
check "Ping from inside namespace to its own IP" "sudo ip netns exec $NS ping -c 3 $IP"
check "Routes inside namespace" "sudo ip netns exec $NS ip route"
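Note that veth1 on the host never gets an IP address in this scenario, so there is no host-side address for the namespace to ping. A hedged variation (the 10.200.1.2 address and the function name are my own choices, not from the post) gives the host end one:

```shell
#!/bin/bash
# Optional variation (run inside the VM): give the host end of the veth
# pair its own address so the namespace has something host-side to ping.
# 10.200.1.2 is an arbitrary pick from the same range.
host_side_ip() {
  sudo ip addr add 10.200.1.2/32 dev veth1
  sudo ip netns exec netns1 ip route add 10.200.1.2/32 dev veth2
  # With both /32 routes in place, this ping should now succeed:
  sudo ip netns exec netns1 ping -c 3 10.200.1.2
}

if [ "${1:-}" = "--run" ]; then host_side_ip; fi
```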
The rest of the Makefile adds convenience targets for running, testing, and teardown:
run:
	@echo "Running scenario script..."
	@multipass exec $(NAME) -- bash /home/ubuntu/$(SCRIPT)

shell:
	@multipass shell $(NAME)

test:
	@multipass exec $(NAME) -- bash /home/ubuntu/test.sh

destroy:
	@echo "Destroying VM..."
	@multipass delete $(NAME) --purge || echo "$(NAME) does not exist."
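Putting the targets together, a typical session from the host looks like this sketch (the DRY_RUN guard is my addition so the snippet reads as documentation; set DRY_RUN=0 to actually invoke make):

```shell
#!/bin/bash
# Typical workflow from the host (assumes multipass is installed and the
# Makefile/scripts above are in the current directory).
: "${DRY_RUN:=1}"   # default: only print the commands

step() { if [ "$DRY_RUN" = 1 ]; then echo "+ make $1"; else make "$1"; fi; }

step up        # provision the VM and copy the scripts in
step run       # create the namespace, veth pair, addresses and routes
step test      # run the connectivity checks
step destroy   # tear everything down when finished
```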
Takeaways & Lessons learned
- ip netns is the real container under the hood.
- A veth pair gives you point-to-point connectivity, similar to a virtual Ethernet cable.
- You control the routing tables just like in real containers.
- Network namespaces give you isolation.
- veth pairs connect those isolated spaces.
- Routing needs to be explicit.
- You can debug container networking using raw Linux tools.
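One last sketch for cleaning up inside the VM (assuming the names above): deleting the namespace is enough, because veth ends only exist in pairs.

```shell
#!/bin/bash
# Teardown sketch (run inside the VM). Deleting the namespace removes
# veth2, and because veth interfaces are created and destroyed as a pair,
# the kernel removes veth1 on the host at the same time.
cleanup() {
  sudo ip netns del netns1
  # The 10.200.1.1/32 route on the host is withdrawn automatically when
  # veth1 disappears, so there is nothing else to undo.
}

if [ "${1:-}" = "--run" ]; then cleanup; fi
```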
Conclusion
We’ve now created our first standalone network namespace with basic connectivity from host to namespace. This is how most CNI plugins start under the hood.
In the next part of the series, we’ll look at creating two namespaces on the same node and wiring them together using a bridge — effectively building a virtual switch.
Stay tuned for Part 2!