Kubernetes Anycast with Self-Managed k3s
Note: This guide covers self-managed k3s deployed on NetActuate VMs — you install and operate k3s yourself. If you are looking for NetActuate's managed Kubernetes service, where NetActuate manages the control plane and handles upgrades, see the Kubernetes guide.
Deploy a self-managed k3s cluster across NetActuate's global network with Kubernetes service IPs advertised as anycast prefixes via MetalLB and BIRD2 BGP.
What You Will Have
- k3s cluster spanning multiple NetActuate PoPs
- MetalLB configured with an anycast IP pool
- BGP sessions advertising Kubernetes LoadBalancer IPs globally
- Services reachable via anycast from anywhere on the internet
Architecture
MetalLB runs as a speaker on each worker node and advertises Kubernetes LoadBalancer service IPs to NetActuate's edge routers via BGP — the same mechanism used by all NetActuate anycast deployments. Traffic reaches the nearest healthy node running the service.
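The playbooks below configure MetalLB for you, but it helps to know what the resulting resources look like. This is a hedged sketch of the three MetalLB objects involved — an address pool, a BGP peer, and an advertisement. All addresses and ASNs here are placeholders (RFC 5737 documentation ranges and private ASNs), not NetActuate values; substitute your own anycast prefix and the peer details from your BGP setup.

```yaml
# Illustrative MetalLB configuration -- addresses, ASNs, and names
# are placeholders, not values the playbooks actually use.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: anycast-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.10/32        # an IP from your anycast prefix
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: edge-router
  namespace: metallb-system
spec:
  myASN: 64512             # your ASN
  peerASN: 64496           # NetActuate edge router ASN (placeholder)
  peerAddress: 203.0.113.1 # peer address from your BGP details
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: anycast-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - anycast-pool
```

Because every speaker advertises the same pool, each PoP announces the service IP and the internet routes clients to the nearest one — that is the anycast behavior.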
Prerequisites
A BGP-enabled NetActuate account, an ASN, and an anycast prefix to advertise. kubectl installed on your control node. The BGP requirements are the same as for Anycast Global Deployment.
Playbook Repository
git clone https://github.com/netactuate/netactuate-ansible-k3s
cd netactuate-ansible-k3s
Deployment
Edit group_vars/all and hosts with your server and worker node entries.
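As a rough illustration, an inventory might look like the following — the group names and hostnames here are hypothetical placeholders; use the group names the playbooks in the repository actually expect.

```ini
# Hypothetical hosts inventory -- group names and hostnames are
# illustrative only; match what the repository's playbooks expect.
[server]
k3s-server-1

[workers]
k3s-worker-iad
k3s-worker-ams
k3s-worker-sin
```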
ansible-playbook createnode.yaml
ansible-playbook bgp.yaml
ansible-playbook k3s.yaml
ansible-playbook loadbalancer.yaml
Validation
kubectl get nodes
All nodes should show Ready.
kubectl get pods -n metallb-system
MetalLB speaker and controller pods should show Running.
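To confirm end-to-end behavior, you can expose a test workload and check that MetalLB assigns it an IP from the anycast pool. The deployment and service names below are illustrative, and the BIRD command applies only if your nodes run BIRD2 from the bgp.yaml playbook:

```shell
# Create a throwaway deployment and expose it as a LoadBalancer
# service; MetalLB should assign an IP from the anycast pool.
kubectl create deployment echo --image=nginx
kubectl expose deployment echo --port=80 --type=LoadBalancer

# EXTERNAL-IP should show an address from your pool, not <pending>.
kubectl get svc echo

# On a worker node running BIRD2, verify the BGP session is up.
birdc show protocols
```

Once the session is Established and the service has an external IP, requests to that IP from different regions should land on the nearest healthy node.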
BGP Controller for Existing Clusters
If you already have a k3s or Kubernetes cluster and want to add automated BGP anycast with health-based failover, see the Kubernetes Anycast with BGP Controller guide. The open-source NetActuate BGP Controller works with MetalLB, Calico, Cilium, or BIRD and can be installed on any k3s cluster — including clusters deployed with this playbook.
Need Help?
If you need assistance, visit our support page.