End-to-End Anycast Global Deployment
This guide walks through deploying a complete BGP anycast cluster on NetActuate — from account setup through provisioning, BGP configuration, validation, and adding a service layer.
What You Will Have
By the end of this guide:
- VMs running in 2+ global PoPs
- BIRD2 BGP sessions established with NetActuate's edge routers
- Your prefix announced via anycast from each location
- Traffic routing to the nearest healthy node automatically
Prerequisites
- NetActuate account with BGP enabled (contact support@netactuate.com — provide your ASN and intended prefixes)
- BGP group ID (see Networking → BGP or Networking → Anycast groups in the portal)
- Contract ID (scroll to the bottom of Account → API Access in the portal)
- ASN (your own, or request one from NetActuate)
- IPv4 and/or IPv6 prefix(es) to announce
- SSH key pair for node access
How Anycast Works on NetActuate
VMs in each PoP advertise the same prefix to NetActuate's edge routers via BGP. NetActuate propagates those announcements to its upstream peers. End users reach the nearest PoP based on BGP path selection. If a node goes down, its routes withdraw and traffic fails over to remaining nodes automatically. See Anycast for a deeper explanation.
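Conceptually, each node's BIRD2 configuration originates the anycast prefix locally and exports it to the local edge router. The fragment below is an illustrative sketch only — the prefix, ASNs, and neighbor address are placeholders, and the Ansible playbook in Step 3 generates the real configuration for you:

```
# Illustrative BIRD2 fragment -- all addresses and ASNs are placeholders
protocol static anycast_v4 {
    ipv4;
    route 203.0.113.0/24 reject;    # originate the anycast prefix on this node
}

protocol bgp netactuate {
    local as 64512;                 # your ASN
    neighbor 192.0.2.1 as 64513;    # NetActuate edge router for this PoP
    ipv4 {
        import none;
        export where proto = "anycast_v4";
    };
}
```

Because every PoP announces the identical prefix, BGP path selection steers each user to the nearest PoP; if BIRD stops on a node, the edge router withdraws that node's announcement and traffic shifts to the others.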
Step 1: Choose Your Provisioning Path
Option A: Ansible Only
Clone and configure netactuate-ansible-bgp-bird2:
git clone https://github.com/netactuate/netactuate-ansible-bgp-bird2
cd netactuate-ansible-bgp-bird2
Edit group_vars/all with your API key, BGP group, contract ID, ASN, and prefix. Edit hosts with your target locations. Add your SSH public key to keys.pub.
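The variables you fill in look roughly like the sketch below. Variable names here are illustrative, not authoritative — use the names actually present in the repo's group_vars/all:

```
# Illustrative group_vars/all sketch -- names and values are placeholders
api_key: "YOUR_NETACTUATE_API_KEY"
bgp_group_id: 1234
contract_id: 5678
asn: 64512
anycast_prefix_v4: "203.0.113.0/24"
```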
Option B: Terraform Then Ansible
Clone netactuate-terraform-bgp to provision infrastructure:
git clone https://github.com/netactuate/netactuate-terraform-bgp
cd netactuate-terraform-bgp
cp terraform.tfvars.example terraform.tfvars
# edit terraform.tfvars
terraform init && terraform apply
terraform output -raw ansible_inventory > inventory.ini
Then clone netactuate-ansible-bgp-bird2 and run bgp.yaml against the generated inventory. See Terraform + Ansible Handoff Pattern.
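The generated inventory is a standard Ansible INI file. Its contents depend on your Terraform variables, but it will look something like this (group name, hostnames, and addresses below are placeholders):

```
# Illustrative inventory.ini -- group name, hostnames, and IPs are placeholders
[anycast_nodes]
node-sjc ansible_host=198.51.100.10
node-ams ansible_host=198.51.100.20
node-sin ansible_host=198.51.100.30
```

Point the playbook at it with the standard inventory flag: ansible-playbook -i inventory.ini bgp.yaml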
Step 2: Provision Nodes
ansible-playbook createnode.yaml
Nodes are provisioned in parallel. Each node reboots once after initial provisioning; the playbook waits until each node accepts SSH logins before proceeding.
Step 3: Configure BGP
ansible-playbook bgp.yaml
This installs BIRD2, fetches BGP peer details from the NetActuate API for each node, generates the BIRD configuration, and starts BGP sessions.
Step 4: Validate
SSH to any node and run:
birdc show protocols
All sessions should show Established. Then:
birdc show route
Your prefix should appear in the routing table. Ping your anycast IP from an external host to confirm reachability.
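If you want to script this check across nodes, one approach is to count BGP sessions that are not in the Established state. The sketch below runs the filter against sample output so it is self-contained; on a real node you would pipe birdc show protocols directly into the same awk expression (the protocol names and timestamps shown are illustrative):

```shell
# Parse "birdc show protocols"-style output and count BGP sessions
# that are NOT Established. The sample output below is illustrative.
cat > /tmp/protocols.txt <<'EOF'
Name        Proto   Table   State   Since         Info
netactuate4 BGP     ---     up      10:12:01.123  Established
netactuate6 BGP     ---     up      10:12:02.456  Established
EOF

# Field 2 is the protocol type; the last field is the session info.
down=$(awk '$2 == "BGP" && $NF != "Established"' /tmp/protocols.txt | wc -l)
echo "BGP sessions not Established: $down"   # expect 0 on a healthy node
```

A non-zero count is a quick signal to investigate that node before trusting it with anycast traffic.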
Step 5: Add a Service Layer (Optional)
With BGP working, deploy a service on your anycast nodes:
- Web serving: NGINX Workers
- Caching: Varnish Frontend
- Authoritative DNS: PowerDNS
- Anycast DNS: Anycast DNS with KnotDNS
- Kubernetes: k3s Anycast
Teardown
ansible-playbook deletenode.yaml
This cancels billing and terminates all nodes in the inventory.
Need Help?
If you need assistance with anycast deployment, visit our support page.