Connecting Multiple Sites with Magic Mesh
Magic Mesh creates a full-mesh BGP overlay between cloud routers at different NetActuate locations, enabling private connectivity across your infrastructure without manual tunnel or peering configuration. This guide walks through deploying two cloud routers and connecting them with Magic Mesh using Terraform.
What You Will Build
By the end of this guide you will have:
- Two cloud routers deployed in separate NetActuate locations (Los Angeles and Amsterdam)
- A Magic Mesh overlay connecting them automatically
- Private route exchange between both sites with no manual BGP or tunnel setup
Prerequisites
- A NetActuate account with API access
- Terraform 1.0 or later installed locally
- A NetActuate API key (generate one under Account -> API Keys in the portal)
- The NetActuate Terraform provider v2 configured in your project
How Magic Mesh Works
Magic Mesh automates three things that you would otherwise configure by hand:
- WireGuard tunnels -- the mesh creates encrypted WireGuard tunnels between every pair of enrolled routers automatically. You do not need to manage keys or endpoints.
- BGP peering -- once the tunnels are up, BGP sessions are established over them. Each router peers with every other router in the mesh.
- Route propagation -- routes advertised by any router in the mesh are distributed to all other members. A prefix announced in Los Angeles becomes reachable from Amsterdam without additional configuration.
The mesh manages its own VRF on each member router. All tunnel interfaces, BGP sessions, and route tables live inside this VRF, keeping mesh traffic isolated from other routing on the router.
Step 1: Deploy Cloud Routers
Start by deploying two cloud routers in different locations. Each router needs a name, location code, and plan.
```hcl
terraform {
  required_providers {
    netactuate = {
      source  = "netactuate/netactuate"
      version = ">= 2.0.0"
    }
  }
}

provider "netactuate" {
  api_key = var.netactuate_api_key
}

variable "netactuate_api_key" {
  type      = string
  sensitive = true
}

resource "netactuate_router" "lax" {
  name     = "mesh-router-lax"
  location = "LAX"
  plan     = "VR2x2x25"
}

resource "netactuate_router" "ams" {
  name     = "mesh-router-ams"
  location = "AMS"
  plan     = "VR2x2x25"
}
```
Run `terraform apply` to create the routers. Each router takes a few minutes to provision. Terraform will wait for them to become ready before proceeding.
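Terraform reads any environment variable prefixed with `TF_VAR_` as an input variable, so the sensitive API key never has to be written into a `.tfvars` file. A typical apply sequence looks like this (the key value below is a placeholder, not a real credential):

```shell
# Pass the sensitive variable through the environment instead of a file.
# "your-api-key-here" is a placeholder -- substitute your own key.
export TF_VAR_netactuate_api_key="your-api-key-here"

terraform init
terraform apply
```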
Step 2: Create the Mesh
With both routers running, create a Magic Mesh resource. This is the logical container that ties the routers together.
```hcl
resource "netactuate_magic_mesh" "site_mesh" {
  name = "lax-ams-mesh"
}
```
The mesh resource itself is lightweight. It does not create any tunnels or sessions until you enroll routers.
Step 3: Enroll Routers
Add each router to the mesh. Enrollment triggers the automatic creation of WireGuard tunnels and BGP sessions between all members.
```hcl
resource "netactuate_magic_mesh_router" "lax" {
  mesh_id   = netactuate_magic_mesh.site_mesh.id
  router_id = netactuate_router.lax.id
}

resource "netactuate_magic_mesh_router" "ams" {
  mesh_id   = netactuate_magic_mesh.site_mesh.id
  router_id = netactuate_router.ams.id
}
```
After `terraform apply`, the mesh provisions:
- A WireGuard tunnel from LAX to AMS (and the reverse direction)
- A BGP session over the tunnel in each direction
- A dedicated `magic-mesh` VRF on each router to hold the tunnel interfaces and BGP configuration
Step 4: Verify Connectivity
After enrollment completes, confirm the mesh is operational.
Check router status
```shell
terraform show | grep -A 5 "netactuate_magic_mesh_router"
```
Both router resources should show a healthy status. Verify that `mesh_id` is populated on each router enrollment resource.
Verify in the portal
Navigate to Networking -> Cloud Routers and select either router. Under the router's VRF list you should see a `magic-mesh` VRF with tunnel interfaces and active BGP neighbors.
Test route propagation
If you add a static route or BGP-advertised prefix on the LAX router, it should appear in the routing table of the AMS router within a few seconds. You can verify this through the router management interface in the portal or by checking the VRF routing table.
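A quick end-to-end check is to ping across the mesh from a host behind one router to an address behind the other. This is a sketch with placeholder addressing: `10.20.0.1` stands in for an address inside a prefix advertised at AMS, so substitute an address from your own plan.

```shell
# Placeholder check: 10.20.0.1 represents an address in a prefix advertised
# by the AMS router. Replies confirm routes propagated through the mesh.
ping -c 3 10.20.0.1
```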
Key Considerations
Before building with Magic Mesh, keep these constraints in mind:
- Default VRF only -- enrolled routers are restricted to the default VRF. You cannot use custom VRFs alongside Magic Mesh on the same router. If you need custom VRFs, use a dedicated router outside the mesh.
- Automatic route sharing -- all routers in a mesh share routes automatically. There is no per-router filtering within the mesh. Every prefix advertised by one member is visible to all others.
- Non-disruptive changes -- adding or removing routers from a mesh does not disrupt existing members. New routers establish tunnels and sessions without affecting traffic on existing tunnels.
- Managed VRF -- the `magic-mesh` VRF is created and managed by the mesh. Do not create a VRF with this name before enrolling a router, and do not manually modify the VRF after enrollment.
Note: Router configuration changes are serialized with a 60-second interval between operations. Terraform handles the wait automatically, but plan for longer apply times when configuring multiple routers.
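Because per-router operations are spaced 60 seconds apart, an apply that touches several routers takes noticeably longer than the resource count alone suggests. Timing the run makes this visible:

```shell
# Applies that reconfigure several routers are serialized by the platform,
# so expect wall-clock time to grow with the number of routers touched.
time terraform apply -auto-approve
```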
Use Cases
Magic Mesh is a good fit for scenarios where multiple sites need private connectivity with minimal configuration:
- Multi-site database replication -- connect database clusters across locations over encrypted tunnels for low-latency replication traffic
- Distributed application backends -- let application services at different POPs communicate directly without traversing the public internet
- Hybrid cloud connectivity -- link NetActuate cloud routers to form a private backbone alongside your existing infrastructure
- Disaster recovery -- maintain always-on connectivity between primary and failover sites so recovery traffic can flow immediately when needed
Scaling the Mesh
To add a third site (or any number beyond that), deploy another router and enroll it:
```hcl
resource "netactuate_router" "sin" {
  name     = "mesh-router-sin"
  location = "SIN"
  plan     = "VR2x2x25"
}

resource "netactuate_magic_mesh_router" "sin" {
  mesh_id   = netactuate_magic_mesh.site_mesh.id
  router_id = netactuate_router.sin.id
}
```
The mesh automatically creates tunnels between the new router and every existing member. Routes propagate to the new site as soon as BGP sessions come up. Existing routers are not disrupted during this process.
With N routers, the mesh creates N*(N-1)/2 tunnels. For most deployments this scales well, but for very large meshes (10+ routers) consider the cumulative tunnel overhead on each router.
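The pairwise tunnel count is easy to sanity-check directly in the shell:

```shell
# A full mesh needs one tunnel per router pair: N * (N - 1) / 2.
tunnels() { echo $(( $1 * ($1 - 1) / 2 )); }

tunnels 2    # 1  (the LAX-AMS pair)
tunnels 3    # 3  (after adding SIN)
tunnels 10   # 45
```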
Complete Configuration
Here is the full Terraform configuration for a two-site mesh:
```hcl
terraform {
  required_providers {
    netactuate = {
      source  = "netactuate/netactuate"
      version = ">= 2.0.0"
    }
  }
}

provider "netactuate" {
  api_key = var.netactuate_api_key
}

variable "netactuate_api_key" {
  type      = string
  sensitive = true
}

# Routers
resource "netactuate_router" "lax" {
  name     = "mesh-router-lax"
  location = "LAX"
  plan     = "VR2x2x25"
}

resource "netactuate_router" "ams" {
  name     = "mesh-router-ams"
  location = "AMS"
  plan     = "VR2x2x25"
}

# Mesh
resource "netactuate_magic_mesh" "site_mesh" {
  name = "lax-ams-mesh"
}

# Enrollment
resource "netactuate_magic_mesh_router" "lax" {
  mesh_id   = netactuate_magic_mesh.site_mesh.id
  router_id = netactuate_router.lax.id
}

resource "netactuate_magic_mesh_router" "ams" {
  mesh_id   = netactuate_magic_mesh.site_mesh.id
  router_id = netactuate_router.ams.id
}

# Outputs
output "lax_router_id" {
  value = netactuate_router.lax.id
}

output "ams_router_id" {
  value = netactuate_router.ams.id
}

output "mesh_id" {
  value = netactuate_magic_mesh.site_mesh.id
}
```
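Once applied, the declared outputs can be read back for use in scripts or other tooling:

```shell
# Read back the IDs exported by the configuration above.
terraform output mesh_id
terraform output -json   # all outputs as machine-readable JSON
```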
Advanced Resources
For the full list of cloud router resources and options, see the Terraform provider repository:
https://github.com/netactuate/netactuate-terraform-router
Related Documentation
- Cloud Router reference -- all cloud router Terraform resources and examples
- Site-to-Site VPN with Cloud Routers -- IPSec, WireGuard, and GRE tunnel configuration
- Multi-Region Application Deployment -- deploying applications across multiple NetActuate locations
Need Help?
Contact support@netactuate.com or open a support ticket from the portal.