Multi-Region Application Deployment

NetActuate's global network of PoP locations, combined with v2 of the Terraform provider, lets you deploy and connect application infrastructure across multiple regions. This guide shows how to combine VPCs, managed Kubernetes, cloud routers, and storage into a single multi-region architecture.

Architecture Overview

The target architecture spans two regions with private connectivity between them:

Region A (LAX) hosts a VPC containing application VMs or an NKE cluster, a cloud router for external and cross-region routing, and a storage bucket for region-local data.

Region B (AMS) mirrors the same topology — its own VPC, compute resources, cloud router, and storage bucket.

A Magic Mesh connection links the two cloud routers, establishing automatic private peering between regions. Application traffic between LAX and AMS flows over this private mesh rather than the public internet. Each region operates independently for local traffic, and the mesh provides failover and data replication paths.

Optionally, you can replace VMs with NKE (NetActuate Kubernetes Engine) clusters at each location for container-based workloads, or combine both approaches.

Building Blocks

Component        Resource                                     Purpose
Private Network  netactuate_vpc                               Isolated networking per region
Compute          netactuate_server or netactuate_nke_cluster  Application workloads
Routing          netactuate_router                            Cross-region connectivity
Mesh             netactuate_magic_mesh                        Automatic inter-router peering
Storage          netactuate_storage_bucket                    Region-local data
Security         netactuate_firewall_set                      Per-VM firewall rules
Secrets          netactuate_secret_list                       Shared credentials

Pattern 1: VPC with VMs

This pattern deploys traditional VMs in isolated VPCs at each location, connected by cloud routers and Magic Mesh.

Deploy VPCs

Create a VPC in each region:

resource "netactuate_vpc" "lax" {
  name     = "app-lax"
  location = "LAX"
  subnet   = "10.100.0.0/24"
}

resource "netactuate_vpc" "ams" {
  name     = "app-ams"
  location = "AMS"
  subnet   = "10.200.0.0/24"
}

Deploy VMs

Place application VMs inside each VPC:

resource "netactuate_server" "app_lax" {
  count    = 2
  hostname = "app-lax-${count.index + 1}.example.com"
  plan     = "SSD.4GB"
  location = "LAX"
  image    = "Ubuntu 24.04 LTS"
  vpc_id   = netactuate_vpc.lax.id
}

resource "netactuate_server" "app_ams" {
  count    = 2
  hostname = "app-ams-${count.index + 1}.example.com"
  plan     = "SSD.4GB"
  location = "AMS"
  image    = "Ubuntu 24.04 LTS"
  vpc_id   = netactuate_vpc.ams.id
}

Deploy Cloud Routers

Create a cloud router at each location:

resource "netactuate_router" "lax" {
  name     = "router-lax"
  location = "LAX"
  vpc_id   = netactuate_vpc.lax.id
}

resource "netactuate_router" "ams" {
  name     = "router-ams"
  location = "AMS"
  vpc_id   = netactuate_vpc.ams.id
}

Connect via Magic Mesh

Link the two routers with Magic Mesh for automatic private peering:

resource "netactuate_magic_mesh" "lax_ams" {
  name = "lax-ams-mesh"
  router_ids = [
    netactuate_router.lax.id,
    netactuate_router.ams.id,
  ]
}

Magic Mesh establishes encrypted tunnels between enrolled routers automatically. No manual tunnel configuration is required.

Add Cross-VPC Routing

With the mesh in place, configure static routes or BGP so each VPC can reach the other:

resource "netactuate_router_static_route" "lax_to_ams" {
  router_id   = netactuate_router.lax.id
  destination = "10.200.0.0/24"
  next_hop    = netactuate_router.ams.mesh_ip
}

resource "netactuate_router_static_route" "ams_to_lax" {
  router_id   = netactuate_router.ams.id
  destination = "10.100.0.0/24"
  next_hop    = netactuate_router.lax.mesh_ip
}

VMs in LAX can now reach VMs in AMS over private addresses.
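To confirm reachability, it helps to surface each VM's private address. This sketch assumes the `netactuate_server` resource exports a `private_ip` attribute (an assumption — check the provider's attribute reference):

```hcl
# Hypothetical: assumes netactuate_server exports a `private_ip` attribute.
output "lax_app_private_ips" {
  value = netactuate_server.app_lax[*].private_ip
}

output "ams_app_private_ips" {
  value = netactuate_server.app_ams[*].private_ip
}
```

From any LAX VM, pinging one of the AMS private addresses should succeed over the mesh rather than the public internet.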

Pattern 2: NKE Clusters

This pattern uses managed Kubernetes clusters instead of VMs for container-based workloads.

Deploy NKE Clusters

resource "netactuate_nke_cluster" "lax" {
  name        = "app-lax"
  location    = "LAX"
  node_count  = 3
  node_plan   = "SSD.8GB"
  k8s_version = "1.29"
}

resource "netactuate_nke_cluster" "ams" {
  name        = "app-ams"
  location    = "AMS"
  node_count  = 3
  node_plan   = "SSD.8GB"
  k8s_version = "1.29"
}
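To work with both clusters from kubectl, you can export their credentials as Terraform outputs. This assumes the cluster resource exposes a `kubeconfig` attribute (an assumption — verify the attribute name in the provider docs):

```hcl
# Hypothetical: assumes netactuate_nke_cluster exports a `kubeconfig` attribute.
output "kubeconfig_lax" {
  value     = netactuate_nke_cluster.lax.kubeconfig
  sensitive = true
}

output "kubeconfig_ams" {
  value     = netactuate_nke_cluster.ams.kubeconfig
  sensitive = true
}
```

Run `terraform output -raw kubeconfig_lax > lax.yaml` and point `KUBECONFIG` at the file to manage each cluster independently.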

Expose Services with MetalLB

NKE clusters include MetalLB for bare-metal load balancing. Deploy your application with a LoadBalancer service to get a routable IP at each location:

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    app: myapp

Optional: Anycast with BGP Controller

For global traffic distribution, deploy a BGP controller in each NKE cluster to announce the same anycast prefix from both locations. End users reach the nearest cluster automatically based on BGP path selection. See End-to-End Anycast Global Deployment for the full BGP setup.

Shared Services

Secrets Manager

Use Secrets Manager to store credentials shared across regions. Both LAX and AMS VMs or clusters can reference the same secret keys in their cloud-init or application configuration:

resource "netactuate_secret_list" "shared" {
  name = "multi-region-shared"
}

resource "netactuate_secret_list_value" "db_password" {
  secret_list_id = netactuate_secret_list.shared.id
  secret_key     = "DB_PASSWORD"
  secret_value   = var.db_password
}

VMs in any region can reference ${{secret.DB_PASSWORD}} in their cloud-init configuration.
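As a sketch of what that looks like in practice, the example below passes the secret placeholder through cloud-init on a VM. It assumes `netactuate_server` accepts a `user_data` argument (an assumption); note the `$$` escape, which is required so Terraform emits the literal `${{secret.DB_PASSWORD}}` instead of attempting its own interpolation:

```hcl
# Sketch: assumes netactuate_server accepts a cloud-init `user_data` argument.
resource "netactuate_server" "worker_lax" {
  hostname = "worker-lax-1.example.com"
  plan     = "SSD.4GB"
  location = "LAX"
  image    = "Ubuntu 24.04 LTS"
  vpc_id   = netactuate_vpc.lax.id

  # `$$` escapes Terraform interpolation so the platform-side
  # ${{secret.DB_PASSWORD}} placeholder reaches cloud-init intact.
  user_data = <<-EOT
    #cloud-config
    write_files:
      - path: /etc/app/env
        permissions: "0600"
        content: |
          DB_PASSWORD=$${{secret.DB_PASSWORD}}
  EOT
}
```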

Storage Buckets

Deploy S3-compatible storage buckets in each region for local data access:

resource "netactuate_storage_bucket" "lax" {
  name     = "app-data-lax"
  location = "LAX"
}

resource "netactuate_storage_bucket" "ams" {
  name     = "app-data-ams"
  location = "AMS"
}

Use your application's replication logic or a tool like rclone to sync data between regional buckets as needed.
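A sync tool needs each bucket's S3 endpoint to configure its remotes. Assuming the bucket resource exports an `endpoint` attribute (an assumption — check the provider's attribute reference), you can surface both in one output:

```hcl
# Hypothetical: assumes netactuate_storage_bucket exports an `endpoint` attribute.
output "bucket_endpoints" {
  value = {
    lax = netactuate_storage_bucket.lax.endpoint
    ams = netactuate_storage_bucket.ams.endpoint
  }
}
```

Feed these endpoints into your rclone remote definitions (or your application's S3 client config) to replicate data between regions.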

Networking Considerations

Magic Mesh

Magic Mesh is the simplest option for full-mesh connectivity between regions. All enrolled routers peer automatically with encrypted tunnels. Note that routers enrolled in Magic Mesh cannot use custom VRFs — the mesh manages routing tables directly.

Selective Connectivity with IPSec or WireGuard

If you need connectivity between specific sites rather than a full mesh, configure IPSec or WireGuard tunnels on the cloud routers instead of using Magic Mesh. This gives you granular control over which regions can communicate. See Site-to-Site VPN with Cloud Routers for setup instructions.
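In Terraform terms, a point-to-point tunnel might be expressed roughly as follows. The resource and attribute names here are hypothetical — consult the provider documentation for the actual tunnel resource and its arguments before using this:

```hcl
# Hypothetical resource and attribute names; shown only to illustrate the
# shape of a selective, point-to-point link in place of a full mesh.
resource "netactuate_router_tunnel" "lax_to_ams" {
  router_id      = netactuate_router.lax.id
  type           = "wireguard"
  peer_router_id = netactuate_router.ams.id
}
```

Unlike Magic Mesh, each tunnel is declared explicitly, so regions without a tunnel resource between them simply cannot communicate privately.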

Anycast for Global Traffic Distribution

For public-facing services, anycast routing directs users to the nearest healthy region automatically. Announce the same prefix from VMs or NKE clusters in each location using BGP. NetActuate's edge network propagates the announcements to upstream peers, and end-user traffic follows the shortest BGP path.

Available PoP Locations

NetActuate operates PoPs across North America, Europe, Asia, and Oceania. For the full list of available locations and their identifiers, see API Locations.

Need Help?

If you need assistance with multi-region deployment, visit our support page.