
Redundant Anycast Groups

This guide covers how to design redundant anycast groups using NetActuate's infrastructure. The primary use case is authoritative DNS, but the same architecture applies to any service that benefits from tiered global redundancy.

Architecture Overview

A redundant anycast group architecture assigns different priority tiers to different sets of locations. If the primary tier becomes unavailable, traffic automatically fails over to the secondary tier, and then to the tertiary tier if needed.

Each tier is a separate anycast group announcing the same prefix but with different AS path lengths to control global preference:

Tier        Role                                             AS Path Prepending     Example Locations
Primary     Handles all traffic under normal conditions      None (shortest path)   Los Angeles, New York, London, Tokyo
Secondary   Receives traffic when primary locations fail     1x prepend             Chicago, Frankfurt, Singapore
Tertiary    Last resort when primary and secondary are down  2x prepend             Miami, Sydney

How It Works

  1. All tiers announce the same prefix (e.g., 192.0.2.0/24).
  2. Primary tier locations announce with the shortest AS path, so they are preferred by all upstream networks.
  3. Secondary tier locations prepend the AS path once, making them less preferred but still reachable.
  4. Tertiary tier locations prepend twice, making them the least preferred.

Under normal conditions, all traffic flows to primary tier locations. If all primary locations withdraw their announcements (due to failure), upstream networks converge on the secondary tier. If the secondary tier also fails, the tertiary tier handles traffic.
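The convergence behavior above can be sketched in a few lines: among the announcements still present for the prefix, BGP best-path selection (with earlier tie-breakers equal) prefers the shortest AS path, so withdrawing a tier shifts traffic to the next-shortest. A minimal Python sketch; tier names and paths are illustrative:

```python
# Sketch of how upstream networks converge on a tier: among the
# announcements still visible, the shortest AS path wins (assuming
# local preference and other earlier tie-breakers are equal).

def best_tier(announcements):
    """announcements: dict of tier name -> AS path (list of ASNs).
    Returns the tier with the shortest AS path, i.e. the one that
    attracts traffic."""
    return min(announcements, key=lambda tier: len(announcements[tier]))

# Normal conditions: every tier announces 192.0.2.0/24.
routes = {
    "primary":   [65001],                # no prepend
    "secondary": [65001, 65001],         # 1x prepend
    "tertiary":  [65001, 65001, 65001],  # 2x prepend
}
print(best_tier(routes))                 # primary

# All primary locations withdraw: traffic converges on secondary.
del routes["primary"]
print(best_tier(routes))                 # secondary
```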

Setting Up Redundant Groups

Step 1: Plan your tier assignments

Map each NetActuate location to a tier based on:

  • Capacity -- primary tier locations should have the most resources
  • Geographic coverage -- each tier should cover major regions
  • Failure independence -- avoid putting all tiers in the same physical facility or network path
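The criteria above lend themselves to a mechanical sanity check before you commit to a plan. A hedged Python sketch; the location-to-region mapping is illustrative, not a NetActuate inventory:

```python
# Sketch of a tier-plan sanity check: every tier should cover the major
# regions, and no location should appear in more than one tier (failure
# independence). The plan below is illustrative.

TIER_PLAN = {
    "primary":   {"Los Angeles": "NA", "New York": "NA", "London": "EU", "Tokyo": "APAC"},
    "secondary": {"Chicago": "NA", "Frankfurt": "EU", "Singapore": "APAC"},
    "tertiary":  {"Miami": "NA", "Sydney": "APAC"},
}

def check_plan(plan, required_regions):
    """Return a list of human-readable problems with the tier plan."""
    problems = []
    seen = {}
    for tier, locations in plan.items():
        missing = required_regions - set(locations.values())
        if missing:
            problems.append(f"{tier} tier lacks coverage in: {sorted(missing)}")
        for loc in locations:
            if loc in seen:
                problems.append(f"{loc} appears in both {seen[loc]} and {tier}")
            seen[loc] = tier
    return problems

for issue in check_plan(TIER_PLAN, {"NA", "EU", "APAC"}):
    print(issue)
# With the example plan above, the tertiary tier has no EU location.
```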

Step 2: Create anycast groups per tier

Create three anycast groups in the portal or via the API:

  • dns-primary -- sessions at primary locations
  • dns-secondary -- sessions at secondary locations
  • dns-tertiary -- sessions at tertiary locations
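If you script the group creation, one payload per tier keeps naming consistent. The sketch below only builds the request bodies; the field names are assumptions, not the real NetActuate API schema, so check the API reference before using them:

```python
# Hypothetical sketch: build one request payload per anycast group.
# Field names ("name", "description") are assumptions -- consult the
# NetActuate API documentation for the actual schema and endpoint.

TIERS = ["primary", "secondary", "tertiary"]

def group_payload(service, tier):
    """Payload for creating the anycast group for one tier."""
    return {
        "name": f"{service}-{tier}",  # e.g. dns-primary
        "description": f"{tier} tier sessions for {service}",
    }

for payload in [group_payload("dns", tier) for tier in TIERS]:
    print(payload["name"])
```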

Step 3: Configure AS path prepending

On your VM's routing daemon (the examples below use BIRD syntax), configure the appropriate level of prepending for each tier:

Primary tier (no prepending):

protocol bgp netactuate {
    local <VM_IP> as 65001;
    neighbor <ROUTER_IP> as 13830;

    ipv4 {
        export where proto = "static_bgp";
    };
}

Secondary tier (1x prepend):

protocol bgp netactuate {
    local <VM_IP> as 65001;
    neighbor <ROUTER_IP> as 13830;

    ipv4 {
        export filter {
            if proto != "static_bgp" then reject;
            bgp_path.prepend(65001);
            accept;
        };
    };
}

Tertiary tier (2x prepend):

protocol bgp netactuate {
    local <VM_IP> as 65001;
    neighbor <ROUTER_IP> as 13830;

    ipv4 {
        export filter {
            if proto != "static_bgp" then reject;
            bgp_path.prepend(65001);
            bgp_path.prepend(65001);
            accept;
        };
    };
}

Step 4: Verify tier preference

Use a looking glass or BGP route viewer to confirm that your prefix is visible with the correct AS path lengths from each tier.
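A quick way to sanity-check looking-glass output is to count the extra copies of your ASN at the origin end of each AS path. A Python sketch; the path strings below are illustrative, not real looking-glass output:

```python
# Sketch: given AS path strings copied from a looking glass, count the
# trailing duplicates of your ASN to infer the prepend level of each
# tier's announcement. Paths below are illustrative.

MY_ASN = "65001"

def prepend_count(as_path, asn=MY_ASN):
    """Extra copies of asn at the origin end of an AS path string."""
    count = 0
    for hop in reversed(as_path.split()):
        if hop != asn:
            break
        count += 1
    return max(count - 1, 0)

EXPECTED_PREPENDS = {"primary": 0, "secondary": 1, "tertiary": 2}
observed = {
    "primary":   "13830 65001",
    "secondary": "13830 65001 65001",
    "tertiary":  "13830 65001 65001 65001",
}
for tier, path in observed.items():
    assert prepend_count(path) == EXPECTED_PREPENDS[tier], tier
print("all tiers announce with the expected AS path length")
```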

DNS-Specific Considerations

NS record configuration

For authoritative DNS, point your NS records at the anycast addresses. If all tiers announce the same prefix, every NS record can point to the same anycast IP. Alternatively, announce a separate prefix per tier and give each tier its own NS record:

  • ns1.example.com → primary tier anycast IP
  • ns2.example.com → secondary tier anycast IP
  • ns3.example.com → tertiary tier anycast IP
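With the separate-prefix approach, the delegation is an ordinary set of NS records plus address records, one per tier. A sketch in zone-file notation; the hostnames and addresses are placeholders:

```
example.com.      86400  IN  NS  ns1.example.com.
example.com.      86400  IN  NS  ns2.example.com.
example.com.      86400  IN  NS  ns3.example.com.

ns1.example.com.  86400  IN  A   192.0.2.1     ; primary tier anycast IP
ns2.example.com.  86400  IN  A   198.51.100.1  ; secondary tier anycast IP
ns3.example.com.  86400  IN  A   203.0.113.1   ; tertiary tier anycast IP
```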

TTL and convergence

DNS resolvers cache responses based on TTL. During a failover event:

  1. BGP convergence moves traffic to the next tier (seconds to minutes).
  2. Cached DNS responses continue to be served from resolver caches until TTL expires.

Set your DNS TTLs appropriately for your failover requirements. Lower TTLs mean faster failover but higher query volume.
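The interaction of the two delays can be made concrete: the worst case for a resolver is roughly the BGP convergence time plus the remaining TTL on its cached answer. A rough model; the numbers are illustrative:

```python
# Rough model of failover as seen by a DNS resolver: traffic moves once
# BGP converges, but a cached answer keeps being served until its TTL
# runs out. Numbers below are illustrative.

def worst_case_failover(bgp_convergence_s, record_ttl_s):
    """Upper bound (seconds) on how long a resolver may use stale data."""
    return bgp_convergence_s + record_ttl_s

# 30 s of BGP convergence with a 300 s TTL vs. a 60 s TTL:
print(worst_case_failover(30, 300))  # 330
print(worst_case_failover(30, 60))   # 90
```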

Health checking

Use BFD on all BGP sessions to minimize failover time. Without BFD, failover waits for the BGP hold timer to expire (commonly 90 to 240 seconds, depending on the implementation and configuration). With BFD, failure detection happens in sub-second timeframes.

See ECMP Load Balancing for BFD configuration details.

Need Help?

Contact support@netactuate.com or open a support ticket from the portal.