Deploying Managed Kubernetes with Terraform
NetActuate Kubernetes Engine (NKE) provides managed Kubernetes clusters that you can provision and manage entirely through Terraform. This guide walks through deploying a cluster, accessing it with kubectl, deploying a workload, and scaling.
What You Will Build
- A managed Kubernetes cluster running on NetActuate infrastructure
- A local kubeconfig file for `kubectl` access
- The Kubernetes Dashboard enabled and accessible via URL
- A sample nginx workload running on the cluster
Prerequisites
- Terraform 1.0 or later
- `kubectl` installed locally
- A NetActuate API key (generate one at Account -> API Keys in the portal)
- The NetActuate Terraform provider configured:
```hcl
terraform {
  required_providers {
    netactuate = {
      source  = "netactuate/netactuate"
      version = ">= 2.0.0"
    }
  }
}

provider "netactuate" {
  api_key = var.api_key
}
```
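The provider block references `var.api_key` without declaring it. A minimal declaration, marked `sensitive` so the key is redacted from plan and apply output, could look like this:

```hcl
variable "api_key" {
  description = "NetActuate API key (Account -> API Keys in the portal)"
  type        = string
  sensitive   = true
}
```

Supply the value via the `TF_VAR_api_key` environment variable rather than committing it to a `.tfvars` file.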
NKE vs Self-Managed k3s
| Feature | NKE (Managed) | k3s (Self-Managed) |
|---|---|---|
| Control plane | Managed by NetActuate | You install and maintain |
| Upgrades | Handled by NetActuate | Manual |
| Node scaling | Built-in autoscaling | Manual or custom automation |
| Setup time | Minutes via Terraform | Longer, requires Ansible or manual steps |
| Customization | Standard Kubernetes API | Full control over cluster internals |
| Best for | Production workloads, fast deployment | Custom configurations, edge cases |
If you need full control over the cluster internals, see the Kubernetes Anycast with Self-Managed k3s guide instead.
Step 1: Discover Available Versions
Use the `netactuate_nke_versions` data source to list the Kubernetes versions available for your cluster:
```hcl
data "netactuate_nke_versions" "available" {}

output "available_versions" {
  value = data.netactuate_nke_versions.available.versions
}
```
Run `terraform plan` to see the available versions. Pick the version that matches your requirements and note it for the next step.
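If you would rather not hard-code a version string, you can reference the data source directly. This sketch assumes the provider returns `versions` as a list with the newest release first; verify the ordering against the provider documentation before relying on it:

```hcl
locals {
  # Assumption: versions[0] is the newest release. Confirm the list
  # ordering in the NetActuate provider docs before using this.
  cluster_version = data.netactuate_nke_versions.available.versions[0]
}
```

You would then set `version = local.cluster_version` in the cluster resource. Note that tracking the newest version automatically can trigger upgrades on apply, so pinning an explicit version is the more conservative choice.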
Step 2: Create the Cluster
Define the cluster resource with your chosen version, plan, and location:
```hcl
resource "netactuate_nke_cluster" "demo" {
  label     = "demo-cluster"
  version   = "1.31"
  plan      = "VR2x2x25"
  location  = "SJC"
  min_nodes = 2
  max_nodes = 5
  dashboard = true
  replicas  = 1
}
```
Configuration breakdown:
- `label` -- a human-readable name for the cluster
- `version` -- the Kubernetes version from the available versions list
- `plan` -- the worker node size; `VR2x2x25` provides 2 vCPUs, 2 GB RAM, and 25 GB disk per node
- `location` -- the NetActuate data center code (e.g., `SJC`, `AMS`, `SIN`)
- `min_nodes`/`max_nodes` -- the autoscaling range for worker nodes
- `dashboard` -- set to `true` to enable the Kubernetes Dashboard
- `replicas` -- the number of control plane replicas; set to 3 for high availability in production
Run the apply:
```shell
terraform apply
```
Cluster provisioning takes a few minutes. Terraform waits until the cluster is ready before completing.
Step 3: Retrieve Kubeconfig
Once the cluster is running, retrieve the kubeconfig so you can connect with kubectl:
```hcl
data "netactuate_nke_kubeconfig" "demo" {
  cluster_id = netactuate_nke_cluster.demo.id
}

resource "terraform_data" "kubeconfig" {
  triggers_replace = [data.netactuate_nke_kubeconfig.demo.kubeconfig]

  provisioner "local-exec" {
    command = "echo '${data.netactuate_nke_kubeconfig.demo.kubeconfig}' > kubeconfig.yaml && chmod 600 kubeconfig.yaml"
  }
}

output "kubeconfig_path" {
  value = "kubeconfig.yaml"
}
```
After `terraform apply`, the kubeconfig file is written to your working directory.
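The `local-exec` approach above breaks if the kubeconfig ever contains a single quote, and it leaves the file outside Terraform's management. A more robust alternative, assuming you are willing to add the `hashicorp/local` provider (version 2.2.0 or later) to `required_providers`, is the `local_sensitive_file` resource:

```hcl
resource "local_sensitive_file" "kubeconfig" {
  # Written verbatim, with no shell quoting involved, and marked
  # sensitive so the contents never appear in plan output.
  content         = data.netactuate_nke_kubeconfig.demo.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}
```

This also means `terraform destroy` removes the stale kubeconfig along with the cluster.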
Step 4: Verify the Cluster
Point kubectl at the new kubeconfig and confirm the cluster is healthy:
```shell
export KUBECONFIG=./kubeconfig.yaml
kubectl get nodes
```
You should see your worker nodes in Ready status:
```
NAME                  STATUS   ROLES    AGE   VERSION
demo-cluster-node-1   Ready    <none>   3m    v1.31.2
demo-cluster-node-2   Ready    <none>   3m    v1.31.2
```
Check that system pods are running:
```shell
kubectl get pods -A
```
All pods in the kube-system namespace should show Running or Completed status.
Step 5: Deploy a Sample Application
Create a file named nginx-deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
Apply the workload:
```shell
kubectl apply -f nginx-deployment.yaml
```
Wait for the external IP to be assigned:
```shell
kubectl get svc nginx-service --watch
```
Once the EXTERNAL-IP column shows an IP address, you can reach nginx at that address in your browser or with curl.
Step 6: Scale Workers
To scale the cluster, update the `min_nodes` and `max_nodes` values in your Terraform configuration:
```hcl
resource "netactuate_nke_cluster" "demo" {
  label     = "demo-cluster"
  version   = "1.31"
  plan      = "VR2x2x25"
  location  = "SJC"
  min_nodes = 3
  max_nodes = 8
  dashboard = true
  replicas  = 1
}
```
Apply the change:
```shell
terraform apply
```
Verify the new nodes join the cluster:
```shell
kubectl get nodes
```
The cluster autoscaler adds nodes as needed within the range you defined. If workload demand drops, it scales back down to min_nodes.
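If you adjust these bounds often, you can parameterize them instead of editing the resource. A sketch using Terraform input variables with a validation rule to keep the range sane (the cross-variable `condition` requires Terraform 1.9 or later):

```hcl
variable "min_nodes" {
  description = "Lower bound for worker node autoscaling"
  type        = number
  default     = 3
}

variable "max_nodes" {
  description = "Upper bound for worker node autoscaling"
  type        = number
  default     = 8

  validation {
    # Cross-variable references in validation require Terraform >= 1.9.
    condition     = var.max_nodes >= var.min_nodes
    error_message = "max_nodes must be greater than or equal to min_nodes."
  }
}
```

The cluster resource would then use `min_nodes = var.min_nodes` and `max_nodes = var.max_nodes`, so scaling becomes `terraform apply -var="max_nodes=10"` rather than a config edit.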
Kubernetes Dashboard
When `dashboard = true`, NKE deploys the Kubernetes Dashboard automatically. Retrieve the URL from the Terraform output:
```hcl
output "kubernetes_dashboard_url" {
  value = netactuate_nke_cluster.demo.kubernetes_dashboard_url
}
```

Then read the value:

```shell
terraform output kubernetes_dashboard_url
```
Open the URL in your browser. The dashboard provides a visual overview of workloads, pods, services, and cluster health.
Cluster Features
Autoscaling
NKE automatically scales worker nodes between `min_nodes` and `max_nodes` based on resource demand. When pods cannot be scheduled due to insufficient resources, the autoscaler provisions new nodes. When nodes are underutilized, they are removed.
Dual-Stack IPv4/IPv6
NKE clusters support dual-stack networking. Pods and services can be assigned both IPv4 and IPv6 addresses, and external traffic can reach your services over either protocol.
High Availability Control Plane
Set replicas to 3 or more for a highly available control plane. Multiple control plane nodes ensure the Kubernetes API server remains accessible even if a control plane node fails.
```hcl
resource "netactuate_nke_cluster" "production" {
  label     = "prod-cluster"
  version   = "1.31"
  plan      = "VR4x8x50"
  location  = "AMS"
  min_nodes = 3
  max_nodes = 10
  dashboard = true
  replicas  = 3
}
```
Teardown
To remove the cluster and all associated nodes:
```shell
terraform destroy
```
This deletes the cluster, all worker nodes, and the control plane. The kubeconfig file remains on disk but is no longer valid.
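For production clusters you may want a guard against accidental destroys. Terraform's `prevent_destroy` lifecycle flag makes any plan that would delete the resource fail until the flag is removed:

```hcl
resource "netactuate_nke_cluster" "production" {
  # ... cluster arguments as above ...

  lifecycle {
    # terraform destroy (or a replace) will error out while this is set.
    prevent_destroy = true
  }
}
```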
Reference
- Repository: netactuate-terraform-nke -- complete working examples
- Provider docs: NKE Terraform Reference
Need Help?
Contact support@netactuate.com or open a support ticket from the portal.