Block Storage for Persistent Data

NetActuate block storage provides Ceph-backed persistent volumes accessible via iSCSI. Use block storage for databases, application state, and any workload requiring low-latency persistent disk.

What You Will Build

By the end of this guide you will have:

  • A block storage namespace with Ceph pool credentials
  • A block volume provisioned as an RBD image within the storage cluster

Prerequisites

  • Terraform 1.0 or later
  • The NetActuate Terraform provider configured with a valid API key
  • A VM or bare metal server for mounting the block volume

Block Storage Concepts

Block Namespace

A Ceph storage namespace acts as a container for block volumes. Creating a namespace provisions a storage pool and returns credentials for client access. Use namespaces to organize volumes by project or environment.

Block Volume

A pre-provisioned RBD (RADOS Block Device) image within the storage cluster. Each volume gets its own namespace and credentials automatically. Volumes range from 1 to 1000 GB and are attached to VMs over the network.

Step 1: Create a Block Namespace

Define a netactuate_storage_block_namespace resource with a label, location, and capacity:

resource "netactuate_storage_block_namespace" "db_storage" {
  label    = "database-pool"
  location = "SJC"
  capacity = 50
}

The namespace provisions a Ceph pool and returns connection credentials on creation.

Step 2: Create a Block Volume

Define a netactuate_storage_block_volume resource. Capacity is specified in gigabytes and must be between 1 and 1000:

resource "netactuate_storage_block_volume" "db_vol" {
  label    = "postgres-data"
  location = "SJC"
  capacity = 100
}

Each volume is provisioned as an independent RBD image with its own namespace and credentials.

Step 3: Retrieve Connection Details

Export the connection details needed to mount the volume from your VM:

output "volume_endpoint" {
  value     = netactuate_storage_block_volume.db_vol.endpoint
  sensitive = true
}

output "storage_pool" {
  value     = netactuate_storage_block_volume.db_vol.storage_pool
  sensitive = true
}

output "storage_namespace" {
  value     = netactuate_storage_block_volume.db_vol.storage_namespace
  sensitive = true
}

output "user_key" {
  value     = netactuate_storage_block_volume.db_vol.user_key
  sensitive = true
}

output "secret_key" {
  value     = netactuate_storage_block_volume.db_vol.secret_key
  sensitive = true
}

Because each output is marked sensitive, plain terraform output redacts the values; run terraform output -json to retrieve the credentials in plaintext.
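The sensitive values can also be pulled individually from the command line. This is a sketch using standard Terraform CLI flags (terraform output -raw requires Terraform 0.15 or later; jq is assumed to be installed):

```shell
# Print a single sensitive output in plaintext
terraform output -raw user_key

# Or extract specific values from the full JSON dump with jq
terraform output -json | jq -r '.volume_endpoint.value'
terraform output -json | jq -r '.secret_key.value'
```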

Connecting from a VM

Once you have the connection details, configure your VM as a Ceph client or use iSCSI to attach the block volume.

A typical Ceph RBD client setup involves:

  1. Install the Ceph client packages on your VM (ceph-common on Debian/Ubuntu)
  2. Write the keyring file using the user_key from Terraform output
  3. Map the RBD image using rbd map with the pool, namespace, and image name
  4. Create a filesystem on the mapped device and mount it
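The steps above can be sketched as shell commands. This is not a definitive procedure: the client name (client.dbuser) is a placeholder, the angle-bracketed values come from your Terraform outputs, and the exact ceph.conf and rbd map flags depend on your Ceph version (RBD namespaces require Nautilus or later):

```shell
# 1. Install the Ceph client packages (Debian/Ubuntu)
sudo apt-get install -y ceph-common

# Point the client at the cluster; mon host comes from the volume_endpoint output
sudo tee /etc/ceph/ceph.conf > /dev/null <<'EOF'
[global]
mon host = <volume_endpoint from terraform output>
EOF

# 2. Write the keyring using the user_key from Terraform output
#    ("client.dbuser" is a placeholder client name)
sudo tee /etc/ceph/ceph.client.dbuser.keyring > /dev/null <<'EOF'
[client.dbuser]
    key = <user_key from terraform output>
EOF

# 3. Map the RBD image (pool, namespace, and image name are placeholders
#    for the storage_pool, storage_namespace, and volume label values)
sudo rbd map --id dbuser --pool <storage_pool> --namespace <storage_namespace> postgres-data

# 4. Create a filesystem on the mapped device and mount it
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/data
sudo mount /dev/rbd0 /mnt/data
```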

Note: For detailed Ceph client configuration, refer to the upstream Ceph RBD documentation. The exact steps depend on your operating system and Ceph version.

Capacity and Auto-Scaling

Block volumes support capacities from 1 to 1000 GB per volume. For namespaces, you can enable automatic capacity expansion:

resource "netactuate_storage_block_namespace" "db_storage" {
  label               = "database-pool"
  location            = "SJC"
  capacity            = 50
  enable_auto_scaling = true
}

When auto-scaling is enabled, the namespace capacity increases automatically as usage grows.

Available Locations

Block storage is available in the same 13 locations as object storage:

RDU, AMS, SJC, DFW, LHR, FRA, SCL, HKG, SIN, DXB, SYD, LAX, MIA

Use data.netactuate_storage_locations to confirm availability for your account.
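One way to inspect the data source is through terraform console. The attributes it exposes are not documented here, so this hypothetical sketch declares it with a placeholder name and dumps the whole result rather than assuming specific fields:

```shell
# Declare the data source (the name "available" is a placeholder)
cat >> locations.tf <<'EOF'
data "netactuate_storage_locations" "available" {}
EOF

# Initialize, then evaluate the data source interactively
terraform init
echo 'data.netactuate_storage_locations.available' | terraform console
```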

Teardown

Unmount the block volume from your VM before destroying the Terraform resources:

# On the VM
sudo umount /mnt/data
sudo rbd unmap /dev/rbd0

# From your Terraform workspace
terraform destroy

Terraform deprovisions the volume and namespace and releases all associated credentials.
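If you want to remove only the volume while keeping the namespace and its pool, a targeted destroy works; -target is a standard Terraform CLI flag:

```shell
# Destroy just the volume resource from Step 2, leaving the namespace intact
terraform destroy -target=netactuate_storage_block_volume.db_vol
```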

Need Help?

If you need assistance, visit our support page.