Object Storage with S3 API

NetActuate provides S3-compatible object storage that you can provision via Terraform and access using standard S3 tools and SDKs. Storage is available in 13 locations worldwide.

What You Will Build

By the end of this guide you will have:

  • An S3-compatible storage bucket with access credentials
  • An object store with independent scaling
  • Working AWS CLI configuration for uploading and listing objects

Prerequisites

  • Terraform 1.0 or later
  • The NetActuate Terraform provider configured with a valid API key
  • AWS CLI or any S3-compatible client (optional, for verification)

Available Locations

NetActuate storage is available in the following 13 locations:

Code  Location
RDU   Raleigh, NC
AMS   Amsterdam, NL
SJC   San Jose, CA
DFW   Dallas, TX
LHR   London, UK
FRA   Frankfurt, DE
SCL   Santiago, CL
HKG   Hong Kong
SIN   Singapore
DXB   Dubai, AE
SYD   Sydney, AU
LAX   Los Angeles, CA
MIA   Miami, FL

Note: Storage locations differ from compute locations. Use data.netactuate_storage_locations to discover which locations are available for your account.

Storage Types

NetActuate offers two S3-compatible storage resource types. Both use standard S3 protocols and work with any S3 client.

S3 Bucket

A standard S3-compatible bucket with access and secret keys. Each bucket includes a privacy toggle that controls public or private access.

Best for general object storage, static assets, and backups.

Object Store

An S3-compatible object store, similar to buckets but with independent scaling and no privacy toggle. Object stores are provisioned as standalone units.

Best for application data, media storage, and workloads that grow unpredictably.

Key Differences

Both types are fully S3-compatible, return access credentials on creation, and support enable_auto_scaling for automatic capacity expansion. The main differences:

  • Buckets expose a private attribute for controlling public access
  • Object stores scale independently and are suited to application workloads

Step 1: Discover Locations

Use the netactuate_storage_locations data source to list available storage regions:

data "netactuate_storage_locations" "all" {}

output "storage_locations" {
  value = data.netactuate_storage_locations.all.locations
}

Run terraform apply to see the full list of location codes and names.

Step 2: Create a Bucket

Define a netactuate_storage_bucket resource with a label, location, and capacity in gigabytes:

resource "netactuate_storage_bucket" "assets" {
  label    = "static-assets"
  location = "SJC"
  capacity = 2
  private  = false
}

Setting private to false allows public read access. Set it to true for private-only access.

Step 3: Create an Object Store

Define a netactuate_storage_object_store resource:

resource "netactuate_storage_object_store" "appdata" {
  label    = "app-media"
  location = "SJC"
  capacity = 10
}

The object store provisions independently from any bucket and receives its own set of credentials.

Step 4: Retrieve Credentials

Both resource types export connection details as sensitive outputs. Add outputs to your configuration:

output "bucket_endpoint" {
  value     = netactuate_storage_bucket.assets.endpoint
  sensitive = true
}

output "bucket_access_key" {
  value     = netactuate_storage_bucket.assets.access_key
  sensitive = true
}

output "bucket_secret_key" {
  value     = netactuate_storage_bucket.assets.secret_key
  sensitive = true
}
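
The object store exports its own credentials as well. Assuming it exposes the same attribute names as the bucket (endpoint, access_key, secret_key — an assumption worth confirming against the provider schema), you can output them the same way:

```hcl
# Assumed attribute names, mirroring the bucket resource above;
# verify against the NetActuate provider documentation.
output "object_store_endpoint" {
  value     = netactuate_storage_object_store.appdata.endpoint
  sensitive = true
}

output "object_store_access_key" {
  value     = netactuate_storage_object_store.appdata.access_key
  sensitive = true
}

output "object_store_secret_key" {
  value     = netactuate_storage_object_store.appdata.secret_key
  sensitive = true
}
```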

Because the outputs are marked sensitive, plain terraform output redacts their values; run terraform output -json to retrieve the credentials in plaintext.
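
To read individual values into shell variables, terraform output -raw (a standard Terraform subcommand) prints a single output with no quoting, which is convenient for scripting:

```shell
# Read each sensitive output in plaintext; -raw prints the bare value.
# The AWS_* variable names are the standard ones the AWS CLI and SDKs read.
export S3_ENDPOINT="$(terraform output -raw bucket_endpoint)"
export AWS_ACCESS_KEY_ID="$(terraform output -raw bucket_access_key)"
export AWS_SECRET_ACCESS_KEY="$(terraform output -raw bucket_secret_key)"
```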

Using with AWS CLI

Configure a named profile with the credentials from Terraform output:

aws configure --profile netactuate
# Enter the access_key when prompted for AWS Access Key ID
# Enter the secret_key when prompted for AWS Secret Access Key
# Leave region and output format as defaults

List bucket contents:

aws s3 ls --endpoint-url https://your-endpoint.netactuate.com --profile netactuate

Upload a file:

aws s3 cp myfile.txt s3://static-assets/ --endpoint-url https://your-endpoint.netactuate.com --profile netactuate

Replace https://your-endpoint.netactuate.com with the actual endpoint from your Terraform output.
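
For private buckets, one way to grant temporary read access to a single object is a presigned URL via aws s3 presign (a standard AWS CLI subcommand that also honors --endpoint-url), assuming the storage backend accepts presigned requests:

```shell
# Generate a URL granting temporary read access to one object.
# --expires-in is in seconds (3600 = 1 hour).
aws s3 presign s3://static-assets/myfile.txt \
  --expires-in 3600 \
  --endpoint-url https://your-endpoint.netactuate.com \
  --profile netactuate
```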

Auto-Scaling

Enable automatic capacity expansion to avoid running out of space:

resource "netactuate_storage_bucket" "assets" {
  label               = "static-assets"
  location            = "SJC"
  capacity            = 2
  private             = false
  enable_auto_scaling = true
}

When auto-scaling is enabled, NetActuate increases the storage capacity automatically as usage approaches the provisioned limit.
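
To see the capacity a bucket has grown to, refresh Terraform state and inspect the resource (both are standard Terraform commands; the resource address follows the configuration above):

```shell
# Sync state with the real infrastructure, then print the resource's
# current attributes, including its present capacity.
terraform apply -refresh-only
terraform state show netactuate_storage_bucket.assets
```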

Teardown

Remove all stored objects before destroying the Terraform resources:

aws s3 rm s3://static-assets/ --recursive --endpoint-url https://your-endpoint.netactuate.com --profile netactuate
terraform destroy

Terraform will deprovision the bucket and object store and release all associated credentials.

Need Help?

If you need assistance, visit our support page.